About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(NixOS BSD OpenBSD Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on libera.chat, solene+www at dataswamp dot org or @solene@bsd.network (mastodon). If for some reason you want to support my work, this is my paypal address: donate@perso.pw.

How to trigger services restart after OpenBSD update

Written by Solène, on 25 September 2022.
Tags: #openbsd #security #deployment

Comments on Fediverse/Mastodon

Introduction §

Keeping an OpenBSD system up-to-date requires two daily operations:

  • updating the base system with the command: /usr/sbin/syspatch
  • updating the packages (if any) with the command: /usr/sbin/pkg_add -u

However, OpenBSD isn't very friendly with regard to what to do after upgrading: modified binaries should be restarted to use the new code, and a new kernel requires a reboot.

It's not useful to update if the newer binaries are never used.

Syspatch reboot §

I wrote a small script to automatically reboot if syspatch deployed a new kernel. Instead of running syspatch from a cron job, you can run a script with this content:

#!/bin/sh

OUT=$(/usr/sbin/syspatch)
SUCCESS=$?

if [ "$SUCCESS" -eq 0 ]
then
    if echo "$OUT" | grep reboot >/dev/null
    then
        reboot
    fi
fi

It's not much: it runs syspatch, and if the output contains "reboot", the system is rebooted.
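For example, assuming the script above is saved as /usr/local/bin/auto_syspatch.sh (the path is only an illustration), the root crontab entry could look like this:

# run the patch-and-maybe-reboot script once a day
30 1 * * * /usr/local/bin/auto_syspatch.sh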

Binaries restart §

It's getting more complicated when a running program is updated, whether it's a service with an rc.d script or a program currently in use.

It would be nice to have something to help restart them appropriately. I currently use the program "checkrestart" in a script like this:

checkrestart | grep smtpd && rcctl restart smtpd
checkrestart | grep httpd && rcctl restart httpd
checkrestart | grep dovecot && rcctl restart dovecot
checkrestart | grep lua && rcctl restart prosody

This works well for system services, except when the binary name differs from the service name, as with prosody: in that case, you must know the exact name of the binary.

But for long-lived commands like a 24/7 Emacs or an IRC client, there isn't any mechanism to handle this. At best, you can email yourself the checkrestart output, or run checkrestart upon SSH login.
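For these, a small cron job mailing you the checkrestart output when it isn't empty is probably the simplest option. Here is a minimal sketch (the recipient address is obviously an example):

#!/bin/sh

# mail the checkrestart output, if any, to the administrator
OUT=$(checkrestart)

if [ -n "$OUT" ]
then
    echo "$OUT" | mail -s "checkrestart report on $(hostname)" you@example.org
fi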

My NixOS workflow after migrating from OpenBSD

Written by Solène, on 24 September 2022.
Tags: #openbsd #nixos #life

Comments on Fediverse/Mastodon

Introduction §

After successfully switching my small computer fleet to NixOS, I'd like to share about the journey.

I currently have a bunch of computers running NixOS:

  • my personal laptop
  • the work laptop
  • home router
  • home file server
  • some home lab computer
  • e-mail / XMPP / Gemini server hosted at openbsd.amsterdam

That sums up to 6 computers running NixOS, half of them running the development version, and the other half the latest release.

Migration §

From OpenBSD to NixOS §

All the computers above used to run OpenBSD, so let me explain why I migrated. It was a very complicated choice for me, because I still like OpenBSD even though I uninstalled it.

  • NixOS offers more software choice than OpenBSD, this is especially true for recent software, and porting them to OpenBSD is getting difficult over time.
  • After spending so much time with OpenBSD, I wanted to explore a whole new world, and NixOS being so different made it a good opportunity. As a professional IT worker, it's important for me to stay up to date; the Linux ecosystem has evolved a lot over the past ten years. What's funny is that OpenBSD and NixOS share similar issues, such as not being able to use binaries found on the Internet (though for different reasons)
  • NixOS maintenance is drastically reduced compared to OpenBSD
  • NixOS helps me to squeeze more from my hardware (speed, storage capacity, reliability)
  • systemd: I bet this one will be controversial, but since I learned how to use it, I really like it (and NixOS makes it even nicer for writing units)

Security is hard to measure, but it's the main argument in favor of OpenBSD; however, it is possible to enable mitigations on Linux as well, such as a hardened memory allocator or a hardened kernel. OpenBSD isn't practical for isolating services from one another when they all run on the same system, while on Linux you can easily sandbox services. In the end, the security mechanisms are different, but I feel the result is pretty similar for my threat model of protecting against script kiddies.

I give Linux a bonus point for its ability to account for CPU/memory/swap/disk/network usage per user, group and process. This allows spotting unusual activity. Security is about protection, but also about being aware of intrusions, and OpenBSD isn't very good at that at the moment.

NixOS modules §

One issue I had migrating my mail server and the router was finding what changes had been made in /etc. I was able to figure out which services were enabled, but not really all the steps done a few years ago to configure them. I had to go through every configuration file to see whether it looked like a verbatim default configuration or something I had changed manually.

This is where NixOS shines for maintenance and configuration: everything is declarative, so you never touch anything in /etc. At any time, even in a few years, I'll be able to tell exactly what I need for each service, without having to dig through /etc and compare with default files. This is a saner approach, and it also eases migration toward another system (OpenBSD? ;) ) because I'd just have to apply these changes to its configuration files.

Workflow §

Working with NixOS can be disappointing at first. Most of the system is read-only, you need to learn a new language (Nix) to configure services, and you have to "rebuild" your system for a change as simple as adding an entry to /etc/hosts, which is not very "Unix-like".

Your biggest friend is the man page configuration.nix, which contains all the configuration settings available in NixOS, from kernel choice and GRUB parameters to Docker containers started at boot or your desktop environment.

The workflow is pretty simple: take your configuration.nix file, apply changes to it, and run "nixos-rebuild test" (or switch if you prefer) to try them. Later, you may want something more elaborate, like tracking your changes in a git or darcs repository, and start sharing pieces of configuration between machines.
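To make it concrete, a typical iteration looks like this (illustrative commands, run as root):

# edit the declarative system configuration
vi /etc/nixos/configuration.nix

# apply the changes now, without making them the default at next boot
nixos-rebuild test

# once satisfied, build, activate and make it the new boot default
nixos-rebuild switch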

But in the end, you just declare some configuration. I prefer to keep my configurations very easy to read: I still don't have any modules or many variables, and the common pieces are just .nix files imported by the systems needing them. It's super easy to track and debug.

Bento §

Bento GitHub project page

After a while, I found it very tedious to run nixos-rebuild on each machine to keep them up to date, so I started using the autoUpgrade module, which basically does it for you in a scheduled task.

But then, I needed to centralize each configuration file somewhere, and have fun with SSH keys because I don't like publishing my configuration files publicly. That wasn't optimal either: if you make a change locally, you need to push it, then connect to the remote host to pull the changes and rebuild immediately instead of waiting for the auto-upgrade process.

So, I wrote bento, which allows me to manage all the configuration files in a single place, and better than that, to build the configurations locally to ensure they will work once shipped. I quickly added a way to track the status of each remote system to be sure they picked up and applied the changes (every 10 minutes). Later, I improved the network efficiency by using the central management computer as a local binary cache, so other systems now download packages from it locally instead of downloading them again from the Internet.

The coolest thing ever is that I can manage offline systems such as my work laptop: I can update its configuration file over the weekend, for an update or to improve the environment (it mostly shares the same configuration as my main laptop), and it will automatically pick it up when I boot it.

Conclusion §

Moving to NixOS was a very good and pleasant experience, but I had some knowledge about it before starting. It might confuse a lot of people, and you certainly need to get into the NixOS mindset to appreciate it.

Sharing some statistics about BTRFS compression

Written by Solène, on 21 September 2022.
Tags: #btrfs #filesystem

Comments on Fediverse/Mastodon

Introduction §

As I'm moving to Linux more and more, I took the opportunity to explore the BTRFS file system which was mostly unknown to me.

Let me share some data about compression ratio with BTRFS (ZFS should give similar results).

Work laptop §

First data §

This is my work computer with a big Nix store, and some build programs involving a lot of cache files and many git repositories.

Processed 3570629 files, 894690 regular extents (1836135 refs), 2366783 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       61%       55G          90G         155G
none       100%       35G          35G          52G
zlib        37%       20G          54G         102G
prealloc   100%      138M         138M          67M

The output shows that the data on disk takes 61% of its uncompressed size, so compression saved 39%. We get more details per compression algorithm: "none" represents uncompressed data and "zlib" the files compressed using this algorithm.

Files compressed with zlib are down to 37% of their original size, which is not bad. I made a mistake when creating the BTRFS mount point: I used the zlib compression algorithm, which is quite obsolete nowadays. For the record, zlib is the library providing the "deflate" compression algorithm found in zip or gzip.

Let's switch the compression to the zstd algorithm instead. This can be done with the command "btrfs filesystem defrag -czstd -r /". Basically, all files are scanned, and if they can be compressed with zstd, they are rewritten on disk with the new algorithm.
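A quick recap of the commands involved; the statistics above look like the output of the compsize tool, so I assume that is what is used to measure the result:

# recompress the existing data with zstd (the command mentioned above)
btrfs filesystem defrag -czstd -r /

# also switch the mount option so new writes use zstd, e.g. in /etc/fstab:
#   ... btrfs  defaults,compress=zstd  0 0

# then measure the compression ratio again
compsize /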

Data after switching to zstd §

After 37 minutes of recompressing everything, the results are surprising. It didn't change much!

Processed 3570427 files, 928646 regular extents (1869080 refs), 2364661 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       60%       54G          90G         155G
none       100%       33G          33G          51G
zstd        37%       21G          56G         104G
prealloc   100%      138M         138M          67M

Real data usage on the disk is now 60% instead of 61% with zlib; not much of an improvement: I'd have expected zstd to perform a lot better.

However, I didn't measure compression and decompression times. zstd should perform a lot better in this area, so I'll stick with zstd.

LinuxReviews: comparison of compression algorithms

Personal computer §

My own laptop has a huge Nix store, a lot of binary files (music, pictures), and a few hundred gigabytes of video games. I suppose it's quite a realistic and balanced environment.

Processed 1804099 files, 755845 regular extents (1295281 refs), 980697 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       93%      429G         459G         392G
none       100%      414G         414G         332G
zstd        34%       15G          45G          59G
prealloc   100%       92M          92M          91M

The saving due to compression is 30 GB, but this only amounts to 7% of the whole file system. That's not impressive compared to the other computer, but having an extra 30 GB for free is clearly something I enjoy.

Using Arion to use NixOS modules in containers

Written by Solène, on 21 September 2022.
Tags: #nixos #containers #docker #podman

Comments on Fediverse/Mastodon

Introduction §

NixOS is cool, but it's super cool because it has modules for many services, so you don't have to learn how to manage them (unless you want to run them in production), and you don't need to update them like you would a container image.

But this is specific to NixOS: while the modules are defined in the nixpkgs repository, you can't use them if you are not running NixOS.

There is a trick though: it's called arion, and it can generate containers that leverage the power of NixOS modules without running NixOS. You just need to have Nix installed locally.

arion GitHub project page

Nix project page

Docker vs Podman §

Long story short, docker is a tool to manage containers, but it requires going through a local socket and a root daemon. Podman is a drop-in docker alternative that is almost 100% compatible (including docker-compose) and can run containers in userland, or through a local daemon for more privileges.

Arion works best with podman because it relies on some systemd features to handle capabilities; docker diverges from this behaviour while podman doesn't.

Explanations about why Arion should be used with podman

Prerequisites §

In order to use arion, I found these prerequisites:

  • nix must be in path
  • podman daemon running (a setup sketch follows this list)
  • docker command in path (arion calls the docker client, which then talks to podman)
  • export DOCKER_HOST=unix:///run/podman/podman.sock
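One way to satisfy the podman and DOCKER_HOST prerequisites on a systemd-based distribution could look like this (a sketch, assuming podman and the docker client are already installed):

# expose the docker-compatible API through the podman socket
sudo systemctl enable --now podman.socket

# make docker-compatible tools (like arion) talk to podman
export DOCKER_HOST=unix:///run/podman/podman.sock

# sanity check: the docker client should answer through podman
docker info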

Different modes §

Arion can create different kinds of containers, using more or fewer parts of NixOS. You can run systemd services from NixOS, or a full-blown NixOS with its modules, which is what I want to use here.

Examples of the various modes are provided in the arion sources, and also in the documentation.

Arion documentation

Arion GitHub project page: examples

Let's try! §

We are now going to create a container to run a Netdata instance:

Create a file arion-compose.nix

{
  project.name = "netdata";
  services.netdata = { pkgs, lib, ... }: {
    nixos.useSystemd = true;
    nixos.configuration.boot.tmpOnTmpfs = true;

    nixos.configuration = {
      services.netdata.enable = true;
    };

    # required for the service, arion tells you what is required
    service.capabilities.SYS_ADMIN = true;

    # required for network
    nixos.configuration.systemd.services.netdata.serviceConfig.AmbientCapabilities =
      lib.mkForce [ "CAP_NET_BIND_SERVICE" ];

    # bind container local port to host port
    service.ports = [
      "8080:19999" # host:container
    ];
  };
}

And a file arion-pkgs.nix

import <nixpkgs> {
  system = "x86_64-linux";
}

Then, run "arion up -d": you should have Netdata reachable at http://localhost:8080/ . It's managed like any docker / podman container, so the usual commands work to stop / start / export it.

Of course, this example is very simple (I chose it for that reason), but you can reuse any NixOS module this way.
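Since arion wraps docker-compose, the usual compose-style subcommands should be available for day-to-day management, for example:

arion up -d      # start the project in the background
arion logs -f    # follow the containers output
arion down       # stop and remove the containers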

Making changes to the network §

If you change the network parts, you may need to delete the network previously created in docker. Just use "docker network ls" to find its id, and "docker network rm" to delete it, then run "arion up -d" again.
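For reference, the commands look like this (the network name derives from the project.name defined earlier, "netdata" in our example):

docker network ls               # find the id of the old project network
docker network rm <network id>  # delete it
arion up -d                     # recreate everything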

Conclusion §

Arion is a fantastic tool for reusing NixOS modules anywhere. These modules are a big part of NixOS's appeal, and being able to use them elsewhere is a good step toward a ubiquitous Nix, not only to build programs but also to run services.

Using Netdata on NixOS and connecting to Netdata cloud

Written by Solène, on 16 September 2022.
Tags: #nixos #monitoring #netdata #cloud

Comments on Fediverse/Mastodon

Introduction §

I'm still playing with monitoring programs, and I was recently reminded of Netdata. What an improvement over the last 8 years!

This tutorial explains how to get Netdata installed on NixOS, and how to register your node in Netdata cloud.

Netdata GitHub project page

Netdata live demo

What's Netdata? §

This program is a simple service to run on a computer: it automatically gathers a ton of metrics and makes them easily available on the local TCP port 19999. You just need to run Netdata and nothing else, and you will have every metric you can imagine about your computer, with explanations for each of them!

That's pretty cool because Netdata is very efficient: it uses nearly no CPU while gathering a few thousand metrics every few seconds, it's memory efficient, and it can be constrained to a dozen megabytes.

While you can export its metrics to something like Graphite or Prometheus, you lose the nice display, which is absolutely a blast compared to Grafana (in my opinion).

Update: as pointed out by a reader (thanks!), it's possible to connect Netdata instances to a single one used for viewing the metrics. I'll investigate this soon.

Netdata documentation about streaming.

Netdata also added some machine-learning anomaly detection: it's simple, doesn't use many resources or require a GPU, and only builds statistical models to report whether some metrics show an unusual trend. It takes some time to gather enough data, and after a few days it starts to work.

Installing Netdata on NixOS §

As usual, it's simple, add this to your NixOS configuration and reconfigure the system.

  services.netdata = {
    enable = true;

    config = {
      global = {
        # uncomment to reduce memory to 32 MB
        #"page cache size" = 32;

        # update interval
        "update every" = 15;
      };
      ml = {
        # enable machine learning
        "enabled" = "yes";
      };
    };
  };

You should have the Netdata dashboard available at http://localhost:19999 .

Streaming mode §

Here is a simple NixOS configuration to connect a headless node without persistence, sending everything to a main Netdata server that stores the data and also displays it.

You need to generate a UUID with uuidgen and replace UUID in the text with the result. It can be per system or shared by multiple Netdata instances.

My networks are 10.42.42.0/24 and 10.43.43.0/24, so I'll allow everything matching 10.* on the receiver, and I don't open port 19999 on a public interface.

Senders §

  services.netdata.enable = true;
  services.netdata.config = {
      global = {
          "default memory mode" = "none"; # can be used to disable local data storage
      };
  };
  services.netdata.configDir = {
    "stream.conf" = pkgs.writeText "stream.conf" ''
      [stream]
        enabled = yes
        destination = 10.42.42.42:19999
        api key = UUID
      [UUID]
        enabled = yes
    '';
  };

Receiver §

  networking.firewall.allowedTCPPorts = [19999];
  services.netdata.enable = true;
  services.netdata.configDir = {
    "stream.conf" = pkgs.writeText "stream.conf" ''
      [UUID]
        enabled = yes
        default history = 3600
        default memory mode = dbengine
        health enabled by default = auto
        allow from = 10.*
    '';
  };

Netdata cloud §

The Netdata company started a "cloud" offering that is free; they plan to keep it free while proposing more services for paying subscribers. The free plan is just a convenience to see metrics from multiple nodes in the same place. They don't store any metrics, only metadata (server name, OS version, kernel, etc.): when you look at your metrics, they are relayed from your server to your web browser without being stored.

The free cloud plan offers a metrics correlation feature, which I haven't had the opportunity to try yet, as well as email alerting when an alarm is triggered.

Netdata cloud website

Netdata cloud data privacy information

Adding a node §

The official way to connect a Netdata agent to the Netdata cloud is to use a script downloaded from the Internet and run it with some parameters.

Connecting a Linux agent

I strongly dislike this method, as I'm not a huge fan of downloading scripts to run as root when they are not provided by my system.

When you want to add a new node, you will be given a long command line and a token; keep that token somewhere. The NixOS Netdata package offers a script named `netdata-claim.sh` (which seems to be part of the Netdata source code) that generates a pair of RSA keys and looks for the token in a file.

Netdata cloud page: Add a node

Once you have the token, claim it to associate it with the node (the commands are recapped after the list):

  1. create /var/lib/netdata/cloud.d/token and write the token in it
  2. run nix-shell -p netdata --run "netdata-claim.sh" as root
  3. your node should be registered in Netdata cloud
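Put together, as root, the claiming steps look like this (the token placeholder must be replaced with the one given by the Netdata cloud page):

mkdir -p /var/lib/netdata/cloud.d
echo "YOUR-TOKEN-HERE" > /var/lib/netdata/cloud.d/token
nix-shell -p netdata --run "netdata-claim.sh"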

Conclusion §

Netdata is really a wonderful tool; ideally I'd like it to replace the whole Grafana + storage + agent stack, but it doesn't provide persistent centralized storage compatible with its dashboard. I'm going to experiment with their Netdata cloud service, though I'm not sure it would add value for me, and while they have a very decent data privacy policy, I prefer to self-host everything.

Explaining modern server monitoring stacks for self-hosting

Written by Solène, on 11 September 2022.
Tags: #nixos #monitoring #efficiency

Comments on Fediverse/Mastodon

#!/bin/introduction §

Hello 👋🏻, it's been a long time since I last had to look at monitoring servers. I set up a Grafana server six years ago, and I was using Munin for my personal servers.

However, I recently moved my server to a small virtual machine with CPU and memory constraints (1 core / 1 GB of memory), and Munin didn't work very well there. I was curious to learn whether the Grafana stack had changed since the last time I used it, and YES.

There is this project named Prometheus which is used absolutely everywhere, so it was time for me to learn about it. And as I like to go against the flow, I tried various changes to the industry-standard stack by using VictoriaMetrics.

In this article, I'm using NixOS configuration for the examples, however it should be obvious enough that you can still understand the parts if you don't know anything about NixOS.

The components §

VictoriaMetrics is a Prometheus drop-in replacement that is a lot more efficient (faster and uses fewer resources) and also provides various APIs such as Graphite or InfluxDB. It's the component that stores the data. It comes with companion programs like the VictoriaMetrics agent to replace various parts of Prometheus.

Update: a dear reader showed me that VictoriaMetrics can scrape remote agents without the VictoriaMetrics agent, which reduces the memory usage and the configuration required.

VictoriaMetrics official website

VictoriaMetrics documentation "how to scrape prometheus exporters such as node exporter"

Prometheus is a time series database which also provides a collecting agent named Node Exporter. It's able to pull (scrape) data from remote services offering a Prometheus API.

Prometheus official website

Node Exporter GitHub page

NixOS is an operating system built with the Nix package manager; it has a declarative approach that requires reconfiguring the system when you need to make a change.

NixOS official website

Collectd is an agent gathering metrics from the system and sending them to a remote compatible database.

Collectd official website

Grafana is a powerful web interface pulling data from time series databases to render them as useful charts for analysis.

Grafana official website

Node exporter full Grafana dashboard

Setup 1: Prometheus server scraping remote node_exporter §

In this setup, a Prometheus server is running on a server along with Grafana, and connects to remote servers running node_exporter to gather data.

Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB and Prometheus 63 MB.

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
node-ex+  953784  0.0  1.2 941292 12512 ?        Ssl  16:24   0:01 node_exporter
prometh+  983975  0.3  6.3 1226012 63284 ?       Ssl  17:07   0:00 prometheus

Setup 1 diagram

  • model: pull, Prometheus is connecting to all servers

Pros §

  • it's the industry standard
  • can use the "node exporter full" Grafana dashboard

Cons §

  • uses more memory
  • you need to be able to reach all the remote nodes

Server §

{
  services.grafana.enable = true;
  services.prometheus.exporters.node.enable = true;

  services.prometheus = {
    enable = true;
    scrapeConfigs = [
      {
        job_name = "kikimora";
        static_configs = [
          {targets = ["10.43.43.2:9100"];}
        ];
      }
      {
        job_name = "interbus";
        static_configs = [
          {targets = ["127.0.0.1:9100"];}
        ];
      }
    ];
  };
}

Client §

{
  networking.firewall.allowedTCPPorts = [9100];
  services.prometheus.exporters.node.enable = true;
}

Setup 2: VictoriaMetrics + node-exporter in pull model §

In this setup, a VictoriaMetrics server is running on a server along with Grafana. A VictoriaMetrics agent is running locally to gather data from remote servers running node_exporter.

Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB, VictoriaMetrics 30 MB and its agent 13.8 MB.

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
node-ex+  953784  0.0  1.2 941292 12512 ?        Ssl  16:24   0:01 node_exporter
victori+  986126  0.1  3.0 1287016 30052 ?       Ssl  18:00   0:03 victoria-metric
root      987944  0.0  1.3 1086276 13856 ?       Sl   18:30   0:00 vmagent

Setup 2 diagram

  • model: pull, VictoriaMetrics agent is connecting to all servers

Pros §

  • can use the "node exporter full" Grafana dashboard
  • lightweight and more performant than Prometheus

Cons §

  • you need to be able to reach all the remote nodes

Server §

let
  configure_prom = builtins.toFile "prometheus.yml" ''
    scrape_configs:
    - job_name: 'kikimora'
      stream_parse: true
      static_configs:
      - targets:
        - 10.43.43.1:9100
    - job_name: 'interbus'
      stream_parse: true
      static_configs:
      - targets:
        - 127.0.0.1:9100
  '';
in {
  services.victoriametrics.enable = true;
  services.grafana.enable = true;

  systemd.services.export-to-prometheus = {
    path = with pkgs; [victoriametrics];
    enable = true;
    after = ["network-online.target"];
    wantedBy = ["multi-user.target"];
    script = "vmagent -promscrape.config=${configure_prom} -remoteWrite.url=http://127.0.0.1:8428/api/v1/write";
  };
}

Client §

{
  networking.firewall.allowedTCPPorts = [9100];
  services.prometheus.exporters.node.enable = true;
}

Setup 3: VictoriaMetrics + node-exporter in push model §

In this setup, a VictoriaMetrics server is running on a server along with Grafana; on each remote server, node_exporter and a VictoriaMetrics agent run to export data to the central VictoriaMetrics server.

Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB, VictoriaMetrics 30 MB and its agent 13.8 MB, which is exactly the same as setup 2, except the VictoriaMetrics agent runs on all remote servers.

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
node-ex+  953784  0.0  1.2 941292 12512 ?        Ssl  16:24   0:01 node_exporter
victori+  986126  0.1  3.0 1287016 30052 ?       Ssl  18:00   0:03 victoria-metric
root      987944  0.0  1.3 1086276 13856 ?       Sl   18:30   0:00 vmagent

Setup 3 diagram

  • model: push, each agent is connecting to the VictoriaMetrics server

Pros §

  • can use the "node exporter full" Grafana dashboard
  • memory efficient
  • can bypass firewalls easily

Cons §

  • the remote nodes need to be able to reach your central VictoriaMetrics server
  • more maintenance as you have one extra agent on each remote
  • may be bad for security, you need to allow remote servers to write to your VictoriaMetrics server

Server §

{
  networking.firewall.allowedTCPPorts = [8428];
  services.victoriametrics.enable = true;
  services.grafana.enable = true;
  services.prometheus.exporters.node.enable = true;
}

Client §

let
  configure_prom = builtins.toFile "prometheus.yml" ''
    scrape_configs:
    - job_name: '${config.networking.hostName}'
      stream_parse: true
      static_configs:
      - targets:
        - 127.0.0.1:9100
  '';
in {
  services.prometheus.exporters.node.enable = true;
  
  systemd.services.export-to-prometheus = {
    path = with pkgs; [victoriametrics];
    enable = true;
    after = ["network-online.target"];
    wantedBy = ["multi-user.target"];
    script = "vmagent -promscrape.config=${configure_prom} -remoteWrite.url=http://victoria-server.domain:8428/api/v1/write";
  };
}

Setup 4: VictoriaMetrics + Collectd §

In this setup, a VictoriaMetrics server is running on a server along with Grafana, and the other servers run Collectd, sending data to the VictoriaMetrics graphite API.

Running it on my server, Grafana takes 67 MB, VictoriaMetrics 30 MB and Collectd 172 kB (yes).

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
victori+  986126  0.1  3.0 1287016 30052 ?       Ssl  18:00   0:03 victoria-metric
collectd  844275  0.0  0.0 610432   172 ?        Ssl  02:07   0:00 collectd

Setup 4 diagram

  • model: push, VictoriaMetrics receives data from the Collectd servers

Pros §

  • super memory efficient
  • can bypass firewalls easily

Cons §

  • you can't use the "node exporter full" Grafana dashboard
  • may be bad for security, you need to allow remote servers to write to your VictoriaMetrics server
  • you need to configure Collectd for each host

Server §

The server requires VictoriaMetrics to run and to expose its graphite API on port 2003.

Note that in Grafana, you will have to escape "-" characters using "\-" in the queries. I also didn't find a way to automatically discover hosts in the data to use variables in the dashboard.

UPDATE: Using the write_tsdb exporter in collectd, and exposing a TSDB API with VictoriaMetrics, you can set a label on each host, and then use the query "label_values(status)" in Grafana to automatically discover hosts.

{
  networking.firewall.allowedTCPPorts = [2003];
  services.victoriametrics = {
    enable = true;
    extraOptions = [
      "-graphiteListenAddr=:2003"
    ];
  };
  services.grafana.enable = true;
}

Client §

We only need to enable Collectd on the client:

{
  services.collectd = {
    enable = true;
    autoLoadPlugin = true;
    extraConfig = ''
      Interval 30
    '';
    plugins = {
      "write_graphite" = ''
        <Node "${config.networking.hostName}">
          Host "victoria-server.fqdn"
          Port "2003"
          Protocol "tcp"
          LogSendErrors true
          Prefix "collectd_"
        </Node>
      '';
      cpu = ''
        ReportByCpu false
      '';
      memory = "";
      df = ''
        Mountpoint "/"
        Mountpoint "/nix/store"
        Mountpoint "/home"
        ValuesPercentage True
        ValuesAbsolute False
      '';
      load = "";
      uptime = "";
      swap = ''
        ReportBytes false
        ReportIO false
        ValuesPercentage true
      '';
      interface = ''
        ReportInactive false
      '';
    };
  };
}

Trivia §

The first section being named "#!/bin/introduction" is on purpose and not a mistake. It felt super fun when I started writing the article, and I wanted to keep it that way.

The Collectd setup is the most minimalistic while still powerful, but it requires a lot of work to make the dashboards and configure the plugins correctly.

The setup I like best is setup 2.

Bento 1.0.0 released

Written by Solène, on 09 September 2022.
Tags: #nixos #deployment #bento

Comments on Fediverse/Mastodon

Introduction §

Bento 1.0.0 is alive!

GitHub Bento project

Tildegit mirror

Compared to the previous news, it received:

  • bento is now a single script, easy to package and add to $PATH. Before that, it was a set of scripts with a shared shell file containing functions, not very practical…
  • the hosts directory can contain directories holding flakes, which may themselves define multiple hosts; this is now handled. If there is no flake inside, the machine is named after the directory
  • bento supports rollbacks: if something goes wrong during the deployment, the previous system is rolled back
  • enhancements to the status output when you don't have a flake-based system; as such builds are not reproducible (without effort), we can't really compare local and remote builds
   machine   local version   remote version              state                                     time
   -------       ---------      -----------      -------------                                     ----
  interbus      non-flakes      1dyc4lgr 📌      up to date 💚                              (build 11s)
  kikimora        996vw3r6      996vw3r6 💚    sync pending 🚩       (build 5m 53s) (new config 2m 48s)
       nas        r7ips2c6      lvbajpc5 🛑 rebuild pending 🚩       (build 5m 49s) (new config 1m 45s)
      t470        b2ovrtjy      ih7vxijm 🛑      rollbacked 🔃                           (build 2m 24s)
        x1        fcz1s2yp      fcz1s2yp 💚      up to date 💚                           (build 2m 37s)
  • network measurements showed that polling for configuration changes costs 5.1 kB in and out
  • many checks have been added for when something goes wrong

One step §

It's a huge milestone for me, I thought it would be too much work to get there, but in one week and 441 lines of shell, bento is a real thing.

Video - talk about NixOS deployments tools

Written by Solène, on 09 September 2022.
Tags: #nixos #deployment

Comments on Fediverse/Mastodon

Intro §

At work, we have a weekly "knowledge sharing" meeting, yesterday I talked about the state of NixOS deployments tools.

I had to look at all the tools we currently have at hand before starting my own, so it made sense to share what I found.

This is a real topic: it doesn't make much sense to use regular sysadmin tools like ansible / puppet / salt on NixOS. We need specific tools, there is currently a bunch of them, and it can be hard to decide which one to use.

YouTube video: A journey into the world of NixOS deployment tools

Text file used for the presentation

Git - How to prevent a branch from being pushed

Written by Solène, on 08 September 2022.
Tags: #git #unix

Comments on Fediverse/Mastodon

Introduction §

I was looking for a simple way to prevent pushing a specific git branch. A few searches on the Internet didn't give me good results, so let me share a solution.

Hooks §

Hooks are scripts run by git at specific times: you have the "pre-" hooks before an action, and the "post-" hooks after an action.

We need to edit the hook "pre-push", which runs at push time, before the real push takes place.

Edit or create the file .git/hooks/pre-push:

#!/bin/sh

branch="$(git branch --show-current)"

if [ "${branch}" = "private" ]
then
    echo "Pushing to the branch ${branch} is forbidden"
    exit 1
fi

Mark the file as executable, otherwise it won't work.
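On most systems this is just:

chmod +x .git/hooks/pre-push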

In this example, if you run "git push" while on the branch "private", the process will be aborted.

NixOS Bento: now able to compare local and remote NixOS version

Written by Solène, on 06 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

Bento §

Project update: the report is now able to tell whether the remote server is using the NixOS version we built locally. This is possible because NixOS builds are reproducible: I get the same result locally and on the remote system.

The tool is getting in better shape, and the code received extra checks in a lot of places.

A bit later (blog post update), I added the possibility for the user to trigger the update.

Bento git project repository

Listening on a socket §

With systemd, it's possible to trigger a command upon connection to a socket. I made a bento systemd service listening on TCP port 51337: a connection starts the service "bento-update.service" and displays its output to the TCP client.

This even works in a web browser, so it's now possible to create a bookmark that just starts the update and gives instant feedback about the update process. This will be particularly useful during a debugging session over the phone, to ask the remote person to trigger an update on their side instead of waiting for a timer.
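For example, triggering the update remotely is just a matter of connecting to that port (the host name is an example):

# from a shell
nc remote-laptop.local 51337

# or from anything speaking HTTP, which is why a browser bookmark works
curl http://remote-laptop.local:51337/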

Status display demo §

It is now possible to differentiate the "not up to date" state into two categories:

  • the bento scripts were updated but the NixOS version didn't change; this is called "sync pending". Such a change could be distributing an updated script giving a new address for the remote server, so we can ensure they all received it.
  • the local NixOS version differs from the remote version, so a rebuild is required; this is called "rebuild pending"

The "sync pending" state is very fast to resolve: it only needs to copy the files, but won't rebuild anything.

   machine   local version   remote version              state                                     time
   -------       ---------      -----------      -------------                                     ----
  kikimora        996vw3r6      996vw3r6 💚    sync pending 🚩       (build 5m 53s) (new config 2m 48s)
       nas        r7ips2c6      lvbajpc5 🛑 rebuild pending 🚩       (build 5m 49s) (new config 1m 45s)
      t470        ih7vxijm      ih7vxijm 💚      up to date 💚                           (build 2m 24s)
        x1        fcz1s2yp      fcz1s2yp 💚      up to date 💚                           (build 2m 37s)

NixOS Bento: new reporting feature

Written by Solène, on 05 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

Bento §

Bento received a new feature: it is now able to report whether the remote hosts are up-to-date, how much time has passed since their last update, and, if they are not up-to-date, how long it has been since the configuration change.

Bento git project repository

As Bento is using SFTP, it's possible to deposit information on the central server; I'm currently using the log files from the builds and comparing their date to the date of the configuration.

This will be very useful to track deployments across the fleet. I also plan to check the version expected for a host and make them report their version after an update; this should be possible for flake systems at least.

Asciinema demonstration (was done during development, doesn't contain report features)

Demonstration §

I pushed a new version affecting all hosts on the SFTP server, and ran the status report regularly.

This is the output 15 seconds after making the changes available.

status of kikimora  not up to date 🚩 (last_update 15m 6s ago) (since config change 15s ago)
status of      nas  not up to date 🚩 (last_update 12m  ago) (since config change 15s ago)
status of     t470  not up to date 🚩 (last_update 16m 9s ago) (since config change 15s ago)
status of       x1  not up to date 🚩 (last_update 16m 24s ago) (since config change 14s ago)

This is the output after two systems picked up the changes and reported a success.

status of kikimora  not up to date 🚩 (last_rebuild 16m 46s ago) (since config change 1m 55s ago)
status of      nas      up to date 💚 (last_rebuild 8s ago)
status of     t470  not up to date 🚩 (last_rebuild 17m 49s ago) (since config change 1m 55s ago)
status of       x1      up to date 💚 (last_rebuild 4s ago)

This is the output after all systems reported a success.

status of kikimora  up to date 💚 (last_rebuild 0s ago)
status of      nas  up to date 💚 (last_rebuild 1m 24s ago)
status of     t470  up to date 💚 (last_rebuild 1m 2s ago)
status of       x1  up to date 💚 (last_rebuild 1m 20s ago)

Managing a fleet of NixOS Part 3 - Welcome to Bento

Written by Solène, on 04 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

Introducing Bento 🥳 §

I finally wrote an implementation for the NixOS fleet management, it's called Bento.

Bento git project repository

Features §

  • secure 🛡️: each client can only access its own configuration files (ssh authentication + sftp chroot)
  • efficient 🏂🏾: configurations can be built on the central management server to serve binary packages, if it is used as a substituter by the clients
  • organized 💼: system administrators have all configuration files in one repository for easy management
  • peace of mind 🧘🏿: configurations validity can be verified locally by system administrators
  • smart 💡: secrets (arbitrary files) can (soon) be deployed without storing them in the nix store
  • robustness in mind 🦾: clients just need to connect to a remote ssh, there are many ways to bypass firewalls (corkscrew, VPN, Tor hidden service, I2P, ...)
  • extensible 🧰 🪡: you can change every component, if you prefer using GitHub repositories to fetch configuration files instead of a remote sftp server, you can change it
  • for all NixOS 💻🏭📱: it can be used for remote workstations, smartphones running NixOS, or servers in a datacenter

Evolutions §

The project is still bare right now, I started it yesterday and I have many ideas to improve it:

  • package it to provide commands in `$PATH` instead of adding scripts to your config repository
  • add a rollback feature in case an upgrade loses connectivity
  • upgrades can deposit a log file on the remote sftp server
  • upgrades could be triggered by the user through a local socket, like opening a web page in a web browser; if it returned output, that would be even better
  • provide more useful modules in the utility nix file (automatically use the host as a binary cache, for instance)
  • keep local information about how to ssh to each client to ease triggering a rebuild (like a file containing the ssh command line)
  • a way to tell a client (when using flakes) to try to update its flakes every time, even if no configuration changed, to keep it up to date

Managing a fleet of NixOS Part 2 - A KISS design

Written by Solène, on 03 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

Introduction §

Let's continue my series trying to design a NixOS fleet management.

Yesterday, I figured out 3 solutions:

  1. periodic data checkout
  2. pub/sub - event driven
  3. push from central management to workstations

I retained only solutions 2 and 3 because they were the only ones providing instantaneous updates. However, I realized we could have a hybrid setup, because I didn't want to throw the KISS solution 1 away.

In my opinion, the best we can create is a hybrid setup of 1 and 3.

A new solution §

In this setup, all workstations will connect periodically to the central server to look for changes, and then trigger a rebuild. This simple mechanism can be greatly extended per-host to fit all our needs:

  • periodicity can be configured per-host
  • the rebuild service can be triggered on purpose manually by the user clicking on a button on their computer
  • the rebuild service can be triggered on purpose manually by a remote sysadmin having access to the system (using a VPN), this partially implements solution 3
  • the central server can act as a binary cache if configured per-host, it can be used to rebuild each configuration beforehand to avoid rebuilding on the workstations, this is one of Cachix Deploy arguments
  • using ssh multiplexing, remote checks for the repository can have a reduced bandwidth usage for maximum efficiency
  • a log of the update can be sent to the sftp server
  • the sftp server can be used to check connectivity and activate a rollback to previous state if you can't reach it anymore (like "magic rollback" with deploy-rs)
  • the sftp server is a de-facto available target for potential backups of the workstation using restic or duplicity

The mechanism is so simple, it could be adapted to many cases, like using GitHub or any data source instead of a central server. I will personally use this with my laptop as a central system to manage remote servers, which is funny as my goal is to use a server to manage workstations :-)

File access design §

One important issue I didn't approach in the previous article is how to distribute the configuration files:

  • each workstation should be restricted to its own configuration only
  • how to send secrets, we don't want them in the nix-store
  • should we use flakes or not? Better to have the choice
  • the sysadmin on the central server should manage everything in a single git repository and be able to use common configuration files across the hosts

Addressing each of these requirements is hard, but in the end I've been able to design a solution that is simple and flexible:

Design pattern for managing users

The workflow is the following:

  • the sysadmin writes configuration files for each workstation in a dedicated directory
  • the sysadmin creates a symlink to a directory of common modules in each workstation directories
  • after a change, the sysadmin runs a program that copies each workstation's configuration into a directory in a chroot; symlinks have to be resolved
  • OPTIONAL: we can dry-build each host configuration to check if they work
  • OPTIONAL: we can build each host configuration to provide them as a binary cache

The directory holding a configuration is likely to have a flake.nix file (which can be a symlink to something generic), a configuration file, a directory with a hierarchy of files to copy as-is into the system (for things like secrets or configuration files not managed by NixOS), and a symlink to a directory of nix files factorized for all hosts.

The NixOS clients will connect to their dedicated users over ssh using their private keys; this allows separating each client on the host system and restricting what they can access using the SFTP chroot feature.

A diagram of a real world case with 3 users would look like this:

Real world example with 3 users

Work required for the implementation §

The setup is very easy and requires only a few components:

  • a program that translates the configuration repository into separate directories in the chroot
  • some NixOS configuration to create the SFTP chroots; we just need to create a nix file with a list of pairs of values containing "hostname" "ssh-public-key" for each remote host, which will automate the creation of the ssh configuration file
  • a script on the user side that connects, looks for changes, and runs nixos-rebuild if something changed; maybe rclone could be used to "sync" over SFTP efficiently (a rough sketch of such a script is shown after this list)
  • a systemd timer for the user script
  • a systemd socket triggering the user script, so people can just open http://localhost:9999 to trigger the socket and force the update; create a bookmark named "UPDATE MY MACHINE" on the user's system
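As mentioned in the list above, here is a rough sketch of what the user-side script could look like. Everything in it (paths, user and host names, the way changes are detected) is made up for the illustration and is not the actual implementation:

#!/bin/sh
# hypothetical client-side updater, run as root: fetch this machine's
# configuration from its chrooted SFTP account and rebuild only on change
REMOTE="mymachine@central.example.org"
WORKDIR="/var/fleet"

mkdir -p "$WORKDIR/new" && cd "$WORKDIR/new" || exit 1

# the chroot only exposes this machine's files, so fetch everything
sftp -q -b - "$REMOTE" <<EOF || exit 1
get -r *
EOF

if ! diff -r "$WORKDIR/current" "$WORKDIR/new" >/dev/null 2>&1
then
    cp -r "$WORKDIR/new/." /etc/nixos/ && nixos-rebuild switch
    rm -rf "$WORKDIR/current" && mv "$WORKDIR/new" "$WORKDIR/current"
else
    rm -rf "$WORKDIR/new"
fi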

Conclusion §

I absolutely love this design, it's simple, and each piece can easily be replaced to fit one's need. Now, I need to start writing all the bits to make it real, and offer it to the world 🎉.

There is a NixOS module named autoUpgrade, I'm aware of its existence, but while it's absolutely perfect for the average user workstation or server, it's not practical for managing a fleet of NixOS efficiently.

How to host a local front-end for Reddit / YouTube / Twitter on NixOS

Written by Solène, on 02 September 2022.
Tags: #nixos #privacy

Comments on Fediverse/Mastodon

Introduction §

I'm not a consumer of proprietary social networks, but sometimes I have to access content hosted there, and in that case I prefer to use a front-end reimplementation of the service.

These front-ends are network services that act as proxies to the proprietary services, offer a different (usually cleaner) interface, and also remove tracking / ads.

In your web browser, you can use the Privacy Redirect extension to be automatically redirected to such front-ends. Even better, you can host them locally instead of using public instances that may be unresponsive; on NixOS it's super easy.

We are going to see how to deploy them on NixOS.

Privacy Redirect GitHub project page

libreddit GitHub project page: Reddit front-end

Invidious project website: YouTube front-end

nitter GitHub project page: Twitter front-end

Deployment §

As of September 2022, libreddit, invidious and nitter have NixOS modules to manage them.

The following pieces of code can be used in your NixOS configuration file (/etc/nixos/configuration.nix as the default location) before running "nixos-rebuild" to use the newer configuration.

I focus on running the services locally without exposing them on the network, so you will need a bit more configuration to add HTTPS and tune the performance if you want more users.

Libreddit §

We will use the container and run it with podman, a docker alternative. The service takes only a few megabytes to run.

The service is exposed on http://127.0.0.1:12344

  services.libreddit = {
      enable = true;
      address = "127.0.0.1";
      port = 12344;
  };

Invidious §

This is using the NixOS module.

The service is exposed on http://127.0.0.1:12345

  services.invidious = {
      enable = true;
      nginx.enable = false;
      port = 12345;

      # if you want to disable recommended videos
      settings = {
        default_user_preferences = {
          "related_videos" = false;
        };
      };
  };

Nitter §

This is using the NixOS module.

The service is exposed on http://127.0.0.1:12346

  services.nitter = {
      enable = true;
      server.port = 12346;
      server.address = "127.0.0.1";
  };

Privacy redirect §

By default, the extension will pick a random public instance, you can configure it per service to use your local instance.

Conclusion §

I really enjoy these front-ends; they use a lot fewer resources when browsing these websites, and I prefer to run them locally for performance reasons.

Running such instances on your local computer doesn't help with regard to privacy, though. If you care about privacy, you should use public instances, or host your own public instance so that many different users sit behind the same service, which makes profiling harder. But if you want to host such an instance, you may need to tweak the performance, add a reverse proxy and a valid TLS certificate.

Managing a fleet of NixOS Part 1 - Design choices

Written by Solène, on 02 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

Introduction §

I have a grand project in mind, and I need to think about it before starting any implementation. The blog is the right place for me to explain what I want to do and the different solutions.

It's related to NixOS. I would like to ease the management of a fleet of NixOS workstations that could be anywhere.

This could be useful for companies using NixOS for their employees, to manage all the workstations remotely, but also for people who may manage NixOS systems in various places (cloud, datacenter, house, family computers).

With this kind of central management, it makes sense not to give your users root access: they would have to call their technical support to ask for a change, and their system could be updated quickly to reflect the request. This can be super useful for remote family computers that need an extra program not currently installed, when you took on the responsibility of handling their system...

With NixOS, this setup totally makes sense: you can potentially reproduce users' bugs as you have their configuration, stage new changes for testing, and users can roll back to a previous working state in case of a big regression.

The Cachix company made this possible before I figured out a solution. It's still not too late to propose an open source alternative.

Cachix Deploy

Defining the project §

The purpose of this project is to have a central management system on which you keep the configuration files for all the NixOS systems around, and which allows the administrator to make the remote NixOS systems pick up the new configuration as soon as possible when required.

We can imagine three different implementations at the highest level:

  • a scheduled job on each machine looking for changes in the source. The source could be a git repository, a tarball or anything that could be used to carry the configuration.
  • NixOS systems could connect to something like a pub/sub system and wait for an event from the central management to trigger a rebuild; the event may or may not contain information / sources.
  • the central management system could connect to the remote NixOS to trigger the build / push the build

These designs all have pros and cons. Let's see them in more detail.

Solution 1 - Scheduled job §

In this scenario, The NixOS system would use a cron or systemd timer to periodically check for changes and trigger the update.

Pros §

  • low maintenance
  • could interactively ask the user when they want to upgrade if not now

Cons §

  • may not run at all if the system is not up at the correct time, or could run at a delayed time depending on the situation
  • can't force an update as soon as possible
  • not really bandwidth effective if you often poll
  • no feedback from the central management about who made/received the update (except by adding a call to the server?)

Solution 2 - Remote systems are listening for changes (publisher / subscriber) §

In this scenario, the NixOS system would always be connected to the central management, using some kind of protocol like MQTT, BOSH or similar.

Pros §

  • you know which systems are up
  • events from central management are instantaneous and should wait for an acknowledgment
  • updates should propagate very quickly
  • could interactively ask the user when they want to upgrade if not now

Cons §

  • this can lead to privacy issues as you know when each host is connected
  • this adds complexity to the server
  • this adds complexity on each client
  • firewalls usually don't like long-lived connections; an HTTPS-based solution would help bypass them

Solution 3 - The central management pushes the updates to the remote systems §

In this scenario, the NixOS system would be reachable over a protocol allowing to run commands like SSH. The central management system would run a remote upgrade on it, or push the changes using tools like deploy-rs, colmena, morph or similar...

Awesome-nix list: deployment-tools

Pros §

  • update is immediate
  • SSH could be exposed over Tor or I2P for maximum firewall-bypassing capability

Cons §

  • offline systems may be complicated to update, you would need to try to connect to them often until they are reachable
  • you can connect to the remote machine and potentially spy on the user. With the alternatives above, you could potentially achieve the same by reconfiguring the computer to allow this, but it would have to be done on purpose

Making a choice §

I tried to state the pros and cons of each setup, but I can't see a clear winner. However, I'm not convinced by Solution 1, as you don't have any feedback or direct control over the systems, so I prefer to abandon it.

Solutions 2 and 3 are still in the competition; we basically end up with a choice between a PUSH and a PULL workflow.

Conclusion §

In order to choose between 2 and 3, I will need to experiment with the Solution 2 technologies, as I have never used them (MQTT, RabbitMQ, BOSH, etc…).

NixOS specific feature: specialisations

Written by Solène, on 29 August 2022.
Tags: #nixos #nix #tweag

Comments on Fediverse/Mastodon

Credits §

This blog post is a republication of the article I published on my employer's blog under CC BY 4.0. I'm grateful to be allowed to publish NixOS related content there, but also to be able to reuse it here!

License CC by 4.0

Original publication place: Tweag I/O - NixOS Specialisations

After the publication of the original post, the NixOS wiki got updated to contain most of this content; I added some extra bits for the specific use case of "options for the non-specialisation that shouldn't be inherited by specialisations", which wasn't covered in this text.

NixOS wiki: Specialisation

Introduction §

I often wished to be able to define different boot entries for different uses of my computer, be it for separating professional and personal use, testing kernels or using special hardware. NixOS has a unique feature that solves this problem in a clever way — NixOS specialisations.

A NixOS specialisation is a mechanism to describe additional boot entries when building your system, with specific changes applied on top of your non-specialised configuration.

When do you need specialisations §

You may have hardware occasionally connected to your computer, and some of these devices may require incompatible changes to your day-to-day configuration. Specialisations can create a new boot entry you can use when starting your computer with your specific hardware connected. This is common for people with external GPUs (Graphical Processing Unit), and the reason why I first used specialisations.

With NixOS, when I need my external GPU, I connect it to my computer and simply reboot my system. I choose the eGPU specialisation in my boot menu, and it just works. My boot menu looks like the following:

NixOS specialisation shown in Grub

You can also define a specialisation which will boot into a different kernel, giving you a safe opportunity to try a new version while keeping a fallback environment with the regular kernel.

We can push the idea further by using a single computer for professional and personal use. Specialisations can have their own users, services, packages and requirements. This would create a hard separation without using multiple operating systems. However, by default, such a setup would be more practical than secure. While your users would only exist in one specialisation at a time, both users’ data are stored on the same partition, so one user could be exploited by an attacker to reach the other user’s data.

In a follow-up blog post, I will describe a secure setup using multiple encrypted partitions with different passphrases, all managed using specialisations with a single NixOS configuration. This will be quite awesome :)

How to use specialisations §

As an example, we will create two specialisations, one having the user Chani using the desktop environment Plasma, and the other with the user Paul using the desktop environment Gnome. Auto login at boot will be set for both users in their own specialisations. Our user Paul will need an extra system-wide package, for example dune-release. Specialisations can use any argument that would work in the top-level configuration, so we are not limited in terms of what can be changed.

NixOS manual: Configuration options

If you want to try, add the following code to your configuration.nix file.

specialisation = {
  chani.configuration = {
    system.nixos.tags = [ "chani" ];
    services.xserver.desktopManager.plasma5.enable = true;
    users.users.chani = {
      isNormalUser = true;
      uid = 1001;
      extraGroups = [ "networkmanager" "video" ];
    };
    services.xserver.displayManager.autoLogin = {
      enable = true;
      user = "chani";
    };
  };

  paul.configuration = {
    system.nixos.tags = [ "paul" ];
    services.xserver.desktopManager.gnome.enable = true;
    users.users.paul = {
      isNormalUser = true;
      uid = 1002;
      extraGroups = [ "networkmanager" "video" ];
    };
    services.xserver.displayManager.autoLogin = {
      enable = true;
      user = "paul";
    };
    environment.systemPackages = with pkgs; [
      dune-release
    ];
  };
};

After applying the changes, run "nixos-rebuild boot" as root. Upon reboot, in the GRUB menu, you will notice two extra boot entries named “chani” and “paul” just above the last boot entry for your non-specialised system.

Rebuilding the system also creates scripts to switch from one configuration to another, and specialisations are no exception.

Run "/nix/var/nix/profiles/system/specialisation/chani/bin/switch-to-configuration switch" to switch to the chani specialisation.

When using the switch scripts, keep in mind that you may not have exactly the same environment as if you rebooted into the specialisation as some changes may be only applied on boot.

Conclusion §

Specialisations are a perfect solution to easily manage multiple boot entries with different configurations. It is the way to go when experimenting with your system, or when you occasionally need specific changes to your regular system.

My BTRFS cheatsheet

Written by Solène, on 29 August 2022.
Tags: #btrfs #linux

Comments on Fediverse/Mastodon

Introduction §

I recently switched my home "NAS" (single disk!) to BTRFS, it's a different ecosystem with many features and commands, so I had to write a bit about it to remember the various possibilities...

BTRFS is an advanced file-system supported in Linux, somewhat comparable to ZFS.

Layout §

A BTRFS file-system can be made of multiple disks, aggregated as a mirror or "concatenated"; it can be split into subvolumes which may have specific settings.

Snapshots and quotas apply to subvolumes, so it's important to think beforehand when creating BTRFS subvolumes; in most cases, one may want a subvolume for /home and another for /var.

Snapshots / Clones §

It's possible to take an instant snapshot of a subvolume, this can be used as a backup. Snapshots can be browsed like any other directory. They exist in two flavors: read-only and writable. ZFS users will recognize writable snapshots as "clones" and read-only as regular ZFS snapshots.

Snapshots are an effective way to make a backup and rolling back changes in a second.
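
As a quick illustration, here is how snapshots are created; the paths are only examples:

# read-only snapshot of the /home subvolume
btrfs subvolume snapshot -r /home /snapshots/home-2022-08-29

# writable snapshot (what ZFS calls a clone), same command without -r
btrfs subvolume snapshot /home /home-test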

Send / Receive §

A raw file-system stream can be sent / received over the network (or anything supporting a pipe) to allow incremental backups of the differences. This is a very effective way to do incremental backups without having to scan the entire file-system each time you run your backup.
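
Here is a minimal sketch, assuming read-only snapshots and a remote host named "backup-host" reachable over ssh (both are examples):

# full send of a read-only snapshot to a remote BTRFS file-system
btrfs send /snapshots/home-2022-08-29 | ssh backup-host btrfs receive /backup

# incremental send, only the differences since the parent snapshot are transferred
btrfs send -p /snapshots/home-2022-08-29 /snapshots/home-2022-08-30 | ssh backup-host btrfs receive /backup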

Deduplication §

I covered deduplication with bees, but one can also use the program "duperemove" (works on XFS too!). They work a bit differently, but in the end they have the same purpose. Bees operates on the whole BTRFS file-system while duperemove operates on files, so they cover different use cases.

duperemove GitHub project page

Bees GitHub project page
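
A typical duperemove run could look like this; the hash file path is only an example, it allows reusing the hashes on later runs:

# recursively deduplicate a directory, keeping the hashes in a reusable database
duperemove -dr --hashfile=/var/tmp/duperemove.hash /home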

Compression §

BTRFS supports on-the-fly compression per subvolume: the content of each file is stored compressed, and decompressed on demand. Depending on the files, this can result in better performance because you store less content on the disk and are less likely to be I/O bound, but it also improves storage efficiency. This is really content dependent: you won't gain anything on already compressed data like pictures/videos/music, but if you have a lot of text and source files, you can achieve great ratios.

From my experience, compression is always helpful for a regular user workload, and newer algorithms are smart enough not to compress data that wouldn't yield any benefit.

There is a program named compsize that reports compression statistics for a file/directory. It's very handy to know if the compression is beneficial and to what extent.

compsize GitHub project page
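
Compression is enabled with a mount option; zstd below is only an example of algorithm, and compsize then reports how well it performs:

# mount a BTRFS file-system with transparent zstd compression
mount -o compress=zstd /dev/sda1 /mnt

# report the compression ratio of an existing directory
compsize /mnt/home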

Defragmentation §

Fragmentation is a real thing and not specific to Windows, it matters a lot for mechanical hard drives but not really for SSDs.

Fragmentation happens when you create files on your file-system, and delete them: this happens very often due to cache directories, updates and regular operations on a live file-system.

When you delete a file, this creates a "hole" of free space; after some time, you may want to gather all these small parts of free space into big chunks, which matters for mechanical disks as the physical location of data is tied to the raw performance. The defragmentation process is just physically reorganizing data to order file chunks and free space into contiguous blocks.

Defragmentation can also be used to force compression in a subvolume, for instance if you want to change the compression algorithm or if you enabled compression after the files were already written.

The command line is: btrfs filesystem defragment
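
For example, to recursively defragment a directory and force its recompression with zstd (the path is only an example):

btrfs filesystem defragment -r -czstd /home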

Scrubbing §

The scrubbing feature is one of the most valuable features provided by BTRFS and ZFS. Each file in these file-systems is associated with its checksum in some metadata index, which means you can actually check each file's integrity by comparing its current content with the checksum known in the index.

Scrubbing costs a lot of I/O and CPU because you need to compute the checksum of each file, but it's a guarantee for validating the stored data. In case of a corrupted file, if the file-system is composed of multiple disks (raid1 / raid5), it can be repaired from mirrored copies, it should work most of the time because such file corruption is often related to the drive itself, thus other drives shouldn't be affected.

Scrubbing can be started / paused / resumed, which is handy if you need to run heavy I/O and don't want the scrubbing process to slow it down. While the scrub commands can take a device or a path, the path parameter is only used to find the related file-system, it won't just scrub the files in that directory.

The command line is: btrfs scrub
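
Typical usage looks like this:

# start a scrub of the file-system mounted on /
btrfs scrub start /

# check the progress and the number of errors found
btrfs scrub status /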

Rebalancing §

When you aggregate multiple disks into one BTRFS file-system, some files are written to one disk and others to another; after a while, one disk may contain more data than the others.

The rebalancing purpose is to redistribute data across the disks more evenly.
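
A rebalance is started with the balance subcommand; the usage filter below is only an example, it restricts the work to data block groups that are less than 75% full:

btrfs balance start -dusage=75 /mnt

# follow the progress
btrfs balance status /mnt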

Swap file §

You can't create a swap file on a BTRFS disk without a tweak. You must create the file in a directory with the special attribute "no COW" using "chattr +C /tmp/some_directory", then you can move it anywhere as it will inherit the "no COW" flag.

If you try to use a swap file with COW enabled on it, swapon will report a weird error, but you get more details in the dmesg output.
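
Here is a minimal sketch of the whole procedure; the path and size are only examples:

# create a no-COW directory, files created inside inherit the flag
mkdir /var/swap
chattr +C /var/swap

# allocate, format and enable a 4 GB swap file
dd if=/dev/zero of=/var/swap/swapfile bs=1M count=4096
chmod 600 /var/swap/swapfile
mkswap /var/swap/swapfile
swapon /var/swap/swapfile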

Converting §

It's possible to convert an ext2/3/4 file-system into BTRFS; obviously, it must not be in use at the time. The process can be rolled back until certain operations are done, like defragmenting or rebalancing.
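
The conversion is done with btrfs-convert; a minimal sketch with an example device:

# convert an unmounted ext4 file-system in place
btrfs-convert /dev/sdb1

# roll back to ext4, as long as nothing destroyed the saved ext4 image
btrfs-convert -r /dev/sdb1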

My blog workflow

Written by Solène, on 28 August 2022.
Tags: #blog #life

Comments on Fediverse/Mastodon

Introduction §

I occasionally get feedback about my blog, most of the time people are impressed with the rate of publication when they see the index page. I'm surprised it appears to be a huge effort, so I'll explain how I work on my blog.

Make it simple §

I rarely spend more than 40 minutes on a blog post, the average blog post takes 20 minutes. Most of them share something I fiddled with during the day or week, so the topic is still fresh for me. The content of the short articles often consists of dumping a few commands / configuration bits I used, and writing a bit of text around them so the reader knows what to expect from the article, how to use the content and what the point of the topic is.

It's important to keep track of commands/configuration beforehand, so when I'm trying something new, and I think I could write about it, I keep a simple text file somewhere with the few commands I typed or traps I encountered.

Write ideas down §

My fear with regard to the blog is to run out of ideas, which would mean I would have boring days and nothing to write about. Sometimes I look at package repository updates in different Linux distributions, and visit the homepages of projects whose name is unknown to me. This is a fun way to discover new programs / tools and ideas. When something looks interesting, I write its name down somewhere and may come back to it later. I also write down any idea I get about some unusual setup I would like to try; if I come to try it, it will certainly end up as a new blog entry to share my experience.

Don't think too much §

There are two rules for the blog: having fun, and not lying / being accurate. Having fun? Yes, writing can be fun, organizing ideas and sharing them is a cool exercise. Watching the result is fun. Thinking too much about perfection is not fun.

I prefer to write most of the blog posts in one shot, quickly proofread and publish, and be done with it. If I save a blog post as a draft, I may not pick it up quickly, and it's not fun to get into the context to continue it. I occasionally abandon some posts because of that, or simply delete the file and start over.

Sometimes I'm wrong in what I write; in that case I prefer to remove the blog post rather than keep it online at all costs. When I know a text is terribly outdated, I either remove it from the index or update it.

I don't use any analytics services and I do the blog for free, the only incentive is to have fun and to know it will certainly help someone looking for information.

The blog software §

This website is generated with a custom blog generator I wrote a few years ago (cl-yag); the workflow to use it is very simple and it never fails me:

  • write the blog file in the format I want, I currently use GemText but in the past some blog posts were written in org-mode, man page or markdown
  • add an entry in the list of articles, this contains all the metadata such as the title, date, tags and description for the open graph protocol (optional)
  • run "make"
  • wait 30s, it's online on HTTP / gopher / Gemini

The program is really fast despite generating all the files every time: the "raw text to HTML" content is cached and reused when wrapping the HTML in the blog layout, the Gemini version is published as-is, and the gopher files are processed by a Perl script rewriting all the links and wrapping the text (which takes a while).

Quick proofreading §

Before publishing, I read my text and run a spellcheck program on it; my favorite is LanguageTool because it finds so many more mistakes than aspell, which only finds obvious typos.

More advanced blog posts §

Some blog posts are more elaborate: they often describe a complex setup, and I need to ensure readers can reproduce all the steps and get the same results as me. This kind of blog post takes a day to write; they often require using a spare computer for experimentation, formatting, installing, downloading things, adjusting the text, starting over because I changed the text...

Conclusion §

If you want to publish a blog, my advice would be to have fun, to use a blog/website generator that doesn't get in your way, and to not be afraid of getting started. It can be scary at first to publish texts on the wild Internet and to fear being wrong, but it happens: accept it, learn from your mistakes and improve for the next time.

Local peer to peer binary cache with NixOS and Peerix

Written by Solène, on 25 August 2022.
Tags: #nixos

Comments on Fediverse/Mastodon

Introduction §

There is a cool project related to NixOS, called Peerix. It's a local daemon exposed as a local substituter (a server providing binary packages) that will discover other Peerix daemons on the local network, and use them as a source of binary packages.

Peerix is a simple way to reuse packages already installed somewhere on the network instead of downloading them again. Packages delivered by Peerix substituters are signed with a private key, so you need to import each computer's public key before being able to download/use their packages. While this can be cumbersome, it's also mandatory to prevent someone on the network from spoofing packages.

Peerix should be used wisely, because secrets in your store could be leaked to others.

Peerix GitHub page

Generating the keys §

First step is to generate a pair of keys for each computer using Peerix.

In the directory in which you have your configurations files, use the command:

nix-store --generate-binary-cache-key "peerix-$(hostname -s)" peerix-private peerix-public

Setup §

I will only cover the flakes installation on NixOS. Add the files peerix-private and peerix-public to git, as this is a requirement for flakes.

NOTE: if you find a way to not add the private key to the store, I'll be glad to hear about your solution!

Add this input in your flake.nix file:

  peerix = {
    url = "github:cid-chan/peerix";
    inputs.nixpkgs.follows = "nixpkgs";
  };

Add "nixos-hardware" in the outputs parameters lile:

outputs = { self, nixpkgs, peerix }: {

And in the modules list of your configuration, add this:

  peerix.nixosModules.peerix
  {
    services.peerix = {
      enable = true;
      package = peerix.packages.x86_64-linux.peerix;
      openFirewall = true; # UDP/12304
      privateKeyFile = ./peerix-private;
      publicKeyFile =  ./peerix-public;
      publicKey = "THE CONTENT OF peerix-public FROM THE OTHER COMPUTER";
      # example # publicKey = "peerix-laptop:1ZjzxYFhzeRMni4CyK2uKHjgo6xy0=";
    };
  }

If you have multiple public keys to use, just add them with a space between each value.

Run "nix flake lock --update-input peerix" and you can now reconfigure your system.

How to use §

There is nothing special to do, when you update your system, or use nix-shell, the nix-daemon will use the local Peerix substituter first which will discover other Peerix instances if any, and will use them when possible.

You can check the logs of the peerix daemons using "journalctl -f -u peerix.service" on both systems.

Conclusion §

While Peerix isn't a big project, it has a lot of potential to help NixOS users with multiple computers use bandwidth more efficiently, but also save build time. If you build the same project (with the same inputs) on one of your computers, the others can pull the result from it.

My RSS feed with HTML content is back

Written by Solène, on 23 August 2022.
Tags: #blog #cl-yag

Comments on Fediverse/Mastodon

Dear readers, given the popular demand for an RSS feed with HTML in it (which used to be the default), I modified the code to generate a new RSS file using HTML for its content.

Here is a list of RSS feeds available on my blog:

RSS feed using the same raw content I'm using to write, available over HTTP

RSS feed using HTML, available over HTTP

RSS feed with gopher links and raw content, available over HTTP

RSS feed with gemini links and raw content, available over Gemini

RSS feed with gopher links and raw content, available over Gopher

I hope you find the one that fits you best. If you don't know, pick the first or second item in the list.

Using nix download bandwidth limit feature

Written by Solène, on 23 August 2022.
Tags: #bandwidth #nix #linux

Comments on Fediverse/Mastodon

Introduction §

I submitted a change to the nix package manager last week, and it got merged! It's now possible to define a bandwidth speed limit in the nix.conf configuration file.

Link to the GitHub pull request

This kind of limit setting is very important for users who don't have fast Internet access, as it allows the service to download packages while keeping the network usable in the meantime.

Unfortunately, we need to wait for the next Nix version to be able to use it; fortunately, it's easy to override the package's attributes to use the merge commit as a new version of nix.

Let's see how to configure NixOS to use a newer Nix version from git.

Setup §

On NixOS, we will override the nix package attributes to change its version and the corresponding checksum.

We want the new option "download-speed" that takes a value for the kilobytes per second speed limit.

  nix.extraOptions = ''
    download-speed = 800
  '';
  nixpkgs.overlays = [
      (self: super:
      {
          nix = super.nix.overrideDerivation (oldAttrs: {
              name = "nix-unstable";
              src = super.fetchFromGitHub {
                  owner = "NixOS";
                  repo = "nix";
                  rev = "8d84634e26d6a09f9ca3fe71fcf9cba6e4a95107";
                  sha256 = "sha256-Z6weLCmdPZR044PIAA4GRlkQRoyAc0s5ASeLr+eK1N0=";
              };
          });
      })
  ];

Run "nixos-rebuild switch" as root, and voilà!

For non-NixOS, you can clone the git repository, check out the corresponding commit, build nix and install it on your system.
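
Here is a rough sketch of these steps, reusing the commit from the overlay above; the prefix is only an example and you may need extra build dependencies:

git clone https://github.com/NixOS/nix/
cd nix
git checkout 8d84634e26d6a09f9ca3fe71fcf9cba6e4a95107
./bootstrap.sh
./configure --prefix=/usr/local
make
make install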

Going further §

Don't forget to remove that override once a new nix release is published, or you will keep running an older version of nix.

Minecraft performance improvement using the Sodium mod

Written by Solène, on 21 August 2022.
Tags: #minecraft #gaming #performance

Comments on Fediverse/Mastodon

Introduction §

This text is some kind of personal notes I save here, but it may be useful for some people. Don't expect high quality writing here 😀.

Modding §

Minecraft is quite slow and unoptimized; fortunately, using the mod "Sodium", you get access to more advanced video settings that allow reducing the computer's power usage, or just making the game playable on older computers.

Sodium GitHub page

This requires PolyMC, a launcher for Minecraft which takes care of mods and other things. PolyMC is available on Linux and Windows.

PolyMC wiki

Setup §

In PolyMC:

  • create a new instance
  • pick the Minecraft version you want
  • below the minecraft versions, in "mod loader", choose "Fabric" and choose the version you want (the one with the star is recommended)
  • Press Ok
  • Modify the instance and choose Mods tab / right click on it to see the mods
  • Click on "Download mods"
  • Search "Sodium" in the list and click on it
  • Click on "Add mod for download"
  • Press OK
  • Close

Now your Minecraft is using the Sodium mod, which gives you greater choice within the "Video settings", like a Performance tab with more options.

Using systemd to make a Minecraft server to start on-demand and stop when it has no player

Written by Solène, on 20 August 2022.
Tags: #minecraft #nixos #systemd #automation

Comments on Fediverse/Mastodon

Introduction §

Sometimes it feels like I have specific use cases I need to solve alone. Today, I wanted to have a local Minecraft server running on my own workstation, but only when someone needs it. The point was that instead of having a big Java server running all the time, the Minecraft server would start upon connection from a player, and would stop when no player remains.

However, after looking a bit more into this topic, it seems I'm not the only one who needs this.

on-demand-minecraft: a project to automatically start a remote cloud server for whitelisted players

minecraft-server-hibernation: a wrapper that starts and stops a Minecraft server upon conditions

As often, I prefer not to rely on third party tools when I can, so I found a solution to implement this using only systemd.

Even better, note that this method can work with any daemon, given you can programmatically determine whether to keep it running or stop it. In this example, I'm using Minecraft, and the decision to stop the server is based on the connected player count fetched through rcon (a remote administration protocol).

The setup §

I made a simple graph to show the dependencies, there are many systemd components used to build this.

systemd dependency graph

The important part is the use of the systemd proxifier: it's a command accepting a connection over TCP and relaying it to another socket; meanwhile, you can do things such as starting a server and waiting for it to be ready. This is the key of this setup, without it, none of this would be possible.

Basically, listen-minecraft.socket listens on the public TCP port and runs listen-minecraft.service upon connection. This service needs hook-minecraft.service which is responsible for stopping or starting minecraft, but will also make listen-minecraft.service wait for the TCP port to be open so the proxifier will relay the connection to the daemon.

Then, minecraft-server.service is started alongside with stop-minecraft.timer which will regularly run stop-minecraft.service to try to stop the server if possible.

Configuration §

I used NixOS to configure my on-demand Minecraft server. This is something you can do on any systemd capable system, but I will provide a NixOS example; it shouldn't be hard to translate it to regular systemd configuration files.

{ config, lib, pkgs, modulesPath, ... }:
let

  # check every 20 seconds if the server
  # need to be stopped
  frequency-check-players = "*-*-* *:*:0/20";

  # time in second before we could stop the server
  # this should let it time to spawn
  minimum-server-lifetime = 300;

  # minecraft port
  # used in a few places in the code
  # this is not the port that should be used publicly
  # don't need to open it on the firewall
  minecraft-port = 25564;

  # this is the port that will trigger the server start
  # and the one that should be used by players
  # you need to open it in the firewall
  public-port = 25565;

  # a rcon password used by the local systemd commands
  # to get information about the server such as the
  # player list
  # this will be stored plaintext in the store
  rcon-password = "260a368f55f4fb4fa";

  # a script used by hook-minecraft.service
  # to start minecraft and the timer regularly
  # polling for stopping it
  start-mc = pkgs.writeShellScriptBin "start-mc" ''
    systemctl start minecraft-server.service
    systemctl start stop-minecraft.timer
  '';

  # wait 60s for a TCP socket to be available
  # to wait in the proxifier
  # idea found in https://blog.developer.atlassian.com/docker-systemd-socket-activation/
  wait-tcp = pkgs.writeShellScriptBin "wait-tcp" ''
    for i in `seq 60`; do
      if ${pkgs.libressl.nc}/bin/nc -z 127.0.0.1 ${toString minecraft-port} > /dev/null ; then
        exit 0
      fi
      ${pkgs.busybox.out}/bin/sleep 1
    done
    exit 1
  '';

  # script returning true if the server has to be shutdown
  # for minecraft, uses rcon to get the player list
  # skips the checks if the service started less than minimum-server-lifetime
  no-player-connected = pkgs.writeShellScriptBin "no-player-connected" ''
    servicestartsec=$(date -d "$(systemctl show --property=ActiveEnterTimestamp minecraft-server.service | cut -d= -f2)" +%s)
    serviceelapsedsec=$(( $(date +%s) - servicestartsec))

    # exit if the server started less than 5 minutes ago
    if [ $serviceelapsedsec -lt ${toString minimum-server-lifetime} ]
    then
      echo "server is too young to be stopped"
      exit 1
    fi

    PLAYERS=`printf "list\n" | ${pkgs.rcon.out}/bin/rcon -m -H 127.0.0.1 -p 25575 -P ${rcon-password}`
    if echo "$PLAYERS" | grep "are 0 of a"
    then
      exit 0
    else
      exit 1
    fi
  '';

in
{

  # use NixOS module to declare your Minecraft
  # rcon is mandatory for no-player-connected
  services.minecraft-server = {
    enable = true;
    eula = true;
    openFirewall = false;
    declarative = true;
    serverProperties = {
      server-port = minecraft-port;
      difficulty = 3;
      gamemode = "survival";
      force-gamemode = true;
      max-players = 10;
      level-seed = 238902389203;
      motd = "NixOS Minecraft server!";
      white-list = false;
      enable-rcon = true;
      "rcon.password" = rcon-password;
    };
  };

  # don't start Minecraft on startup
  systemd.services.minecraft-server = {
      wantedBy = pkgs.lib.mkForce [];
  };

  # this waits for incoming connection on public-port
  # and triggers listen-minecraft.service upon connection
  systemd.sockets.listen-minecraft = {
    enable = true;
    wantedBy = [ "sockets.target" ];
    requires = [ "network.target" ];
    listenStreams = [ "${toString public-port}" ];
  };

  # this is triggered by a connection on TCP port public-port
  # start hook-minecraft if not running yet and wait for it to return
  # then, proxify the TCP connection to the real Minecraft port on localhost
  systemd.services.listen-minecraft = {
    path = with pkgs; [ systemd ];
    enable = true;
    requires = [ "hook-minecraft.service" "listen-minecraft.socket" ];
    after =    [ "hook-minecraft.service" "listen-minecraft.socket"];
    serviceConfig.ExecStart = "${pkgs.systemd.out}/lib/systemd/systemd-socket-proxyd 127.0.0.1:${toString minecraft-port}";
  };

  # this starts Minecraft if required
  # and wait for it to be available over TCP
  # to unlock listen-minecraft.service proxy
  systemd.services.hook-minecraft = {
    path = with pkgs; [ systemd libressl busybox ];
    enable = true;
    serviceConfig = {
        ExecStartPost = "${wait-tcp.out}/bin/wait-tcp";
        ExecStart     = "${start-mc.out}/bin/start-mc";
    };
  };

  # create a timer running every frequency-check-players
  # that runs stop-minecraft.service script on a regular
  # basis to check if the server needs to be stopped
  systemd.timers.stop-minecraft = {
    enable = true;
    timerConfig = {
      OnCalendar = "${frequency-check-players}";
      Unit = "stop-minecraft.service";
    };
    wantedBy = [ "timers.target" ];
  };

  # run the script no-player-connected
  # and if it returns true, stop the minecraft-server
  # but also the timer and the hook-minecraft service
  # to prepare a working state ready to resume the
  # server again
  systemd.services.stop-minecraft = {
    enable = true;
    serviceConfig.Type = "oneshot";
    script = ''
      if ${no-player-connected}/bin/no-player-connected
      then
        echo "stopping server"
        systemctl stop minecraft-server.service
        systemctl stop hook-minecraft.service
        systemctl stop stop-minecraft.timer
      fi
    '';
  };

}

Conclusion §

I'm really happy to have figured out this smart way to create an on-demand Minecraft server, and the design can be reused with many other kinds of daemons.

How to hack on Nix and try your changes

Written by Solène, on 19 August 2022.
Tags: #nix #development #nixos

Comments on Fediverse/Mastodon

Introduction §

A non-obvious development process is hard to document. I wanted to make changes to the nix program, but I didn't know how to try them.

Fortunately, a coworker explained to me the process, and here it is!

The nix project GitHub page

Get the sources and compile §

First, you need to get the sources of the project, and compile it in some way to run it from the project directory:

git clone https://github.com/NixOS/nix/
cd nix
nix-shell
./bootstrap.sh
./configure --prefix=$PWD
make

Run the nix daemon §

In order to try nix, we need to stop nix-daemon.service, but also stop nix-daemon.socket to prevent it from restarting the nix-daemon.

systemctl stop nix-daemon.socket
systemctl stop nix-daemon.service

Now, when you want your nix-daemon to work, just run this command from the project directory:

sudo bin/nix --extra-experimental-features nix-command daemon

Note that this command doesn't fork into the background.

If you need some settings in the nix.conf file, you have to create etc/nix/nix.conf relative to the project directory.
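
For instance, a minimal nix.conf sketch (the settings below are only examples, pick whatever you need to test):

# etc/nix/nix.conf relative to the project directory
experimental-features = nix-command flakes
download-speed = 800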

Restart the nix-daemon §

Once you are done with the development, exit your running daemon and restart the service and socket.

systemctl start nix-daemon.socket
systemctl start nix-daemon.service

Why is the OpenBSD documentation so good?

Written by Solène, on 18 August 2022.
Tags: #openbsd #documentation

Comments on Fediverse/Mastodon

Introduction §

The OpenBSD operating system is known to be secure, but also for having an accurate and excellent documentation. In this text, I'll try to figure out what makes the OpenBSD documentation so great.

The OpenBSD project website

A multi-media documentation §

Here is a list of the media used to distribute information:

  • first email upon installation
  • man pages
  • website
  • Frequently Asked Questions on the website
  • Examples
  • Commit history
  • Newsletters for announcement

Let's study them one by one.

The first email §

After you install OpenBSD, when you log in as root for the first time, you are greeted by a message saying you received an email. In fact, there is an email from Theo de Raadt crafted at install time which welcomes you to OpenBSD. It gives you a few hints about how to get started, but most notably it leads you to the afterboot(8) man page.

The afterboot(8) man page is described as "things to check after the first complete boot", it will introduce you to the most common changes you may want to do on your system. But most importantly, it explains how to use the man page like looking at the SEE ALSO section leading to other man pages related to the current one.

The afterboot(8) man page

Man pages §

The man pages are a way to ship documentation with a software, usually you find a man page with the same name as the command or configuration file you want to document. It seems man pages appeared in 1971, the "man" stands for manual.

Wikipedia page about the man page

The manual pages are literally the core of the OpenBSD documentation, they follow a standard and contain a lot of metadata. When you write a man page, you not only write text, but you describe your text. For instance, when we need to refer to another man page, we use the "cross-reference" tag; this rich format allows accurate rendering but also accurate searches.

When we refer to a page in a text discussion, we often write its name including the section, like man(1). If you see man(1), you understand it's a man page for "man" within the first section. There are 9 sections of man pages, an old way to sort them into categories, so if two things have the same name, you use the section to distinguish them. Here is an example: "man passwd" will display passwd(1), which is a program to change the password of a user; however, you may want to read passwd(5), which describes the format of the file /etc/passwd, in which case you would use "man 5 passwd". I always found this way of referring to man pages very practical.

On OpenBSD, there are man pages for all the base systems programs, and all the configuration files. We always try to be very consistent in the way information is shown, and the wording is carefully chosen to be as clear as possible. They are a common effort involving multiple reviewers, changes must be approved by at least one member of the team. When an OpenBSD program is modified, the man page must be updated accordingly. The pages are also occasionally updated to include more history explaining the origins of the commands, it's always very instructive.

When it comes to packages, there is no guarantee as we just bundle upstream software, which may not provide a man page. However, package maintainers offer a "pkg-readme" file for packages requiring very specific tuning; these files can be found in /usr/local/share/doc/pkg-readmes/.

Online OpenBSD man pages reader: the rich format shines here

Website §

One way to distribute information related to OpenBSD is the website, it explains what the project is about, on which hardware you can install it, why it exists and what it provides. It has a lot of information that is interesting before you install OpenBSD, so it can't live in man pages.

The OpenBSD website

FAQ §

I chose to treat the Frequently Asked Questions part of the website as a different documentation medium. It's a special place that contains real world use cases: while the man pages are the reference for programs or configuration, they lack the big picture overview like "how to achieve XY on OpenBSD". The FAQ is particularly well crafted, with different categories such as multimedia, virtualization and VPNs...

The OpenBSD FAQ

Examples §

The OpenBSD installation comes with a directory /etc/examples/ providing configuration file samples and comments. They are a good way to get started with a configuration file and understand the file format described in the corresponding man page.

Commits history §

This part is not for end users, but for contributors. When a change is done in the sources, there is often a great commit message explaining the logic of the code and the reasons for the changes. I say often because some trivial changes don't require such explanations every time. The commit messages are a valuable source of information when you need to know more about a component.

Announcements by email §

Documentation is also about keeping users informed of important news. OpenBSD uses an opt-in method with mailing lists. One list that is important for information is announce@openbsd.org, where new releases and errata are published. This is a simple and reliable method that works for everyone with an email address.

No wiki §

This is an important point in my opinion: all the OpenBSD documentation is stored in the source trees, and changes must be committed by someone with commit access. Wikis often have orphan pages, outdated information, and duplicate pages with contradictory content. While they can be rich and useful, their content often tends to rot if the community doesn't spend a huge amount of time maintaining it.

One system as a whole §

Finally, most of the above is possible because OpenBSD is developed by the same team. The team can enforce their documentation requirements from top to bottom, which leads to accurate and consistent documentation all across the system. This is more complicated on a Linux system where all components come from various teams with different methods.

When you get your hands on OpenBSD, you should be able to understand how to use all the components from the base system (= not the packages) with just the man pages; being offline doesn't prevent you from configuring your system.

Conclusion §

What makes good documentation? It's hard to tell. In my opinion, having a trustworthy source of knowledge is the most important, whatever the format or medium. If you can't trust what you read because it may be outdated, or may not apply to your current version, it's hard to rely on it. Man pages are a good format, very practical, but only when they are well written, which is a difficult task requiring a lot of time.

BTRFS deduplication using bees

Written by Solène, on 16 August 2022.
Tags: #nixos #btrfs #linux

Comments on Fediverse/Mastodon

Introduction §

BTRFS is a Linux file system that uses a Copy On Write (COW) model. It provides many features like on-the-fly compression, volume management, snapshots and clones, etc...

Wikipedia page about Copy on write

However, BTRFS doesn't natively support deduplication, a feature that looks for chunks in files to see if another file shares that chunk; if so, only one copy of the data is used for both files. In some scenarios, this can drastically reduce disk space usage.

This is where we can use "bees", a program that can do offline deduplication for BTRFS file systems. In this context, offline means it's done when you run a command, as opposed to live/on-the-fly deduplication that is applied instantly. The HAMMER file system from DragonFly BSD does offline deduplication, while ZFS does it live. There are pros and cons to both models; the ZFS documentation recommends 1 GB of memory per terabyte of disk when deduplication is enabled, because it requires keeping all chunk hashes in memory.

Bees GitHub page project

Usage §

Bees is a service you need to install and start on your system, it has some limitations and caveats documented, but it should work for most users.

You can define a BTRFS file system on which you want deduplication and a load target. Bees will work silently when your system is below the load threshold, and will stop when the load exceeds the limit; this is a simple mechanism to prevent bees from eating all your system resources when freshly modified/created files need to be scanned.

The first time you run bees on a file system that is not empty, it may take a while to scan everything, but then it's really quiet, except if you do heavy I/O operations like downloading big files; overall it does a good job at staying behind the scenes.

Installation on NixOS §

Add this code to /etc/nixos/configuration.nix and run "nixos-rebuild switch" to apply the changes.

services.beesd.filesystems = {
  root = {
    spec = "LABEL=nixos";
    hashTableSizeMB = 256;
    verbosity = "crit";
    extraOptions = [ "--loadavg-target" "2.0" ];
  };
};

The code supposes your root partition is labelled "nixos", that you want a hash table of 256 MB (this will be used by bees), and that you don't want bees to run when the system load is higher than 2.0.

You may want to tune the values, mostly the hash table size, depending on your file system size. Bees is designed for terabyte-scale file systems, but this doesn't mean you can't use it on an average user's disks.

Results §

I tried on my workstation with a lot of build artifacts and git repositories, bees reduced the disk usage from 160 GB to 124 GB, so it's a huge win here.

Later, I tried again on some Steam games with a few proton versions, it didn't save much on the games but saved a lot on the proton installations.

On my local cache server, it saved nothing, but that is to be expected.
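
If you want to measure how much data ends up shared after a bees run, btrfs can report it per directory (the path is only an example):

# show total, exclusive and shared disk usage of a directory
btrfs filesystem du -s /home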

Conclusion §

BTRFS is a solid alternative to ZFS, it requires less memory while providing volumes, snapshots and compression. The only thing it was missing for me was deduplication, and I'm glad it's offline, so it doesn't use too much memory.

How to get NixOS hosted at OpenBSD Amsterdam

Written by Solène, on 07 August 2022.
Tags: #nixos #openbsd #hosting

Comments on Fediverse/Mastodon

Introduction §

In this guide, I'll explain how to create a NixOS VM at the hosting company OpenBSD Amsterdam, which only provides OpenBSD VMs hosted on OpenBSD.

I'd like to thank the team at OpenBSD Amsterdam who offered me a VM for this experiment. While they don't support NixOS officially, they are open to having customers run non-OpenBSD systems on their VMs.

OpenBSD Amsterdam hosting service website

The steps from OpenBSD to NixOS §

Here is a short description of the steps required to get NixOS installed on OpenBSD Amsterdam.

  1. Generate a NixOS VM disk file or use the one I provide
  2. Rent a VM at OpenBSD Amsterdam (5€ / month for 1 vCPU, 1GB of memory and 50 GB of hdd, with a dedicated IPv4, working IPv6 and reverse DNS)
  3. Connect to the hypervisor in order to get the serial console access to your VM
  4. Connect with ssh to your VM to reboot it
  5. In the serial console, upon reboot, boot on bsd.rd (the OpenBSD installer ramdisk)
  6. Overwrite the local disk by fetching your NixOS VM disk file through http/ftp and writing it to the disk
  7. Reboot on NixOS
  8. Configure the network from the serial console, rebuild the system
  9. Enjoy

How to proceed §

You need to order a VM at OpenBSD Amsterdam first. You will receive an email with your VM name, its network configuration (IPv4 and IPv6), and explanations on how to connect to the hypervisor. We will need to connect to the hypervisor to get serial console access to the virtual machine. A serial console is a text interface to a machine: you get the machine output displayed in your serial console client, and what you type is sent to the machine as if you had a keyboard connected to it.

It can be useful to read the onboarding guide before starting.

OpenBSD Amsterdam onboarding guide

Get into the OpenBSD installer §

Our first step is to get into the OpenBSD installer, so we can use it to overwrite the disk with our VM.

Connect to the hypervisor and attach to your virtual machine's serial console using the following command, assuming your VM name is "vm40" in the example:

vmctl console vm40

You can leave the console anytime by typing "~~." to get back into your ssh shell. The key sequence "~." is used to drop ssh or a local serial console, but when you need to leave a serial console opened from an ssh shell, you need to use "~~.".

You may not see anything at first because nothing gets displayed until something is shown on the machine's first virtual tty; press "enter" and you should see a login prompt. We don't need it, but it confirms the serial console is working.

In parallel, connect to your VM using ssh, find the root password at the end of ~/.ssh/authorized_keys, use "su -" to become root and run "reboot".

You should see the shutdown sequence scrolling in the hypervisor ssh session displaying the serial console; wait for the machine to reboot and watch for the boot prompt, at which you will type bsd.rd:

Using drive 0, partition 3.
Loading......
probing: pc0 com0 mem[638K 3838M 4352M a20=on]
disk: hd0+
>> OpenBSD/amd64 BOOT 3.53
com0: 115200 baud
switching console to com0
>> OpenBSD/amd64 BOOT 3.53
boot> bsd.rd [ENTER] # you need to type bsd.rd

Copy the NixOS VM from the installer §

In this step, we will use the installer to fetch the NixOS VM disk and overwrite the local disk with it.

  • in the installer, type "S" to get a shell:
[...]
Welcome to the OpenBSD/amd64 7.2 installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?
  • enable the network using DHCP with the command:
ifconfig vio0 up autoconf
  • create the disk device in /dev because it's missing by default:
cd /dev
sh MAKEDEV sd0
  • fetch the NixOS disk and overwrite the local drive with it:
  • (remove the gunzip part if you didn't compress your VM disk file)
ftp -o - https://perso.pw/nixos/vm.disk.gz | gunzip -f -c | dd of=/dev/rsd0c bs=10M
  • reboot using the command "reboot"

NixOS grub menu §

At this step, in the serial console you should see a GRUB boot menu, it will boot the first entry after a few seconds. Then NixOS will start booting. In this menu you can access older versions of your system.

After the text has stopped scrolling, press enter. You should see a login prompt, you can log in with the username "root" and the default password "nixos" if you used my disk image.

Configuring NixOS §

If you used my template, your VM still doesn't have network connectivity, you need to edit the file /etc/nixos/configuration.nix in which I've put the most important variables you want to customize at the top of the file. You need to configure your IPv4 and IPv6 addresses and their gateways, and also your username with an ssh key to connect to it, and the system name.

Once you are done, run "nixos-rebuild switch", you should have network if you configured it correctly.

After the rebuild, run "passwd your_user" if you want to assign a password to your newly declared user.

You should be able to connect to your VM using its public IP and your ssh key with your username.

EXTRA: You may want to remove the profile minimal.nix which is imported: it disables documentation and the use of X libraries, but this may trigger package compilation as packages are not always built without X support.

Resizing the partition (last step) §

Because we started with a small 2 GB raw disk to create the virtual machine, the partition is still only 2 GB. We will have to resize the partition /dev/vda1 to take all the disk space, and then resize the ext4 file system.

First step is to extend the partition to 50 GB, the size of the virtual disk offered at openbsd.amsterdam.

# nix-shell -p parted
# parted /dev/vda
(parted) resizepart 1
Warning: The partition /dev/vda1 is currently in use. Are you sure to continue?
Yes/No? yes
End? [2147MB]? 50GB
(parted) quit

Second step is to resize the file system to fill up the partition:

# resize2fs /dev/vda1
The file system /dev/vda1 is mounted on / ; Resizing done on the fly
old_desc_blocks = 1, new_desc_blocks = 6
The file system /dev/vda1 now has a size of 12206775 blocks (4k).

Done! "df -h /" should report the new size.

Congratulations §

You have a fully functional NixOS VM!

Creating the VM §

While I provide a bootable NixOS disk image at https://perso.pw/nixos/vm.disk.gz , you can generate yours with this guide.

  • create a raw disk of 2 GB to install the VM in it
qemu-img create -f raw vm.disk 2G
  • run qemu in a serial console to ensure it works, in the grub boot menu you will need to select the 4th choice enabling serial console in the installer. In this no graphics qemu mode, you can stop qemu by pressing "ctrl+a" and then "c" to drop into qemu's own console, and type "quit" to stop the process.
qemu-system-x86_64 \
  -smp 2 -m 4G \
  -enable-kvm \
  -display curses -nographic \
  -cdrom nixos-minimal*.iso \
  -drive file=vm.disk,if=virtio,format=raw
  • we create the partitions and prepare the chroot
sudo -i
parted /dev/vda -- mklabel msdos
parted /dev/vda -- mkpart primary 1MiB 100%
mkfs.ext4 -L nixos /dev/vda1
mount /dev/disk/by-label/nixos /mnt
mkdir -p /mnt/etc/nixos/
  • edit the file /mnt/etc/nixos/configuration.nix , the NixOS install has nano available by default, but you can have your favorite editor by using "nix-shell -p vim" if you prefer vim. Here is a configuration file that will work:

NixOS configuration.nix file for OpenBSD Amsterdam

  • edit the file /mnt/etc/nixos/hardware-configuration.nix

NixOS hardware-configuration.nix file for OpenBSD Amsterdam

  • we can run the installer, it will ask for the root password, and then we can shut down the VM
nixos-install
systemctl poweroff

Now, you have to host the disk file somewhere to make it available through the http or ftp protocol in order to retrieve it from the openbsd.amsterdam VM. I'd recommend compressing the file by running gzip on it, which will drastically reduce its size from 2GB to ~500MB.
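
For example, compressing the image and serving it from the current directory with a throwaway web server could look like this; any web server reachable from the VM will do, python is just convenient:

gzip -9 vm.disk

# serve the current directory over HTTP on port 8000
python3 -m http.server 8000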

Full disk encryption §

The ext4 file system offers a way to encrypt specific directories, it can be enough for most users.

However, if you want to enable full disk encryption, you need to use the guide above to generate your VM, but you need to create a separate /boot partition and a LUKS volume for the root partition. This is explained in the NixOS manual, in the installer section. You should adapt the corresponding bits in the configuration file to match your new setup.

Don't forget you will need to connect to the hypervisor to type your password through the serial access every time you reboot.

Known issue and workaround §

There is an issue with the OpenBSD hypervisor and Linux kernels at the moment: when you reboot your Linux VM, the VM process on the OpenBSD host crashes. Fortunately, it crashes after the whole shutdown process is done, so it doesn't leave the file system in a weird state.

This problem is fixed in OpenBSD -current as of August 2022, and won't happen in OpenBSD 7.2 hypervisors that will be available by the end of the year.

A simple workaround is to open a tmux session in the hypervisor to run an infinite loop regularly checking if your VM is running, and starting it when it's stopped:

while true ; do vmctl status vm40 | grep stopped && vmctl start vm40 ; sleep 30 ; done

Mailing list archives: vmx_fault_page: uvm_fault returns 14, GPA=0xfe001818, rip=0xffffffffc0d6bb96

Mailing list archives: vmm page fault with VM upgraded from Ubuntu 18LTS to 20LTS

Conclusion §

It's great to have more choice when you need a VM. The OpenBSD Amsterdam team is very kind, professional, and regularly gives money to the OpenBSD project.

Going further §

This method should work for other hosting providers, given you can access the VM disk from a live environment (installer, rescue system, etc.). You may need to pay attention to the disk device, and if you can't obtain serial console access to your system, you need to get the network configuration right in the VM before copying it to the disk.

In the same vein, you can use this method to install any operating system supported by the hypervisor. I chose NixOS because I love this system, and it's easy to reproduce a result with its declarative paradigm.

Solving a bad ARP behavior on a Linux router

Written by Solène, on 05 August 2022.
Tags: #linux #network

Comments on Fediverse/Mastodon

Introduction §

So, I recently switched my home router to Linux but had network issues for devices that would get/renew their IP with DHCP. They were obtaining an IP, but they couldn't reach the router for a while (between 5 seconds and a few minutes), which was very annoying and unreliable.

After spending some time with tcpdump on multiple devices, I found the issue: it was related to ARP (the protocol used to discover MAC addresses and associate them with IPs).

Wikipedia page about the ARP protocol

The arp flux problem explained

My setup §

I have an unusual network setup at home as I use my ISP router as a Wi-Fi access point, switch and modem; the issue here is that there are two subnets on its switch.


      +------------------+                                +-----------------+
      | ISP MODEM        | ethernet #1         ethernet #1|                 |
      |                  |<------------------------------>|                 |
      |                  | 192.168.1.254     192.168.1.111|                 |
      |                  |                                |  linux router   |
      |                  |                                |                 |
      |                  | ethernet #2         ethernet #2|                 |
      |                  |<------------------------------>|                 |
      |                  |                    10.42.42.42 |                 |
      |                  |                                |                 |
      |                  |                                |                 |
      +------------------+                                +-----------------+
       ^ethernet #4     ^ ethernet #3
       |                |
       |                |
       |                +----> some switch with many devices
       |
       v 10.42.42.150
       NAS

Because the modem is reachable over 192.168.1.0/24 and is used by the router on that switch, while the LAN uses the same switch with 10.42.42.0/24, ARP packets arrive on two network interfaces of the router for addresses that are not routable on them (ARP packets for 10.42.42.0/24 would arrive on the 192.168.1.0/24 interface and vice versa).

Solution §

There is a simple solution, but it was very complicated to find as it's not obvious. We can configure the Linux kernel to discard ARP packets related to non-routable addresses, so the interface with a 192.168.1.0/24 address will discard packets for the 10.42.42.0/24 network and vice versa.

You need to define the sysctl net.ipv4.conf.all.arp_filter to 1.

sysctl net.ipv4.conf.all.arp_filter=1

This can also be set per interface if you have specific needs.
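
The sysctl command above doesn't persist across reboots; on most Linux distributions (this is a sketch, yours may differ) you can make it permanent with a sysctl.d drop-in file:

echo "net.ipv4.conf.all.arp_filter = 1" > /etc/sysctl.d/99-arp-filter.conf
sysctl --system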

Documentation of the sysctl available on Linux

Conclusion §

This was a very annoying issue, incredibly hard to troubleshoot. I suppose OpenBSD has this strict behavior by default because I didn't have this problem when the router was running OpenBSD.

Fair Internet bandwidth management on a network using Linux

Written by Solène, on 05 August 2022.
Tags: #linux #bandwidth #qos

Comments on Fediverse/Mastodon

Introduction §

A while ago I wrote an OpenBSD guide to fairly share the Internet bandwidth with the LAN network, and it was more or less working. Now that I switched my router to Linux, I wanted to achieve the same. Unfortunately, it's not documented as well as on OpenBSD.

The command needed for this job is "tc", an acronym for Traffic Control, the jack of all trades when it comes to manipulating network traffic. It can add delays or packet loss (this is fun when you want to simulate poor conditions), but also do traffic shaping and Quality of Service (QoS).

Wikipedia page about tc

Fortunately, tc is not that complicated for what we will achieve in this how-to (fair share) and will give results way better than what I achieved with OpenBSD!

How it works §

I don't want to explain how the whole stack involved works, but with tc we will define a queue on the interface on which we want to apply the QoS; it will create a number of flows, one per active network stream, and each active flow will receive 1/total_active_flows of the bandwidth. It means that if you have three connections downloading data (from the same computer or three different computers), they should in theory receive 1/3 of the bandwidth each. In practice, you don't get exactly that, but it's quite close.

Setup §

I made a script with variables to make it easy to reuse, it deletes any traffic control set on the interfaces and then creates the configuration. You are supposed to run it at boot.

It contains two variables, DOWNLOAD_LIMIT and UPLOAD_LIMIT, which should be approximately 95% of each maximum speed; they can be defined in bits with kbit/mbit or in bytes with kbps/mbps. The reason to use 95% is to leave the router some room for organizing the packets. It's like a "15 puzzle": you need one empty square to play it.

#!/bin/sh

TC=$(which tc)

# LAN interface on which you have NAT
LAN_IF=br0

# WAN interface which connects to the Internet
WAN_IF=eth0

# 95% of maximum download
DOWNLOAD_LIMIT=13110kbit

# 95% of maximum upload
UPLOAD_LIMIT=840kbit

# remove any existing traffic control configuration on both interfaces
$TC qdisc del dev $LAN_IF root
$TC qdisc del dev $WAN_IF root

# shape the upload: a single HTB class capped at UPLOAD_LIMIT,
# with fq_codel sharing it fairly between the active flows
$TC qdisc add dev $WAN_IF root handle 1: htb default 1
$TC class add dev $WAN_IF parent 1: classid 1:1 htb rate $UPLOAD_LIMIT
$TC qdisc add dev $WAN_IF parent 1:1 fq_codel noecn

# same for the download, shaped on the LAN interface
$TC qdisc add dev $LAN_IF root handle 1: htb default 1
$TC class add dev $LAN_IF parent 1: classid 1:1 htb rate $DOWNLOAD_LIMIT
$TC qdisc add dev $LAN_IF parent 1:1 fq_codel
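
Once the script has run, you can check that the queueing disciplines are in place and look at their statistics:

# show the qdiscs and their counters on the WAN interface
tc -s qdisc show dev eth0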

Conclusion §

tc is very effective but not really straightforward to understand. What's cool is that you can apply it on the fly without disruption.

It has been really effective for me: now if some device is downloading on the network, it doesn't affect the other devices much when they need to reach the Internet.

Credits §

After lurking on the Internet looking for documentation about tc, I finally found someone who made a clear explanation about this tool. tc is documented, but it's too abstract for me.

linux home router traffic shaping with fq_codel

Creating a NixOS live USB for a full featured APU router

Written by Solène, on 03 August 2022.
Tags: #network #security #nixos #apu

Comments on Fediverse/Mastodon

Introduction §

At home, I'm running my own router to manage Internet access, run DHCP, do filtering and caching, etc... I'm using an APU2 running OpenBSD, and it works great so far, but I was curious to know if I could manage to run NixOS on it without having to deal with the serial console and an installation.

It turned out it's possible! By configuring and creating a live NixOS USB image, one can plug the USB memory stick into the router and have an immutable NixOS.

NixOS wiki about creating a NixOS live CD/USB

Network diagram §

Here is a diagram of my network. It's really simple except for the bridge part, which requires an explanation. The APU router has 3 network interfaces and I only need 2 of them (one for WAN and one for LAN), but my switch is one port short for all the devices, so I use the extra port of the APU to connect that device to the whole LAN by bridging the two network interfaces.

                +----------------+
                |  INTERNET      |
                +----------------+
                       |
                       |
                       |
                +----------------+
                | ISP ROUTER     |
                +----------------+
                       | 192.168.1.254
                       |
                       |
                       | 192.168.1.111
                +----------------+
                |   APU ROUTER   |
                +----------------+
                |bridge #2 and #3|
                | 10.42.42.42    |
                +----------------+
                  |port #3    |
                  |           | port #2
       +----------+           |
       |                      |
       |                   +--------+     +----------+
       | 10.42.42.150      | switch |-----| Devices  |
  +--------+               +--------+     +----------+
  | NAS    |
  +--------+

Feature list §

Here is a list of services I need on my router, this doesn't include all my filtering rules and specific tweaks.

- DHCP server

- DNS resolving caching using unbound

- NAT

- SSH

- UPnP

- Munin

- Bridge Ethernet ports #2 and #3 to use #3 as an extra port, like on a switch

The whole configuration §

For the curious, here is the whole configuration of the setup. In the sections below, I'll explain each part of the code.

{ config, pkgs, ... }:
{

  isoImage.squashfsCompression = "zstd -Xcompression-level 5";

  powerManagement.cpuFreqGovernor = "ondemand";

  boot.kernelPackages = pkgs.linuxPackages_xanmod_latest;
  boot.kernelParams = [ "copytoram" ];
  boot.supportedFilesystems = pkgs.lib.mkForce [ "btrfs" "vfat" "xfs" "ntfs" "cifs" ];

  services.irqbalance.enable = true;

  networking.hostName = "kikimora";
  networking.dhcpcd.enable = false;
  networking.usePredictableInterfaceNames = false;
  networking.firewall.interfaces.eth0.allowedTCPPorts = [ 4949 ];
  networking.firewall.interfaces.br0.allowedTCPPorts = [ 53 ];
  networking.firewall.interfaces.br0.allowedUDPPorts = [ 53 ];

  security.sudo.wheelNeedsPassword = false;

  services.acpid.enable = true;
  services.openssh.enable = true;

  services.unbound = {
    enable = true;
    settings = {
      server = {
        interface = [ "127.0.0.1" "10.42.42.42" ];
        access-control =  [
          "0.0.0.0/0 refuse"
          "127.0.0.0/8 allow"
          "10.42.42.0/24 allow"
        ];
      };
    };
  };

  services.miniupnpd = {
      enable = true;
      externalInterface = "eth0";
      internalIPs = [ "br0" ];
  };

  services.munin-node = {
      enable = true;
      extraConfig = ''
      allow ^63\.12\.23\.38$
      '';
  };

  networking = {
    defaultGateway = { address = "192.168.1.254"; interface = "eth0"; };
    interfaces.eth0 = {
        ipv4.addresses = [
            { address = "192.168.1.111"; prefixLength = 24; }
        ];
    };

    interfaces.br0 = {
        ipv4.addresses = [
            { address = "10.42.42.42"; prefixLength = 24; }
        ];
    };

    bridges.br0 = {
        interfaces = [ "eth1" "eth2" ];
    };

    nat.enable = true;
    nat.externalInterface = "eth0";
    nat.internalInterfaces = [ "br0" ];
  };

  services.dhcpd4 = {
      enable = true;
      extraConfig = ''
      option subnet-mask 255.255.255.0;
      option routers 10.42.42.42;
      option domain-name-servers 10.42.42.42, 9.9.9.9;
      subnet 10.42.42.0 netmask 255.255.255.0 {
          range 10.42.42.100 10.42.42.199;
      }
      '';
      interfaces = [ "br0" ];
  };

  time.timeZone = "Europe/Paris";

  users.mutableUsers = false;
  users.users.solene.initialHashedPassword = "$6$ffffffffffffffff$TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "sudo" "wheel" ];
  };
}
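
For reference, such a configuration can be built into an ISO with a command along these lines; this is only a sketch, assuming the file is saved as router.nix and that it also imports one of the installation-cd modules from nixpkgs (which is what provides the isoImage options):

$ nix-build '<nixpkgs/nixos>' -A config.system.build.isoImage -I nixos-config=./router.nix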

Explanations §

This setup deserves some explanations with regard to each part of it.

Live USB specific §

I prefer to use zstd instead of xz for compressing the live USB image: it's way faster and the compression ratio is nearly identical to xz's.

  isoImage.squashfsCompression = "zstd -Xcompression-level 5";

There is currently an issue when trying to use a non-default kernel: ZFS support is pulled in and creates errors. By redefining the list of supported file systems, you can exclude ZFS from it.

  boot.supportedFilesystems = pkgs.lib.mkForce [ "btrfs" "vfat" "xfs" "ntfs" "cifs" ];

Kernel and system §

The CPU frequency should stay at the minimum until the router has some load to compute.

  powerManagement.cpuFreqGovernor = "ondemand";
  services.acpid.enable = true;

This makes the system use the XanMod Linux kernel, a kernel built with a set of patches reducing latency and improving performance.

XanMod project website

  boot.kernelPackages = pkgs.linuxPackages_xanmod_latest;

In order to reduce usage of the USB memory stick, all the content of the live USB is loaded into memory at boot; the memory stick can then be removed because it's no longer needed.

  boot.kernelParams = [ "copytoram" ];

The irqbalance service is useful as it assigns certain IRQs to specific CPUs instead of letting the first CPU core handle everything. This is supposed to increase performance by hitting the CPU cache more often.

  services.irqbalance.enable = true;

Network interfaces §

As my APU wasn't running Linux, I couldn't know the names of the interfaces without booting some Linux on it, attaching to the serial console and checking them. By disabling predictable interface names with this setting, the Ethernet interfaces get the simple names "eth0", "eth1" and "eth2".

  networking.usePredictableInterfaceNames = false;

Now, the most important part of the router setup, doing all the following operations:

- assign an IP for eth0 and a default gateway

- create a bridge br0 with eth1 and eth2 and assign an IP to br0

- enable NAT for br0 interface to reach the Internet through eth0

  networking = {
    defaultGateway = { address = "192.168.1.254"; interface = "eth0"; };
    interfaces.eth0 = {
        ipv4.addresses = [
            { address = "192.168.1.111"; prefixLength = 24; }
        ];
    };

    interfaces.br0 = {
        ipv4.addresses = [
            { address = "10.42.42.42"; prefixLength = 24; }
        ];
    };

    bridges.br0 = {
        interfaces = [ "eth1" "eth2" ];
    };

    nat.enable = true;
    nat.externalInterface = "eth0";
    nat.internalInterfaces = [ "br0" ];
  };

This creates a user solene with a predefined password and adds it to the wheel and sudo groups in order to use sudo. Another setting allows wheel members to run sudo without a password; this is useful for testing purposes but should be avoided on production systems. You could also add your SSH public key to ease and secure SSH access.

  users.mutableUsers = false;
  security.sudo.wheelNeedsPassword = false;
  users.users.solene.initialHashedPassword = "$6$bVPyGA3aTEMTIGaX$FYkFnOqwk8GNfeLEfppgGjZ867XxirQ19v1337.GSRdzxw7JrRi6IcpaEdeSuNTHSxIIhunter2Iy6clqB14b0";
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "sudo" "wheel" ];
  };

Networking services §

This runs a DHCP server advertising the local DNS server and the default gateway, and it defines the address range handed out to DHCP clients on our local network.

  services.dhcpd4 = {
      enable = true;
      extraConfig = ''
      option subnet-mask 255.255.255.0;
      option routers 10.42.42.42;
      option domain-name-servers 10.42.42.42, 9.9.9.9;
      subnet 10.42.42.0 netmask 255.255.255.0 {
          range 10.42.42.100 10.42.42.199;
      }
      '';
      interfaces = [ "br0" ];
  };

All systems require a name in order to work, and we don't want to use DHCP to get our own IP addresses. We also have to define a time zone.

  networking.hostName = "kikimora";
  networking.dhcpcd.enable = false;
  time.timeZone = "Europe/Paris";

This enables OpenSSH daemon listening on port 22.

  services.openssh.enable = true;

This enables the unbound service, a DNS resolver that can also do caching. We need to allow our network 10.42.42.0/24 and listen on the LAN-facing interface to make it work, and we must not forget to open ports TCP/53 and UDP/53 in the firewall. This caching is very effective on a LAN server.

  services.unbound = {
    enable = true;
    settings = {
      server = {
        interface = [ "127.0.0.1" "10.42.42.42" ];
        access-control =  [
          "0.0.0.0/0 refuse"
          "127.0.0.0/8 allow"
          "10.42.42.0/24 allow"
        ];
      };
    };
  };
  networking.firewall.interfaces.br0.allowedTCPPorts = [ 53 ];
  networking.firewall.interfaces.br0.allowedUDPPorts = [ 53 ];

This enables the miniupnpd service, which can be quite dangerous because its purpose is to allow computers on the network to create NAT forwarding rules on demand. Unfortunately, it is required to play some video games, and I don't really enjoy creating all the rules by hand for every video game requiring them.

  services.miniupnpd = {
      enable = true;
      externalInterface = "eth0";
      internalIPs = [ "br0" ];
  };

This enables the munin-node service and allows a remote server to connect to it. This service gathers various metrics and makes graphs from them. I like it because the agent running on the systems is very simple and easy to extend with plugins, and on the server side it doesn't need a lot of resources. As munin-node listens on port TCP/4949, we need to open it.

  services.munin-node = {
      enable = true;
      extraConfig = ''
      allow ^13\.17\.23\.28$
      '';
  };
  networking.firewall.interfaces.eth0.allowedTCPPorts = [ 4949 ];

Conclusion §

By building a NixOS live image using Nix, I can easily try a new configuration without modifying my router storage, but I could also use it to ssh into the live system to install NixOS without having to deal with the serial console.

How to use sshfs on OpenBSD

Written by Solène, on 23 July 2022.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

Today we will learn how to use sshfs, a program to mount a remote directory through ssh into our local file system.

But OpenBSD has a different security model than other Unix-like systems: you can't use FUSE (Filesystem in USErspace) file systems as a non-root user. And because you need to run your FUSE mount program as root, the mount point won't be reachable by other users because of permissions.

Fortunately, with the correct combination of flags, this is actually achievable.

sshfs project website

Setup §

First, as root we need to install sshfs-fuse from packages.

# pkg_add sshfs-fuse

Permissions errors when mounting with sshfs §

If we run sshfs as our user, we will get the error "fuse_mount: permission denied", so root is mandatory for running the command.

But if we run "sshfs server.local:/home /mnt" as root, we can't reach the /mnt directory with our regular user because it belongs to root:

$ ls /mnt/
ls: /mnt/: Permission denied

This confirms sshfs needs some extra flags to be used for non-root users on OpenBSD.

The solution §

As root, we will run sshfs to mount a directory from t470-wifi.local (the address of my laptop's Wi-Fi interface on my LAN) and make it available to our user with uid 1000 and gid 1000 (these are the IDs of the first user added); you can find this information about your user with the command "id". We will also use the allow_other mount option.

# sshfs -o idmap=user,allow_other,uid=1000,gid=1000 solene@t470-wifi.local:/home/solene/ /mnt

After this command, when I switch to my user whose uid and gid are 1000, I can read and write into /mnt.
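
When you are done with the remote directory, the mount point also has to be released by root:

# umount /mnt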

Credits §

This article exists because many OpenBSD users struggle using sshfs, and it's not easy to find the solution on the Internet.

OpenBSD as NAS FOSDEM talk giving an example of sshfs use

misc@openbsd.org email thread explaining why fuse mount behavior changed in 2018: https://marc.info/?l=openbsd-misc&m=153390693400573&w=2

Make nix flakes commands using the same nixpkgs as NixOS does

Written by Solène, on 20 July 2022.
Tags: #nixos #linux #nix

Comments on Fediverse/Mastodon

Introduction §

This article will explain how to make the flakes-enabled nix commands reuse the nixpkgs repository used as input to build your NixOS system. This will regularly save you time and bandwidth.

Flakes and registries §

By default, nix commands using flakes such as nix shell or nix run are pulling a tarball of the development version of nixpkgs. This is the default value set in the nix registry for nixpkgs.

$ nix registry list | grep nixpkgs
global flake:nixpkgs github:NixOS/nixpkgs/nixpkgs-unstable

Because of this, every time you use flakes you are likely to download a tarball of the nixpkgs repository including the latest commit, which is particularly annoying because the tarball is currently around 30 MB. There is a simple way to automatically set your registry so that the nixpkgs entry points to the local copy used by your NixOS configuration.

In the `flake.nix` file describing your system configuration, you should have something similar to this:

inputs.nixpkgs.url = "nixpkgs/nixos-unstable";

[...]
nixosConfigurations = {
  my-computer = lib.nixosSystem {
    specialArgs = { inherit inputs; };
    [...]
  };
};

Edit /etc/nixos/configuration.nix and make sure you have "inputs" listed in the first line, such as:

{ lib, config, pkgs, inputs, ... }:

And add the following line to the file, and then rebuild your system.

nix.registry.nixpkgs.flake = inputs.nixpkgs;

After this change, running a command such as "nix shell nixpkgs#gnumake" will reuse the same nixpkgs from your nix store as NixOS itself, instead of fetching the latest archive from GitHub.
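
You can verify the change took effect by listing the registry again: the nixpkgs entry should now point to a local store path instead of the GitHub tarball (the output below is only indicative, the store path will differ on your system):

$ nix registry list | grep nixpkgs
system flake:nixpkgs path:/nix/store/hhhhhhhhhhhh-source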

nix-shell vs nix shell §

If you started using flakes, you may wonder why there are commands named "nix-shell" and "nix shell"; they work totally differently.

nix-shell and the non-flakes commands use the nixpkgs provided in the NIX_PATH environment variable, which should be set to a directory managed by nix-channel, but channels are made obsolete by flakes...

Fortunately, in the same way we synchronized the flakes commands with the system's nixpkgs, you can add this code to make nix-shell use the system nixpkgs too:

nix.nixPath = [ "nixpkgs=${inputs.nixpkgs}" "nixos-config=/etc/nixos/configuration.nix" "/nix/var/nix/profiles/per-user/root/channels" ];

This requires your user to log out of the current session to become effective. You can then check that nix-shell and nix shell use the same nixpkgs source with this snippet: it asks for the full path of the test program named "hello" in both cases and compares the results, which should match if they use the same nixpkgs.

[ "$(nix-shell -p hello --run "which hello")" = "$(nix shell nixpkgs#hello -c which hello)" ] && echo success

Conclusion §

Flakes are awesome, and are on their way to becoming the future of Nix. I hope this article shed some light on the nix commands, and saved you some bandwidth.

Credits §

I found this information in a blog post from the company Tweag (my current employer), in a series of articles about Nix flakes. It's a bit sad that I didn't find it in the official NixOS documentation, but as flakes are still experimental, they are not really covered there.

Tweag blog: Nix Flakes, Part 3: Managing NixOS systems

As I found this information in their blog post, and I'm happy to give credit to people, here is a link to their blog post license.

Creative Commons Attribution 4.0 International license

How to account systemd services bandwidth usage on NixOS

Written by Solène, on 20 July 2022.
Tags: #nixos #bandwidth #monitoring

Comments on Fediverse/Mastodon

Introduction §

Did you ever wonder how many bytes a system service receives from the network every day? Thanks to systemd, we can easily account for this.

This guide targets NixOS, but the idea could be applied to any Linux system using systemd.

NixOS project website

In this article, we will focus on the nix-daemon service.

Setup §

We will enable the IPAccounting attribute on the systemd service nix-daemon; this makes systemd account for the bytes and packets received and sent by the service. However, when the service is stopped, the counters are reset to zero and the information is only logged into the systemd journal.

In order to efficiently gather the network information over time into a database, we will run a script just before the service stops, using the preStop service hook.

The script checks for the existence of an SQLite database at /var/lib/service-accounting/nix-daemon.sqlite, creates it if required, and then inserts the number of bytes received by the nix-daemon instance about to stop. The script uses the service attribute InvocationID and the current day to ensure that a tuple won't be recorded more than once, because if we restart the service multiple times a day, we need to distinguish all the nix-daemon instances.

Here is the code snippet to add to your `/etc/nixos/configuration.nix` file before running `nixos-rebuild test` to apply the changes.

  systemd.services.nix-daemon = {
      serviceConfig.IPAccounting = "true";
      path = with pkgs; [ sqlite busybox systemd ];
      preStop = ''
#!/bin/sh

SERVICE="nix-daemon"
DEST="/var/lib/service-accounting"
DATABASE="$DEST/$SERVICE.sqlite"

mkdir -p "$DEST"

# check if database exists
if ! dd if="$DATABASE" count=15 bs=1 2>/dev/null | grep -Ea "^SQLite format.[0-9]$" >/dev/null
then
cat <<EOF | sqlite3 "$DATABASE"
CREATE TABLE IF NOT EXISTS accounting (
        id TEXT PRIMARY KEY,
        bytes INTEGER NOT NULL,
        day DATE NOT NULL
);
EOF
fi

BYTES="$(systemctl show "$SERVICE.service" -P IPIngressBytes | grep -oE "^[0-9]+$")"
INSTANCE="'$(systemctl show "$SERVICE.service" -P InvocationID | grep -oE "^[a-f0-9]{32}$")'"

cat <<EOF | sqlite3 "$DATABASE"
INSERT OR REPLACE INTO accounting (id, bytes, day) VALUES ($INSTANCE, $BYTES, date('now'));
EOF
     '';
  };

If you want to apply this to another service, the script has a single variable SERVICE that has to be updated.

Display the information from the database §

You can use the following command to display the bandwidth usage of the nix-daemon service with a day-by-day report:

$ echo "SELECT day, sum(bytes)/1024/1024 AS Megabytes FROM accounting group by day" | sqlite3 -header -column /var/lib/service-accounting/nix-daemon.sqlite
day         Megabytes
----------  ---------
2022-07-17  173
2022-07-19  3018
2022-07-20  84

Please note this command requires the sqlite package to be installed in your environment.

Enhancement §

I have some ideas to improve the setup:

  • The script could be improved to support multiple services within the database by using a new field
  • The command to display data could be improved and turned into a system package to make it easier to use
  • Provide an SQL query for monthly summary (a rough sketch is shown right after this list)
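
As a starting point for the monthly summary, a query very close to the daily one can group by month instead of day; this is only a sketch against the same database:

$ echo "SELECT strftime('%Y-%m', day) AS month, sum(bytes)/1024/1024 AS Megabytes FROM accounting GROUP BY month" | sqlite3 -header -column /var/lib/service-accounting/nix-daemon.sqlite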

Conclusion §

Systemd services are very flexible and powerful thanks to the hooks provided to run scripts at the right time. While I was interested in network usage accounting, it's also possible to achieve a similar result with CPU usage and I/O accesses.

The Old Computer Challenge V2: done!

Written by Solène, on 19 July 2022.
Tags: #life #offline #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

The Old Computer Challenge V2 is over! What a week! It was even more than a week: it ran from the 10th to the 17th of July included, which makes 8 days.

What I've learned §

To be honest, this challenge was hard and less fun than the previous one because we couldn't communicate about our experiences. It was so hard to schedule my Internet needs over the days that I tried not to use the Internet at all, keeping some time in reserve for when I unexpectedly needed to check something.

Nevertheless, it was still a good experience to go through; it helped me realize that many small daily things required the Internet without me even paying attention anymore. Fortunately, I avoid most streaming services and my multimedia content is all local.

I spend a lot of time every day in instant messaging software; even if it works asynchronously, it often happens that someone answers within seconds, then we start to chat and time passes. This was a huge consumer of the limited daily Internet time available in the challenge.

A few other people took part in the challenge, and reading their reports was very interesting and fun.

Toward the next challenge §

Now that this second challenge is over, our community is still strong and has regained some activity. People are already thinking about the next edition, and we need to figure out what to do next. A currently popular idea would be to reduce the Internet speed to RTC levels (~5 kB/s) instead of limiting time, but we still have some time to debate the next rules.

We waited one year between the first and second challenge, but this doesn't mean we can't do this more often!

To conclude this article and challenge, I would like to give special thanks to all the people who got involved or interested in the challenge.

How to use Docker from a Linux host system to escalate to root

Written by Solène, on 19 July 2022.
Tags: #security #linux #docker

Comments on Fediverse/Mastodon

Introduction §

It's often said that Docker is not very good with regard to security; let me illustrate a simple way to get root access to your Linux system through a Docker container. This may be useful for people whose user account can run docker but whose company doesn't give them root access.

This is not a Docker vulnerability being exploited, just plain Docker working as designed. It is not a way to become root from *within* the container; you need to be able to run docker on the host system.

If you use this to break your employer's internal rules, this is your problem, not mine. I write this to raise awareness of why giving system users access to Docker could be dangerous.

UPDATE: It has been possible to run the Docker daemon as a regular user (rootless mode) since October 2021.

Run the docker daemon as a user

How to proceed §

We will start a simple Alpine Docker container and map the host root file system / onto the /mnt directory of the container.

docker run -v /:/mnt -ti alpine:latest

From there, you can use the command `chroot /mnt` to obtain a root shell of your system.

You are now free to use "passwd" to change the root password, use `visudo` to edit the sudo rules, or use the system package manager to install any extra software you want.
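
Put together, the whole escalation only takes a couple of commands; here is an illustrative session (prompts and paths will differ on your system):

$ docker run -v /:/mnt -ti alpine:latest
/ # chroot /mnt /bin/sh
# passwd root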

Some analogy §

If you don't understand why this works, here is a funny analogy. Think about being in a room as a human being, but you have a super power that allows you to imagine some environment in a box in front of you.

Now, that box (docker) has a specific feature: it permits you to take a piece of your current environment (the filesystem) to project it in the box itself. This can be useful if you want to imagine a beach environment and still have your desk in it.

Now, project your whole room (the host filesystem) into your box: you are almighty over what happens in the box, which turns out to be your own room (you are root, the super user).

Conclusion §

Users who have access to docker can escalate to root in a few seconds and megabytes.

Storing information on paper using the Pen To Paper protocol

Written by Solène, on 15 July 2022.
Tags: #life #fun

Comments on Fediverse/Mastodon

Introduction §

Here is a draft for a protocol named PTPDT, an acronym standing for Pen To Paper Data Transfer. It comes with its companion specification Paper To Brain.

The protocol describes how a pen can be used to write data on a sheet of paper. Maybe it would be better named as Brain To Paper Protocol.

Terminology §

Some words refer to specific concepts:

  • pen: a pen or pencil
  • paper: material on which pen can be used
  • writer: the author when using the pen
  • reader: the author when reading the paper
  • anoreader: anonymous reader reading the paper

Model §

The writer uses a pen on a paper in order to duplicate information from his memories into the paper.

We won't go into technical implementation details about how the pen transmits information onto the paper; we will assume some ink or equivalent is used in the process without altering the data.

Nomenclature §

When storing data with this protocol, paper should be incrementally numbered for ordered information that wouldn't fit on a single storage paper unit. The reader could then read the papers in the correct order by following the numbering.

It is advised to add markers before and after the data to delimit its boundaries. Such a mechanism can increase the reliability of extracting data from paper, or help to recover from mixed-up papers.

Encoding §

It is recommended to use a single encoding, often known as language, for a single piece of paper. Abstract art is considered a blob, and hence doesn't have any encoding.

Extracting data §

There are three ways to extract data from paper:

  1. lossless: all the information is extracted and can be used and replicated by the reader
  2. lossy: all the information is extracted and could be used by the reader
  3. partial: some pieces of information are extracted with no guarantee it can be replicated or used

In order to retrieve data from paper, the reader and anoreader must use their eyesight to pass the paper data to their brain, which will decode the information and store it internally. If the reader's brain doesn't know the encoding, the data could be lossy or only partially extracted.

It's often required to make multiple read passes to achieve a lossless extraction.

Compression §

There are different compression algorithms to increase the pen's output bandwidth; the reader and anoreader must be aware of the compression algorithm used.

Encryption §

The protocol doesn't enforce encryption. The writer can encrypt data on paper so the anoreader won't be able to read it; however, this will increase the mental load for both the writer and the reader.

Accessibility §

This protocol requires the writer to be able to use a pen.

This protocol requires the reader and anoreader to be able to see. We need to publish Braille To Paper Data Transfer for an accessible alternative.

The Old Computer Challenge V2: day 5

Written by Solène, on 14 July 2022.
Tags: #life #offline #oldcomputerchallenge

Comments on Fediverse/Mastodon

Some quick news for the Old Computer Challenge!

As it's too tedious to monitor the time spent on the Internet, I'm now using a chronometer for the day... and I stopped using the Internet in small bursts. It's also currently super hot where I live, so I don't want to do much with the computer anyway...

I can handle most of my computer needs offline. When I use the Internet, it's now for a solid 15 minutes, except when I connect from my phone to check something quickly without starting my computer; in that case I rarely need to stay connected for more than a minute.

This is a very different challenge from the previous one because we can't stay online on IRC all day talking about tricks to improve our experience with the current challenge. On the other hand, it's an opportunity to use our writing skills to tell what we are going through.

I didn't write over the last few days because there wasn't much to say. I miss having the Internet 24/7 though, and I'll be happy to get back on the computer without having to track my time and stop after the hour, which always happens too soon!

The Old Computer Challenge V2: day 2

Written by Solène, on 11 July 2022.
Tags: #life #offline #oldcomputerchallenge

Comments on Fediverse/Mastodon

Intro §

Day 2 of the Old Computer Challenge, 60 minutes of Internet per day. Yesterday I said it was easy. I changed my mind.

Internet feels natural §

I think my parents switched their Internet subscription from RTC to DSL around 2005, 17 years ago. It was a revolution for us because not only was it multiple times faster (up to 16 kB/s!), but it was unlimited in time! Since then, I have only had unlimited Internet (no time limit, no quota), and it became natural for me to expect to have Internet all the time.

Because of this, it's really hard for me to even think about tracking my Internet time. There are many devices in my home connected to the Internet and I just don't think about it when I use them; I noticed I was checking emails or XMPP on my phone because I had turned its Wi-Fi on in the morning and then forgot about it.

There is a high chance I used more than my quota yesterday because of my phone, but I also forgot to stop the time accounting script (in my defense, it had a bug preventing it from stopping correctly). And then I noticed yesterday evening that I was totally out of time: I had to plan a trip for today, which involved looking at some addresses and maps. Although I have a local OpenStreetMap database, it's rarely enough to prepare a trip when you go somewhere for the first time and you know you will be short on time to figure things out on the spot.

Internet everywhere §

Ah yes, my car also has an Internet connection with its own LTE access; I don't count it as part of the challenge because it's not really useful (I don't think I used it at all), but it's there.

And it's in my Nintendo Switch too, but it has an airplane mode to disable connectivity.

And Steam (the game library) requires being online when streaming video games locally (to play on the couch)...

So, there are many devices and pieces of software silently (well, not always silently) relying on the Internet to work, and we don't always know exactly why they need it.

Open source work §

I said I wasn't really restrained by only one hour of Internet, but that was yesterday. I didn't feel like working on open source projects that day, but today I wanted to help review package updates/changes, and I couldn't. Packaging requires a lot of bandwidth and time; it requires searching whether errors are known or new, and it just can't be done offline because it relies on many external packages that have to be downloaded, and with a DSL line it takes a lot of time to keep a system up to date with its development branch.

Of course, with some base material like the project's main repository, it's possible to contribute, but not really to review packages.

Second day review §

I will add a 30 minute penalty to my counter for not tracking my phone's Internet usage today. I still have 750 seconds of Internet left while writing this blog post (penalty included).

Yesterday I improved my blog deployment to reduce the time taken by the file synchronization process from 18s to 4s. I'm using rsync, but I have four remote servers to synchronize: 1 for HTTP, 1 for Gemini, 1 for Gopher and 1 for a Gopher backup. As the output files of my blog are regenerated every time and therefore brand new, rsync was recopying all the files just to update the modification times; now I'm using -c for checksum and -I to ignore times, and it's significantly faster and ensures the changes are copied. I insist on the changes being copied, because if you rely on size only, it will work 99% of the time, except when you fix a single letter typo that doesn't change the file size... been there.
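
For the curious, the synchronization command now looks roughly like this (the destination host and path are made up for the example):

rsync -c -I output/ my-server:/var/www/blog/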

Links to the challenge reports from others

The Old Computer Challenge V2: day 1

Written by Solène, on 10 July 2022.
Tags: #life #offline #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

Today is the beginning of the 2022 Old Computer Challenge; for a week, I am now restricted to one hour of Internet access per day.

Old Computer Challenge V2 announcement

How do I account time? §

For now, I turned off my smartphone's Wi-Fi because it would be hard to account for its time.

My main laptop is using the very nice script from our community member prahou.

The script design is smart: it accounts for time and displays the time consumed. It can be described as a state machine like this:


   +------------+                    +----------------------------+
   | wait for   |                    | Accounting time for today  |
   | input      |  Type Enter        | Internet is enabled        |
   |            |------------------->|                            |
   | Internet   |                    | display time used          |
   | offline    |                    | today                      |
   +------------+                    +----------------------------+
          ^                                         v
          |                       press ctrl+C      |
          |       (which is trapped to run a func)  |
          +-----------------------------------------+

As the way to disable / enable the Internet is specific to everyone, the script has two empty functions, NETON and NETOFF, which enable or disable Internet access. On my Linux computer I found an easy way to achieve this: adding a bogus default route with a metric of 1 that takes precedence over my real default route. Because that default route doesn't work, my system can't reach the Internet, but it leaves my LAN in a working state.
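
For illustration, here is a minimal sketch of what NETON and NETOFF could look like on Linux; it uses a blackhole route as the bogus default route, which avoids having to pick a gateway address, and it assumes the real default route has a higher metric:

# cut Internet access: the blackhole default route with metric 1 wins
NETOFF() {
    ip route add blackhole default metric 1
}

# restore Internet access by removing the blackhole route
NETON() {
    ip route del blackhole default metric 1
}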

My own version of prahou's script (I made some little changes)

How's life? §

So far, it's easy to remember that I don't have Internet all the time, and with my usage pattern it works fine. I use the script to "start" the Internet, check my emails, read IRC channels and reply, and then I disconnect. By using small amounts of time, I can handle most of my needs in less than a minute. However, that wouldn't be practical if I had to download anything big, and people with fast Internet access (= not me) would have an advantage.

My guess as to why this first day felt easy is that, as I don't use any streaming service, I don't need to be connected all the time. All my data is saved locally, and most of my communication needs can be handled asynchronously. Even publishing this blog post shouldn't consume more than 20 seconds.

Let's go for a week §

I suppose it will be easy to forget about the limited Internet time, so it's best for me to keep the accounting script running in a terminal (disabling the Internet until I manually enable it), and to think a bit ahead about whether I will need more time later so I can be more conservative about my usage.

So far, it's a great experience that I enjoy a lot. I hope the other participants will enjoy it as much as I do. We will start gathering and aggregating reports soon, so you will be able to enjoy all the reports from our community.

It's not too late to join §

Although the challenge officially started today (10th July), it's not too late to start it yourself. The important thing is to have fun; if you want to try, you could just use a chronometer and see whether you can hold out with only 60 minutes a day.

The Old Computer Challenge V2: back to RTC

Written by Solène, on 01 July 2022.
Tags: #life #offline #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

Hello! Let me start straight into the topic: The Old Computer Challenge, second edition!

Some readings if you don't know about the first Old Computer Challenge

The first edition of the challenge consisted of spending a week (during your non-work time) using an old computer; the recommended machine specifications were 1 core and 512 MB of memory at best. However, some people enjoyed doing the challenge with other specifications and requirements, and that's fine: the purpose of the challenge is to have fun.

While experimenting with the challenge last year, a small but solid community gathered on IRC; we shared tips and our feelings about the challenge, and it was very fun and a good opportunity to meet new people. One year later, the community is still there, and over the last months we regularly exchanged ideas for renewing the challenge.

I didn't want to do the same challenge again: the fun would be spoiled, and it would feel like déjà vu. I recently shared a new idea and many adopted it, so it was clear this would be the main topic of the new challenge.

The Old Computer Challenge v2 §

This new challenge will embrace the old days of RTC modems with a monthly time budget. Back then, in France at least, people had to subscribe to an ISP for a given price, and you would only be able to connect for 10, 20, 30, 40... hours a month depending on your subscription. Any extra hour was very expensive. We used the Internet as efficiently as possible because it was time limited (and very slow, 4 kB/s at best). Fun fact: the phone line was not available while the modem was connected, and we had to be careful not to forget to manually disconnect the modem after use, otherwise it would stay connected, wasting the precious Internet time (and running up expensive bills)!

The new challenge rules are easy: you are allowed to _connect_ your computer to the Internet for a maximum cumulated time of 1h per day, from the 10th to the 17th of July included. This means you can connect six times for ten minutes, twice for thirty minutes, or once for one hour during the day.

Remember, the challenge is about having fun and helping you step back from your computer habits; it's also recommended to share your thoughts and feelings a few times over the challenge week on your usual media. There is nothing to prove to anyone: if you want to cheat, or do the challenge with two or six hours a day, please do as you prefer.

The old computer challenge v2 cover

This artwork was created by our community member prahou (thanks!) and is under the CC BY-NC-ND 4.0 license; you can reuse it as-is. It features a CD because back in the RTC days, ISPs were offering CDs to connect to the Internet and subscribe from home; I remember using those as flying discs.

A page gathering the reports from all the participants

Time accounting §

While I don't have any implementation yet, here is a list of ideas to help you account for your Internet time:

  • simple but effective, use airplane mode for Wi-Fi or unplug Ethernet, and use a chronometer when you connect
  • adding/removing the default route can be easier than playing with the firewall and still allow you to use the local network
  • a script that would try a ping every minute and record successes in a file with a timestamp, making it easy to extract the information later (see the sketch right after this list)
  • some firewall rules you would trigger after a sleep 3600 command
  • define a time slot in your day for the challenge and use a cron job to manipulate the firewall to allow/block network depending on the current time
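
As an example of the ping idea, here is a tiny sketch to run from cron every minute; it appends a timestamp to a log file each minute the Internet is reachable, so counting the lines for a given day gives the minutes used (the target host and the file path are arbitrary choices):

#!/bin/sh
# append a timestamp when a single ping succeeds
if ping -c 1 -w 2 9.9.9.9 >/dev/null 2>&1
then
    date "+%Y-%m-%d %H:%M" >> "$HOME/internet-time.log"
fi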

prahou's shell script counting time and enabling/disabling the Internet; you need to modify NETOFF and NETON to adapt it to your operating system

Frequently asked questions §

Does it apply on work time? §

No.

Can I have an exemption? §

If you really need to use the Internet for something, it's up to you. Don't make your life unbearable for a week because of the challenge.

Does it apply to 1h/day per device? §

No, it's 1h cumulated for all your devices, including smartphones.

Where is the community? §

We are reachable on #old-computer-challenge IRC channel on the Libera.chat network

Website of the libera.chat network and instructions how to connect

However, during the challenge I expect the channel to be quiet because people will be limited to 1h a day.

How I would sell OpenBSD as a salesperson

Written by Solène, on 22 June 2022.
Tags: #openbsd #opensource #business

Comments on Fediverse/Mastodon

Introduction §

Let's have fun today. I always wondered how I would sell OpenBSD licences to customers if I were a salesperson.

This text is pure fiction and fun. The OpenBSD project is free of charge and under a libre software licence.

Website of The OpenBSD Project

Killer features §

When selling a product, it's always important to talk about the killer features, what makes a product a good one and why it would solve the customer problems.

Learn once §

If you were to use OpenBSD, you certainly would have a slight learning curve, but then the system is so stable over time that the acquired knowledge would be reused from release to release. Most base tools in OpenBSD are evolving while keeping compatibility with regard to how you administrate them.

Can we say the same of the Linux ecosystem, which changes its sound and init system every 5 years? Can we say the same of Windows, which revisits most of its interface at every new release?

Learning OpenBSD is a good investment that will save you time later, so you can use your computer without frustration.

Secure by default §

OpenBSD comes with strong security defaults: you don't have to tweak anything, the developers did it for you! You can confidently use your OpenBSD computer, and you will be safe from all the bad actors targeting mainstream systems.

Even better, OpenBSD takes care of your privacy: it doesn't run any telemetry, doesn't record what you type, doesn't upload any data. The team took care of disabling the microphone and webcam by faking their input streams with empty data until you explicitly allow one or the other to record audio/video.

Community driven §

Because you certainly don't want to suffer from big IT actors' decisions affecting your favorite OS, OpenBSD is community driven and takes care not to be infected by big tech agendas. The system is made for the developers, by the developers, and you can use it as a customer! Doesn't it feel great to know the authors use their own software?

No obsolescence / eco-friendly §

Rest assured that your brand-new computer will still be able to run OpenBSD in 20 years. The team takes special care to keep compatibility for older hardware until it becomes too hard to find spare components. That's almost a lifetime of system upgrades for your hardware! Are the competitors still supporting Sparc64 and 32-bit PowerPC for a modern computing experience? I don't think so! The installer is still available on floppy disk, I think that says it all!

Very low maintenance §

As OpenBSD is designed to be highly resilient and so simple that it can't break, rest assured you won't waste time fixing problems on your system. With a FREE major update every six months and regular security updates, your system stays bulletproof with no more maintenance from you than running the updates; more experienced users can even automate this using the built-in, free-of-charge task scheduler.

Licencing §

OpenBSD is perfect for people who want to become rich! Think about it: you love your OpenBSD system, and you want to make a product out of it? Perfect! The licencing allows you to make changes to OpenBSD, redistribute it, and charge people for it, and you don't even have to show a single line of your product's source code to your customers. This is a perfect licence for people who would like to build proprietary devices based on OpenBSD, a rock solid system.

Against all industry standards, should you want to improve your OpenBSD, you are allowed to make changes to it without losing the warranty coming with the licensing.

Technical support §

If you ever need help, you get free direct access to the project's mailing lists, allowing you to exchange directly with the people developing OpenBSD.

Documentation §

Don't be afraid to jump into OpenBSD from another operating system; we took care of documenting everything you will need. We are very proud of our documentation: you can even use your OpenBSD system without Internet connectivity and still read the top-notch documentation to configure your system to your needs. No more need to use a search engine to find old blog posts with outdated and inaccurate advice.

Fast to install §

You can install OpenBSD very fast by just answering a few questions about the setup. However, you should never need to install OpenBSD more than once, so most people will never even notice. Experienced users can even automate the installation to spread OpenBSD to their family without effort.

Behind the scenes §

Of course, as a good salesperson, I would have to avoid some topics because they would make the customer lose interest in OpenBSD. However, they can be turned into positives:

  • OpenBSD doesn't support Bluetooth, but you can see this as a security feature. The code was entirely removed from the kernel because Bluetooth is full of traps and could easily leak data over the air. You certainly don't want that?
  • You may think OpenBSD's slower performance could hurt your productivity, but on the contrary it's a feature that will prevent you from losing focus on what you are currently working on. Think about the Tortoise and the Hare!
  • Maybe your favorite software is proprietary and will not be provided for OpenBSD; in that case your vendor is entirely at fault, because they don't want to make their software compliant with OpenBSD's strong quality requirements and provide a working binary
  • You may have heard that some hardware won't run OpenBSD; this can happen with very niche hardware. The OpenBSD team is working hard to give you the best experience on a selection of affordable hardware with premium support.

Conclusion §

I hope you understood this was fiction; OpenBSD is free and anyone can use it. It has strengths and weaknesses; as always, it's important to use the right tool for the right job. The team would be happy to receive contributions if you want to improve OpenBSD; by doing so, you could also help me improve my pitch as a salesperson.

"Take my money" meme

Use a gamepad to control mpv video playback

Written by Solène, on 21 June 2022.
Tags: #opensource #unix

Comments on Fediverse/Mastodon

Introduction §

This is certainly not a common setup, but I have a laptop plugged into my TV through an external GPU, and it always has a gamepad connected to it. I was curious to see if I could use the gamepad to control mpv when watching videos; it turns out it's possible.

In this text, you will learn how to configure mpv so you can control it with a gamepad / game controller.

Configuration §

All the work happens in the file ~/.config/mpv/inputs.conf. As mpv uses the SDL framework, the gamepad buttons and axes get easy names. For example, forget about brand-specific button names (A, B, Y, square, triangle, etc.) and welcome generic names such as action UP, action DOWN, etc.

Here is my own configuration file, comments included:

# left and right (dpad or left stick axis) will move time by 30 seconds increment
GAMEPAD_DPAD_RIGHT seek +30
GAMEPAD_DPAD_LEFT seek -30

# using up/down will move to next/previous chapter if the video supports it
GAMEPAD_DPAD_UP add chapter 1
GAMEPAD_DPAD_DOWN add chapter -1

# button down will pause or resume playback, the "cycle" keyword means there are different states (pause/resume)
GAMEPAD_ACTION_DOWN cycle pause

# button up will switch between windowed or fullscreen
GAMEPAD_ACTION_UP cycle fullscreen

# right trigger will increase playback speed by 20% every time it's pressed
# left trigger resets playback speed
GAMEPAD_RIGHT_TRIGGER multiply speed 1.2
GAMEPAD_LEFT_TRIGGER set speed 1.0

You can find the list of actions in the mpv man page, or by looking at the sample inputs.conf that should be provided with the mpv package.

Run mpv §

By default, mpv won't look for gamepad inputs; you need to add the --input-gamepad=yes parameter when you run mpv, or add "input-gamepad=yes" on its own line in the ~/.config/mpv/mpv.conf configuration file.
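
For example, to enable the gamepad for a single run (video.mkv being whatever file you want to play):

mpv --input-gamepad=yes video.mkv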

If you press a button on the gamepad while mpv is running from a terminal, you will get some debug output showing which button was pressed, including its name; this is helpful to find the input names.

Conclusion §

Using the gamepad instead of a dedicated remote is very convenient for me, no extra expense, and it's very fun to use.

How to make a local NixOS cache server

Written by Solène, on 02 June 2022.
Tags: #nixos #unix #bandwidth

Comments on Fediverse/Mastodon

Introduction §

If, like me, you have multiple NixOS systems behind the same router, you may want to have a local shared cache to avoid downloading packages multiple times.

This can be done simply by using nginx as a reverse proxy toward the official repository and caching the results.

nix-binary-cache-proxy project I used as a base

Server side configuration §

We will declare an nginx service on the server, using the http protocol only to make the setup easier. The packages are signed, so their authenticity can't be faked. In this setup, https would mostly add privacy, which is not much of a concern on a local network for my use case.

In the following setup, the LAN cache server will be reachable at the address 10.42.42.150, and will be using the DNS resolver 10.42.42.42 every time it needs to reach the upstream server.

  services.nginx = {
    enable = true;
    appendHttpConfig = ''
      proxy_cache_path /tmp/pkgcache levels=1:2 keys_zone=cachecache:100m max_size=20g inactive=365d use_temp_path=off;
      
      # Cache only success status codes; in particular we don't want to cache 404s.
      # See https://serverfault.com/a/690258/128321
      map $status $cache_header {
        200     "public";
        302     "public";
        default "no-cache";
      }
      access_log /var/log/nginx/access.log;
    '';
    
    virtualHosts."10.42.42.150" = {
      locations."/" = {
        root = "/var/public-nix-cache";
        extraConfig = ''
          expires max;
          add_header Cache-Control $cache_header always;
          # Ask the upstream server if a file isn't available locally
          error_page 404 = @fallback;
        '';
      };
      
      extraConfig = ''
        # Using a variable for the upstream endpoint to ensure that it is
        # resolved at runtime as opposed to once when the config file is loaded
        # and then cached forever (we don't want that):
        # see https://tenzer.dk/nginx-with-dynamic-upstreams/
        # This fixes errors like
        #   nginx: [emerg] host not found in upstream "upstream.example.com"
        # when the upstream host is not reachable for a short time when
        # nginx is started.
        resolver 10.42.42.42;
        set $upstream_endpoint http://cache.nixos.org;
      '';
      
      locations."@fallback" = {
        proxyPass = "$upstream_endpoint";
        extraConfig = ''
          proxy_cache cachecache;
          proxy_cache_valid  200 302  60d;
          expires max;
          add_header Cache-Control $cache_header always;
        '';
      };
      
      # We always want to copy cache.nixos.org's nix-cache-info file,
      # and ignore our own, because `nix-push` by default generates one
      # without `Priority` field, and thus that file by default has priority
      # 50 (compared to cache.nixos.org's `Priority: 40`), which will make
      # download clients prefer `cache.nixos.org` over our binary cache.
      locations."= /nix-cache-info" = {
        # Note: This is duplicated with the `@fallback` above,
        # would be nicer if we could redirect to the @fallback instead.
        proxyPass = "$upstream_endpoint";
        extraConfig = ''
          proxy_cache cachecache;
          proxy_cache_valid  200 302  60d;
          expires max;
          add_header Cache-Control $cache_header always;
        '';
      };
    };
  };

Be careful: the default cache is located under /tmp/, but the nginx systemd service is hardened and its /tmp/ is a private temporary directory, meaning that if you restart nginx you lose the cache. I'd advise using a directory like /var/cache/nginx/ if you want your cache to persist across restarts.

Client side configuration §

Using the cache server on a system is really easy. We will point the binary cache at our new local server; the official cache is silently appended, so we don't have to list it.

  nix.binaryCaches = [ "http://10.42.42.150/" ];

Note that you have to use this on the cache server itself if you want the system to use the cache for its own needs.
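
To check the cache is really used, you can fetch any package on a client and watch the nginx access log on the cache server; this is just a quick sketch, the package itself doesn't matter:

# on a client
$ nix-shell -p hello --run hello

# on the cache server
$ tail -f /var/log/nginx/access.log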

Conclusion §

Using a local cache can save a lot of bandwidth when you have more than one computer at home (or if you extensively use nix-shell and often run the garbage collector). Since NixOS store paths are unique, we won't have any issue with a newer package version being hidden by a cached local copy, which makes the setup really easy.

Creating a NixOS thin gaming client live USB

Written by Solène, on 20 May 2022.
Tags: #nixos #gaming

Comments on Fediverse/Mastodon

Introduction §

This article covers a use case I suppose is very personal, but I love the way I solved it, so let me share the story.

I'm a gamer, mostly on computers. I have a big rig running Windows because many games still don't work well on Linux, but I also play video games on my Linux laptop. Unfortunately, my laptop only has an Intel integrated graphics card, so many games won't run well enough to be played, which is why I use an external GPU for some of them. But it's not ideal: the eGPU is big (think of it as a big shoe box) and doesn't have mouse/keyboard/USB connectors, so I've put it in another room with a screen at standing height to play while standing up, controller in hand. This doesn't solve everything, but I can play most games that run on it and support a controller.

But if I install a game on both the big rig and the laptop, I have to manually sync the saves (I buy most of my games on GOG, which doesn't have a Linux client to sync saves), which is highly boring and error-prone.

So, thanks to NixOS, I made a recipe to generate a live USB medium to play on the big rig using the data from the laptop, so the big rig acts as a thin client. The idea of booting from read-only media is very nice, because USB memory sticks are terrible if you try to install Linux on them (I tried many times, and it always quickly ended in I/O errors), and here you get exactly what you need, generated from a declarative file.

What does it solve concretely? I can play some games on my laptop anywhere on its small screen, I can also play with my eGPU at the standing desk, and now I can also play all the installed games from the big rig with a mouse, keyboard and 144 Hz screen.

What's in the live image? §

The generated ISO (USB capable) comes with a desktop environment like Xfce, the Nvidia drivers, Steam, Lutris, Minigalaxy and some other programs I like to use. I keep the list of programs minimal because I can still use nix-shell to run a program later.

For the system configuration, I declare the user "gaming" with the same uid as the user on my laptop, and use an NFS mount at boot time.

I'm not using Network Manager because I need the system to get an IP before connecting to a user account.

The code §

I'll be using flakes for this; it makes pinning so much easier.

I have two files, "flake.nix" and "iso.nix" in the same directory.

flake.nix file:

{
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";

  };

  outputs = { self, nixpkgs, ... }@inputs:
    let
      system = "x86_64-linux";

      pkgs = import nixpkgs { inherit system; config = { allowUnfree = true; }; };
      lib = nixpkgs.lib;

    in
    {

      nixosConfigurations.isoimage = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          ./iso.nix
          "${nixpkgs}/nixos/modules/installer/cd-dvd/installation-cd-base.nix"
        ];
      };

    };
}

And iso.nix file:

{ config, pkgs, ... }:
{

  # compress 6x faster than default
  # but iso is 15% bigger
  # tradeoff acceptable because we don't want to distribute
  # default is xz which is very slow
  isoImage.squashfsCompression = "zstd -Xcompression-level 6";
  
  # my azerty keyboard
  i18n.defaultLocale = "fr_FR.UTF-8";
  services.xserver.layout = "fr";
  console = {
    keyMap = "fr";
  };
  
  # xanmod kernel for better performance
  # see https://xanmod.org/
  boot.kernelPackages = pkgs.linuxPackages_xanmod;
  
  # prevent GPU to stay at 100% performance
  hardware.nvidia.powerManagement.enable = true;
  
  # sound support
  hardware.pulseaudio.enable = true;
 
  # getting IP from dhcp
  # no network manager
  networking.dhcpcd.enable = true;
  networking.hostName = "biggy"; # Define your hostname.
  networking.wireless.enable = false;

  # many programs I use are under a non-free licence
  nixpkgs.config.allowUnfree = true;

  # enable steam
  programs.steam.enable = true;

  # enable ACPI
  services.acpid.enable = true;

  # thermal CPU management
  services.thermald.enable = true;

  # enable XFCE, nvidia driver and autologin
  services.xserver.desktopManager.xfce.enable = true;
  services.xserver.displayManager.lightdm.autoLogin.timeout = 10;
  services.xserver.displayManager.lightdm.enable = true;
  services.xserver.enable = true;
  services.xserver.libinput.enable = true;
  services.xserver.videoDrivers = [ "nvidia" ];
  services.xserver.xkbOptions = "eurosign:e";

  time.timeZone = "Europe/Paris";

  # declare the gaming user and its fixed password
  users.mutableUsers = false;
  users.users.gaming.initialHashedPassword = "$6$bVayIA6aEVMCIGaX$FYkalbiet783049zEfpugGjZ167XxirQ19vk63t.GSRjzxw74rRi6IcpyEdeSuNTHSxi3q1xsaZkzy6clqBU4b0";
  users.users.gaming = {
    isNormalUser = true;
    shell = pkgs.fish;
    uid = 1001;
    extraGroups = [ "networkmanager" "video" ];
  };
  services.xserver.displayManager.autoLogin = {
    enable = true;
    user = "gaming";
  };

  # mount the NFS before login
  systemd.services.mount-gaming = {
    path = with pkgs; [ nfs-utils ];
    serviceConfig.Type = "oneshot";
    script = ''
      mount.nfs -o fsc,nfsvers=4.2,wsize=1048576,rsize=1048576,async,noatime t470-eth.local:/home/jeux/ /home/jeux/
    '';
    before = [ "display-manager.service" ];
    wantedBy = [ "display-manager.service" ];
    after = [ "network-online.target" ];
  };

  # useful packages
  environment.systemPackages = with pkgs; [
    bwm_ng
    chiaki
    dunst # for notify-send required in Dead Cells
    file
    fzf
    kakoune
    libstrangle
    lutris
    mangohud
    minigalaxy
    ncdu
    nfs-utils
    steam
    steam-run
    tmux
    unzip
    vlc
    xorg.libXcursor
    zip
  ];

}

Then I can update the sources using "nix flake lock --update-input nixpkgs"; it will tell you the date of the nixpkgs repository snapshot you are using, so you can compare dates when updating. I recommend keeping track of these files with a program like git: if you see a build failure with a more recent nixpkgs after the lock update, you can have fun pinpointing the issue and reporting it, or restore the lock file to the previous version and keep building ISOs.

You can build the ISO with the command "nix build .#nixosConfigurations.isoimage.config.system.build.isoImage"; this will create a symlink "result" in the directory, containing the ISO that you can burn to a disc or copy to a memory stick using dd.
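
Put together, a typical update-and-rebuild session looks like this (a minimal sketch: the /dev/sdX target is a placeholder for your USB stick and the exact ISO file name under result/iso/ may differ):

nix flake lock --update-input nixpkgs
nix build .#nixosConfigurations.isoimage.config.system.build.isoImage

# copy the image to the memory stick, this erases its content
sudo dd if=result/iso/*.iso of=/dev/sdX bs=4M status=progress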

Server side §

Of course, because I'm using NFS to share the data, I need to configure my laptop to serve the files over NFS. This is easy to achieve, just add the following code to your "configuration.nix" file and rebuild the system:

services.nfs.server.enable = true;
services.nfs.server.exports = ''
  /home/gaming 10.42.42.141(rw,nohide,insecure,no_subtree_check)
'';

If like me you are using the firewall, I'd recommend opening the NFS 4.2 port (TCP/2049) on the Ethernet interface only:

networking.firewall.enable = true;
networking.firewall.allowedTCPPorts = [ ];
networking.firewall.allowedUDPPorts = [ ];
networking.firewall.interfaces.enp0s31f6.allowedTCPPorts = [ 2049 ];

In this case, you can see my NFS client is 10.42.42.141, and earlier the NFS server was referred to as t470-eth.local, a name I declare in my LAN unbound DNS server.

You could make a specialisation for the NFS server part, so it would only be enabled when you choose this option at boot.

NFS performance improvement §

If you have a few GB of spare memory on the gaming computer, you can enable cachefilesd, a service that will cache some NFS accesses to make the experience even smoother. You need memory because the cache will have to be stored in the tmpfs and it needs a few gigabytes to be useful.

If you want to enable it, just add the following code to the iso.nix file; it will create a 10 MB * 600 (roughly 6 GB) cache file. As tmpfs lacks the user_xattr mount option, we need to create a raw disk image on the tmpfs root partition, format it with ext4, then mount it on the fscache directory used by cachefilesd.

services.cachefilesd.enable = true;
services.cachefilesd.extraConfig = ''
  brun 6%
  bcull 3%
  bstop 1%
  frun 6%
  fcull 3%
  fstop 1%
'';

# hints from http://www.indimon.co.uk/2016/cachefilesd-on-tmpfs/
systemd.services.tmpfs-cache = {
  path = with pkgs; [ e2fsprogs busybox ];
  serviceConfig.Type = "oneshot";
  script = '' 
    if [ ! -f /disk0 ]; then 
      dd if=/dev/zero of=/disk0 bs=10M count=600 
      echo 'y' | mkfs.ext4 /disk0 
    fi 
    mkdir -p /var/cache/fscache 
    mount | grep fscache || mount /disk0 /var/cache/fscache -t ext4 -o loop,user_xattr 
  '';
  before = [ "cachefilesd.service" ];
  wantedBy = [ "cachefilesd.service" ];
};

Security consideration §

Opening an NFS server on the network should only be done on a trusted LAN. I don't consider my gaming account to contain any important secret, but it would still be bad if someone on the LAN mounted the share and deleted all the files.

However, there are a few NFS alternatives that could be used:

  • using sshfs with an SSH key that you carry on another media; it's tedious for a local LAN, but I was surprised to see that sshfs performance was nearly as good as NFS (see the sketch after this list)!
  • using sshfs with a password; you could open SSH to the LAN only, which would make the security acceptable in my opinion
  • using WireGuard to establish a VPN between the client and the server and running NFS on top of it, but the tunnel's secret would live on the USB memory stick, so better not have it stolen
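
As an illustration of the sshfs option, the client side boils down to a single mount command; this is a rough sketch reusing the host name and paths from the NFS mount above, and the remote user name is an assumption:

# mount the laptop's game data over SSH instead of NFS
sshfs -o reconnect,ServerAliveInterval=15 gaming@t470-eth.local:/home/jeux/ /home/jeux/

# unmount when done (fusermount -u on fuse2 systems)
fusermount3 -u /home/jeux/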

Size optimization §

The generated ISO can be reduced in size by removing some packages.

Gnome §

For example, Gnome comes with orca, which brings many dependencies for text-to-speech. You can easily exclude many Gnome packages:

environment.gnome.excludePackages = with pkgs.gnome; [
  pkgs.orca
  epiphany
  yelp
  totem
  gnome-weather
  gnome-calendar
  gnome-contacts
  gnome-logs
  gnome-maps
  gnome-music
  pkgs.gnome-photos
];

Wine §

I found that Wine comes with the mingw Windows compiler as a dependency, yet it doesn't seem useful for running games in Lutris.

NixOS discourse: Wine installing mingw32 compiler?

It's possible to rebuild the Wine used by Lutris without support for the mingw compiler; replace the lutris line in the "systemPackages" list with the following code:

(lutris-free.override {
  lutris-unwrapped = lutris-unwrapped.override {
    wine = wineWowPackages.staging.override {
      mingwSupport = false;
    };
  };
})

Note that I'm using lutris-free, which doesn't support Steam, because it makes things a bit lighter and I don't need to manage my Steam games with Lutris.

Possible improvements §

It could be possible to fetch packages from the nix-store of the NFS server before trying cache.nixos.org, which would reduce bandwidth usage; it should be easy to achieve, but I still need to try it in this context.

Issue §

I found Steam games running with Proton are slow to start. I made a bug report on the Steam Linux client github.

Github: Proton games takes around 5 minutes to start from a network share

This can be partially solved by mounting ~/.local/share/Steam/steamapps/common/SteamLinuxRuntime_soldier/var as tmpfs, it will use less than 650 MB.
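
A minimal sketch of such a mount, assuming the "gaming" user from the configuration above (uid 1001; the "users" group gid 100 and the 700 MB size are guesses, the latter picked to stay above the observed usage):

# run as root; adapt the home directory path to your setup
mount -t tmpfs -o size=700m,uid=1001,gid=100,mode=0755 tmpfs \
  /home/gaming/.local/share/Steam/steamapps/common/SteamLinuxRuntime_soldier/var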

Conclusion §

I really love this setup: I can back up my games and saves from the laptop, play on the laptop, and now extend all of this with a bigger and more comfortable setup. The live image doesn't take long to copy to a USB memory stick, so if one stick turns out defective, I can simply recopy the image. The live media can be booted entirely into memory and then unplugged; this gives a crazy fast, responsive desktop that can't be altered.

My previous attempts at installing Linux on a USB memory stick all gave bad results: it was extremely slow, and I/O errors were common enough that the system became unusable after a few hours. I could add a small partition to one disk of the big rig or add a new disk, but this would increase the maintenance of a system that doesn't do much.

Using a game engine to write a graphical interface to the OpenBSD package manager

Written by Solène, on 05 May 2022.
Tags: #openbsd #godot #opensource

Comments on Fediverse/Mastodon

Introduction §

I'm trying hard to lower the entry barrier to OpenBSD; I realize most of my efforts go toward making OpenBSD easier.

One thing I often grumbled about on OpenBSD was the lack of a user interface to browse and install packages; there was a console program named pkg_mgr, but I never got it to work. Of course, I'm totally able to install packages using the command line, but I like to stroll around looking for packages I don't know about, and a GUI is perfect for that; it's also useful for people less comfortable with the command line.

So, today, I made a graphical user interface (GUI) on OpenBSD, using a game engine. Don't worry, all the package operations are delegated to pkg_add and pkg_delete because they do their job just fine.

OpenBSD AppManager project website

AppManager main menu

AppManager giving a summary of changes

What is it doing? §

The purpose of this program is simple: display the list of available packages, highlight in yellow the ones already installed on your system, and let you select new packages to install or installed packages to remove.

It features a search input instead of displaying a blunt list of more than ten thousand entries. Development was done on my ThinkPad T400 (Core 2 Duo), and performance is excellent.

One simple feature I'm proud of is the automatic classification of packages into three categories: GUI programs, terminal/console user interface programs, and others. While this is not perfect because we don't have this metadata anywhere, I reuse the dependency information to guess which category each package belongs to, and so far it's giving great results.

About the engine §

I rarely write GUI applications because it's often very tedious and gives poor results, so the time/result ratio is very bad. I've been playing with the Godot game engine for a week now, and I was astonished when I was told the engine editor is made with the engine itself. As it was blazing fast and easy to make small games with, I wondered if it would be suitable for a simple program like a package manager interface.

The first thing I checked was whether it supported sqlite or JSON data natively without much work. This was important, as the data used to query the package list originally comes from a sqlite database provided by the sqlports package; however, sqlite support was only available through 3rd party code while JSON was natively supported. When writing the simple script converting data from the sqlite database into JSON, I took the opportunity to add the logic determining whether a package is a GUI or a TUI (Terminal UI) program, and to make the data format very easy to reuse.

Finally, I got a proof of concept within 2 hours; it was able to install packages from a list. Then I added support for displaying already installed packages, and then for deleting packages. Polishing the interface took the most time, but the whole project didn't take more than 8 hours, which is unbelievable for me.

Conclusion §

From today on, I'll seriously consider using Godot for writing GUI applications. Did I mention it's cross-platform? AppManager can run on Linux or Windows (given you have pkg.json), except it will just fail at installing packages; the whole UI works though.

Thinking about it, it could be easy to reuse it for another package manager.

Managing OpenBSD installed packages declaratively

Written by Solène, on 05 May 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

I wrote a simple utility to manage OpenBSD packages on a system in a declarative way.

pkgset git repository

Instead of running many pkg_add or pkg_delete commands to manage my packages, I can now use a configuration file (which allows includes) to define which packages should be installed; installed packages that are not listed get removed.

After using NixOS for so long, managing packages this way has become a must-have for me.

How does it work? §

pkgset works by marking extra packages as "auto installed" (the opposite of manually installed, see pkg_info -m), and by installing missing packages. After those steps, pkgset runs "pkg_delete -a" to remove the unused packages (the ones marked as auto installed) if they are not a dependency of another required package.
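
To make the idea more concrete, here is a rough shell sketch of that logic. This is not the actual pkgset code, the name matching is naive, and the pkg_add -aa flag used to mark a package as automatically installed is assumed from the pkg_add(1) manual:

#!/bin/sh
LIST=/etc/pkgset.conf

# mark manually installed packages that are not listed as "auto installed"
for pkg in $(pkg_info -mz); do
    grep -qx "$pkg" "$LIST" || pkg_add -aa "$pkg"
done

# install the listed packages that are missing
pkg_add $(grep -v '^#' "$LIST")

# remove auto installed packages nothing depends on
pkg_delete -a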

How to install? §

The installation is easy: download the sources and run make install as root; it will install pkgset and its man page on your system.

$ git clone https://tildegit.org/solene/pkgset.git
$ cd pkgset
$ doas make install

Configuration file example §

Here is the /etc/pkgset.conf file on my laptop.

borgbackup--%1.2
bwm-ng
fish
fzf
git
git-annex
gnupg
godot
kakoune
musikcube
ncdu
rlwrap
sbcl
vim--no_x11
vlc
xclip
xfce
xfce-extras
yacreader

Limitations §

The only "issue" with pkgset is that for some packages that "pkg_add" may find ambiguous due to multiples versions or favors available without a default one, you must define the exact package version/flavor you want to install.

Risks §

If used incorrectly, running pkgset doesn't carry more risk than losing some or all of your installed packages.

Why not use pkg_add -l ? §

I know pkg_add has an option to install packages from a list, but it won't remove the extra packages. I may look at adding the pkgset feature to pkg_add one day, maybe.

How to contribute to the OpenBSD project

Written by Solène, on 03 May 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

Intro §

You like OpenBSD? Then, I'm quite sure you can contribute to it! Let me explain the many ways your skills can be used to improve the project and contribute back.

Official FAQ section about how to support the Project

Contributing to OpenBSD §

I proposed to update the official FAQ with this content, but it has been dismissed, so I'm posting it here as I'm convinced it's valuable.

Writing and reviewing code §

Programmers who enjoy writing operating systems are naturally always welcome. The team would appreciate your skills on the base system, kernel, userland.

How to create a diff to share a change with others

There is also room for volunteers willing to help with packaging and keeping software up to date in the ports tree.

The porter guide

Use the development version §

Switch your systems to the -current branch and report system or package regressions. With more users testing the development version, the releases are more likely to be bug free. Why not join the effort?

What is -current, how to use it

It's also important to use the packages regularly on the development branch to report any issue.

FAQ guide to testing packages

Try OpenBSD on as much hardware as you can, and send a bug report if you find incompatibilities or regressions.

How to write a useful bug report

Supported hardware platform

Documentation §

Help maintain documentation by submitting new FAQ material to the misc@openbsd.org mailing list.

Challenging the documentation accuracy and relevance on a regular basis is a good way to contribute for everyone.

Community §

Follow the mailing lists, you may be able to help answer questions from other users. This is also a good opportunity to proofread submitted changes proposed by others or to try those and report how it works for you.

The OpenBSD mailing lists

Form or join a local group and get your friends hooked on OpenBSD.

List of OpenBSD user groups

Spread the word on social networks, show the project under a good light, share your experiences and your use cases. OpenBSD is definitely not a niche operating system anymore.

Make a case to your employer for using OpenBSD at work. If you're a student, talk to your professors about using OpenBSD as a learning tool for Computer Science or Engineering courses.

Donate money or hardware §

The project has a constant need for cash to pay for equipment, network connectivity, etc. Even small donations make a profound difference, donating money or hardware is important.

Donating money

Donate equipment and parts (wishlist)

Blog post: just having fun making games

Written by Solène, on 29 April 2022.
Tags: #gaming #godot #life

Comments on Fediverse/Mastodon

Hi! Just a short blog entry about making games.

I've been enjoying learning how to use a game engine for three days now. I also published my last two games on itch.io, a platform for independent video games. I'm experimenting a lot with various ideas; each new game must be different from the previous one, to try new mechanics, new features and new gameplay.

It's absolutely refreshing to have a tool in hand that lets me create interactive content, it's really fantastic. I wish I had studied this earlier.

Despite my games being very short and simplistic, I'm quite proud of the accomplished work. If someone in the world had fun with them even for 20 seconds, this is a win for me.

My profile on itch.io (for potential future game publications)

Writing my first OpenBSD game using Godot

Written by Solène, on 28 April 2022.
Tags: #gaming #openbsd #godot

Comments on Fediverse/Mastodon

Introduction §

I'm a huge fan of video games but never really thought about writing one. Well, it crossed my mind a few times, but I don't know anything about writing GUI software or using OpenGL. Then, a few days ago, I discovered the open source game engine Godot.

This game engine is a full-featured tool for easily writing 2D or 3D games that are portable to Android, Mac, Windows, Linux, HTML5 (using WebAssembly) and any operating system where the Godot engine is available, like OpenBSD.

Godot engine project website

Learning §

Godot offers a GUI to write games, and the GUI itself is a Godot game; it's full-featured and comes with a code editor, documentation, 2D/3D views, animations, tile set management, and much more.

The documentation is well written: it gives an introduction to the concepts, and then simply teaches you how to write a simple 2D game! It only took me a couple of hours to get the grasp of it and start creating my very first game.

Godot documentation

I had no experience in writing games, only programming experience. The documentation is excellent and gives simple examples that can easily be reused thanks to the way Godot is designed. The forums are also a good place to find solutions to common problems.

Demo §

I wrote a simple game, OpenBSD themed, more precisely themed around the 6.8 release, whose artwork is dedicated to the movie "Hackers". It took me something like 8 hours to write; that's long, but I didn't see time passing at all, and I learned a lot. I have a very interesting game in mind, but I need to learn a lot more before I can make it, so starting with simple games is good training for me.

It's easy to play and fun (I hope so), give it a try!

Play it on the web browser

Play it on Linux

Play it on Windows

If you wish to play on OpenBSD or any other operating system where Godot is available, download the Linux binary and run "godot --main-pack puffy-bubble.x86_64", and enjoy.

I chose a neon style to fit the theme; it's certainly not to everyone's taste :)

A screenshot of the game, displaying a simple maze in the neon style, a Puffy mascot, the text "Hack the planet" and a bubble on the top of the maze.

Routing a specific user on a specific network interface on Linux

Written by Solène, on 23 April 2022.
Tags: #linux #networking #security

Comments on Fediverse/Mastodon

Introduction §

I have a special network need on Linux: a single user must go through a specific VPN tunnel. This can't be done by using a different metric for the VPN or by telling the program to bind to a specific interface.

How does it work §

The setup is easy once you find out how to proceed on Linux: we define a new routing table named 42 and add a rule assigning the user with uid 1002 to this routing table. It's important to declare the VPN default route in the exact same table for this to work.

#!/bin/sh

REMOTEGW=YOUR_VPN_REMOTE_GATEWAY_IP
LOCALIP=YOUR_VPN_LOCAL_IP
INTERFACE=tun0

ip route add table 42 $REMOTEGW dev $INTERFACE
ip route add table 42 default via $REMOTEGW dev $INTERFACE src $LOCALIP
ip rule add pref 500 uidrange 1002-1002 lookup 42
ip rule add from $LOCALIP table 42
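
To check the result, you can display the dedicated table and the rules, and compare the public address seen by the routed user; the uid 1002 comes from the rule above, while curl and ifconfig.me are only examples:

ip route show table 42
ip rule show

# this should show the VPN exit address for uid 1002 only
sudo -u '#1002' curl https://ifconfig.me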

Conclusion §

It's quite complicated to achieve this on Linux because there are many possible approaches, like netns (network namespaces), iptables or VRF, but the routing solution is quite elegant, and the documentation is never obvious for this use case.

I'd like to thank @loweel@bbs.keinpfusch.net from the Fediverse for giving me the first bits about ip rules and using a different route table.

Video guide to install OpenBSD 7.1 with the GNOME desktop

Written by Solène, on 23 April 2022.
Tags: #how-to #openbsd #video #gnome

Comments on Fediverse/Mastodon

Introduction §

I recently asked the community if they would like a video tutorial about installing OpenBSD; many people answered yes, so here it is! I hope you will enjoy it. I'm quite happy with the result, even though I'm not a fan of watching video tutorials myself.

The links §

The videos are published on Peertube, but you are free to reupload them on YouTube if you want to, the licence permits it. I won't publish on YouTube because I don't want to feed this platform.

The English video has Italian subtitles that have been provided by a fellow reader.

[English] Guide to install OpenBSD 7.1 with the GNOME desktop

[French] Guide vidéo d'installation d'OpenBSD de A à Z avec l'environnement GNOME

Why not use a VM? §

I really wanted to use real hardware (an IBM ThinkPad T400 with an old Core 2 Duo) instead of a virtual machine because it feels a lot more real (wow :D) and has real-world quirks, like firmware handling, that you would avoid in a VM.

Youtube Links §

If you prefer YouTube, someone republished the video on this Google proprietary platform.

[YOUTUBE] [English] Guide to install OpenBSD 7.1 with the GNOME desktop

[YOUTUBE] [French] Guide vidéo d'installation d'OpenBSD de A à Z avec l'environnement GNOME

Making-of §

I rarely make videos, and it was the first time I created this kind of content, so I wanted to share how I made it, because the process was very amateurish and weird :D

My first setup, trying to record the screen of a laptop using another laptop and a USB camera; it didn't work well

My second setup, with a GoPro camera more or less correctly aligned with the laptop screen

The first part, on Linux, was recorded locally with ffmpeg on the T400 computer; the rest was recorded with the GoPro camera. I applied a few filters with the Shotcut video editing software to flatten the picture (the GoPro lens distortion is crazy).

I spent around 8 hours creating the video; most of the time went into editing, blurring my Wi-Fi password and adjusting the speed of the sequences. Once the video was done, I recorded my audio commentary (using a USB Rode microphone) while watching it, once in English and once in French, then used Shotcut again to sync the audio with the video and merge them together.

Reduce httpd web server bandwidth usage by serving compressed files

Written by Solène, on 22 April 2022.
Tags: #openbsd #selfhosting

Comments on Fediverse/Mastodon

Introduction §

When reaching a website, most web browsers send a header (some metadata about the request) informing the web server that they support compressed content. In OpenBSD 7.1, the httpd web server received a new feature allowing it to serve a pre-compressed version of a requested file if the web browser supports compression. The benefit is bandwidth usage reduced by 2x to 10x depending on the file content; this is particularly interesting for people who self-host and for high-traffic websites.

Configuration §

In your httpd.conf, add the "gzip-static" keyword in a server block, save the file and reload the httpd service.

A simple server block would look like this:

server "perso.pw" {
        root "/htdocs/solene"
        listen on * port 80
        gzip-static
}

Creating the files §

In addition to this change, I added a new flag to the gzip command to easily compress files while keeping the original files. Run "gzip -k" on the files you want to serve compressed when the clients support the feature.

It's best to compress text files, HTML, JS and CSS being the most common ones. Compressing binary files like archives, pictures, audio or video files won't provide any benefit.
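
For a static website, something like the following pre-compresses all the text assets at once (the path matches the example server block above; -f overwrites .gz files left over from a previous run):

find /var/www/htdocs/solene -type f \
    \( -name '*.html' -o -name '*.css' -o -name '*.js' -o -name '*.xml' \) \
    -exec gzip -kf {} +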

How does it work? §

When the client connects to the httpd server requesting "foobar.html", if gzip-static is used for this location/server, httpd will look for a file named "foobar.html.gz" that is not older than "foobar.html". When found, "foobar.html.gz" is transparently transferred to the client requesting "foobar.html".

Take care to regenerate the gz files when you update the original files; remember that the gz files must be newer in order to be used.

Conclusion §

For me, this is a major milestone for using httpd for self-hosting and static websites. We battle-tested this change with the webzine server, which regularly gets linked from big news websites, leading to many people visiting in a short time span; it drastically reduced the bandwidth usage of the server, allowing it to serve more clients per second.

OpenBSD 7.1: fan noise and high temperature solution

Written by Solène, on 21 April 2022.
Tags: #openbsd #obsdfreqd #openbsd71

Comments on Fediverse/Mastodon

Introduction §

OpenBSD 7.1 has been released with a change that sets the CPU to maximum speed when plugged into the wall. This brings better performance and entirely lets the CPU and mainboard handle frequency throttling.

However, it may not throttle well for some users, resulting in huge power usage even when idle, heat from the CPU and fan noise.

As the usual "automatic" frequency scheduling mode is no longer available when connected to powergrid, I wrote a simple utility to manage the frequency when the system is plugged to the wall, I took the opportunity to improve it, giving better performance than the previous automatic mode, but also giving more battery life when using on a laptop on battery.

obsdfreqd project page

Installation §

The project README and man page explain how to install it, but here are the instructions. It's important to remove the automatic mode from apmd, which would otherwise conflict with obsdfreqd; apmd can be kept for its ability to run commands on resume/suspend etc...

doas pkg_add git
cd /tmp/ && git clone https://tildegit.org/solene/obsdfreqd.git
cd obsdfreqd
make
doas make install
rcctl ls on | grep ^apmd && doas rcctl set apmd flags -L && doas rcctl restart apmd
doas rcctl enable obsdfreqd
doas rcctl start obsdfreqd

Configuration §

No configuration is required: it works out of the box, with a battery saving profile when on battery and a performance profile when connected to power.

If you feel adventurous, the obsdfreqd man page describes all the available parameters, should you want to tailor a specific profile for yourself.

Note that obsdfreqd can target a specific temperature limit using the -T parameter; see the man page for explanations.

FAQ §

Using the sysctl hw.perfpolicy="auto" won't help: the kernel code entirely bypasses frequency management when the system is not running on battery.

sched_bsd.c line shipped in OpenBSD 7.1

Using apmd -A doesn't solve the issue either, because apmd simply sets the sysctl hw.perfpolicy to auto, which, as explained above, keeps the frequency at full speed when not on battery.
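
If you want to see what the system is currently doing, the relevant sysctls can be displayed like this (obsdfreqd presumably drives the frequency through hw.setperf; this is only a way to observe the current state):

sysctl hw.perfpolicy hw.setperf hw.cpuspeed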

Operating systems battle: OpenBSD vs NixOS

Written by Solène, on 18 April 2022.
Tags: #openbsd #nixos #life #opensource

Comments on Fediverse/Mastodon

Introduction §

While I'm an OpenBSD contributor, I also enjoy using Linux, especially the NixOS distribution, which I consider a system apart from the other Linux distributions because of how different it is. Because I use both, I have two SSDs in my laptop, each with one system installed, and I can jump from one to the other depending on the task at hand or which one I feel like using.

My main system, the one with all my data, is OpenBSD. Unfortunately, the lack of a good, interoperable file system between NixOS and OpenBSD makes it difficult to share data between them without using network storage offering a protocol they have in common.

OpenBSD and NixOS §

Let me quickly introduce the two operating systems if you don't know them.

OpenBSD is a 25+ year old fork of NetBSD; it's full of history and a solid system, and it's also where OpenSSH and tmux are developed. It's a BSD system with its own kernel and drivers; it's not related to Linux but shares most of the well-known open source programs you can find on Linux, provided as packages (programs such as GIMP, LibreOffice, Firefox, Chromium etc...). The whole OpenBSD system (kernel, drivers, userland and packages) is managed by a team of approximately 150 people (not counting people who send updates without having commit access).

The OpenBSD project website

NixOS will soon be a 20 year old Linux distribution based on the nix package manager. It offers a new approach to system management, based on reproducible builds and declarative configuration: basically, you define how your computer should be configured (packages, services, hostname, users etc..) in a configuration file and "build" the system so it configures itself; if you share this configuration file on another computer, you should be able to reproduce the exact same system. Packages are not installed in a standard file hierarchy; instead, each package's files are stored in a dedicated directory, and user profiles are made of symbolic links and many environment variables so programs can find their libraries and dependencies. For example, the path to Firefox may look something like /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1/bin/firefox.

The NixOS project website

NixOS wiki: How Nix works

Performance §

OpenBSD lacks hardware acceleration for encoding/decoding video, which makes it a lot slower when working with videos.

Interactive desktop usage and I/O also feel slower on OpenBSD; on the other hand, the Linux kernel used in NixOS benefits from many people working full time on improving its performance, and we have to admit the efforts pay off.

Although OpenBSD is slower than Linux, it's actually usable for most tasks one may need to achieve.

Hardware support §

OpenBSD doesn't support as many devices as NixOS and its Linux kernel. On NixOS I can use an external NVIDIA card in a Thunderbolt enclosure; OpenBSD has no support for such enclosures, nor a driver for NVIDIA cards (which is mostly NVIDIA's fault for not providing documentation).

However, OpenBSD barely requires any configuration to work: if the hardware is supported, it will work.

Finally, OpenBSD can be used on old computers of various architectures, like i386, old Apple PowerPC, RISC or ARM machines, while NixOS only focuses on modern hardware such as amd64 and arm64.

Software choice §

Both systems provide a huge package set, but the one from Nix has more choice. It's not that bad on the OpenBSD side though: most common packages are available, often in a recent version, and many times I found a package available in OpenBSD but not in Nix.

Most notably, I feel the quality of OpenBSD packages is slightly higher than on Nix; they have fewer issues (Nix packages sometimes have issues that may be related to the unusual nix file hierarchy) and are sometimes patched to have better defaults (for instance, disabling network access that is opened by default in some GUI applications).

Both of them make a new release every six months, but while OpenBSD only backports security fixes to the packages of its latest release, NixOS provides a lot more package updates to its release users.

Updating packages is painless on both OpenBSD and NixOS, but it's easier to find out which version you are currently using on OpenBSD. This may be because I don't know the nix shell well enough, but I find it very hard to know whether I'm actually running a program that has been updated (something I often check after a CVE) or not.

OpenBSD packages list

NixOS packages list

Network §

Network is certainly the area where OpenBSD is the most renowned: its firewall Packet Filter is easy to use/configure and efficient. OpenBSD provides mechanisms such as routing tables/domains to assign a network interface to an entirely separated network, allowing a program/user to be reliably exposed to a specific interface; I haven't found how to achieve this on Linux yet. OpenBSD comes with all the daemons required to manage a network (dhcp, slaacd, rpki, email, http, NAT, ftp, tftp etc...) within its base system.

Network throughput performance may be sub-par on OpenBSD compared to Linux, but for the average user or server it's fine; it will mostly depend on the network card used and its driver support.

I don't really enjoy playing with the network on Linux as I find it very complicated; I never found how to aggregate Wi-Fi and Ethernet interfaces to transparently switch from one to the other when I (un)plug the RJ45 cable on my laptop, something that is easy to achieve on OpenBSD (I don't enjoy losing all my TCP connections when moving the laptop around), as sketched below.
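
For reference, here is roughly what this looks like on OpenBSD with trunk(4) in failover mode; the interface names em0 and iwm0 and the Wi-Fi credentials are examples to adapt:

# /etc/hostname.em0
up

# /etc/hostname.iwm0
join mynetwork wpakey mypassphrase
up

# /etc/hostname.trunk0
trunkproto failover trunkport em0 trunkport iwm0
inet autoconf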

Maintenance §

The maintenance topic will be very personal, for a personal workstation/server case and not a farm of hundreds of servers.

OpenBSD doesn't change much: it has a new release every six months, but the upgrades are always easy to handle, most corner cases are documented in the upgrade guide, and I'm ALWAYS confident when I have to update an OpenBSD system.

NixOS is also easy to update and keep clean; I haven't had any issue when upgrading yet, and it would still be possible to roll back to the previous version in case something goes wrong.

I can say they each take a different approach, but they both work well.

Documentation §

I have to say the NixOS documentation is rather huge and yet not always useful. There is a nice man page named "configuration.nix" listing all the options to configure a system, but it's generated from the Nix code and often lacks explanations beyond describing an API. There are also a few guides and manuals available on the NixOS website, but they are either redundant or don't really describe how to solve real-world problems.

NixOS documentation

On the OpenBSD side, the website provides a simple "Frequently Asked Questions" section for common use cases, and then the whole system and its internals are detailed in very well written man pages; it may feel unfriendly or complicated at first, but once you have tasted the OpenBSD man pages, you easily get sad when looking at other documentation. If you had to set up an OpenBSD system for some task relying on components from the base system (= not packages), I'm confident you could do it offline with only the man pages. OpenBSD is not a system whose documentation you find on various forums or GitHub gists, while I often have that feeling with NixOS :(

OpenBSD FAQ

OpenBSD man pages

Contributing §

I would say NixOS has a modern contribution system: it relies on GitHub, and a bot automatically runs many checks on the contributions, helping contributors validate their work quickly without "wasting" the time of someone who would otherwise have to read every submitted change.

OpenBSD does exactly the opposite: changes to the code are discussed on a mailing list, only between humans. It doesn't scale very well, but the human contact gives better explanations than a bot; that is, when your work interests someone willing to spend time on it, because sometimes you never get any feedback, and it's a bit sad that we lose updates and contributors because of this.

Conclusion §

I can't say one is better than the other, nor that one does absolutely better at a given task.

My love for OpenBSD may come from its small community, made of humans that like working on something different. I know how OpenBSD works, when something is wrong it's easy to debug because the system has been kept relatively simple. It's painless, when your hardware is supported, it just works fine. The default configuration is good and I don't have to worry about it.

But I also love NixOS: it's adventurous, and it offers a new experience (transactional updates, reproducibility) that I feel is the future of computing, but this also makes the whole thing very complicated to understand and debug. It's a huge piece of software that can be bent into many forms, provided you are a good Nix arcanist.

I'd be happy to hear about your experiences with regards to OpenBSD and NixOS, feel free to write me (mastodon or email) about this!

Keep your OpenBSD system cool with obsdfreqd

Written by Solène, on 21 March 2022.
Tags: #openbsd #power

Comments on Fediverse/Mastodon

Introduction §

Last week I wrote a system daemon to manage the CPU frequency from userland, entirely bypassing the kernel automatic mode. While it was more of a toy at first, because I only implemented the same automatic mode used in the kernel but with all the variables easily changeable, I found it valuable for many use cases, improving battery life or even temperature.

The coolest feature I added today is to support a maximum temperature and let the program do its best to keep the CPU temperature below the limit.

obsdfreqd project page

Installation §

As said in the "Too Long Didn't Read" section of the project README, a simple `make install` as root and starting the service is enough.

Results §

A nice benchmark was to start compiling the rust package with all four cores of my T470 laptop, run obsdfreqd with various temperature limits, and see how it goes. The program did a good job of reducing the CPU frequency to keep the temperature around the threshold.

Diagram of benchmark results of various temperature limitation

Conclusion §

While this is ultimately not a replacement for the in-kernel frequency scheduler, it can be used to keep a computer a lot cooler or make a system comply with some specific requirements (performance for given battery life or maximum temperature).

The customization allows different settings depending on whether the system is running on battery or not, which can be tailored to suit every kind of user. The defaults provide good performance when on AC, and a balanced performance/battery-life mode when on battery.

Reproducible clean $HOME in OpenBSD using impermanence

Written by Solène, on 15 March 2022.
Tags: #openbsd #reproducible #nixos #unix

Comments on Fediverse/Mastodon

Introduction §

Let me present my latest project: home-impermanence, whose name is a reference to the NixOS community project impermanence. The name may not make it obvious what it does, so let me explain.

NixOS wiki about Impermanence, a community module

home-impermanence for OpenBSD

The original goal of impermanence in NixOS is to have a fully reproducible system mounted on tmpfs, where only user-defined files and directories are hooked into the temporary file system to be persistent (such as /var/lib and some /etc files, for instance). While this is achievable on NixOS, on the OpenBSD side we are far from having the tooling to go that deep, so I wrote home-impermanence, which allows a user to do just that at their $HOME level.

What does it mean exactly? When you start your system, your $HOME directory is mounted with an empty memory-based file system (using mfs), and symbolic links to the files and directories listed in the configuration file are created in your $HOME. Every time you reboot, you get the exact same set of files; extra files created in the meantime are lost. When you keep a $HOME directory for long, you know you accumulate many directories and files in ~/.config, ~/.local or directly as dotfiles at the top level of the home directory; with impermanence you can get rid of all that noise.

A benefit is that you can run software as if it were its first run: after some software upgrades you avoid old settings that would create trouble, or settings that would disturb a whole class of applications (like a gtk setting affecting all gtk programs). With impermanence, the user decides exactly what should remain across reboots and what should disappear.

Implementation §

My implementation is a Perl script relying on some libraries packaged on OpenBSD; it runs as root from an rc service, with its settings defined in rc.conf.local. It reads the configuration file from the persistent directory holding the user data and creates symlinks in the target directory to the listed files and directories, doing some sanitizing in the process to prevent listed files from being included in listed directories, which would nest symlinks incorrectly.

I chose Perl because it's a stable language, OpenBSD ships with Perl and the very few dependencies required were already available in the ports tree.

The program could easily be ported to Linux, FreeBSD and maybe NetBSD: the mount_mfs calls could be replaced by mount_tmpfs, and the directory symlinks could be done with a bind mount or mount_nullfs, which we don't have on OpenBSD. If someone wants to port my project to another system, I could help adding the required logic.
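
For illustration, the Linux equivalents of those primitives could look like this; a sketch only, reusing the paths from the examples below, with the uid/gid and the 500m size being assumptions:

# empty memory-backed $HOME, like mount_mfs on OpenBSD
mount -t tmpfs -o size=500m,uid=1000,gid=1000,mode=0700 tmpfs /home/user

# a persistent directory can be bind-mounted instead of symlinked
mkdir -p /home/user/Documents
mount --bind /home/persist/user/Documents /home/user/Documents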

How to use §

I wrote a complete README file explaining the installation and configuration process, for full instructions refer to this document and the man page that ships with home-impermanence.

home-impermanence README

Installation §

Quick method:

git clone https://tildegit.org/solene/home-impermanence/
cd home-impermanence
doas make install
doas rcctl enable impermanence
doas rcctl set impermanence flags -u user -d /home/persist/
doas install -d /home/persist/

From this point, you may want to act quickly: log out of your user account and run these commands, which will move your user directory and prepare the mountpoint.

mv /home/user /home/persist/user
install -d -o user -g wheel /home/user

Now, it's time to configure impermanence before running it.

Configuration §

Reusing the paths from the installation example, the configuration file should be in /home/persist/user/impermanence.yml, and the file must use YAML formatting. Here is my personal configuration file, which you can use as a base.

size: 500m
files:
  - .Xdefaults
  - .Xresources
  - .bashrc
  - .gitconfig
  - .kshrc
  - .profile
  - .xsession
  - .tmux.conf
  - .config/kwalletrc
directories:
  - .claws-mail
  - .config/Thunar
  - .config/asciinema
  - .config/gajim
  - .config/kak
  - .config/keepassxc
  - .config/lagrange
  - .config/mpv
  - .config/musikcube
  - .config/openttd
  - .config/xfce4
  - .config/zim
  - .local/share/cozy
  - .local/share/gajim
  - .local/share/ibus-typing-booster
  - .local/share/kwalletd
  - .mozilla
  - .ssh
  - Documents
  - Downloads
  - Music
  - bin
  - dev
  - notes
  - tmp

When you think you are done, start the impermanence rc service with rcctl start impermanence and log in. You should see all the symlinks you defined in your configuration file.

Result §

Here is the content of my $HOME directory when I use impermanence.

solene@daru ~> ls -la
total 104
drwxr-xr-x   8 solene  wheel    1024 Mar 15 12:10 .
drwxr-xr-x  17 root    wheel     512 Mar 14 15:36 ..
-rw-------   1 solene  wheel     165 Mar 15 09:08 .ICEauthority
-rw-------   1 solene  solene     53 Mar 15 09:08 .Xauthority
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .Xdefaults -> /home/permanent//solene/.Xdefaults
lrwxr-xr-x   1 root    wheel      35 Mar 15 09:08 .Xresources -> /home/permanent//solene/.Xresources
-rw-r--r--   1 solene  wheel      48 Mar 15 12:07 .aspell.en.prepl
-rw-r--r--   1 solene  wheel      42 Mar 15 12:07 .aspell.en.pws
lrwxr-xr-x   1 root    wheel      31 Mar 15 09:08 .bashrc -> /home/permanent//solene/.bashrc
drwxr-xr-x   9 solene  wheel     512 Mar 15 12:10 .cache
lrwxr-xr-x   1 root    wheel      35 Mar 15 09:08 .claws-mail -> /home/permanent//solene/.claws-mail
drwx------   8 solene  wheel     512 Mar 15 12:27 .config
drwx------   3 solene  wheel     512 Mar 15 09:08 .dbus
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .gitconfig -> /home/permanent//solene/.gitconfig
drwx------   3 solene  wheel     512 Mar 15 12:32 .gnupg
lrwxr-xr-x   1 root    wheel      30 Mar 15 09:08 .kshrc -> /home/permanent//solene/.kshrc
drwx------   3 solene  wheel     512 Mar 15 09:08 .local
lrwxr-xr-x   1 root    wheel      32 Mar 15 09:08 .mozilla -> /home/permanent//solene/.mozilla
lrwxr-xr-x   1 root    wheel      32 Mar 15 09:08 .profile -> /home/permanent//solene/.profile
lrwxr-xr-x   1 solene  wheel      30 Mar 15 12:10 .sbclrc -> /home/permanent/solene/.sbclrc
drwxr-xr-x   2 solene  wheel     512 Mar 15 09:08 .sndio
lrwxr-xr-x   1 root    wheel      28 Mar 15 09:08 .ssh -> /home/permanent//solene/.ssh
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .tmux.conf -> /home/permanent//solene/.tmux.conf
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 .xsession -> /home/permanent//solene/.xsession
-rw-------   1 solene  wheel   25273 Mar 15 13:26 .xsession-errors
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 Documents -> /home/permanent//solene/Documents
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 Downloads -> /home/permanent//solene/Downloads
lrwxr-xr-x   1 root    wheel      30 Mar 15 09:08 HANGAR -> /home/permanent//solene/HANGAR
lrwxr-xr-x   1 root    wheel      27 Mar 15 09:08 dev -> /home/permanent//solene/dev
lrwxr-xr-x   1 root    wheel      29 Mar 15 09:08 notes -> /home/permanent//solene/notes
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 quicklisp -> /home/permanent//solene/quicklisp
lrwxr-xr-x   1 root    wheel      27 Mar 15 09:08 tmp -> /home/permanent//solene/tmp

Rollback §

Rolling back is easy: disable impermanence, move /home/persist/user back to /home/user and you are done.

Conclusion §

I really don't want to go back to not using impermanence since I tried it on NixOS. I thought implementing it only for $HOME would be a good enough start, thought it through, made a proof of concept to see if the symbolic link method was enough to make it work, and it was!

I hope you will enjoy this as much as I do, feel free to contact me if you need some help understanding the setup.

Reed-alert: five years later

Written by Solène, on 10 February 2022.
Tags: #unix #reed-alert #linux #lisp

Comments on Fediverse/Mastodon

Introduction §

I wrote the program reed-alert five years ago and have been using it since its first days; here is some feedback about it.

The software reed-alert is meant to be used by system administrators who want to monitor their infrastructures and get alerts when things go wrong. I got a lot more experience in the monitoring field over time and I wanted to share some thoughts about this project.

reed-alert source code

Reed-alert §

The name §

The software name is a pun I found in a Star Trek Enterprise episode.

Reed alert pun origins

Project finished §

The code didn't receive many commits over the last years; I consider the program feature-complete, though new probes could be added and bugs could be fixed. The core of the software itself is perfect to me.

The probes are small pieces of code allowing to monitor extra states, like an HTTP return code, a working ping, a started service etc... It's already easy to extend reed-alert by using a shell command returning 0 or non-zero to define a custom probe.

Reliability §

I don't remember having a single issue with reed-alert since I set it up on my server. It's run by a cron job every 10 minutes, which means a common lisp interpreter loads the code, evaluates the configuration file, runs the check commands and the alert commands if required, and stops. I chose a serviceless paradigm for reed-alert as it makes the code and its usage a lot simpler. With a long-running service, it could fail, leak memory, be exploited, and certainly hit many other bugs I can't think of.

Reed-alert is simple, as it only needs a common lisp interpreter; the most notable ones, sbcl and ecl, are absolutely reliable and change very little over time. Some standard unix commands are required for some checks or default alerts, such as ping, service, mail or curl, but this defers all the work to well-established binaries.

The source code is minimal, with 179 lines for the reed-alert core and 159 lines for the probes, a total of 338 lines of code (including empty lines and comments); hacking on reed-alert is super easy and always a lot of fun for me. For whatever reason, my common lisp software often works on the first try when I add new features, so it's always pleasant to work on.

Awesome features §

One aspect of reed-alert that may disturb users at first is the choice of common lisp code as the configuration file. This may look complicated at first, but a simple configuration doesn't require more common lisp knowledge than what is explained in the reed-alert documentation. It shows its full power when you need to loop over data entries to run checks, making reed-alert dynamic instead of handwriting all the configuration.

Using common lisp as configuration has other advantages: it's possible to chain checks, to easily prevent some checks from running when a prior condition is failing. Let me give a few examples:

  • if you monitor a web server, you first want to check if it replies on ICMP before trying to check and report errors on HTTP level
  • if you monitor remote servers, you first want to check if you can reach the internet and that your local gateway is online
  • if you check a local web server, it would be a good idea to check if all the required services are running first

All the previous conditions can be done with reed-alert thanks to the code-as-configuration choice.

Scalability §

I've been asked a few times if reed-alert could be used in a professional context. Depending on what you call a professional environment, I will reply it depends.

Reed-alert is dumb: it needs to be run from scheduling software (such as cron) and runs the checks sequentially. It won't guarantee perfect timing between checks.

If you need multiple machines to run a set of checks, reed-alert is not able to share its state in order to keep working reliably in a high-availability environment.

With regard to resource usage, while reed-alert is small, it needs to start the common lisp interpreter every time; if you want to run reed-alert every minute or multiple times per minute, I'd recommend using something else.

A real life example §

Here is a chunk of the configuration I've been running for years, it checks the system itself and some remote servers.

(=> mail disk-usage  :path "/"     :limit 60 :desc "partition /")
(=> mail disk-usage  :path "/var"  :limit 70 :desc "partition /var")
(=> mail disk-usage  :path "/home" :limit 95 :desc "partition /home")
(=> mail service :name "dovecot")
(=> mail service :name "spamd")
(=> mail service :name "dkimproxy_out")
(=> mail service :name "smtpd")
(=> mail service :name "ntpd")

(=> mail number-of-processes :limit 140)

;; check dataswamp server is working
(=> mail ping :host "dataswamp.org" :desc "Dataswamp")

;; check webzine related web servers
(and
    (=> mail ping :host "openports.pl"     :desc "Liaison Grifon.fr")
    (=> mail curl-http-status :url "https://webzine.puffy.cafe" :desc "Webzine Puffy.cafe" :timeout 10)
    (=> mail curl-http-status :url "https://puffy.cafe" :desc "Puffy.cafe" :timeout 10)
    (=> mail ssl-expiration :host "webzine.puffy.cafe" :seconds (* 7 24 60 60))
    (=> mail ssl-expiration :host "puffy.cafe" :seconds (* 7 24 60 60)))

;; check openports.pl is working
(and
    (=> mail ping :host "46.23.90.152"  :desc "Openports.pl ping")
    (=> mail curl-http-status :url "http://46.23.90.152" :desc "Packages OpenBSD http" :timeout 10))

;; check www.openbsd.org website is replying under 10 seconds
(=> mail curl-http-status :url "https://www.openbsd.org" :desc "OpenBSD.org" :timeout 10)

;; check if a XML file is created regularly and valid
(=> mail file-updated :path "/var/www/htdocs/solene/openbsd-current.xml" :limit 1440)
(=> mail command :command (format nil "xmllint /var/www/htdocs/solene/openbsd-current.xml") :desc "XML openbsd-current.xml is not valid")


;; monitoring multiple gopher servers
(loop for host in '("grifon.fr" "dataswamp.org" "gopherproject.org")
      do
      (=> mail command
          :try 6
          :command (format nil "echo '/is-alive?done-by-solene-at-libera' | nc -w 3 ~a 70" host)
          :desc (concatenate 'string "Gopher " host)))

(quit)

Conclusion §

I wrote a simple piece of software using an old programming language (ANSI Common Lisp dates from 1994); the result is that it's reliable over time, requires no code maintenance and is fun to hack on.

Common Lisp on Wikipedia

Harden your NixOS workstation

Written by Solène, on 13 January 2022.
Tags: #nix #nixos #security

Comments on Fediverse/Mastodon

Introduction §

Coming from an OpenBSD background, I wanted to harden my NixOS system for better security. As you may know (or not), security mitigations must be designed against a threat model. My model here is to prevent web browsers from leaking data, prevent services from being remotely exploitable and prevent programs from being exploited to run malicious code.

NixOS comes with a few settings to improve in these areas; I'll share a sample configuration to increase the default security. Unrelated to security defenses themselves: you should absolutely encrypt your filesystem, so that no data can be extracted in case of physical access to your computer.

Use the hardened profile §

There are a few profiles available by default in NixOS, which are files with a set of definitions, and one of them is named "hardened" because it enables many security measures.

Link to the hardened profile definition

Here is a simplified list of important changes:

  • use the hardened Linux kernel (different defaults and some extra patches from https://github.com/anthraxx/linux-hardened/)
  • use the memory allocator "scudo", protecting against some buffer overflow exploits
  • prevent kernel modules from being loaded after boot
  • protect against rewriting the kernel image
  • increase containers/virtualization protection at a performance cost (L1 flush or page table isolation)
  • apparmor is enabled by default
  • many filesystem modules are forbidden because old/rare/not audited enough
  • many other specific tweaks

Of course, using this mode will slightly reduce system performance and may trigger some runtime problems due to the memory management being less permissive. On one hand, it's good because it allows catching programming errors, but on the other hand it's not fun to have your programs crash when you need them.

With the scudo memory allocator, I have trouble running Firefox: it will only start after 2 or 3 crashes and then works fine. There is an even less permissive allocator named graphene-hardened, but I had too much trouble running programs with it.

Use firewall §

One simple rule is to block any incoming traffic that would connect to listening services. It's way more secure to block everything and then allow the services you know must be open to the outside than relying on the service's configuration to not listen on public interfaces.

Use Clamav §

ClamAV is an antivirus, and yes, it can be useful on Linux. If it prevents you from running a hostile binary even once, then it's worth running.

Firejail §

I featured firejail previously on my blog; I'm convinced of its usefulness. You can run a program with firejail, and it will restrict its permissions and rights so that, in case of a security breach, the damage stays contained.

It's particularly important to run web browsers with it, because it denies them any access to the filesystem except ~/Downloads/ and a few required directories (local profile, /etc/resolv.conf, font cache etc...).

Enable this on NixOS §

Because NixOS is declarative, it's easy to share the configuration. My configuration supports both Firefox and Chromium; you can remove the related lines you don't need.

Be careful about the import declaration, you certainly already have one for the ./hardware-configuration.nix file.

 imports =
   [
      ./hardware-configuration.nix
      <nixpkgs/nixos/modules/profiles/hardened.nix>
   ];

  # enable firewall and block all ports
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [];
  networking.firewall.allowedUDPPorts = [];

  # disable coredump that could be exploited later
  # and also slow down the system when something crash
  systemd.coredump.enable = false;

  # required to run chromium
  security.chromiumSuidSandbox.enable = true;

  # enable firejail
  programs.firejail.enable = true;

  # create system-wide executables firefox and chromium
  # that will wrap the real binaries so everything
  # work out of the box.
  programs.firejail.wrappedBinaries = {
      firefox = {
          executable = "${pkgs.lib.getBin pkgs.firefox}/bin/firefox";
          profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
      };
      chromium = {
          executable = "${pkgs.lib.getBin pkgs.chromium}/bin/chromium";
          profile = "${pkgs.firejail}/etc/firejail/chromium.profile";
      };
  };

  # enable antivirus clamav and
  # keep the signatures' database updated
  services.clamav.daemon.enable = true;
  services.clamav.updater.enable = true;

Rebuild the system, reboot and enjoy your new secure system.
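
For reference, applying the configuration is the usual rebuild, something like:

# apply the new configuration and make it the boot default
nixos-rebuild switch

# the hardened kernel is only used after a reboot
reboot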

Going further: network filtering §

If you want complete control over your network connections, I'd absolutely recommend the OpenSnitch service. This is a daemon that watches all the network activity on the system and lets you allow/block connections per executable, source, destination, protocol and many other parameters.

OpenSnitch comes with a GUI app called opensnitch-ui, which is mandatory: if the UI is not running, no filtering is done. When the UI is running, every time a new connection doesn't match an existing rule, you will be prompted with information about what the executable is trying to do, on which protocol and with which host, and then you can decide for how long to allow it (or block it).

Just use `services.opensnitch.enable = true;` in the system configuration and run the opensnitch-ui program in your graphical session. To have persistent rules, open opensnitch-ui, go to the Preferences menu, Database tab, choose "Database type: File" and pick a path to save it (it's an sqlite database).

From this point, you will have to allow / block all the network activity on your system. It can be time-consuming at first, but it's user-friendly enough, and rules can be as broad as "allow this entire executable" so you don't have to allow every website visited by your web browser (but you could!). You may be surprised by the amount of traffic generated by non-networking programs. After some time, the rule set should cope with most of your needs without requiring new entries.

OpenSnitch wiki: getting started

How to pin a nix-shell environment using niv

Written by Solène, on 12 January 2022.
Tags: #nix #nixos #shell

Comments on Fediverse/Mastodon

Introduction §

In the past I shared a bit about the nix-shell tool, which allows having a "temporary" environment with a specific set of tools available. I'm using it on my blog to get all the dependencies required to rebuild it without having to remember which programs to install.

But while this method is practical, as I'm running the NixOS development version (called the unstable channel), I have to download new versions of the dependencies every time I use the nix shell. This takes a long time on my DSL line, and it's also a waste of bandwidth.

There is a way to pin the version of the packages, so I always use the exact same environment, whatever version of nixpkgs my system is on.

Use niv tool §

Let me introduce you to niv, a program to manage nix dependencies; for this how-to I will only use a fraction of its features. We just want it to initialize a directory with a default configuration pinning the nixpkgs repository to a branch / commit ID, and we will tell the shell to use this version.

niv project GitHub homepage

Let's start by running niv (you can get niv from nix package manager) in your directory:

niv init

It will create a nix/ directory with two files: sources.json and sources.nix, looking at the content is not fascinating here (you can take a look if you are curious though). The default is to use the latest nixpkgs release.

Create a shell.nix file §

My previous shell.nix file looked like this:

with (import <nixpkgs> {});
mkShell {
    buildInputs = [
        gnumake sbcl multimarkdown python3Full emacs-nox toot nawk mandoc libxml2
    ];
}

Yes, I need all of this for my blog to work because I have texts in org-mode/markdown/mandoc/gemtext/custom. The blog also requires toot (for mastodon), sbcl (for the generator), make (for building and publishing).

Now, I will make a few changes to use the nix/sources.nix file to tell it where to get the nixpkgs information, instead of <nixpkgs> which is the system-global one.

let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
with pkgs;
pkgs.mkShell {
    buildInputs = [
        gnumake sbcl multimarkdown python3Full emacs-nox
        toot nawk mandoc libxml2
    ];
}

That's all! Now, when I run nix-shell in the directory, I always get the exact same shell and set of packages every day.

How to update? §

Because it's important to update from time to time, you can easily manage this using niv; it will bump nixpkgs to the latest commit id of the tracked branch:

niv update nixpkgs -b master

When a new release is out, you can switch to the new branch using:

niv modify nixpkgs -a branch=release-21.11

Using niv with configuration.nix §

It's possible to use niv to pin the git revision you want to use to build your system, it's very practical for many reasons like following the development version on multiple machines with the exact same revision. The snippet to use sources.nix for rebuilding the system is a bit different.

Replace "{ pkgs, config, ... }:" with:

{
  sources ? import ./nix/sources.nix,
  pkgs ? import sources.nixpkgs {},
  config, ...
}:

Of course, you need to run "niv init" in /etc/nixos/ beforehand if you want to manage your system with niv.

Extra tip: automatically run nix-shell with direnv §

It's particularly comfortable to have your shell automatically load the environment when you cd into a project requiring a nix-shell; this is doable with the direnv program.

nixos documentation about direnv usage

direnv project homepage

This can be done in 3 steps after you installed direnv in your profile:

  1. create a file .envrc in the directory with the content "use nix" (without the double quotes of course)
  2. execute "direnv allow"
  3. create the hook in your shell, so it knows what to do with direnv (do this only once)

How to hook direnv in your shell

Every time you cd into the directory, the nix-shell environment will be loaded automatically.
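
Here is a minimal sketch of the whole thing, assuming direnv is installed and hooked into your shell, and that the project directory (hypothetical name below) already contains a shell.nix:

cd ~/my-project          # hypothetical project directory containing shell.nix
echo "use nix" > .envrc  # tell direnv to load the nix-shell environment
direnv allow             # authorize this .envrc once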

My plans for 2022

Written by Solène, on 08 January 2022.
Tags: #life #blog

Comments on Fediverse/Mastodon

Greetings dear readers, I wish you a happy new year and all the best. Like I did previously at the new year time, although it's not a yearly exercise, I would like to talk about the blog and my plan for the next twelve months.

About me §

Let's talk about me first, it will make sense for the blog part after. I plan to find a new job, maybe switch into the cybersecurity field or work in some position allowing me to contribute to an open source project, it's not that easy to find, but I have hope.

This year, I will work on getting new skills, which should help me find jobs, but I also think I've been resting a bit on the learning front over the last two years. My plan is to dedicate 45 minutes every day to learning about a topic. I already started doing so with some security and D language readings.

About the blog §

With regular learning time, I'm not sure yet if I will have much desire to write here as often as I did in 2021. I'm absolutely sure the publication rate will drop, but I will try to maintain a minimum, because as I'm learning I will hopefully want to share some ideas, experiences or knowledge.

I'm thankful for the community of readers I have; I often get feedback by email, IRC or Mastodon about my posts, so I can fix them, extend them or rework them if I was wrong. This is invaluable to me, it helps me make connections with other people, and that's what makes life interesting.

Podcast §

In December 2021, I had the chance to be interviewed by the people of the BSDNow podcast, I'm talking about how I got into open source, about my blog but also about the old laptop challenge I made last year.

Access to the podcast link on BSDNow

Thanks everyone! Let's have fun with computers!

My NixOS configuration

Written by Solène, on 21 December 2021.
Tags: #nixos #linux

Comments on Fediverse/Mastodon

Introduction §

Let me share my NixOS configuration file, the one in /etc/nixos/configuration.nix that describes what is installed on my Lenovo T470 laptop.

The core principle of NixOS is that you declare every user, service, network and system setting in a file, and it then configures itself to match your expectations. You can also install global packages and per-user packages. It makes a system environment reproducible and reliable.

The file §

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # run garbage collector at 19h00 everyday
  # and remove stuff older than 60 days
  nix.gc.automatic = true;
  nix.gc.dates = "19:00";
  nix.gc.persistent = true;
  nix.gc.options = "--delete-older-than 60d";

  # clean /tmp at boot
  boot.cleanTmpDir = true;

  # latest kernel
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # sync disk when buffer reach 6% of memory
  boot.kernel.sysctl = {
      "vm.dirty_ratio" = 6;
  };

  # allow non free stuff
  nixpkgs.config.allowUnfree = true;

  # Use the systemd-boot EFI boot loader.
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  networking.hostName = "t470";
  time.timeZone = "Europe/Paris";
  networking.networkmanager.enable = true;

  # wireguard VPN
  networking.wireguard.interfaces = {
      wg0 = {
              ips = [ "192.168.5.1/24" ];
              listenPort = 1234;
              privateKeyFile = "/root/wg-private";
              peers = [
              { # server
               publicKey = "MY PUB KEY";
               endpoint = "SERVER:PORT";
               allowedIPs = [ "192.168.5.0/24" ];
              }];
      };
  };

  # firejail firefox by default
  programs.firejail.wrappedBinaries = {
      firefox = {
          executable = "${pkgs.lib.getBin pkgs.firefox}/bin/firefox";
          profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
      };
  };


  # azerty keyboard <3
  i18n.defaultLocale = "fr_FR.UTF-8";
  console = {
  #   font = "Lat2-Terminus16";
    keyMap = "fr";
  };

  # clean logs older than 2d
  services.cron.systemCronJobs = [
      "0 20 * * * root journalctl --vacuum-time=2d"
  ];

  # nvidia prime offload rendering for eGPU
  hardware.nvidia.modesetting.enable = true;
  hardware.nvidia.prime.sync.allowExternalGpu = true;
  hardware.nvidia.prime.offload.enable = true;
  hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
  hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
  services.xserver.videoDrivers = ["nvidia" ];

  # programs
  programs.steam.enable = true;
  programs.firejail.enable = true;
  programs.fish.enable = true;
  programs.gamemode.enable = true;
  programs.ssh.startAgent = true;

  # services
  services.acpid.enable = true;
  services.thermald.enable = true;
  services.fwupd.enable = true;
  services.vnstat.enable = true;

  # Enable the X11 windowing system.
  services.xserver.enable = true;
  services.xserver.displayManager.sddm.enable = true;
  services.xserver.desktopManager.plasma5.enable = true;
  services.xserver.desktopManager.xfce.enable = false;
  services.xserver.desktopManager.gnome.enable = false;

  # Configure keymap in X11
  services.xserver.layout = "fr";
  services.xserver.xkbOptions = "eurosign:e";

  # Enable sound.
  sound.enable = true;
  hardware.pulseaudio.enable = true;

  # Enable touchpad support
  services.xserver.libinput.enable = true;

  users.users.solene = {
     isNormalUser = true;
     shell = pkgs.fish;
     packages = with pkgs; [
        gajim audacity chromium dmd dtools
     	kate kdeltachat pavucontrol rclone rclone-browser
     	zim claws-mail mpv musikcube git-annex
     ];
     extraGroups = [ "wheel" "sudo" "networkmanager" ];
  };

  # my gaming users running steam/lutris/emulators
  users.users.gaming = {
     isNormalUser = true;
     shell = pkgs.fish;
     extraGroups = [ "networkmanager" "video" ];
     packages = with pkgs; [ lutris firefox ];
  };

  users.users.aria = {
     isNormalUser = true;
     shell = pkgs.fish;
     packages = with pkgs; [ aria2 ];
  };

  # global packages
  environment.systemPackages = with pkgs; [
      ncdu kakoune git rsync restic tmux fzf
  ];

  # Enable the OpenSSH daemon.
  services.openssh.enable = true;

  # Open ports in the firewall.
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [ 22 ];
  networking.firewall.allowedUDPPorts = [ ];

  # user aria can only use tun0
  networking.firewall.extraCommands = "
iptables -A OUTPUT -o lo -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 1002 -j REJECT
  ";

  # This value determines the NixOS release from which the default
  # settings for stateful data, like file locations and database versions
  # on your system were taken. It‘s perfectly fine and recommended to leave
  # this value at the release version of the first install of this system.
  # Before changing this value read the documentation for this option
  # (e.g. man configuration.nix or on https://nixos.org/nixos/options.html).
  system.stateVersion = "21.11"; # Did you read the comment?

}

Restrict users to a network interface on Linux

Written by Solène, on 20 December 2021.
Tags: #linux #network #security #privacy

Comments on Fediverse/Mastodon

Introduction §

If for some reason you want to prevent a system user from using any network interface except one, it's doable with a couple of iptables commands.

The use case would be to force your user to go through a VPN and make sure it can't reach the Internet if the VPN is not available.

iptables man page

Iptables §

We can use simple rules using the "owner" module, basically, we will allow traffic through tun0 interface (the VPN) for the user, and reject traffic for any other interface.

Iptables applies the first matching rule, so if traffic goes through tun0 it's allowed, and otherwise it's rejected. This is quite simple and reliable.

We will need the user id (uid) of the user we want to restrict; it can be found as the third field of /etc/passwd or by running "id the_user".

iptables -A OUTPUT -o lo -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 1002 -j REJECT

Note that instead of --uid-owner it's possible to use --gid-owner with a group ID if you want to make this rule for a whole group.
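
To check the result, here is a small sketch; the user name "restricted" and the URL are hypothetical, the point is that the command should fail while tun0 is down and work once the VPN is up:

# list the OUTPUT rules with packet counters to confirm they are loaded
iptables -L OUTPUT -v -n

# run a command as the restricted user
sudo -u restricted curl -s https://example.com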

To make the rules persistent across reboots, please check your Linux distribution documentation.

Going further §

I trust firewall rules to do what we expect from them. Some userland programs may be able to restrict the traffic, but we can't know for sure if it's truly blocking or not. With iptables, once you made sure the rules are persistent, you have a guarantee that the traffic will be blocked.

There may be better ways to achieve the same restrictions, if you know one that is NOT complex, please share!

Playing video games on Linux

Written by Solène, on 19 December 2021.
Tags: #linux #gaming

Comments on Fediverse/Mastodon

Introduction §

While I mostly make posts about playing on OpenBSD, I also play video games on Linux. There is a lot more choice, but the price is that games come from various sources, each with their pros and cons.

Commercial stores §

There are a few websites where you can get games:

itch.io §

Itch.io is dedicated to indie games; you can find many games running on Linux, and most games there are free. Most could be considered "amateurish", but it's a nice pool from which some gems emerge, like Celeste, Among Us or Noita.

itch.io website

Steam §

It is certainly the biggest commercial platform; it requires the Steam desktop client and an account to be useful. You can find many free-to-play video games (including some open source games like OpenTTD or Wesnoth, which are now available on Steam for free) but also paid games. Steam is working hard on their tool to make Windows games run on Linux (based on Wine plus many improvements to the graphics stack). The library manager allows filtering for Linux games if you want to find native titles. Steam is really a big DRM platform, but it also works well.

Steam website

GOG §

GOG is a webstore selling video games (many old games from people's childhood but not only), they only require you to have an account. When you buy a game in their store, you have to download the installer, so you can keep/save it, without any DRM beyond the account registration on their website to buy games.

GOG website

Your package manager / flatpak §

There are many open source video games around, they may be available in your package manager, allowing a painless installation and maintenance.

Flatpak package manager also provides video games, some are recent and complex games that are not found in many package managers because of the huge work required.

flathub flatpak repository, games page

Developer's website §

Sometimes, when you want to buy a game, you can buy it directly on the developer's website, it usually comes without any DRM and doesn't rely on a third party vendor. I know I did it for Rimworld, but some other developers offer this "service", it's quite rare though.

Epic game store §

They do not care about Linux.

Streaming services §

It's now possible to play remotely through "cloud computing", using a company's computer with a good graphic card. There are solutions like Nvidia with Geforce Now or Stadia from Google, both should work in a web browser like Chromium.

They require a very decent Internet access with at least 15 Mb/s of download speed for a 1080p stream, but will work almost anywhere.

How to manage games §

Let me describe a few programs that can be used to manage games libraries.

Steam §

As said earlier, Steam has its own mandatory desktop client to buy/install/manage games.

Lutris §

Lutris is an ambitious open source project; it aims to be a game library manager allowing you to mix any kind of game: emulation / Steam / GOG / Itch.io / Epic Games Store (through Wine) / native Linux games, etc.

Its website is a place where people can submit recipes for installing games that would otherwise be complicated, allowing the community to automate and share ways to install them. It also makes installing games from GOG very easy. There is a recent feature to handle the Epic Games Store, but it's currently not really enjoyable, and the launcher itself, running through Wine, draws CPU like mad.

It has nice features such as enabling a HUD to display FPS, automatically running "gamemode" (disabling screen effects, doing some optimizations), easily offloading rendering to a graphics card, setting the locale or switching to qwerty per game, etc.

It's really a nice project that I follow closely, it's very useful as a Linux gamer.

lutris project website

Minigalaxy §

Minigalaxy is a GUI to manage GOG games, installing them locally with one click, keeping them updated or installing DLC with one click too. It's really simplistic compared to Lutris, but it's made as a simple client to manage GOG games which is perfectly fine.

Minigalaxy can update games while Lutris can't; both can be used on the same installed games. I find the two complementary.

Minigalaxy project website

play.it §

This tool is a set of scripts to help you install native Linux video games on your system, depending on their running method (open source engine, installer, emulator, etc.).

play.it official website

Conclusion §

It has never been so easy to play video games on Linux. Of course, you have to decide if you want to run closed source programs or not. Even when a game is closed source, fans may have developed a compatible open source engine from scratch to play it natively again, provided you have access to the "assets" (the set of files required by the game which are not part of the engine, like textures, sounds, databases).

List of game engine recreation (Wikipedia EN)

OpenVPN on OpenBSD in its own rdomain to prevent data leak

Written by Solène, on 16 December 2021.
Tags: #openbsd #openvpn #security

Comments on Fediverse/Mastodon

Introduction §

Today I will explain how to establish an OpenVPN tunnel through a dedicated rdomain, to only expose the VPN tunnel as an available interface and prevent data leaking outside the VPN (which could induce privacy issues). I did the same recently for WireGuard tunnels, but WireGuard has an integrated mechanism for this.

Let's reuse the network diagram from the WireGuard text to explain:


    +-------------+
    |   server    | tun0 remote peer
    |             |---------------+
    +-------------+               |
           | public IP            |
           | 1.2.3.4              |
           |                      |
           |                      |
    /\/\/\/\/\/\/\                |OpenVPN
    |  internet  |                |VPN
    \/\/\/\/\/\/\/                |
           |                      |
           |                      |
           |rdomain 1             |
    +-------------+               |
    |   computer  |---------------+
    +-------------+ tun0
                    rdomain 0 (default)

We have our computer and have been provided an OpenVPN configuration file; we want to establish the OpenVPN tunnel toward the server 1.2.3.4 using rdomain 1. We will put our network interfaces into rdomain 1, so when the VPN is NOT up, we won't be able to connect to the Internet (outside the VPN).

Network configuration §

Add "rdomain 1" to your network interfaces configuration file like "/etc/hostname.trunk0" if you use a trunk interface to aggregate Ethernet/Wi-Fi interfaces into an automatic fail over trunk, or in each interface you are supposed to use regularly. I suppose this setup is mostly interesting for wireless users.

Create a "/etc/hostname.tun0" file that will be used to prepare the tun0 interface for OpenVPN, add "rdomain 0" to the file, this will be enough to create the tun0 interface at startup. (Note that the keyword "up" would work too, but if you edit your files I find it easier to understand the rdomains of each interface).

Run "sh /etc/netstart" as root to apply changes done to the files, you should have your network interfaces in rdomain 1 now.

OpenVPN configuration §

From here, I assume your OpenVPN configuration works. The OpenVPN client/server setup is out of the scope of this text.

We will use rcctl to ensure the openvpn service is enabled (if it's already enabled this is not an issue), then we will configure it to use rtable 1, which means it will connect through the interfaces in rdomain 1.

If your OpenVPN configuration runs a script to set up the route(s) (through an "up /etc/something..." directive in the configuration file), you will have to add the parameter -T0 to the route commands in that script. This is important because openvpn runs in rdomain 1, so calls to "route" apply to routing table 1 by default; you must change the route commands so the changes apply to routing table 0.
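
For example, a hypothetical line inside such an "up" script would look like this, with -T0 targeting routing table 0 (the network and gateway are made up):

route -T0 add -net 192.168.10.0/24 10.8.0.1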

rcctl enable openvpn
rcctl set openvpn rtable 1
rcctl restart openvpn

Now, you should have your tun0 interface in rdomain 0, being the default route and the other interfaces in rdomain 1.

If you run any network program it will go through the VPN, if the VPN is down, the programs won't connect to the Internet (which is the wanted behavior here).

Conclusion §

The rdomain and routing table concepts are powerful tools, but they are not always easy to grasp, especially in the context of a VPN mixing both (one for connectivity and one for the tunnel). People using a VPN certainly want to prevent their programs from bypassing it, and this setup is absolutely effective at that task.

Persistency management of memory based filesystem on OpenBSD

Written by Solène, on 15 December 2021.
Tags: #openbsd #performance

Comments on Fediverse/Mastodon

Introduction §

For saving my SSD and also speeding up my system, I store some cache files in memory using the mfs filesystem on OpenBSD. But it would be nice to save the content upon shutdown and restore it at start, wouldn't it?

I found that storing the web browser cache in a memory filesystem drastically improves its responsiveness, but it's hard to measure.

Let's do that with a simple rc.d script.

Configuration §

First, I use a mfs filesystem for my Firefox cache, here is the line in /etc/fstab

/dev/sd3b	   /home/solene/.cache/mozilla mfs rw,-s400M,noatime,nosuid,nodev 1 0

This means I have a 400 MB partition using system memory; it's super fast but limited in size. tmpfs is disabled in the default kernel because it may have issues and is not maintained well enough, so I stick with mfs which is available out of the box. (tmpfs is faster and only uses memory when storing files, while mfs reserves the whole memory chunk up front.)

The script §

We will write /etc/rc.d/persistency with the following content: a simple script that, when it receives the "stop" command, stores every mfs mountpoint found in /etc/fstab as a tgz file under /var/persistency. It will also restore the files to the right place when receiving the "start" command.

#!/bin/ksh

STORAGE=/var/persistency/

if [[ "$1" == "start" ]]
then
    install -d -m 700 $STORAGE
    for mountpoint in $(awk '/ mfs / { print $2 }' /etc/fstab)
    do
        tar_name="$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
        tar_path="${STORAGE}/${tar_name}"
        test -f ${tar_path}
        if [ $? -eq 0 ]
        then
            cd $mountpoint
            if [ $? -eq 0 ]
            then
                tar xzfp ${tar_path} && rm ${tar_path}
            fi
        fi
    done
fi

if [[ "$1" == "stop" ]]
then
    install -d -m 700 $STORAGE
    for mountpoint in $(awk '/ mfs / { print $2 }' /etc/fstab)
    do
        tar_name="$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
        cd $mountpoint
        if [ $? -eq 0 ]
        then
            tar czf ${STORAGE}/${tar_name} .
        fi
    done
fi

All we need to do now is to use "rcctl enable persistency" so it will be run with start/stop at boot/shutdown times.
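
A minimal sketch to enable it and do a first manual run:

chmod +x /etc/rc.d/persistency
rcctl enable persistency

# manual test: this should create one tgz per mfs mountpoint
/etc/rc.d/persistency stop
ls /var/persistency/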

Conclusion §

Now I'll be able to carry my Firefox cache across reboots while keeping it in mfs.

  • Beware! Using a mfs for a cache can lead to a full filesystem because it's never emptied; I expect to run into a full mfs filesystem after a week or two.
  • Beware 2! If the system crashes, the mfs data will be lost. The script removes the archives at boot after using them; you could change the script to remove them just before creating the newer archives upon stop, so at least you could recover the latest known version, but it's absolutely not a backup. mfs data is volatile and I just want to save it softly for performance purposes.

What are the VPN available on OpenBSD

Written by Solène, on 11 December 2021.
Tags: #openbsd #vpn

Comments on Fediverse/Mastodon

Introduction §

I wanted to write this text for some time: a list of VPNs with encryption that can be used on OpenBSD. I really don't plan to write about all of them, but I thought it was important to show the choices available when you want to create a VPN between two peers/sites.

VPN §

VPN is an acronym for Virtual Private Network: the concept of creating a network relying on a virtual layer like IP to connect computers, while a regular network uses a physical layer like Ethernet cable, Wi-Fi or light.

There are different VPN implementations; some are old, some are new. They have pros and cons because they were designed for various purposes. This is a list of VPN protocols supported by OpenBSD (using base or packages).

OpenVPN §

Certainly the most known, it's free and open source and is widespread.

Pros:

  • works with tun or tap interfaces. A tun device is a virtual network interface using IP, while a tap device is a virtual network interface passing Ethernet, which can be used to interconnect Ethernet networks across the internet (allowing remote dhcp or device discovery)
  • secure because it uses SSL; if the SSL library is trusted then OpenVPN can be trusted
  • can work with TCP or UDP, which allows setups such as using TCP/443 or UDP/53 to try to bypass local restrictions
  • flexible with regard to version differences allowed between client and server; it's rare to have an incompatible client

Cons:

  • certificate management isn't straightforward for the initial setup

WireGuard §

A recent VPN protocol joined the party with an interesting approach. It's supported by OpenBSD base system using ifconfig.

Pros:

  • the connection is stateless, so if your IP changes (when switching networks for example) or you experience network loss, you don't need to renegotiate the connection every time this happens, making the connection really resilient
  • setup is easy because it only requires exchanging public keys between the clients

Cons:

  • the crypto choice is very limited, and in case of evolution older clients may have issues connecting (this is a con for deployment but may be considered a good thing for security)

OpenBSD ifconfig man page anchored to WireGuard section

Examples of wg interfaces setup

SSH §

SSH is known for being a secure way to access a remote shell but it can also be used to create a VPN with a tun interface. This is not the best VPN solution available but at least it doesn't require much software and could be enough for some users.

Pros:

  • everyone has ssh

Cons:

  • performance is not great
  • documentation about the -w flag used for creating a VPN is rather sparse (see the sketch below)
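
Here is a minimal sketch of an SSH based tunnel; the host name and addresses are hypothetical, the server needs "PermitTunnel yes" in sshd_config and root is required on both ends to create the tun devices:

# on the client: request tun0 on both sides, keep this session open
ssh -w 0:0 root@server.example.com

# then configure the point-to-point addresses
# on the client:  ifconfig tun0 10.0.0.2 10.0.0.1
# on the server:  ifconfig tun0 10.0.0.1 10.0.0.2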

mlvpn §

mlvpn is a piece of software to aggregate links through VPN technology.

Pros:

  • it's a simple way to aggregate links client side and NAT from the server

Cons:

  • it's partly obsolete because the MPTCP protocol does the same thing a lot better (but OpenBSD doesn't do MPTCP)
  • it doesn't work very well when using different kinds of internet links (DSL/4G/fiber/modem)

IPsec §

IPsec is handled with iked in the base system or with strongswan from ports. This is the most used VPN protocol, and it's reliable.

Pros:

  • most network equipment know how to do IPsec
  • it works

Cons:

  • it's often complicated to debug
  • older compatibility often means you have to downgrade security to make the VPN work, instead of saying it's not possible and asking the other peer to upgrade

OpenBSD FAQ about VPN

Tinc §

Meshed VPN that works without a central server, this is meant to be robust and reliable even if some peers are down.

Pros:

  • allow clients to communicate between themselves

Cons:

  • it doesn't use a standardized protocol (it's not THAT bad)

Note that Tailscale is a solution to create something similar using WireGuard.

Dsvpn §

Pros:

  • works on TCP so it's easier to bypass filtering
  • easy to setup

Cons:

  • small and recent project; one could say it has fewer "eyes" reading the code, so security may be hazardous (the crypto should be fine because it uses common crypto)

Openconnect §

I never heard of it before; I found it in the ports tree while writing this text. There is an openconnect package to act as a client and ocserv to act as a server.

Pros:

  • it can use TCP to try to bypass filtering through TCP/443 but can fall back to UDP for best performance

Cons:

  • the open source implementation (server) seems minimalist

gre §

gre is a special device on OpenBSD to create a VPN without encryption; it's recommended to run it over IPsec. I won't cover it more because I'm focusing on VPNs with encryption.

gre interface man page

Conclusion §

If you never used a VPN, I'd say OpenVPN is a good choice, it's versatile and it can easily bypass restrictions if you run it on port TCP/443.

I personally use WireGuard on my phone to reach my emails; because of WireGuard's stateless protocol, the VPN doesn't drain the battery to maintain the connection and doesn't have to renegotiate every time the phone gets Internet access.

Port of the week: cozy

Written by Solène, on 09 December 2021.
Tags: #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

The Port of the week for this end of 2021 is Cozy, a GTK audio book player. There are currently not many alternatives beyond regular audio players if you want to listen to audio books.

Cozy project website

How to install §

On OpenBSD, I imported cozy in December 2021, so it will be available from OpenBSD 7.1 or right now in -current; a simple "pkg_add cozy" is enough to install it.

On Linux, there is a flatpak package if your distribution doesn't provide a package.

Features §

Cozy provides a few features making it more interesting than a regular music player:

  • keep track of your advancement of each book
  • playback speed can be changed if you want to listen faster (or slower)
  • automatic rewind can be configured when you resume playing, it's useful when you need to pause when disturbed and you want to resume the playback
  • sleep timer if you want playback to stop after some time
  • the UI is easy to use and nice
  • can make local copies of audio books from remote sources

Screenshot of Cozy ready to play an audio book

Nvidia card in eGPU and NixOS

Written by Solène, on 05 December 2021.
Tags: #linux #games #nixos #egpu

Comments on Fediverse/Mastodon

Updates §

  • 2022-01-02: add entry about specialization and how to use the eGPU as a display device

Introduction §

I previously wrote about using an eGPU on Gentoo Linux. It was working when using the eGPU display but I never got it to work for accelerating games using the laptop display.

Now, I'm back on NixOS and I got it to work!

What is it about? §

My laptop has a thunderbolt connector and I'm using a Razer Core X external GPU case that is connected to the laptop using a thunderbolt cable. This allows using an external "real" GPU on a laptop, but it has performance trade-offs and, on Linux, also compatibility issues.

There are three ways to use the nvidia eGPU:

- run the nvidia driver and use it as a normal card with its own display connected to the GPU, not always practical with a laptop

- use optirun / primerun to run programs within a virtual X server on that GPU and then display it on the X server (very clunky, originally created for Nvidia Optimus laptop)

- use Nvidia offloading module (it seems recent and I learned about it very recently)

The first case is easy, just install nvidia driver and use the right card, it should work on any setup. This is the setup giving best performance.

The most complicated setup is to use the eGPU to render what's displayed on the laptop, meaning the video signal has to come back through the thunderbolt cable, reducing the available bandwidth.

Nvidia offloading §

Nvidia did some work in their proprietary driver to allow a program to have its OpenGL/Vulkan calls done on a GPU that is not the one used for the display. This allows dropping optirun/primerun for this use case, which is good because they added a performance penalty, a complicated setup and many problems.

Official documentation about offloading with nvidia driver

NixOS §

I really love NixOS and for writing articles it's so awesome, because instead of a set of instructions depending on conditions, I only have to share the piece of config required.

This is the bits to add to your /etc/nixos/configuration.nix file and then rebuild system:

hardware.nvidia.modesetting.enable = true;
hardware.nvidia.prime.sync.allowExternalGpu = true;
hardware.nvidia.prime.offload.enable = true;
hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
services.xserver.videoDrivers = ["nvidia" ];

A few notes about the previous chunk of config:

- only add nvidia to the list of video drivers; at first I was also adding modesetting, but this was creating trouble

- the PCI bus ID can be found with lspci, it has to be translated to decimal; here my nvidia id is 10:0:0 but in lspci it's 0a:00:00, 0a being 10 in hexadecimal

NixOS wiki about nvidia offload mode

How to use it §

The use of offloading is controlled by environment variables. What's pretty cool is that if you didn't connect the eGPU, it will still work (with integrated GPU).

Running a command §

We can use glxinfo to be sure it's working, add the environment as a prefix:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo
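
To verify which GPU actually renders, you can compare the renderer strings:

# should print the nvidia card when offloading is used
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"

# should print the integrated GPU
glxinfo | grep "OpenGL renderer"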

In Steam §

Modify the command line of each game you want to run with the eGPU (it's tedious), by:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command%

In Lutris §

Lutris has a per-game or per-runner setting named "Enable Nvidia offloading", you just have to enable it.

Advanced usage / boot specialisation §

Previously I only explained how to use the laptop screen and the eGPU as a discrete GPU (not doing display). For some reason, I've struggled a LOT to be able to use the eGPU display (which gives more performance because it hits fewer thunderbolt limitations).

I've discovered the NixOS "specialisation" feature, which allows adding an alternative boot entry to start the system with slight changes; in this case, this will create a new "external-display" entry for using the eGPU as the primary display device:

  hardware.nvidia.modesetting.enable = true;
  hardware.nvidia.prime.sync.allowExternalGpu = true;
  hardware.nvidia.prime.offload.enable = true;
  hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
  hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
  services.xserver.videoDrivers = ["nvidia" ];

  # external display on the eGPU card
  # otherwise it's discrete mode using laptop screen
  specialisation = {
    external-display.configuration = {
        system.nixos.tags = [ "external-display" ];
        hardware.nvidia.modesetting.enable = pkgs.lib.mkForce false;
        hardware.nvidia.prime.offload.enable = pkgs.lib.mkForce false;
        hardware.nvidia.powerManagement.enable = pkgs.lib.mkForce false;
        services.xserver.config = pkgs.lib.mkOverride 0
  ''
Section "Module"
    Load           "modesetting"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    BusID          "10:0:0"
    Option         "AllowEmptyInitialConfiguration"
    Option         "AllowExternalGpus" "True"
EndSection
'';
    };
  };

With this setup, the default boot is the offloading mode but I can choose "external-display" to use my nvidia card and the screen attached to it, it's very convenient.

I had to force the xserver configuration file because the one built by NixOS was not working for me.

Using awk to pretty-display OpenBSD packages update changes

Written by Solène, on 04 December 2021.
Tags: #openbsd #awk

Comments on Fediverse/Mastodon

Introduction §

You use OpenBSD, and when you upgrade your packages you often wonder which ones are simple rebuilds and which ones are real version updates. Package updates are logged in /var/log/messages, and using awk it's easy to produce some kind of report.

Command line §

The typical update line will display the package name, its version, a "->" and the newer version of the installed package. By verifying if the newer version is different from the original version, we can report updated packages.

awk is already installed in OpenBSD, so you can run this command in your terminal without any other requirement.

awk -F '-' '/Added/ && /->/ { sub(">","",$0) ; if( $(NF-1) != $NF ) { $NF=" => "$NF ; print }}' /var/log/messages
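
If you prefer, here is the same one-liner broken out with comments, the logic is unchanged:

awk -F '-' '
  /Added/ && /->/ {         # only pkg_add "Added" lines containing "->"
    sub(">", "", $0)        # drop the ">" so the fields re-split on "-"
    if ($(NF-1) != $NF) {   # the last two fields differ: real version change
      $NF = " => " $NF      # highlight the new version
      print
    }
  }' /var/log/messages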

The output should look like this (after a pkg_add -u):

Dec  4 12:27:45 daru pkg_add: Added quirks 4.86  => 4.87
Dec  4 13:01:01 daru pkg_add: Added cataclysm dda 0.F.2v0  => 0.F.3p0v0
Dec  4 13:01:05 daru pkg_add: Added ccache 4.5  => 4.5.1
Dec  4 13:04:47 daru pkg_add: Added nss 3.72  => 3.73
Dec  4 13:07:43 daru pkg_add: Added libexif 0.6.23p0  => 0.6.24
Dec  4 13:40:41 daru pkg_add: Added kakoune 2021.08.28  => 2021.11.08
Dec  4 13:43:27 daru pkg_add: Added kdeconnect kde 1.4.1  => 21.08.3
Dec  4 13:46:16 daru pkg_add: Added libinotify 20180201  => 20211018
Dec  4 13:51:42 daru pkg_add: Added libreoffice 7.2.2.2p0v0  => 7.2.3.2v0
Dec  4 13:52:37 daru pkg_add: Added mousepad 0.5.7  => 0.5.8
Dec  4 13:52:50 daru pkg_add: Added munin node 2.0.68  => 2.0.69
Dec  4 13:53:01 daru pkg_add: Added munin server 2.0.68  => 2.0.69
Dec  4 13:53:14 daru pkg_add: Added neomutt 20211029p0 gpgme sasl 20211029p0 gpgme  => sasl
Dec  4 13:53:20 daru pkg_add: Added nethack 3.6.6p0 no_x11 3.6.6p0  => no_x11
Dec  4 13:58:53 daru pkg_add: Added ristretto 0.12.0  => 0.12.1
Dec  4 14:01:07 daru pkg_add: Added rust 1.56.1  => 1.57.0
Dec  4 14:02:33 daru pkg_add: Added sysclean 2.9  => 3.0
Dec  4 14:03:57 daru pkg_add: Added uget 2.0.11p4  => 2.2.2p0
Dec  4 14:04:35 daru pkg_add: Added w3m 0.5.3pl20210102p0 image 0.5.3pl20210102p0  => image
Dec  4 14:05:49 daru pkg_add: Added yt dlp 2021.11.10.1  => 2021.12.01

Limitations §

The command seems to mangle the separators when displaying the result, and it doesn't work well with flavored packages, which will always be shown as updated.

At least it's a good start, it requires a bit more polishing but that's already useful enough for me.

The state of Steam on OpenBSD

Written by Solène, on 01 December 2021.
Tags: #openbsd #gaming #steam

Comments on Fediverse/Mastodon

Introduction §

There is a very common question within the OpenBSD community, mostly from newcomers: "How can I install Steam on OpenBSD?".

The answer is: You can't, there is no way, this is impossible, period.

Why? §

Steam is a closed source program; the fact that it's now also available on Linux doesn't mean it runs on OpenBSD. The Linux Steam version is compiled for Linux, and without the sources we can't port it to OpenBSD.

Even if Steam were able to be installed and launched, games are not made for OpenBSD and wouldn't work either.

On FreeBSD it may be possible to install the Windows Steam using Wine, but Wine is not available on OpenBSD because it requires some specific kernel memory management we don't want to implement for security reasons (I don't have the whole story). FreeBSD also has a Linux compatibility mode to run Linux binaries, allowing the use of programs compiled for Linux; this Linux emulation layer was dropped in OpenBSD a few years ago because it was old and unmaintained, bringing more issues than benefits.

So, you can't install Steam or use it on OpenBSD. If you need Steam, use a supported operating system.

I wanted to make an article about this in the hope that my text will be well referenced in search engines, to help people looking for Steam on OpenBSD by giving them a reliable answer.

Nethack: end of Sery the Tourist

Written by Solène, on 27 November 2021.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

Hello, if you remember my previous publications about Nethack and my character "Sery the tourist", I have bad news. On OpenBSD, nethack saves are stored in /usr/local/lib/nethackdir-3.6.0/logfile and obviously I didn't save this when changing computer a few months ago.

I'm very sad about this data loss because I was really enjoying telling the story of the character while playing. Sery reached the 7th floor as a Tourist, which is incredible given all the nethack runs I've done, and this one was going really well.

I don't know if you readers enjoyed that kind of content, if so please tell me so I may start a new game and write about it.

As an end, let's say Sery stayed too long in 7th floor and the Langoliers came to eat the Time of her reality.

Langoliers on Stephen King wiki fandom

Simple network dashboard with vnstat

Written by Solène, on 25 November 2021.
Tags: #openbsd #network

Comments on Fediverse/Mastodon

Introduction §

Hi! If you run a server or a router, you may want to have a nice view of the bandwidth usage and statistics. This is easy and quick to achieve using vnstat software. It will gather data regularly from network interfaces and store it in rrd files, it's very efficient and easy to use, and its companion program vnstati can generate pictures, perfect for easy visualization.

My simple router network dashboard with vnstat

vnstat project homepage

Setup (on OpenBSD) §

Simply install vnstat and vnstati packages with pkg_add. All the network interfaces will be added to vnstatd databases to be monitored.

# pkg_add vnstat vnstati
# rcctl enable vnstatd
# rcctl start vnstatd
# install -d -o _vnstat /var/www/htdocs/dashboard

Create a script in /var/www/htdocs/dashboard and make it executable:

#!/bin/sh

cd /var/www/htdocs/dashboard/ || exit 1

# last 60 entries of 5 minutes stats
vnstati --fiveminutes 60 -o 5.png

# vertical summary of last two days
# refresh only after 60 minutes
vnstati -c 60 -vs -o vs.png

# daily stats for 14 last days
# refresh only after 60 minutes
vnstati -c 60 --days 14 -o d.png

# monthly stats for last 5 months
# refresh only after 300 minutes
vnstati -c 300 --months 5 -o m.png

and create a simple index.html file to display pictures:

<html>
    <body>
        <div style="display: inline-block;">
                <img src="vs.png" /><br />
                <img src="d.png" /><br />
                <img src="m.png" /><br />
        </div>
        <img src="5.png" /><br />
    </body>
</html>

Add a cron as root to run the script every 10 minutes using _vnstat user:

# add /usr/local/bin to $PATH to avoid issues finding vnstat
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

*/10  *  *  *  * -ns su -m _vnstat -c "/var/www/htdocs/dashboard/vnstat.sh"

My personal crontab runs only from 8h to 23h because I will never look at my dashboard while I'm sleeping so I don't need to keep it updated, just replace * by 8-23 for the hour field.

Http server §

Obviously you need to serve /var/www/htdocs/dashboard/ from your http server, I won't cover this step in the article.
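
If you use OpenBSD's httpd, a minimal sketch could look like this (the server name is hypothetical; remember httpd is chrooted to /var/www):

cat >> /etc/httpd.conf <<'EOF'
server "dashboard.example.com" {
	listen on * port 80
	root "/htdocs/dashboard"
}
EOF
rcctl enable httpd
rcctl restart httpd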

Conclusion §

Vnstat is fast, light and easy to use, but yet it produces nice results.

As an extra, you can run the vnstat commands (without the i) and use the raw text output to build a pure text dashboard if you don't want to use pictures (or http).

OpenBSD and Linux comparison: data transfer benchmark

Written by Solène, on 14 November 2021.
Tags: #openbsd #network

Comments on Fediverse/Mastodon

Introduction §

I had a high suspicion about something, but today I made measurements. My feeling is that downloading data from OpenBSD uses more "upload data" than on other OSes.

I originally thought about this issue when I found that using OpenVPN on OpenBSD was limiting my download speed because I was reaching the upload limit of my DSL line, while it was fine on Linux. Since then I've been thinking that OpenBSD was sending more outgoing data, but I never measured anything before.

Testing protocol §

Now that I have an OpenBSD router, it was easy to take measurements with a match rule and a label. I'll be downloading a specific file from a specific server a few times with each OS, so I'm adding a rule matching this connection.

match proto tcp from 10.42.42.32 to 145.238.169.11 label benchmark

Then, I downloaded this file three times per OS, resetting the counters after each download and saving the results of the "pfctl -s labels" command.
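
For reference, the counters can be inspected and reset like this (a small sketch):

# show the per-label counters
pfctl -s labels

# zero the per-rule statistics between two runs
pfctl -z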

OpenBSD comp70.tgz file from an OpenBSD mirror

The variance of the results per OS was very low; I used the average of each column as the final result per OS.

Raw results §

OS        total packets    total bytes    packets OUT    bytes OUT    packets IN    bytes IN
-----     -------------    -----------    -----------    ---------    ----------    --------
OpenBSD   175348           158731602      72068          3824812      10328         154906790
OpenBSD   175770           158789838      72486          3877048      10328         154912790
OpenBSD   176286           158853778      72994          3928988      10329         154924790
Linux     154382           157607418      51118          2724628      10326         154882790
Linux     154192           157596714      50928          2713924      10326         154882790
Linux     153990           157584882      50728          2705092      10326         154879790

About the results §

A quick look will show that OpenBSD sent +42% OUT packets compared to Linux and also +42% OUT bytes, meanwhile the OpenBSD/Linux IN bytes ratio is nearly identical (100.02%).

Chart showing the IN and OUT packets of Linux and OpenBSD side by side

Conclusion §

I'm not sure what to conclude except that now, I'm sure there is something here requiring investigation.

How I ended up liking GNOME

Written by Solène, on 10 November 2021.
Tags: #life #unix #gnome

Comments on Fediverse/Mastodon

Introduction §

Hi! It's been a while without much activity on my blog; the reason is that I accidentally stabbed through my right index finger with a knife. The injury was so bad that I could barely use my right hand, because I couldn't move the finger at all without pain. So I've been stuck with only my left hand for a month now. Good news, it's finally getting better :)

Which leads me to the topic of this article: why I ended up liking GNOME!

Why I didn't use GNOME §

I will first start with why I didn't use it before. I like to try everything all the time, I like disruption, I like having a hostile (desktop/shell/computer) environment to stay sharp and not get stuck on ideas.

My setup was using Fvwm or Stumpwm, mostly keyboard driven, with many virtual desktops to spatially group different activities. However, with an injured hand, I faced a big issue: most of my key bindings were meant for two hands, and it seemed too weird to change the bindings to work with one hand.

I tried to adapt using only one hand, but I got poor results, and using the cursor was not very efficient because stumpwm is hostile to the cursor and fvwm is not really great for this either.

The road to GNOME §

With only one hand to use my computer, I found the awesome program ibus-typing-booster to help me type by auto-completing words (a bit like touchscreen phones do); it worked out of the box with GNOME thanks to the good ibus integration. I used GNOME to debug the package, but ended up liking it in my current condition.

How do I like it now, when I was complaining about it a few months ago because I found it very confusing? Because it's easy to use and spares my hands a lot of movement, absolutely.

  • The activity menu is easy to browse, the icons are big, the dock is big. I've been using a trackball with my left hand instead of the usual right hand; aiming at a small task bar was super hard, so I was happy to have big icons everywhere, only when I wanted them
  • I actually always liked alt+tab for windows and alt+² (on my keyboard the key above TAB is ², it must be ~ on qwerty keyboards) for switching between windows of the same kind
  • alt+tab actually displays everything available (it's not per virtual desktop)
  • I can easily view windows or move them between virtual desktops when pressing the "super" key

This can certainly be done in MATE or Xfce too without much work, but it's out of the box with GNOME. It's perfectly usable without knowing any keyboard shortcut.

Mixed feelings §

I'm pretty sure I'll return to my previous environment once my finger/hand is healed, because I have a better feeling with it and I find it more usable. But I have to thank the GNOME project for working on this desktop environment that is easy to use and quite accessible.

It's important to put things into perspective when dealing with desktop environments. GNOME may not be the most performant or ergonomic desktop, but it's accessible, easy to use, and forgiving to people who don't want to learn tons of key bindings or can't use them.

Conclusion §

There is a very recurrent question I see on IRC or forums: what's the best desktop environment/window manager? What are YOU using? I stopped having a bold opinion about this topic; I simply reply that there are many desktop environments because there are many kinds of people, and the person asking the question needs to find the right one to suit them.

Update (2021-11-11) §

Using the xfdashboard program and assigning it to the Super key allows mimicking the GNOME "activity" view in your favorite window manager: choosing windows, moving them between desktops, running applications. I think this can easily turn any window manager into something more accessible, or at least more "GNOME like".

What if Internet stops? How to rebuild an offline federated infrastructure using OpenBSD

Written by Solène, on 21 October 2021.
Tags: #openbsd #distributed #opensource #drp

Comments on Fediverse/Mastodon

Introduction §

What if we lose Internet tomorrow and we stop building computers? What would you want on your computer in the eventuality we would still have *some* power available to run it?

I find it to be an interesting exercise in the continuity of my old laptop challenge.

Bootstrapping §

My biggest point would be that my computer could be used to replicate itself to other computer owners, giving them the data so they can spread it again. Data copied over and over will be a lot more resilient than a single copy with a few local backups (local as in the same city at best, because there is no Internet).

Because most people's computers rely on the Internet for their data and would turn into useless bricks, I think everyone would be glad to be part of a useful infrastructure that can replicate and extend itself.

Essentials §

I would argue it's very useful to have computers, and the knowledge they can carry, even if we are short on electricity to run them. We would want scientific knowledge (medicine, chemistry, physics, mathematics) but also history and other topics in the long run. We would also require maps of the local region/country to make long-term plans and to help decisions and planning for building infrastructure (pipes, roads, lines). We would require software to display but also edit this data.

Here is a list of sources I would keep synced on my computer.

  • wikipedia dumps (by topics so it's lighter to distribute)
  • openstreetmap local maps
  • OpenBSD source code
  • OpenBSD ports distfiles
  • kiwix and openstreetmap android APK files

The wikipedia dumps in zim format are very practical for running an offline wikipedia; we would require some OpenBSD programs to make them work, but we would want more people to have them. Android tablets and phones are everywhere, small, and don't draw much battery, so I'd distribute the wikipedia dumps along with a kiwix APK file to view them without requiring a computer. Keeping the sources of the Android programs would be a wise decision too.

As for maps, we can download areas on openstreetmap and rework them with Qgis on OpenBSD and redistribute maps and a compatible viewer for Android devices with the OSMand~ free software app.

It would be important to keep the data set rather small, I think under 100 GB because it would be complicated to have a 500GB requirement for setting up a new machine that can re-propagate the data set.

If I ever needed to do that, the first step would be to make serious backups of the data set using multiple copies on hard drives that I would hand to different people. Once the propagation process is done, it matters less because I could still gather the data back from somewhere.

Kiwix compatible data sets (including Wikipedia)

Android Kiwix app on F-droid

Android OSMand~ app for OSM maps on F-droid

Why OpenBSD? §

I'd choose OpenBSD because it's a system I know well, but also because it's easy to hack on it to make changes in the kernel. If we ever need to connect a computer to an industrial machine, I'd rather try to port it on OpenBSD.

This is also true for the ports tree: with all the distfiles it's possible to rebuild packages for multiple architectures, allowing the use of older computers that are not amd64, but also easy patching of distfiles to fix issues or add new features. Carrying packages without their sources would be a huge mistake; you would have a set of binary blobs that can't evolve.

OpenBSD is also easy to install and it works fine most of the time. I'd imagine an automatic installation process from USB or even from PXE, and then sharing all the data so other people can propagate the installation and data again.

This would also work with another system of course; the point is to keep the sources of the system and of its packages, to be able to rebuild the system for older supported architectures, but also to be able to enhance and work on the sources for bug fixing and new features.

Distributing §

I think a very nice solution would be to use Git, there are plugins to handle binary data so the repository doesn't grow over time. Git is decentralized, you can get updates from someone who receives an update from someone else and git can also report if someone messed with the history.

We could imagine some well known places running a local server with a WiFi hotspot that can receive updates from someone allowed (using ssh+git) to push updates to a git repository. There could be repositories for various topics like: news, system updates, culture (music, videos, readings), maybe some kind of social network like twtxt. Anyone could come and sync their local git repository to get the news and updates, and be able to spread them again.
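
As a sketch of such a sync, with a hypothetical neighbour machine reachable over the local network:

# first time: copy a repository from a neighbour over ssh
git clone ssh://user@192.168.1.50/home/user/repos/news.git

# later, fetch whatever updates they received from someone else
cd news && git pull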

twtxt project github page

Conclusion §

This is a topic I often have in mind when I think about why we are using computers and what makes them useful. In this theoretical future, which is not "post-apocalyptic" but just one where something went wrong, we would have a LOT of computers that become useless. I just want to prove that computers can still be useful without the Internet, you just need to understand their genuine purpose.

I'd be interested in what others would do, please let me know if you want to write on that topic :)

Use fzf for ksh history search

Written by Solène, on 17 October 2021.
Tags: #openbsd #shell #ksh #fzf

Comments on Fediverse/Mastodon

Introduction §

fzf is a powerful tool to interactively select a line among data piped to stdin, a simple example is to pick a line in your shell history and it's my main fzf use.

fzf ships with bindings for bash, zsh or fish but doesn't provide anything for ksh, OpenBSD's default shell. I found a way to run it with Ctrl+R but it comes with a limitation!

This setup will run fzf to look up a history line when you press Ctrl+R, and it will run the selected line immediately, without letting you edit it first! /!\

Configuration §

In your interactive shell configuration file (it should be the one set in $ENV), add the following function and binding; it will rebind Ctrl+R to the fzf-histo function that will look into your shell history.

function fzf-histo {
    RES=$(fzf --tac --no-sort -e < $HISTFILE)
    test -n "$RES" || return
    eval "$RES"
}

bind -m ^R=fzf-histo^J

Reload your file or start a new shell, Ctrl+R should now run fzf for a more powerful history search. Don't forget to install the fzf package first (pkg_add fzf).

Typing faster with assistive technology

Written by Solène, on 16 October 2021.
Tags: #accessibility #a11y

Comments on Fediverse/Mastodon

Introduction §

This article is being written only using my left hand with the help of ibus-typing-booster program.

ibus-typing-booster project

The purpose of this tool is to assist the user by proposing words while typing, a bit like smartphones do. It can be trained with a dictionary or a text file, and it also learns from user input over time.

A package for OpenBSD is in the works.

Installation §

This program requires ibus to work; on Gnome it is already enabled, but in other environments some configuration is required. Because this may change over time and duplicating information is bad, I'll give the link for configuring ibus-typing-booster.

How to enable ibus-typing-booster

How to use §

Once you have setup ibus and ibus-typing-booster you should be able to switch from normal input to assisted input using "super"+space.

When you type with ibus-typing-booster enabled, with default settings, the input is underlined to show a suggestion can be triggered using the TAB key. Then, from a popup window you can pick a word by using TAB to cycle between the suggestions and pressing space to validate, or use the F key matching your choice number (F1 for first, F2 for second, etc.) and that's all.

Configuration §

There are many ways to configure it; suggestions can be shown inline while typing, which I think is more helpful when you type slowly and want a quick boost when the suggestion is correct. The suggestions popup can be vertical or horizontal, I personally prefer horizontal which is not the default. Colors and key bindings can be changed.

Performance §

While I type very fast when I have both my hands, using one hand requires me to look at the keyboard and make a lot of moves with my hand. This works fine and I can type reasonably fast, but it is extremely exhausting and painful for my hand. With ibus-typing-booster I can type full sentences with less effort, although a bit slower. However, this is a lot more comfortable than typing everything with one hand.

Conclusion §

This is an assistive technology easy to setup and that can be a life changer for disabled users who can make use of it.

This is not the first time I'm temporarily disabled with regard to using a keyboard; I previously tried a mirrored keyboard layout reverting keys when pressing caps lock, and also Dasher, which allows making words from simple movements such as moving the mouse cursor. I find this ibus plugin easier for the brain to integrate because I just type with my keyboard in the programs, while with Dasher I need to cut and paste content, and with a mirrored layout I need to focus on the layout change.

I am very happy with it.

Full WireGuard setup with OpenBSD

Written by Solène, on 09 October 2021.
Tags: #openbsd #wireguard #vpn

Comments on Fediverse/Mastodon

Introduction §

We want all our network traffic to go through a WireGuard VPN tunnel automatically, both WireGuard client and server are running OpenBSD, how to do that? While I thought it was simple at first, it soon became clear that the "default" part of the problem was not easy to solve, fortunately there are solutions.

This guide should work from OpenBSD 6.9.

pf.conf man page about NAT

WireGuard interface man page

ifconfig man page, WireGuard section

Setup §

For this setup I assume we have a server running OpenBSD with a public IP address (1.2.3.4 for the example) and an OpenBSD computer with Internet connectivity.

Because you want to use the WireGuard tunnel as the default route, you can't simply define a default route through WireGuard: that would prevent your interface from reaching the WireGuard endpoint, breaking the tunnel. We could play with the routing table by deleting the default route found on the interface, creating a new route to reach the WireGuard server and then creating a default route through WireGuard, but the whole process is fragile and there is no right place to trigger a script doing this.

Instead, you can assign the network interface used to access the Internet to rdomain 1, configure WireGuard to reach its remote peer through rdomain 1, and create a default route through WireGuard in rdomain 0. Quick explanation about rdomains: they are separate routing tables, the default is rdomain 0, but you can create new routing tables and run commands using a specific routing table with "route -T 1 exec ping perso.pw" to make a ping through rdomain 1.


    +-------------+
    |   server    | wg0: 192.168.10.1
    |             |---------------+
    +-------------+               |
           | public IP            |
           | 1.2.3.4              |
           |                      |
           |                      |
    /\/\/\/\/\/\/\                |WireGuard
    |  internet  |                |VPN
    \/\/\/\/\/\/\/                |
           |                      |
           |                      |
           |rdomain 1             |
    +-------------+               |
    |   computer  |---------------+
    +-------------+ wg0: 192.168.10.2
                    rdomain 0 (default)

Configuration §

The configuration process will be done in this order:

  1. create the WireGuard interface on your computer to get its public key
  2. create the WireGuard interface on the server to get its public key
  3. configure PF to enable NAT and enable IP forwarding
  4. reconfigure computer's WireGuard tunnel using server's public key
  5. time to test the tunnel
  6. make it default route

Our WireGuard server will accept connections on address 1.2.3.4 at the UDP port 4433, we will use the network 192.168.10.0/24 for the VPN, the server IP on WireGuard will be 192.168.10.1 and this will be our future default route.

On your computer §

We will make a simple script to generate the configuration file, you can easily understand what is being done. Replace "1.2.3.4 4433" by your IP and UDP port to match your setup.

PRIVKEY=$(openssl rand -base64 32)
cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer wgendpoint 1.2.3.4 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
up
EOF

# start interface so you can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0

PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the remote peer"

On the server §

WireGuard §

Like we did on the computer, we will use a script to configure the server. It's important to get the PUBKEY displayed in the previous step.

PUBKEY=PASTE_PUBKEY_HERE
PRIVKEY=$(openssl rand -base64 32)

cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer $PUBKEY wgaip 192.168.10.0/24
inet 192.168.10.1/24
wgport 4433
up
EOF

# start interface so you can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0

PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the local peer"

Keep the public key for next step.

Firewall §

You want to enable NAT so you can reach the Internet through the server using WireGuard, edit /etc/pf.conf to add the following line (after the skip lines):

pass out quick on egress from wg0:network to any nat-to (egress)

Reload with "pfctl -f /etc/pf.conf".

NOTE: if you block all incoming traffic by default, you need to open UDP port 4433. You will also need to either skip the firewall on wg0 or configure PF to open what you need. This is beyond the scope of this guide.
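
As a minimal sketch, assuming a default block policy, a rule like this one in /etc/pf.conf would let the WireGuard handshake reach the server:

pass in on egress inet proto udp to (egress) port 4433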

IP forwarding §

We need to enable IP forwarding because we will pass packets from an interface to another, this is done with "sysctl net.inet.ip.forwarding=1" as root. To make it persistent across reboot, add "net.inet.ip.forwarding=1" to /etc/sysctl.conf (you may have to create the file).

From now, the server should be ready.

On your computer §

Edit /etc/hostname.wg0 and paste the public key between "wgpeer" and "wgaip": the public key is the parameter of wgpeer. Then run "sh /etc/netstart wg0" to reconfigure your wg0 tunnel.

After this step, you should be able to ping 192.168.10.1 from your computer (and 192.168.10.2 from the server). If not, please double check the WireGuard and PF configurations on both sides.

Default route §

This simple setup for the default route will truly make WireGuard your default route. You have to understand that services listening on all interfaces will only attach to the WireGuard interface, because it holds the only address in rdomain 0; if needed, you can run a service in a specific routing table as explained in the rc.d man page.

Replace the line "up" with the following:

wgrtable 1
up
!route add -net default 192.168.10.1

Your configuration file should look like this:

wgkey YOUR_KEY
wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
wgrtable 1
up
!route add -net default 192.168.10.1

Now, add "rdomain 1" to your network interface used to reach the Internet, in my setup it's /etc/hostname.iwn0 and it looks like this.

join network wpakey superprivatekey
join home wpakey notsuperprivatekey
rdomain 1
up
autoconf

Now, you can restart the network with "sh /etc/netstart" and all the network traffic should pass through the WireGuard tunnel.
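
You can quickly verify the result: the default route of rdomain 0 should now point to the WireGuard peer, while the physical interface in rdomain 1 should still reach the endpoint directly.

# the default route in rdomain 0 should be 192.168.10.1 on wg0
route -n show -inet | grep default

# the endpoint must stay reachable through rdomain 1
route -T 1 exec ping -c 1 1.2.3.4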

Handling DNS §

Because you may use a nameserver in /etc/resolv.conf that was provided by your local network, it may not be reachable anymore. I highly recommend using unwind (in every case anyway) to have a local resolver, or modifying /etc/resolv.conf to use a public resolver.

unwind can be enabled with "rcctl enable unwind" and "rcctl start unwind"; from OpenBSD 7.0 you should have resolvd running by default, which will rewrite /etc/resolv.conf once unwind is started, otherwise you need to write "nameserver 127.0.0.1" in /etc/resolv.conf.

Bypass VPN §

If for some reason you need to run a program without routing its traffic through the VPN, it is possible. The following command will run firefox using routing table 1; however, depending on the content of your /etc/resolv.conf you may have issues resolving names (because 127.0.0.1 is only reachable in rdomain 0!). A simple fix, if you really need to do this often, is to use a public resolver.

route -T 1 exec firefox

route man page about exec command

WireGuard behind a NAT §

If you are behind a NAT you may need to use the KeepAlive option on your WireGuard tunnel to keep it working. Just add "wgpka 20" to enable a KeepAlive packet every 20 seconds in /etc/hostname.wg0 like this:

wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0 wgpka 20
[....]

ifconfig man page explaining wgpka parameter

Conclusion §

WireGuard is easy to deploy, but making it the default network interface adds some complexity. This is usually simpler with solutions like OpenVPN, because the OpenVPN daemon can automatically do the magic of rewriting the routes (although it doesn't do it very well), and it won't prevent non-VPN access until the VPN is connected.

Port of the week: foliate

Written by Solène, on 04 October 2021.
Tags: #openbsd #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

Today I wanted to share with you the program Foliate, a GTK ebook reader with interesting features. There aren't many epub readers available on OpenBSD (or on Linux, for that matter).

Foliate project website

How to install §

On OpenBSD, a simple "pkg_add foliate" and you are done.

Features §

Foliate supports multiple features such as:

  • bookmarks
  • table of content
  • annotations in the document (including import / export to share and save your annotations)
  • font and rendering: you can choose font, margins, spacing
  • color scheme: Foliate comes with a dozen of color scheme and can be customized
  • library management: all your books available in one place with the reading progress percentage of each

Port of the week §

Because it's easy to use, full of features, and works very well compared to alternatives, this port is nominated for the port of the week!

Story of making the OpenBSD Webzine

Written by Solène, on 01 October 2021.
Tags: #openbsd #webzine

Comments on Fediverse/Mastodon

Introduction §

Hello readers! I just started a Webzine dedicated to the OpenBSD project and community. I'd like to tell you the process of its creation.

The OpenBSD Webzine

Idea §

A week ago I joked on a French OpenBSD IRC channel that it would be nice to make a webzine gathering quotes and links about OpenBSD; I didn't think it would become real a few days later. OpenBSD has a small community and even if we can get some news from Mastodon, Twitter, watching new commits, or blog articles, we had nothing gathering all of that. I can't imagine most OpenBSD users being able or willing to follow everything happening in the project, so I thought a webzine targeting average OpenBSD users would be fine. The ultimate accomplishment would be that when we release a new webzine issue, readers would enjoy reading it with a nice cup of their favorite drink, as if it were their favorite hobby 'zine.

Technology doesn't matter §

At first I wanted the webzine to look like a newspaper, so I tried to use Scribus (used to make magazines and serious stuff) and made a mockup to see what it would look like. Then I shared it with a small French community and some people suggested I should use LaTeX for the job; I replied it was not great for handling the layout exactly as I wanted, but I challenged one person to show me something done with LaTeX that looks better than my Scribus mockup.

One hour later, that person came back with a PDF generated from LaTeX with the same content, and it looked great! I like LaTeX but I couldn't believe it could be used efficiently for this job. I immediately made changes to my Scribus version to improve it, taking the LaTeX PDF version as a model, and I released a new version. At that time, I had two PDFs generated from two different tools.

A few people suggested I make a version using mdoc; I took it as a joke at first, but because boredom is a powerful driving force I decided to reuse the content of my mockup to do another mockup with mdoc. I chose to export it to html and had to write a simple CSS style sheet to make it look nice, but ultimately the mdoc export had some issues and required applying changes to the output with sed so the HTML rendering wouldn't look like a man page misused for something else.

Anyway, I got three mockups of the same webzine example and decided to use Scribus to export its version as an SVG file and embed it in an HTML file, allowing web browsers to display it natively.

I asked the Mastodon community (thank you very much to everyone who participated!) which version they liked the most and I got many replies: the mdoc html version was the most preferred with 41%, while 32% liked the SVG-in-html version and 27% the PDF. The results were very surprising! The version I liked the least was the most preferred, but there were reasons underneath.

The PDF version was not available in web browsers (or at least didn't display natively) and some readers didn't enjoy that. As for the SVG version, it didn't work well on mobile phones, and both versions didn't work at all in console web clients (links, lynx, w3m). There were also accessibility concerns with the PDF and SVG for screen reader / text-to-speech users, and I wanted the webzine to be available for everyone, so both formats were a no-go.

Ultimately, I decided the best way would be to publish the webzine as HTML if I wanted it to look nice and be accessible on any device for any user. I'm not a huge fan of the web and html, but it was the best choice for the readers. From this point, I started working with a few people (still from the same French OpenBSD community) to decide how to make it as HTML; from this moment I wasn't alone anymore in the project.

In the end, each issue is written in html "by hand" because it just works and doesn't require an extra complexity layer. Simple html is not harder than markdown, LaTeX or some other format, because it doesn't require extra tweaks after conversion.

Community §

I created a git repository on tildegit.org, where I already host some projects, so we could work on this project as a team. Requirements and what we wanted to do were getting refined a bit more every day. I designed a simplistic framework in shell that would suit our needs. It wasn't long before we got the framework to generate html pages; some style changes happened all along the development and I think this will still happen regularly in the near future. We had a nice base to start writing content.

We had to choose a license, contribution processes, who does what, etc. Fun times, I enjoyed this a lot. Our goal was to make a webzine that would work everywhere, without JS, with a dark mode and still usable on phones or console clients, so we regularly checked all of that and reported issues that were getting fixed really quickly.

Simple framework §

Let's talk a bit about the website framework. There is a simple hierarchy of directories: one directory per issue, a Makefile to build everything, and parts that are common to every generated page (containing the style, html header and footer). Each issue is made of a lot of files starting with a number, so when a page is generated by concatenating all the parts we keep the numbering order.

It may not be optimized CPU wise, but concatenating parts allows reusing the common parts (mainly header and footer) and also working on smaller files: each file of an issue represents a section of it (Quote, Going further, Headlines, etc.).
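
As a rough illustration of the idea (the real framework differs, directory and file names here are made up), building one issue page boils down to something like this:

# glue the numbered sections of an issue between the shared header and footer
cat common/header.html issue-3/[0-9]*.html common/footer.html > output/issue-3.html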

Conclusion §

This is a fantastic journey, we are starting to build a solid team for the webzine. Everyone is allowed to contribute. My idea was to give every reader a small slice of the OpenBSD project's life every so often and I think we are on a good track now. I'd like to thank all the people from the https://openbsd.fr.eu.org/ community who joined me at the early stages to make this project great.

Git repository of the OpenBSD Webzine (if you want to contribute)

Measuring power efficiency of a CPU frequency scheduler on OpenBSD

Written by Solène, on 26 September 2021.
Tags: #openbsd #power #efficiency

Comments on Fediverse/Mastodon

Introduction §

I started to work on the OpenBSD code dealing with CPU frequency scaling. The current automatic logic is a trade-off between okay performance and okay battery life. I'd like the auto policy to behave differently on battery and on mains power (for laptops), to improve battery life for nomad users and performance for people connected to the grid.

I've been able to make rough changes to produce this effect, but before going further I wanted to see whether I got any improvement in battery life, and if so to which extent.

In the following sections of the article I will use the Wh unit, meaning watt-hour. It's a unit measuring a quantity of energy; because power draw is absolutely not constant, we average the usage and scale it to one hour so it's easy to compare. An oven drawing 1 kW when on and being on for an hour will use 1 kWh (one kilowatt-hour), while an electric heater drawing 2 kW when on and turned on for 30 minutes will use 1 kWh too.

Kilowatt Hour explanation from Wikipedia

How to understand power usage for nomad users §

While one may think that the faster we do a task, the less time the system stays up and the less battery we use, it's not entirely true for laptops.

There are two kinds of load on a system: interactive and non-interactive. In non-interactive mode, let's imagine the user powers on the computer, runs a job, expects it to be finished as soon as possible and then shuts down the computer. This is (I think) highly unusual for people using a laptop on battery. Most of the time, users with a laptop will want their computer to stay up as long as possible without having to charge.

In the scenario I will call interactive, the computer may be up with a lot of idle time where the human operator is slowly typing, thinking or reading. Usually one doesn't power off the computer and power it on again while sitting in front of it. So, for a given task running while the main task is "staying up", finishing faster may not be more efficient battery-wise, because however long it takes to do X(), the system will stay up afterwards.

Testing protocol §

Here is the protocol I followed, testing the "powersaving" frequency policy and then the regular auto policy.

  1. Clean package of games/gzdoom
  2. Unplug charger
  3. Dump hw.sensors.acpibat1.watthour3 value in a file (it's the remaining battery in Wh)
  4. Run compilation of the port games/gzdoom with dpb set to use all cores
  5. Dump watthour3 value again
  6. Wait until 18 minutes and 43 seconds
  7. Dump watthour3 value again

Why games/gzdoom? It's a port I know can be compiled with a parallel build, allowing the use of all CPUs, and I know it takes some time but isn't too short either.

Why 18 minutes and 43 seconds? It's the time it takes for the powersaving policy to compile games/gzdoom. I needed to compare the amount of energy used by both policies for the exact same duration with the exact same job done (remember the laptop must stay up as long as possible, so we don't shut it down after compiling gzdoom).

I could have extended the duration of the test so the powersaving run would also have had some idle time, but given that idle time draws the exact same power with both policies, it would have been meaningless.
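
For reference, here is a minimal sketch of how the protocol can be scripted; the battery sensor name and the dpb path match my laptop and are assumptions for anyone else:

#!/bin/sh
LOG=/root/power.log
dump() {
    date >> $LOG
    sysctl -n hw.sensors.acpibat1.watthour3 >> $LOG
}

START=$(date +%s)
dump
/usr/ports/infrastructure/bin/dpb games/gzdoom
dump
# wait until 18 minutes and 43 seconds (1123 seconds) have elapsed in total
sleep $(( 1123 - ( $(date +%s) - START ) ))
dump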

Results §

I'm planning to add results for the lowest and highest modes (apm -L and apm -H) to see the extremes.

Compilation time §

As expected, powersaving was slower than the auto mode, 18 minutes and 43 seconds versus 14 minutes and 31 seconds for the auto policy.

Policy		Compile time (s)	Idle time (s)
------		----------------	-------------
powersaving	1123			0
auto		871			252

Chart showing the difference in time spent for the two policies

Energy used §

We see that the powersaving used more energy for the duration of the compilation of gzdoom, 5.9 Wh vs 5.6 Wh, but as we don't turn off the computer after the compilation is done, the auto mode also spent a few minutes idling and used 0.74 Wh in that time.

Policy		Compile (Wh)	Idle (Wh)	Total (Wh)
------		------------	---------	----------
powersaving	5.90		0.00		5.90
auto		5.60		0.74		6.34

Chart showing the difference in energy used for the two policies

Conclusion §

For the same job done: compiling games/gzdoom and stay on for 18 minutes and 43 seconds, the powersaving policy used 5.90 Wh while the auto mode used 6.34 Wh. This is a saving of 6.90% of power.

This is a testing policy I made for testing purposes, it may be too conservative for most people, I don't know. I'm currently playing with this and with a reproducible benchmark like this one I'm able to compare results between changes in the scheduler.

Reuse of OpenBSD packages for trying runtime

Written by Solène, on 19 September 2021.
Tags: #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

So, I'm currently playing with OpenBSD, trying each end user package (providing binaries) and seeing if they work when installed alone. I needed a simple way to keep downloaded packages, and I didn't want to go the hard way by using rsync on a package mirror because it would waste too much bandwidth and take too much time.

The most efficient way I found relies on a cache and on ordering the sources of packages.

pkg_add mastery §

pkg_add has a special variable named PKG_CACHE: when it's set, downloaded packages are copied into this directory. This is handy because every time I install a package, all the packages downloaded by pkg_add are kept in that directory.

The other variable that interests us for the job is PKG_PATH, because we want pkg_add to first look in $PKG_CACHE and, if not found there, in the usual mirror.

I've set this in my /root/.profile

export PKG_CACHE=/home/packages/
export PKG_PATH=${PKG_CACHE}:http://ftp.fr.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/

Every time pkg_add has to get a package, it will first look in the cache; if it's not there it will download it from the mirror and then store it in the cache.

Saving time removing packages §

Because I try packages one by one, installing and removing dependencies takes a lot of time (I'm using old hardware for the job). Instead of installing a package, deleting it and removing its dependencies, it's easier to work with manually installed packages and, once done, remove the dependencies; this way you keep the already installed dependencies that will be required for the next package.

#!/bin/sh

# prepare the packages passed as parameter as a regex for grep
KEEP=$(echo $* | awk '{ gsub(" ","|",$0); printf("(%s)", $0) }')

# iterate among the manually installed packages
# but skip the packages passed as parameter
for pkg in $(pkg_info -mz | grep -vE "$KEEP")
do
	# instead of deleting the package
	# mark it installed automatically
	pkg_add -aa $pkg
done

# install the packages given as parameter
pkg_add $*

# remove packages not required anymore
pkg_delete -a

This way, I can use this script (named add.sh) as "./add.sh gnome" and then reuse it with "./add.sh xfce": the common dependencies between the gnome and xfce packages won't be removed and reinstalled, they will be kept in place.

Conclusion §

There are always tricks to make bandwidth and storage more efficient, it's not complicated and it's always a good opportunity to understand simple mechanisms available in our daily tools.

How to use cpan or pip packages on Nix and NixOS

Written by Solène, on 18 September 2021.
Tags: #nixos #nix #perl #python

Comments on Fediverse/Mastodon

Introduction §

When using Nix/NixOS and requiring some development libraries available in pip (for Python) or cpan (for Perl) but not available as a package, it can be extremely complicated to get them on your system because the usual way won't work.

Nix-shell §

The nix-shell command will be our friend here; we will define a new environment in which we will have to create the packages for the libraries we need. If you really think a library is useful, it may be time to contribute to nixpkgs so everyone can enjoy it :)

The simple way to invoke nix-shell is to use packages, for example the command ` nix-shell -p python38Packages.pyyaml` will give you access to the python library pyyaml for Python 3.8 as long as you run python from this current shell.

The same way for Perl, we can start a shell with some packages available for databases access, multiples packages can be passed to "nix-shell -p" like this: `nix-shell -p perl532Packages.DBI perl532Packages.DBDSQLite`.

Defining a nix-shell §

Thanks to explanations found on a blog and help received on Mastodon, I've been able to understand how to use a simple nix-shell definition file to declare new cpan or pip packages.

Mattia Gheda's blog: Introduction to nix-shell

Mastodon toot from @cryptix@social.coop how to declare a python package on the fly

What we want is to create a file that will define the state of the shell; it will contain the definitions of the new packages we need but also the list of packages to make available.

Skeleton §

Create a file with the nix extension (or really, whatever file name you want); the special file name "shell.nix" will be automatically picked up when running "nix-shell" without passing a file name as parameter.

with (import <nixpkgs> {});
let
    # we will declare new packages here
in
mkShell {
  buildInputs = [ ]; # we will declare package list here
}

Now we will see how to declare a python or perl library.

Python §

For python, we need to know the package name on pypi.org and its version. Reusing the previous template, the code would look like this for the package Crossplane

with (import <nixpkgs> {}).pkgs;
let
  crossplane = python37.pkgs.buildPythonPackage rec {
    pname = "crossplane";
    version = "0.5.7";
    src = python37.pkgs.fetchPypi {
      inherit pname version;
      sha256 = "a3d3ee1776bcccebf7a58cefeb365775374ab38bd544408117717ccd9f264f60";
    };
    
    meta = { };
  };


in
mkShell {
  buildInputs = [ crossplane python37 ];
}

If you need another library, replace the crossplane variable name and the pname value with the new name; don't forget to update that name in buildInputs at the end of the file. Use the correct version value too.

There are two references to python37 here, this implies we need python 3.7, adapt to the version you want.

The only tricky part is the sha256 value; the easiest way I found to get it is the following (an alternative is shown right after the list).

  1. declare the package with a random sha256 value (like echo hello | sha256)
  2. run nix-shell on the file, see it complaining about the wrong checksum
  3. get the url of the file, download it and run sha256 on it
  4. update the file with the new value
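
Alternatively, if you already know the source tarball URL, nix-prefetch-url downloads it and prints a hash you can paste into the sha256 field (the URL below is only an illustration of the pypi layout):

nix-prefetch-url https://files.pythonhosted.org/packages/source/c/crossplane/crossplane-0.5.7.tar.gz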

Perl §

For perl, we use a script available in the official nixpkgs git repository, the one used by maintainers when packages are made. We will only make a shallow clone of the latest checkout because the repository is quite huge.

In this example I will generate a package for Data::Traverse.

$ git clone --depth 1 https://github.com/nixos/nixpkgs
$ cd nixpkgs/maintainers/scripts
$ nix-shell -p perlPackages.{CPANPLUS,perl,GetoptLongDescriptive,LogLog4perl,Readonly}
$ ./nix-generate-from-cpan.pl Data::Traverse
attribute name: DataTraverse
module: Data::Traverse
version: 0.03
package: Data-Traverse-0.03.tar.gz (Data-Traverse-0.03, DataTraverse)
path: authors/id/F/FR/FRIEDO
downloaded to: /home/solene/.cpanplus/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz
sha-256: dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f
unpacked to: /home/solene/.cpanplus/5.34.0/build/EB15LXwI8e/Data-Traverse-0.03
runtime deps: 
build deps: 
description: Unknown
license: unknown
License 'unknown' is ambiguous, please verify
RSS feed: https://metacpan.org/feed/distribution/Data-Traverse
===
  DataTraverse = buildPerlPackage {
    pname = "Data-Traverse";
    version = "0.03";
    src = fetchurl {
      url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
      sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
    };
    meta = {
    };
  };

We will only reuse the part after the ===, this is nix code that defines a package named DataTraverse.

The shell definition will look like this:

with (import <nixpkgs> {});
let
  DataTraverse = buildPerlPackage {
    pname = "Data-Traverse";
    version = "0.03";
    src = fetchurl {
      url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
      sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
    };
    meta = { };
  };

in
mkShell {
  buildInputs = [ DataTraverse perl ];
  # putting perl here is only required when not using NixOS, it tells Nix you want its perl binary
}

Then, run "nix-shell myfile.nix" and run you perl script using Data::Traverse, it should work!

Conclusion §

Using libraries that are not packaged is not that bad once you understand the logic: declare them properly as new packages that you keep locally, then hook them into your current shell session.

Finding the syntax, the logic and the method when you are not a Nix guru made me despair. I've been struggling a lot with this, trying to install from cpan or pip directly (even though it wouldn't have survived the next update of my system), and I didn't even get it to work.

Benchmarking compilation time with ccache/mfs on OpenBSD

Written by Solène, on 18 September 2021.
Tags: #openbsd #benchmark

Comments on Fediverse/Mastodon

Introduction §

I always wondered how to make package building faster. There are at least two easy tricks available: storing temporary data in RAM and caching build objects.

Caching build objects can be done with ccache, it will intercept cc and c++ calls (the programs compiling C/C++ files) and depending on the inputs will reuse a previously built object if available or build normally and store the result for potential next reuse. It has nearly no use when you build software only once because it requires objects to be cached before being useful. It obviously doesn't work for non C/C++ programs.

The other trick is using a temporary filesystem stored in memory (RAM), on OpenBSD we will use mfs but on Linux or FreeBSD you could use tmpfs. The difference between those two is mfs will reserve the given memory usage while tmpfs is faster and won't reserve the memory of its filesystem (which has pros and cons).
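
To give an idea of how such a setup can look on OpenBSD, here is a sketch; the mount point, sizes and the CCACHE_DIR variable are assumptions to adapt to your machine and to double check against bsd.port.mk(5):

# /etc/fstab: a memory filesystem reserved for build data
swap /build/mfs mfs rw,nodev,nosuid,-s=3g 0 0

# /etc/mk.conf: make the ports tree use ccache and store its objects there
USE_CCACHE=Yes
CCACHE_DIR=/build/mfs/ccache
WRKOBJDIR=/build/pobj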

So, I decided to measure the build time of the Gemini browser Lagrange in three cases: without ccache, with ccache but a first build so there are no cached objects yet, and with ccache with objects in it. I ran these three tests multiple times because I also wanted to measure the impact of using the memory-based filesystem versus the old spinning disk drive in my computer; this made for a lot of tests because I tried ccache on mfs and the package build objects (later referred to as pobj) on mfs, then one on hdd and the other on mfs, and so on.

To proceed, I compiled net/lagrange using dpb, after cleaning the generated lagrange package every time. Using dpb made measurement a lot easier and the setup was reliable. It added some overhead when checking dependencies (that were already installed in the chroot), but the point was to compare the time difference between the various tweaks.

Results numbers §

Here are the results, raw and with a graphical view. I ran the same test multiple times to see if the result dispersion was huge, but it was reliable at +/- 1 second.

Type			Second build (seconds)	Build with empty cache (seconds)
ccache mfs + pobj mfs	60			133
ccache mfs + pobj hdd	63			130
ccache hdd + pobj mfs	61			127
ccache hdd + pobj hdd	68			137
no ccache  + pobj mfs	-			124
no ccache  + pobj hdd	-			128

Diagram with results

Results analysis §

At first glance, we can see that not using ccache results in slightly faster builds, so ccache definitely has a very small performance impact when there are no cached objects.

Then, we can see the results are really close together, except for ccache and pobj both on the hdd, which is by far the slowest combination compared to the other time differences.

Problems encountered §

My build machine has 16 GB of memory and 4 cores; I want builds to be as fast as possible so I use the 4 cores. For some programs using Rust for compilation (like Firefox), more than 8 GB of memory (4x 2 GB) is required because of Rust, so I need to keep a lot of memory available. I tried to build it once with a 10 GB mfs filesystem but it reached the filesystem limit at packaging time and failed; it also swapped during the build process.

When using an 8 GB mfs for pobj, I've been hitting the limit, which caused build failures; building four ports in parallel can take some disk space, especially at package time when the result is copied. It's not always easy to store everything in memory.

I decided to go with a 3 GB ccache over MFS and keep the pobj on the hdd.

I had no spare SSD to add to the comparison. :(

Conclusion §

Using mfs for at least ccache or pobj, but not necessarily both, is beneficial. I would recommend putting ccache in mfs because the memory required to store it is only 1 or 2 GB for regular builds, while storing the pobj in mfs could require a few dozen gigabytes of memory (I think chromium required 30 or 40 GB last time I tried).

Experimenting with a new OpenBSD development lab

Written by Solène, on 16 September 2021.
Tags: #openbsd #life

Comments on Fediverse/Mastodon

Experimenting §

This article is not a how-to and doesn't explain anything in particular, I just wanted to share how I spend my current free time. It's obviously OpenBSD related.

When updating or making new packages, it's important to get the dependencies right. For the compilation dependencies at least it's not hard, because you know it's fine once the building process runs entirely, but at run time you may have surprises and discover missing dependencies.

What's a dependency? §

Software is made of written text called source code (or code to make it simpler), but to avoid wasting time (because writing code is hard enough already) some people write libraries, which are pieces of code made for the purpose of being used by other programs (by fellow developers) to save everyone's time and effort.

A library can offer graphics manipulation, time and date functions, sound decoding, etc., and the software we use relies on A LOT of extra code that comes from other pieces of code we have to ship separately. Those are dependencies.

There are dependencies required for building a program; they are used to manipulate the source code to transform it into machine readable code, or to organize the building process to ease development, and so on. And there are library dependencies, which are required for the software to run. The simplest to understand would be the library to access the audio system of your operating system, for an audio player.

And finally, we have run time dependencies, which show up when loading a software or while using it. They may not be well documented in the project, so we can't really know they are required until we try to use some feature of the software and it crashes or errors out because of something missing. This could be a program calling an external program to delegate the resizing of a picture.

What's up? §

In order to spot these run time dependencies, I've started to use an old laptop (a Thinkpad T400 that I absolutely love) with a clean OpenBSD installation, a lot of local packages on my network (more on that later) and a very clean X environment.

The point of this computer is to remove every package, install only the one I need to try (pulling the dependencies that come with it) and see if it works under these minimal conditions. It should work with no issue if the package is correctly done.

Once I'm satisfied with the test process, I remove every package on the system and try another one.

Sometimes, as we have many many packages installed, it happens that a run time dependency required by the software we are working on is already installed but not declared in its package, and we don't see the failure because the requirement is provided by some other package. By using a clean environment to check every single program separately, I remove the "other packages" that could provide such a requirement.

Building §

When I work on packages I often need to compile many of them, and it takes time, a lot of time, and my laptop usually makes a lot of noise, gets hot and becomes slow for doing anything else; it's not very practical. I'm going to set up a dedicated build machine that I will power on when I work on ports; it will be hidden in some isolated corner at home, building packages when I need it. That machine is a bit more powerful and will prevent my laptop from being unusable for long periods.

This machine together with the laptop makes a great combination for making quick changes and testing how they go. The laptop will pull packages directly from the build machine, and things can be fixed on the build machine quite fast.

The end §

Contributing to packages is endless work; making good packages is hard work and requires tests. I'm not really good at making packages but I want to improve myself in that field and also improve the way we can test that packages are working. With these new development environments I hope I will be able to contribute a bit more to the quality of future OpenBSD releases.

Reviewing some open source distraction free editors

Written by Solène, on 15 September 2021.
Tags: #editors #unix

Comments on Fediverse/Mastodon

Introduction §

This article compares "distraction free" editors running on Linux. This category of editors is supposed to be used in full screen and shouldn't display much more than text, allowing you to stay focused on the text.

I've found a few programs that run on Linux and are open source; I deliberately omitted web browser based editors.

  • Apostrophe
  • Focuswriter
  • Ghostwriter
  • Quilter
  • Vi (the minimal vi from busybox)

I used them on Alpine, three of them installed from Flatpak and Apostrophe installed from the Alpine packages repositories.

I'm writing this on my netbook and wanted to see if a distraction free editor could be valuable for me; the laptop screen and resolution are small and using it for writing seems like a fun idea, although I'm not really convinced of the usefulness (for me!) of such editors.

Resource usage and performance §

Quick tour of the memory usage (reported in top in the SHR column)

  • Apostrophe: 63 MB of memory
  • Focuswriter: 77 MB of memory
  • Ghostwriter: 228 MB of memory
  • Quilter: 72 MB of memory
  • vi: 0.89 MB of memory + 41 MB of memory for xfce4-terminal

As for the perceived performance when typing I've had mixed results.

  • Apostrophe: writing is smooth and pleasant
  • Focuswriter: writing is smooth and pleasant
  • Ghostwriter: writing is smooth and pleasant
  • Quilter: there is a delay when typing; I've been able to type an entire sentence so fast that I could then watch the last words being drawn on the screen
  • vi: writing is smooth and pleasant

Features §

I didn't really know what to expect from these editors; I've seen some common features and some others that I discovered.

  • focus mode: keep the current sentence/paragraph/line in focus and fade the text around
  • helpers for markdown mode: shortcuts to enable/disable bold/italic, bullet lists etc... Outlining window to see the structure of the document or also real time rendering from the markdown
  • full screen mode
  • changing fonts and display: color, fonts, background, style sheet may be customized to fit what you prefer
  • "Hemingway" mode: you can't undo what you type, I suppose it's to write as much as possible and edit later
  • Export to multiple formats: html, ODT, PDF, epub...

Personal experience and feelings §

It would be long and not really interesting to list which program has which feature, so here are my feelings about these programs.

Apostrophe §

It's the one I used for writing this article; it feels very nice. It proposes only three themes that you can't customize, and the font can't be changed. Although you can't customize much, it's the one that looks the best out of the box, that is the easiest to use and that just works. As a distraction free editor, it seems the best approach.

This is the one I would recommend to anyone wanting a distraction free editor.

Apostrophe project website

Quilter §

Because of the input lag when typing text, this was the worst experience for me, maybe it's platform specific? The user interface looks a LOT like Apostrophe, to the point I'd think one is a fork of the other, but in regards to performance it's drastically different. It offers three themes and allows choosing the font, but only among three fonts named "Quilt something", which is disappointing.

Quilter project website

Focuswriter §

This one has potential; it has a lot of things you can tweak in the preferences menu, from which characters should be doubled (like quotes) when typed, to daily goals, statistics, configurable shortcuts for everything, and writing from right to left.

It also relies a lot on theming to choose which background (picture or color) you want, how to space the text, which font, which size, and the opacity of the typing area. It requires too many tweaks to be usable for me; the default themes looked nice but the text was small and ugly, it was absolutely not enjoyable to type and watch the text appear. I tried to duplicate a theme (from the user interface) and change the font and size, but I didn't get something that I enjoyed. Maybe with some time spent it could look good, but the other tools provide something that just works and looks good out of the box.

Focuswriter project website

Ghostwriter §

I tried ghostwriter 1.x at first, then I saw there was a 2.x version with a lot more features, so I used both for this review. I'll only cover the 2.x version, but looking at the repository information many distributions provide the old version, including Flatpak.

Ghostwriter seems to be the king of the arena. It has all the features you would expect from a distraction free editor, it has sane defaults but is customizable and is enjoyable out of the box. For writing long documents, the markdown outlining panel showing the structure of the document is very useful, and there are features for writing goals and statistics; this may certainly be useful for some users.

Ghostwriter project website

vi §

I couldn't review these editors without including a terminal based one. I chose vi because it seemed the most distraction free to me: emacs has too many features and nano displays too many things at the bottom of the screen. I chose vi instead of ed because it's more beginner friendly, but ed would work just as well. Note that I am using vi (from busybox on Alpine Linux) and not Vim or nvi.

vi doesn't have many features; it can save text to a file. The display can be customized in the terminal emulator, which allows a great choice of font / theme / style / coloring after decades of refinement in this field. It has no focus mode or markdown coloring/integration, which I admit can be confusing for big texts with some markup involved, at least for bullet lists and headers. I always welcome a bit of syntax highlighting and vi lacks this (this can be solved with a more advanced text editor). vi won't let you export to any kind of file except plain text, so you need to know how to convert the text file into the output format you are looking for.

busybox project website

Conclusion §

It's hard for me to tell if typing this article using Apostrophe editor was better or more efficient than using my regular kakoune terminal text editor. The font looks absolutely better in Apostrophe but I never gave much attention to the look and feel of my terminal emulator.

I'll try using Apostrophe or Ghostwriter for further articles, at least by using my netbook as a typing machine.

Blog update 2021

Written by Solène, on 15 September 2021.
Tags: #blog #life

Comments on Fediverse/Mastodon

Hello,

This is a simple announce to gather some changes I made to my blog recently.

  • The web version of the blog now displays the article list grouped by year when viewing a tag page; previously it was displaying the whole content of each article and I think tags were unusable this way, although it made sense initially because I only had two articles when I wrote the blog generator.
  • The RSS file was embedding the whole HTML content of each article; I switched to the articles' original plain text format. HTML should only be used in a web browser and RSS is not meant to be dedicated to web browsers. I know this is a step back for some users, but many users also appreciated this move and I'm happy not to contribute to putting HTML everywhere.
  • Most texts are now written using the gemtext format, served raw on gemini and gopher and converted into HTML for the http version using the gmi2html python tool, slightly modified (I forgot where I got it initially). I use gemtext because I like this format; it often forces me to rethink the way I present an idea because I have to separate links and code from the content, and I'm convinced it's a good thing. No more links named "here" or inlined code hard to spot.

If you think changes could be done on my blog, on the web / gopher or gemini version please share your ideas with me, it's also the opportunity for me to play with the code of the blog generator cl-yag that I absolutely love.

I have been publishing a lot more this year; I enjoy sharing my ideas or knowledge this way much more than I used to, and writing is also an opportunity for me to improve my English. When I compare with the first publications, I'm proud to see I improved the quality over time (I hope so at least). I get more feedback from strangers reading this blog, by mail or IRC, and I'm thankful to them; they just drop by to tell me they like what I write or that I made a mistake so I can fix it. It's invaluable and allows me to make new connections with people I would never have reached otherwise.

I should try to find some time and motivation to get back to my podcast publications now, but I find it a lot harder to speak than to write some text, maybe it's a habit to build. We will see soon.

Managing /etc/hosts on NixOS

Written by Solène, on 14 September 2021.
Tags: #nixos

Comments on Fediverse/Mastodon

Introduction §

This is a simple article explaining how to manage entries in /etc/hosts on a NixOS system. Modifying this file is quite useful when you need to run tests against a remote server whose domain name is not updated yet: you can force a domain name to resolve to a given IP address, bypassing DNS queries.

NixOS being what it is, you can't modify the /etc/hosts file directly.

NixOS stable documentation about the extraHosts variable

Configuration §

In your /etc/nixos/configuration.nix file, you have to declare the variable networking.extraHosts and use "\n" as separator for entries.

networking.extraHosts = "1.2.3.4 foobar.perso.pw\n1.2.3.5 foo.perso.pw";

or, as suggested by @tokudan@chaos.social on Mastodon, you can use multiple lines in the string as follows (using two single quote characters):

networking.extraHosts = ''
1.2.3.4 foobar.perso.pw
1.2.3.5 foo.perso.pw
'';

The previous pieces of configuration will associate "foobar.perso.pw" to IP 1.2.3.4 and "foo.perso.pw" to IP 1.2.3.5.

Now, I need to rebuild my system configuration and use it, this can be done with the command `nixos-rebuild switch` as root.
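
Once the new generation is active, a quick way to check that the entry is in place (a simple illustration, assuming getent from glibc is on your PATH):

getent hosts foobar.perso.pw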

Workaround for an OpenBSD boot error on APU boards

Written by Solène, on 10 September 2021.
Tags: #openbsd #apu

Comments on Fediverse/Mastodon

If you ever get your hands on an APU board from PCEngines and you have an issue like this when trying to boot OpenBSD:

Entry point at 0xffffffff8100100

There is a simple solution explained by Mischa on the misc@openbsd.org mailing list in 2020.

Re: Can't install OpenBSD 6.6 on apu4d4

I'll copy the reply here in case the archives get lost. When you get the OpenBSD boot prompt, type the following commands to tell the bootloader about the serial port.

stty com0 115200
set tty com0
boot

And you are done! During the installation process you will be asked about the serial device to use, and the default offered will match what you set at boot.
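
If you want the serial console settings to survive reboots without typing them each time, the same two commands can live in /etc/boot.conf on the installed system (the installer may already have created it when installing over serial):

stty com0 115200
set tty com0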

Dear open source developers

Written by Solène, on 09 September 2021.
Tags: #life

Comments on Fediverse/Mastodon

Dear open source and libre software developers, I would like to share some thoughts with you. This could be considered an open letter, but I'm not sure I know what an open letter is, and I don't want to give instructions to anyone. I have feelings I want to share about my beloved hobby: computers and open source.

Computers are amazing, they do stuff, lots of stuff, at the hardware and software level. We can use them for anything, they are a great tool and we can program our tools to match our expectations, wishes and needs. It's not easy, it's an art but also a science, and we do it together because it's a huge task requiring more than one brain's time to achieve.

We are currently facing supply chain issues at many levels in the electronics industry; making modern high end computers is getting ever more complicated, and we also face pollution concerns and limited resources that will prevent an infinite number of computers from being built.

I would like to see my hobby affordable for anyone. There are many many computers already built and most of their parts can be replaced which is a crazy opportunity when you compare this to the smartphone industry where no parts can be changed.

As people writing software used by others, it is absolutely important to keep old computers useful. They were useful when they were built, they should still be useful in the future to some extent.

Nowadays, a computer without network access would be considered useless, but it's not. Still, if you want to connect a computer to the Internet while facing a continuous increase in network attacks, you should only use an up to date operating system and the latest software versions; unfortunately that's not always easy on old computers.

Some cryptography may regularly require increased minimum requirements, and this is acceptable. What is not acceptable is that doing the same task on a computer requires more resources over the years as software grows and evolves.

More and more operating systems are dropping support for older architectures to focus only on amd64. This is understandable, volunteer work is limited and it's important to focus on the hardware found in most users' computers. But by doing so they are making old hardware obsolete, which is not acceptable.

I understand this is a huge dilemma and I have no solution; maybe we would need fewer operating systems to gather the volunteers needed to maintain older but still relevant architectures. It is not possible obviously, volunteers work on what they want because they like it, you can't assign contributors to a task against their will.

The issue is at a higher scale and every person working in the IT field is part of the problem.

More ? §

Some are dropping old architectures because there are no users. There are no users because they had to replace their hardware with more powerful new hardware to cope with software becoming more and more hungry for resources. Software becomes so because of the people writing it, because companies prefer unoptimized code that releases the product with less development time, implying a cheaper cost, with the trade-off of asking customers to use a more powerful computer.

The web becomes unusable on old hardware; you can't use the world wide web anymore on old machines because of the lack of memory, the lack of javascript support, or too many animations using the CPU that you can't disable.

When you think about open source systems, many think "Linux", and most people think "amd64". A big part of the open source ecosystem is now driven toward Linux/amd64 target, at the cost of all the OS / architectures that are still in use, existing, not dead.

We could argue that technology is evolving and that those should do the work to stay in the race with the holy Linux/amd64 combo; this is a valid argument, as open source can be used / forked by everyone. But it would work so much better if we worked as a whole team.

Thoughts §

I just wanted to express my feelings with this blog post. I don't want to tell anyone what to do, we are the open source community, we do what we enjoy.

I own old computers, from 8 to 15 years old, and I still like to use them. Why would they be "old"? Because of their date of manufacture, this is a fact. But because of the software ecosystem, they become more obsolete every year and I definitely don't understand why it must be this way.

If you can give a thought to my old computers when writing code, thinking about them and making a three line change to improve your software for them, I would be absolutely grateful for the extra work. We don't really need more computers, we need to dig out the old computers to make them useful again.

Thank you very much dear community <3

Port of the week: pngquant

Written by Solène, on 07 September 2021.
Tags: #graphics #unix #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

Today as a "Port of the Week" article (that isn't published every week now but who cares) I would like to present you pngquant.

pngquant is a simple utility to compress png files in order to reduce their size, with the goal of not altering the picture in a visible way. pngquant is lossy, which means it modifies the content, as opposed to the optipng program, which optimizes the png file to reduce its size as much as possible without modifying the visual result.

pngquant project website

How to use §

The easiest way to use pngquant is to simply give the file to compress as an argument; a new file named after the original, with "-fs8" added before the file extension, will be created.

$ pngquant file.png
$ test -f file-fs8.png && echo true
true
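
If you have a whole directory of screenshots to process, a small shell loop does the job too. This is only a minimal sketch, assuming you run it from the directory containing the png files:

for f in *.png
do
    pngquant "$f"
done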

Performance §

I made a simple screenshot of four terminals on my computer and compared the file size of the original png, the png optimized with optipng, and the png compressed using pngquant. I also included a jpg conversion targeting the same file size as the original.

I used the defaults of each command.

File		size (in kilobytes)	% of original (lower is better)
========	===============		===============================
original	168			100
optipng		144			85.7
pngquant	50.2			29.9
jpeg 71%	169			100

The file produced by pngquant is less than a third of the original. Here are the files so you can try to check if you see differences with the pngquant version.

  • Original file
  • Optimized file using optipng
  • Compressed file using pngquant
  • Jpeg file converted with ImageMagick (targeting the same size as the original)

Conclusion §

Most of the time, compressing a png is suitable for publishing or sharing. For screenshots or digital drawings, the jpg format usually performs very badly; it is only really suitable for camera pictures.

For a drawn picture, you should keep the original if you ever plan to make changes to it.

Review of ElementaryOS 6 (Odin)

Written by Solène, on 06 September 2021.
Tags: #linux #review

Comments on Fediverse/Mastodon

Introduction §

ElementaryOS is a Linux distribution based on Ubuntu that ships with an in-house developed desktop environment, Pantheon, and its ecosystem of apps. With their 6th release, named Odin, the development team made the bold choice of providing software through the Flatpak package manager.

I've been using this Linux distribution on my powerful netbook (4-core Atom, 4 GB of memory) for some weeks, trying not to use the terminal, and now this is my review.

ElementaryOS project website

ElementaryOS desktop with no window shown

Pantheon §

I had used ElementaryOS a little in the past, so I was already aware of the Pantheon desktop when I installed ElementaryOS Odin on my netbook, and I've been pleased to see it didn't change in terms of usability. Basically, Pantheon looks like a Gnome 3 desktop with a nice and usable dock à la macOS.

Press the Super key (often referred to as the "Windows key") and you may be disappointed to get a window listing the shortcuts that work with Pantheon. Putting the help on this button is quite clever, as we are used to pressing it to send commands, but after a while it is misleading to have a whole key only triggering help; fortunately this behaviour can be configured to display the desktop or the applications menu instead.

Pantheon has a very nice feature I totally love: it creates a floating miniature of a target window that stays on top of everything. I often need to keep an eye on a window or watch a movie, and this mode allows me to do exactly that. The miniature is easy to move on the screen, easy to resize, and upon a click the window appears while the miniature is hidden until you switch to another window. It may seem like a gadget, but on a small screen I really appreciate it. You can create one for a window by pressing Super+f and clicking on a target.

Picture in picture mode, showing the AppCenter while in a terminal

The desktop comes with some programs made specifically for Pantheon: terminal emulator, file browser, text editor, calendar etc... They are simple but effective.

The whole environment is stable, good looking, coherent and usable.

The AppCenter and Flatpak §

As I said before, ElementaryOS is based on Ubuntu, so it inherits all the packages available on Ubuntu, but they are only installable from the command line. The AppCenter GUI shows an entirely different package set that comes from the ElementaryOS flatpak repository but also from flathub. Official repository apps are clearly marked as official, while programs from flathub are displayed as third party, with a quality/security warning shown for each program from that repository when you want to install it.

Warning shown when trying to install a program from a different repository than the one from ElementaryOS

Flatpak has a pretty bad reputation among the groups I regularly read, but I like it. Crash course on flatpak: it is a distribution-agnostic package manager that does not reuse your system libraries but instead installs the whole set of base dependencies required (such as X11, KDE, Gnome etc...), and programs are then installed on top of this, still separated from each other. Programs running from flatpak have their own permissions and may be limited in what they can do (no network, can only reach ~/Downloads/, etc.), which is very nice but not always convenient, especially for programs that require plugins. The whole idea of flatpak is that you install a program and it shouldn't mess with the current system, and the person making the program bundle can restrict its permissions as much as they want.

Installing flatpak programs requires downloading a good amount of data because of the big dependencies, but you only need them once, and updating flatpak programs uses delta changes so only the differences are downloaded; I found updates to be very small in terms of network consumption. Installing a single GUI app from flatpak on a Linux system can be seen as overkill (the small Gemini browser Lagrange pulls more than 1 GB of dependencies from flatpak), so it really makes sense when everything the user needs is installed from flatpak.

If you are unhappy with the current permissions of a program, you can use the utility Flatseal to tweak its permissions, which is very cool.
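
For reference, permissions can also be adjusted from the command line with flatpak override; a small sketch, where the Firefox flatpak id and the paths are only an example:

# forbid access to the home directory for this app (user installation)
flatpak override --user --nofilesystem=home org.mozilla.firefox

# but allow it to read and write a single directory
flatpak override --user --filesystem=~/Downloads org.mozilla.firefox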

I totally understand and love the move to full flatpak; it has proven solid, easy to use and easy to tweak, despite flatpak still being very young. I very much liked that my Firefox on OpenBSD had the unveil feature preventing it from accessing my data in case of a security breach; now, with Firefox from Flatpak or Firefox run from firejail, I can get the same on Linux. There is one thing I regret in the AppCenter though, but this is my opinion and I can understand why it is so: some programs have a priced button like "3,00$" while the others are "Free"; there is a menu near the price that lets you choose the amount you want to pay, and you can also enter 0,00 to get the program for free. This can be misleading for users because the program is actually free, just in "pay what you want" mode.

Picture of a torrent program that is not shown as free but can be set to 0,00$

I have no issue paying for Free software as long as it's 100% free, but suggesting a price for a package when you don't know you can install it for free feels weird. The payment implementation of the AppCenter could be the beginning of paid software integrated into ElementaryOS; I have no strong opinion about this because people need money for a living, but I hope it will be used wisely.

No terminal challenge §

While trying ElementaryOS, I gave myself a little challenge: avoid using the terminal as much as possible. I mostly succeeded, as I only needed a terminal to install a regular package (lutris, not available as flatpak). Of course, I couldn't prevent myself from playing with a terminal to check bandwidth or CPU usage, but that doesn't count as normal computer use.

Everything worked fine so far, network access, wireless, installing and playing video games, video players.

I'd feel confident recommending ElementaryOS to non-Linux users. On first boot the system provides a nice introduction explaining the basics.

Parental control §

This is a feature I'm not using, but I found it in the configuration panel and was surprised to see it. ElementaryOS comes with a feature to restrict screen time on week days and week-end days, but also to prevent a user from reaching some URLs (no idea how this is implemented) and to forbid running some installed apps.

I don't have kids, but I assume this can be very useful to prevent use of the computer past a certain time or to keep them from using some programs; to make it work, they would obviously need their own account and must not be able to become root. I can't judge whether it works well or is suitable for the real world, but I wanted to share this unique feature.

Screenshot of the parental control

Global performance §

My netbook proved quite okay at running Pantheon. The worst cases I found are the applications menu, which takes a second to display, and the AppCenter, which is slow to browse and whose "searching for updates" takes a long time.

As I said in the introduction, my netbook has a quad-core Atom and a good amount of memory, but the eMMC storage is quite slow. I don't know if the lack of responsiveness comes from my CPU or the storage, but I can tell everything works smoothly on an older Core 2 Duo!

Conclusion §

Using ElementaryOS was delightful, it just works. The team did a very good job on the overall coherence of the desktop. It is certainly not the distribution you need if you want full control or something super light, but it definitely does the job for users who just want things to work and who like Pantheon. It doesn't seem straightforward to switch to another desktop environment, though.

Playing with a new shell: fish

Written by Solène, on 05 September 2021.
Tags: #openbsd #shell

Comments on Fediverse/Mastodon

Introduction §

Today I'll introduce you to the interactive shell fish. Usually, Linux distributions ship bash (which can actually be a hidden dash, a limited shell), macOS provides zsh and OpenBSD ksh. There are other shells around, and fish is one of them.

But fish is not like the others.

fish shell project website

What makes it special? §

Here is a list of the biggest changes:

  • suggested input based on commands available
  • suggested input based on history (even related to the current directory you are in!)
  • not POSIX compatible (the usual shell syntax won't work)
  • command completion works out of the box (no need for extensions like "ohmyzsh")
  • interconnected processes: updating a (universal) variable applies to every open shell

Asciinema recording showing history features and also fzf integration

Making history more powerful with fzf §

fzf is a simple utility for searching data in a file (the history file in this case) in fuzzy mode, meaning the matching is not strict. On OpenBSD I use the following configuration in ~/.config/fish/config.fish to activate fzf.

When pressing ctrl+r with some history available, you can type any words you remember from an old command, like "ssh bar", and it should return "ssh foobar" if it exists.

source /usr/local/share/fish/functions/fzf-key-bindings.fish
fzf_key_bindings

fzf is absolutely not related to fish, it can certainly be used in some other shells.

github: fzf project

Tips §

Disable caret character for redirecting to stderr §

The defaults work pretty well, but as I said before, fish is not POSIX compatible, meaning some habits must be changed. By default, the ^ character, like in "grep ^foobar", is the equivalent of 2>, which is very misleading.

# make typing ^ actually inserting a "^" and not stderr redirect
set -U fish_features stderr-nocaret qmark-noglob

Web GUI for customizing your shell §

If you want to change the behavior or colors of your shell, just type "fish_config" in a fish shell; it will run a local web server and open your web browser.

Validating a suggestion §

When you type a command and you see more text suggested as you type, you can press ctrl+e to accept the suggestion. If you don't care about the suggestion, just continue typing your command.

Get the return value of latest command §

In fish, you want to read $status and not $?; that variable doesn't exist in fish.

Syntax changes §

Because it's not always easy to find what changed and how, here is a simple reminder that should cover most of your needs, with a short combined example after the list:

  • loops (no do keyword, ends with end): for i in 1 2 3 ; echo $i ; end
  • condition (no then, ends with end): if something ; echo true ; end
  • inline command (no dollar sign): (date +%s)
  • export a variable: set -x EDITOR kak
  • return value of last command: $status
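
Putting those pieces together, a tiny sketch in fish syntax (to be typed in a fish shell):

for i in 1 2 3
    echo $i
end

if test -d /tmp
    echo "/tmp exists"
end

set -x EDITOR kak
echo (date +%s)
echo $status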

Conclusion §

I love this shell. I had been using the shell that comes with my system since forever, and a few months ago I wanted to try something different. It felt weird at first, but over time I found it very convenient, especially for git commands or daily tasks, as it suggests exactly the command I wanted to type in that exact directory.

Obviously, as the usual syntax changes, it may not please everyone, and that's totally fine.

External GPU on Linux review

Written by Solène, on 01 September 2021.
Tags: #linux #gentoo #games #egpu

Comments on Fediverse/Mastodon

Introduction §

I like playing video games, and most games I play require a GPU more powerful than the integrated graphics chipset found in laptops. I recently discovered that external graphics cards were a thing, and fortunately I had a few old spare graphics cards to try.

The hardware is called an eGPU (for external GPU) and is connected to the computer over a Thunderbolt link. Because I now buy most of my hardware second hand, I was able to find a Razer Core X eGPU (the simple Core X, not the Core X Chroma, which also provides USB and RJ45 connectivity on the case through Thunderbolt), exactly what I was looking for. Basically, it's an external case with a PSU inside and a rack: pull out the rack, insert the graphics card, and you are done. Obviously, it works fine on Windows or Mac, but it can be tricky on Linux.

Razer core X product

Attempt to make a picture of my eGPU with an nvidia 1060 in it

My setup §

I'm using a Lenovo T470 with an i5 CPU. When I want to use the eGPU, I connect the Thunderbolt cable and the keyboard/mouse (which go through a USB KVM so I can switch them from one computer to another). The Thunderbolt port also provides power to the laptop, which is good to know.

How does it work? §

There are two ways to use this device: the display can be connected to the eGPU itself, or the rendering can be done on the laptop (let's say we only target laptops here) using the eGPU as a discrete card (rendering only, no display attached). Both modes have pros and cons.

  • External display Pros: best performance, allow many displays to be used
  • External display Cons: require a screen
  • Discrete mode Pros: no extra wire, no different setup when using the laptop without the eGPU
  • Discrete mode Cons: performance penalty, support doesn't work well on Linux

The performance penalty comes from the fact that the Thunderbolt bandwidth is limited, and if you want to display on the laptop screen the rendered frames have to be sent back, which reduces the bandwidth available for rendering. A penalty of at least 20% should be expected in external display mode, and around 40% in discrete mode. This is not great, but for a nice boost with an old graphics card it is still worth it.

eGPU on Linux with a Razer core X Chroma

eGPU benchmarks

What to expect of it on Linux? §

I've only used this on Gentoo so far, but I had a previous experience with a pretty similar setup a few years ago, on a laptop with a discrete nvidia card (the Optimus technology at that time); the GPU was only usable as a discrete GPU and it was a mess back then.

As for the eGPU, in external display mode it works fine using the nvidia driver; I needed an xorg.conf file telling X to use the nvidia driver, then the display was fine and 3D worked perfectly, as if I was using a "real" card in a desktop computer. I can play demanding games such as Control, Death Stranding and others on my Thinkpad laptop when docked, this is really nice!

The setup is a bit weird though: if I want to undock, I need to prepare the new xorg.conf file, stop X, disconnect the eGPU and restart the display manager to log in. Not very easy. I've been able to automate it with a simple script run at boot that detects the Nvidia GPU and picks the correct xorg.conf file just before starting the display manager; it works quite well and makes life easier.
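
I won't reproduce my exact script here, but the idea can be sketched like this; the xorg.conf file names are placeholders, and lspci comes from the pciutils package:

#!/bin/sh
# run at boot, before the display manager: pick the xorg.conf matching the hardware
if lspci | grep -qi nvidia
then
    cp /etc/X11/xorg.conf.egpu /etc/X11/xorg.conf
else
    cp /etc/X11/xorg.conf.intel /etc/X11/xorg.conf
fi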

Video games? §

I've been playing Steam video games; it works absolutely perfectly thanks to their work on Proton to make Windows games run. GOG games work fine too, I use the Lutris games library manager to handle them and it has worked so far.

Now, there is the tricky discrete mode. On Linux, the bumblebee project allows rendering a program on a virtual display to benefit from the 3D acceleration and then showing it on another device; this work was done for Optimus hardware, hence the bumblebee name (related to Transformers lore). Steam doesn't like bumblebee at all and won't start games, this is a known bug: Steam is bad at managing multiple GPUs. I've not been able to display anything through bumblebee with Steam.

On the other hand, native Linux GOG games were working fine using bumblebee; however, I don't own many demanding Linux games, so I couldn't tell how hard the performance hit was. Windows GOG games wouldn't run, partly because the DXVK (DirectX to Vulkan) Wine rendering can't be used, since bumblebee doesn't allow using the Vulkan graphics API, and the error messages were unhelpful. I literally lost two days of my life trying to achieve something useful with the discrete GPU mode, but nothing came out of it except native Linux games.

Playing Control on Gentoo (windowed for the screen)

Why use an eGPU? §

Laptops are very limited in their upgrade capabilities; adding an external GPU can save someone from owning both a "gaming" tower PC and a good laptop. The GPU is 100% replaceable because the case offers a PCI Express port and a standard PSU (which can be replaced too!). The eGPU can also be shared among a few users in a home. This is a nice way to recycle old GPUs for a nice graphics boost to play everything that is more than 5 years old (and that's a bunch of good games!). I think using a top-notch GPU in this would be a waste though.

Conclusion §

I'm pretty happy with the experience so far; now I can play my favorite games on Linux using the same computer I like to use all day. While the experience is not as plug and play as on Windows, it is solid and stable.

Fair Internet bandwidth management on a network using OpenBSD

Written by Solène, on 30 August 2021.
Tags: #openbsd #bandwidth

Comments on Fediverse/Mastodon

Introduction §

I have a simple DSL line with 15 Mb/s download and 900 kb/s upload rates, many devices using the Internet and two people working remotely. Some poorly designed software (mostly on Windows) will auto update without any way to limit its bandwidth, or some huge bloated website will require lots of downloading, impacting the people working over the network.

The point of this article is to explain how to use OpenBSD as a router on your network so Internet access is shared fairly between the devices, guaranteeing everyone at least a bit of bandwidth to keep working flawlessly.

I will use the queueing features of the OpenBSD firewall PF (Packet Filter), which rely on the CoDel network scheduler algorithm and seem to bring all the features we need to do what we want.

pf.conf manual page: QUEUEING section

Wikipedia page about the CoDel network scheduler algorithm

Important §

I'm writing this in a separate section of the article because it is important to understand.

It is not possible to directly limit the download bandwidth: once the data is in the router, it already came through the modem and it's too late to do anything about it. But there is still hope: if the router receives data from the Internet, it's because some device on the network asked for it, so you can act on the uploaded data to throttle what we receive. This is not obvious at first but it makes total sense once you get the idea.

The biggest point to understand is that you can throttle download speed through the ACK packets. Think of two people on the phone, let's say Alice and Bob: Alice is your network and calls Bob, who is very happy to tell his life story to Alice. Bob speaking is the data you download. In a normal conversation, Bob talks and hears some sounds from Alice acknowledging what he is saying. If Alice stops or mutes her microphone, Bob will ask if Alice is still listening and wait for an answer. When Alice makes a sound (like "hmmhm" or "yes"), it is an acknowledgement for Bob to continue. Literally, Bob is sending a voice stream to Alice, who is sending ACK (short for acknowledgement) packets back to Bob so he can continue.

This is exactly where you can control bandwidth: if you reduce the bandwidth used by the ACK packets of a download, you reduce that download. And if you let multiple systems fairly send their share of ACKs, they should get a fair share of the downloaded data.

What's even more important is that ACK packets only need a fraction of the upload bandwidth to reach your maximum download bandwidth. We will have to separate ACKs from uploaded data so we don't limit file uploads or similar flows.

Setup §

For the setup I used a laptop with two network cards: one connected to the ISP box and the other on the LAN side. I enabled a DHCP server on the OpenBSD router to automatically give IP addresses, the gateway and the name server addresses to devices on the network.

Basically, you can just plug such a router into your current LAN, disable DHCP on your ISP router and enable DHCP on your OpenBSD system using a different subnet. Both subnets will be available on the network, which makes testing easy: when you want to switch the default router, toggle the DHCP service on both and renew the DHCP leases on your devices. This is extremely easy.
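
As an illustration, a minimal /etc/dhcpd.conf for the LAN side could look like the sketch below; the 192.168.5.0/24 subnet and addresses are only an example, and em0 is assumed to be the LAN interface:

subnet 192.168.5.0 netmask 255.255.255.0 {
	option routers 192.168.5.1;
	option domain-name-servers 192.168.5.1;
	range 192.168.5.50 192.168.5.150;
}

rcctl enable dhcpd
rcctl set dhcpd flags em0
rcctl start dhcpd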


  +---------+
  |  ISP    |
  |  router |
  +---------+
       |
       |
       | re0
  +---------+
  | OpenBSD |
  | router  |
  +---------+
       | em0
       | 
       |
  +---------+
  | network |
  | switch  |
  +---------+

Configuration explained §

Line by line §

I'll first explain all the configuration lines from my /etc/pf.conf file; later in this article you will find a block with the complete rule set.

The following lines are the defaults and can be kept as-is, unless you want to filter what's going in or out; that's another topic, as we only want to apply queues here. Filtering would work as usual.

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

This is where it gets interesting. The upstream router is accessed through the interface re0, so we create a queue matching the link speed of that interface, which is 1 Gb/s. The pf.conf syntax requires bits per second (b/s or bps) and not bytes per second (B/s or Bps), which can be misleading.

queue std on re0 bandwidth 1G

Then, we create a queue that inherits from the parent created before; this represents the whole upload bandwidth toward the Internet. We will make all the traffic going to the Internet pass through this queue.

I've set a bandwidth of 900K with a max of 900K, which means this queue can't let more than 900 kilobits per second through (that is 900/8 = 112.5 kB/s, kilobytes per second). This is the absolute maximum my Internet access allows.

	queue internet parent std bandwidth 900K max 900K

The following lines are all sub-queues dividing the upload usage: we want a separate queue for DNS requests, which must not be delayed to keep responsiveness, but also VoIP and VPN queues to guarantee a minimum for the users.

The web queue is the one likely to pass the most data: if you upload a file through a website, it will go through the web queue. The unknown queue carries the outgoing traffic that doesn't match anything else; it's up to you whether to set a maximum on it.

Finally, there is the ackp queue, split into two other queues; it's the most important part of the setup.

The "bandwidth xxxK" values should sum up to something around the 900K defined as a maximum in the parent, this only mean we target to keep this amount for this queue, this doesn't enforce a minimum or a maximum which can be defined with min and max keywords.

As explained earlier, you can control the download speed by regulating the ACK packets sent; all ACKs will go through the queues ack_web and ack.

ack_web is a queue dedicated to http/https downloads and the other ack queue is used for the other protocols; I preferred to split it in two so the other protocols keep a bit of room for themselves to counterbalance a huge http download (the Steam game platform likes to make things hard here by downloading from several servers simultaneously for maximum bandwidth usage).

The two ack queues combined can't exceed the parent queue, set to 406K here. Finding the correct value is empirical; I'll explain later.

All these queues guarantee a minimum from the router's point of view, roughly speaking per protocol here. Unfortunately, this doesn't guarantee that the computers on the network get a fair share of each queue! This is a crucial point I missed at first when trying to do this a few years ago. The solution is the flow scheduler, enabled with the flows keyword in a queue; it gives a slot to every session on the network, guaranteeing (at least theoretically) that every session gets the same amount of time to send data.

I used "flows" only for ACK, it proved to work perfectly fine for me as it's the most critical part but in fact, it could be applied to every leaf queues.

		queue web      parent internet bandwidth 220K qlimit 100
		queue dns      parent internet bandwidth   5K
		queue unknown  parent internet bandwidth 150K min 100K qlimit 150 default
                queue vpn      parent internet bandwidth 150K min 200K qlimit 100
                queue voip     parent internet bandwidth 150K min 150K
                queue ping     parent internet bandwidth  10K min  10K
                
		queue ackp     parent internet bandwidth 200K max 406K
			queue ack_web parent ackp bandwidth 200K flows 256
			queue ack     parent ackp bandwidth 200K flows 256

Because packets aren't magically assigned to queues, we need some match rules for the job. You may notice the notation with parentheses: the second member inside the parentheses is the queue dedicated to ACK packets.

The VoIP queueing is done a bit broadly: it seems Microsoft Teams and Discord VoIP go through these port ranges. It worked fine in my experience but may depend on the protocols.

match proto tcp from em0:network to any queue (unknown,ack)
match proto tcp from em0:network to any port { 80 443 8008 8080 } queue (web,ack_web)
match proto tcp from em0:network to any port { 53 } queue (dns,ack)
match proto udp from em0:network to any port { 53 } queue dns

# VPN (wireguard, ssh, openvpn)
match proto udp from em0:network to any port { 4443 1194 } queue vpn
match proto tcp from em0:network to any port { 1194 22 } queue (vpn,ack)

# voip (teams)
match proto tcp from em0:network to any port { 3479 50000:50060 } queue voip
match proto udp from em0:network to any port { 3479 50000:50060 } queue voip

# keep some bandwidth for ping packets
match proto icmp from em0:network to any queue ping

Simple rule to enable NAT so devices from the LAN network can reach the Internet.

# NAT to the outside
pass out on egress from !(egress:network) nat-to (egress)

Default OpenBSD rules that can be kept here.

# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

How to choose values §

In the previous section I used absolute values, like 900K or even 406K. A simple way to define them is to upload a big file to the Internet and check the upload rate; I use bwm-ng, but vnstat or even netstat (with the correct combination of flags) could work. Watch your average bandwidth over 10 or 20 seconds while transferring, and use that value, in BITS, as the maximum for the internet queue.

As for the ACK queue, it's a bit trickier and you may tweak it a lot; it's a balance between full download speed and a more conservative one. I lost a bit of download rate for the benefit of keeping room for more overall responsiveness. As before, monitor your upload rate while you download a big file (or even multiple files, to be sure to fill your download link) and you will see how much is used for ACKs. It will certainly take a few tries and guesses before you get the perfect value: too low and the maximum download rate is reduced, too high and your link gets entirely filled when downloading.
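
For reference, the base system can display the live rates too; two possibilities (check the man pages, re0 being the interface facing the ISP here):

# live per-interface traffic, refreshed every second
systat ifstat 1

# or with netstat: bytes in/out on re0, printed every second
netstat -I re0 -b -w 1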

Full configuration §

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

queue std on re0 bandwidth 1G
	queue internet parent std bandwidth 900K min 900K max 900K
		queue web  parent internet bandwidth 220K qlimit 100
		queue dns  parent internet bandwidth   5K
		queue unknown  parent internet bandwidth 150K min 100K qlimit 120 default
                queue vpn  parent internet bandwidth 150K min 200K qlimit 100
                queue voip parent internet bandwidth 150K min 150K
                queue ping parent internet bandwidth 10K min 10K
		queue ackp parent internet bandwidth 200K max 406K
			queue ack_web parent ackp bandwidth 200K flows 256
			queue ack     parent ackp bandwidth 200K flows 256

match proto tcp from em0:network to any queue (unknown,ack)
match proto tcp from em0:network to any port { 80 443 8008 8080 } queue (web,ack_web)
match proto tcp from em0:network to any port { 53 } queue (dns,ack)
match proto udp from em0:network to any port { 53 } queue dns

# VPN (ssh, wireguard, openvpn)
match proto udp from em0:network to any port { 4443 1194 } queue vpn
match proto tcp from em0:network to any port { 1194 22 } queue (vpn,ack)

# voip (teams)
match proto tcp from em0:network to any port { 3479 50000:50060 } queue voip
match proto udp from em0:network to any port { 3479 50000:50060 } queue voip

# ICMP
match proto icmp from em0:network to any queue ping

# NAT
pass out on egress from !(egress:network) nat-to (egress)

# default OpenBSD rules
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

How to monitor §

There is an excellent tool to monitor the queues on OpenBSD: systat, in its queue view. Simply call it with "systat queue"; you can change the refresh rate by pressing "s" and a number. If you see packets being dropped in a queue, you can try to increase the qlimit of that queue, which is the number of packets kept and delayed in the queue (it's a FIFO) before dropping them. The default qlimit is 50 and may be too low.

systat man page anchored to the queues parameter

Conclusion §

I spent a week scrutinizing the pf.conf manual and doing many tests on various hardware until I understood that ACKs were the key and that the flows queueing mode was what I was looking for. As a result, my network is much more responsive and still usable even when someone or some device is using the network without any kind of limit.

The setup can appear a bit complicated, but in the end it's only a few pf.conf lines and the correct values for your Internet access. I chose to make a lot of queues, but simply separating ACKs from the default queue may be enough.

pkgupdate, an OpenBSD script to update packages fast

Written by Solène, on 15 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

pkgupdate is a simple shell script meant for OpenBSD users of the stable branch (people following releases) to easily keep their packages up to date.

It is meant to be run daily by cron on servers, or at boot time for workstations (you can obviously configure it however you prefer).

pkgupdate git repository (web view)

Why ? How ? §

Basically, I've explained all of this in the project repository README file.

I strongly think updating packages at boot time is important for workstation users, so the process has to be done fast and efficiently, without requiring user agreement (by setting this up, the sysadmin agreed).

As for servers, it could be useful to run this a few times a day and use the checkrestart program to notify the admin if some process needs a restart after an update.

Whole setup §

Too long, didn't read? Here are the commands to set the whole thing up!

$ su -
# git clone https://tildegit.org/solene/pkgupdate.git
# cp pkgupdate/pkgupdate /usr/local/bin/
# crontab -e (which will open EDITOR, add the following lines)

### BEGIN this goes into crontab
# for updating on boot
@reboot /usr/local/bin/pkgupdate
### END of this goes into crontab
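
For a server, a daily entry can be used instead of (or in addition to) the @reboot line; the time of day below is arbitrary:

# check for package updates every day at 01:30
30 1 * * * /usr/local/bin/pkgupdate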

Faster packages updates with OpenBSD

Written by Solène, on 06 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

On OpenBSD, pkg_add is not the fastest package manager around, but a simple change can make your regular update checks faster.

Disclaimer: THIS DOES NOT WORK ON -current/development version!

Explanation §

When you configure the mirror url in /etc/installurl, on release/stable installations, some magic happens when you use "pkg_add": the base url gets expanded into full paths usable as PKG_PATH.

http://ftp.fr.openbsd.org/pub/OpenBSD

becomes

http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages-stable/%a/:http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages/%a/

The string built for PKG_PATH is the concatenation (joined by a ":" character) of the URLs of the /packages-stable/ and /packages/ directories for your OpenBSD version and architecture.

This is why, when you use "pkg_info -Q foobar" to search for a package and a package name matches "foobar" in /packages-stable/, pkg_info stops there: it searches for a result in the first URL given by PKG_PATH. When you add -a, like "pkg_info -aQ foobar", it will look in all the URLs available in PKG_PATH.

Why we can remove /packages/ §

Whether your OpenBSD system was freshly installed or upgraded, once your package sets come from the repository of your version, the files in /packages/ on the mirrors will NEVER CHANGE. When you run "pkg_add -u", it's absolutely 100% sure nothing changed in the /packages/ directory, so checking it for changes every time makes no sense.

Using "pkg_add -u" with the defaults makes sense when you upgrade from a previous OpenBSD version because you need to upgrade all your packages. But then, when you look for security updates, you only need to check against /packages-stable/.

How to proceed §

There are two ways, one reusing your /etc/installurl file and the other is hard coding it. Pick the one you prefer.

# reusing the content of /etc/installurl
env PKG_PATH="$(cat /etc/installurl)/%v/packages-stable/%a/" pkg_add -u

# hard coding the url
env PKG_PATH="http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages-stable/%a/" pkg_add -u

Be careful, you will certainly have a message like this:

Couldn't find updates for ImageMagick-6.9.12.2 adwaita-icon-theme-3.38.0 aom-2.0.2 argon2-20190702 aspell-0.60.6.1p10 .....

This is perfectly normal: as pkg_add didn't find those packages in /packages-stable/, it couldn't find either the currently installed version or an update for them. Since we only want updates, this is fine.

Simple benchmark §

On my server running 6.9 with 438 packages I get these results.

  • packages-stable only: 44 seconds
  • all the packages: 203 seconds

I didn't measure the bandwidth usage but it should scale with the time reduction.

Conclusion §

This is a very simple and reliable way to reduce the time and bandwidth required to check for updates on OpenBSD (non -current!). I wonder if it would be a good idea to provide this as a flag for pkg_add, like "only check for stable updates".

Register multiple wifi networks on OpenBSD

Written by Solène, on 05 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

This is a short text to introduce an OpenBSD feature that arrived in 2018 and may not be known by everyone. Wifi interfaces can hold a list of networks and their associated passphrases, so the system automatically connects when a known network is in range.

phessler@ hackathon report including wifi join feature

How to configure §

The relevant configuration information is in the ifconfig man page, look for "WIRELESS DEVICES" and check the "join" keyword.

OpenBSD ifconfig man page anchored on the join keyword

OpenBSD FAQ about wireless LAN

Basically, in your /etc/hostname.if file (with "if" replaced by the interface name, like iwm0, athn0 etc...), list every access point you know and its corresponding password.

join android_hotspot wpakey t00345Y4Y0U
join my-home wpakey goodbyekitty
join friends1 wpakey ilikeb33r5
join favorite-bar-hotspot

This will make the wifi interface try to connect to the first declared network in the file if multiple known access points are available. You can temporarily remove a hotspot from the list using "ifconfig iwm0 -join android_hotspot" if you don't want to connect to it.
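
For completeness, a full /etc/hostname.iwm0 combining the join list with automatic addressing could look like this (a sketch, assuming the address is obtained through DHCP):

join my-home wpakey goodbyekitty
join favorite-bar-hotspot
dhcp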

Automatically lock screen on OpenBSD using xidle and xlock

Written by Solène, on 30 July 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

For security reasons I like my computer screen to get locked when I'm away and forgot to lock it manually, or when I suspend the computer. These features are usually built into desktop environments such as Xfce, MATE or Gnome, but not when you use a simple window manager.

Yesterday, I was looking at the xlock man page and found a recommendation to use it with xidle, a program that triggers a command when the computer is not being used. That was the combination I needed.

xidle §

xidle is simple: you tell it about conditions and it runs a command. Basically, it has three triggers:

  • no activity from the user after $TIMEOUT
  • cursor is moved in a screen border or corner for $SECONDS
  • xidle receives a SIGUSR1 signal

The first trigger is useful for automatic locking, typically when you leave the computer and forget to lock it. The second one is a simple way to trigger your command manually by moving the cursor to the right place, and the last one is the way to script the trigger.

xidle man page, EXAMPLES section showing how to use it with xlock

xlock man page

Using both §

Reusing the example given in the xidle man page, it was easy to build the command line. You would put this in your ~/.xsession file, which contains the instructions to run your graphical session. The following command locks the screen if you leave your mouse cursor in the upper left corner of the screen for 5 seconds or if you are inactive for 1800 seconds (30 minutes); once the screen is locked by xlock, the display is turned off after 5 seconds of standby. It is critical to run this command in the background using "&" so the xsession script can continue.

xidle -delay 5 -nw -program "/usr/X11R6/bin/xlock -dpmsstandby 5" -timeout 1800 &

Resume / Suspend case §

So far, we made the computer lock itself after some time when you are not using it, but what if you suspend the computer and leave? Anyone could open it and it wouldn't be locked. We should trigger the command just before suspending the device, so it is locked upon resume.

This is possible by sending SIGUSR1 to xidle at the right time; apmd (the power management daemon on OpenBSD) is able to execute scripts when suspending (among other events).

apmd man page, FILES section about the supported operations running scripts

Create the directory /etc/apm/ and write /etc/apm/suspend with this content:

#!/bin/sh

pkill -USR1 xidle

Make the script executable with chmod +x /etc/apm/suspend and restart apmd. Now, the screen should automatically get locked when you suspend your computer.

Conclusion §

Locking access to a computer is very important because most of the time we have programs open and security keys unlocked (ssh, gpg, password managers etc...), and if someone gets their hands on it they can access all the files. Locking the screen is a simple but very effective way to prevent this disaster from happening.

Studying the impact of being on Hacker News first page

Written by Solène, on 27 July 2021.
Tags: #network #openbsd #blog

Comments on Fediverse/Mastodon

Introduction §

Since the beginning of 2021, my blog has been popular a few times on the Hacker News website, and that draws a lot of traffic. This is a report of the traffic generated by Hacker News, because I found the topic quite interesting.

Hacker News website: a portal where people submit interesting URLs and members can vote on and comment the links

Data §

From data gathered from the http server access logs, my blog has an average of 1200 visitors and 1100 hits every day.

The blog was featured on hacker news: 16th February, 10th May, 7th July and 24th July. On the following diagram, you can see each spike being an appearance on hacker news.

What's really interesting is the difference between 24th July and the other spikes: only the 24th July appearance made it to the front page of Hacker News. That day, the server received 36 000 visitors and 132 000 hits, and it continued the next day at a slower rate, still much higher than the other spikes.

Visitors/Hits of the blog (generated using goaccess)

The following diagram comes from the tool pfstat, which gathers data from the OpenBSD firewall to produce images. We can see the firewall usually sits at ~35 new TCP states per second; on 24th July, it quickly jumped to 230 states per second for at least 12 hours, and the load stayed above the usual traffic for days.

Firewall states per second

Conclusion §

I don't have much more data than this, but it's already interesting to see the insane traffic and audience that Hacker News can generate. With a static website and enough bandwidth, it wasn't hard to absorb the load, but if you have a dynamic website running code, being featured on Hacker News could be worrying, as it would certainly trigger a denial of service.

Wikipedia article on the "Slashdot effect" explaining this phenomenon

The Old Computer Challenge: 10 days later, what changed?

Written by Solène, on 26 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

Ten days ago I finished the Old Computer Challenge I started; it gathered a dozen people over the days and we had a great week of fun restricting ourselves to a 1 CPU / 512 MB old computer and trying to manage our daily tasks with it.

In my last article about it, I noticed many things about my computer use and reported them. Did it change my habits?

How it changed me §

Noticing that using an old computer improved my life because I was using it less made me realize it was all about self-discipline.

Checking news once a day is enough §

I have accounts on some specialized news websites (bikes, video games) and I used to check them far too often when I didn't know what to do. I'm trying to reduce the number of times I look for news there; if I miss something, I can still read it the next day. I'm also relying more on RSS feeds when available, so I can stop visiting the website entirely.

Forums with low traffic §

Same as for news: I only check the forums I participate in a few times a day for replies or new messages, instead of every 10 minutes.

Shutdown instead of suspend §

I started to shut down my computer in the evening after my news routine. If nothing has to be done on the computer, I find it better to shut it down so I'm not tempted to use it again. I was using suspend/resume before, and it was too easy to just resume the computer to look for a new IRC message. I realized IRC messages can wait.

Read NOW §

The biggest change on the old computer was that when browsing the Internet and blogs, I was actually reading the content instead of bookmarking it and never coming back, or skimming the text very fast looking for some keywords to get a vague idea of it.

On my laptop, when reading content in Firefox, I find it very hard to focus on the text, maybe because of the font, the size, the spacing, the screen contrast, I don't know. Using the Reader mode in Firefox drastically helps me focus on the text. When I land on a page with some interesting text, I switch to Reader mode and read it. HUGE WIN for me here.

I really don't know why I find text easier to read in w3m. I should try it on my laptop, but it's quite a pain to reach a page on some websites; maybe I should open w3m to read the content I want after finding it with Firefox.

Slow is slow §

Sometimes I found my OpenBSD computer to be slow; using a very old computer helped me put that into perspective. Using my time more efficiently, with less task switching, doesn't require as much performance as one would think.

Driving development ideas §

I recently wrote the software "potcasse" to manage podcast distribution. I came to it thinking I wanted to record my podcasts and publish them from the old computer, so I needed a method simple and fast enough to use on that old system.

Conclusion §

The challenge was not always easy, but it brought a lot of fun for a week and, in the end, it changed the way I use computers now. No regrets!

OpenBSD full Tor setup

Written by Solène, on 25 July 2021.
Tags: #openbsd #tor #privacy #security

Comments on Fediverse/Mastodon

Introduction §

If for some reason you want to block all your traffic except traffic going through Tor, here is how to proceed on OpenBSD.

The setup is simple and consists of installing Tor, running the service and configuring the firewall to block every request that doesn't come from the _tor user used by the Tor daemon.

Setup §

Modify /etc/pf.conf to make it look like the following:

set skip on lo

# block OUT traffic
block out

# block IN traffic and allow response to our OUT requests
block return

# allow TCP requests made by _tor user
pass out on egress proto tcp user _tor

If you forgot to save your pf.conf file, the default file is available in /etc/examples/pf.conf if you want to go back to a standard PF configuration.

Here are the commands to type as root to install tor and reload PF:

pkg_add tor
rcctl enable tor
rcctl start tor
pfctl -f /etc/pf.conf

Configure your programs to use the SOCKS5 proxy localhost:9050. If you need to reach a remote server or service of yours, you will need that server to run tor and define hidden services so you can access them through Tor.
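
As an example of such a configuration, here is what it could look like with curl (the URL is only a placeholder); most network programs have an equivalent SOCKS5 setting:

# fetch a page through the local Tor SOCKS5 proxy, resolving the name through Tor too
curl --socks5-hostname localhost:9050 https://example.com/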

Privacy considerations in the local area network §

Please consider that if you are using DHCP to obtain an IP address on the network, the hostname of your system is shared, and so is its MAC address.

As for the MAC address, you can use "lladdr random" in your interface configuration file to have a new random MAC address on every boot.
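
For example, in a /etc/hostname.if file (shown here with DHCP addressing, adapt it to your own interface and setup):

lladdr random
dhcp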

As for the hostname, I didn't test it, but it should work: rewrite your /etc/myname file with a new value at each boot, meaning the next boot will use that new value. To do so, you could use an /etc/rc.local script like this:

#!/bin/sh

grep -v ^# /usr/share/misc/airport | cut -d ':' -f 1 | sort -R | head -n 1 > /etc/myname

The script takes a random name out of the 2000+ entries of the airport list (every airport in the list has been visited by an OpenBSD developer before being added). This still means you have a 1/2000 chance of getting the same name upon reboot; if you prefer more entropy, you can make a script generating a long random string.

Privacy considerations on the Web §

You shouldn't use Tor with just any program: depending on the software used, your IP address may leak, as it may not be built with privacy in mind. The Tor Browser (a modified Firefox including Tor and privacy settings) can be fully trusted to only share/send what is required and no more.

The point of this setup is to block leaking programs and only allow Tor to reach the Internet; then it's up to you to use Tor wisely. I recommend reading the Tor documentation to understand how it works.

Tor project documentation

Potential issues §

The only issue I can imagine right now is connecting to a network with a captive portal to reach the Internet: you would have to disable the PF rules (or PF entirely), at the risk of some programs leaking data.

Same setup with I2P §

If you prefer using I2P instead to reach external services, replace _tor by _i2p or _i2pd in the pf.conf rule, depending on which implementation you use.

Conclusion §

I'm not a huge Tor user but for the people who need to be sure non-Tor traffic can't go out, this is a simple setup to make.

Why self hosting is important

Written by Solène, on 23 July 2021.
Tags: #fediverse #selfhosting #chatons #life #internet

Comments on Fediverse/Mastodon

Introduction §

Computers are amazing tools and the Internet is an amazing network; we can share everything we want with anyone connected. For now, most of the Internet is neutral, meaning ISPs have to give their customers access to the Internet without making choices depending on the destination (like faster access to some websites).

This is important to understand: it means you can have your own website, your own chat server or your own gaming server, hosted at home or on a dedicated server you rent; this is called self hosting. I suppose not everyone will agree with putting the self hosting label on a rented dedicated server, and it's true this is a grey area. The opposite of self hosting is relying on a company to do the job for you, under their conditions, free or not.

What is self hosting exactly? §

Self hosting is about freedom: you can choose which server you want to run, which version, which features and which configuration you want. If you self host at home, you can also pick the hardware to match your needs (more RAM? More disk? RAID?).

Self hosting is not a perfect solution: you have to buy the hardware, replace faulty components and do the system maintenance to keep the software part alive.

Why does it matter? §

When you rely on a company or a third party offering services, you become tied to their ecosystem and their decisions. A company can stop what you rely on at any time, and they can decide to suspend your account at any time without explanation. Companies will try to make their services good and appealing, no doubt about it, and then lock you into their ecosystem. For example, if you move all your projects to Github and start using Github services deeply (more than a simple git repository), moving away from Github will be complicated because you don't have _reversibility_, which means the right to get out and receive help from your service provider to move away without losing data or information.

Self hosting empowers the users instead of making a profit from them. Self hosting is better when it's done as a community: a common mail server for a group of people and a communication server federated to a bigger network (such as XMPP or Matrix) are a good way to create a resilient Internet without giving away your rights to capitalist companies.

Community hosting §

Asking everyone to host their own services is not even utopian but rather silly: we don't need everyone to run their own server for their own services. We should rather build a constellation of communities that connect using federated protocols such as Email, XMPP, Matrix or ActivityPub (the protocol used by Mastodon, Pleroma and Peertube).

In France, there is a great initiative named CHATONS (the French word for KITTENS) gathering associative hosters that meet some prerequisites, like having multiple sysadmins to avoid relying on a single person.

[English] CHATONS website

[French] Site internet du collectif CHATONS

In Catalonia, a similar initiative has started:

[Catalan] Mixetess website

Quality of service §

I suppose most of my readers will argue that self hosting is nice but can't compete with "cloud" services; I admit this is true. Companies put a lot of money into making great services to get customers and earn money; if their services were bad, they wouldn't exist for long.

But not using open source and self hosting won't make the alternatives to your service provider any better; you become part of the problem by feeding the system. For example, Google Mail (GMAIL) is now so big that they can decide which domains are allowed to reach them and which aren't. It is such a problem that most small email servers can't send emails to Gmail without being treated as spam, and we can't do anything about it; the more users they have, the less they care about other providers.

Great achievements can be made with open source federated services like Peertube: one can host videos on a Peertube instance and follow the local rules of that instance, whereas some big company could just disable your video because some automatic detection script found a piece of music or an inappropriate picture.

Giving your data to a company and relying on their services makes you lose some freedom. If you don't think that's true, that's okay; freedom is a vague concept and it comes in many degrees on a large scale.

Tips for self hosting §

Here are a few tips if you want to learn more about hosting your own services.

  • ask people you trust if they want to participate; it's better to have more than one person managing the servers.
  • you don't need to be an IT professional, but you need to understand you will have to learn.
  • backups are not a luxury, they are mandatory.
  • asking for money (as a contribution or a requirement) is fine as long as you can justify why (a peertube server can be very expensive to run, for example).
  • people often throw away old hardware; ask friends or relatives if they have old unused hardware. You can easily repair "that old Windows laptop I replaced because the wifi stopped working" and use it as a server.
  • electricity usage must be considered, but on the other hand, buying brand new hardware to save 20 W is not necessarily more ecological.
  • some services, such as email servers, can't be hosted on most ISP connections due to specific requirements.
  • you will certainly need to buy a domain name.
  • redundancy is overkill most of the time; shit happens, but with redundant servers shit happens twice as often.

IndieWeb website: a community proposing alternatives to the "corporate web".

There is a Linux distribution dedicated to self hosting named "Yunohost" (Y U No Host) that makes the task really easy and gives you a beginner-friendly interface to manage your own services.

Yunohost website

Yunohost documentation "What is Yunohost ?"

Conclusion §

I've been self hosting since I first understood, 15 years ago, that running a web server was the only thing I needed to have my own PHP forum. I mostly keep this blog alive to show and share my experiments, which most of the time happen when playing with my self hosted servers.

I have a strong opinion on the subject: hosting your own services is a fantastic way to learn new skills or perfect them, but it's also important for freedom. In France we even have associative ISPs, and even if they are small, their existence forces the big ISP companies to be transparent about their processes and interoperability.

If you disagree with me, this is fine.

Self host your Podcast easily with potcasse

Written by Solène, on 21 July 2021.
Tags: #openbsd #scripts #podcast

Comments on Fediverse/Mastodon

Introduction §

I wrote « potcasse », pronounced "pot kas", a tool to help people publish and self host a podcast easily without using a third party service. I found it very hard to find information about self hosting a podcast and making it easily available in "apps" / podcast players, so I wrote potcasse.

Where to get it §

Get the code from git and run "make install" or just copy the script "potcasse" somewhere available in your $PATH. Note that rsync is a required dependency.

Gitea access to potcasse

direct git url to the sources

What is it doing? §

Potcasse will gather your audio files with some metadata (date, title) and some information about your podcast (name, address, language), and will create an output directory ready to be synced to your web server.

Potcasse creates an RSS feed compatible with podcast players, but also a simple HTML page with a summary of your episodes, your logo and the podcast title.

Why potcasse? §

I wanted to self host my podcast but I only found Wordpress, Nextcloud or complex PHP programs to do the job; I wanted something static, like my static blog, that would work securely on any hosting platform.

How to use it §

The process is simple for initialization:

  • init the project directory using "potcasse init"
  • edit the metadata.sh file to configure your Podcast

Then, for every new episode:

  • import audio files using "potcasse episode" with the required arguments
  • generate the html output directory using "potcasse gen"
  • use rsync to push the output directory to your web server

There is a README file in the project that explains how to configure it. Once deployed, you should have an index.html file with links to your episodes and also a link to the RSS feed that can be used in podcast applications.
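
To give an idea of the workflow, here is a minimal sketch of a full session; the exact arguments of "potcasse episode" are described in the README, and the "output/" directory name and remote path below are hypothetical:

potcasse init
vi metadata.sh                 # fill in the podcast name, address and language
potcasse episode ...           # import an audio file, see the README for the arguments
potcasse gen
rsync -av output/ user@server:/var/www/htdocs/podcast/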

Conclusion §

It took a few hours of work to get the job done; I'm quite proud of the result and I switched my podcast (only 2 episodes at the moment...) to it in a few minutes. I designed the command lines and parameters while trying to use the tool as if it were finished, which helped me a lot to choose what is required, what is optional, in which order things happen, and how I would like to manually make changes as an author, etc.

I hope you will enjoy this simple tool as much as I do.

Simple scripts I made over time

Written by Solène, on 19 July 2021.
Tags: #openbsd #scripts #shell

Comments on Fediverse/Mastodon

Introduction §

I wanted to share a few scripts of mine for some time, here they are!

Scripts §

Over time I've written a few scripts to help me with some tasks; they are often bound to a key or at least placed in my ~/bin/ directory, which I add to my $PATH.

Screenshot of a region and upload §

When I want to share something displayed on my screen, I use my simple "screen_up.sh" script (super+r) that will do the following:

  • use scrot and let me select an area on the screen
  • convert the file to jpg, also compress the png using pngquant, and pick the smallest of the files
  • upload the file to my remote server into a directory where files older than 3 days are cleaned (using find -ctime +3 -type f -delete, a cron sketch is shown after the script below)
  • put the link in the clipboard and show a notification

This simple script has been improved a lot over time, like getting feedback on the result or picking the smallest file from the various format combinations.

#!/bin/sh
test -f /tmp/capture.png && rm /tmp/capture.png

# select a region of the screen and save it as png
scrot -s /tmp/capture.png

# produce a compressed png (capture-fs8.png) and a jpg version
pngquant -f /tmp/capture.png
convert /tmp/capture-fs8.png /tmp/capture.jpg

# keep the smallest of the generated files
FILE=$(ls -1Sr /tmp/capture* | head -n 1)
EXTENSION=${FILE##*.}

# name the remote file after the md5 of its content
MD5=$(md5 -b "$FILE" | awk '{ print $4 }' | tr -d '/+=' )

scp "$FILE" perso.pw:/var/www/htdocs/solene/i/${MD5}.${EXTENSION}
URL="https://perso.pw/i/${MD5}.${EXTENSION}"
echo "$URL" | xclip -selection clipboard

notify-send -u low "$URL"
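
On the server side, the 3 days cleanup mentioned above can be a simple daily cron job; a minimal sketch, assuming the same remote directory as in the script above (the schedule is only an example):

# delete uploaded screenshots older than 3 days, every day at 04:00
0 4 * * * find /var/www/htdocs/solene/i/ -ctime +3 -type f -delete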

Uploading a file temporarily §

My second most used script is a file uploading utility. It renames the file using the md5 hash of its content while keeping the extension, and uploads it into a directory on my server where it will be deleted after a few days by a crontab. Once the transfer is finished, I get a notification and the URL in my clipboard.

#!/bin/sh
FILE="$1"

if [ -z "$1" ]
then
        echo "usage: $0 file"
        exit 1
fi

# name the remote file after the md5 of its content, keeping the extension
MD5=$(md5 -b "$1" | awk '{ print $NF }' | tr -d '/+=' )
NAME=${MD5}.${FILE##*.}

scp "$FILE" perso.pw:/var/www/htdocs/solene/f/${NAME}

URL="https://perso.pw/f/${NAME}"
echo -n "$URL" | xclip -selection clipboard

notify-send -u low "$URL"

Sharing some text or code snippets §

While I can easily transfer files, sometimes I need to share a snippet of code or a whole file, but I want to ease the reader's work and display the content in an HTML page instead of sharing a file that would be downloaded. I don't put those files in a cleaned directory, and I require a name to give potential readers some clue about the content. The remote directory contains the highlight.js library used for syntax highlighting, hence I pass the language of the text to enable the coloration.

#!/bin/sh

if [ "$#" -eq 0 ]
then
        echo "usage: language [name] [path]"
        exit 1
fi

cat > /tmp/paste_upload <<EOF
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
</head>
<body>
        <link rel="stylesheet" href="default.min.css">
        <script src="highlight.min.js"></script>
        <script>hljs.initHighlightingOnLoad();</script>

        <pre><code class="$1">
EOF

# ugly but it works
cat /tmp/paste_upload | tr -d '\n' > /tmp/paste_upload_tmp
mv /tmp/paste_upload_tmp /tmp/paste_upload

if [ -f "$3" ]
then
    cat "$3" | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' >> /tmp/paste_upload
else
    xclip -o | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' >> /tmp/paste_upload
fi


cat >> /tmp/paste_upload <<EOF


</code></pre> </body> </html>
EOF


if [ -n "$2" ]
then
    NAME="$2"
else
    NAME=temp
fi

FILE=$(date +%s)_${1}_${NAME}.html

scp /tmp/paste_upload perso.pw:/var/www/htdocs/solene/prog/${FILE}

echo -n "https://perso.pw/prog/${FILE}" | xclip -selection clipboard
notify-send -u low "https://perso.pw/prog/${FILE}"

Resize a picture §

I never remember how to resize a picture, so I made a one line script to avoid having to remember it; I could have used a shell function for this kind of job.

#!/bin/sh

if [ -z "$2" ]
then
	PERCENT="40%"
else
	PERCENT="$2"
fi

convert -resize "$PERCENT" "$1" "tn_${1}"

Latency meter using DNS §

Because UDP requests are not reliable, they make a good probe for testing network access reliability and performance. I used this as part of my stumpwm window manager bar to get a history of my internet access quality while on a high speed train.

The output uses a character to tell whether the latency is under a first threshold (it works fine), between the two thresholds (poor quality), higher than the second one (high latency), or whether there is a network failure.

The default timeout is 1s. If the query works, under 60ms you get a "_", between 60ms and 150ms you get a "-" and beyond 150ms you get a "¯"; if the network is down you see a "N".

For example, if your quality gets worse until it breaks and then recovers, it may look like this: _-¯¯NNNNN-____-_______. My LISP code took care of accumulating the values and only retaining the last n values I wanted as history.

Why would you want to do that? Because I was bored on a train. But also, when the network is fine, it's time to sync mails or retry that failed web request to get an important documentation page.

#!/bin/sh

# query a DNS resolver with a 1 second timeout and store the output for parsing,
# the exit code tells whether the network works at all
dig perso.pw @9.9.9.9 +timeout=1 > /tmp/latencecheck

if [ $? -eq 0 ]
then
        time=$(awk '/Query time/{
                if($4 < 60) { print "_";}
                if($4 >= 60 && $4 <= 150) { print "-"; }
                if($4 > 150) { print "¯"; }
        }' /tmp/latencecheck)
        echo "$time" | tee /tmp/latenceresult
else
        echo "N" | tee /tmp/latenceresult
        exit 1
fi

Conclusion §

Those scripts are part of my habits; I'm a bit lost when I don't have them because I'm used to having them at hand. While they don't bring huge benefits, they are quality of life, and it's fun to hack on small, easy pieces of programs that achieve a simple purpose. I'm glad to share them.

The Old Computer Challenge: day 7

Written by Solène, on 16 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of the last day of the old computer challenge.

A journey §

I'm writing this text while in the last hours of the challenge, I may repeat some thoughts and observations already reported in the earlier posts but never mind, this is the end of the journey.

Technical §

Let's speak about tech! My computer is 16 years old, but I've been able to accomplish most of what I enjoy on a computer: IRC, reading my mails, hacking on code and reading some interesting content on the internet. So far, I've been quite happy with my computer; it worked without any trouble.

On the other hand, there were many tasks that didn't work at all:

  • Browsing "modern" websites relying on JavaScript: JavaScript capable browsers don't work on my combination of operating system / CPU architecture. I'm quite sure the challenge would have been easier with an old amd64 computer, even with low memory.
  • Watching videos: for some reason, mplayer in full screen triggered a weird issue where the computer stopped responding; the cursor was still moving but nothing more was possible. However it worked correctly for most videos.
  • Listening to my big FLAC music files: while doing so I wasn't able to do anything else because of the CPU usage, and sitting at my desk just to listen to music was not an interesting option.
  • Using Go, Rust and Node programs, because there is no implementation of these languages on OpenBSD PowerPC 32-bit.

On the hardware side, here is what I noticed:

  • 512MB is quite enough as long as you stay focused on one task; I rarely needed to use swap, even with multiple programs open.
  • I don't miss spinning hard drives at all; in terms of speed and noise, I'm happy they are gone from my newer computers.
  • Using an external pointing device (mouse/trackball) is so much better than the bad touchpad.
  • Modern screens are so much better in terms of resolution, colours and contrast!
  • The keyboard is pleasant but lacks a "Super" modifier key, which leads to key binding overlaps between the window manager and programs.
  • Suspend and resume don't work on OpenBSD, so I had to boot the computer every time; it takes a few minutes to do so and requires a manual step to unlock /home, which adds delay to the boot sequence.

Despite everything the computer was solid, but modern hardware is so much more pleasant to use in many ways, not only in terms of raw speed. Especially when you buy a laptop, you should pay attention to the specs beyond the CPU/memory, like the case, the keyboard, the touchpad and the screen; if you use your laptop a lot, they are as important as the CPU itself in my opinion.

Thanks to the programs w3m, catgirl, luakit, links, neomutt, claws-mail, ls, make, sbcl, git, rednotebook, keepassxc, gimp, sxiv, feh, windowmaker, fvwm, ratpoison, ksh, fish, mplayer, openttd, mednafen, rsync, pngquant, ncdu, nethack, goffice, gnumeric, scrot, sct, lxappearence, tootstream, toot, OpenBSD and all the other programs I used for this challenge.

Human §

Because I always felt this challenge was a journey to understand my use of computer, I'm happy of the journey.

To make things simple, here is a bullet list of what I noticed

  • Going to sleep earlier instead of waiting for something to happen.
  • I've spent a lot less time on my computer, but at the same time I don't notice it much in terms of what I've done with it; this means I was more "productive" (writing blog posts, reading content, hacking) and not idling.
  • I didn't participate into web forums of my communities :(
  • I cleared things in my todo list on my server (such as replacing Spamassassin by rspamd and writing about it).
  • I've read more blogs and interesting texts than usual, and I did it without switching to another task.
  • JavaScript is not ecological because it prevents older hardware from being usable. If I didn't need JavaScript, I guess I could keep using this laptop.
  • I got time to discover and practice meditation.
  • Less open source contribution because compiling was too slow.

I'm sad and disappointed to notice that I need to work on my self discipline (that's why I started to learn about meditation) to waste less time on my computer. I will really work on it; I can see I'm still able to do the same tasks while spending less time doing nothing / idling / switching tasks.

I will take care to support old systems with my contributions, like keeping my blog working perfectly fine in console web browsers, but also by trying to educate people about this.

I've met a lot of interesting people on the IRC channel, and for this sole reason I'm happy I did the challenge.

Conclusion §

Good hardware is nice but not always necessary; it's up to the developers to make good use of the hardware. While some requirements can legitimately evolve over time, like cryptography or video codecs, programs shouldn't become more and more resource hungry just because more resources are available. We have to learn how to do MORE with LESS with computers, and that is something I wanted to highlight with this challenge.

The Old Computer Challenge: day 6

Written by Solène, on 15 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report §

This is the 6th day of the challenge! Time went quite fast.

Mood §

I got quite bored two days ago because it was very frustrating not to be able to do everything I want. I wanted to contribute to OpenBSD, but the computer is way too slow to do anything useful beyond editing files.

However, it got better yesterday, the 5th day of the challenge, when I decided to move away from claws-mail and switch to neomutt for my emails. I had updated claws-mail to the freshly released version 4.0.0 and started updating the OpenBSD package, but claws-mail switched to gtk3 and it became too slow for this computer.

I started using a mouse on the laptop and it made some tasks more enjoyable. I don't need it much because most of my programs are in a console, but every time I need the cursor it's more pleasant to use a mouse with 3 buttons and a wheel.

Software §

The computer is the sum of its software. Here is a list of the software I'm using right now:

  • fvwm2: window manager, it doesn't misbehave with full screen programs, it's light enough and I like it.
  • neomutt: mail reader. I always hated mutt/neomutt because of the complexity of their configuration file; fortunately I had some memories from when I used it, so I was able to build a nice simple configuration and took the opportunity to update my Neomutt cheatsheet article.
  • w3m: in my opinion it's the best web browser in a terminal :) the bookmark feature works great and using https://lite.duckduckgo.com/lite for searches works perfectly fine. I use the flavor with image rendering support, however I have mixed feelings about it because pictures take time to download and render, and they always render at their original size, which is a pain most of the time.
  • keepassxc: my usual password manager, it has a command line interface to manage the entries from a shell after unlocking the database.
  • openttd: a game of legend that is relaxing and also very fun to play, runs fine after a few tweaks.
  • mastodon: tootstream, but it's quite limited sometimes, so I also access Mastodon on my phone with Tusky from F-Droid; they make a great combination.
  • rednotebook: I was already using it on this computer when it was known as the "offline computer". This program is a diary where I write about my day when I feel bad (angry, depressed, bored); it doesn't have many entries in it, but it really helps me to write things down. While the program is very heavy and could be considered bloated for the purpose of writing about your day, I just like it because it works and it looks nice.

I'm often asked how I deal with YouTube: I just don't. I don't use YouTube, so problem solved :-) I use no streaming services at home.

Breaking the challenge §

I had to use my regular computer to order a pizza because the stupid pizza company doesn't take orders by phone and they are the only pizza shop around... :( I could have done it using my phone, but I don't really trust my phone's web browser to support all the steps of the process.

I could easily handle using this computer for longer if I didn't have so many requirements on web services, mostly for ordering products I can't find locally (pizza doesn't count here), and I hate using my phone for web access because I hate smartphones most of the time.

If I had used an old i386 / amd64 computer I would have been able to use a webkit browser even if it was slow, but on PowerPC the state of JavaScript capable web browsers is complicated and currently none work for me on OpenBSD.

Filtering spam using Rspamd and OpenSMTPD on OpenBSD

Written by Solène, on 13 July 2021.
Tags: #openbsd #mail #spam

Comments on Fediverse/Mastodon

Introduction §

I recently used SpamAssassin to get rid of the spam I started to receive, but it proved quite useless against some kinds of spam, so I decided to give rspamd a try and write about it.

rspamd can filter spam but also sign outgoing messages with DKIM; here I will only care about the anti spam aspect.

rspamd project website

Setup §

The rspamd setup for spam filtering was incredibly easy on OpenBSD (6.9 for me when I wrote this). We need to install the rspamd service, the connector for OpenSMTPD, and redis, which is mandatory to make rspamd work.

pkg_add opensmtpd-filter-rspamd rspamd redis
rcctl enable redis rspamd
rcctl start redis rspamd

Modify your /etc/mail/smtpd.conf file to add this new line:

filter rspamd proc-exec "filter-rspamd"

And modify your "listen on ..." lines to add "filter "rspamd"" to it, like in this example:

listen on em0 pki perso.pw tls auth-optional   filter "rspamd"
listen on em0 pki perso.pw smtps auth-optional filter "rspamd"

Restart smtpd with "rcctl restart smtpd" and you should have rspamd working!

Using rspamd §

Rspamd automatically checks multiple criteria to assign a score to each incoming email. Above a high threshold the email is rejected; between a lower threshold and that limit, it is only tagged with an "X-Spam" header.

If you want to automatically put tagged emails in your Junk directory, either use a sieve filter on the server side or a local filter in your email client. The sieve filter would look like this:


if header :contains "X-Spam" "yes" {
        fileinto "Junk";
        stop;
}

Feeding rspamd §

If you want better results, the filter needs to learn what is spam and what is not spam (called ham). You need to regularly feed it new emails to increase its effectiveness. In my example I have a single user with a Junk directory and an Archives directory within the maildir storage, and I use crontab to run the learning on mails newer than 24h.

0  1 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec rspamc learn_ham {} +
10 1 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec rspamc learn_spam {} +

Getting statistics §

rspamd comes with very nice reporting tools: you can get a WebUI on port 11334, which listens on localhost by default, so you would need to tune rspamd to listen on other addresses or use an SSH tunnel.
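
For example, a minimal SSH tunnel to reach the WebUI from your own machine could look like this (the hostname is hypothetical):

# forward local port 11334 to the rspamd WebUI listening on the mail server
ssh -N -L 11334:127.0.0.1:11334 user@mailserver.example.org
# then browse http://127.0.0.1:11334 on the local machine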

You can get the same statistics on the command line using the command "rspamc stat" which should have an output similar to this:

Results for command: stat (0.031 seconds)
Messages scanned: 615
Messages with action reject: 15, 2.43%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 9, 1.46%
Messages with action greylist: 6, 0.97%
Messages with action no action: 585, 95.12%
Messages treated as spam: 24, 3.90%
Messages treated as ham: 591, 96.09%
Messages learned: 4167
Connections count: 611
Control connections count: 5190
Pools allocated: 5824
Pools freed: 5801
Bytes allocated: 31.17MiB
Memory chunks allocated: 158
Shared chunks allocated: 16
Chunks freed: 0
Oversized chunks: 575
Fuzzy hashes in storage "rspamd.com": 2936336370
Fuzzy hashes stored: 2936336370
Statfile: BAYES_SPAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 344; users: 1; languages: 0
Statfile: BAYES_HAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 3822; users: 1; languages: 0
Total learns: 4166

Conclusion §

rspamd is for me a huge improvement in terms of efficiency: when I tag an email as spam, the next similar looking one immediately goes into Spam after the learning cron runs, it uses less memory than SpamAssassin and reports nice statistics. My SpamAssassin setup was rejecting emails directly, so I didn't have a good view of its effectiveness, but I kept receiving too many identical messages over weeks that were never filtered; for now rspamd has proved better here.

I recommend looking at the configuration files: they are all disabled by default but contain many comments with explanations, which is a nice introduction to the features of rspamd. I preferred to keep the defaults and see how it goes before tweaking more.

The Old Computer Challenge: day 3

Written by Solène, on 12 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of the third day of the old computer challenge.

Community §

I got a lot of feedback from the community; the IRC channel #old-computer-challenge is quite active and it seems a small community may start here. I received help with various questions I had regarding the programs I'm now using.

Changes §

Web is a pity §

The computer I use has a different processor architecture than the one we are used to. Our computers are now amd64 (even the Intel ones, amd64 is the name of the instruction set of those processors) or arm64 for most tablets/smartphones or small boards like the Raspberry Pi; my computer is a PowerPC, which disappeared from the market around 2007. It is important to know that because most language virtual machines (for interpreted languages) require some architecture specific instructions to work, and nobody cares much about PowerPC in the JavaScript land (which could be considered a waste of time given the user base), so I'm left without a JS capable web browser because they would instantly crash. cwen@ at the OpenBSD project is pushing hard to fix many programs on PowerPC and she is doing awesome work; she got JS browsers to work through webkit, but for some reason they are broken again, so I have to do without them.

w3m works fine; I learned about using bookmarks in it, which makes w3m a lot more usable for daily stuff. I've been able to log in on most websites, but I faced some buttons not working because they triggered a JavaScript action. I'm using it with built-in support for images, but it makes loading times longer and pictures are displayed at their real size, which can screw up the display; I think I'll disable the image support...

Long live to the smolnet §

What is the smolnet? It's a word describing what is not on the Web, mostly content from Gopher and Gemini. I like that word because it represents an alternative I've been contributing to for years, and the word carries a lot of meaning.

Gopher and Gemini are way saner to browse: thanks to the standard concept of one item per line and no styling, visiting one page feels like all the others, and I don't have to look for where the menu is or wait for the page to render. I've been recommended the av-98 terminal browser and it has a very lovely feature named "tour": you can accumulate links from pages you visit, add them to the tour, and then visit the next accumulated link (like a first in, first out queue). This avoids cumbersome tabs or adding bookmarks for later viewing and forgetting about them.

Working on OpenBSD ports §

I'm working on updating the claws-mail mail client package on OpenBSD; a new major release came out on the first day of the challenge, and unfortunately working on it is extremely painful on my old computer. Compiling was long but only had to be done once; now I need to sort out library includes, and running the built-in checks of the ports tree takes like 15 minutes, which is really not fun.

I hate the old hardware §

While I like this old laptop, I'm starting to hate it too. The touchpad is extremely bad and moves by increments of 5px or so, which is extremely imprecise, especially for copy/pasting text or playing OpenTTD, not to mention again that it only has a left click button. (update: it has been fixed thanks to anthk_ on IRC using the command xinput set-prop /dev/wsmouse "Device Accel Constant Deceleration" 1.5)

The screen has a very poor contrast, I can deal with a 1024x768 resolution and I love the 4:3 ratio, but the lack of contrast is really painful to deal with.

The mechanical hard drive is slow, I can cope with that, but it's also extremely noisy; I had forgotten the crispy noises of old HDDs. It's so annoying to my ears... And talking about noise, I often limit the CPU speed of my computer to prevent the temperature from rising too high and triggering the super loud small CPU fan. It is really super loud and doesn't seem very effective, maybe the thermal paste is old...

A few months ago I wanted to replace the HDD, but I looked up the HDD replacement procedure for this laptop on the iFixit website and there are like 40 steps to follow, plus an Apple specific screwdriver; the procedure basically consists of removing all the parts of the laptop to access the HDD, which seems to be the piece of hardware in the most remote place of the case. This is insane, I'm used to working on Thinkpad laptops where after removing the 4 usual screws you get access to everything; even my T470's internal battery is removable.

All of these annoyances are not even related to the computer's power but simply to how much modern hardware has evolved; they are quality of life improvements because they don't make the computer more or less usable, just more pleasant. Silence, good and larger screens, and multi finger gesture touchpads make the computer more comfortable to use.

Taking my time §

Because context switching costs a lot of time, I take my time to read content and appreciate it in one shot instead of bookmarking it after reading a few lines and never opening the bookmark again. I was quite happy to see I'm able to focus on something for more than 2 minutes, and I'm a bit relieved in that regard.

Psychological effect §

I'm quite sad to see that an older system forcing restrictions on me can improve my focus; this means I'm lacking self discipline and that I've wasted too much time of my life doing useless context/task switching. I don't want to rely on some sort of limitation to guard my sanity, I have to work on this on my own; maybe meditation could help me get my patience back.

End of report of day 3 §

I'm meeting friendly people who share what I like, and I'm realizing my dependency on services and my lack of mental self discipline. The challenge is a lot harder than I expected, but if it were too easy it wouldn't be a challenge. I already know I'll be happy to get back to my regular laptop, but I also think I'll change some habits.

The Old Computer Challenge: day 1

Written by Solène, on 10 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of my first day of the old computer challenge

My setup §

I'm using an Apple iBook G4 running the development version of OpenBSD macppc. Its specs are: one G4 1.3GHz CPU, 512 MB of memory and an old 40 GB IDE HDD. The screen has a 4:3 ratio with a 1024x768 resolution. The touchpad has only one button doing left click and doesn't support multi finger gestures (can't scroll, can't click). The battery still holds a 1h40 charge, which is very surprising.

About the software: I was using the ratpoison window manager but I got issues with two GUI applications, so I moved to cwm, but now I have other issues with cwm. I may switch to Window Maker, or return to ratpoison, which worked very well except for those 2 programs, and switch to cwm when I need them... I use xterm as my terminal emulator because "it works" and it doesn't use much memory; usually I use Sakura, but with 32 MB of memory for each instance versus 4 MB for xterm, it's important to save memory now. I usually run only one xterm with a tmux inside.

Same for the shell: I've been using fish since the beginning of 2021, but each instance of fish uses 9 MB, which is quite a lot, because every time I split my tmux a new shell is spawned and an extra 9 MB is used. ksh uses only 1 MB per instance, which is 9x less than fish; however, for some operations I still switch to fish manually because it's a lot more comfortable thanks to its lovely completion.

Tasks §

Tasks on the day and how I complete them.

Searching on the internet §

My favorite browser on such an old system is w3m with image support in the terminal; it's super fast and the rendering is very good. I use https://html.duckduckgo.com/html/ as my search engine.

The only minor issue with w3m is that the key bindings are absolutely not straightforward, but you only need to know a few of them to use it and they are all listed in the help.

Using mastodon §

I spend a lot of time on Mastodon to communicate with people. I usually use my web browser to access Mastodon, but I can't here because JavaScript capable web browsers take all the memory and often crash, so I can only use them as a last resort. I'm using the terminal user interface tootstream, but it has some limitations and my high traffic account doesn't go well with it. I'm setting up brutaldon, a local program that gives access to Mastodon through an old style website; I already wrote about it on my blog if you want more information.

Listening to music §

Most of my files are FLAC encoded and extremely big; the computer can decode them fine, but it uses most of the CPU. As OpenBSD doesn't support mounting samba shares and my music is on my NAS (in addition to locally on my usual computer), I have to copy the files locally before playing them.

One solution is to use musikcube on my NAS and my laptop in a server/client setup, which makes the NAS transcode the music I want to play on the laptop on the fly. Unfortunately there is no package for musikcube yet; I started compiling it on my old laptop and I suppose it will take a few hours to complete.

Reading emails §

My favorite email client at the moment is claws-mail and fortunately it runs perfectly fine on this old computer. The lack of right click is sometimes a problem, but a clever workaround is to run "xdotool click 3" to tell X to do a right click where the cursor is; it's not ideal, but I rarely need it so it's ok. The small screen is not ideal to deal with huge piles of mail, but it works so far.

IRC §

My IRC setup is a tmux with as many catgirl (IRC client) instances as networks I'm connected to, running on a remote server, so I just connect there with ssh and attach to the tmux. No problem here.

Writing my blog §

The process is exactly the same as usual. I open a terminal, start my favorite text editor, create the file and write in it, then I run aspell to check for typos, then I run "make" so my blog generator creates the html/gopher/gemini versions and dispatches them to the various servers where they belong.

How I feel §

It's not that easy! My reliance on web services hurts here; at least I found a website providing weather forecasts that works in w3m.

I easily focus on a task because switching to something else is painful (screen redrawing takes some time, the HDD is noisy). I found a blog from a reader linking to other blogs; I enjoyed reading them all, while I'm pretty sure I would usually just bookmark it in Firefox and switch to opening 10 tabs to see what's new on some websites.

Obsolete in the IT crossfire

Written by Solène, on 09 July 2021.
Tags: #life #linux #unix #openbsd

Comments on Fediverse/Mastodon

Preamble §

This is not an article about some tech, but more me sharing feelings about my job, my passion and IT. I first met a Linux system in the early 2000s and I didn't really understand what it was; I learned it the hard way by wiping Windows on the family computer (which was quite an issue), and since that time I've had a passion for computers. I made a lot of mistakes that made me progress and learn more, and the more I learned, the more I saw the amount of knowledge I was missing.

Anyway, I finally reached a decent skill level, if I may say so, but I started early and so my skills are tied to that early Linux ecosystem. Tools are evolving, Linux is morphing into something different a bit more every year, practices are evolving with the "Cloud". I feel lost.

Within the crossfire §

I've met many people along my ride in open source and I think we can distinguish two schools (of course I know it's not that black and white): the people (like me) who enjoy the traditional ecosystem, and the other group that comes from the Cloud era. It is quite easy to bash the opposite group, and I feel sad when I witness such disputes.

I can't tell which group is right and which is wrong, there is certainly good and bad in both. While I like to understand and control how my systems work, the other group just cares about the produced service, not the underlying layers. Nowadays, you want your service uptime to have as many nines as you can afford (99.999999), at the cost of complex setups with services automatically respawning on failure, automatic routing between VMs and things like that. This is not necessarily something I enjoy; I think a good service should have a good foundation, and restarting the whole system upon failure seems wrong, although I can't deny it's effective for availability.

I know how a package manager works, but the other group will certainly prefer a tool that hides all of the package manager's complexity to get the job done. Telling Ansible to pop a new virtual machine on Amazon using Terraform with a full nginx-php-mysql stack installed is the new way to manage servers. It seems a sane option because it gets the job done, but still, I can't find myself in there: where is the fun? I can't get any fun out of this. You can install the system and the services without ever seeing the installer of the OS you are deploying; this is amazing and insane at the same time.

I feel lost in this new era. I used to manage dozens of systems (most bare metal, without virtualization); I knew each of them, having bought and installed them myself, I knew which processes should be running and their usual CPU/memory usage, I was well acquainted with all my systems. I was not only the system administrator, I was the IT gardener. I was working all the time to get the most out of our servers, optimizing network transfers, memory usage, backup scripts. Nowadays you just pop a larger VM if you need more resources and backups are just snapshots of the whole virtual disk; their lives are ephemeral and anonymous.

To the future §

I would like to understand that other group better, get more confident with their tools and logic, but at the same time I feel some aversion toward doing so because I feel I would be renouncing what I like, what I want, what made me who I am now. I suppose the group I belong to will slowly fade away to give room to the new era; I want to be prepared to join that new era, but at the same time I don't want to abandon the people of my own group by accelerating the process.

I'm a bit lost in this crossfire. Should a resistance organize against this? I don't know, I wouldn't see the point. The way we do computing is very young, we are still looking for our way. Humanity has been making buildings for thousands of years and yet we still improve the way we build houses, bridges and roads; I guess the IT industry is following the same process, but as usual with computers, at an insane rate that humans can barely follow.

Next §

Please share with me by email or mastodon or even IRC if you feel something similar or if you got past that issue, I would be really interested to speak about this topic with other people.

Readers reactions §

ew.srht.site reply

After thoughts (UPDATE post publication) §

I got many many readers giving me their thoughts about this article and I'm really thankful for this.

Now I think it's important to realize that when you want to deploy systems at scale, you need to automate all your infrastructure, and then you lose that feeling with your servers. However, it's still possible to have fun because we need tooling, proper tooling that works and brings huge benefits. We are still very young in regards to automation and a lot of improvements can be made.

We will still need all those gardeners enjoying their small area of computing, because all the cloud services rely on their work to create reliable systems that can be duplicated in quantity. They are making the first and most important bricks required to build the "Cloud"; without them you wouldn't have a working Alpine/CentOS/FreeBSD/etc... to deploy automatically.

Both can coexist, and both should know each other better, because they will have to live together to continue the fantastic computer journey; however the first group will certainly be small in number compared to the other.

So, not everything is lost! The Cloud industry can be avoided by self-hosting at home or in associative datacenters/colocations, and it's still possible to enjoy some parts of the great shift without giving up everything we believe in. A certain balance can be found, I'm quite sure of it.

OpenBSD: pkg_add performance analysis

Written by Solène, on 08 July 2021.
Tags: #bandwidth #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

The OpenBSD package manager pkg_add is known to be quite slow and to use a lot of bandwidth. I'm trying to figure out easy ways to improve it, and I may have nailed something today by replacing the ftp(1) http client with curl.

Testing protocol §

On an OpenBSD -current amd64 I used the command "pkg_add -u -v | head -n 70", which checks for updates of the first 70 packages and then stops. The packages tested are always the same, so the test is reproducible.

The traditional "ftp" will be tested, but also "curl" and "curl -N".

The bandwidth usage was measured using "pfctl -s labels" with a match rule on the mirror IP, reset after each test.

What happens when pkg_add runs §

Here is a quick intro to what happens in the code when you run pkg_add -u on http://

  • pkg_add downloads the package list from the mirror (which could be considered an index.html file) weighing ~2.5 MB; if you add two packages separately, the index will be downloaded twice.
  • pkg_add runs /usr/bin/ftp on the first package to upgrade to read its first bytes, pipes them to gunzip (done from perl by pkg_add) and then to signify to check the package signature. That signature is the list of dependencies and their versions, which pkg_add uses to know whether the package requires an update; the signify signature of the whole package is stored in the gzip header and checked if the whole package is downloaded (there are 2 signatures: the signify one and the package dependencies list, don't be misled!).
  • if the package needs an update, it is downloaded and the old one is replaced.
  • if there is no need to update, the package is skipped.
  • each new package means a new ftp(1) connection and new pipes to set up

Using the FETCH_CMD variable it's possible to tell pkg_add to use another command than /usr/bin/ftp, as long as it understands the "-o -" parameter and also "-S session" for https:// connections. Because curl doesn't support the "-S session=..." parameter, I used a shell wrapper that discards this parameter.
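
Here is a minimal sketch of what such a wrapper could look like, assuming "-S" and its value are passed as two separate arguments (the wrapper would be set as FETCH_CMD; its path and internals are hypothetical):

#!/bin/sh
# drop the "-S session=..." option pkg_add adds for https, pass the rest to curl
ARGS=""
while [ $# -gt 0 ]
do
        case "$1" in
                -S) shift ;;            # skip -S, its value is dropped by the next shift
                *)  ARGS="$ARGS $1" ;;
        esac
        shift
done
exec /usr/local/bin/curl -L -s -q -N $ARGS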

Raw results §

I measured the whole execution time and the total bytes downloaded for each combination. I don't show all the raw results, but I ran the tests multiple times and the standard deviation is close to 0, meaning a test done multiple times gave the same result at each run.

operation               time to run     data transferred
---------               -----------     ----------------
ftp http://             39.01           26
curl -N http://         28.74           12
curl http://            31.76           14
ftp https://            76.55           26
curl -N https://        55.62           15
curl https://           54.51           15

Charts with results

Analysis §

There are a few surprising facts from the results.

  • ftp(1) doesn't take the same time over http and https, while it is supposed to reuse the same TLS session to avoid a handshake for every package.
  • ftp(1) bandwidth usage is drastically higher than curl's; the time difference seems proportional to the bandwidth difference.
  • curl -N and curl perform exactly the same over https.

Conclusion §

Using http:// is way faster than https://; the risk is only about privacy, because in case of a man in the middle the downloaded packages will be known, but the signify signature will prevent any maliciously modified package from being installed. Using 'FETCH_CMD="/usr/local/bin/curl -L -s -q -N"' gave the best results.

However I can't explain yet the very different behaviors between ftp and curl or between http and https.

Extra: set a download speed limit to pkg_add operations §

By using curl as FETCH_CMD you can use the "--limit-rate 900k" parameter to limit the transfer speed to the given rate.
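
For example, assuming FETCH_CMD is picked up from the environment as above, a rate limited upgrade could be run like this:

export FETCH_CMD="/usr/local/bin/curl -L -s -q -N --limit-rate 900k"
pkg_add -u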

The Old Computer Challenge

Written by Solène, on 07 July 2021.
Tags: #linux #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

For some time I wanted to start a personal challenge, after some thoughts I want to share it with you and offer you to join me in this journey.

The point of the challenge is to replace your daily computer with a very old computer and share your feelings for the week.

The challenge §

Here are the *rules* of the challenge. There is no prize to win, but I'm convinced we will have feelings to share along the week and that it will change the way we interact with computers.

  • 1 CPU maximum, whatever the model. This means only 1 CPU|core|thread. Some BIOSes allow disabling multi core.
  • 512 MB of memory (if you have more it's not a big deal; if you want to reduce your RAM, create a tmpfs and put a big file in it, see the sketch after this list)
  • using USB dongles is allowed (storage, wifi, Bluetooth whatever)
  • only for your personal computer, during work time use your usual stuff
  • relying on services hosted remotely is allowed (VNC, file sharing, whatever help you)
  • using a smartphone to replace your computer may work; please share if you move habits to your smartphone during the challenge
  • if you absolutely need your regular computer for something really important please use it. The goal is to have fun but not make your week a nightmare.
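
For instance, here is a rough sketch of the tmpfs trick on a Linux machine with 4 GB of RAM; the sizes and mount point are hypothetical, adjust them to leave about 512 MB usable:

# reserve ~3.5 GB of RAM in a tmpfs so only ~512 MB remain available
swapoff -a                    # otherwise the tmpfs pages may simply be swapped out
mkdir -p /mnt/ramlock
mount -t tmpfs -o size=3584M tmpfs /mnt/ramlock
dd if=/dev/zero of=/mnt/ramlock/ballast bs=1M count=3584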

If you don't have an old computer, don't worry! You can still use your regular computer and create a virtual machine with low specs; you would still be more comfortable with a good screen, disk access and a not too old CPU, but you can participate.

Date §

The challenge will take place from the morning of the 10th of July until the morning of the 17th of July.

Social medias §

Because I want this event to be a nice moment to share with others, you can contact me so I can add your blog (including gopher/gemini space) to the future list below.

You can also join #old-computer-challenge on libera.chat IRC server.

prahou's blog, running a T42 with OpenBSD 6.9 i386 with hostname brouk

Joe's blog about the challenge and why they need it

Solene (this blog) running an iBook G4 with OpenBSD -current macppc with hostname jeefour

(gopher link) matto's report using FreeBSD 13 on an Acer aspire one

cel's blog using Void Linux PPC on an Apple Powerbook G4

Keith Burnett's blog using a T42 with an emphasis on using GUI software to see how it goes

Kuchikuu's blog using a T60 running Debian (but specs out of the challenge)

Ohio Quilbio Olarte's blog using an MSI Wind netbook with OpenBSD

carcosa's blog using an ASUS eeePC netbook with Fedora i386 downgraded with kernel command line

Tekk's website, using a Dell Latitude D400 (2003) running Slackware 14.2

My setup §

I use an old iBook G4 laptop (the one I already use "offline"); it has a single PowerPC G4 1.3 GHz CPU, 512 MB of RAM and a slow 40GB HDD. The wifi is broken so I would have to use a wifi dongle, but I will certainly rely on ethernet. The screen has a 1024x768 resolution but the colors are pretty bad.

In regards to software, it runs OpenBSD 6.9 with /home/ encrypted, which makes performance worse. I use ratpoison as the window manager because it saves screen space, requires little memory and CPU to run and is entirely keyboard driven; that laptop has only a left click touchpad button :).

I love that laptop, and initially I wanted to see how far I could go using it as my daily driver!

Picture of the laptop

Screenshot of the laptop

Track changes in /etc with etckeeper

Written by Solène, on 06 July 2021.
Tags: #linux

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to the program etckeeper, a simple tool that tracks changes in your /etc/ directory in a version control system (git, mercurial, darcs, bazaar...).

etckeeper project website

Installation §

Your system almost certainly packages it; you will then have to run "etckeeper init" in /etc/ the first time. A cron job or systemd timer should be set up by your package manager to automatically run etckeeper every day.

In some cases, etckeeper can integrate with package manager to automatically run after a package installation.
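
For example, a first run could look like this; the package installation command depends on your distribution (a Debian-like system is shown here):

apt install etckeeper
cd /etc
etckeeper init
etckeeper commit "initial import of /etc"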

Benefits §

While it could easily be replicated by running "git init" in /etc/ and then "git commit" when you make changes, etckeeper does it automatically, as a safety net, because it's easy to forget to commit when making changes. It also integrates with other system tools and can use hooks, like sending an email when a change is found.

It's really a convenience tool but given it's very light and can be useful I think it's a must for most sysadmins.

Gentoo cheatsheet

Written by Solène, on 05 July 2021.
Tags: #linux #gentoo #cheatsheet

Comments on Fediverse/Mastodon

Introduction §

This is a simple cheatsheet to manage my Gentoo systems. Gentoo is a source based Linux distribution, meaning everything installed on the computer must be compiled locally.

Gentoo project website

Upgrade system §

I use the following script to update my system: it downloads the latest portage tree and then rebuilds @world (the whole set of manually installed packages).

#!/bin/sh

# sync the portage tree; if the output contains "The current local",
# the tree is already up to date and there is nothing to rebuild
emerge-webrsync 2>&1 | grep "The current local"
if [ $? -eq 0 ]
then
	exit
fi

emerge -auDv --with-bdeps=y --changed-use --newuse @world

Use ccache §

As you may rebuild the same program many times (especially on a new install), I highly recommend using ccache to reuse previously built objects; it can reduce build duration by 80% when you change a USE flag.

It's quite easy: install the ccache package, add 'FEATURES="ccache"' to your make.conf, run "install -d -o root -g portage -m 775 /var/cache/ccache" and it should be working (you should see files appearing in the ccache directory).

Gentoo wiki about ccache
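
Put together, the whole setup could look like this sketch (append the FEATURES line only if FEATURES isn't already defined in your make.conf):

emerge -av dev-util/ccache
echo 'FEATURES="ccache"' >> /etc/portage/make.conf
install -d -o root -g portage -m 775 /var/cache/ccache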

Use genlop to view / calculate build time from past builds §

Genlop can tell you how much time a build will need, or how much time remains on a running build, based on information from previous builds. I find it quite fun to see how long an upgrade will take.

Gentoo wiki about Genlop

View compilation time §

From the package genlop

# genlop -c

 Currently merging 1 out of 1

 * app-editors/vim-8.2.0814-r100 

       current merge time: 4 seconds.
       ETA: 1 minute and 5 seconds.

Simulate compilation §

Add -p to the emerge command for "pretend" and pipe the output to genlop -p like this:

# emerge -av -p kakoune | genlop -p
These are the pretended packages: (this may take a while; wait...)

[ebuild   R   ~] app-editors/kakoune-2020.01.16_p20200601::gentoo  0 KiB


Estimated update time: 1 minute.

Using gentoolkit §

The gentoolkit package provides a few commands to find information about packages.

Gentoo wiki page about Gentoolkit

Find a package §

You can use "equery" from the package gentoolkit like this "equery l -p '*package name*" globbing with * is mandatory if you are not looking for a perfect match.

Example of usage:

# equery l -p '*firefox*'
 * Searching for *firefox* ...
[-P-] [  ] www-client/firefox-78.11.0:0/esr78
[-P-] [ ~] www-client/firefox-89.0:0/89
[-P-] [ ~] www-client/firefox-89.0.1:0/89
[-P-] [ ~] www-client/firefox-89.0.2:0/89
[-P-] [  ] www-client/firefox-bin-78.11.0:0/esr78
[-P-] [  ] www-client/firefox-bin-89.0:0/89
[-P-] [  ] www-client/firefox-bin-89.0.1:0/89
[IP-] [  ] www-client/firefox-bin-89.0.2:0/89

Get the package name providing a file §

Use "equery b /path/to/file" like this

# equery b /usr/bin/2to3
 * Searching for /usr/bin/2to3 ... 
dev-lang/python-exec-2.4.6-r4 (/usr/lib/python-exec/python-exec2)
dev-lang/python-exec-2.4.6-r4 (/usr/bin/2to3 -> ../lib/python-exec/python-exec2)

Upgrade parts of the system using packages sets §

There are special package sets like @security or @profile that can be used instead of @world to restrict the operation to a group of packages; on a server you may only want to update @security for... security, but not just for newer versions.

Gentoo wiki about Packages sets

Disable network when emerging for extra security §

When building programs using emerge, you can disable network access for the build process. This is considered a good thing, because if the build process requires extra files to be downloaded or a git repository to be cloned during the build phase, it means your build is not reliable over time. This is also important for security, because a rogue build script could upload data. This behavior is the default on OpenBSD systems.

To enable this, just add "network-sandbox" to the FEATURES variable in your make.conf file.

Gentoo documentation about make.conf variables

Easy trimming kernel process §

I had a bulky kernel at first, but I decided to trim it down to reduce build time. It took me a long fail and retry process to get everything right while keeping a working system; here is a short explanation of my process.

  • keep an old kernel that works
  • install and configure genkernel with MRPROPER=no and CLEAN=no in /etc/genkernel.conf because we don't want to rebuild everything when we make changes (a sketch follows below)
  • lspci -k will tell you which hardware requires which kernel module
  • go to /usr/src/linux and run make menuconfig; basically, you can remove a lot of things in the "Device drivers" category that don't look like standard hardware on personal computers
  • in Ethernet, Wireless LAN, Graphical drivers, you can trim everything that doesn't look like your hardware
  • run genkernel all and then grub-mkconfig -o /boot/grub/grub.cfg if not done by genkernel, and reboot; if something is missing, try re-enabling drivers you previously removed
  • do it slowly, not too many drivers at a time; it's easier to recover from an issue when you haven't removed many modules from many categories
  • using genkernel all without cleaning, a new kernel can be out in a minute, which makes the process a lot faster

You can do this without genkernel, but if you are like me, using LVM over LUKS and needing an initrd file, genkernel will just ease the process and generate the initrd you need.
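
As a summary of the list above, here is a sketch of the genkernel.conf settings and the rebuild commands involved:

# /etc/genkernel.conf: keep build artifacts between runs
MRPROPER="no"
CLEAN="no"

# then iterate: trim options, rebuild, reboot
lspci -k                       # shows which kernel module each device uses
cd /usr/src/linux && make menuconfig
genkernel all
grub-mkconfig -o /boot/grub/grub.cfg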

Use binary packages §

If you use Gentoo you may want control over most of your packages, but some packages can take a really long time to compile without much benefit, or you may simply be fine using a binary package. Some packages have the suffix -bin in their name, meaning they won't require compilation.

There are a few well known packages such as firefox-bin, libreoffice-bin, rust-bin and even gentoo-kernel-bin! You can get a generic kernel pre-compiled :)

Gentoo wiki: Using distribution kernel
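
For example, installing the pre-built distribution kernel mentioned above looks like this (package name as found in the Gentoo tree):

emerge -av sys-kernel/gentoo-kernel-bin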

Create binary packages §

It is possible to create a binary package of every program you compile on Gentoo; this can be used to distribute packages to similar systems or simply to make a backup of your packages. In some cases, the redistribution may not work if the target system has a different CPU generation or different hardware; this is pretty normal because you often set the variables to optimize the code as much as possible for your CPU, and the binaries produced won't work on another CPU.

The Gentoo guide explains all you need to know about binary packages and how to redistribute them, but the simplest config to start generating packages from emerge compilations is setting FEATURES="buildpkg" in your make.conf.

Gentoo wiki: Binary package guide

Listing every system I used

Written by Solène, on 02 July 2021.
Tags: #linux #unix #bsd

Comments on Fediverse/Mastodon

Introduction §

Nobody asked for it, but I wanted to share the list of the systems I've used in my life (on a computer) and a few words about them. This is obviously not very accurate, but I'm happy to write it down somewhere.

You may wonder why I made some of those choices in the past: I was young and had little experience during many of these experiments, so a nice looking distribution was very appealing to me.

One has to know (or remember) that 10 years ago, Linux distributions were very different from one another, and they became more and more standardized over time. To the point that I no longer consider distro hopping (regularly switching from one distribution to another) interesting, because most distributions are derivatives of a main one and most will have systemd and the same defaults.

Disclaimer: my opinions about each system are personal and driven by feelings and memories; they may be totally inaccurate (outdated or damaged memories) or even wrong (misunderstanding, bad luck). If I had issues with a system, this doesn't mean it is BAD and that you shouldn't use it; I recommend making your own opinion about them.

The list (alphabetically) §

This includes Linux distributions but also BSD or Solaris derived systems.

Alpine §

  • Duration: a few hours
  • Role: workstation
  • Opinion: interesting but lack of documentation
  • Date of use: June 2021

I wanted to use it on my workstation, but the documentation for full disk encryption, and the documentation in general, was outdated and inaccurate, so I gave up.

However the extreme minimalism is interesting, and without full disk encryption it worked fine. It was surprising to see how packages were split into such small parts; I understand why it's used to build containers.

I really want to like it, maybe in a few years it will be mature enough.

BackTrack §

  • Duration: occasionally
  • Role: playing with wifi devices
  • Opinion: useful
  • Date of use: occasionally between 2006 and 2012

Worked well with a wifi dongle supporting monitor mode.

CentOS §

  • Duration: not much
  • Role: local server
  • Opinion: old packages
  • Date of use: 2014

Nothing much to say; I had to use it temporarily to try a program we were delivering to a client using Red Hat.

Crux §

  • Duration: a few months maybe
  • Role: workstation
  • Opinion: it was blazing fast to install
  • Date of use: around 2009

I don't remember much about it to be honest.

Debian §

  • Duration: multiple years
  • Role: workstation (at least 1 year accumulated) and servers
  • Opinion: I don't like it
  • Date of use: from 2006 to now

It's not really possible to do Linux without having to deal with Debian some day. It works fine once installed, but I always had a painful time with upgrades. As for using it as a workstation, it was in the time of GNOME 2 and the software was already often obsolete, so I was using testing.

DragonflyBSD §

  • Duration: months
  • Role: server and workstation
  • Opinion: interesting
  • Date of use: ~2009-2011

The system worked quite well; I had hardware compatibility issues at that time, but it worked well on my laptop. HAMMER was stable when I used it on my server and I really enjoyed working with this file system; the server was my NAS and Mumble server at that time and it never failed me. I really think it makes a good alternative to ZFS.

Edubuntu §

  • Duration: months
  • Role: laptop
  • Opinion: shame
  • Date of use: 2006

I was trying to be a good student at that time and Edubuntu seemed interesting; I didn't understand it was just an Ubuntu with a few packages pre-installed. It was installed on my very first laptop (a very crappy one, but eh, I loved it).

Elementary §

  • Duration: months
  • Role: laptop
  • Opinion: good
  • Date of use: 2019-now

I have an old multimedia laptop (the case is falling apart) that runs elementary OS, mainly for their own desktop environment Pantheon, which I really like. The distribution itself is solid and well done; it never failed me, even after major upgrades. I could do everything using the GUI. I would recommend it to a Linux beginner or someone enjoying GUI tools.

EndeavourOS §

  • Duration: months
  • Role: testing stuff
  • Opinion: good project
  • Date of use: 2021

I've never been into Arch, but I got my first contact with it through EndeavourOS, a distribution based on Arch Linux that proposes an installer with many options, plus a few helper tools to manage your system. This is clearly an Arch Linux and they don't hide it, they just facilitate the use and administration of the system. I'm totally capable of installing Arch, but I have to admit that if a GUI can save me a lot of time setting it up with full disk encryption, I'm all for it. As an Arch Linux noob, the little "welcome" GUI provided by EndeavourOS was very useful to learn how to use the package manager and a few other things. I'd totally recommend it over Arch Linux because it doesn't denature Arch while still providing useful additions.

Fedora §

  • Duration: months
  • Role: workstation
  • Opinion: hazardous
  • Date of use: 2006 and around 2014

I started with Fedora Core 6 in 2006; at that time it was amazing, they shipped a lot of new and up-to-date software, the alternatives were Debian or Mandrake (Ubuntu not being very popular yet), and I used it for a long time. I used it again later, but I stumbled on many quality issues and I don't have good memories about it.

FreeBSD §

  • Duration: years
  • Role: workstation, server
  • Opinion: pretty good
  • Date of use: 2009 to 2020

This is the first BSD I tried. I had heard a lot about it, so I downloaded the 3 or 5 CDs of the release with my 16 kB/s DSL line, burned them and installed it on my computer. The installer offered to install packages at that time, but it did so in a crazy way: you had to swap CDs constantly between the sets, because one package was on CD 2, the next on CD 3, then CD 1, CD 3 and CD 2 again... For some reason, I destroyed my system a few times by mixing ports and packages, which ended up dooming the system. I learned a lot from my destroy-and-retry method.

At my first job (which I held for 10 years) I switched all the Debian servers to FreeBSD servers and started playing with jails to isolate the web servers. FreeBSD never let me down on servers. The biggest pain I had with FreeBSD was freebsd-update updating RCS tags, so I sometimes had to merge a hundred files manually... To the point I preferred reinstalling my servers (with Salt Stack) rather than upgrading.

On my workstation it always worked well. I regret that package quality can sometimes be inconsistent, but I'm also part of the problem because I don't think I ever reported such issues.

Frugalware §

  • Duration: weeks
  • Role: workstation
  • Opinion: I can't remember
  • Date of use: 2006?

I remember I've run a computer with that but that's all...

Gentoo §

  • Duration: months
  • Role: workstation
  • Opinion: i love it
  • Date of use: 2005, 2017, 2020 to now

My first encounter with Gentoo was during my early Linux discovery. I remember following the instructions and compiling X for like A DAY to get a weird result: the resolution was totally wrong and it was in greyscale, so I gave up.

I tried it again in 2017: I successfully installed it with full disk encryption and used it as my work laptop, and I don't remember breaking it once. The only issue was waiting for the compilation time when I needed a program that wasn't installed.

I'm back on Gentoo regularly for one laptop that requires many tweaks to work correctly and I also use it as my main Linux at home.

gNewSense §

  • Duration: months
  • Role: workstation
  • Opinion: it worked
  • Date of use: 2006

It was my first encounter with a 100% free system, I remember it wasn't able to play MP3 files :) It was an Ubuntu derivative and the community was friendly. I see the project is abandoned now.

Guix §

  • Duration: months
  • Role: workstation
  • Opinion: interesting ideas but raw
  • Date of use: 2016 and 2021

I like Guix a lot, it has very good ideas, and the consistent use of the Scheme language to define the packages and write the tools is something I enjoy a lot. However, I found the system doesn't feel very great for desktop usage with a GUI; it appears quite raw and required many workarounds to work correctly.

Note that Guix is a distribution but also a package manager that can be installed on any Linux distribution alongside the original package manager; in that case we refer to it as Foreign Guix.

Mandrake §

  • Duration: weeks?
  • Role: workstation
  • Opinion: one of my first
  • Date of use: 2004 or something

This was one of my first distributions and it came with a graphical installer! I remember packages had to be installed with the command "urpmi" but that's all. I don't think I had internet access through my USB modem, so I was limited to packages from the CDs I burned.

NetBSD §

  • Duration: years
  • Role: workstation and server
  • Opinion: good
  • Date of use: 2009 to 2015

I first used NetBSD on a laptop (in 2009) but it was not very stable and programs were core dumping a lot; I also found the software in pkgsrc wasn't really up to date. However, I used it for years as my first email server and I never had a single issue.

I didn't try it seriously for a workstation recently but from what I've heard it became a good choice for a daily driver.

NixOS §

  • Duration: years
  • Role: workstation and server
  • Opinion: awesome but different
  • Date of use: 2016 to now

I have been using NixOS daily on my professional workstation since 2020, and it has never failed me, even on the development channel. I already wrote about it: it's an amazing piece of work, but it is radically different from other Linux distributions or Unix-like systems.

I'm using it on my NAS and it has been absolutely flawless since I installed it. But I am not sure how easy or hard it would be to run a full featured mail server on it (my best example of a complex setup).

NuTyX §

  • Duration: months
  • Role: workstation
  • Opinion: it worked
  • Date of use: 2010

I don't remember much about this distribution, but I remember the awesome community and the creator of the distro, who is a very helpful and committed person. This is a distribution made from scratch that works very well and is still alive and dynamic, kudos to the team.

OpenBSD §

  • Duration: years
  • Role: workstation and server
  • Opinion: boring because it just works
  • Date of use: 2015 to now

I already wrote a few times about why I like OpenBSD, so I will make it short: it just works and it works fine. Hardware compatibility can be limited, but when the hardware is supported everything just works out of the box without any tweaking.

I've been using it daily for years now; it started when my NetBSD mail server had to be replaced by a newer machine at Online, so I chose to try OpenBSD. I've been part of the team since 2018, and apart from occasional ports changes my big contribution was to set up the infrastructure to build binary packages for ports changes in the stable branch.

I wish performance were better though.

OpenIndiana §

  • Duration: weeks
  • Role: workstation
  • Opinion: sadness but hope?
  • Date of use: 2019

I was a huge fan of OpenSolaris but Oracle killed it. OpenIndiana is the resurrection of the open source Solaris, but it is now a bit abandoned by contributors and the community isn't as dynamic as it used to be. Hardware support is lagging; however, the system performs very well and all the Solaris features are still there if you know what to do with them.

I really hope for this project to get back on track again and being as dynamic as it used to be!

OpenSolaris §

  • Duration: years
  • Role: workstation
  • Opinion: sadness
  • Date of use: 2009-2010

I loved OpenSolaris, it was such an amazing system; every new release had a ton of improvements (package updates, features, hardware support) and I really thought it would compete with Linux at this rate. It was possible to get free CDs by snail mail and they looked amazing.

It was my main system on my big computer (I built it in 2007 and it had two Xeon E5420 CPUs, 32 GB of memory and 6x 500 GB SATA drives!!!); it was totally amazing to play with virtualization on it. The desktop was super fast and, using Wine, I was able to play Windows video games.

OpenSuse §

  • Duration: months
  • Role: pro workstation
  • Opinion: meh
  • Date of use: something like 2015

I don't have strong memories of OpenSuse. I think it worked well on my workstation at first, but after some time the package manager drove me mad by doing weird things like removing half the packages to reinstall them... I never wanted to give it another try after this few-months experiment.

Paldo §

  • Duration: weeks? months?
  • Role: workstation
  • Opinion: the install was fast
  • Date of use: 2008?

I remember having played with it and contributed a bit to packages over IRC; all I remember is the kind community and that it was super fast to install. It's a distribution built from scratch and it's still alive and updated, bravo!

PC-BSD §

  • Duration: months
  • Role: workstation
  • Opinion: many attempts, too bad
  • Date of use: 2005-2017

PC-BSD (and more recently TrueOS) was the idea of providing FreeBSD to everyone. Each release was either good or bad; it was possible to use FreeBSD packages but also "pbi" packages that looked like Mac OS installers (a huge file you had to double click to install). I definitely liked it because it was my first real success with FreeBSD, but sometimes the tools proposed were half baked or badly documented. The project is dead now.

PCLinuxOS §

  • Duration: weeks?
  • Role: laptop
  • Opinion: it worked
  • Date of use: around 2008?

I remember installing it was working fine and I liked it.

Pop!_OS §

  • Duration: months
  • Role: gaming computer
  • Opinion: works!!
  • Date of use: 2020-2021

I use this distribution on my gaming computer and I have to admit it can easily replace Windows! :) Upgrades are painless and everything works out of the box (including the Nvidia driver).

Scientific Linux §

  • Duration: months
  • Role: workstation
  • Opinion: worked well
  • Date of use: ??

I remember using Scientific Linux as my main distribution at work for some time; it worked well and reminded me of my old Fedora Core.

Skywave §

  • Duration: occasionally
  • Role: laptop for listening to radio waves
  • Opinion: a must
  • Date of use: 2018-now

This distribution is really focused on providing tools for using radio hardware. I bought a simple and cheap RTL-SDR USB device and I was able to use it with the pre-installed software. Really a plug and play experience. It works as a live CD, so you don't even need to install it to benefit from its power.

Slackware §

  • Duration: years
  • Role: workstation and server
  • Opinion: Still Loving You....
  • Date of use: multiple times since 2002

It is very hard for me to explain how much and how deeply I love Slackware Linux. I just love it. As the dates above show, I started with it in 2002; it was my very first encounter with Linux. A friend bought a Linux magazine with Slackware CDs and explanations about the installation, it worked and many programs were available to play with! (I also erased Windows on the family computer because I had no idea what I was doing.)

Since that time, I have used Slackware multiple times and I think it's the system that survived the longest every time it got installed; every new Slackware release was a day of celebration for me.

I can't explain why I like it so much; I guess it's because you get to deeply know how your system works over time. Packages didn't manage dependencies at that time and it was a real pain to get new programs, but it has improved a lot now.

I really can't wait for Slackware 15.0 to be out!

Solaris §

  • Duration: months
  • Role: workstation
  • Opinion: fine but not open source
  • Date of use: 2008

I remember the first time I heard that Solaris was a system I could install on my own machine. After downloading the ISO in 2 parts (which had to be joined using cat), I started installing it on my laptop, went to school with the laptop on battery while the installation continued (it was very long), and finished the installation process in class (I was at a computer science university, so it was fine :P).

I discovered a whole new world with it; I even used it on a netbook to write a Java SCTP university project. It was my very first introduction to ZFS, a brand new file system with many features.

Solus §

  • Duration: days
  • Role: workstation
  • Opinion: good job team
  • Date of use: 2020

I didn't try Solus much because I'm quite busy nowadays, but it's a good distro as an alternative to the major distributions: it's totally independent from the other main projects and they even have their own package manager. My small experiment went well and it felt like a quality system; it follows a rolling release model, but the packages are curated for quality before being pushed to all users.

I wish them a long and prosper life.

Ubuntu §

  • Duration: months
  • Role: workstation and server
  • Opinion: it works fine
  • Date of use: 2006 to 2014

I used Ubuntu a lot on laptops, and I recommended Ubuntu to many people who wanted to try Linux. Whatever we say, they helped make Linux known and brought it to the masses. Some choices, like the non-free integration, are definitely not great though. I started with Dapper Drake (Ubuntu 6.06!) on an old Pentium 1 server I had under the dresser in my student room.

I used it daily a few times, mainly back when the default desktop was Unity. For some reason, I loved Unity; it's really a pity the project is now abandoned and lost, it worked very well for me and looked nice.

I don't want to use it anymore as it has become very complex internally; for instance, trying to understand how domain names are resolved is quite complicated...

Void §

  • Duration: days?
  • Role: workstation
  • Opinion: interesting distribution, not enough time to try
  • Date of use: 2018

Void is an interesting distribution. I used it a little on a netbook with their musl libc edition and ran into many issues, both at install time and during use. The glibc version worked a lot better, but I can't remember why it didn't hook me more than that.

I wish I had more time to try it seriously. I recommend everyone give it a try.

Windows §

  • Duration: years
  • Role: gaming computer
  • Opinion: it works
  • Date of use: 1995 to now

My first encounter with a computer was with Windows 3.11 on a 486dx computer, I think I was 6. Since then I have always had a Windows computer, at first because I didn't know there were alternatives, and then because it was always a hard requirement for some hardware, software or video games. Now, my gaming computer runs Windows and is dedicated to games only; I do not trust this system enough to do anything else on it. I'm slowly trying to move away from it, and the efforts are paying off: more and more games work fine on Linux.

Zenwalk §

  • Duration: months
  • Role: workstation
  • Opinion: it's like slackware but lighter
  • Date of use: 2009?

I don't remember much; it was like Slackware but without the giant DVD install that requires 15 GB of disk space, it used Xfce by default and looked nice.

How to choose a communication protocol

Written by Solène, on 25 June 2021.
Tags: #internet

Comments on Fediverse/Mastodon

Introduction §

As a human being I have to communicate with other people, and we now have so many ways to speak to each other that it has actually become hard to reach people. This is a simple list of communication protocols and why you would use them. This is an opinionated text.

Protocols §

We rely on protocols to speak to each other; the natural way would be a spoken language using our vocal cords, but we could imagine other ways like emitting sounds in Morse code. With computers we need to define how to send a message from A to B, and there are many, many possibilities for such a simple task.

  • 1. The protocol can be open source, meaning anyone can create a client or a server for this protocol.
  • 2. The protocol can be centralized, federated or peer-to-peer. In a centralized situation, there is only one service provider and people must be on the same server to communicate. In a federated or peer-to-peer architecture, people can join the communication network with their own infrastructure, without relying on a service provider (federated and peer-to-peer differ in implementation but their end results are very close).
  • 3. The protocol can provide many features in addition to contacting someone.

IRC §

The simplest communication protocol and an old one. It's open source and you can easily host your own server. It works very well and doesn't require a lot of resources (bandwidth, CPU, memory) to run, although it is quite limited in features.

  • you need to stay connected to know what happens
  • you can't stay connected if you don't keep a session open 24/7
  • multi-device use (computer / phone for instance) is not possible without an extra setup (a bouncer or a tmux session)

I like to use it to communicate with many people on some topic, I find they are a good equivalent of forums. IRC has a strong culture and limitations but I love it.

XMPP (ex Jabber) §

Behind this acronym stands a long-lived protocol that supports many features and has proven to work; unfortunately, XMPP clients never really shined through their user interfaces. Recently the protocol has been seeing a good adoption rate, clients are getting better, and servers are easy to deploy and don't draw many resources (I/O, CPU, memory).

XMPP uses a federation model: anyone can host their own server and communicate with people from other servers. You can share files, create rooms, send private messages. Audio and video are supported depending on the client. It's also able to bridge to IRC or some other protocols using the right software. Multiple options for end-to-end encryption are available, but the most recent one, named OMEMO, is definitely the best choice.

The free/open source Android client « Conversations » is really good, on a computer you can use Gajim or Dino with a nice graphical interface, and finally profanity or poezio for a console client.

XMPP on Wikipedia

Matrix §

Matrix is a recent protocol in this list, although it has seen an incredible adoption rate, and since the recent Freenode drama many projects have switched to their own Matrix room. Both clients and servers are fully open source, and it's federated, so anyone can be independent with their own server.

As it's young, Matrix has only one client that offers all the features, which is Element, a very resource-hungry web program (a web page, or run "natively" using Electron, a framework to turn websites into desktop applications), and a Python server named Synapse that requires a lot of CPU to work correctly.

With regard to features, Matrix offers end-to-end encryption done well, rooms, direct chat, file sharing, audio/video, etc.

While it's a good alternative to XMPP, I prefer XMPP because of the poor choice of clients and servers in Matrix at the moment. Hopefully it may get better in the future.

Matrix protocol on Wikipedia

Email §

This way is well known, most people have an email address and it may have been your first touch with the Internet. Email works well, it's federated and anyone can host an email server although it's not an easy task.

Emails are not instant, but with performant servers it can take only a few seconds for an email to be sent and delivered. They support end-to-end encryption using GPG, which is not always easy to use. You have a huge choice of email clients and most of them offer an incredible range of settings.

I really like emails, it's a very practical way to communicate ideas or thoughts to someone.

Delta Chat §

I found a nice program named Delta Chat that is built on top of emails to communicate "instantly" with your friends who also use Delta Chat, messages are automatically encrypted.

The client user interface looks like an instant messaging program but uses emails to transport the messages. While the program is open source and free, it requires Electron on the desktop, and I didn't find a way to participate in an encrypted thread using a regular email client (even with the corresponding GPG key). I found that software really practical because your recipients don't need to create a new account, it reuses an existing email address. You can also use it without encryption to write to someone who will reply using their own mail client while you use Delta Chat.

Delta Chat website

Telegram §

Open source client but proprietary server; I don't recommend anyone use such a system that locks you in to their server. You would have to rely on a company, and you empower them by using their service.

Telegram on Wikipedia

Signal §

Open source client and server, but the main server where everybody is doesn't allow federation. So far, hosting your own server doesn't seem to be a possible and viable option. I don't recommend using it because you rely on a company offering a service.

Signal on Wikipedia

WhatsApp §

Proprietary software and service, please don't use it.

Conclusion §

I use IRC, email and XMPP daily to communicate with friends, family and people from open source projects, or to meet new people sharing my interests. My main requirements for private messages are end-to-end encryption and being independent, so I absolutely require a federated protocol.

How to use the Open Graph Protocol for your website

Written by Solène, on 21 June 2021.
Tags: #blog

Comments on Fediverse/Mastodon

Introduction §

Today I made a small change to my blog, I added some more HTML metadata for the Open Graph protocol.

Basically, when you share a URL on most social networks or instant messaging platforms, if some Open Graph headers are present, the software will display the website name, the page title, a logo and some other information. Without them, only the link is displayed.

Implementation §

You need to add a few tags to your HTML pages in the "head" tag.

    <meta property="og:site_name" content="Solene's Percent %" />
    <meta property="og:title"     content="How to cook without burning your eyebrows" />
    <meta property="og:image"     content="static/my-super-pony-logo.png" />
    <meta property="og:url"       content="https://dataswamp.org/~solene/some-url.html" />
    <meta property="og:type"      content="website" />
    <meta property="og:locale"    content="en_EN" />

There is more metadata than this, but it was enough for my blog.

Open Graph Protocol website

Using the I2P network with OpenBSD and NixOS

Written by Solène, on 20 June 2021.
Tags: #i2p #tor #openbsd #nixos #network

Comments on Fediverse/Mastodon

Introduction §

In this text I will explain what the I2P network is, how to provide a service over I2P on OpenBSD, and how to connect to an I2P service from NixOS.

I2P §

This acronym stands for Invisible Internet Project and is a network over the network (Internet). It is quite an old project from 2003 and is considered stable and reliable. The idea of I2P is to build a network of relays (people running an i2p daemon) to make tunnels from a client to a server, but a single TCP session (or UDP) between a client and a server could use many tunnels of n hops across relays. Basically, when you start your I2P service, the program will get some information about the relays available and prepare many tunnels in advance that will be used to reach a destination when you connect.

Some benefits from I2P network:

  • your network is reliable because it doesn't depend on operator peering
  • your network is secure because packets are encrypted, and you can even use the usual encryption to reach your remote services (TLS, SSH)
  • provides privacy because nobody can tell where you are connecting to
  • can prevent habit tracking (if you also relay data to participate in I2P, the allocated bandwidth is used at 100% all the time, and any traffic you generate over I2P can't be distinguished from standard relay traffic!)
  • can restrict access to a server to declared I2P nodes only, if you don't want just anyone to connect to a port you expose

It is possible to host a website on I2P (by exposing your web server port); it is called an eepsite and can be accessed using the SOCKS proxy provided by your I2P daemon. I never played with them, but this is a thing and you may be interested in looking into it more in depth.
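To illustrate, here is a hypothetical i2pd tunnel definition for an eepsite; the tunnel name, local port and key file name are made up for the example, and the exact syntax should be double checked against the i2pd documentation.

# hypothetical example: expose a local web server as an eepsite with i2pd
cat <<EOF >> /etc/i2pd/tunnels.conf
[my-eepsite]
type = http
host = 127.0.0.1
port = 8080
keys = my-eepsite.dat
EOF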

I2P project and I2P implementation (java) page

i2pd project (a recent C++ implementation that I use for this tutorial)

Wikipedia page about I2P

I2P vs Tor §

Obviously, many people would question why not use Tor, which seems similar. While I2P can seem very close to Tor hidden services, the implementation is really different. Tor is designed to reach the outside world, while I2P is meant to build a reliable and anonymous network. When started, Tor creates a path of relays named a circuit that remains static for roughly 12 hours; everything you do over Tor passes through this circuit (usually 3 relays). On the other hand, I2P creates many tunnels all the time, each with a very short lifespan. A small difference: I2P can relay the UDP protocol, while Tor only supports TCP.

Tor is very widespread, and using a Tor hidden service to host a private website (if you don't have a public IP or a domain name, for example) would be better for reaching an audience; I2P is not very well known and that's partially why I'm writing this. It is a fantastic piece of software and only requires more users.

Relays in I2P don't have any weight and can be seen as a huge P2P network, while the Tor network is built using scores (consensus) of relaying servers depending on their throughput and availability. The fastest and most reliable relays will be elected as "guard servers", which are entry points to the Tor network.

I've been running a test over 10 hours to compare the bandwidth used by I2P and Tor to keep a tunnel / hidden service available (they have not been used). Please note that relaying/transit was disabled, so this is only the data uploaded in order to keep the service working.

  • I2P sent 55.47 MB of data in 114 430 packets. Total / 10 hours = 1.58 kB/s average.
  • Tor sent 6.98 MB of data in 14 759 packets. Total / 10 hours = 0.20 kB/s average.

Tor was a lot more bandwidth efficient than I2P for the same task: keeping the network access (tor or i2p) alive.

Quick explanation about how it works §

There are three components in an I2P usage.

- a computer running an I2P daemon configured with tunnels servers (to expose a TCP/UDP port from this machine, not necessarily from localhost though)

- a computer running an I2P daemon configured with tunnel client (with information that match the server tunnel)

- computers running I2P and allowing relay, they will receive data from other I2P daemons and pass the encrypted packets. They are the core of the network.

In this text we will use an OpenBSD system to share its localhost ssh access over I2P and a NixOS client to reach the OpenBSD ssh port.

OpenBSD §

The setup is quite simple, we will use i2pd and not the i2p java program.

pkg_add i2pd

# read /usr/local/share/doc/pkg-readmes/i2pd for open files limits

cat <<EOF > /etc/i2pd/tunnels.conf
[SSH]
type = server
port = 22
host = 127.0.0.1
keys = ssh.dat
EOF

rcctl enable i2pd
rcctl start i2pd

You can edit the file /etc/i2pd/i2pd.conf and uncomment the line "notransit = true" if you don't want to relay. I would encourage people to contribute to the network by relaying packets, but this would require some explanations about proper tuning to limit the bandwidth correctly. If you disable transit, you won't participate in the network, but I2P won't use any CPU and will use virtually no data except when your tunnel is in use.
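For reference, the change is a single line in the main configuration file (shown here as an excerpt, assuming the default OpenBSD path):

# /etc/i2pd/i2pd.conf (excerpt): do not relay traffic for other I2P users
notransit = true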

Visit http://localhost:7070/ for the admin interface and check the menu "I2P Tunnels", you should see a line "SSH => " with a long address ending by .i2p with :22 added to it. This is the address of your tunnel on I2P, we will need it (without the :22) to configure the client.

NixOS §

As usual, on NixOS we will only configure the /etc/nixos/configuration.nix file to declare the service and its configuration.

We will name the tunnel "ssh-solene" and use the destination seen on the administration interface on the OpenBSD server and expose that port to 127.0.0.1:2222 on our NixOS box.

services.i2pd.enable = true;
services.i2pd.notransit = true;

services.i2pd.outTunnels = {
  ssh-solene = {
    enable = true;
    name = "ssh";
    destination = "gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p";
    address = "127.0.0.1";
    port = 2222;
    };
};

Now you can use "nixos-rebuild switch" as root to apply changes.

Note that the equivalent of this NixOS configuration for any other OS running i2pd would look like the following in the file "tunnels.conf" (on OpenBSD it would be /etc/i2pd/tunnels.conf).

[ssh-solene]
type = client
address = 127.0.0.1  # optional, default is 127.0.0.1
port = 2222
destination = gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p

Test the setup §

From the NixOS client you should be able to run "ssh -p 2222 localhost" and get access to the OpenBSD ssh server.

Both systems have a http://localhost:7070/ interface because it's a default setting that is not bad (except if you have multiple people who can access the box).

Conclusion §

I2P is a nice way to share services on a reliable and privacy-friendly network; it may not be fast, but it shouldn't drop you when you need it. Because it easily works around NAT or dynamic IP addresses, it's perfectly fine for reaching a remote system when you can't use port forwarding or a VPN.

Run your Gemini server on Guix with Agate

Written by Solène, on 17 June 2021.
Tags: #guix #gemini

Comments on Fediverse/Mastodon

Introduction §

This article is about deploying the Gemini server Agate on the Guix Linux distribution.

Gemini quickstart to explain Gemini to beginners

Guix website

Configuration §

Guix manual about web services, search for Agate.

Add the agate-service definition to your /etc/config.scm file; we will store the Gemini content in /srv/gemini/content and keep the certificate and its private key in the parent directory.

(service agate-service-type
         (agate-configuration
          (content "/srv/gemini/content")
          (cert "/srv/gemini/cert.pem")
          (key "/srv/gemini/key.rsa"))

If you have something like %desktop-services or %base-services, you need to wrap your services in a list using the "list" function and append the %something-services to it using the "append" function, like this.

(services
  (append
    (list (service openssh-service-type)
          (service agate-service-type
                   (agate-configuration
                    (content "/srv/gemini/content")
                    (cert "/srv/gemini/cert.pem")
                    (key "/srv/gemini/key.rsa"))))
    %desktop-services))

Generating the certificate §

- Create directories /srv/gemini/content

- run the following command in /srv/gemini/

openssl req -x509 -newkey rsa:4096 -keyout key.rsa -out cert.pem -days 3650 -nodes -subj "/CN=YOUR_DOMAIN.TLD"

- Apply a chmod 400 on both files cert.pem and key.rsa

- Use "guix system reconfigure /etc/config.scm" to install agate

- Use "chown agate:agate cert.pem key.rsa" to allow agate user to read the certificates

- Use "herd restart agate" to restart the service, you should have a working gemini server on port 1965 now

Conclusion §

You are now ready to publish content on Gemini by adding files in /srv/gemini/content, enjoy!
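For a very first page, something like this should be enough; as far as I know Agate serves index.gmi for a directory by default, so check its documentation if in doubt.

printf '# Hello Gemini\n\nServed by Agate on Guix.\n' > /srv/gemini/content/index.gmi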

How to use Tor only for onion addresses in a web browser

Written by Solène, on 12 June 2021.
Tags: #tor #openbsd #network #security #privacy

Comments on Fediverse/Mastodon

Introduction §

A while ago I wrote about Tor and Tor hidden services. As a quick reminder, hidden services are TCP ports exposed into the Tor network using a long .onion address, and the traffic doesn't go through an exit node (it never leaves the Tor network).

If you want to browse .onion websites, you should use Tor, but you may not want to use Tor for everything, so here are two solutions to use Tor for specific domains only. Note that I use Tor, but this method works for any SOCKS proxy (including SSH dynamic tunneling with ssh -D, sketched below).
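As an aside, such a SOCKS proxy over SSH can be obtained like this (a minimal sketch; the host name is obviously an example):

# open a local SOCKS5 proxy on port 1080 tunnelling traffic through the remote host
ssh -N -D 127.0.0.1:1080 user@myserver.example.com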

I assume you have tor running and listening on port 127.0.0.1:9050 ready to accept connections.

Firefox extension §

The easiest way is to use a web browser extension (I personally use Firefox) that will allow defining rules based on URL to choose a proxy (or no proxy). I found FoxyProxy to do the job, but there are certainly other extensions that propose the same features.

FoxyProxy for Firefox

Install that extension, configure it:

- add a proxy of type SOCKS5 on ip 127.0.0.1 and port 9050 (adapt if you have a non standard setup), enable "Send DNS through SOCKS5 proxy" and give it a name like "Tor"

- click on Save and edit patterns

- Replace "*" by "*.onion" and save

In Firefox, click on the extension icon, enable "Proxies by pattern and order" and visit a .onion URL; the extension icon should display the proxy name. Done!

Using privoxy §

Privoxy is a fantastic tool that I had forgotten about over time; it's an HTTP proxy with built-in filtering to protect users' privacy. Marcin Cieślak shared his setup using Privoxy to dispatch between Tor or no proxy depending on the URL.

The setup is quite easy, install privoxy and edit its main configuration file, on OpenBSD it's /etc/privoxy/config, and add the following line at the end of the file:

forward-socks4a   .onion               127.0.0.1:9050 .

Enable the service and start/reload/restart it.

Configure your web browser to use the HTTP proxy 127.0.0.1:8080 for every protocol (on Firefox you need to check a box to also use the proxy for HTTPS and FTP) and you are done.

Marcin Cieślak mastodon account (thanks for the idea!).

Conclusion §

We have seen two ways to use a proxy depending on the destination; this can be quite useful for Tor but also for some other use cases. I may write about Privoxy in the future, but it has many options and it will take time to dig into that topic.

Going further §

Duckduck Go official Tor hidden service access

Check if you use Tor, this is a simple but handy service when you play with proxies

Official Duckduck Go about their Tor hidden service

TL;DR on OpenBSD §

If you are lazy, here are instructions as root to setup tor and privoxy on OpenBSD.

pkg_add privoxy tor
echo "forward-socks4a   .onion               127.0.0.1:9050 ." >> /etc/privoxy/config
rcctl enable privoxy tor
rcctl start privoxy tor

Tor may take a few minutes the first time to build a circuit (finding other nodes).

Guix: easily run Linux binaries

Written by Solène, on 10 June 2021.
Tags: #guix

Comments on Fediverse/Mastodon

Introduction §

Those who have used Guix or NixOS may know that running a binary downloaded from the internet will usually fail; this is because most of the expected paths are different from those of the usual Linux distributions.

I wrote a simple utility to help fix that; I called it "guix-linux-run", inspired by the "steam-run" command from NixOS (although it has no relation to Steam).

Gitlab project guix-linux-run

How to use §

Clone the git repository, make the linux-run command executable, and install the packages gcc-objc++:lib and gtk+ (more may be required later).

Call "~/guix-linux-run/linux-run ./some_binary" and enjoy.

If you get an error message saying something like "libfoobar" is not available, install it with the package manager and try again; this simply means the binary is trying to use a library that is not available in your library path.
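A hypothetical session could look like this; the binary, the missing library and the package name are only illustrative, not taken from a real run:

$ ~/guix-linux-run/linux-run ./some_binary
./some_binary: error while loading shared libraries: libSDL2-2.0.so.0: cannot open shared object file
$ guix install sdl2
$ ~/guix-linux-run/linux-run ./some_binary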

In the project I wrote a simple compatibility list from a few experiments; unfortunately it doesn't run everything and I still have to understand why, but it allowed me to play a few games from itch.io, so it's a start.

Guix: fetch packages from other Guix in the LAN

Written by Solène, on 07 June 2021.
Tags: #guix

Comments on Fediverse/Mastodon

Introduction §

In this how-to I will explain how to configure two Guix systems to share packages with each other. Most of the time packages are downloaded from ci.guix.gnu.org, but sometimes you compile local packages too; in both cases you will certainly prefer that computers on your network fetch packages from a machine that already has them, to save some bandwidth. This is quite easy to achieve in Guix.

We need at least two Guix systems; I'll call the one providing the packages the "server" and the system that will install packages the "client".

Prepare the server §

On the server, edit your /etc/config.scm file and add this service:

(service guix-publish-service-type
         (guix-publish-configuration
             (host "0.0.0.0")
             (port 8080)
             (advertise? #t)))

Guix Manual: guix-publish service

Run "guix archive --generate-key" as root to create a public key and then reconfigure the system. Your system is now publishing packages on port 8080 and advertising it with mDNS (involving avahi).

Your port 8080 should be reachable now with a link to a public key.

Prepare the client §

On the client, edit your /etc/config.scm file and modify the "%desktop-services" or "%base-services" if any.

(guix-service-type
  config =>
    (guix-configuration
      (inherit config)
      (discover? #t)
      (authorized-keys
        (append (list (local-file "/etc/key.pub"))
                %default-authorized-guix-keys)))))))

Guix Manual: Getting substitutes from other servers

Download the public key from the server (visiting its IP on port 8080 you will get a link), store it in "/etc/key.pub", and reconfigure your system.

Now, when you install a package, you should see where the substitutes (the Guix name for pre-built packages) are downloaded from.
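A quick way to check, as a sketch assuming some small package like "hello" isn't installed yet:

# watch the URLs printed while the substitutes are fetched;
# they should point to your local server on port 8080
guix install hello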

Declaring a repository (not dynamic) §

In the previous example, we are using advertising on the server and discovery on the client, this may not be desired and won't work from a different network.

You can manually register a remote substitute server instead of using discovery by using "substitute-urls" like this:

(guix-service-type
  config =>
    (guix-configuration
      (inherit config)
      (discover? #t)
      (substitute-urls
        (append (list "http://192.168.1.66:8080")
                %default-substitute-urls))
      (authorized-keys
        (append (list (local-file "/etc/key.pub"))
                %default-authorized-guix-keys)))))))

Conclusion §

I'm doing my best to avoid wasting bandwidth and resources in general; I really like this feature because it doesn't require much configuration or infrastructure and works in a sort of peer-to-peer fashion.

Other projects like Debian prefer using a proxy that caches the downloaded packages and acts as a repository itself, proxying the service.

In case of doubt about the validity of the substitutes provided by a URL, the challenge feature can be used to check whether reproducible builds done locally match the packages provided by a source.

Guix Manual: guix challenge documentation

Guix Manual: guix weather, a command to get information from a repository

GearBSD: managing your packages on OpenBSD

Written by Solène, on 02 June 2021.
Tags: #rex #openbsd #gearbsd

Comments on Fediverse/Mastodon

Introduction §

I added a new module to GearBSD: it lets you define the exact list of packages you want on the system, and GearBSD will take care of removing extra packages and installing missing ones. This is a huge step for me towards managing the system from code.

Note that this is an improvement over feeding pkg_add a package list, because that method doesn't remove extra packages.

GearBSD packages in action on asciinema

How to use §

In the directory openbsd/packages/ of the GearBSD git repository, edit the file Rexfile and list the packages you want in the variable @packages.

This is the packages set I want on my server.

my @packages = qw/
bwm-ng checkrestart colorls curl dkimproxy dovecot dovecot-pigeonhole
duplicity ecl geomyidae git gnupg go-ipfs goaccess kermit lftp mosh
mtr munin-node munin-server ncdu nginx nginx-stream
opensmtpd-filter-spamassassin p5-Mail-SpamAssassin  postgresql-server
prosody redis rss2email rsync
/;

Then, run "rex -h localhost show" to see what changes will be done like which packages will be removed and which packages will be installed.

Run "rex -h localhost configure" to apply the changes for real. I use "rex -h localhost" using a local ssh connection to root but you could run rex as root with doas with the same effect.

How does it work §

Installing missing packages was easy but removing extra packages was harder because you could delete packages that are still required as dependencies.

Basically, the module looks at the packages you manually installed (the ones you directly installed with the pkg_add command); if they are not part of the list of packages you want installed, they are marked as automatically installed, and then "pkg_delete -a" will remove them if they are not required by any other package.
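As a rough sketch of what this boils down to with plain pkg_* commands; the double -a flag to tag an already installed package as automatic is from memory, so check the pkg_add man page before relying on it:

# list the packages recorded as manually installed
pkg_info -mz

# hypothetical example: tag an unwanted package as automatically installed
# (-a given twice should mark existing packages as auto, verify in pkg_add(1))
pkg_add -aa some_unwanted_package

# remove automatically installed packages that are no longer needed
pkg_delete -a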

Where GearBSD is going §

This is a project I started yesterday, but I've been thinking about it for a long time. I really want to be able to manage my OpenBSD system with a single configuration file. I currently wrote two modules that are configured independently; the issue is that this doesn't allow one module to alter another.

For example, if I create a module to install gnome3 and configure it correctly, it will require the gnome3 and gnome3-extras packages, but if you don't have them in your package list, they will get deleted. GearBSD needs a single configuration file with all the information required by all modules, which would permit something like this:

$module{pf}{TCPports} = [ 22 ];
$module{gnome}{enable} = 1;
$module{gnome}{lang} = "fr_FR.UTF-8";
@packages = qw/catgirl firefox keepassxc/;

The gnome module will know it's enabled and that @packages has to receive the gnome3 and gnome3-extras packages in order to work.

Such a main configuration file will also allow catching incompatibilities, like enabling gdm and xenodm at the same time.

GearBSD: a project to help automating your OpenBSD

Written by Solène, on 01 June 2021.
Tags: #gearbsd #rex #openbsd

Comments on Fediverse/Mastodon

Introduction §

I love NixOS and Guix for their easy system configuration and for how easily you can jump from one machine to another using your configuration file. To some extent, I want to make this possible on OpenBSD with a collection of parametrized Rex modules, allowing you to configure your system piece by piece from templates that you feed with variables.

Let me introduce you to GearBSD, my project to do so.

GearBSD gitlab page

How to use §

You need to clone https://tildegit.org/solene/gearbsd using git and you also need to install Rex with pkg_add p5-Rex.

Use cd to enter a directory like openbsd/pf (the only module at this time), edit the Rexfile to change the variables as you want, and run "doas rex configure" to apply.

Video example (asciinema recording)

Example with PF §

The PF module has a few variables: in TCPports and UDPports you can list ports or port ranges that will be allowed; if no ports are in the list, then the "pass" rules for that protocol won't be generated.

If you want to enable nat on em0 for your wg0 interface, set "nat" to 1, "nat_from_interface" to "wg0" and "nat_to_interface" to "em0" and the code will take care of everything, even enabling the sysctl for port forwarding.

More work required §

It's only a start but I want to work hard on it to make OpenBSD a more accessible system for everyone, and more pleasant to use.

(R)?ex automation for deploying Matrix synapse on OpenBSD

Written by Solène, on 31 May 2021.
Tags: #rex #matrix #openbsd

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to Rex, an automation tool written in Perl and using SSH; it's an alternative to Salt, Ansible or drist.

(R)?ex project website

Setup §

You need to install Rex on the management system; this can be done using cpan or your package manager. On OpenBSD you can use "pkg_add p5-Rex" to install it. You will get an executable script named "rex".

To make things easier, we will use ssh from the management machine (your own computer) and a remote server, using your ssh key to access the root account (escalation with sudo is possible but will complicate things).

Get Rex

Simple steps §

Create a text file named "Rexfile" in a directory, this will contain all the instructions and tasks available.

In it, we declare that we want the features up to syntax version 1.4 (the latest at this time, it doesn't change often), that the default user to connect to remote hosts will be root, and that our "servers" group has only one address.

use Rex -feature => ['1.4'];

user "root";
group servers => "myremoteserver.com";

We can go further now.

Rex commands cheat sheet §

Here are some commands, you don't need much to use Rex.

- rex -T : display the list of tasks defined in Rexfile

- rex -h : display help

- rex -d : when you need some debug

- rex -g : run a task on group

Installing Munin-master §

An example I like is deploying Munin on a computer, it requires a cron and a package.

The following task will install a package and add a crontab entry for root.

desc "Munin-cron installation";
task "install_munin_cron", sub {
	pkg "munin-server", ensure => "present";
	
	cron add => "root", {
		ensure => "present",
		command = > "su -s /bin/sh _munin /usr/local/bin/munin-cron",
		on_change => sub {
			say "Munin cron modified";
		}
	};
};

Now, let's say we want to configure this munin cron by providing it a /etc/munin/munin.conf file that we have locally. This can be done by adding the following code:

	file "/etc/munin/munin.conf",
	source => "local_munin.conf",
	owner => "root",
	group => "wheel",
	mode => 644,
	on_change => sub {
		say "munin.conf has been modified";
	};

This will install the local file "local_munin.conf" into "/etc/munin/munin.conf" on the remote host, owned by root:wheel with a chmod 644.

Now you can try "rex -g servers install_munin_cron" to deploy.

Real world tasks §

Configuring PF §

This task deploys a local pf.conf file into /etc/pf.conf and reloads the configuration on change.

desc "Configuration PF";
task "prepare_pf", sub {

    file "/etc/pf.conf",
    source => "pf.conf",
    owner => "root",
    group => "wheel",
    mode => 400,
    on_change => sub {
        say "pf.conf modified";
        run "Restart pf", command => "pfctl -f /etc/pf.conf";
    };
};

Deploying Matrix Synapse §

A task can call multiple tasks for bigger deployments. In this one, we have a "synapse_deploy" task that will run synapse_install(), then synapse_configure(), synapse_service() and finally prepare_pf() to ensure the firewall rules are correct.

As synapse will generate a working config file, there is no reason to push one from the local system.

desc "Deploy synapse";
task "synapse_deploy", sub {
    synapse_install();
    synapse_configure();
    synapse_service();
    prepare_pf();
};

desc "Install synapse";
task "synapse_install", sub {
    pkg "synapse", ensure => "present";
    
    run "Init synapse",
    	command => 'su -s /bin/sh _synapse -c "/usr/local/bin/python3 -m synapse.app.homeserver -c /var/synapse/
    	cwd => "/tmp/",
    	only_if => is_file("/var/synapse/homeserver.yaml");
};

desc "Configure synapse";
task "synapse_configure", sub {
    file "/etc/nginx/sites-enabled/synapse.conf",
    	source => "nginx_synapse.conf",
    	owner => "root",
    	group => "wheel",
    	mode => "444",
    	on_change => sub {
    		service nginx => "reload";
    	};
};

desc "Service for synapse";
task "synapse_service", sub {
    service synapse => "ensure", "started";
};

Going further §

Rex offers many features because the configuration is real Perl code: you can write loops and conditions, and extend Rex by writing local modules.

Instead of pushing a hard-coded local configuration file, I could write a template of the configuration file and then use Rex to generate it on the fly by giving it the needed variables.

Rex has many functions to directly alter text files, like "append_if_no_such_line" to add a line if it doesn't exist, or to replace/add/update a line matching a regex (which can be handy to uncomment some lines).

Full list of Rex commands

Rex guides

Rex FAQ

Conclusion §

Rex is a fantastic tool if you want to programmatically configure a system; it can even be used on your local machine to allow reproducible configuration or to keep track of all the changes in one place.

I really like it because it's simple to work with, it's Perl code doing real things, it's easy to hack on (I contributed some changes and the process was easy), and it only requires working ssh access to a server (and Perl on the remote host). While Salt Stack also works "agentless", it's painfully slow compared to Rex.

Kakoune: filetype based on filename

Written by Solène, on 30 May 2021.
Tags: #kakoune #editor

Comments on Fediverse/Mastodon

Introduction §

I will explain how to configure Kakoune to automatically use a filetype (for completion/highlighting..) depending on the filename or its extension.

Setup §

The file we want to change is ~/.config/kak/kakrc; in case of issues you can use ":buffer *debug*" in Kakoune to display the debug output.

Filetype based on the filename §

I had a case in which the file doesn't have any extension. This snippet will assign the filetype Perl to files named Rexfile.

hook global BufCreate (.*/)?Rexfile %{
	set buffer filetype perl
}

Filetype based on the extension §

While this is pretty similar to the previous example, here we match any file ending in ".gmi" and assign it the filetype markdown (I know it's not Markdown, but the syntax is quite similar).

hook global BufCreate .*\.gmi %{
	set buffer filetype markdown
}

Using dpb on OpenBSD for package compilation cluster

Written by Solène, on 30 May 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

Today I will explain how to easily set up your own OpenBSD dpb infrastructure. dpb is a tool to manage port building and can use a chroot to provide a sane environment for building packages.

This is particularly useful when you want to test packages or build your own; it can parallelize package compilation in two ways: multiple packages at once and multiple processes for one package.

dpb man page

proot man page

The dpb and proot executable files are available under the bin directory of the ports tree.

Building your own packages provides absolutely NOTHING compared to using binary packages, except wasting CPU time, disk space and bandwidth.

Setup §

You need a ports tree and a partition that you accept to mount with the wxallowed, nosuid and dev options. I use /home/ for that. To simplify the setup, we will create a chroot in /home/build/ and put our ports tree in /home/build/usr/ports (your /usr/ports can then be a symlink).

Create a text file that will be used as a configuration file for proot

chroot=/home/build
WRKOBJDIR=/tmp/pobj
LOCKDIR=/tmp/locks
PLIST_REPOSITORY=/data/plist
DISTDIR=/data/distfiles
PACKAGE_REPOSITORY=/data/packages
actions=unpopulate
sets=base comp etc xbase xfont xshare xetc xserver

This will tell proot to create a chroot in /home/build and preconfigure some variables for /etc/mk.conf, use all sets listed in "sets" and clean everything when run (this is what actions=unpopulate is doing). Running proot is as easy as "proot -c proot_config".

Then, you should be able to run "dpb -B /home/build/ some/port" and it will work.

Ease of use §

I wrote a script that cleans the dpb locks, the ports system locks and the pobj directories, and also takes care of adding the mount options.

The options -p and -j tell dpb how many cores can be used for parallel compilation. Note that dpb is smart: if you tell it 3 ports in parallel and 3 threads in parallel, it won't use 3x3; it will compile three ports at a time, and once it's stuck on a single port, it will add cores to that build to make it faster.

#!/bin/sh

CHROOT=/home/build/
CORES=3

rm -fr ${CHROOT}/usr/ports/logs/amd64/locks/*
rm -fr ${CHROOT}/tmp/locks/*
rm -fr ${CHROOT}/tmp/pobj/*
mount -o dev -u /home
mount -o nosuid -u /home
mount -o wxallowed -u /home
/usr/ports/infrastructure/bin/dpb -B $CHROOT -c -p $CORES -j $CORES  $*

Then I use "doas ./my_dpb.sh sysutils/p5-Rex lang/guile" to run the build process.

It's important to use -c on the dpb command line, which will clear the compilation logs of the packages but retain their sizes; these will be used to estimate the progress of further builds by comparing the current log size with previous log sizes.

You can harvest your packages from /home/build/data/packages/. I even use a symlink from /usr/ports/packages/ to the dpb packages directory because sometimes I use make in the ports tree and sometimes I use dpb; this allows recompiling packages in both areas. I do the same for distfiles.
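Given the proot configuration above, the symlinks would look something like this (a sketch; move the original directories away first if they already exist):

ln -s /home/build/data/packages  /usr/ports/packages
ln -s /home/build/data/distfiles /usr/ports/distfiles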

Going further §

dpb can spread the compilation load over remote hosts (or even manage compilation for a different architecture); it's not complicated to set up, but it's out of scope for this guide. It requires setting up ssh keys and NFS shares; the difficulty is to reason about the correct paths depending on chroot/no chroot and local/NFS.

I strongly recommend reading the dpb man page; it supports many options, such as providing it a list of pkgpaths (a package address such as editors/vim or www/nginx) or building ports in random order.

Here is a simple command to generate a list of pkgpaths for packages on your system that are outdated compared to the ports tree; the -q parameter makes it a lot quicker but less accurate regarding shared libraries.

/usr/ports/infrastructure/bin/pkg_outdated -q | awk '/\// { print $1 }'
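Combined with the wrapper script above, rebuilding every outdated package could then look like this (an untested sketch):

doas ./my_dpb.sh $(/usr/ports/infrastructure/bin/pkg_outdated -q | awk '/\// { print $1 }')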

Conclusion §

I use dpb when I want to update my packages from source because the binary packages are not yet available, or when I want to build a new package in a clean environment to check for missing dependencies; however, I use a simple "make" when I work on a port.

Extend Guix Linux with the nonguix repository

Written by Solène, on 27 May 2021.
Tags: #guix

Comments on Fediverse/Mastodon

Introduction §

Guix is a fully open source Linux distribution approved by the FSF, meaning it's entirely free. However, for many people this means that drivers requiring firmware won't work and their usual software won't be present (for example, Firefox isn't considered free because of a trademark issue).

A group of people maintains a parallel repository for Guix to add some not-100%-free stuff, like a kernel with firmware loading capability or packages such as Firefox; it can be added to any Guix installation quite easily.

nonguix git repository

Guix project website

Configuration §

Most of the code and instructions you will find here come from the nonguix README. You need to add the new channel to download the packages, or their definitions to build them if they are not yet available as binary packages (called substitutes).

Create a new file /etc/guix/channels.scm with this content:

(cons* (channel
        (name 'nonguix)
        (url "https://gitlab.com/nonguix/nonguix")
        ;; Enable signature verification:
        (introduction
         (make-channel-introduction
          "897c1a470da759236cc11798f4e0a5f7d4d59fbc"
          (openpgp-fingerprint
           "2A39 3FFF 68F4 EF7A 3D29  12AF 6F51 20A0 22FB B2D5"))))
       %default-channels)

And then run "guix pull" to get the new repository, you have to restart "guix-daemon" using the command "herd restart guix-daemon" to make it accounted.

Deploy a new kernel §

If you use this repository, you certainly want the provided kernel that allows loading firmware, and the firmware itself, so edit your /etc/config.scm:

(use-modules (nongnu packages linux)
             (nongnu system linux-initrd))

(operating-system ;; you should already have this line
  (kernel linux)
  (initrd microcode-initrd)
  (firmware (list linux-firmware))
  #...

Then you use "guix system reconfigure /etc/config.scm" to rebuild the system with the new kernel, you will certainly have to rebuild the kernel but it's not that long. Once it's done, reboot and enjoy.

Installing packages §

You should also have the packages available now. You can enable the channel for your user only by modifying ~/.config/guix/channels.scm instead of the system-wide /etc/guix/channels.scm file. Note that you may have to build the packages you want, because the repository doesn't build all the derivations but only a few packages (like firefox, keepassxc and a few others).

Note that Guix provides Flatpak in its official repository; this is a workaround for many packages like "desktop apps" for instant messaging or even Firefox, but it doesn't integrate well with the system.

Gaming §

There is also a dedicated gaming channel!

Guix gaming channel

Conclusion §

The nonguix repository is a nice illustration that it's possible to contribute to a project without forking it entirely when you don't fully agree with its ideas. It integrates well with Guix while staying totally separate from it, as a side project.

If you have any issues related to this repository, you should seek help from the nonguix project and not Guix because they are not affiliated.

How to use WireGuard VPN on Guix

Written by Solène, on 22 May 2021.
Tags: #guix #vpn

Comments on Fediverse/Mastodon

Introduction §

Today I had to set up a Wireguard tunnel on my Guix computer (my email server is only reachable over Wireguard) and I struggled a bit to understand from the official documentation how to put the pieces together.

In Guix (the operating system, not the foreign Guix installed on an existing distribution) you certainly have a /etc/config.scm file that defines your system. You will have to add the Wireguard configuration to it after generating a private/public key pair for Wireguard.

Guix project website

Guix Wireguard VPN documentation

Key generation §

In order to generate Wireguard keys, install the package Wireguard with "guix install wireguard".

# umask 077 # this is so to make files only readable by root
# install -d -o root -g root -m 700 /etc/wireguard
# wg genkey > /etc/wireguard/private.key
# wg pubkey < /etc/wireguard/private.key > /etc/wireguard/public

Configuration §

Edit your /etc/config.scm file: in your "(services)" definition, you will define your VPN service. In this example, my Wireguard server is hosted at 192.168.10.120 on port 4433, my system has the IP address 192.168.5.1, and I define the server's public key; my private key is automatically picked up from /etc/wireguard/private.key.

(services (append (list
      (service wireguard-service-type
             (wireguard-configuration
              (addresses '("192.168.5.1/24"))
              (peers
               (list
                (wireguard-peer
                 (name "myserver")
                 (endpoint "192.168.10.120:4433")
                 (public-key "z+SCmAMgNNvkeaD0nfBu4fCrhk8FaNCa1/HnnbD21wE=")
                 (allowed-ips '("192.168.5.0/24"))))))))
      %desktop-services))

If you have the default "(services %desktop-services)", you need to use "(append" to merge %desktop-services with the new services, all defined inside a "(list ...)" form.

The "allowed-ips" field is important, Guix will automatically make routes to these networks through the Wireguard interface, if you want to route everything then use "0.0.0.0/0" (you will require a NAT on the other side) and Guix will make the required work to pass all your traffic through the VPN.

At the top of the config.scm file, you must add "vpn" in the services modules, like this:

# I added vpn to the list
(use-service-modules vpn desktop networking ssh xorg)

Once you have made the changes, apply them with "guix system reconfigure /etc/config.scm". If you reconfigure multiple times, Wireguard doesn't seem to reload correctly; you may have to run "herd restart wireguard-wg0" to properly get the new settings (this looks like a bug).
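
To sum up, something like the following should do; "wg show" (from the wireguard package) is just a convenient way to check that the interface is up and the handshake works:

guix system reconfigure /etc/config.scm
herd restart wireguard-wg0
wg show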

Conclusion §

As usual, setting up Wireguard is easy, but the functional approach makes it a bit different. It took me some time to figure out where I had to define the Wireguard service in the configuration file.

Backup software: borg vs restic

Written by Solène, on 21 May 2021.
Tags: #backup #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

Backups are important: a lot of our life is now tied to digital data, and it's important to take care of it because computers are unreliable, can be stolen, and mistakes happen. I really like two programs, restic and borg; they have nearly the same features but it's hard to decide between the two, so this is an attempt to understand the differences for my use case.

Restic §

Restic is a backup program written in Go with a "push" workflow; it supports data deduplication within a repository, multiple systems using the same repository, and encryption.

Restic can back up to a remote sftp server but also to many network storage services like S3/Minio, and even more when used with rclone (which can turn any backend it supports into a compatible restic backend). Restic seems compatible with Windows (I didn't try).
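
As an illustration, a minimal restic backup to an sftp server could look like this (hostname and paths are made up):

restic -r sftp:solene@remote-server.com:/backups/restic-repo init
restic -r sftp:solene@remote-server.com:/backups/restic-repo backup /home/solene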

restic website

Borg §

Borg is a backup program written in Python with a "push" workflow; it supports encryption, data deduplication within a repository and compression. You can back up to a remote server using ssh, but the remote server requires borg to be installed.
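
For comparison, a minimal borg backup over ssh could look like this (hostname and paths are made up; borg must also be installed on the remote side):

borg init --encryption=repokey ssh://solene@remote-server.com/./borg-repo
borg create --compression lz4 ssh://solene@remote-server.com/./borg-repo::{hostname}-{now} /home/solene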

It's a very good and reliable backup program. It has a companion app named "borgmatic" to automate the backup process and snapshot management (daily/hourly/monthly rotation and integrity checking).

*BSD specific note: borg can honor the "nodump" flag in the filesystem to skip saving those files.

borgbackup website

borgmatic website

Experiment §

I've been making a backup of my /home/ partition (minus some directories excluded in both cases) using borg and restic. I always performed the restic backup first and then the borg backup, measuring bandwidth and execution time for each.

There are five steps: an init step for the first backup of a lot of data; two steps with little changes, which basically consist of opening firefox, browsing a few pages, closing it, refreshing my emails in claws-mail (this changes a lot of small files) and using the computer for an hour; a massive change as the fourth step, where I unzipped a few game installers I found, producing lots of small files instead of one big file; and finally 24h of normal use between the fourth and last step, which is a good representation of a daily backup.

Data §

				restic	borg
Data transmitted (MB)
---------------------
Backup 1 (init)			62860	53730
Backup 2 (little changes)	15	26
Backup 3 (little changes)	168	171
Backup 4 (massive changes)	4820	3910
Backup 5 (typical day of use)	66	44
		
Local cache size (MB)
---------------------
Backup 1 (init)			161	45
Backup 2 (little changes)	163	45
Backup 3 (little changes)	207	46
Backup 4 (massive changes)	211	47
Backup 5 (typical day of use)	216	47
		
Backup time (seconds)
---------------------
Backup 1 (init)			2139	2999
Backup 2 (little changes)	38	131
Backup 3 (little changes)	43	114
Backup 4 (massive changes)	201	355
Backup 5 (typical day of use)	50	110

Repository size (GB)		65	56

Analysis §

Borg was a lot slower than restic, but in my experiment the remote ssh server is a dual core Atom system, and borg runs a process on the remote end to manage the data, so maybe that CPU was slowing down the backup. Nevertheless, in my real use case, borg is effectively slower.

Most of the time, borg was more bandwidth efficient than restic: it saved 15% of bandwidth on the first backup and 18% after some big changes, but in some cases it used a bit more bandwidth. I have no explanation for this; I guess it depends on how file chunks are computed, since if a big database file changes, one tool may be able to send only the difference and not the whole file. Borg also compresses the data (using lz4 by default), which may explain the bandwidth saving, except on binary data that doesn't compress well.

The local cache (typically in /root/.cache/) was a lot bigger for restic than for borg, and was increasing slightly at each new backup while borg cache never changed much.

Finally, the whole repository holding all the snapshots has a different size for restic and borg, respectively 65 GB and 56 GB, a 14% difference which may be due to the compression done by borg.

Other backup software §

I tested restic and borg because they are both good programs using the "push" workflow (the local computer sends the data) and making full snapshots at every backup, but there are many other backup solutions available.

- duplicity: fully scriptable, works over many remote protocols, but requires a full snapshot and then incremental snapshots to work; when you need to make a new full snapshot it will take a lot of space, which is not always convenient. Supports GPG-encrypted backups stored over FTP, which is useful for some dedicated servers offering 100GB of free FTP storage.

- burp: not very well known, the setup uses TLS certificates for encryption, requires a burp server and a burp client

- rsnapshot: based on rsync, automates backup rotation, uses hard links to avoid duplicating files that didn't change between two backups; it pulls data from servers to a central backup system.

- backuppc: a Perl app that pulls data from servers into its repository, not really easy to use

- bacula: an enterprise grade solution that I never got to work because it's really complicated, but it can support many things, even saving to tapes

Conclusion §

In this benchmark, borg is clearly slower but was the most storage and bandwidth efficient. On the other hand, restic is easier to deploy (static binary) and works with a simple sftp server, while borg requires borg to be installed on both sides.

The biggest difference between restic and borg is that restic supports backing up multiple systems into the same repository, allowing a massive deduplication gain across machines, while a borg repository is meant for a single system (it could work with multiple systems, but they should not back up at the same time and they would have to rebuild the local cache every time, which is slow).

I'll stick with borg because the backup time isn't a real issue, given it's not dramatically slower than restic, and I really enjoy using borgmatic to automatically manage the backups.

For backups to a remote server over the Internet, bandwidth efficiency would be my main concern among all the differences, and borg seems a clear winner here.

How to setup wireguard on NixOS

Written by Solène, on 18 May 2021.
Tags: #nixos #network

Comments on Fediverse/Mastodon

Introduction §

Today I will share my simple wireguard setup using NixOS as a wireguard server. The official documentation is actually very good, but it didn't really fit my use case: I have a server with multiple services, some of which must only be reachable through wireguard, but I don't want to open all ports over wireguard either.

As a quick introduction, Wireguard is a UDP based VPN protocol with the specificity of being stateless, meaning it doesn't use any bandwidth when not in use and doesn't rely on your IP either. If you switch from one IP to another to connect to the other wireguard peer, it's seamless as far as wireguard is concerned.

NixOS wireguard documentation

Wireguard setup §

The setup is actually easy if you use the tools from the "wireguard" package to generate the keys. You can use "nix-shell -p wireguard" to run the following commands:

umask 077 # this is so to make files only readable by root
wg genkey > /root/wg-private
wg pubkey < /root/wg-private > /root/wg-public

Congratulations, you generated a wireguard private key in /root/wg-private and a wireguard public key in /root/wg-public. As usual, you can share the public key with other peers, but the private key must be kept secret on this machine.

Now, edit your /etc/nixos/configuration.nix file, we will create a network 192.168.100.0/24 in which the wireguard server will be 192.168.100.1 and a laptop peer will be 192.168.100.2, the wireguard UDP port chosen is 5553.

networking.wireguard.interfaces = {
      wg0 = {
              ips = [ "192.168.100.1/24" ];
              listenPort = 5553;
              privateKeyFile = "/root/wg-private";
              peers = [
              { # laptop
               publicKey = "uPfe4VBmYjnKaaqdDT1A2PMFldUQUreqGz6v2VWjwXA=";
               allowedIPs = [ "192.168.100.2/32" ];
              }];
      };
};

Firewall configuration §

Now you will also want to enable your firewall and open UDP port 5553 on your ethernet device (eth0 here). On the wireguard tunnel, we will only allow TCP port 993.

networking.firewall.enable = true;

networking.firewall.interfaces.eth0.allowedTCPPorts = [ 22 25 465 587 ];
networking.firewall.interfaces.eth0.allowedUDPPorts = [ 5553 ];

networking.firewall.interfaces.wg0.allowedTCPPorts = [ 993 ];

Specifically defining the firewall rules for eth0 is not useful if you want to allow the same ports on wireguard (plus some ports specific to wg0), or if you want to declare the wg0 interface entirely trusted (no firewall applied).

Building §

When you are done with the changes, run "nixos-rebuild switch" to apply them; you will see a new network interface wg0.
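
To check the result, a couple of commands help (wg comes from the wireguard package, for example through "nix-shell -p wireguard"):

ip address show wg0
wg show wg0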

Conclusion §

I obviously stripped down my real world use case, but if for some reason you want a wireguard tunnel with stricter rules than what applies to the public network interfaces, this is how you do it.

How to switch to NixOS development version

Written by Solène, on 17 May 2021.
Tags: #nixos

Comments on Fediverse/Mastodon

This short guide will explain how to switch a NixOS installation to the unstable channel, that is, the development version.

nix-channel --add https://channels.nixos.org/nixos-unstable nixos

You will have to reload the channel list using the command "nix-channel --update" and then you can upgrade your system using "nixos-rebuild switch".
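
Put together, with "nix-channel --list" as a quick sanity check of which channels are configured, the procedure looks like this:

nix-channel --list
nix-channel --update
nixos-rebuild switch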

If you have issues, you can roll back using "nix-channel --rollback", which sets the channel list back to its state before the last "--update".

Nix channels wiki page

Nix-channel man page

Turn your Xorg in black and white

Written by Solène, on 15 May 2021.
Tags: #unix

Comments on Fediverse/Mastodon

Introduction §

If for some reason you want to turn your display to black and white mode and you can't control this on your display (typically a laptop panel won't allow you to change this), there are solutions.

Compositor way §

The best way I found is to use a compositor. Fortunately I'm already using "picom" as a compositor along with fvwm2, because I found windows are drawn faster when I switch between desktops with the compositor on. You will want to run the compositor in your ~/.xsession file before running your window manager.

The idea is to run picom with a shader that turns the colors into a gray scale; restart picom with no parameters if you want to get the colors back.

picom -b --backend glx --glx-fshader-win  "uniform sampler2D tex; uniform float opacity; void main() { vec4 c = texture2D(tex, gl_TexCoord[0].xy); float y = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722)); gl_FragColor = opacity*vec4(y, y, y, c.a); }"

It was surprisingly complicated to find out how to do that. I stumbled upon the "toggle-monitor-grayscale" project on GitHub, which is a long script automating this depending on your graphics card; I only took the part I needed for picom.

toggle-monitor-grayscale project on Github
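
To avoid retyping that long command, here is a small untested toggle sketch, assuming picom is the only compositor running:

#!/bin/sh
# restart picom with or without the grayscale shader
SHADER='uniform sampler2D tex; uniform float opacity; void main() { vec4 c = texture2D(tex, gl_TexCoord[0].xy); float y = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722)); gl_FragColor = opacity*vec4(y, y, y, c.a); }'
if pgrep -fq glx-fshader-win
then
    pkill picom
    sleep 1
    picom -b
else
    pkill picom
    sleep 1
    picom -b --backend glx --glx-fshader-win "$SHADER"
fi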

Conclusion §

I have no idea why someone would want to turn their screen black and white, but I was curious to see how it would look and whether it would be nicer on the eyes. It's an interesting experience, I have to admit, but I prefer to keep my colors.

Why do I write this blog?

Written by Solène, on 14 May 2021.
Tags: #blog

Comments on Fediverse/Mastodon

Why do I write this blog? §

I decided to have a blog when I started to gather personal notes while playing with FreeBSD. I wanted my notes to be easy to read and understand, and I also chose to publish them online so I could read them even at work.

The earlier articles were more about how to do X or Y; they were reminders for myself that I was sharing with the world, and I never intended to have readers at that time. I enjoyed writing and sharing, and I had a few friends who were happy to subscribe to the RSS feed and proof-read after my publications.

Over time, I wanted to make it a place to speak about unusual topics like StumpWM, Common LISP, Guix and weird Unix tricks. It made me very happy because I got feedback from more people over time, so I kept doing this.

At some point, I got a lot more involved in the OpenBSD community and I think most of my audience is related to OpenBSD now. I want to share what you can do with OpenBSD, how it differs from other systems, with step-by-step guides. I hope it helped some people jump to OpenBSD and that they enjoy it as well now. At the same time, I try to be as honest as possible when I publish about something: this blog makes absolutely no money, there are no ads, and I would have absolutely nothing to gain from not being honest in my articles. I value precision and accuracy, and I try to link to official documentation most of the time instead of doing a copy/paste that will become obsolete over time.

Speaking of obsolescence, I usually re-read all my texts (and it takes a long time) once a year, to check that everything still seems correct. I can find packages that no longer exist, configuration syntax that may have changed, or just a software version that is really old; this takes a lot of time because I value all my publications and not only the most recent ones.

I write because I have fun writing and I'm happy to make my readers happy. I often get emails from people I don't know giving me their thoughts about an article; I'm always surprised but very happy when this happens, and I always reply to those people.

I have no schedule when I write. Sometimes I plan texts but I can't get them right, so I delete them. Sometimes months pass between two publications; I don't really care, I'm not targeting any publication rate, that would be against the fun.

Why not you? §

This may sound odd, but I wanted to write this text mainly to encourage other people to write and publish their own blog. Why not you? On the technical side, there is plenty of free hosting available in the opensource community, and you have plenty of awesome static website generators available nowadays.

If you want to start the adventure, just write and publish. Propose a way to contact you: I think it's important for readers to be able to reach you, and they are very nice (at least I never had any issue); they could report mistakes or give you links to things you may enjoy on the same topic as your publication.

Don't think of money, styling, hit rate, visitor numbers; it doesn't matter. The true gems on the Internet are those old-fashioned websites of the early 2000s with many ugly JPGs and wrong colors, but with insane content about unusual and highly specific topics. I have in mind the example of a website about a French movie: the author had found every spot in France where the movie was filmed, had contacted every cast member, even the most minor ones, to ask for stories, and had gathered many pictures and stories about the making of the film. None of this would ever happen in a web driven by money, ranking and visitors.

Simple solution VS over-engineering

Written by Solène, on 13 May 2021.
Tags: #software #opensource

Comments on Fediverse/Mastodon

Introduction §

I wanted to share my thoughts about software in general. I've been using and writing software for a long time and I've seen some patterns over time.

Simple solutions §

I am a true adept of the "KISS" philosophy, in which KISS stands for Keep It Simple Stupid, meaning make your software easy to understand rather than trying to make it smart. It works most of the time, but after you reach your goal with your software, you may be tempted to add features on top of it, or make it faster, or make it smarter; it usually doesn't work.

Over-engineering §

In the opensource world, we have many bricks of software that we can put together to build better tools, but at some point you may use too many of them and the service becomes unbearable with regard to maintenance and operation. The current trend is to automate this by providing those huge stacks of software through docker. It may be good enough for users, it certainly does the job and it works, so why should we worry?

Failure and reversibility §

When you use a complicated piece of software, ALWAYS make sure you have a way out: either replacing product A with product B, or making sure the code is easy to fix. If you plan to invest yourself into deploying a complex program that will store data (like Nextcloud or Paperless-ng), the first question you should ask is: how can I move away from it?

Why would you move away from something you are deploying right now because it's good? Software can become unmaintained after some time, and you certainly don't want to run an obsolete network-facing program; due to dependency hell, it may not work in the future because it relies on some component that is no longer available (think python2 here); you may hit bugs after long use that nobody wants to fix and that prevent you from using the software correctly (scalability issues due to data growth).

There are tons of reasons that something can fail, so it's always important to think about replacements.

- is the data stored in a way you can extract it? Data could be saved as plain files on the file system, but could also be stored in some complicated repository format (ipfs)

- if the data is encrypted, can you decrypt it? If it's GPG based, you can always work with it, but if it's custom per-chunk encryption like Seafile does, it's a lot harder without the original program.

- if the software is packaged for your system, it may not be forever; you may have to package it yourself in a few years if you want to keep it up to date

- if you rely on an external API, it may not be available indefinitely. Web browser extensions are a good example: browsers have tightened what extensions can do over time, and many tricks had to be used to migrate from API to API. When you rely on an extension, it's a real issue when the extension can't work anymore.

Build your own replacement? §

There are many situations in which you may prefer to build your own service with your own code rather than using software ready off the shelf. There are always pros and cons: you gain control and reliability, at the expense of features and ease of use. Not everyone is able to write such scripts, and you may fail and have to deal with the consequences when you do; this is something that must be kept in mind.

- backups: you could use rsync instead of a complex backup system

- "cloud" file storage: rsync/sftp are still a viable option to upload a file "to the cloud" if you have a server, a simple https server would be enough to share the file, the checksum of the file could be used as an unique and very long file name.

- automation: a shell script executed over ssh could replace ansible or salt-stack to some extent
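
A minimal sketch of the checksum-as-filename idea above, with a made-up hostname and paths (sha256 is the OpenBSD tool, sha256sum its common Linux equivalent):

#!/bin/sh
# upload a file under its checksum as name, then print the resulting URL
FILE="$1"
SUM=$(sha256 -q "$FILE" 2>/dev/null || sha256sum "$FILE" | cut -d' ' -f1)
scp "$FILE" "myserver.example.com:/var/www/htdocs/files/$SUM"
echo "https://myserver.example.com/files/$SUM"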

There are many use cases in which an administrator may prefer a home-made solution, but in a company context you may end up relying on that very person instead of relying on a complex piece of software, which moves the problem to another level.

Conclusion §

There are many reasons a piece of software could fail, be abandoned, or stop working; you should always assess such situations if you don't want to build a fragile service. The simplest solutions have fewer features but are a lot more reliable and resistant to time than complex implementations. The more code you involve, the more issues you will have.

We are free to use what we want; in open source we are even free to make changes to the code we use, which is fantastic. Choices always come with pros and cons, and it's always better to think beforehand than to face unwanted consequences.

Introduction to git-annex (Port Of The Week)

Written by Solène, on 12 May 2021.
Tags: #git #openbsd

Comments on Fediverse/Mastodon

Introduction §

Now that git-annex is available as a package on OpenBSD, I can use it again. I relied on it a few years ago, but it was really complicated for me to compile and I gave up. Since I really missed it, I'm now back to it and I think it's time to share about this wonderful piece of software.

git-annex is meant to help you manage your data like you would manage books in a library: you have a database telling you where the books are, so you can find them on the shelves, or at least know who borrowed a book. We are working with digital files that can be copied, so the analogy doesn't fully work, but you may want to put some of your data on an external hard drive (not everything), and you may want some data on multiple devices for safety reasons; git-annex automates this.

It works very well for files that don't change much, which I call "static files": music, videos, pictures, documents. You don't really want to use git-annex with files you edit every day; it doesn't work well because the process can be a bit tedious.

git-annex may not be easy to understand at first, I suggest you try locally to grasp its purpose.

git-annex official website

what git-annex is not

Cheat sheet §

Let's create a cheat sheet first. Most git-annex commands have a dedicated man page, but you can also get shorter help by using "git annex help somecommand".

Create the repository §

The first step is to create a repository which is based on git, then we will tell git-annex to init it too.

mkdir ~/MyDataLibrary && cd ~/MyDataLibrary
git init
git annex init "my-computer"

Add a file §

When you want to register a file in git-annex, you need to use "git annex add" to add it and then "git commit" to make it permanent. The file contents are not stored in the git repository, which only contains metadata.

git annex add Something
git commit -m "I added something"

Example:

$ echo "hello there" > hello
$ ls -l hello
-rw-r--r--  1 solene  wheel  12 May 12 18:38 hello
$ git annex add hello
add hello
ok
(recording state in git...)
$ ls -l hello
lrwxr-xr-x  1 solene  wheel  180 May 12 18:38 hello -> .git/annex/objects/qj/g5/SHA256E-s12--aadc1955c030f723e9d89ed9d486b4eef5b0d1c6945be0dd6b7b340d42928ec9/SHA256E-s12--aadc1955c030f723e9d89ed9d486b4eef5b0d1c6945be0dd6b7b340d42928ec9
$  git status hello
On branch master
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   hello

Make changes to a file §

If you want to make changes to a file, you first need to "unlock" it in git-annex, which means the symbolic link is replaced by the file itself and it's no longer read-only. Then, after your changes, you need to add it again to git-annex and commit your changes.

git annex unlock file
vi file
git annex add file
git commit -m "I changed something" file

Add a remote encrypted repository §

If you want to store data (for duplication) on a remote server using ssh, you can use a remote of type "rsync" and encrypt the data in several fashions (GPG with hybrid is the best). This allows storing data on untrusted remote devices.

git annex initremote my-remote-server type=rsync rsyncurl=remote-server.com:/home/solene/git-annex-data keyid=my-gpg@address encryption=hybrid

After this command, I can send files to my-remote-server.

git-annex website about encryption

git-annex website about special remotes

Manage data from multiple computers (with ssh) §

**This is a way to have a central git repository for many computers, this is not the best way to store data on remote servers**.

If you want to use a remote server through ssh, there are two ways: mounting the remote file system using sshfs or using plain ssh. If you use sshfs, it behaves like a standard local file system, such as an external usb drive, but if you go through ssh, it's different.

You need key-based authentication for the remote ssh connection, and you also need git-annex on the remote server. It's important to have a bare git repository there.

cd /home/data/
git init --bare
git annex init "remote-server"

On your computer:

git remote add remote-server ssh://hostname:/home/data/
git fetch remote-server

You will be able to use commands related to repositories now!
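
For example, to synchronize the git-annex metadata with that remote and then push file contents to it:

git annex sync remote-server
git annex copy . -t remote-server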

List files and where they are stored §

You can use the "git annex list" command to list where your files are physically stored.

In the following example you can see which files are on my computer and which are available on my remote server called "network"; "web" and "bittorrent" are special remotes.

here
|network
||web
|||bittorrent
||||
X___ Documentation/Nim/Dominik Picheta - Nim in Action-Manning Publications (2017).pdf
X___ Documentation/ada/Ada-Distilled-24-January-2011-Ada-2005-Version.pdf
X___ Documentation/ada/courseada1.pdf
X___ Documentation/ada/courseada2.pdf
X___ Documentation/ada/courseada3.pdf
X___ Documentation/scheme/artanis.pdf
X___ Documentation/scheme/guix.pdf
X___ Documentation/scheme/manual_guix.pdf
X___ Documentation/skribilo/skribilo.pdf
X___ Documentation/uck2ep1.pdf
X___ Documentation/uck2ep2.pdf
X___ Documentation/usingckermit3e.pdf
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/01 - Daftendirekt.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/02 - Wdpk 83.7 fm.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/03 - Revolution 909.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/04 - Da Funk.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/05 - Phoenix.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/01 - Alan Walker - Intro.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/02 - Alan Walker, Sorana - Lost Control.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/03 - Alan Walker, Julie Bergan - I Don_t Wanna Go.flac

List files locally available §

If you want to list the files whose content is available locally, you can use the "list" command from git-annex restricted to "here", which represents your local repository.

git annex list --in here

Work with a remote repository §

Delete a repository §

Simply mark it as "dead".

git annex dead $repo_name

Adding a remote repository GPG encrypted §

git annex initremote $name type=rsync rsyncurl=remote-server:/home/solene/mydirectory keyid=your@email encryption=shared

Copy files to a remote §

If you want to duplicate files between repositories to have multiples copies you can use "git annex copy".

git annex copy Music -t remote-server

Move files to a remote §

If you want to move files from a repository to another (removing the content from origin) you can use "git annex move" which will copy to destination and remove from origin.

git annex move Music -t remote-server

Get a file content §

If you don't have a file locally, you can fetch it from a remote to get the content.

git annex get Music/Queen

Forget a file locally §

If you don't want to keep a file locally, because you don't have the disk space or you simply don't want it, you can use the "drop" command. Note that "drop" is safe because git-annex won't allow you to drop files that have only one copy (unless you use --force, of course).

git annex drop Music/Queen

Real life example: I have a very large music library but my laptop SSD is too small, so I get the music I want and drop the files I don't want to listen to for a while.

Use mincopies to enforce multi repository data duplication §

The numcopies and mincopies settings tell git-annex you want exactly, or at least, "n" copies of your files, so it can protect you from accidental deletions and help upload files to other repositories to match the requirements.

Enable per directory recursively §

echo "* annex.mincopies=2" > .gitattributes

Only upload files not matching the num copies §

If you have multiple repositories and some files don't match the copies requirements, you can use the following command to push only the files missing copies.

git annex copy --auto -t remote-server

Real life example: I want my salary PDFs to be really safe, so I can ask for 2 copies of those and then run a sync to the remote server, which will upload them if there is only one copy of a file so far.

Verifying integrity and requirements §

The git-annex fsck command checks the integrity of every file in the local repository and reports whether they are sane (or not), but it also tells you which files don't meet the mincopies requirement.

git annex fsck

Reversibility §

If for some reason you want to give up git-annex, you can easily get all your files back as a normal file system by using "git annex unlock ." on the top directory of your repository: every local file will be replaced by its physical copy instead of the symlink. Reversibility is very important when you deal with your data, because it means you are not stuck forever with a tool in case it breaks or you want to switch to another process.

My workflow §

I have a ~/DATA/ directory containing the sub-directories {documents,documentation,pictures,videos,music,images}; documents are papers or legal documents, documentation is mostly PDFs, pictures are family pictures, and images are wallpapers or stupid images I want to keep.

I've set mincopies to 2 for documents and pictures, and my music is not on my computer but on a remote; I get the music files I want to listen to when I'm on the local network with the computer holding them, and I drop them locally when I'm bored of them.

Conclusion §

git-annex separates content from indexation; it can be used in many ways but it implies an archivist philosophy: redundancy, safety, immutability (sort of). It is not meant for backup: you can back up your directory managed by git-annex, but that will only save the data you have locally, so you will have to back up your other data as well.

I love that tool, it's a very nice piece of software. It's unique, I didn't find any other program to achieve this.

More resources §

git-annex official walkthrough

git-annex special remotes (S3, webdav, bittorrent etc..)

git-annex encryption

Introduction to security good practices

Written by Solène, on 09 May 2021.
Tags: #security

Comments on Fediverse/Mastodon

Introduction §

I wanted to share my thoughts about security in regards to computers. Let's try to summarize it as a list of rules.

If you read it and you disagree, please let me know, I can be wrong.

Good practices §

Here is a list of good practices I've found over time.

Passwords policy §

Passwords are a mess: we need many of them every day but they are not practical. I highly recommend using a unique random password for every need. I switched to "keepassxc" to manage my passwords; there are many password managers on the market.

When I need to register a password, I use the longest one allowed and I keep it in my password database.

If my password database gets compromised, all my passwords are leaked; but if I didn't use one and had a single password everywhere, there is a good chance it would already be registered somewhere, and then the attacker would have access to everything too. The best situation would be a really effective memory, but I don't want to rely on it.

I still recommend keeping a few passwords in your memory, like the one for your backups, your user session, and the one to unlock the password database.

When possible, use multi factor authentication. I like the TOTP (Time-based One-Time Password) method because it works without any third party service and can be stored securely in a backup.

Devices trust §

It's important to define a level of trust for the devices you use. I do not trust my Windows gaming computer; I would not let it have access to my password database. I do not trust my phone enough for that job either.

If my phone requires a password, I generate one, keep it in my password database, and create a QR code to scan with the phone instead of copying that very long password. The phone then holds that password locally, but not the entire database, and it remains quite usable.

Define your threat model §

When you think about security, you need to think about what kind of security you want; sometimes this also implies thinking about privacy.

Let's think about my home file server: it's a small device with only one disk and no access to the internet. It could be hacked remotely; this is possible but very unlikely. On the other hand, a thief could come into my house and steal a few things, like this server and its data. It makes a lot of sense to use disk encryption for devices that could be stolen (to make it short, I mean all devices).

On the other hand, if I had to manage a mail server with IMAP / SMTP services on it, I would harden it a lot against external attacks and I would have to define some extra security policies for it.

Think about usability §

Most of the time, security and usability don't play well together: if you increase security, it will be at the expense of usability, and vice-versa. Back to my IMAP server: I could enable and enforce connecting over TLS for my users, which would prevent their connections from being eavesdropped. I could also enforce a VPN (that I manage myself, not a commercial VPN that can see all my traffic...) to connect to the IMAP server, which would prevent anyone without the VPN from connecting to the server. I could also restrict that VPN connection to a list of public IPs. I could require the VPN access from an allowed IP to be unlocked by an SSH connection requiring TOTP + password + public key to succeed.

At this point, I'm pretty sure my users would give up and set up an automatic redirection of their emails to another mail server that is usable to them; I'd be defeated by my own users because of too much security.

Don't lock yourself out §

When you start encrypting everything or locking everything down on the network, it can become hard to avoid data loss or being locked out of the service yourself.

If you have important passwords, you could use Shamir's Secret Sharing (I wrote about it a while back) to split a password into multiple pieces that you convert to QR codes and hand out to a few people you know, to help you recover the data if you ever forget the password.

Backups §

It's important to make backups, but it's even more important to encrypt them and keep them away from your main storage. My practice here is to back up all my computer data daily (which is quite huge), but also to back up only my most important data to remote servers. I can afford to lose my music files, but I'd prefer to be able to recover my GPG and SSH keys in case of a huge disaster at home.

User management §

If an attacker gets control of your user account, it may be over for you. It's important to only run programs you trust, and no network-facing services, as that user.

If you need to run something you are unsure about, use a virtual machine or at least a dedicated user that won't have access to your own user's data. My $HOMEDIR has a chmod 700 so only root and I can access it. If I need to run a service, I will use a dedicated user for it. It's not always convenient, but it's effective.
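
As an illustration on OpenBSD (the user and program names are made up, and doas.conf needs a rule permitting this):

# home directory readable only by its owner
chmod 700 /home/solene

# dedicated user to run a program you don't fully trust
useradd -m _untrusted
doas -u _untrusted /usr/local/bin/someprogram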

Conclusion §

Good software with a good design is important for security, but it doesn't do the whole job; users must be aware of the risks and act accordingly.

How to run a NixOS VM as an OpenBSD guest

Written by Solène, on 08 May 2021.
Tags: #openbsd #nixos

Comments on Fediverse/Mastodon

Introduction §

This guide is to help people install the NixOS Linux distribution as a virtual machine guest hosted on the OpenBSD VMM hypervisor.

Preparation §

Some operations are required on the host, but specific instructions will be needed on the guest as well.

Create the disk §

We will create a qcow2 disk; this format doesn't use all the reserved space upon creation, and the size grows as the virtual disk fills with data.

vmctl create -s 20G nixos.qcow2

Configure vmd §

We have to configure the hypervisor to run the VM. I've chosen to define a new MAC address for the VM interface to avoid a collision with the host MAC.

vm "nixos" {
       memory 2G
       disk "/home/virt/nixos.qcow2"
       cdrom "/home/virt/latest-nixos-minimal-x86_64-linux.iso"
       interface { lladdr "aa:bb:cc:dd:ee:ff"  switch "uplink" }
       owner solene
       disable
}

switch "uplink" {
	interface bridge0
}

vm.conf man page

Configure network §

We need to create a bridge and add my computer's network interface "em0" to it. Virtual machines will be attached to this bridge and will be reachable from the network.

echo "add em0" > /etc/hostname.bridge0
sh /etc/netstart bridge0

Start vmd §

We want to enable and then start vmd to use the virtual machine.

rcctl enable vmd
rcctl start vmd

NixOS and serial console §

When you are ready to start the VM, type "vmctl start -c nixos"; you will automatically get attached to the serial console. Be sure to read this whole chapter first, because you will have a window of approximately 10 seconds before it boots automatically (if you don't type anything).

If you see the grub display with letters displayed more than once, this is perfectly fine. We have to tell the kernel to enable the console output and the desired speed.

On the first grub choice, press "tab" and append this text to the command line: "console=ttyS0,115200" (without the quotes). Press Enter to validate and boot, you should see the boot sequence.

For me it stayed a long time on "starting sshd"; keep waiting, it will continue after a few minutes.

Installation §

There is an excellent installation guide for NixOS in their official documentation.

Official installation guide

I had issues with DHCP, so I set the network manually; my network is 192.168.1.0/24 and my router 192.168.1.254 also provides DNS.

systemctl stop NetworkManager
ifconfig enp0s2 192.168.1.151/24 up
route add -net default gw 192.168.1.254
echo "nameserver 192.168.1.254" >> /etc/resolv.conf

The installation process can be summarized with these instructions:

sudo -i
parted /dev/vda -- mklabel msdos
parted /dev/vda -- mkpart primary 1MiB -1GiB # use every space for root except 1 GB for swap
parted /dev/vda -- mkpart primary linux-swap -1GiB 100%
mkfs.xfs -L nixos /dev/vda1
mkswap -L swap /dev/vda2
mount /dev/disk/by-label/nixos /mnt
swapon /dev/vda2
nixos-generate-config --root /mnt
nano /mnt/etc/nixos/configuration.nix
nixos-install
shutdown now

Here is the configuration.nix file of my VM guest; it's the most basic I could want, and I stripped all the comments from the example generated before the install.

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  boot.loader.grub.extraConfig = ''
    serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
    terminal_input --append serial
    terminal_output --append serial
  '';

  networking.hostName = "my-little-vm";
  networking.useDHCP = false;

  # COMMENT THIS LINE IF YOU DON'T WANT DHCP
  # networking.interfaces.enp0s2.useDHCP = true;


  # BEGIN ADDITION
  # all of these variables were added or uncommented
  boot.loader.grub.device = "/dev/vda";

  # required for serial console to work!
  boot.kernelParams = [
    "console=ttyS0,115200n8"
  ];

  systemd.services."serial-getty@ttyS0" = {
    enable = true;
    wantedBy = [ "getty.target" ]; # to start at boot
    serviceConfig.Restart = "always"; # restart when session is closed
  };

  # use what you want
  time.timeZone = "Europe/Paris";

  # BEGIN NETWORK
  # define network here
  networking.interfaces.enp0s2.ipv4.addresses = [ {
        address = "192.168.1.151";
        prefixLength = 24;
  } ];
  networking.defaultGateway = "192.168.1.254";
  networking.nameservers = [ "192.168.1.254" ];
  # END NETWORK

  # enable SSH and allow X11 Forwarding to work
  services.openssh.enable = true;
  services.openssh.forwardX11 = true;

  # Declare a user that can use sudo
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };

  # declare the list of packages you want installed globally
  environment.systemPackages = with pkgs; [
     wget vim
  ];

  # firewall configuration, only allow inbound TCP 22
  networking.firewall.allowedTCPPorts = [ 22 ];
  networking.firewall.enable = true;
  # END ADDITION

  # DONT TOUCH THIS EVER EVEN WHEN UPGRADING
  system.stateVersion = "20.09"; # Did you read the comment?

}

Edit /etc/vm.conf to comment out the cdrom line and reload the vmd service. If you want the virtual machine to start automatically with vmd, you can remove the "disable" keyword.

Once your virtual machine is started again with "vmctl start nixos", you should be able to connect to it over ssh. If you forgot to add users, you will have to access the VM console with "vmctl console", log in as root, modify the configuration file, type "nixos-rebuild switch" to apply the changes, and then "passwd user" to define the user's password. You can also set a public key when declaring a user if you prefer (I recommend it).
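
A short recap of those host-side steps, run as root on the OpenBSD host:

# after editing /etc/vm.conf (cdrom line commented out)
vmctl reload
vmctl start nixos
vmctl console nixos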

Install packages §

There are three ways to install packages on NixOS: globally, per-user or for a single run.

- globally: edit /etc/nixos/configuration.nix and add your packages names to the variable "environment.systemPackages" and then rebuild the system

- per-user: type "nix-env -i nixos.firefox" to install Firefox for that user

- for single run: type "nix-shell -p firefox" to create a shell with Firefox available in it

Note that the single run doesn't mean the package disappears right away; it's just not "hooked" into your PATH, so you can't use it outside that shell. This is mostly useful when you are developing and need specific libraries to build a project, without wanting them always available to your user.

Conclusion §

While I had never used a Linux system as a guest on OpenBSD before, it may be useful to run Linux specific software occasionally. With X forwarding, you can run Linux GUI programs that you couldn't run on OpenBSD; even if it's not really smooth, it may be enough for some situations.

I chose NixOS because it's a Linux distribution I like, and it's quite easy to use in the sense that it has a single configuration file to manage the whole system.

How to install Gnome on OpenBSD

Written by Solène, on 07 May 2021.
Tags: #openbsd #unix #gnome

Comments on Fediverse/Mastodon

Introduction §

This article will explain how to install the Gnome desktop on OpenBSD. You need access to the root user to proceed.

Instructions §

As root, run "pkg_add gnome gnome-extras", which will install the meta-package gnome, listing all the required dependencies for a fully working Gnome installation, and the -extras package containing all the Gnome related programs.

You should see this output after "pkg_add" has finished installing the packages; it's important to read the "pkg-readme" files, which contain instructions specific to the packages.

New and changed readme(s):
        /usr/local/share/doc/pkg-readmes/gnome
        /usr/local/share/doc/pkg-readmes/upower

The most important file is the pkg-readme about Gnome, which contains clear instructions about the configuration required to run Gnome. That file has a "Too long didn't read" section at the end for people in a hurry, which contains instructions to copy/paste.
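
From memory, that TL;DR boils down to enabling a few daemons, roughly like below, but always follow the pkg-readme shipped with your version as the authoritative source:

rcctl disable xenodm
rcctl enable multicast messagebus avahi_daemon gdm
reboot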

Tweaks §

There is an "app" named Tweaks that allow further customization than Gnome3 is allowing, like virtual desktop being horizontal, add menus on the top panel or change various behavior of Gnome.

Conclusion §

While the Gnome installation is not fully automated, it requires only a few instructions to get it installed and fully operational.

Gnome3 after the first start wizard

Gnome3 desktop with a few customizations

Synchronization files software

Written by Solène, on 04 May 2021.
Tags: #unix

Comments on Fediverse/Mastodon

Introduction §

In this article I will introduce various opensource file synchronization programs and their corresponding workflows. I may not know them all, obviously.

I can't give a full explanation of each of them, but I will tell you enough so you can know if it could be of any interest to you.

Software §

There are many programs out there, with pros and cons, to match our file synchronization requirements.

rsync §

rsync is the leader for simple file replication; it can ensure the destination exactly matches the source data. It's available mostly everywhere, and using ssh as a transport, it's also secure.

rsync is really the reference for a one-way synchronization.
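
A classic one-way mirror looks like this (made-up hostname and paths; --delete makes the destination match the source exactly, so use it carefully):

rsync -avz --delete ~/Documents/ solene@remote-server.com:/backups/documents/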

rsync website

lsyncd §

lsyncd is meant to be used for near-realtime synchronization. It checks for changes in the monitored directories and replicates the changes to a remote system (using rsync by default).

lsyncd website

unison §

unison is like rsync but can synchronize both ways, meaning you can keep two directories synchronized without having to think about the direction of transfer. Obviously, in case of a conflict you will have to resolve it and pick which file you want to keep. This is a well established program that is very reliable.

unison website

rclone §

rclone is like rsync but supports many backends instead of relying on ssh to connect to a remote source. It's mostly used to transfer files from or to cloud services, acting as glue between the rclone core and the service API.

I covered rclone in a previous article if you want more information.

rclone website

syncthing §

syncthing is a fantastic tool to keep directories synchronized between computers/phones. It's a service you run, you define which directories you want to export, and on other syncthing instances you can add those exports and everything is kept synchronized without extra tuning. It uses a public tracker to find peers, so you don't have to mess with NAT or port redirections, and if you want full privacy you can use direct IPs. Data is encrypted during transfers.

It has the advantage of working in fully automatic mode, it can exchange both ways within the same directory with multiple instances on the same share, it can keep previous copies of deleted / replaced files, and it supports many other features.

syncthing website

sparkleshare §

SparkleShare isn't well known but still does the job very efficiently. It offers automatic synchronization of a directory with other peers based on a git repository: basically, if you add a file or make a change, it's committed and pushed to the remote repositories, and if someone else makes a change, you receive it too.

While it works very well, it's mostly suited for non-binary data because of the git backend. You can't really delete old data, so the SparkleShare share will grow over time.

SparkleShare website

nextcloud §

Nextcloud has a file synchronization capability; it's mostly used to upload your data to a remote server and access it remotely, but also to share a file or a directory in read-only or read/write mode with other people. It's really a huge toolbox that requires a 24/7 server, but it provides many features for sharing files. A not so well known feature is the ability to share a directory between Nextcloud instances.

Nextcloud has its core in PHP for the web access, but also provides phone and desktop applications.

Nextcloud can encrypt stored data.

Nextcloud website

seafile §

Seafile is a centralized server to store data, like Nextcloud. It's more focused on file storage than Nextcloud, but provides solid features and also companion apps for phones and desktops.

seafile website

git-annex §

I kept the best for the end. git-annex is a special beast that would deserve a full article of its own, but I never found how to approach it.

git-annex is a command line tool to manage a library of data; it delegates the actual transfers to the appropriate protocol.

WHAT DOES IT MEAN? Let's try an analogy.

You are in a house, you have many things in your house: movies, music, books, papers. If you want to keep track of where is stored something, you need an inventory, in which you will label where you stored this paper, this DVD, this book etc... This is what git-annex is doing.

git-annex allows you to manage all your data and spread it over different locations (with redundancy if you want), and lets you access it natively (or at least tells you where to get it). A real life example would be to use an external hard drive to store big files like music or movies, but a remote server to back up important documents. You may also want your documents on the external hard drive too, or even on two hard drives; you can tell git-annex to manage that.

git-annex can give you the current state of your library without having the files locally: it replaces the whole hierarchy with symlinks to the real files when they are on your computer, meaning you can fetch the files when you need them, or simply work on that index to remove files and then tell git-annex to proceed with the deletion when it can (like when you get internet access or you connect that external hard drive).

The drawback is that all the tracked files are symbolic links to potentially non-existing files, and that you need a specific workflow of unlocking a file in order to make changes, and then storing it again.

I've been using it for years for data that doesn't change much (administrative documents, music, pictures), but it's certainly not suitable for tracking logs or frequently modified files.

The name contains "git", but git-annex only uses git to store the metadata; the data itself is not in git.

git-annex website

Conclusion §

There are different strategies to synchronize files between computers: one way, both ways, letting other people use them, managing data at huge scale, in realtime, etc...

From my experience, we all manage our files in very different ways so I'm glad we have that many ways to synchronize them.

PS: don't forget to back up; replicating your data doesn't mean you don't need backups, as sometimes it's easy to destroy all the data at once with a simple mistake.

OpenBSD: getting started

Written by Solène, on 03 May 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

This is a guide for OpenBSD beginners; I hope it will turn out to be a useful resource helping people get acquainted with this operating system I love. I will use a lot of links because I prefer to refer to official documentation.

If you are new on OpenBSD, welcome aboard, this guide is for you. If you are not new, well, you may learn a few things.

Installation step §

This article is not about installing OpenBSD. There is enough official documentation for this.

OpenBSD FAQ about Installation

Booting the first time §

So, you installed OpenBSD, you chose to enable X (the graphical interface at boot) and now you face a terminal on a gray background. Things are getting interesting here.

Become super user (root) §

You will often have to use the root account for commands or modifying system files.

su -l

You will have to type the root user's password (defined at install time) to change to that user. If you type "whoami" you should see "root" as the output.

You got a mail! §

When you install the system (or upgrade it) you will receive an email for the root user; you can read it using the "mail" command. It will be an email from Theo de Raadt (founder of OpenBSD) greeting you.

You will notice this email contains hints and has basically the same purpose as the article you are currently reading. One important man page to read is afterboot(8).

afterboot(8) man page

What is a man page? §

If you don't know what a man page is, it's really time to learn because you will need it. When someone says "a man page" it means "a manual page". Documentation in OpenBSD is provided as manual pages covering various programs, concepts or C functions.

To read a man page, in a terminal type "man afterboot" and use the arrows or page up/down to navigate within the page. You can read the "man man" page to learn about man itself.

Previously I wrote "afterboot(8)" but the real man page name is "afterboot"; the "(8)" specifies the man page section. Some words can be used in various contexts, and that's where man page sections come into play. For instance, sysctl(2) documents the system call "sysctl()" while sysctl(8) gives you information about the sysctl command to change kernel settings. You can specify which section you want to read by typing the number before the page name, like in "man 2 sysctl" or "man 8 sysctl".
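
For example, to find and open the right page:

# search man page names and descriptions for a keyword
man -k sysctl
# read a specific section
man 2 sysctl
man 8 sysctl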

Man pages are all constructed in the same order: NAME, SYNOPSIS, DESCRIPTION..... SEE ALSO...; the "SEE ALSO" section is an important one, as it gives you references to other pages you may want to read. For example, afterboot(8) will point you to doas(1), pkg_add(1), hier(7) and many other pages.

Now, you should be able to use the manual pages.

Install a desktop environment §

When you want to install a desktop environment, there will often be a "meta package" which pulls in every package required for the environment to work.

OpenBSD provides a few desktop environments like:

- Gnome 3 => pkg_add gnome

- Xfce => pkg_add xfce

- MATE => pkg_add mate

When you install a package using "pkg_add", you may find a message at the end of the pkg_add output telling you there is a file in /usr/local/share/doc/pkg-readmes/ to read; those files are specific to packages and contain instructions that should be read before using a package.

The instructions can be about performance, potential limits, configuration snippets, how to start the service, etc. They are very important to read, and for a desktop environment, they will tell you everything you need to know to get it started.
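For example, after installing the GNOME meta package, you could read its instructions like this (the exact pkg-readme file name is an assumption here, it may differ depending on the package and version):

pkg_add gnome
less /usr/local/share/doc/pkg-readmes/gnome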

Graphical session §

When you log in from the xenodm screen (the one with a puffer fish and the OpenBSD logo asking for a login/password), the xenodm program will read your ~/.xsession file, this is where you prepare your desktop and execute commands. Usually, the first blocking command (the one that keeps running in the foreground) is your window manager; you can put commands before it to customize your system or run programs in the background.

# disable bell
xset b off

# auto blank after 10 minutes
xset s 600 600

# run xclock and xload
xclock -geometry 75x75-70-0 -padding 1 &
xload -nolabel -update 5 -geometry 75x75-145-0 & 

# load my ~/.profile file to define ENV
. ~/.profile

# display notifications
dunst &

# load changes in X settings
xrdb -merge ~/.Xresources

# turn the screen reddish to reduce blue color
sct 5600

# synchronize copy buffers
autocutsel &

# kdeconnect to control android phone
kdeconnect-indicator &

# reduce sound to not destroy my ears
sndioctl -f snd/1 output.level=0.3 

# compositor for faster windows drawing
picom &

# something for my mouse setup (I can't remember)
xset mouse 1 1
xinput set-prop 8 273 1.1

# run my window manager
fvwm2

Configure your shell §

This is a very recurring question: how do you get your shell aliases working once you are logged in? In bash, sh and ksh (and maybe other shells), every time you spawn a new interactive shell (in which you can enter commands), the environment variable ENV is read and if its value matches a file path, that file is loaded.

The way to get your beloved shell environment set up is the following:

- ~/.xsession will source ~/.profile when starting X, so its content is inherited by everything run from X

- ~/.profile will export ENV like in "export ENV=~/.myshellfile"
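As a minimal sketch (assuming the default ksh and using the ~/.myshellfile name from above), it could look like this:

# in ~/.profile
export ENV=~/.myshellfile

# in ~/.myshellfile, put your aliases and interactive settings
alias ll='ls -la'

# in ~/.xsession, before starting the window manager
. ~/.profile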

CPU frequency auto scaling §

If you run a regular computer (amd64 arch) you will want to run the service "apmd" in automatic mode: it will keep your CPU at the lowest frequency and increase the frequency when there is some load, reducing heat, power usage and noise.

Here are commands to run as root:

rcctl enable apmd
rcctl set apmd flags -A
rcctl start apmd

What are -release and -stable? §

To make things simple, "-release" is the whole set of files used to install OpenBSD when that release is out. Further updates for that release are called the -stable branch: if you run "pkg_add -u" to update your packages and "syspatch" to update your base system, you will automatically follow -stable (which is fine!). A release is a single point in time of the state of OpenBSD.

Quick FAQ §

Where is steam? §

No steam, it's proprietary and can't run on OpenBSD

Where is wine? §

No wine, it would require changes into the kernel.

Does my recent NVIDIA card work? §

There is no NVIDIA driver; the card would work with the VESA driver, but it will be sluggish and very slow.

Does the linux emulation work? §

There is no linux emulation.

I want my favorite program to run on OpenBSD §

If it's not open source and not written in a language like Java or C# that uses a language virtual machine as an abstraction layer, it won't work (and most programs are not like that).

If it's opensource, it may be possible if all its dependencies are available on OpenBSD.

Get into the ports tree to make things run on OpenBSD

Can I have sudo? §

OpenBSD ships a sudo alternative named "doas" in the base system but sudo can be installed from packages.

doas man page

doas.conf man page

How to view the package list? §

You can check the package directory in a mirror or visit

Openports.pl (using the development version of the ports tree)

What can the virtualization tool do? §

The virtualization system of OpenBSD can run OpenBSD or some Linux distributions, but without a graphical interface and with only 1 CPU. This means you will have to configure a serial console to proceed with the installation and then use ssh or the serial console to use your system.

There is qemu in ports but it's not accelerated and won't suit most people's needs because it's terribly slow.

OpenBSD 6.9 packages using IPFS

Written by Solène, on 01 May 2021.
Tags: #openbsd #ipfs

Comments on Fediverse/Mastodon

Update 15/07/2021 §

I disabled the IPFS service because it was barely used and drew too much CPU on my server. It was a nice experiment, thank you very much for the support and suggestions.

Introduction §

OpenBSD 6.9 has been released and I decided to extend my IPFS experiment to the latest release. This means you can now fetch packages and base sets for 6.9 amd64 over IPFS.

If you don't know what IPFS is, I recommend you to read my previous articles about IPFS.

Note that it also works for -current / amd64, the server automatically checks for new updates of 6.9 and -current every 8 hours.

Benefits §

The benefit is to play with IPFS and understand how it works with a real world use case. Instead of using mirrors to distribute packages, my server provides the packages and everyone downloading them can also participate in providing data to other IPFS clients; this can be seen as a dynamic BitTorrent CDN (Content Delivery Network): instead of making a torrent per file, it's automatic. You certainly wouldn't download each package as a separate torrent file, nor would you download all the packages in a single torrent.

This could reduce the need for mirrors and potentially make package access faster for people who are far from a mirror, if many people close to them use IPFS and have downloaded the data. This is a great technology that can only be beneficial once it reaches a critical mass of adopters.

Installing IPFS on OpenBSD §

To make it brief, there are instructions in the provided pkg-readme, but I will give a few pieces of advice (that I may add to the pkg-readme later).

pkg_add go-ipfs
su -l -s /bin/sh _go-ipfs -c "IPFS_PATH=/var/go-ipfs /usr/local/bin/ipfs init"
rcctl enable go_ipfs

# recommended settings
rcctl set go_ipfs flags --routing=dhtclient --enable-namesys-pubsub

cat <<EOF >> /etc/login.conf
go_ipfs:\
	:openfiles=2048:\
	:tc=daemon:
EOF

rcctl start go_ipfs

Put this in /etc/installurl:

http://k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns.localhost:8080/pub/OpenBSD

Conclusion §

Now, pkg_add will automatically download the packages from IPFS, if more people use it, it will be faster and more resilient than if only my server is distributing the packages.

Have fun and enjoy 6.9 !

If you are worried about security: the packages distributed are the same as the ones on the mirrors, pkg_add automatically checks the signature of the files against the signify keys available in /etc/signify/, so if pkg_add works, the packages are legitimate.

Use Libreoffice Calc to make 3D models

Written by Solène, on 27 April 2021.
Tags: #fun

Comments on Fediverse/Mastodon

Introduction §

Today I will share with you a simple python script turning a 2D picture defined by numbers and colors in a spreadsheet into a 3D model in OpenSCAD.

Project webpage

How to install §

Short instructions on how to install sheetstruder; I will send some documentation upstream. You need git and python, and later you will need openscad and a spreadsheet tool.

git clone https://git.hackers.town/seachaint/sheetstruder.git
cd sheetstruder
python3 -m venv sandbox
. sandbox/bin/activate
python3 -m pip install -r requirements.txt

You will need to be in this shell (you need at least the activate command) to make it work.

How to use §

Open a spreadsheet tool that is able to export in format xlsx, type a number to create a solid object of this width (1 = 1 pixel, 2 = 3 pixels because it's mirrored) and put a background color in your cell. Save your file as xlsx.

Run "python3 ./sheetstruder.py yourfile.xlsx > file.scad" and open the file in OpenSCAD, enjoy!

Examples §

I made a simple house with grass around it, an antenna, a chimney with smoke, and a door and window.

House in Libreoffice Calc

House rendered in OpenSCAD from the sheetstruder export

More resources §

OpenSCAD website

Port of the week: pup

Written by Solène, on 22 April 2021.
Tags: #internet

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to the utility "pup", which provides CSS selector filtering for HTML documents. It is a perfect companion to curl to fetch only specific data from an HTML page.

On OpenBSD you can install it with `pkg_add pup` and check its documentation at /usr/local/share/doc/pup/README.md

pup official project

Examples §

pup is quite easy to use once you understand the filters. Let's see a few examples to illustrate practical uses.

Fetch my blog titles list to a JSON format §

The following command will return a JSON structure with an array of data from "a" tags found within "h4" tags.

curl https://dataswamp.org/~solene/index.html | pup "h4 a json{}"

The output (only an extract here) looks like this:

[
 {
  "href": "2021-04-18-ipfs-bandwidth-mgmt.html",
  "tag": "a",
  "text": "Bandwidth management in go-IPFS"
 },
 {
  "href": "2021-04-17-ipfs-openbsd.html",
  "tag": "a",
  "text": "Introduction to IPFS"
 },
 [truncated]
 {
  "href": "2016-05-02-3.html",
  "tag": "a",
  "text": "How to add a route through a specific interface on FreeBSD 10"
 }
]

Fetch OpenBSD -current specific changes §

The page https://www.openbsd.org/faq/current.html contains specific instructions required for people using OpenBSD -current, and you may want to be notified of changes. Using pup it's easy to make a script comparing the latest data with your previous copy to see what has been appended.

curl https://www.openbsd.org/faq/current.html | pup "h3 json{}"

Output sample as JSON, perfect for further processing with a scripting language.

[
 {
  "id": "r20201107",
  "tag": "h3",
  "text": "2020/11/07 - iked.conf \u0026#34;to dynamic\u0026#34;"
 },
 {
  "id": "r20210312",
  "tag": "h3",
  "text": "2021/03/12 - IPv6 privacy addresses renamed to temporary addresses"
 },
 {
  "id": "r20210329",
  "tag": "h3",
  "text": "2021/03/29 - [packages] yubiserve replaced with yubikeyedup"
 }
]
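As a small sketch of such a change-detection script (the file paths are arbitrary here), you could keep the previous output around and diff it against a fresh one:

#!/bin/sh
# fetch the current list of -current changes as JSON
curl -s https://www.openbsd.org/faq/current.html | pup "h3 json{}" > /tmp/current_new.json

# show what changed since the last run (the first run will complain about the missing old file)
diff -u ~/.cache/current_old.json /tmp/current_new.json

# store the new version for the next run
mv /tmp/current_new.json ~/.cache/current_old.json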

I provide an RSS feed for that

Conclusion §

There are many possibilities with pup and I won't list them all. I highly recommend reading the README.md file from the project: it is the documentation and explains the syntax used for filtering.

Bandwidth management in go-IPFS

Written by Solène, on 18 April 2021.
Tags: #ipfs

Comments on Fediverse/Mastodon

Introduction §

In this article I will explain a few important parameters for the reference IPFS node server go-ipfs in order to manage the bandwidth correctly for your usage.

Configuration File §

The configuration file of go-ipfs is $HOME/.ipfs/config by default, but if IPFS_PATH is set it will be $IPFS_PATH/config

Tweaks §

There are many tweaks possible in the configuration file, but there are pros and cons for each one so I can't tell you what values you want. I will rather explain what you can change and in which situation you would want it.

Connections number §

By default, go-ipfs will keep between 600 and 900 connections to peers, and new connections will last at least 20 seconds. Having to manage that quantity of TCP sessions may totally overwhelm your router.

The HighWater value defines the maximum number of sessions you want, so this may be the most important setting here. On the other hand, the LowWater value defines the number of connections you want to keep at all times, so it will drain bandwidth if you keep it high.

I would say that if you care about your bandwidth usage, keep LowWater low, like 50, and have HighWater quite high with a short GracePeriod: this will allow go-ipfs to be quiet when unused but responsive (able to connect to many peers to find content) when you need it.
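For example, such a setup could be applied from the command line (the values here only illustrate the advice above, adjust them to your needs):

env IPFS_PATH=/var/go-ipfs ipfs config --json Swarm.ConnMgr.LowWater 50
env IPFS_PATH=/var/go-ipfs ipfs config --json Swarm.ConnMgr.HighWater 600
env IPFS_PATH=/var/go-ipfs ipfs config Swarm.ConnMgr.GracePeriod 20s

The go_ipfs service has to be restarted for the changes to be picked up.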

Documentation about Swarm.ConnMgr

DHT Routing §

IPFS uses a distributed hash table (DHT) to find peers (the common way to proceed in P2P networks), but your node can act as a client and only fetch the DHT from other peers, or be active and distribute it to other peers.

If you have a low power server (CPU) and you are limited in bandwidth, you should use the value "dhtclient" to not distribute the DHT. You can configure this in the configuration file or use --routing=dhtclient on the command line.
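In the configuration file this corresponds to the Routing.Type key, which can also be set from the command line:

env IPFS_PATH=/var/go-ipfs ipfs config Routing.Type dhtclient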

Documentation about Routing.type

Reprovider §

Strategy §

This may be the most important choice you have to make for your IPFS node. With the Reprovider.Strategy setting you can choose to be part of the IPFS network and upload data you have locally, only upload data you pinned or upload nothing.

If you want to actively contribute to the network and you have enough bandwidth, keep the default "all" value, so all the data available in your data store will be served to clients over IPFS.

If you self-host data on your IPFS node but you don't have much bandwidth, I would recommend setting this value to "pinned" so only the data pinned in your IPFS store will be available. Remember that pinned data is never removed from the store by the garbage collector and that files you add to IPFS from the command line or the web GUI are automatically pinned; pinned data is usually the data we care about and want to keep and/or distribute.

Finally, you can set it to empty and your IPFS node will never upload any data to anyone, which could be considered unfair in a peer to peer network, but under a limited quota or a high latency connection it makes sense to not upload anything.
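For example, to only announce pinned data ("all", "pinned" and the empty value are the strategies described above):

env IPFS_PATH=/var/go-ipfs ipfs config Reprovider.Strategy pinned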

Documentation about Reprovider.Strategy

Interval §

While you can choose what kind of data your node relays as part of the IPFS network, you can also choose how often your node publishes the content of the data held in its data store.

The default is 12 hours, meaning that every 12 hours your node will publish the list of everything available for upload to the other peers. If you care about bandwidth and your content doesn't change often, you can increase this value; on the other hand, you may want to publish more often if your data store changes rapidly.

If you don't want to publish your content, you can set it to "0"; you will still be able to publish manually using the IPFS command line.
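For instance, to announce the data store content once a day instead of every 12 hours:

env IPFS_PATH=/var/go-ipfs ipfs config Reprovider.Interval 24h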

Documentation about Reprovider.Interval

Gateway management §

If you want to provide your data over a public gateway, you may not want everyone to use this gateway to download IPFS content because of legal concerns, resource limits or you simply don't want that.

You can set Gateway.NoFetch to make your gateway distribute only the files available in the node's data store. It will then act as an http(s) server for your own data, but the gateway can't be used to fetch any other data. It's a convenient way to publish content over IPFS and make it available from a gateway you trust while keeping control over the data relayed.
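This is a boolean in the configuration, so it can be set like this:

env IPFS_PATH=/var/go-ipfs ipfs config --json Gateway.NoFetch true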

Documentation about Gateway.NoFetch

Conclusion §

There are many settings here for various use cases. I'm running an IPFS node on a dedicated server and another one at home, and they have very different configurations.

My home connection is limited to 900 kb/s, which makes IPFS very unfriendly to my ISP router and bandwidth usage.

Unfortunately, go-ipfs doesn't provide an easy way to set download and upload limits, that would be very useful.

Introduction to IPFS

Written by Solène, on 17 April 2021.
Tags: #openbsd #ipfs

Comments on Fediverse/Mastodon

Introduction to IPFS §

IPFS is a distributed storage network protocol that comes with a public network. Anyone can run a peer and access content from IPFS and then relay the content while it's in your cache.

Gateways are websites used to access IPFS content through http; there are several public gateways allowing you to get data from IPFS without being a peer.

Every published content has a unique CID to identify it; we usually prefix it with /ipfs/ like in /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1. The CID is unique, and if someone adds the same file from another peer, they will get the same hash as you.

If you add a whole directory to IPFS, the top directory hash will depend on the hash of its content; this means that if you want to share a directory like a blog, you would need to publish a new CID every time you change the content. As this isn't practical at all, there is an alternative making the process more dynamic.

A peer can publish data under a long name called an IPNS. The IPNS string will never change (it's tied to a private key), but you can associate a CID with it and update that value whenever you want, then tell other peers the value changed (this is called publishing). The IPNS notation looks like /ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns, and you can access IPNS content with public gateways using a different notation.

- IPNS gateway use example: https://k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns.dweb.link/

- IPFS gateway use example: https://ipfs.io/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1/

The IPFS link will ALWAYS return the same content because it's a defined hash to a specific resource. The IPNS link can be updated to have a newer CID over time, allowing people to bookmark the location and browse it for updates later.

Using a public gateway §

There are many public gateways you can use to fetch content.

Health check of public gateways, useful to pick one

You will find two kinds of gateway URLs, one like "https://$domain/" and the other like "https://$something_very_long.ipfs.$domain/". For the first one, you need to append your /ipfs/something or /ipns/something request like in the previous examples. The latter, in a web browser, only works with ipns because web browsers think the CID is a domain and will lowercase the letters, making it no longer valid. When using an ipns like this, be careful to change the .ipfs. into .ipns. in the url to tell the gateway what kind of request you are doing.

Using your own node §

First, be aware that there is no real bandwidth control mechanism and that IPFS is known to create more connections than small routers can handle. On OpenBSD it's possible to mitigate this behavior using queuing. It's also possible to use a "lowpower" profile that is less demanding on network and resources, but be aware this will degrade IPFS performance. I found that after a few hours of bootstrapping and reaching many peers, the bandwidth usage becomes less significant, but it may be an issue for DSL connections like mine.

When you create your own node, you can use its gateway or the command line client. When you request data that doesn't belong to your node, it is downloaded from known peers able to distribute the blocks, then you keep it in cache until your cache reaches the defined limit and the garbage collector comes to make some room. This means that when you get some content, you start distributing it, but nobody will use your node for content you never fetched first.

When you have data, you can "pin" it so it will never be removed from the cache, and if you pin a directory CID, its whole content will be downloaded so you have a complete mirror of it. When you add data to your node, it's automatically pinned by default.
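As an illustration (reusing the CID from the earlier example), adding and pinning content from the command line could look like this:

# add a local file: it is stored, pinned and its CID is printed
env IPFS_PATH=/var/go-ipfs ipfs add somefile.txt

# pin an existing remote directory so it gets fully mirrored locally
env IPFS_PATH=/var/go-ipfs ipfs pin add /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1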

The default ports are 4001 (the one you need to expose over the internet and potentially forwarding if you are behind a NAT), the Web GUI is available at http://localhost:5001/ and the gateway is available at http://localhost:8080/

Installing the node on OpenBSD §

To make it brief, there are instructions in the provided pkg-readme, but I will give a few pieces of advice (that I may add to the pkg-readme later).

pkg_add go-ipfs
su -l -s /bin/sh _go-ipfs -c "IPFS_PATH=/var/go-ipfs /usr/local/bin/ipfs init"
rcctl enable go_ipfs

# recommended settings
rcctl set go_ipfs flags --routing=dhtclient --enable-namesys-pubsub

cat <<EOF >> /etc/login.conf
go_ipfs:\
	:openfiles=2048:\
	:tc=daemon:
EOF
rcctl start go_ipfs

You can change the profile to lowpower with "env IPFS_PATH=/var/go-ipfs/ ipfs config profile apply lowpower", you can also list profiles with the ipfs command.

I recommend using queues in PF to limit the bandwidth usage; for my DSL connection I've set a maximum of 450K and it doesn't disrupt my network anymore. I explained how to proceed with queuing and bandwidth limitations in a previous article.
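As a rough sketch of what the PF queuing could look like in /etc/pf.conf (the interface name and bandwidth values are assumptions, adapt them to your connection and see the previous article for details):

# root queue on the external interface
queue main on em0 bandwidth 900K
# default queue for regular traffic
queue std parent main bandwidth 450K default
# queue capping IPFS traffic at 450K
queue ipfs parent main bandwidth 450K max 450K
# assign outgoing IPFS connections (port 4001) to the ipfs queue
match out on em0 proto tcp to port 4001 set queue ipfs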

Installing the node on NixOS §

Installing IPFS is easy on NixOS thanks to its declarative configuration. In this example, the system has a local IPv4 of 192.168.1.150 and a public IP of 136.214.64.44 (fake IP here). IPFS is started with a 50GB maximum for the cache. The gateway will be available on the local network at http://192.168.1.150:8080/.

services.ipfs.enable = true;
services.ipfs.enableGC = true;
services.ipfs.gatewayAddress = "/ip4/192.168.1.150/tcp/8080";
services.ipfs.extraFlags = [ "--enable-namesys-pubsub" ];
services.ipfs.extraConfig = {
    Datastore = { StorageMax = "50GB"; };
    Routing = { Type = "dhtclient"; };
};
services.ipfs.swarmAddress = [
        "/ip4/0.0.0.0/tcp/4001"
        "/ip4/136.214.64.44/tcp/4001"
        "/ip4/136.214.64.44/udp/4001/quic"
        "/ip4/0.0.0.0/udp/4001/quic"
];

Testing your gateway §

Let's say your gateway is http://localhost:8080/ to keep the upcoming examples simple. If you want to request the data /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1, you just have to append it to your gateway URL, like this: http://localhost:8080/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1 and you will get access to your file.

When using ipns, it's quite the same, for /ipns/blog.perso.pw/ you can request http://localhost:8080/ipns/blog.perso.pw/ and then you can browse my blog.

OpenBSD experiment §

To make all of this really useful, I started an experiment: distributing OpenBSD amd64 -current and 6.9, both with sets and packages, over IPFS. Basically, I have a server making an rsync copy of both sets once a day; it adds them to the local IPFS node, gets the CID of the top directory and then publishes the CID under an IPNS. Note that I have to create an index.html file in the package sets because IPFS doesn't handle directory listing very well.
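A minimal sketch of such a daily publication job could look like this (the rsync mirror URL and local paths are placeholders, not my exact setup):

#!/bin/sh
# mirror the sets and packages locally
rsync -a rsync://mirror.example.org/OpenBSD/ /var/OpenBSD/

# add the whole tree to IPFS and keep only the top directory CID
CID=$(env IPFS_PATH=/var/go-ipfs ipfs add -Q -r /var/OpenBSD)

# publish that CID under the node's IPNS name
env IPFS_PATH=/var/go-ipfs ipfs name publish "/ipfs/$CID"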

The following examples will have to be changed if you don't use a local gateway, replace localhost:8080 by your favorite IPFS gateway.

You can upgrade your packages with this command:

env PKG_PATH=http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/snapshots/packages/amd64/ pkg_add -Dsnap -u

You can switch to latest snapshot:

sysupgrade -s http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/

While it may be slow to update at first, if you have many systems, running a local gateway used by all your computers will give you a cache of downloaded packages, making the whole process faster.

I made a "versions.txt" file in the top directory of the repository; it contains the date and CID of every publication. This can be used to fetch a package from an older set if it's still available on the network (because I don't plan to keep all sets, I have limited disk space).

You can simply use the url http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/ in the file /etc/installurl to globally use IPFS for pkg_add or sysupgrade without specifying the url every time.

Using DNS §

It's possible to use a DNS entry to associate an IPFS resource to a domain name by using dnslink. The entry would look like:

_dnslink.blog	IN	TXT	"dnslink=/ipfs/somehashhere"

Using an /ipfs/ syntax will be faster to resolve for IPFS nodes but you will need to update your DNS every time you update your content over IPFS.

To avoid manipulating your DNS every so often (you could use an API to automate this by the way), you can use an /ipns/ record.

_dnslink.blog	IN	TXT	"dnslink=/ipns/something"

This way, I made my blog available under the hostname blog.perso.pw, but it has no A or CNAME record so it works only in an IPFS context (like a web browser with the IPFS companion extension). Using a public gateway, the url becomes https://ipfs.io/ipns/blog.perso.pw/ and it will download the last CID associated with blog.perso.pw.
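You can check that the dnslink record is in place with dig (available in the OpenBSD base system):

dig +short TXT _dnslink.blog.perso.pw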

Conclusion §

IPFS is a wonderful piece of technology but in practice it's quite slow for DSL users and may not work well if you don't have a local cache. I really love it though, so I will continue running the OpenBSD experiment.

Please write to me if you have any feedback or if you use my OpenBSD IPFS repository. I would be interested to know about people's experiences.

Interesting IPFS resources §

dweb-primer tutorials for IPFS (very well written)

Official IPFS documentation

IPFS companion for Firefox and Chrom·ium·e

Pinata.cloud is offering IPFS hosting (up to 1 GB for free) for pinned content

Wikipedia over IPFS

OpenBSD website/faq over IPFS (maintained by solene@)

Port of the week: musikcube

Written by Solène, on 15 April 2021.
Tags: #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

Today I will share about the console oriented audio player "musikcube" because I really like it. It has many features while being easy to use for a console player. The feature that really sold it to me is the library management and the rating feature allowing me to rate my files and filter by score. The library is nice to browse, it's easy to filter by pattern and the whole UI is easy to use.

Unfortunately it doesn't come with a man page, so you can check the key binding by typing "?" in it or look at the key bindings menu in the main menu.

Official user guide

Official project website

The package is not yet available on OpenBSD but should arrive after 6.9 release (so it will be in 7.0 release).

Picture of Musikcube playing music from a directory mode display

A terminal client §

Musikcube is a console client, meaning you start it in a terminal. You can easily switch between menus with Tab, Shift+Tab, Enter and keyboard arrows but you should also check the key bindings for full controls. Note that the mouse is supported!

Once you have told musikcube where to look for files, you will have access to your library; using numbers from 1 to 6 you can choose how you want the library filtered, but 6 will ask which criteria to use. Using "directory" will display the file hierarchy, which is sometimes nicer to use for badly tagged music files.

You can access the whole track list using "t" and then filter by pattern or sort the list using "Ctrl + s".

A server §

When run as musikcube, a daemon mode is started to accept incoming connections on TCP ports 7905 and 7906 for remote API control and transcoding/streaming. This behavior can be disabled in the main menu under the "server setup" choice.

Running the musikcubed binary instead, no UI is started, only a background daemon listening on those ports.

Android companion app §

Musikcube has a companion app for Android named musikdroid, but it is only available for download as a file on the github project.

The app has multiple features: it can control the musikcube server playing music on the remote system, but you can also use it to stream music to your Android device. What plays on the musikcube server and on the Android device can be separate. Even better, songs played on the Android device will be automatically stored for offline use (you can tune the cache) and can even be transcoded to smaller files for the device.

Look for a .apk file in the assets list of the releases

Easy text transmission from computer to smartphone

Written by Solène, on 25 March 2021.
Tags: #opensource

Comments on Fediverse/Mastodon

Introduction §

Today I will share with you a simple way I found to transmit text from my computer to my phone. I often have to do it, to type a password, enter an url, copy/paste a message or whatever reasons.

Using QR codes §

The best way to get text from a computer to a smartphone (that I am aware of) is scanning a QR code using the camera. By using the commands qrencode (I already wrote about this one), xclip and feh (a picture viewer), it is possible to generate a QR code on the fly and display it on the screen.

It is as simple as running the following command, from a menu or a key binding:

xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z - 

Using this command, xclip will give the clipboard content to qrencode which will create a PNG file on stdout, and then feh will display it in a 600 by 600 window; no temporary file is involved here.

Once the picture is displayed on the screen, you can use a scanner program on your phone to get the content. I found "QR & Barcode Scanner" to be really light, fast and usable with its history; it's available on F-Droid.

QR & Barcode Scanner on F-Droid

To compose a long text on your computer and share it with the phone, you can send the text to xclip and then generate the QR code.
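For example, a text can be put into the selection from the terminal first, then turned into a QR code with the command shown above:

printf 'some long text to send to the phone' | xclip
xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z -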

Going further §

When it comes to sharing data between my phone and my computer, I love "primitive ftpd", which is an SFTP/FTP server for Android; it works out of the box and allows secure transfers over Wifi (use SFTP please!).

primitive ftpd on F-Droid

For simple transfers, I use "Share to Computer" that will share a file or a group of files as a zip on a temporary http server, it is then easy to connect to it to save the file.

Share to Computer on F-Droid

For sending SMS through my phone but from my computer, I use the program KDE Connect (it has to be installed on both the phone and the computer). I wanted to write about it for a long time, but it's not easy to explain how to get it to work nor to describe its usage. It allows me to receive phone notifications on my computer and also send SMS. I have simple aliases in my shell like "mom-sms hello are you ?" to ease my use of SMS. When possible, don't use SMS, it's not secure. The program does a lot more than sending SMS, like using the smartphone as a remote touchpad, as one example.

KDE Connect on F-Droid

Opensource from an author point of view

Written by Solène, on 23 March 2021.
Tags: #opensource

Comments on Fediverse/Mastodon

Hi, today's article will be a bit different from what you are used to. I am writing about my experience as an open source author and "project manager". I recently created a project that, while being extremely small, has seen some people getting involved at various levels. I didn't know what it was like to be in this position.

Having to deal with multiple people contributing to a project I started for myself, on one architecture, with a limited set of features, is surprisingly hard. I don't say it's boring and that no one should ever do it, but I think I wasn't really prepared to handle this.

I did my best to integrate people's wishes while keeping the helm of the project in the right direction, but I had to ask myself many questions.

Many questions §

Should I care about what other people need? I could say no to everything proposed if I see no benefit for my use case. I chose to accept some changes that I didn't use because they made sense in some contexts. But I have to be really careful not to accept everything if I want to keep the program sane.

Should I care about other platforms I don't use? Someone proposed adding some code to support Linux targets, which I don't use, meaning more code I can't test. For the sake of compatibility and to avoid extra work for packagers, I made a very simple solution to fix that, but if someone wanted to port my program to Windows or to a platform that would require many changes, I don't know how I would react.

Too much changing code. My program changed A LOT since my initial commits, and now a git blame mostly shows no lines from me. This doesn't mean I didn't review the changes made by contributors, but I am not as comfortable now as I was initially with my own code. That doesn't mean the new code is wrong, but it doesn't hold my logic in it. I think it's the biggest deal in this situation: I, as the project manager, must say what can go in, what can't, and when. It's fine to receive contributions, but they shouldn't add complexity or weird algorithms.

Accepting changes §

I am not an expert programmer, I don't write code often, and when I do, it's for my own benefit. Opening our work to others implies making it accessible to outsiders, accepting changes and explaining choices.

Many times I reviewed submitted code and replied that it wasn't fine; while it compiled and applied correctly, it wasn't the right way to do it: please rework this to make it better or discard it, but it won't get into the repository. It's not always easy, people sometimes submit code I don't understand, and I still have to review it thoroughly because I can't accept everything sent.

In some way, once people get involved in my projects, those projects get denatured because they receive thoughts from others: their ideas, their logic, their needs. It's wonderful and scary at the same time. When I publish code, I never expect it to be useful to someone, and even less that I could receive new features by email from strangers.

Being prepared for this is important when you start a project and make it open source. I could refuse everything, but then I would cut myself off from a potential community around my own code, and that would be a shame.

Responsibility §

This part is not related to my projects (or at least not to this situation), but it's a debate I often think about when reading dramas in open source: is an open source author responsible toward the users?

One way to reply is that if you publish your content online and accept contributions, it means you care about the users (who then contribute back), but where do you draw the limit of what is acceptable? If someone writes an awesome program for themselves and gathers a community around it, and then chooses to make breaking changes or remove important features, what then? The users are free to fork, the author is free to do whatever they want.

There is no clear responsibility binding contributors and end users. I hope that most of the time contributors think about the end users, but with different philosophies in play we can sometimes end up in a dilemma between the two groups.

Epilogue §

I am very happy to publish open source code and to have contributors; coordinating people, goals and features is not something I expected :)

Please, be cautious with this writing, I only had to face this situation with a couple of contributors, I can't imagine how complicated it can become at a bigger scale!

Securely share a secret using Shamir's secret sharing

Written by Solène, on 21 March 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

I will present you the program ssss (for Shamir's Secret Sharing Scheme), a cryptographic program to split a secret into n parts, requiring at least t parts to recover it (with t <= n).

Shamir Secret Sharing (method is mathematically proven to be secure)

Use case §

The project website lists a few real life use cases and I like them, but I will share another one.

ssss project website

I used to run a community, but there was no one in charge apart from me, which made me a single point of failure. I decided to make the encrypted backup available to a few reasonably trustable community members, and I gave each of them a secret. There were four members and I made the backup password recoverable only if the four members agreed to share their secrets. For privacy reasons, I didn't want any of these people to be able to lurk into the backup on their own; at least, if something had happened to me, they could recover the database only if the four persons agreed on it.

How to use §

ssss-split is easy to use, you can only share text with it. So you can use a very long passphrase to encrypt files and share this passphrase into many secrets that you share.

You can install it on OpenBSD using pkg_add ssss.

In the following examples, I will create a simple passphrase and then use the generated secrets to get the original passphrase back.

$ ssss-split -t 3 -n 3
Generating shares using a (3,3) scheme with dynamic security level.
Enter the secret, at most 128 ASCII characters: [Note=>hidden input where I typed "this is a very very long password"] Using a 264 bit security level.
1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353

When you want to recover a secret, you will have to run ssss-combine and tell it how many secrets you have; they can be provided in any order.

$ ssss-combine -t 3
Enter 3 shares separated by newlines:
Share [1/3]: 2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
Share [2/3]: 3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353
Share [3/3]: 1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
Resulting secret: this is a very very long password

Tips §

If you want to easily store a secret or give it to a non-IT person (or put it in a vault), you can create a QR code and print the picture. QR codes have redundancy, so if the paper is damaged you can still recover it; it's quite big on paper so if it fades you may not lose data, and it also checks integrity.
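For example, using qrencode (presented in another article on this blog), one of the generated shares can be turned into a printable picture:

echo "1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65" | qrencode -o share1.png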

Conclusion §

ssss is a wonderful program to share a secret among a few people or put a few secrets here and there for a recovery situation. The program can receive the passphrase on its standard input allowing it to be scripted.

Interesting fact: if you run ssss-split multiple times on the same text, you always get different shares, so given a share, no brute force can be used to find which input produced it.

How to split a file into small parts

Written by Solène, on 21 March 2021.
Tags: #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

Today I will present the userland program "split" that is used to split a single file into smaller files.

OpenBSD split(1) manual page

Use case §

Split will create new, smaller files from a single file. The original file can be recovered by running the command cat on all the small files (in the correct order) to recreate it.

There are several use cases for this:

- store a single file (like a backup) on multiple media (floppies, 700MB CDs, DVDs, etc.)

- parallelize a file process, for example: split a huge log file into small parts to run analysis on each part

- distribute a file across a few people (I have no idea about the use but I like the idea)

Usage §

Its usage is very simple: run split on a file or feed its standard input, and it will create 1000-line files by default. -b can be used to give a size in kB or MB for the new files, or use -l to change the default of 1000 lines. Split can also create a new file each time a line matches a regex given with -p.

Here is a simple example splitting a file into 1300kB parts and then reassembling the file from the parts, using sha256 to compare the checksums of the original and reconstructed files.

solene@kongroo ~/V/pmenu> split -b 1300k pmenu.mp4
solene@kongroo ~/V/pmenu> ls
pmenu.mp4  xab        xad        xaf        xah        xaj        xal        xan
xaa        xac        xae        xag        xai        xak        xam
solene@kongroo ~/V/pmenu> cat x* > concat.mp4
solene@kongroo ~/V/pmenu> sha256 pmenu.mp4 concat.mp4 
SHA256 (pmenu.mp4)  = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
SHA256 (concat.mp4) = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
solene@kongroo ~/V/pmenu> ls -l x*
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaa
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xab
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xac
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xad
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xae
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaf
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xag
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xah
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xai
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaj
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xak
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xal
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xam
-rw-r--r--  1 solene  wheel    810887 Mar 21 16:50 xan

Conclusion §

If you ever need to split files into small parts, think about the command split.

For more advanced splitting requirements, the program csplit can be used, I won't cover it here but I recommend reading the manual page for its usage.

csplit manual page

Port of the week: diffoscope

Written by Solène, on 20 March 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to Diffoscope, a command line tool to compare two directories. I find it very useful when looking for changes between two extracted tarballs; I use it to compare two versions of a program to see what changed.

Diffoscope project website

How to install §

On OpenBSD you can use "pkg_add diffoscope", on other systems you may have a package for it, but it could be installed via pip too.

Usage §

It is really easy to use: give it the two directories you want to compare as parameters, and diffoscope will show the uid, gid, permissions and modification/creation/access time changes between the two directories.

The output on a simple example looks like the following:

--- t/
+++ a/
│   --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello
│ ├── stat {}
│ │ @@ -1 +1 @@
│ │ -1043 492483 -rw-r--r-- 1 solene wheel 1973218 6 "Mar 20 18:31:08 2021" "Mar 20 18:31:14 2021" "Mar 20 18:31:14 2021" 16384 4 0 t/foo
│ │ +1043 77762 -rw-r--r-- 1 solene wheel 314338 10 "Mar 20 18:31:08 2021" "Mar 20 18:31:18 2021" "Mar 20 18:31:18 2021" 16384 4 0 a/foo

Diffoscope has many flags; if you want to compare only the directories' content, you have to use "--exclude-directory-metadata yes".

Using the same example as previously with --exclude-directory-metadata yes, it looks like:

--- t/
+++ a/
│   --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello

Port of the week: pmenu

Written by Solène, on 12 March 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

This Port of the week will introduce you to a pie menu for X11, available on OpenBSD since 6.9 (not released yet). A pie menu is a circle with items spread around it, and an item can open another circle with more items in it. I find it very effective for me because I am more comfortable with information organized spatially (my memory is based on spatialization). I think pmenu was designed for a tablet input device using a pen to trigger pmenu.

Pmenu github page

Installation §

On OpenBSD, a pkg_add pmenu is enough, but on other systems you should be able to compile it out of the box with a C compiler and the X headers.

Configuration §

This part is a bit tricky because the configuration is not obvious. Pmenu takes its configuration on standard input and its output must then be piped to a shell.

My configuration file looks like this:

#!/bin/sh

cat <<ENDOFFILE | pmenu | sh &
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/utilities-terminal.png	sakura
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/applets-screenshooter.png	screen_up.sh
Apps
	IMG:/usr/local/share/icons/hicolor/48x48/apps/gimp.png	gimp
	IMG:/home/solene/dev/pmenu/claws-mail.png	claws-mail
	IMG:/usr/local/share/pixmaps/firefox.png	firefox
	IMG:/usr/local/share/icons/hicolor/256x256/apps/keepassxc.png	keepassxc
	IMG:/usr/local/share/icons/hicolor/48x48/apps/chrome.png	chrome
	IMG:/usr/local/share/icons/hicolor/128x128/apps/rclone-browser.png	rclone-browser
Games
	IMG:/home/jeux/slay_the_spire/sts.png	cd /home/jeux/slay_the_spire/ && libgdx-run
	IMG:/home/jeux/Delver/unjar/a/Delver-Logo.png	cd /home/jeux/Delver/unjar/ && /usr/local/jdk-1.8.0/bin/java -Dsun.java2d.dpiaware=true com.interrupt.dungeoneer.DesktopStarter
	IMG:/home/jeux/Dead_Cells/deadcells.png	cd /home/jeux/Dead_Cells/ && hl hlboot.dat
	IMG:/home/jeux/brutal_doom/Doom-The-Ultimate-1-icon.png	cd /home/jeux/doom2/ && gzdoom /home/jeux/brutal_doom/bd21RC4.pk3
Volume
	0%	sndioctl output.level=0
	10%	sndioctl output.level=0.1
	20%	sndioctl output.level=0.2
	30%	sndioctl output.level=0.3
	40%	sndioctl output.level=0.4
ENDOFFILE

The configuration supports levels, like "Apps" or "Games" in this example, which provide a second level of shortcuts. Text can be used like in Volume, but you can also use images like in the other categories. Every blank appearing in the configuration is a tab.

The pmenu itself can be customized by using X attributes, you can learn more about this on the official project page.

Video §

I made a short video to show how it looks with the configuration shown here.

Note that pmenu is entirely browseable with the keyboard by using tab / enter / escape to switch to next / validate / exit.

Video demonstrating pmenu in action

Easy spamAssassin with OpenSMTPD

Written by Solène, on 10 March 2021.
Tags: #openbsd #mail

Comments on Fediverse/Mastodon

Introduction §

Today I will explain how to very easily set up the anti-spam SpamAssassin and make it work with the OpenSMTPD mail server (the OpenBSD default mail server). I will assume you are already familiar with mail servers.

Installation §

We will need to install two packages: opensmtpd-filter-spamassassin and p5-Mail-SpamAssassin. The first one is a "filter" for OpenSMTPD (this has a special meaning in the smtpd context); it will run spamassassin on incoming emails. The latter is the spamassassin daemon itself.

Filter §

As explained in the pkg-readme file from the filter package, /usr/local/share/doc/pkg-readmes/opensmtpd-filter-spamassassin, a few changes must be made to the smtpd.conf file: mostly a new line to define the filter and adding "filter "spamassassin"" to lines starting with "listen".
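As a rough sketch of what this looks like in /etc/smtpd.conf (the filter executable path and the interface group are assumptions here, refer to the pkg-readme for the exact lines):

# define the filter
filter "spamassassin" proc-exec "/usr/local/libexec/smtpd/filter-spamassassin"

# append the filter to your existing listen lines
listen on egress filter "spamassassin"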

Website of the filter author who made other filters

SpamAssassin §

SpamAssassin works perfectly fine out of the box; "rcctl enable spamassassin" and "rcctl start spamassassin" are enough to make it work.

Official SpamAssassin project website

Usage §

It should really work out of the box, but you can teach SpamAssassin what good mail is (called "ham") and what spam is by running the command "sa-learn --ham" or "sa-learn --spam" on directories containing that kind of mail; this will make spamassassin more efficient at filtering by content. Be careful, this command should be run as the same user as the SpamAssassin daemon.

In /var/log/maillog, spamassassin will give information about scoring; above a score of 5.0 (the default), a mail is rejected. For legitimate mails, headers are added by spamassassin.

Learning §

I use a crontab to run sa-learn once a day on my "Archives" directory holding all my good mails and on the "Junk" directory which holds spam.

0 2 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec sa-learn --spam {} +
5 2 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec sa-learn --ham  {} +

Extra configuration §

SpamAssassin is quite slow but can be sped up by using redis (an in-memory key/value database) for storing the tokens that help analyzing the content of emails. With redis, you no longer have to care about which user runs sa-learn.

You can install and run redis by using "pkg_add redis", "rcctl enable redis" and "rcctl start redis"; make sure that port TCP/6379 is blocked from the outside. You can add authentication to your redis server if you feel it's necessary. I only have one user on my email server and it's me.

You then have to add some content to /etc/mail/spamassassin/local.cf , you may want to adapt to your redis configuration if you changed something.

bayes_store_module  Mail::SpamAssassin::BayesStore::Redis
bayes_sql_dsn       server=127.0.0.1:6379;database=4
bayes_token_ttl 300d
bayes_seen_ttl   8d
bayes_auto_expire 1

Configure a Bayes backend (like redis or SQL)

Conclusion §

Restart spamassassin after this change and enjoy. SpamAssassin has many options, I only shared the simplest way to set it up with opensmtpd.

Implement a «Command not found» handler in OpenBSD

Written by Solène, on 09 March 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

On many Linux systems, there is a special program run by the shell (configured by default) that will tell you which package provides a command you tried to run that is not available in $PATH. Let's do the same for OpenBSD!

Prerequisites §

We will need to install the package pkglocate to find binaries.

# pkg_add pkglocate

We will also need a file /usr/local/bin/command-not-found executable with this content:

#!/bin/sh

CMD="$1"

RESULT=$(pkglocate */bin/${CMD} */sbin/${CMD} | cut -d ':' -f 1)

if [ -n "$RESULT" ]
then
    echo "The following package(s) contain program ${CMD}"
    for result in $RESULT
    do
        echo "    - $result"
    done
else
    echo "pkglocate didn't find a package providing program ${CMD}"
fi

Configuration §

Now, we need to configure the shell to run this command when it detects an error corresponding to an unknown command. This is possible with bash, zsh or fish at least.

Bash configuration §

Let's go with bash, add this to your bash configuration file

command_not_found_handle()
{
    /usr/local/bin/command-not-found "$1"
}

Fish configuration §

function fish_command_not_found
    /usr/local/bin/command-not-found $argv[1]
end

ZSH configuration §

function command_not_found_handler()
{
    /usr/local/bin/command-not-found "$1"
}

Trying it §

Now that you have configured your shell correctly, if you run a command that isn't available in your PATH, you will either get a list of packages providing the command or a message saying the command can't be found in any package (unlucky).

This is a successful output that found the program we were trying to run.

$ pup
The following package(s) contain program pup
    - pup-0.4.0p0

This is a result showing that no package found a program named "steam".

$ steam
pkglocate didn't find a package providing program steam

Top 12 best opensource games available on OpenBSD

Written by Solène, on 07 March 2021.
Tags: #openbsd #gaming

Comments on Fediverse/Mastodon

Introduction §

This article features the 12 best games (in my opinion) in terms of quality and fun available in OpenBSD packages. The list only contains open source games that you can install out of the box. This means that game engines requiring proprietary (or paid) game assets are not part of this list.

Tales of Maj'Eyal §

Tome4 is a rogue-like game with many classes, many races and a lot of areas to explore. There are fun pieces of lore to find and read if it's your thing, and you have to play it many times to unlock everything. Note that while the game is open source, there are paid extensions requiring an online account on the official website; this is not mandatory to play or finish the game.

# pkg_add tome4
$ tome4

Tales of Maj'Eyal official website

Tales of Maj'Eyal screenshot

OpenTTD §

This famous game is a free reimplementation of the Transport Tycoon game. Build roads and rails, make huge train networks with signals, transport materials from extraction sites to industries and then deliver goods to cities to make them grow. There is a huge community and many mods, and the game can be played in multiplayer. Also available on Android.

# pkg_add openttd
$ openttd

OpenTTD official website

[Peertube video] OpenTTD

OpenTTD screenshot

The Battle for Wesnoth §

Wesnoth is a turn based strategy game played on hexagons. There are many races with their own units. The game features a full set of campaigns for playing solo but also includes multiplayer. Also available on Android.

# pkg_add wesnoth
$ wesnoth

The Battle for Wesnoth official website

Wesnoth screenshot

Endless Sky §

This game is about space exploration; you are the captain of a ship and you can take missions, enhance your ship, trade goods across the galaxy or fight enemies. There is a learning curve to enjoy it because it's quite hard to understand at first.

# pkg_add endless-sky
$ endless-sky

Endless Sky official website

Endless sky screenshot

OpenRA §

Open Red Alert, the 100% free reimplementation of the engine AND assets of Red Alert, Command and Conquer and Dune. You can play all these games with OpenRA, including multiplayer. Note that there are no campaigns: you can play skirmish alone with bots or in multiplayer. Campaigns (and cinematics) can be played using the original game files (from the OpenRA launcher); as the games were published as freeware a few years ago, one can find them for free and legally.

# pkg_add openra
$ openra
wait for instructions to download the assets of the game you want to play

OpenRA official website

[Peertube video] Red Alert

Red Alert screenshot

Cataclysm: Dark Days Ahead §

Cataclysm DDA is a game in which you awake in a zombie apocalypse and have to survive. The game is extremely complete and allows many actions/combinations, like driving vehicles or disassembling electronics to build your own devices, and many things I haven't tried yet. The game is turn based and 2D from the top; I highly recommend reading the manual and the how-to because the game is hard. You can also create your character when you start a game, which will totally change the game experience because of your character's attributes and knowledge.

# pkg_add cataclysm-dda
$ cataclysm-dda

Cataclysm: Dark Days Ahead official website

Cataclysm DDA screenshot

Taisei §

Taisei is a bullet hell game in the Touhou universe. Very well done, extremely fun, with multiple characters to play, each with its own alternative mechanic.

# pkg_add taisei
$ taisei

Taisei official website

[Peertube video] Taisei

Taisei screenshot

The Legend of Zelda: Return of the Hylian SE §

There is a game engine named Solarus dedicated to writing Zelda-like games, and Zelda RotH is a game based on it. Nothing special to say, it's a 2D Zelda game, very well done, with a new adventure.

# pkg_add zelda_roth_se
$ zelda_roth_se

Zelda RotH official website

ROTH screenshot

Shapez.io §

This game is about building factories from shapes and colors in order to deliver what you are asked to produce in the most efficient manner. It is addictive and easy to understand thanks to the tutorial when you start the game.

# pkg_add shapezio
$ /usr/local/bin/electron /usr/local/share/shapez.io/index.html

Shapez.io official website

Shapez.io screenshot

OpenArena §

OpenArena is a Quake 3 reimplementation, including assets. It's like Quake 3 but it's not Quake 3 :)

# pkg_add openarena
$ openarena

OpenArena official website

Openarena screenshot

Xonotic §

This is a fast paced arena FPS game with beautiful graphics, many weapons with two fire modes each and many game modes. It reminds me a lot of Unreal Tournament 2003.

# pkg_add xonotic
$ xonotic

Xonotic official website

Xonotic screenshot

Hyperrogue §

This game is a rogue-like (every run is different from the last one) in which you move from hexagon to hexagon to get points; each biome has its own characteristics, like a sand biome in which you have to gather spice and escape sand worms :-) . The game is easy to play, turn by turn, and has unusual graphics because of the non-euclidean nature of its world. I recommend reading the game manual because the first time I played it I really disliked it, having missed most of the game mechanics... Also available on Android!

Hyperrogue official website

Hyperrogue screenshot

And many others §

Here is a list of games I didn't include but that are also worth playing: 0ad, Xmoto, Freedoom, The Dark Mod, Freedink, crack-attack, witchblast, flare, vegastrike and many others.

List of games available on OpenBSD

Port of the week: checkrestart

Written by Solène, on 02 March 2021.
Tags: #openbsd #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

This article features the very useful OpenBSD-specific program "checkrestart". The purpose of checkrestart is to display the programs, and their corresponding PIDs, whose binaries don't exist anymore.

Why would their binary be absent? The obvious case is that the program was removed, but what checkrestart is really good at is spotting when you upgrade a package with running binaries: the old binary is deleted and the new binary installed. In that case, you have to stop all the running old binaries and restart them. Hence the name "checkrestart".

Installation §

Installing it is as simple as running pkg_add checkrestart

Usage §

This is simple too, when you run checkrestart, you will have a list of PID numbers with the binary name.

For example, on my system, checkrestart tells me which updated programs I should restart to run the new binaries.

69575	lagrange
16033	lagrange
9664	lagrange
77211	dhcpleased
6134	dhcpleased
21860	dhcpleased

Real world usage §

If you run OpenBSD -stable, you will want to use checkrestart after running pkg_add -u. After a package update, most often for daemons, you will have to restart the related services.

On my server, in my daily script updating packages and running syspatch, I use it to automatically restart some services.

checkrestart | grep php && rcctl restart php-fpm
checkrestart | grep postgres && rcctl restart postgresql
checkrestart | grep nginx && rcctl restart nginx

Other Operating Systems §

I've been told that checkrestart is also available on FreeBSD as a package! The output may differ but the use is the same.

On Linux, a similar tool exists under the name "needrestart", at least on Debian and Gentoo.

Port of the week: shapez.io - a libre factory gaming

Written by Solène, on 26 February 2021.
Tags: #openbsd #openbsd70 #gaming #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

I would like to introduce you to a very nice game I discovered a few months ago, its name is Shapez.io and it is a "factory" game, a genre popularized by the famous game Factorio. In this game you will have to extract shapes and colors, rework the shapes, mix colors, and combine the whole thing to produce the requested pieces.

The game §

The gameplay is very cool, the early game is an introduction to the game mechanics: you can extract shapes, cut them, rotate pieces, merge conveyor belts into one, paint shapes etc... and logic circuits!

In this kind of game, you will have to learn how to make efficient factories and mostly "tile-able" installations. A tile-able setup means that if you copy a setup and paste it next to itself, the result is bigger and still functional, meaning you can extend it to infinity (except that the input conveyors will starve at some point).

It can be quite addictive to improve your setups over and over. This game is non violent and doesn't require any reflexes, but you need to think. You can't lose, it's between a puzzle and a management game.

Compact tile-able painting setup (may spoil if you want to learn yourself)

Where to get it §

On OpenBSD, since version 6.9 (not released yet as I publish this), you can install the package shapezio and find a launcher in your desktop environment's Game menu.
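In practice, once running 6.9 or newer, that is a single command as root:

# pkg_add shapezio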

I also compiled a web version that you can play in your web browser (I discourage using Firefox due to performance issues) without installing it; it's legal because the game is open source :)

Play shapez.io in the web browser

The game is also sold on Steam, pre-compiled and ready to run; if you prefer that, it's also a nice way to support the developer.

shapez.io on Steam

More content §

Official website

Youtube video of "Real civil engineer" explaining the game

Nginx as a TCP/UDP relay

Written by Solène, on 24 February 2021.
Tags: #openbsd #nginx #network

Comments on Fediverse/Mastodon

Introduction §

In this tutorial I will explain how to use Nginx as a TCP or UDP relay as an alternative to Haproxy or Relayd. This means Nginx will be able to accept requests on a port (TCP/UDP) and relay them to another backend without knowing anything about the content. It can also negotiate a TLS session with the client and relay to a non-TLS backend. In this example I will explain how to configure Nginx to accept TLS requests and transmit them to my Gemini server Vger, the Gemini protocol having TLS as a requirement.

I will explain how to install and configure Nginx and how to parse logs to obtain useful information. I will use an OpenBSD system for the examples.

It is important to understand that in this context Nginx is not doing anything related to HTTP.

Installation §

On OpenBSD we need the package nginx-stream; if you are unsure about which package is required on your system, search which package provides the file ngx_stream_module.so. To enable Nginx at boot, you can use rcctl enable nginx.
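On OpenBSD that boils down to two commands as root:

# pkg_add nginx-stream
# rcctl enable nginx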

Nginx stream module core documentation

Nginx stream module log documentation

Configuration §

The default configuration file for nginx is /etc/nginx/nginx.conf; we want it to listen on port 1965 and relay to 127.0.0.1:11965.

worker_processes  1;

load_module modules/ngx_stream_module.so;

events {
   worker_connections 5;
}

stream {
    log_format basic '$remote_addr $upstream_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time';

    access_log logs/nginx-access.log basic;

    upstream backend {
        hash $remote_addr consistent;
        server 127.0.0.1:11965;
    }
    server {
        listen 1965 ssl;
        ssl_certificate /etc/ssl/perso.pw:1965.crt;
        ssl_certificate_key /etc/ssl/private/perso.pw:1965.key;
        proxy_pass backend;
    }
}

In the previous configuration file, the backend defines the destination; multiple servers could be defined, with weights and timeouts, but there is only one in this example.

The server block tells on which port Nginx should listen and whether it has to handle TLS (which is named ssl for historical reasons); the usual TLS configuration can be used here. Then, for a request, we have to tell Nginx to which backend it has to relay the connections.

The configuration file defines a custom log format that is useful for TLS connections: it includes the remote host, backend destination, connection status, bytes transferred and session duration.

Log parsing §

Using awk to calculate time performance §

I wrote a quite long shell command parsing the log defined earlier that displays the number of requests and the median/min/max session times.

$ awk '{ print $NF }' /var/www/logs/nginx-access.log | sort -n |  awk '{ data[NR] = $1 } END { print "Total: "NR" Median:"data[int(NR/2)]" Min:"data[2]" Max:"data[NR] }'
Total: 566 Median:0.212 Min:0.000 Max:600.487

Find bad clients using awk §

Sometimes the logs contain clients that obtained a status 500, meaning the TLS connection wasn't established correctly. It may be some scanner that doesn't even try a TLS connection; if you want to get statistics about those, and see whether it would be worth blocking them when they make too many attempts, awk makes it easy to get the list.

awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log

Using goaccess for real time log visualization §

It is also possible to use the program Goaccess to view logs in real time with a lot of information, it is really an awesome program.

goaccess --date-format="%d/%b/%Y" \
         --time-format="%H:%M:%S" \
         --log-format="%h %r [%d:%t %^] TCP %s %^ %b %L" /var/www/logs/nginx-access.log

Goaccess official website

Conclusion §

I was using relayd before trying Nginx with the stream module; while relayd worked fine, it doesn't provide any of the logs Nginx offers. I am really happy with this use of Nginx because it is a very versatile program that has shown over time to be more than an HTTP server. For a minimal setup I would still recommend a lighter daemon such as relayd.

Port of the week: catgirl irc client

Written by Solène, on 22 February 2021.
Tags: #openbsd70 #openbsd #irc #catgirl #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

In this Port of the Week I will introduce you to the IRC client catgirl. While there are already many IRC clients available (and good ones), there was a niche that wasn't filled yet in the terminal world, between minimalism (ii, ircII) and full featured clients (irssi, weechat). Here comes catgirl, a simple IRC client with enough features to be comfortable for heavy IRC users.

Catgirl has the following features: tab completion, split scrolling, URL detection, nick coloring, ignore filters. On the other hand, it doesn't support non-TLS networks, CTCP, multiple networks or dynamic configuration. If you want to use catgirl with multiple networks, you have to run it once per network.

Catgirl will be available as a package in OpenBSD starting with version 6.9.

OpenBSD security bonus: catgirl makes very good use of unveil to reduce file system access to the minimum required (configuration+logs+certs), reducing the severity of an exploit. It also has a restricted mode, enabled with the -R parameter, that reduces features like notifications or URL handling and tightens the pledge list (the allowed system calls).

Catgirl official website

Catgirl screenshot

Configuration §

A simple configuration file to connect to the irc.tilde.chat server would look like the following; it must be stored under ~/.config/catgirl/tilde

nick = solene_nickname
real = Solene
host = irc.tilde.chat
join = #foobar-channel

You can then run catgirl with this configuration by passing the config file name as a parameter.

$ catgirl tilde

Usage and tips §

I recommend reading catgirl's man page, everything is well explained there. I will cover the most basic needs here.

Catgirl man page

Catgirl only displays one window at a time and it is not possible to split the display. However, if you scroll up, the upper part keeps displaying the history while the bottom still shows the live text stream; it is a neat way to browse the history without cutting yourself off from what's going on in the channel.

Channels can be browsed from the keyboard using Ctrl+N or Ctrl+P like in Irssi, or by typing /window NUMBER, NUMBER being the buffer number. Alt+NUMBER can also be used to switch directly to buffer NUMBER.

You can search in a buffer by typing a word in your input and pressing Ctrl+R to search backward or Ctrl+S to search forward (given you are in the history, of course).

Finally, my favorite feature, which is missing in minimal clients, is Alt+A: it jumps to the next buffer you have to read (and yes, catgirl keeps a line telling how many unread messages each channel has). Even better, when you press Alt+A while there is nothing left to read, you jump back to the channel you manually selected last; this allows quickly reading what you missed and returning to the channel you spend all your time on.

Conclusion §

I really love this IRC client, it easily replaced Irssi which I had used for years, because most of the key bindings are the same, but I am also very happy to use a client that is a lot safer (on OpenBSD). It can be used within tmux for persistence, which also makes connecting to multiple servers manageable.

Full list of services offered by a default OpenBSD installation

Written by Solène, on 16 February 2021.
Tags: #openbsd70 #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

This article gives a short description of EVERY service available as part of a default OpenBSD installation (= no packages installed).

From this whole list, the following services are started by default: cron, dhcpleased, pflogd, sndiod, ntpd, slaacd, resolvd, sshd (OpenSSH), spamlogd, syslogd and smtpd. Among them, the network facing daemons are smtpd (localhost only), sshd and ntpd (as a client).

Service list §

I extracted the list of base install services by looking at /etc/rc.conf.

$ grep _flags /etc/rc.conf | cut -d '_' -f 1

amd §

This daemon is used to automatically mount a remote NFS server when someone wants to access it; it can provide a replacement in case the file system is not reachable. More information is available with "info amd".

amd man page

apmd §

This is the daemon responsible for frequency scaling. It is important to run it on workstations and especially on laptops; it can also trigger automatic suspend or hibernate in case of a low battery.

apmd man page

apm man page

bgpd §

This is a BGP daemon used by network routers to exchange routes with other routers. This is mainly what makes the Internet work: every hosting company announces their IP ranges and how to reach them, and in return they also receive the paths to connect to all other addresses.

OpenBGPD website

bootparamd §

This daemon is used for diskless setups on a network; it provides information to the clients such as which NFS mount point to use for swap or root devices.

Information about a diskless setup

cron §

This is a daemon that reads each user's crontab and the system crontabs to run scheduled commands. User crontabs are modified using the crontab command.

Cron man page

Crontab command

Crontab format

dhcpd §

This is a DHCP server used to automatically provide IPv4 addresses on a network to systems using a DHCP client.

dhcpleased §

This is the new default DHCPv4 client service. It monitors multiple interfaces and is able to handle more complicated setups than dhclient.

dhcpleased man page

dhcrelay §

This is a DHCP request relay, used on a network interface to relay DHCP requests to another interface.

dvmrpd §

This daemon is a multicast routing daemon, in case you need to span multicast outside of your local LAN. It is mostly replaced by PIM nowadays.

eigrpd §

This daemon implements an interior gateway link-state routing protocol; it is like OSPF but compatible with Cisco devices.

ftpd §

This is an FTP server providing many features. While FTP is getting abandoned and obsolete (certainly because it doesn't really play well with NAT), it can be used to provide read/write anonymous access on a directory (and many other things).

ftpd man page

ftpproxy §

This is an FTP proxy daemon that one is supposed to run on a NAT system; it automatically adds PF rules to connect an incoming request to the server behind the NAT. This is part of the FTP madness.

ftpproxy6 §

Same as above but for IPv6. Using IPv6 behind a NAT makes no sense though.

hostapd §

This is the daemon that turns OpenBSD into a WiFi access point.

hostapd man page

hostapd configuration file man page

hotplugd §

hotplugd is an amazing daemon that triggers actions when devices are connected or disconnected. It can be scripted, for example, to automatically mount a drive or run a backup when a USB disk matching a known name is inserted.

hotplugd man page

httpd §

httpd is an HTTP(S) daemon which supports a few features like FastCGI, rewrites and SNI. While it doesn't have all the features of a web server like nginx, it is able to host some PHP programs such as Nextcloud, Roundcube mail or MediaWiki.

httpd man page

httpd configuration file man page

identd §

Identd is a daemon for the Identification Protocol which returns the login name of the user who initiated a connection; this can be used on IRC to authenticate which user started an IRC connection.

ifstated §

This is a daemon monitoring the state of network interfaces which can take actions upon changes. This can be used to react to an interface losing connectivity; I used it to trigger a route change to a 4G device when a ping over the uplink interface was failing.

ifstated man page

ifstated configuration file man page

iked §

This daemon is used to provide IKEv2 authentication for IPSec tunnel establishment.

OpenBSD FAQ about VPN

inetd §

This daemon is often forgotten but is very useful. inetd can listen on a TCP or UDP port and run a command upon connection on the related port: incoming data is passed as the standard input of the program and the program's standard output is returned to the client. This is an easy way to turn a program into a network service. It is not widely used because it doesn't scale well, as running a new program for every connection can push a system to its limits. An illustration of the configuration format is shown after the man page link below.

inetd man page
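As an illustration of the inetd.conf format (service name, socket type, protocol, wait/nowait, user, program and arguments), a line enabling the built-in daytime service would look like the following; for your own program, replace "internal" with its path and arguments, and check inetd(8) for the exact syntax on your release.

daytime stream tcp nowait root internal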

isakmpd §

This daemon is used to provide IKEv1 authentication for IPSec tunnel establishment.

iscsid §

This daemon is an iSCSI initiator which connects to an iSCSI target (let's call it a network block device) and exposes it locally as a /dev/vscsi device. OpenBSD doesn't provide an iSCSI target daemon in its base system but there is one in ports.

ldapd §

This is a light LDAP server, offering version 3 of the protocol.

ldap client man page

ldapd daemon man page

ldapd daemon configuration file man page

ldattach §

This daemon allows configuring devices connected over a serial port, such as GPS devices.

ldomd §

This daemon is specific to the sparc64 platform and provides services for the logical domains feature.

lockd §

This daemon is used as part of an NFS environment to support file locking.

ldpd §

This daemon is used by MPLS routers to get labels.

lpd §

This daemon is used to manage print access to a line printer.

mountd §

This daemon is used by remote NFS clients to learn what the system is currently offering. The command showmount can be used to see what mountd is currently exposing.

mountd man page

showmount man page

mopd §

This daemon is used to distribute MOP images, which seems related to the Alpha and VAX architectures.

mrouted §

Similar to dvmrpd.

nfsd §

This server is used to serve NFS requests from NFS clients. Statistics about NFS (client or server) can be obtained from the nfsstat command.

nfsd man page

nfsstat man page

npppd §

This daemon is used to establish connections using PPP but also to create tunnels with L2TP, PPTP and PPPoE. PPP is used by some modems to connect to the Internet.

nsd §

This daemon is an authoritative DNS name server, which means it holds all the information about a domain name and its subdomains. It receives queries from recursive servers such as unbound / unwind etc... If you own a domain name and you want to manage it from your system, this is what you want.

nsd man page

nsd configuration file man page

ntpd §

This daemon is an NTP service that keeps the system clock at the correct time. It can use NTP servers or sensors (like GPS) as time sources and also supports using remote servers to challenge the time sources. It can also act as a server to provide time to other NTP clients.

ntpd man page

ospfd §

It is a daemon for the OSPF routing protocol (Open Shortest Path First).

ospf6d §

Same as above for IPv6.

pflogd §

This daemon receives packets from PF matching rules with a "log" keyword and stores the data into a logfile that can later be read with tcpdump. Every packet in the logfile contains information about which rule triggered it, so it is very practical for analysis.

pflogd man page

tcpdump

portmap §

This daemon is used as part of an NFS environment.

rad §

This daemon is used on IPv6 routers to advertise routes so clients can automatically pick them up.

radiusd §

This daemon is used to offer RADIUS protocol authentication.

rarpd §

This daemon is used for diskless setups, in which it helps associating an Ethernet address to an IP address and hostname.

Information about a diskless setup

rbootd §

Per the man page, it says « rbootd services boot requests from Hewlett-Packard workstation over LAN ».

relayd §

This daemon is used to accept incoming connections and distribute them to backends. It supports many protocols and can act transparently; its purpose is to be a front end that dispatches connections to a list of backends while also checking backend health. It has many uses and can also be used in addition to httpd to add HTTP headers to a request, or to apply conditions on HTTP request headers to choose a backend.

relayd man page

relayd control tool man page

relayd configuration file man page

resolvd §

This daemon is used to manipulate the file /etc/resolv.conf depending on multiple factors like the configured DNS or a strategy change in unwind.

resolvd man page

ripd §

This is a routing daemon using RIP, an old but widely supported protocol.

route6d §

Same as above but for IPv6.

sasyncd §

This daemon is used to keep IPSec gateways synchronized in case a failover is required. It can be used with carp devices.

sensorsd §

This daemon gathers monitoring information from the hardware, like temperatures or disk status. If a check exceeds a threshold, a command can be run.

sensorsd man page

sensorsd configuration file man page

slaacd §

This service is a daemon that automatically picks up IPv6 auto-configuration on the network.

slowcgi §

This daemon is used to expose a CGI program as a FastCGI service, allowing the httpd HTTP server to run CGI. It is an equivalent of inetd, but for FastCGI.

slowcgi man page

smtpd §

This daemon is the SMTP server that is used to deliver mails locally or to remote email servers.

smtpd man page

smtpd configuration file man page

smtpd control command man page

sndiod §

This is the daemon handling sound from various sources. It also supports sending local sound to a remote sndiod server.

sndiod man page

sndiod control command man page

mixerctl man page to control an audio device

OpenBSD FAQ about multimedia devices

snmpd §

This daemon is an SNMP server exposing some system metrics to SNMP clients.

snmpd man page

snmpd configuration file man page

spamd §

This daemon acts as a fake SMTP server that delays, blocks or passes emails depending on some rules. It can be used to add IPs to a block list if they try to send an email to a specific address (like a honeypot), pass emails from servers within an accept list, or delay connections from unknown servers (grey list) to make them reconnect a few times before passing the email to the real SMTP server. This is a quite effective way to prevent spam, but it becomes less relevant as senders use whole ranges of IPs to send emails: if you want to receive an email from a big email provider, you will grey list server X.Y.Z.1, but then X.Y.Z.2 will retry and so on, so none of them ever passes the grey list.

spamlogd §

This daemon is dedicated to updating the spamd whitelist.

sshd §

This is the well known SSH server. It allows secure connections to a shell from remote clients. It has many features that deserve to be better known, such as restricting commands per public key in the ~/.ssh/authorized_keys file or SFTP-only chrooted access.

sshd man page

sshd configuration file man page

statd §

This daemon is used in an NFS environment together with lockd in order to check if remote hosts are still alive.

switchd §

This daemon is used to control a switch pseudo device.

switch pseudo device man page

syslogd §

This is the logging server that receives messages from local programs and stores them in the corresponding logfile. It can be configured to pipe some messages to a command (programs like sshlockout use this method to learn about IPs that must be blocked), but it can also listen on the network to aggregate logs from other machines. The program newsyslog is used to rotate files (move a file, compress it, allow a new file to be created and remove archives that are too old). Scripts can use the command logger to send text to syslog, as shown below.
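For example, a shell script could log a message like this (the tag after -t is arbitrary):

$ logger -t backup-script "backup finished"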

syslogd man page

syslogd configuration file man page

newsyslog man page

logger man page

tftpd §

This daemon is a TFTP server, used to provide kernels over the network for diskless machines or to push files to appliances.

Information about a diskless setup

tftpproxy §

This daemon is used to manipulate the firewall PF to relay TFTP requests to a TFTP server.

unbound §

This daemon is a recursive DNS server, the kind of server listed in /etc/resolv.conf, whose responsibility is to translate a fully qualified domain name into the IP address behind it, asking one server at a time. For example, to resolve www.dataswamp.org, it asks the .org authoritative server where the authoritative server for dataswamp (within the .org top domain) is, then the dataswamp.org DNS server is asked what the address of www.dataswamp.org is. It can also keep queries in a cache and validate queries and replies; it is a good idea to have such a server on a LAN with many clients to share the query cache.

unbound man page

unbound configuration file man page

unwind §

This daemon is a local recursive DNS server that will do its best to give valid replies. It is designed for nomad users that may encounter hostile environments like captive portals or DHCP-provided DNS servers preventing DNSSEC from working etc.. Unwind regularly polls a few DNS sources (recursion from the root servers, the servers provided by DHCP, or stub or DNS over TLS servers from the configuration file) and chooses the fastest. It also acts as a local cache and can't listen on the network to be used by other clients. It also supports a list of blocked domains as input.

unwind man page

unwind configuration file man page

unwind control command man page

vmd §

This is the daemon that allows running virtual machines using vmm. As of OpenBSD 6.9 it is capable of running OpenBSD and Linux guests, without a graphical interface and with only one core per guest.

vmd man page

vmd configuration file man page

vmd control command man page

vmm driver man page

OpenBSD FAQ about virtualization

watchdogd §

This daemon is used to trigger watchdog timer devices if any.

wsmoused §

This daemon is used to provide mouse support in the console.

xenodm §

This daemon is used to start the X server and allow users to authenticate themselves and log into their session.

xenodm man page

ypbind §

This daemon is used with a Yellow Pages (YP) server to keep and maintain a binding information file.

ypldap §

This daemon offers a YP service using an LDAP backend.

ypserv §

This daemon is a YP server.

What security does a default OpenBSD installation offer?

Written by Solène, on 14 February 2021.
Tags: #openbsd70 #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

In this text I will explain what makes OpenBSD secure by default when you install it. Don't take this as a security analysis, but rather as a guide to help you understand what OpenBSD does to provide a secure environment. The purpose of this text is not to compare OpenBSD to other OSes, but to state what you can honestly expect from OpenBSD.

There is no security without a threat model; I always consider the following cases: a computer stolen at home by a thief, remote attacks trying to exploit running services, and exploits against the user's network clients.

Security matters §

Here is a list of features that I consider important for operating system security. While not every item in the following list is strictly a security feature, they help having a strict system that prevents software from misbehaving and leading to unknown lands.

In my opinion, security is not only about preventing remote attackers from penetrating the system, but also about preventing programs or users from making the system unusable.

Pledge / unveil on userland §

Pledge and unveil are often referred to together although they can be used independently. Pledge is a system call to restrict the permissions of a program at some point in its source code; permissions can't be regained once pledge has been called. Unveil is a system call that hides the whole file system from the process except the paths that are unveiled; it is possible to choose which permissions are allowed for each path.

Both are very effective and powerful surgical security tools, but they require some modifications within the source code of the software, and adding them requires a deep understanding of what the software is doing. It is not always possible to forbid some system calls to a program that needs to do almost anything; software designed with privilege separation is a better candidate for a proper pledge addition because each part has its own job.

Some software in packages have received pledge or/and unveil support, like Chromium or Firefox for the most well known.

OpenBSD presentation about Unveil (BSDCan2019)

OpenBSD presentation of Pledge and Unveil (BSDCan2018)

Privilege separation §

Most of the base system services used within OpenBSD run using a privilege separation pattern. Each part of a daemon is restricted to the minimum required. A monolithic daemon would have to read/write files, accept network connections and send messages to the log, which offers a huge attack surface in case of a security breach. By splitting a daemon into multiple parts, a finer-grained control of each worker is possible, and using the pledge and unveil system calls, limits can be set to greatly reduce the damage in case a worker is hacked.

Clock synchronization §

The ntpd daemon is started by default to keep the clock synchronized with time servers. A reference TLS server is used to challenge the time servers. Keeping a computer's clock synchronized is very important. This is not really a security feature, but you can't be serious if you use a computer on a network without its time synchronized.

X display not as root §

If you use X, it drops privileges to the _x11 user; it runs as an unprivileged user instead of root, so in case of a security issue this prevents an attacker from accessing, through an X11 bug, more than what it should.

Resources limits §

Default resource limits prevent a program from using too much memory, too many open files or too many processes. While this can prevent some huge programs from running with the default settings, it also helps finding file descriptor leaks and prevents a fork bomb or a simple daemon from stealing all the memory and leading to a crash.

Genuine full disk encryption §

When you install OpenBSD using a full disk encryption setup, everything is locked down by the passphrase at the bootloader step; you can't access the kernel or anything of the system without the passphrase.

W^X §

Most programs on OpenBSD aren't allowed to map memory with the Write AND Execute bits at the same time (W^X means Write XOR Exec); this can prevent an interpreter from having its memory modified and executed. Some packages aren't compliant with this and must be linked with a specific library to bypass this restriction AND must be run from a partition with the "wxallowed" mount option.

OpenBSD presentation « Kernel W^X Improvements In OpenBSD »

Only one reliable randomness source §

When your system requires a random number (and it does very often), OpenBSD only provides one API to get random numbers, and they are really random and can't be exhausted. A good random number generator (RNG) is important for many cryptography requirements.

OpenBSD presentation about arc4random

Accurate documentation §

OpenBSD comes with full documentation in its man pages. One should be able to fully configure their system using only the man pages. Man pages sometimes come with CAVEATS or BUGS sections; it's important to pay attention to those. It is better to read the documentation and understand what has to be done in order to configure a system than to follow an outdated and anonymous text found on the Internet.

OpenBSD man pages online

EuroBSDcon 2018 about « Better documentation »

IPSec and Wireguard out of the box §

If you need to set up a VPN, you can use the IPSec or WireGuard protocols using only the base system, no package required.

Memory safeties §

OpenBSD has many safeties in regard to memory allocation and will very aggressively prevent use-after-free or unsafe memory usage. This is often a source of crashes for some software from packages, because OpenBSD is very strict about memory use; this helps finding memory misuse and will kill misbehaving software.

Dedicated root account §

When you install the system, a root account is created and its password is asked, then you create a user that will be a member of the "wheel" group, allowing it to switch to root with root's password. doas (the OpenBSD base system equivalent of sudo) isn't configured by default. With the default installation, the root password is required for any root action. I think a dedicated root account that can be logged into without doas/sudo is better than a misconfigured doas/sudo allowing everything if you only know the user password.

Small network attack surface §

The only services that could be enabled at installation time and listen on the network are OpenSSH (asked at install time, default = yes), dhclient (if you choose DHCP) and slaacd (if you use IPv6 in automatic configuration).

Encrypted swap §

By default the OpenBSD swap is encrypted, meaning that if program memory is sent to the swap, nobody can recover it later.

SMT disabled §

Due to the large number of security breaches related to SMT (like Hyper-Threading), the default installation disables the logical cores to prevent any data leak.

Meltdown: one of the first security issue related to speculative execution in the CPU

Microphone and webcam disabled §

With the default installation, both the microphone and the webcam won't actually record anything except blank video/sound until you set a sysctl to allow it.
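As a hint (double check the sysctl names on your release), re-enabling recording looks like this:

# sysctl kern.audio.record=1
# sysctl kern.video.record=1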

Maintainability, release often, update often §

The OpenBSD team publishes a new release every six months and only the last two releases receive security updates. This allows upgrading often and without pain: the upgrade process is a small step twice a year, which helps keep the whole system up to date. This avoids the fear of a huge upgrade that never gets done, and I consider it a huge security bonus. Most OpenBSD systems around are running the latest versions.

Signify chain of trust §

The installer, archives and packages are signed using signify public/private keys. OpenBSD installations come with the keys for the current release and the next one (n+1) to check package authenticity. A key is used for only six months and new keys are received with each new release, building a chain of trust. Signify keys are very small and are published on many media so you can double check them when you need to bootstrap this chain of trust.

Signify at BSDCan 2015

Packages §

While most of the previous items were about the base system or the kernel, the packages also have a few tricks to offer.

Chroot by default when available §

Most daemons from packages that offer a chroot feature will have it enabled by default. In some circumstances, like for the Nginx web server, the software is even patched by the OpenBSD team to enable chroot, which is not an official feature.

Dedicated users for services §

Most packages that provide a server also create a new dedicated user for that exact service, allowing more privilege separation in case of a security issue in one service.

Installing a service doesn't enable it §

When you install a service, it doesn't get enabled by default. You will have to configure the system to enable it at boot. There is a single /etc/rc.conf.local file that can be used to see what is enabled at boot, and it can be manipulated using the rcctl command. Forcing the user to enable services makes the system administrator fully aware of what is running on the system, which is a good point for security.
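For example, after installing a package providing a daemon (nginx here, purely as an illustration), you would enable and start it like this:

# rcctl enable nginx
# rcctl start nginx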

rcctl man page

Conclusion §

Most of the previous "security features" should be considered good practices rather than features. Many good practices such as the following could easily be implemented in most systems: limiting user resources, reducing daemon privileges, memory usage strictness, providing good documentation, starting only the least required services and giving the user a clean default installation.

There are also many other features that have been added which I don't fully understand, so I prefer to let the reader discover them through the links below.

« Mitigations and other real security features » by Theo de Raadt

OpenBSD innovations

OpenBSD events, often including slides or videos

Firejail on Linux to sandbox all the things

Written by Solène, on 14 February 2021.
Tags: #linux #security #sandbox

Comments on Fediverse/Mastodon

Introduction §

Firejail is a program that can prepare sandboxes to run other programs. This is an efficient way to keep a piece of software isolated from the rest of the system without changing its source code; it works for networked, graphical or daemon programs.

You may want to sandbox the programs you run in order to protect your system from any issue that could happen within them (security breach, code mistake, unknown errors). Steam for example once had an "rm -fr /" issue; using a sandbox would have saved at least a part of the user directory. Web browsers are major tools nowadays and yet they have access to the whole system while having many security issues discovered and exploited in the wild; running them in a sandbox can reduce the data a hacker could exfiltrate from the computer. Of course, sandboxing comes with a usability tradeoff: if you only allow access to the ~/Downloads/ directory, you need to put files there when you want to upload them, and you can only download files into this directory and move them later to where you really want to keep them.

Installation §

On most Linux systems you will find a Firejail package that you can install. If your distribution doesn't provide one, installing from sources seems quite easy, and as the project is written in C with limited dependencies the build process should be straightforward.
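For example, on a Debian-based distribution this should be something like:

$ sudo apt install firejail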

There is no service to enable and no kernel parameters to add. The AppArmor or SELinux kernel features can be integrated into Firejail profiles if you want to.

Usage §

Start a program §

The simplest usage is to run a command by adding Firejail before the command name.

$ firejail firefox

Firejail has a neat feature that allows starting software by its name without calling firejail explicitly: if you create a symbolic link in your $PATH using a program name but targeting firejail, when you call that name Firejail will automatically know what you want to start. The following example will run firefox when you call the symbolic link.

$ export PATH=~/bin/:$PATH
$ ln -s /usr/bin/firejail ~/bin/firefox
$ firefox

Listing sandboxes §

The firejail --list command will tell you about all running sandboxes and their parameters. The first column is an identifier used by other Firejail features.

$ firejail --list
6108:solene::/usr/bin/firejail /usr/bin/firefox 

Limit bandwidth per program §

Firejail also has a neat feature that allows limiting the bandwidth available to a single sandbox environment. Reusing the previous list output, I will reduce firefox's bandwidth; the numbers are in kB/s.

$ firejail --bandwidth=6108 set wlan0 1000 40

You can find more information about this feature in the "TRAFFIC SHAPING" section of the Firejail man page.

Restrict network access §

If for some reason you want to start a program with absolutely no network access, you can run a program and deny it any network.

$ firejail --net=none libreoffice

Conclusion §

Firejail is a neat way to start software in sandboxes without requiring any particular setup. It may be more limited and maybe less reliable than OpenBSD programs that received unveil() support, but it's a nice tradeoff between safety and the work required in the source code (literally none). It is a very interesting project that proves to work easily on any Linux system, with simple C source code and few dependencies. I am not really familiar with the Linux kernel and its features, but Firejail seems to use seccomp-bpf and namespaces; I guess they are complicated to use but powerful, and Firejail comes here as a wrapper to automate all of this.

Firejail has proven to be USABLE and RELIABLE for me, while my attempts at sandboxing Firefox with AppArmor were tedious and not optimal. I really recommend it.

More resources §

Official project website with releases and security information

Firejail sources and documentation

Community profiles 1

Community profiles 2

Bandwidth limiting on OpenBSD 6.8

Written by Solène, on 07 February 2021.
Tags: #openbsd #unix #network

Comments on Fediverse/Mastodon

This is a February 2021 update of a text originally published in April 2017.

Introduction §

I will explain how to limit bandwidth on OpenBSD using the queuing capability of its firewall PF (Packet Filter). It is a very powerful feature but it may be hard to understand at first. What is very important to understand is that it's technically not possible to limit the download bandwidth of the whole system: once data reaches your network interface, it's already there and has already crossed your uplink. What is possible is to limit the upload rate, which in turn caps the download rate.

OpenBSD pf.conf man page about queuing

Prerequisites §

My home internet access allows me to download at 1600 kB/s and upload at 95 kB/s. An easy way to set the limits is to compute a percentage of your upload bandwidth and apply the same ratio to your download speed (this may not be very precise and may require tweaks).

PF syntax requires bandwidth to be defined in kilobits (kb) and not kilobytes (kB); multiplying by 8 converts kB to kb.

Configuration §

Edit the file /etc/pf.conf as root and add the following before any pass/match/drop rules; in this example my main interface is em0.

# we define a main queue (requirement)
queue main on em0 bandwidth 1G

# set a queue for everything
queue normal parent main bandwidth 200K max 200K default

And reload with `pfctl -f /etc/pf.conf` as root. You can monitor the queue working with `systat queue`

QUEUE        BW/FL SCH      PKTS    BYTES   DROP_P   DROP_B QLEN
main on em0  1000M fifo        0        0        0        0    0
 normal      1000M fifo   535424 36032467        0        0   60

More control (per user / protocol) §

This is only a global queuing rule that will apply to everything on the system. It can be greatly extended for specific needs. For example, I use the program "oasis", which is a daemon for a peer to peer social network; sometimes it has upload bursts because someone is syncing against my computer, so I use the following rule to limit the upload bandwidth of this user.

# within the queue rules
queue oasis parent main bandwidth 150K max 150K

# in your match rules
match on egress proto tcp from any to any user oasis set queue oasis

Instead of a user, the rule could match a "to" address; I used to have such rules when I wanted to limit my upload bandwidth while uploading videos through the Peertube web interface.
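A hypothetical example of such a rule, with 203.0.113.10 standing in for the remote server, could look like this:

# within the queue rules
queue videos parent main bandwidth 300K max 300K

# in your match rules
match out on egress proto tcp from any to 203.0.113.10 set queue videos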

How to set a system wide bandwidth limit on Linux systems

Written by Solène, on 06 February 2021.
Tags: #linux #bandwidth

Comments on Fediverse/Mastodon

In these times of remote work / home office, you may have limited bandwidth shared with other people/devices. Not all software provides a way to limit bandwidth usage (package managers, Youtube video players etc...).

Fortunately, Linux has a very nice program, very easy to use, to limit your bandwidth in one command. This program is « Wondershaper » and it uses the Linux QoS framework that is usually manipulated with "tc", but it makes setting limits VERY easy.

What are QoS, TC and Filters on Linux

On most distributions, wondershaper will be available as a package with its own name. I found a few distributions that didn't provide it (NixOS at least), and some are providing various wondershaper versions.

To know if you have the newer version, "wondershaper --help" may provide information about the "-d" and "-u" flags; the older version doesn't have them.

Wondershaper requires the download and upload bandwidths to be set in kb/s (kilobits per second, not kilobytes). I personally only know my bandwidth in kB/s, which is 1/8 of its kb/s equivalent. My home connection is 1600 kB/s max in download and 95 kB/s max in upload, so I can use wondershaper to limit to 1000 / 50 so it won't affect my other devices on the network much.

# my network device is enp3s0
# new wondershaper
sudo wondershaper -a enp3s0 -d $(( 1000 * 8 )) -u $(( 50 * 8 ))

# old wondershaper
sudo wondershaper enp3s0 $(( 1000 * 8 )) $(( 50 * 8 ))

I use a multiplication to convert from kB/s to kb/s and still keep the command understandable to me. Once a limit is set, wondershaper can be used to clear the limit and get the full bandwidth available again.

# new wondershaper
sudo wondershaper -c -a enp3s0

# old wondershaper
sudo wondershaper clear enp3s0

So many programs don't allow limiting download/upload speeds; wondershaper's effectiveness and ease of use are a blessing.

Filtering TCP connections by operating system on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

In this text I will explain how to filter TCP connections by operating system using OpenBSD Packet filter.

OpenBSD pf.conf man page about OS Fingerprinting

Explanations §

Every operating system has its own way to construct some SYN packets; this is called fingerprinting because it permits identifying which OS sent which packet. To be clear, it's not a perfect filter and it can easily be bypassed if you want to.

Because specific packets are required to identify the operating system, only TCP connections can be filtered by OS. The OS list and SYN values can be found in the file /etc/pf.os.

How to setup §

The keyword "os $value" must be used within the "from $address" keyword. I use it to restrict the ssh connection to my server only to OpenBSD systems (in addition to key authentication).

# only allow OpenBSD hosts to connect
pass in on egress inet proto tcp from any os OpenBSD to (egress) port 22

# allow connections from $home IP whatever the OS is
pass in on egress inet proto tcp from $home to (egress) port 22

This can be a very good way to stop unwanted traffic spamming the logs, but it should be used with caution because you may incidentally block legitimate traffic.

Using pkgsrc on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #pkgsrc

Comments on Fediverse/Mastodon

This quick article will explain how to install pkgsrc packages on an OpenBSD installation. This is something regularly asked on the #openbsd freenode IRC channel. I am not convinced of the relevance of pkgsrc under OpenBSD, but why not :)

I will cover an unprivileged installation that doesn't require root. I will use packages from the 2020Q4 release; I may not update this text regularly, so you will have to adapt to your current year.

$ cd ~/
$ ftp https://cdn.NetBSD.org/pub/pkgsrc/pkgsrc-2020Q4/pkgsrc.tar.gz
$ tar -xzf pkgsrc.tar.gz
$ cd pkgsrc/bootstrap
$ ./bootstrap --unprivileged

From now on, you must add the path ~/pkg/bin to your $PATH environment variable. The pkgsrc tree is in ~/pkgsrc/ and all the relevant files for it to work are in ~/pkg/.

You can install programs by finding the directory of the software you want in ~/pkgsrc/ and running "bmake install" there, for example in ~/pkgsrc/chat/irssi/ to install the irssi IRC client.
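Continuing with the irssi example, that would be:

$ cd ~/pkgsrc/chat/irssi
$ bmake install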

I'm not sure X11 software compiles well: I got compilation errors building dbus as a dependency of x11/xterm, maybe clashing with Xenocara from the base system... I don't really want to investigate this further though.

Enable multi-factor authentication on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

In this article I will explain how to add a bit more security to your OpenBSD system by adding a requirement for users logging into the system, locally or by ssh. I will explain how to set up two-factor authentication (2FA) using TOTP on OpenBSD.

What is TOTP (Time-based One time Password)

When do you want or need this? It adds a burden in terms of usability: in addition to your password, you will require a device that is pre-configured to generate the one time passwords, and if you don't have it you won't be able to log in (that's the whole point). Let's say you activated 2FA for ssh connections on an important server: if your private ssh key gets stolen (and it has no passphrase, ouch!), the attacker will still not be able to connect to the SSH server without access to your TOTP generator.

TOTP software §

Here is a quick list of TOTP software

- command line: oathtool from package oath-toolkit

- GUI and multiplatform: KeepassXC

- Android: FreeOTP+, andOTP, OneTimePass etc.. (found on F-Droid)

Setup §

A package is required in order to provide the various programs needed. The package comes with a README file available at /usr/local/share/doc/pkg-readmes/login_oath with many explanations about how to use it. I will take a lot of information from there for the local login setup.

# pkg_add login_oath

You will have to add a new login class, depending on the kind of authentication you want. You can either allow password OR TOTP, or require password AND TOTP (in the form of TOTP_CODE/password typed as the password). From the README file, add what you want to use:

# totp OR password
totp:\
        :auth=-totp,passwd:\
        :tc=default:

# totp AND password
totppw:\
        :auth=-totp-and-pwd:\
        :tc=default:

If you have a /etc/login.conf.db file, you have to run cap_mkdb on /etc/login.conf to update it. Most people don't need this, it only helps a bit with performance when you have many rules in /etc/login.conf.
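That command, to run only if the .db file already exists, is:

# cap_mkdb /etc/login.conf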

Local login §

Local login means logging in on a TTY, into your X session, or anything else requiring your system password. You can make a user use TOTP by adding it to the corresponding login class with this command.

# usermod -L totp some_user

In the user directory, you have to generate a key and give it the correct permissions.

$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 ~/.totp-key

The .totp-key file contains the secret that will be used by the TOTP generator, but most generators will only accept it encoded as base32. You can use the following python3 command to convert the secret into base32.

python3 -c "import base64; print(base64.b32encode(bytes.fromhex('YOUR SECRET HERE')).decode('utf-8'))"

SSH login §

It is possible to require your users to use TOTP or a public key + TOTP. When you refer to "password" in ssh, this is the same password as for login: the plain password for regular users, the TOTP code for users in the totp class, and TOTP/password for users in the totppw class.

This allows fine-grained tuning of login options. The password requirement in SSH can be enabled per user or globally by modifying the file /etc/ssh/sshd_config.

sshd_config man page about AuthenticationMethods

# enable for everyone
AuthenticationMethods publickey,password

# for one user
Match User solene
	AuthenticationMethods publickey,password

Let's say you enabled the totppw class for your user and use "publickey,password" in AuthenticationMethods in ssh: you will need your ssh private key AND your password AND your TOTP generator.

Even without doing any TOTP, by using this setting in SSH you can require users to use their key and their system password in order to log in. TOTP only adds more strength to the requirements to connect, but also more complexity for people who may not be comfortable with such security levels.

Conclusion §

In this text we have seen how to enable 2FA for your local login and for login over ssh. Be careful not to lock yourself out of your system by losing the 2FA generator.

NixOS review: pros and cons

Written by Solène, on 22 January 2021.
Tags: #nixos #linux

Comments on Fediverse/Mastodon

Hello, in this article I would like to share my thoughts about the NixOS Linux distribution. I've been using it daily for more than six months as my main workstation at work and on some computers at home too. I also made modest contributions to the git repository.

NixOS official website

Introduction §

NixOS is a Linux distribution built around the Nix tool. I'll try to explain quickly what Nix is, but if you want more accurate explanations I recommend visiting the project website. Nix is the package manager of the system; Nix can also be used on any Linux distribution on top of the distribution's package manager. NixOS is built from top to bottom with Nix.

This makes NixOS an entirely different system from what one can expect from a regular Linux/Unix system (with the exception of Guix, which shares the same idea with a different implementation). The NixOS system configuration is stateless: most of the system is read-only and most of the paths you know don't exist. The directory /bin only contains "sh", which is a symlink.

The whole system configuration (fstab, packages, users, services, crontab, firewall...) is defined in a global configuration file that describes the state of the system.

Here is an example from my configuration file enabling a graphical interface with Mate as the desktop and a French keyboard layout.

services.xserver.enable = true;
services.xserver.layout = "fr";
services.xserver.libinput.enable = true;
services.xserver.displayManager.lightdm.enable = true;
services.xserver.desktopManager.mate.enable = true;

I could add the following lines into the configuration to add auto login into my graphical session.

services.xserver.displayManager.autoLogin.enable = true;
services.xserver.displayManager.autoLogin.user = "solene";

Pros §

There are a lot of pros. The system is really easy to set up; installing a system (for a reinstall, or to replicate an installation) is very easy, you only need the configuration.nix file from the other/previous system. Everything is very fast to set up, it's often only a few lines to add to the configuration.

Every time the system is rebuilt from the configuration file, a new grub entry is created, so at boot you can choose which environment to boot into. This makes upgrades or experiments easy to roll back and safe.

Documentation! The NixOS documentation is very nice and is part of the code. There is a special man page "configuration.nix" on the system that contains all the variables you can define, what values to expect, what the default is and what it does. You can literally search for "steam", "mediawiki" or "luks" to get the information to configure your system.

All the documentation

Builds are reproducible. I don't consider it a huge advantage but it's nice to have. This allows challenging a package mirror by building packages locally and verifying the mirror provides the exact same package.

It has a lot of packages. I think the NixOS team is pretty happy to share their statistics because, if I got it right, Nixpkgs is the biggest and most up-to-date repository alive.

Search for a package

Cons §

When you download a pre-compiled Linux program that isn't statically built, it's a huge pain to make it work on NixOS. The binary will expect some paths to exist at the usual places, but they won't exist on NixOS. There are some tricks to make them work but it's not always easy. If the program you want isn't in the packages, it may not be easy to use it. Flatpak can help to get some programs if they are not packaged though.

Running binaries

It takes disk space: some libraries can exist several times with small compilation differences, and a program can exist in different versions at the same time because previous builds are still available for boot in grub; if you forget to clean them, it takes a lot of disk space.

The whole system (especially for graphical environments) may not feel as polished as more mainstream distributions that put a lot of effort into branding and customization. NixOS will only install everything, and you get a quite raw environment that you have to configure. It's not a real con, but in comparison to other desktop oriented distributions, NixOS may not look as good out of the box.

Conclusion §

NixOS is an awesome piece of software. It works very well and I never had any reliability issue with it. Some services like xrdp are usually quite complex to set up, but it worked out of the box here for me.

I see it as a huge Lego© box with which you can automate the building of the super system you want, given you have the schematics of its parts. But once you need a block that isn't in your recipe list, you will have a hard time.

I really put it in its own category: next to Linux/BSD distributions and Windows, there is the NixOS / Guix category with those stateless systems whose configuration is their code.

Vger security analysis

Written by Solène, on 14 January 2021.
Tags: #vger #gemini #security

Comments on Fediverse/Mastodon

I would like to share about Vger internals, in regards to how security was thought out to protect Vger users and host systems.

Vger code repository

Thinking about security first §

I claim security is Vger's main feature; I even wrote Vger to have a secure Gemini server that I can trust. Why so? It's written in C and I'm a beginner developer in this language, so this looks like a scam.

I chose to follow the best practices I'm aware of from the very first line. My goal is to be sure Vger can't be used to exfiltrate data from the host on which it runs, nor to run arbitrary commands. While I may have missed corner cases in which it could crash, I think a crash is the worst that can happen with Vger.

Smallest code possible §

Vger doesn't have to manage connections or TLS; a lot of code was already removed by this design choice. There are better tools made exactly for this purpose, so it's time to reuse other people's good work.

Inetd and user §

Vger is run by the inetd daemon, which allows choosing the user running vger. Using a dedicated user is always a good idea to limit the harm in case of an issue, but it's really not sufficient to prevent vger from behaving badly.

Another kind of security benefit is that vger's runtime doesn't loop like a daemon awaiting new connections. Vger accepts a request, reads a file if it exists, gives its result and terminates. This is less error prone because no variable can be reused or tampered with after a loop that could leave the code in an inconsistent or vulnerable state.

Chroot §

A critical vger feature is the ability to chroot into a directory, meaning the directory is now seen as the root of the file system (/var/gemini would be seen as /) and vger can't escape it. In addition to the chroot, the feature allows vger to drop privileges to an unprivileged user.

     /* 
      * use chroot() if a user is specified requires root user to be 
      * running the program to run chroot() and then drop privileges 
      */
     if (strlen(user) > 0) {

             /* is root? */
             if (getuid() != 0) {
                     syslog(LOG_DAEMON, "chroot requires program to be run as root");
                     errx(1, "chroot requires root user");
             }
             /* search user uid from name */
             if ((pw = getpwnam(user)) == NULL) {
                     syslog(LOG_DAEMON, "the user %s can't be found on the system", user);
                     err(1, "finding user");
             }
             /* chroot worked? */
             if (chroot(path) != 0) {
                     syslog(LOG_DAEMON, "the chroot_dir %s can't be used for chroot", path);
                     err(1, "chroot");
             }
             chrooted = 1;
             if (chdir("/") == -1) {
                     syslog(LOG_DAEMON, "failed to chdir(\"/\")");
                     err(1, "chdir");
             }
             /* drop privileges */
             if (setgroups(1, &pw->pw_gid) ||
                 setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) ||
                 setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid)) {
                     syslog(LOG_DAEMON, "dropping privileges to user %s (uid=%i) failed",
                            user, pw->pw_uid);
                     err(1, "Can't drop privileges");
             }
     }

No use of third party libs §

Vger only requires standard C includes, which avoids placing trust in dozens of developers of fragile or barely tested code.

OpenBSD specific code §

In addition to all the previous security practices, OpenBSD offers a few functions that greatly restrict what Vger can do.

The first function is pledge, which restricts the system calls the code itself can make. The only syscalls currently allowed in vger belong to the categories "rpath" and "stdio", basically standard input/output and reading files/directories. This means that after pledge() is called, if any syscall outside those two categories is used, vger will be killed and a pledge error will be reported in the logs.

The second function is unveil, which restricts filesystem access to the paths you list, with the permissions you specify. Currently, vger only allows read-only file access in the base directory used to serve files.

Here is an extract of the OpenBSD specific code. With unveil available everywhere, chroot wouldn't be required.

 #ifdef __OpenBSD__
         /* 
          * prevent access to files other than the one in path 
          */
         if (chrooted) {
                 eunveil("/", "r");
         } else {
                 eunveil(path, "r");
         }
         /* 
 	  * prevent system calls other than parsing the query, reading files
 	  * and writing to stdio 
          */
         if (pledge("stdio rpath", NULL) == -1) {
                 syslog(LOG_DAEMON, "pledge call failed");
                 err(1, "pledge");
         }
 #endif

The least code before dropping privileges §

I did my best to use the least code possible before reducing Vger's capabilities. Only the code managing the command-line parameters runs before activating chroot and/or unveil/pledge.

int
main(int argc, char **argv)
{
     char            request  [GEMINI_REQUEST_MAX] = {'\0'};
     char            hostname [GEMINI_REQUEST_MAX] = {'\0'};
     char            uri      [PATH_MAX]           = {'\0'};
     char            user     [_SC_LOGIN_NAME_MAX] = "";
     int             virtualhost = 0;
     int             option = 0;
     char           *pos = NULL;

     while ((option = getopt(argc, argv, ":d:l:m:u:vi")) != -1) {
             switch (option) {
             case 'd':
                     estrlcpy(chroot_dir, optarg, sizeof(chroot_dir));
                     break;
             case 'l':
                     estrlcpy(lang, "lang=", sizeof(lang));
                     estrlcat(lang, optarg, sizeof(lang));
                     break;
             case 'm':
                     estrlcpy(default_mime, optarg, sizeof(default_mime));
                     break;
             case 'u':
                     estrlcpy(user, optarg, sizeof(user));
                     break;
             case 'v':
                     virtualhost = 1;
                     break;
             case 'i':
                     doautoidx = 1;
                     break;
             }
     }

     /* 
      * do chroot if a user is supplied run pledge/unveil if OpenBSD 
      */
     drop_privileges(user, chroot_dir); 

The Unix way §

Unix is made of small components that can work together as small bricks to build something more complex. Vger is based on this idea by delegating the listening daemon handling incoming requests to another software (let's say relayd or haproxy). Once TLS is delegated, what's left from the gemini specs is to accept a request and return some content, which is well suited for a program accepting a request on its standard input and giving the result on standard output. Inetd is key here to make such a program compatible with a daemon like relayd or haproxy. When a connection is made to the TLS listening daemon, a local port will trigger inetd, which will run the command, passing the network content to the binary on its stdin.

Fine grained CGI §

CGI support was added in order to allow Vger to serve dynamic content instead of only static files. The control is fine grained: you can allow a single file to be executable as a CGI, or a whole directory of files. When serving a CGI, vger forks, a pipe is opened between the two processes, and the child process uses execlp to run the CGI and transmit its output to vger.
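
As a hedged illustration (not taken from the vger sources), a CGI could be as small as a shell script printing the gemini response header followed by its dynamic content, assuming the script is responsible for the whole response:

#!/bin/sh
# hypothetical CGI example: a gemini success header, then a generated body
printf "20 text/gemini\r\n"
printf "# Hello from CGI\n"
printf "Generated on %s\n" "$(date)"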

Using tests §

From the beginning, I wrote a set of tests to be sure that once a kind of request or a use case works, I can easily check I won't break it. This isn't about security but about reliability. When I push a new version to the git repository, I am absolutely confident it will work for the users. It was also an invaluable help while writing Vger.

As vger is a simple binary that accepts data on stdin and outputs data on stdout, it is simple to write tests like this. The following example will run vger with a request; as the content is local and within the git repository, the output is predictable and known.

printf "gemini://host.name/autoidx/\r\n" | vger -d var/gemini/

From here, it's possible to build an automatic test by comparing the checksum of the output to the checksum of the known correct output. Of course, when you add a new use case, this requires generating the reference checksum manually once so it can be used as a comparison later.

OUT=$(printf "gemini://host.name/autoidx/\r\n" | ../vger -d var/gemini/ -i | md5)
if [ "$OUT" != "770a987b8f5cf7169e6bc3c6563e1570" ]
then
	echo "error"
	exit 1
fi

At this time, vger has 19 use cases in its test suite.

By using the program `entr` and a Makefile to manage the build process, it was very easy to trigger the testing process while working on the source code, allowing me to run the test suite simply by saving my current changes. Anytime a .c file is modified, entr triggers a make test command that is displayed in a dedicated terminal.

ls *.c | entr make test

Realtime integration tests? :)

Conclusion §

By using best practices, reducing the amount of code and using only system libraries, I am quite confident about Vger's security. The only real issue could be too many connections leading to a quite high load, due to inetd spawning new processes, resulting in a denial of service. This could be avoided by throttling simultaneous connections in the TLS daemon.

If you want to contribute, please do, and if you find a security issue please contact me, I'll be glad to examine the issue.

Free time partitionning

Written by Solène, on 06 January 2021.
Tags: #life

Comments on Fediverse/Mastodon

Lately I wanted to change the way I use my free time. I define my free time as: not working, not sleeping, not eating. So I estimate it at six hours a day on work days and fourteen hours on non-worked days.

With the year 2020 being quite unusual, I was staying at home most of the time without seeing the time pass. At the end of the year, I started to mix up the duration of weeks and months, which disturbed me a lot.

For a few weeks now, I have been changing the way I spend my free time. I thought it would be nice to have a few separate activities in the same day to help me realize how time passes.

Activity list §

Here is the way I chose to distribute my free time. It's not a strict approach, I measure nothing. But I try to keep a simple ratio of 3/6, 2/6 and 1/6.

Recreation: 3/6 §

I spend a lot of time on recreation. A few activities I've put into recreation:

  • video games
  • movies
  • reading novels
  • sports

Creativity: 2/6 §

These activities require creativity, work and knowledge:

  • writing code
  • reading technical books
  • playing music
  • creating content (texts, video, audio etc..)

Chores: 1/6 §

Yes, obviously this has to be done on free time... And it's always better to do a bit every day than to accumulate it until you are forced to deal with it.

Conclusion §

I only started a few weeks ago but I really enjoy doing it. As I said previously, it's not something I strictly apply, it's more a general way to spend my time and not stick to writing code for six hours in a row from after work until going to sleep. I really feel my life is better balanced now and I feel some accomplishment from the few activities done every day.

Questions / Answers §

Some people asked me if I was planning in advance how to spend my time.

The answer is no. I don't plan anything, but when I tend to lose focus on what I'm doing (and this happens often), I think about this time repartition method, realize it may be time to jump to another activity, and pick something in another category. Now that I think about it, I very often used to do something just because I was bored and lacked ideas of activities to occupy myself; with this list I no longer have this issue.

Toward a simpler lifestyle

Written by Solène, on 04 January 2021.
Tags: #life

Comments on Fediverse/Mastodon

I don't often give my own opinion on this blog but I really feel it is important here.

The matter is about ecology, fair money distribution and civilization. I feel I need to share a bit about my lifestyle, in hope it will have a positive impact on some of my readers. I really think one person can make a change. I changed myself, only by spending a few moments with a member of my family a few years ago. That person never tried to convince me of anything, they only lived by their own standards without ever offending me; it was simple things, nothing that would make that person a pariah in our society. But I got curious about the reasons and I figured it out myself way later; now I understand why.

My philosophy is simple. In a life in modern civilization where everything goes fast, where everyone cares about the opinions others have of them, and where communication is everywhere, step back.

Here are the various statements I am following. This is something I defined for myself, these are not absolute rules.

  • Be yourself and be prepared to assume who you are. If you don't have the latest gadget you are not "has been", if you don't live in a giant house you didn't fail your career, and if you don't have a top notch shiny car nobody should ever care.
  • Reuse what you have. It's not because a piece of clothing has a little scratch that you can't reuse it. It's not because an electronic device is old that you should replace it.
  • Opensource is a great way to revive old computers
  • Reduce your food waste to zero and eat less meat, because feeding the animals we eat requires a huge food production, more than what we finally get from the meat.
  • Travel less; there is a lot to see around where I live, not only at the other side of the planet. Certainly don't go on vacation far away from home only to enjoy a beach under the sun. This also means no car if it can be avoided, and if I use a car, why not carpool?
  • Avoid gadgets (electronic devices that bring nothing useful) at all cost. Buy good gear (kitchen tools, workshop tools, furniture etc...) that can be repaired. If possible buy second hand. For non-essential gear, second hand is mandatory.
  • In winter, heat at 19°C maximum with warm clothes while at home.
  • In summer, no A/C, but use of external insulation and vines along the home to help cool it down, plus fans and water while wearing light clothes to keep cool.

While some people are looking for more and more, I seek less. There is not enough for everyone on the planet, so it's important to make sacrifices.

Of course, it is how I am and I don't expect anyone to apply this, that would be insane :)

Be safe and enjoy this new year! <3

Lowtech Magazine, articles about doing things using simple technology

[FR] Why I use OpenBSD

Written by Solène, on 04 January 2021.
Tags: #openbsd #francais

Comments on Fediverse/Mastodon

In this post I will share my feelings about what I like in OpenBSD.

Privacy §

There is no telemetry in OpenBSD, so I don't have to worry about my privacy. As a reminder, telemetry is a mechanism consisting in reporting information about the user in order to analyze how the product is used.

Moreover, the system default is to entirely disable the microphone: unless root intervenes, the microphone records silence (which avoids having to block it through usage permissions). Coming in 6.9, the camera follows the same path and will be disabled by default. To me, this is a strong signal about the need to protect the user.

Secure web browsers §

With the addition of security features (pledge and especially unveil) to the Firefox and Chromium sources, I am more at ease using them daily. Nowadays, using a web browser is almost unavoidable, yet browsers have become both extremely complex and poorly controlled. With client-side code execution via Javascript having more and more possibilities, performance and requirements, adding a bit of security to the equation was necessary. Although these additions are sometimes a bit inconvenient to use, I am really happy to benefit from them.

With these security features added (by default), the browsers mentioned above cannot browse directories other than what is needed for their own operation, plus the ~/Downloads/ and /tmp/ folders. Locations such as ~/Documents or ~/.gnupg are completely inaccessible, which greatly limits the risk of data exfiltration by the browser.

One could roughly rebuild the same functionality on Linux using AppArmor, but the integration is extremely complicated (whereas it's the default on OpenBSD) and a bit less effective: it is easier to act at the right moment from within the code than to wrap the whole program in a set of rules.

PF firewall §

With PF, it is very simple to check the configuration file to understand the rules in place on a server or a desktop computer. Centralizing the rules in one file, together with the macro system, allows writing simple and readable rules.

I use the bandwidth management feature a lot to limit the throughput of some applications that don't offer such a setting. It's very important to me because I'm not the only user of the network and my connection is rather slow.
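
A hedged pf.conf sketch of that kind of bandwidth limiting (the interface, rates and matched port are only examples):

queue outq on em0 bandwidth 5M
queue std parent outq bandwidth 4M default
queue slow parent outq bandwidth 500K
# send outgoing rsync traffic through the slow queue
match out on em0 proto tcp to port 873 set queue slow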

On Linux, it is possible to use the programs trickle or wondershaper to set up bandwidth limits; on the other hand, iptables is a nightmare to use as a firewall!

It's stable §

Apart from use on uncommon hardware, OpenBSD is very stable and reliable. I can easily reach two weeks of uptime on my desktop computer with several suspends a day. My OpenBSD servers have been running 24/7 without problems for years.

I rarely go beyond two weeks of uptime because I have to upgrade the system from time to time to keep developing on OpenBSD :)

Little maintenance §

Keeping an OpenBSD system up to date is very simple. I run the syspatch and pkg_add -u commands every day to keep my servers updated. An upgrade is needed every six months to move to the next release, but apart from a few specific instructions that may sometimes apply, an upgrade looks like this:

# sysupgrade
[..wait a bit..]
# pkg_add -u
# reboot

Quality documentation §

Installing OpenBSD with full disk encryption is very easy (I should write a post about the importance of encrypting your disks and phones).

The official documentation explaining how to install a router with NAT is a perfect step-by-step guide; it's a reference whenever a router has to be set up.

All the binaries of the base system (this doesn't include packages) have documentation, as well as their configuration files.

The website, the official FAQ and the man pages are the only resources needed to get by. They represent a big chunk, it's not always easy to find your way around, but everything is there.

If I had to manage without internet for a while, I would much prefer being on an OpenBSD system. The man page documentation is generally enough to get by.

Imagine setting up a router doing traffic shaping on OpenBSD or Linux without any documents external to the system. Personally, I choose OpenBSD 100% for that :)

Ease of contribution §

I really love the way OpenBSD handles contributions. I fetch the sources onto my system, make my changes, generate a diff file (the difference between before/after) and send it to the mailing list. All of this can be done from the console with tools I already know (git/cvs) and emails.
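
As a hedged sketch of what this looks like in practice with the CVS tree (the paths are only examples):

$ cd /usr/src
$ cvs diff -uNp > /tmp/feature.diff

The resulting diff is then sent inline by email to the relevant mailing list, for example tech@.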

Sometimes, new contributors may think that the people replying are really not nice. **This is not true**. If you send a diff and receive criticism, it already means someone is giving you some of their time to explain what can be improved. I can understand it may seem harsh to some people, but that's not it at all.

This year, I made a few modest contributions to the OpenIndiana and NixOS projects; it was the opportunity to discover how these projects handle contributions. Both use github and their way of doing things is very interesting, but understanding it takes quite some work because it's relatively complicated.

OpenIndiana official website

NixOS official website

The contribution method requires a Github account, forking the project, cloning the fork locally, creating a branch, making the changes locally, pushing the fork to your github account and using the github web interface to make a "pull request". That's the short version. On NixOS, my first attempt at a pull request ended up as a request containing six months of commits on top of my small change. With good documentation and practice it's entirely manageable. This way of working has some advantages such as contributor tracking, continuous integration and easier code review, but it is as off-putting as can be for newcomers.
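
For the record, the short version above roughly corresponds to this hedged command sequence (account, branch and commit message are hypothetical):

$ git clone git@github.com:youraccount/nixpkgs.git
$ cd nixpkgs
$ git checkout -b fix-some-package
$ git commit -a -m "somepackage: fix the build"
$ git push origin fix-some-package

The pull request is then opened from the github web interface.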

Top quality packages §

My opinion is surely biased here (much more than for the previous points) but I sincerely think the OpenBSD packages are of very good quality. Most of them work out of the box with correct default settings.

Packages that require specific instructions come with a "readme" file explaining what is needed, for example creating certain directories with particular permissions or how to update from a previous version.

Even if, for lack of contributors and time (in addition to some programs using too many linuxisms to be easy to port), not everything is available, most major free software programs are there and work very well.

I'll take the opportunity of this post to criticize a trend within the Open Source world.

  • programs distributed with flatpak / docker / snap work very well on Linux but are hostile to other systems. They often use Linux-specific features and their build methods are Linux-oriented. This makes porting these applications to other systems much harder.
  • programs using nodeJS: they sometimes require hundreds or even thousands of libs, and some are quite shaky. It's really complicated to get these programs working on OpenBSD. Some libs even go as far as embedding rust code or downloading a static binary from a remote server, with no way to build it if needed and without checking whether such a binary is already available in $PATH. You find incredible aberrations in there.
  • programs requiring git to build: the build system of the OpenBSD ports tree does its best to work cleanly. The user dedicated to building packages has no internet access at all (blocked by the firewall with a default rule) and cannot run a git command to fetch code. There is no reason for a build to need to download code in the middle of the compilation step!

Obviously I understand that these three points exist because they make developers' lives easier, but if you write a program and publish it, it would be very nice to think about non-linux systems. Don't hesitate to ask on social networks whether someone wants to test your code on a system other than Linux. We love "BSD friendly" developers who accept our patches to improve OpenBSD support.

What I would like to see improve §

There are some areas where I would like to see OpenBSD improve. This list is personal and doesn't reflect the opinion of the OpenBSD project members.

  • Better ARM support
  • Wifi throughput
  • Better performance (but it improves a bit with every release)
  • FFS improvements (after crashes I sometimes find files in lost+found)
  • A faster pkg_add -u
  • Hardware video decoding support
  • Better FUSE support with the ability to mount CIFS/samba shares
  • More contributors

I am aware of all the work required here, and it's certainly not me who is going to do anything about it. I would like it to improve without complaining about the current situation :)

Unfortunately, everyone knows that OpenBSD evolves through hard work and not by sending a wishlist to the developers :)

When you think about what a small team (about 150 developers involved in the latest releases) manages to achieve compared to other major systems, I think we are quite efficient!

[FR] How my blog is published on several media

Written by Solène, on 03 January 2021.
Tags: #life #blog #francais

Comments on Fediverse/Mastodon

People often ask me how I publish my blog, how I write my texts and how they are published on three different media. This article is the opportunity for me to answer these questions.

For my publications I use the static site generator "cl-yag" that I developed. Its main job is to generate the home and per-tag index files for each publication medium: HTML for http, gophermap for gopher and gemtext for gemini. After generating the indexes, for every article published in HTML a converter is called to transform the source file into HTML so it can be read with a web browser. For gemini and gopher, the source article is simply copied, with a few metadata added at the top of the file such as the title, date, author and keywords.

Publishing to these three formats at the same time from a single source file is a challenge that unfortunately requires sacrifices on the rendering, unless you want to write three versions of the same text. For gopher, I chose to distribute the texts as-is, as plain text files; the content may be markdown, org-mode, mandoc or something else, but gopher gives no way to tell. For gemini, the texts are distributed as .gmi files matching the gemtext type, even though older publications are markdown content. For http, it is simply HTML obtained via a command depending on the type of the input data.

I recently decided to use the gemtext format by default rather than markdown to write my articles. It certainly has fewer possibilities than markdown, but its rendering contains no ambiguity, whereas the rendering of markdown can vary depending on the implementation and the flavour of markdown (tables or no tables? which syntax for images? etc...)

When the site generator runs, all the indexes are regenerated; for published files, the modification time is compared to the source file, and if the source is newer the published file is generated again because there was a change. This saves an enormous amount of time since my site is getting close to 200 articles: copying 200 files for gopher, 200 for gemini and running 200 conversion programs for HTML would make generation extremely long.

After generating all the files, the rsync command is used to push the output directories for each protocol to the corresponding server. I use one server for http, two servers for gopher (the main one wasn't especially stable at the time) and one server for gemini.

I added an announcement system for Mastodon calling the local program "toot" configured with a dedicated account. These changes were not merged into cl-yag because they are very specific to my personal use. This kind of modification makes me think that a static site generator can be a very personal tool that you really configure for a hyper-specific need, and that it can be hard for someone else to use. I decided to publish it back then; I don't know if anyone actively uses it, but at least the code is there for the most adventurous who would like to take a look.

My blog generator can support mixing different source file types to be converted into HTML. This lets me use whatever formatting type I want without having to redo everything.

Here are a few of the commands used to convert the input files (the raw articles as I write them) into HTML. You can see that the org-mode to HTML conversion isn't the simplest. The cl-yag configuration file is LISP code loaded at runtime; I can put comments in it but also code if I wish, which sometimes turns out to be handy.

(converter :name :gemini    :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown  :extension ".md"  :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md"  :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd       :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc    :extension ".man"
           :command "cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

When I declare a new article in the configuration file holding the metadata of all publications, I can choose which HTML converter to use if it's not the default one.

;; using the default converter
(post :title "Minimalistic markdown subset to html converter using awk"
      :id "minimal-markdown" :tag "unix awk" :date "20190826")

;; using the mmd converter, a very simple awk script I made to convert a few markdown features to html
(post :title "Life with an offline laptop"
      :id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)

A few statistics about the syntax of my various publications; over http you only see the HTML, but over gopher or gemini you see the source as-is.

  • markdown :: 183
  • gemini :: 12
  • mandoc :: 4
  • mmd :: 2
  • org-mode :: 1

My blog workflow

Written by Solène, on 03 January 2021.
Tags: #life #blog

Comments on Fediverse/Mastodon

I often get questions about how I write my articles, which format I use and how I publish on various media. This article is the opportunity to highlight the whole process.

So, I use my own static generator cl-yag which generates indexes for the whole article list but also for every tag, in html, gophermap format and gemini gemtext. After the generation of indexes, for html every article is converted into html by running a "converter" command. For gopher and gemini the original text is picked up, some metadata are added at the top of the file and that's all.

Publishing to all three formats is complicated and sacrifices must be made if I want to avoid extra work (like writing one version for each). For gopher, I chose to distribute articles as simple text files, but the content can be markdown, org-mode, mandoc or other formats, you can't know. For gemini, the gemtext format is distributed, and for http it is html.

Recently, I decided to switch to the gemtext format instead of markdown as the main format for writing new texts. It has a bit fewer features than markdown, but markdown has so many implementations that the result can differ greatly from one renderer to another.

When I run the generator, all the indexes are regenerated, and each destination file's modification time is compared to the original file's modification time: if the destination file (the gopher/html/gemini file that is published) is newer than the original file, there is no need to rewrite it, which saves a lot of time. After generation, the Makefile running the program will then run rsync to various servers to publish the new directories. One server has gopher and html, another server only gemini and another server has only gopher as a backup.
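
A hedged sketch of the commands such a Makefile could run (hostnames and directory names are hypothetical):

rsync -a --delete output-html/ webserver:/var/www/htdocs/blog/
rsync -a --delete output-gemini/ geminiserver:/var/gemini/
rsync -a --delete output-gopher/ gopherserver:/var/gopher/
rsync -a --delete output-gopher/ backupserver:/var/gopher/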

I added a Mastodon announcement calling a local script to publish links to new publications on Mastodon; this wasn't merged into the cl-yag git repository because it's too custom and depends on local programs. I think a blog generator is as personal as the blog itself. I decided to publish its code at first, but I am not sure it makes much sense because nobody may have the same mindset as mine to appropriate this tool; at least it's available if someone wants to use it.

My blog software supports mixing input formats so I am not tied to a specific format for its whole life.

Here are the various commands used to convert a file from its original format to html. One can see that converting from org-mode to html on the command line isn't an easy task. As my blog software is written in Common LISP, the configuration file is also a valid Common Lisp file, so I can write some code in it if required.

(converter :name :gemini    :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown  :extension ".md"  :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md"  :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd       :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc    :extension ".man"
           :command "cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

When I define a new article to generate from a main file holding the metadata, I can specify the converter if it's not the default one configured.

;; using default converter
(post :title "Minimalistic markdown subset to html converter using awk"
      :id "minimal-markdown" :tag "unix awk" :date "20190826")

;; using mmd converter, a simple markdown to html converter written in awk
(post :title "Life with an offline laptop"
      :id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)

Some statistics about the various formats used in my blog.

  • markdown :: 183
  • gemini :: 12
  • mandoc :: 4
  • mmd :: 2
  • org-mode :: 1

Port of the week: Lagrange

Written by Solène, on 02 January 2021.
Tags: #portoftheweek #gemini

Comments on Fediverse/Mastodon

Today's Port of the Week is about Lagrange, a Gemini browser.

Lagrange official website

Information about the Gemini protocol

Curated list of Gemini clients

Lagrange is the finest browser I have ever used and it's still brand new. I imported it into OpenBSD, so it will be available starting from the OpenBSD 6.9 release.

Screenshot of the web browser in action with dark mode, it supports left and right side panels.

Lagrange is fantastic in the way it helps the user with the content browsed.

  • Links already visited display the last visited date
  • Subscription to pages without RSS is possible for pages respecting a specific format (most of the gemini space does)
  • Easy management of client certificates, used for authentication
  • In-page image loading, video watching and sound playing
  • Gopher support
  • Table of contents generated from the page headings
  • Keyboard navigation
  • Very light (dependencies, memory footprint, cpu usage)
  • Smooth scrolling
  • Dark and light modes
  • Much more

If you are interested in Gemini, I highly recommend this piece of software as a browser.

In case you would like to host your own Gemini content without needing infrastructure, some community servers offer hosting through secure sftp transfers.

Si3t.ch community Gemini hosting

Un bon café !

Once you get into Gemini space, I recommend the following resources:

CAPCOM feed aggregator, a great place to meet new authors

GUS: a search engine

Vger gemini server can now redirect

Written by Solène, on 02 January 2021.
Tags: #gemini

Comments on Fediverse/Mastodon

I added a new feature to Vger gemini server.

Vger git repository

The protocol supports status codes including redirections, but Vger had no way to know if a user wanted to redirect a page to another. A redirection literally means "You asked for this content but it is now at that place, load it from there".

To keep it the vger Unix way, a redirection is done using a symbolic link:

The following command would redirect requests from gemini://perso.pw/blog/index.gmi to gemini://perso.pw/capsule/index.gmi:

ln -s "gemini://perso.pw/capsule/index.gmi" blog/index.gmi

Unfortunately, this doesn't support globbing; in other words, it is not possible to redirect everything from `/blog/` to `/capsule/` without creating a symlink for each previous resource pointing to its new location.
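
If many files moved, a small loop could create all the redirect symlinks in one go; this is only a hedged sketch assuming the files now live under /var/gemini/capsule/:

cd /var/gemini
for f in capsule/*.gmi
do
    ln -s "gemini://perso.pw/capsule/$(basename "$f")" "blog/$(basename "$f")"
done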

Host your Cryptpad web office suite with OpenBSD

Written by Solène, on 14 December 2020.
Tags: #web #openbsd

Comments on Fediverse/Mastodon

In this article I will explain how to deploy your own cryptpad instance with OpenBSD.

Cryptpad official website

Cryptpad is a web office suite featuring easy real time collaboration on documents. Cryptpad is written in JavaScript and the daemon acts as a web server.

Pre-requisites §

You need to install the packages git, node, automake and autoconf to be able to fetch the sources and run the program.

# pkg_add node git autoconf--%2.69 automake--%1.16

Another web front-end will be required to allow TLS connections and secure network access to the Cryptpad instance. This can be relayd, haproxy, nginx or lighttpd. I'll cover the setup using httpd and relayd. Note that the Cryptpad developers will provide support only to Nginx users.

Installation §

I really recommend using dedicated users for daemons. We will create a new user with the command:

# useradd -m _cryptpad

Then we will continue the software installation as the `_cryptpad` user.

# su -l _cryptpad

We will mainly follow the official instructions with some exceptions to adapt to OpenBSD:

Official installation guide

$ git clone https://github.com/xwiki-labs/cryptpad
$ cd cryptpad
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install bower
$ node_modules/.bin/bower install
$ cp config/config.example.js config/config.js

Configuration §

There are a few important variables to customize (a sketch follows this list):

  • "httpUnsafeOrigin" should be set to the public address on which cryptpad will be available. This will certainly be a HTTPS link with an hostname. I will use https://cryptpad.kongroo.eu
  • "httpSafeOrigin" should be set to a public address which is different than the previous one. Cryptpad requires two different addresses to work. I will use https://api.cryptpad.kongroo.eu
  • "adminEmail" must be set to a valid email used by the admin (certainly you)

Make a rc file to start the service §

We need to automatically start the service properly with the system.

Create the file /etc/rc.d/cryptpad

#!/bin/ksh

daemon="/usr/local/bin/node"
daemon_flags="server"
daemon_user="_cryptpad"
location="/home/_cryptpad/cryptpad"

. /etc/rc.d/rc.subr

rc_start() {
	${rcexec} "cd ${location}; ${daemon} ${daemon_flags}"
}

rc_bg=YES
rc_cmd $1

Enable the service and start it with rcctl

# rcctl enable cryptpad
# rcctl start cryptpad

Operating §

Make an admin account §

Register yourself on your Cryptpad instance, then visit the *Settings* page of your profile and copy your public signing key.

Edit the Cryptpad file config.js and search for the pattern "adminKeys", uncomment it by removing the "/* */" around it, delete the example key and paste your key as follows:

adminKeys: [
    "[solene@cryptpad.kongroo.eu/YzfbEYwZq6Xhl7ET6AHD01w3QqOE7STYgGglgSTgWfk=]",
],

Restart Cryptpad; the user is now admin and has access to a new administration panel in the web application.

Backups §

In the cryptpad directory, you need to back up the `data` and `datastore` directories.
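
A minimal sketch of such a backup (the destination path is only an example):

$ cd /home/_cryptpad/cryptpad
$ tar czf /home/_cryptpad/cryptpad-backup-$(date +%Y%m%d).tgz data datastore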

Extra configuration §

In this section I will explain how to generate your TLS certificate with acme-client and how to configure httpd and relayd to publish cryptpad. I consider it separate from the main article because if you already run nginx and have a setup to generate certificates, you don't need it. If you start from scratch, it's the easiest way to get the job done.

Acme client man page

Httpd man page

Relayd man page

From here, I assume you use OpenBSD and you have blank configuration files.

I'll use the domain **kongroo.eu** as an example.

httpd §

We will use httpd in a very simple way: it will only listen on port 80 for all domains, to let acme-client work and to automatically redirect http requests to https.

# cp /etc/examples/httpd.conf /etc/httpd.conf
# rcctl enable httpd
# rcctl start httpd
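
For reference, the part of the example configuration doing that job looks roughly like this (a hedged excerpt, your server name will differ):

server "example.com" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location * {
                block return 302 "https://$HTTP_HOST$REQUEST_URI"
        }
}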

acme-client §

We will use the example file as a default:

# cp /etc/examples/acme-client.conf /etc/acme-client.conf

Edit `/etc/acme-client.conf` and change the last domain block, replace `example.com` and `secure.example.com` with your domains, like `cryptpad.kongroo.eu` and `api.cryptpad.kongroo.eu` as alternative name.

For convenience, you will want to replace the path for the full chain certificate to have `hostname.crt` instead of `hostname.fullchain.pem` to match relayd expectations.

Here is what that paragraph looks like in my setup:

domain kongroo.eu {
        alternative names { api.cryptpad.kongroo.eu cryptpad.kongroo.eu }
        domain key "/etc/ssl/private/kongroo.eu.key"
        domain full chain certificate "/etc/ssl/kongroo.eu.crt"
        sign with buypass
}

Note that with the default acme-client.conf file, you can use *letsencrypt* or *buypass* as a certification authority.

acme-client.conf man page

You should be able to create your certificates now.

# acme-client kongroo.eu

Done!

You will want the certificate to be renewed automatically and relayd to be restarted when the certificate changes. As stated in the acme-client.conf man page, add this to your root crontab using `crontab -e`:

~ * * * * acme-client kongroo.eu && rcctl reload relayd

relayd §

This configuration is quite easy, replace `kongroo.eu` with your domain.

Create a /etc/relayd.conf file with the following content:

relayd.conf man page

tcp protocol "https" {
        tls keypair kongroo.eu
}

relay "https" {
        listen on egress port 443 tls
        protocol https
        forward to 127.0.0.1 port 3000
}

Enable and start relayd using rcctl:

# rcctl enable relayd
# rcctl start relayd

Conclusion §

You should be able to reach your Cryptpad instance using the public URL now. Congratulations!

Kakoune editor cheatsheet

Written by Solène, on 02 December 2020.
Tags: #kakoune #editor #cheatsheet

Comments on Fediverse/Mastodon

This is a simple kakoune cheat sheet to help me (and readers) remember some very useful features.

To see kakoune in action.

Video showing various features, made with asciinema.

Official kakoune website (it has a video)

Commands (in command mode) §

Select from START to END position. §

Use `Z` to mark the start and `alt+z i` to select until the current position.

Add a vertical cursor (useful to mimic rectangle operation) §

Type `C` to add a new cursor below your current cursor.

Clear all cursors §

Type `space` to remove all cursors except one.

Pasting text verbatim (without completion/indentation) §

You have to use "disable hook" command before inserting text. This is done with `\i` with `\` disabling hooks.

Split selection into cursors §

When you make a selection, you can use `s` and type a pattern, this will create a new cursor at the start of every pattern match.

This is useful to make replacements for words or characters.

A pattern can be a word, a letter, or even `^` to match the beginning of each line.

How-to §

In kakoune there are often multiple ways to do operations.

Select multiple lines §

Multiple cursors §

Go to the first line, press `J` to create cursors below and press `X` to select the whole line of every cursor.

Using start / end markers §

Press `Z` on the first line, `alt+z i` on the last line, then press `X` to select the whole lines.

Using selections §

Press `X` until you reach the last line.

Replace characters or words §

Make a selection and type `|`, you are then asked for a shell command, for example `sed`.

Sed can be used, but you can also select the lines and split the selection to create a new cursor before each word, then replace the content by typing it, using the `s` command.

Format lines §

For my blog I format paragraphs so lines are not longer than 80 characters. This can be done by selecting lines and running `fmt` through a pipe command. You can use other software if fmt doesn't please you.
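
For example, after selecting the paragraph lines with `X`, the pipe prompt can be used like this (a hedged sketch; the width argument of fmt may differ between systems):

|fmt 80<ret>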

How to deploy Vger gemini server on OpenBSD

Written by Solène, on 30 November 2020.
Tags: #gemini #openbsd

Comments on Fediverse/Mastodon

Introduction §

In this article I will explain how to install and configure Vger, a gemini server.

What is the gemini protocol

A short introduction about Gemini: it's a very recent protocol that aims to be simplistic and limited. Key features are: pages are written in a markdown-like format, TLS is mandatory, there are no headers, and the encoding is UTF-8 only.

Vger program §

Vger source code

I wrote Vger to discover the protocol and the Gemini space. I had a lot of fun with it; it was the opportunity for me to rediscover the C language with a better approach. The sources include a full test suite. This test suite was invaluable for the development process.

Vger was really built with security in mind from the first lines of code, now it offers the following features:

  • chroot and privilege dropping, and on OpenBSD it uses unveil/pledge all the time
  • virtualhost support
  • language selection
  • MIME detection
  • handcrafted man page, OpenBSD quality!

The name Vger is a reference to the 1979 first Star Trek movie.

Star Trek: The Motion Picture

Install Vger §

Compile vger.c using clang or gcc

$ make
# install -o root -g bin -m 755 vger /usr/local/bin/vger

Vger receives requests on stdin and gives the result on stdout. It doesn't take the hostname into account, but a request MUST start with `gemini://`.

vger official homepage

Setup on OpenBSD §

Create the directory /var/gemini/, files will be served from there.

Create the `_gemini` user:

useradd -s /sbin/nologin _gemini

Configure vger in /etc/inetd.conf

11965 stream tcp nowait _gemini /usr/local/bin/vger vger

Inetd will run vger with the _gemini user. You need to take care that /var/gemini/ is readable by this user.
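
One way to take care of that, as a hedged sketch, is to give the directory to the dedicated user:

# chown -R _gemini /var/gemini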

inetd is a wonderful daemon listening on ports and running commands upon connections. This means that when someone connects on port 11965, inetd will run vger as _gemini and pass the network data to its standard input; vger will send the result to the standard output, captured by inetd, which transmits it back to the TCP client.

Tell relayd to forward connections in relayd.conf

log connection
relay "gemini" {
    listen on 163.172.223.238 port 1965 tls
    forward to 127.0.0.1 port 11965
}

Make links to the certificates and key files according to the relayd.conf documentation. You can use acme-client / certbot / dehydrated or any "Let's Encrypt" client to get certificates. You can also generate your own certificates but that's beyond the scope of this article.

# ln -s /etc/ssl/acme/cert.pem /etc/ssl/163.172.223.238\:1965.crt
# ln -s /etc/ssl/acme/private/privkey.pem /etc/ssl/private/163.172.223.238\:1965.key

Enable inetd and relayd at boot and start them

# rcctl enable relayd inetd
# rcctl start relayd inetd

From here, what's left is populating /var/gemini/ with the files you want to publish; the `index.md` file is special because it will be the default file if no file is requested.
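
Before exposing it on the network, the plumbing can be checked by feeding vger a request on stdin by hand, the same way its test suite does (the hostname is ignored anyway):

$ printf "gemini://hostname/index.md\r\n" | vger -d /var/gemini/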

About Language Server Protocol and Kakoune text editor

Written by Solène, on 24 November 2020.
Tags: #kakoune #editor #openbsd

Comments on Fediverse/Mastodon

In this article I will explain how to install an LSP plugin for kakoune to add language specific features such as autocompletion, syntax error reporting, easier navigation to definitions and more.

The principle is to use the "Language Server Protocol" (LSP) to communicate between the editor and a daemon specific to a programming language. This can also be done with emacs, vim and neovim using the corresponding plugins.

Language Server Protocol on Wikipedia

For python, _pyls_ would be used while for C or C++ it would be _clangd_.

This how-to will use OpenBSD as a base. The package names may certainly vary for other systems.

Pre-requisites §

We need _kak-lsp_ which requires rust and cargo. We will need git too to fetch the sources, and obviously kakoune.

# pkg_add kakoune rust git

Building §

Official building steps documentation

I recommend using a dedicated build user when building programs from sources; without a real audit you can't know what exactly happens in the build process. Mistakes could be made and do nasty things to your data.

$ git clone https://github.com/kak-lsp/kak-lsp
$ cd kak-lsp
$ cargo install --locked --force --path .

Configuration §

There are a few steps: kak-lsp has its own configuration file, but the default one is good enough, and kakoune must be configured to run the kak-lsp program when needed.

Take care with the second command if you built as another user, you have to fix the path.

$ mkdir -p ~/.config/kak-lsp
$ cp kak-lsp.toml ~/.config/kak-lsp/

This configuration file tells which program must be used depending on the programming language required.

[language.python]
filetypes = ["python"]
roots = ["requirements.txt", "setup.py", ".git", ".hg"]
command = "pyls"
offset_encoding = "utf-8"

Looking at the configuration block for python, we can see the command used is _pyls_.

For kakoune configuration, we need a simple configuration in ~/.config/kak/kakrc

eval %sh{/usr/local/bin/kak-lsp --kakoune -s $kak_session}
hook global WinSetOption filetype=(rust|python|go|javascript|typescript|c|cpp) %{
        lsp-enable-window
}

Note that I used the full path of kak-lsp binary in the configuration file, this is due to a rust issue on OpenBSD.

Link to Rust issue on github

Trying with python §

To support python programs you need to install python-language-server, which is available through pip. There is no package for it on OpenBSD. If you install the program with pip, take care to have the binary in your $PATH (either by extending $PATH with ~/.local/bin/, by copying the binary to /usr/local/bin/, or whatever suits you).

The pip command would be the following (your pip binary name may change):

$ pip3.8 install --user 'python-language-server[all]'

Then, opening a python source file should activate the analyzer automatically. If you make a mistake, you should see `!` or `*` in the leftmost column.

Trying with C §

To support C programs, the clangd binary is required. On OpenBSD it is provided by the clang-tools-extra package. If clangd is in your $PATH then you should have working support.
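
On OpenBSD that means installing the package mentioned above:

# pkg_add clang-tools-extra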

Using kak-lsp §

Now that it is installed and working, you may want to read the documentation.

kak-lsp usage

I didn't look deep for now; autocompletion happens automatically but may be slow in some situations.

Default keybindings for "gr" and "gd" are made respectively for "jump to reference" and "jump to definition".

Typing "diag" in the command prompt runs "lsp-diagnostics" which will open a new buffer explaining where errors are warnings are located in your source file. This is very useful to fix errors before compiling or running the program.

Debugging §

The official documentation explains well how to check what is wrong with the setup. It consists of starting kak-lsp in a terminal and kakoune separately, and checking kak-lsp's output. This helped me a lot.

Official troubleshooting guide

[7th floor] Nethack story of Sery the tourist

Written by Solène, on 24 November 2020.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

Sery is back on the fourth floor of the underworld. What mysteries are to be discovered? What enemies will be slain so we can make our path?

Everything is awesome

Sery is in the fourth floor, she found stairs to go deeper but she also heard coins flipping. Maybe a merchant is around? That would be the right opportunity to buy weapons, armor and food.

               --------------
               |............|
              #.@...........+
              #|............|
              #|..>...$.....|
              #--------------
              ###
                #
                ##
                 #
                 #
                 #
                 #
         -- -----#
              <  #
         |      |
         |      |
         --------

After walking to a new room south-east, she found a large room with a hobbit statue h and a potion on the floor. The potion is not identified, so using it will be very risky.

The large room was a dead end. Back to the previous room Sery was now surrounded by enemies. A gas spore e, a green mold F and a giant bug :! She also felt hungry at the time, but she had to fight. Eggs and pancakes will be for another time.

           --------------
           |.F..........|
          #.:.....@..e..-#
          #|............|#
          #|..>...d.....|#
          #--------------#
          ###            #

While fleeing toward the ascending stairs to look for a merchant on this floor, a gecko was blocking the way. Sery had to fight with her fists and fortunately the gecko didn't put up much resistance. But a few steps later, a goblin was also in the path. Sery's dog's location is unknown, it was certainly fighting in the previous room. Sery decided to drink a potion to recover from her 2 HP left and go back to the room, in the hope the dog could help her.

It worked! The dog was just behind and charged the goblin, which died instantly. The dog was starving and ate the freshly killed goblin; Sery was hungry too but preferred eating some pancake that wasn't fresh, it tasted better than the goblin meat tin can left in her purse.

                               --------------
                               |            |
                              #.............-#
                              #|            |#
      ---------------         #|  >         |#
      .........o....|         #--------------#
      |.............|         ###            #
      |.......$....@d##         #            #
      --------------- ###       ##           #
                        #        #           #
                        #        #   `##################
                        #        #           #--------- --
                        #        #           #|         h|
                        #-- -----#           #|          |
                        #     <  #           #           |
                         |      |             |          |
                         |      |             |          |
                         --------             ------------

On the first steps in the room, she found a graffiti on the ground:

Atta?king a? ec| vhere the?c is rone i? usually a ?a?al mistakc!

The message didn’t make any sense. The room had a goblin statue and some gold on the ground, it’s all Sery had to know. The room was calm and nothing happened when crossing it. Sery seemed to be blessed!

        -----
        |....##  
        |@..| ###
        -----   #

Nearby she found a very small room with no other way than the entrance. This looked very suspicious and she decided to spend some time looking around for a clue about a secret door. She was right! A few minutes after she started to search, she found a hidden door! The door was not locked, which was surprising. Who knows what was waiting on the other side?

After walking a bit in a small and dark corridor, a new room was here, with an empty box along a wall and a grave in a corner in the opposite side of the room.

             -----
             |    ##                           --------------
            #-   | ###                         |            |
            #-----   #                        #             -#
            ##       #                        #|            |#
             ##      #---------------         #|  >         |#
              ##     #         o    |         #--------------#
      ---------#      |             |         ###            #
      |.......|#      |              ##         #            #
      |........#      --------------- ###       ##           #
      |.......|                         #        #           #
      |(@......                         #        #   `##################
      |......||                         #        #           #--------- --
      ---------                         #        #           #|         h|
                                        #-- -----#           #|          |
                                        #     <  #           #           |
                                         |      |             |          |
                                         |      |             |          |
                                         --------             ------------

The large box was locked! Without a lock pick she wasn’t able to open it. After all she went through in the dungeon, anger gave her the strength to break the box padlock after a few kicks.

The box contained the following objects:

  • a pyramidal amulet
  • a food ration
  • a black gem
  • two green gems

She still had some room in her bag and it wasn’t too heavy for now, so she decided to take everything from the box.

Kicking the box consumed energy and she decided to rest a little and eat something. The food ration from the box looked very tasty but it might have been poisoned or toxic, so she avoided it and ate the goblin meat from the tin can. It wasn’t good, but it did the job.

She looked at the grave, it was old and only had a few words engraved on it:

Yes Dear, just a few more minutes…

A corridor in the room led to a dead end. There was nothing. Even after searching for a long time, Sery didn’t find any passage there, so she decided to go back and descend to the next floor.

On the way back, she had to fight monsters: a newt, a sewer rat, a gas spore! After the fights, hunger was back again! It was time for a good meal: goblin meat and the food ration. It hit the spot and Sery felt a lot better.

Fifth floor

On the fifth floor, a potion ! was lying on the ground. There was some light, it wasn’t completely dark; without a lamp or a torch this would have been a real problem.

    ---------
    |.......+
    |.......|
    |@......|
    |..d.!..|
    |........
    ------- -

In a corridor leading to a room in the south, she had to kill a coyote on the way. The room had a teleportation trap and an apple %, food!

Going east, she walked through a long corridor until a dead end. After searching for some time she found a way to squeeze her body through a hole and get to the other side. A boulder was in the tunnel but she was able to push it; fortunately the boulder rolled fine.

    ---------
    |       +
    |       |
    |<      |
    |       |
    |        
    ------- -
           #
           #
           ##
            #
            ##
             #
             #      #           #                    ##
          --- ------#           #             #      @
          |         #################################`
          |    ^   |
          ----------

Sery found a new room with two potions and a gnome. It was hard for Sery to know if the gnome was hostile.

                -.--|--
                +..!G.|
       #        |...!.|
        ########d@....|
        #       |.....|
    ####`       -------

The dog got triggered by the gnome’s presence and ran to fight it. The gnome was definitely hostile. Sery quickly ended up in hand-to-hand combat with the gnome.

The camera’s flash! She thought it should work; after all, the camera still had forty-seven pictures to take, or enemies to blind.

It worked, the poor creature got blinded, the dog was biting its back. After a few hits, the gnome died, leaving a bow on the ground.

Continuing her way, Sery found the room with the descending stairs. A homunculus i and a sewer rat r were waiting there. She knew the rat was an easy target but the other enemy was unknown. It didn’t appear friendly and she doubted she could kill it without risking her life.

    ---------
    |       +                                               -------------
    |       |                                               |...........|
    |<      |                                               -....>!.....|
    |       |                                               |...........|
    |                                                       ....i....r..|
    ------- -                                               -- -------@--
           #                                                         ##
           #                                                       ###
           ##                                                    ###
            #                                                - --)--
            ##                                               +     |
             #                                      #        |  )  |
             #      #           #                    ########      |
          --- ------#           #             #      #       |     |
          |         #################################`       -------
          |    ^   |
          ----------

Sery decided to go back to the long corridor which had crossing paths.

    ---------
    |       +                                               -------------
    |       |                                               |           |
    |<      |                                               -    >!     |
    |       |                                               |           |
    |                                                                   |
    ------- -                                               -- ------- --
           #                                                         ##
           #                                                       ###
           ##                                                    ###
            #                                                -.--|--
            ##                                      #########i@....|
             #                                #######        |..)..|
             #      #           #             #      ########......|
          --- ------#           #             #      #       |)....|
          |         #################################`       -------
          |    ^   |
          ----------

The homunculus was fast! It caught up with Sery back where they had met. Sery was in trouble. The homunculus seemed hard to escape and, while she was fleeing through a corridor, a dwarf zombie Z blocked the way.

She tried to fight it but lost 9 HP in 2 hits, the beast was very powerful. It was time to drink the random potions she had gathered over the journey. They were unidentified but there was no choice, except praying maybe.

Praying! Sery wasn’t a believer but praying was the best she could do. Her prayer was deep and pure, she only wanted some hope for her future and her quest.

The Lady heard her prayer and Sery got surrounded by a shimmering light. The dwarf zombie attacked Sery but got pushed back by some energy field. Sery felt a lot better, her health was fully recovered and even increased.

                #########-.....|
          #######        |..)..|
          #      #Z@#####......|
          #      #       |)....|
        #########`       -------

Sery got a second chance and she certainly wanted to make good use of it. At that moment, the only thought in her mind was: RUN AWAY

She did run, very fast, to the stairs leading deeper. No enemies caused trouble during her retreat.

Sixth floor

No time to look around the room she arrived in: Sery got attacked by a brown mold, which was in turn killed by her dog.

    ------
    |....|
    |....|
    |.d@.|
    |....|
    |....|
    |....|
    --.---

The room had only one exit, to the south. Finding a merchant was becoming urgent. Her food supplies were depleting. She had a lot of money but that was not helpful in the middle of the underground among the monsters.

In the south room there was a lichen F, but it seemed peaceful, or maybe it was guarding the stairs descending to the seventh floor, who knows? The room had no other entrance than the one by which Sery came, but after examining the walls, she found a door.

     ------
     |    |
     |    |
     |  < |
     |    |
     |    |
     |    |
     -- ---
       ####
          #
          #
          ##
      ----- -      -----
      |     |     |....|
      |.F...-#####@....|
      |>    |     |....|
      -------      .!...
                   -----

Nothing unusual on this floor. Continuing her progress through the tunnels, she ended up in a dark room where she wasn’t able to see further than a meter away.

     ------
     |    |              -------------
     |    |             |          .d|
     |  < |            #-          .@|
     |    |            #----       -.-
     |    |            #
     |    |            ##
     -- ---             #
       ####             #
          #             #
          #             #
          ##            #
      ----- -     ------#
      |     |     |    |#
      |     -#####     |#
      |>    |     |    |#
      -------     |     #
                  ------

One more step and she came face to face with a homunculus. Fortunately the dog was just behind and not busy fighting any other aggressive animals. The dog killed it fast. But then another homunculus came, which also got killed by the dog.

In the end, those homunculi are pretty weak.

Room after room, with only emptiness as a friend, Sery walked for a long time. And then he appeared! The merchant!

     ------
     |    |              -------------                                      ------
     |    |             |            |                                      |????|
     |  < |            #-            |                                      |????|
     |    |            #----       - -                                      |???+|
     |    |            #            ##                                      |??+?|
     |    |            ##            #                                      |+??+|
     -- ---             #            #                                      |.@.
       ####             #        ---- -#                                    -@-
          #             #        |    -#                                     #
          #             #        |    |      |            -- ------        ###
          ##            #        |    -######|                    |        #
      ----- -     ------#        |    |     #|            |                #
      |     |     |    |#        |  <      ##         #### `      |        #
      |     -#####     |#        ------    ######     #   |        ###### - ----
      |>    |     |    |#                       #######   |     _ |     # |    |
      -------     |     #                                 |       |     ##     |
                  ------                                  ---------       ------

He was a bookseller, selling scrolls… Sery was so disappointed by this, she felt helpless for a moment.

FuguITA: OpenBSD live-cd

Written by Solène, on 18 November 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

In this article I will explain how to download and run the FuguITA OpenBSD live-cd. It is not an official OpenBSD project (it is not endorsed by the OpenBSD project), but it has been available for a long time and is carefully updated for every release and published errata.

FuguITA official homepage

I do like this project and I am running their European mirror; downloading it from Europe used to take a really long time.

Please note that if you have issues with FuguITA, you must report it to the FuguITA team and not report it to the OpenBSD project.

Preparing §

Download the img or iso file on a mirror.

Mirror list from official project page

The file is gzipped, so run gunzip on the img file FuguIta-6.8-amd64-202010251.img.gz (the name changes over time because the images get updated to include new errata).
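
For example, with the file name current at the time of writing:

$ gunzip FuguIta-6.8-amd64-202010251.img.gz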

Then, copy the file to your usb memory stick. This can be dangerous if you don't write the file to the correct disk!

To avoid mistakes, I plug in the memory stick when I need it, then I check the last lines of the dmesg command output, which look like:

sd1 at scsibus2 targ 1 lun 0: <Corsair, Voyager 3.0, 1.00> removable serial.1b1c1a03800000000060
sd1: 15280MB, 512 bytes/sector, 31293440 sectors

This tells me my memory stick is the sd1 device.

Now I can copy the image to the memory stick:

# dd if=FuguIta-6.8-amd64-202010251.img of=/dev/rsd1c bs=10M

Note that I use /dev/rsd1c for the sd1 device. I've added an r to use the raw mode (as opposed to buffered mode) so it gets faster, and the c stands for the whole disk (there is a historical explanation).

Starting the system §

Boot on your usb memory stick. You will be prompted for a kernel; you can wait or type enter, the default is to use the multiprocessor kernel and there is no reason to use anything else.

You will see a prompt "scanning partitions: sd0i sd1a sd1d sd1i" and be asked which device is the FuguIta operating device, with a proposed default that should be the correct one.

FROM HERE, YOUR KEYBOARD IS IN QWERTY.

Just type enter.

The second question will be the memory disk allowed size (using TMPFS), just press enter for "automatic".

Then, a boot mode will be shown: mode 0 is the best for a livecd experience.

Official documentation regarding FuguITA's specific options

The keyboard type will be asked, just type the layout you want. Then answer the questions:

  • root password
  • hostname (you can just press enter)
  • IP to use (v4, v6, both [default])

When prompted for your network interfaces, WIFI may not work because the livecd doesn't have any firmware.

Finally, you will be prompted for C for console or X for xenodm. THERE ARE NO USERS except root, so if you start X you can only use root as a user, which I STRONGLY discourage.

You can log in on the console as root, use the two commands "useradd -m username" and "passwd username" to give a password to that user, and then start xenodm.
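
For example, a minimal sketch assuming a hypothetical user named jdoe:

# useradd -m jdoe
# passwd jdoe
# xenodm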

The livecd can restore data from a local hard drive, this is explained in the start guide of the FuguITA project.

Conclusion §

Having FuguITA around is very handy. You can use it to check your hardware compatibility with OpenBSD without installing it. Packages can be installed so it's perfect to check how OpenBSD performs for you and if you really want to install it on your computer.

You can also use it as a usb live system to carry OpenBSD anywhere (the hardware must be compatible) by using the persistent mode, with encryption as a feature! This may be very useful for people traveling a lot who don't necessarily want to travel with an OpenBSD laptop.

As I said in the introduction, the team is doing a very good job at producing FuguITA releases shortly after the OpenBSD release, and they continuously update every release with new errata.

Why I use OpenBSD

Written by Solène, on 16 November 2020.
Tags: #openbsd #life

Comments on Fediverse/Mastodon

Introduction §

In this article I will share my opinion about things I like in OpenBSD; this may include a short rant about recent open source practices that don't help non-Linux support.

Features §

Privacy §

There is no telemetry on OpenBSD. It's good for privacy, there is nothing to turn off to disable reporting information because there is no need to.

The default system settings prevent the microphone from recording sound, and the webcam can't be accessed without user consent because the device belongs to root by default.

Secure firefox / chromium §

While the security features (pledge and mainly unveil) added to the market-dominating web browsers can be cumbersome sometimes, this is really a game changer compared to using them on other operating systems.

With those security features enabled (by default), the web browsers are only able to access files in a few user-defined directories like ~/Downloads or /tmp/ by default, plus some other directories required for the browsers to work.

This means your ~/.ssh or ~/Documents and everything else can't be read by an exploit in a web browser or a malicious extension.

It's possible to replicate this on Linux using AppArmor, but it's absolutely not out of the box and requires a lot of tweaks from the user to get a usable Firefox. I did try, it worked, but it requires a very good understanding of Firefox's needs and the AppArmor profile syntax to get it to work.

PF firewall §

With this firewall, I can quickly check the rules of my desktop or server and understand what they are doing.

I also make heavy use of the bandwidth management feature to throttle programs that don't provide any rate limiting of their own. This is very important to me.

Linux users could use software such as trickle or wondershaper for this.
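
For instance, here is a minimal pf.conf sketch (not my actual ruleset), assuming an em0 interface and throttling rsync traffic by matching its destination port:

# cap outgoing rsync traffic at 1M, everything else shares the rest
queue main on em0 bandwidth 100M
queue limited parent main bandwidth 1M max 1M
queue std parent main bandwidth 99M default
match out on em0 proto tcp to port 873 set queue limited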

It's stable §

Apart from the use of some funky hardware, OpenBSD has proven to be very stable and reliable for me. I can easily reach two weeks of uptime on my desktop with a few suspend/resume cycles every day. My servers have been running 24/7 without incident for years.

I rarely go further than two weeks on my workstation because I use the development version -current and I need to upgrade once in a while.

Low maintenance §

Keeping my OpenBSD up-to-date is very easy. I run syspatch and pkg_add -u twice a day to keep the system up to date. A release every six months requires a bit of work.

Basically, upgrading every six months looks like this, apart from some specific instructions explained in the upgrade guide (a database server major upgrade for example):

# sysupgrade
[..wait..]
# pkg_add -u
# reboot

Documentation is accurate §

Setting up an OpenBSD system with full disk encryption is easy.

Documentation to create a router with NAT is explained step by step.

Every binary or configuration file has its own up-to-date man page.

The FAQ, the website and the man pages should contain everything one needs. This represents a lot of information, it may not be easy to find what you need, but it's there.

If I had to be without internet for some time, I would prefer an OpenBSD system. The embedded documentation (man pages) should help me achieve what I want.

Consider configuring a router with traffic shaping on OpenBSD and another one with Linux, without Internet access. I'd 100% prefer to read the PF man page.

Contributing is easy §

This has been a hot topic recently. I really enjoy the way OpenBSD manages contributions. I download the sources on my system, anywhere I want, modify them, generate a diff and send it to the mailing list. All of this can be done from a console with tools I already use (git/cvs) and email.

There could be an entry barrier for new contributors: you may feel people replying are not kind to you. **This is not true.** If you sent a diff and received criticism (reviews) of your code, this means some people spent time to teach you how to improve your work. I do understand some people may find it rude, but it's not.

This year I modestly contributed to the OpenIndiana and NixOS projects, which was the opportunity to compare how contributions are handled. Both projects use GitHub. The workflow is interesting, but understanding and mastering it is extremely complicated.

OpenIndiana official website

NixOS official website

One has to create a GitHub account, fork the project, create a branch, make the changes for the contribution, commit locally, push to the fork, and use the GitHub interface to open a pull request. This is only the short story. On NixOS, my first attempt ended in a pull request involving 6 months of old commits. With good documentation and training, this could be overcome, and I think this method has some advantages like easy continuous integration of the commits and easy code review, but it's a real entry barrier for new people.

High quality packages §

My opinion may be biased on this (even more than for the previous items), but I really think OpenBSD package quality is very high. Most packages should work out of the box with sane defaults.

Packages requiring specific instructions have a README file installed with them explaining how to set up the service or the quirks that could happen.

Even if we lack some packages due to a lack of contributors and time (in addition to some packages relying too much on Linux to be easy to port), major packages are up to date and work very well.

I will take the opportunity of this article to publish a complaint about the general trends in Open Source.

  • programs distributed only using flatpak / docker / snap are really Linux friendly, but this is hostile to non-Linux systems. They often make use of Linux-only features and their build systems are made for the Linux distribution methods.
  • nodeJS programs: they are made out of hundreds or even thousands of libraries and are often fragile, even on Linux. This is a real pain to get them working on OpenBSD. Some node libraries embed rust programs, some will download a static binary and use it with no fallback solution, or will even try to compile source code instead of using that library/binary from the system when installed.
  • programs using git to build: our build process does its best to be clean, the dedicated build user **HAS NO NETWORK ACCESS** and won't run those git commands. There is no reason a build system has to run git to download sources in the middle of the build.

I do understand that the three items above exist because it is easy for developers. But if you write software and publish it, it would be very kind of you to think about how it works on non-Linux systems. Don't hesitate to ask on social media if someone is willing to build your software on a different platform than yours if you want to improve support. We do love BSD-friendly developers who won't reject OpenBSD-specific patches.

What I would like to see improved §

This is my own opinion and doesn't represent the opinions of the OpenBSD team members. There are some things I wish OpenBSD could improve.

  • Better ARM support
  • Better performance (gently improving every release)
  • FFS improvements in regards to reliability (I often get files in lost+found)
  • Faster pkg_add -u
  • hardware video decoding/encoding support
  • better FUSE support and mount cifs/smb support
  • scaling up the contributions (more contributors and reviewers for ports@)

I am aware of all the work required here, and I'm certainly not the person who will improve those. These are not complaints but wishes.

Unfortunately, everyone knows OpenBSD features come from hard work and not from wishes submitted to the developers :)

When you consider how small the team is in comparison to the other major OSes, I really think a good and efficient job is done there.

Toward an automated tracking of OpenBSD ports contributions

Written by Solène, on 15 November 2020.
Tags: #openbsd #automation

Comments on Fediverse/Mastodon

Since my previous article about a continuous integration service to track OpenBSD ports contributions, I made a simple proof of concept that allowed me to track what works and what doesn't.

The continuous integration goal §

A first step for the CI service would be to create a database of diffs sent to ports. This would allow people to track what has been sent and not yet committed, and what the state of each contribution is (builds/doesn't build, applies/doesn't apply). I would proceed following this logic:

  • a mail arrives and is sent to the pipeline
  • it's possible to find a pkgpath out of the file
  • the diff applies
  • distfiles can be fetched
  • portcheck is happy

Step 1 is easy, it could be mails dumped into a directory that gets scanned every X minutes.

Step 2 is already done in my POC using a shell script. It's quite hard and required tuning. Submitted diffs are made with diff(1), cvs diff or git diff. The important part is to retrieve the pkgpath like "lang/php/7.4". This allows testing that the port exists.
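
For reference, here is a naive sketch of such an extraction (not the actual POC script), assuming the diff paths are relative to /usr/ports and that the port's Makefile is part of the diff:

#!/bin/sh
# guess the pkgpath from the first Makefile touched by the diff,
# handling "Index:" lines (cvs) and "+++ b/" lines (git)
PKGPATH=$(grep -E '^(Index: |\+\+\+ )' "$1" |
          sed -e 's/^Index: //' -e 's/^+++ //' -e 's,^[ab]/,,' |
          grep '/Makefile$' | head -n 1 | sed 's,/Makefile$,,')

if [ -n "$PKGPATH" ] && [ -d "/usr/ports/$PKGPATH" ]; then
    echo "$PKGPATH"
else
    echo "could not guess the pkgpath" >&2
    exit 1
fi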

Step 3 is important, I found three cases so far when applying a diff:

  • it works, we can then register in the database it can be used to build
  • it doesn't work, human investigation required
  • the diff is already applied and patch thinks you want to reverse it. It's already committed!

Being able to check if a diff is already applied is really useful. When building the contributions database, a daily check of the patches that are known to apply can be done. If a reverse patch is detected, this means it's committed and the entry can be deleted from the database. This would be rather useful to keep the database clean automatically over time.
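
A cheap way to detect that last case is the dry-run mode of patch(1): if the reversed diff applies cleanly, the diff is most likely committed already. Something along these lines, run from a ports checkout:

# $diff is the path of the submitted diff (hypothetical variable)
cd /usr/ports || exit 1
if patch -C -R -p0 < "$diff" >/dev/null 2>&1; then
    echo "diff seems already committed, it can be removed from the database"
fi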

Step 4 is an inexpensive extra check to be sure the distfiles can be downloaded over the internet.

Step 5 is also an inexpensive check, running portcheck can report easy-to-fix mistakes.

All the steps only require a ports tree. Only step 4 could be tricked by someone malicious, using a patch to make the system download huge files or files with legal concerns, but that message would also appear on the mailing list so the risk is quite limited.

To go further in the automation, building the port is required but it must be done in a clean virtual machine. We could then report into the database if the diff has been producing a package correctly, if not, provide the compilation log.

Automatic VM creation §

Automatically creating an OpenBSD-current virtual machine was tricky but I've been able to sort this out using vmm, rsync and upobsd.

The script downloads the latest sets using rsync, and that directory is served by a local web server. I use upobsd to create an automatic installation with a bsd.rd including my autoinstall file. Then it gets tricky :)

vmm must be started with its storage disk AND the bsd.rd; as it's an auto install, it will reboot after the install finishes and then will install again and again.

I found that using the parameter "-B disk" makes the VM shut down after installation for some reason. I can then wait for the VM to stop and start it again without bsd.rd.

My vmm VM creation sequence:

upobsd -i autoinstall-vmm-openbsd -m http://localhost:8080/pub/OpenBSD/
vmctl stop -f -w integration
vmctl start -B disk -m 1G -L -i 1 -d main.qcow2 -b autobuild_vm/bsd.rd integration
vmctl wait integration
vmctl start -m 1G -L -i 1 -d main.qcow2 integration

The whole process is long though. A derived qcow2 image could be used after creation to test each port faster until we want to update the VM again.
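
An untested sketch of that idea, vmctl being able to create a disk derived from a base image (the exact syntax may differ between OpenBSD releases, see vmctl(8)):

# create a throwaway disk backed by the freshly installed image
vmctl create -b main.qcow2 port-test.qcow2
vmctl start -m 1G -L -i 1 -d port-test.qcow2 port-test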

Multiple VMs could be used at once for parallel testing and to make good use of the host resources.

What's done so far §

I'm currently able to deposit emails as files in a directory and run a script that will extract the pkgpath, try to apply the patch, download the distfiles, run portcheck and run the build on the host using PORTS_PRIVSEP. If the port compiles fine, the email file is deleted and a proper diff is made from the port and moved into a staging directory where I'll review the diffs known to work.
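
A simplified sketch of what the build step can look like, assuming $PKGPATH was extracted earlier and PORTS_PRIVSEP is enabled in /etc/mk.conf:

cd "/usr/ports/$PKGPATH" || exit 1
make fetch   || exit 1   # check the distfiles can be downloaded
portcheck                # report easy to fix mistakes
make package || exit 1   # build and package the port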

This script stops on blocking errors and writes a short text report for each port. I intended to send this as a reply to the mailing list at first, but maintaining a parallel website for people working on ports seems a better idea.

The Nethack story of Sery the tourist

Written by Solène, on 15 November 2020.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

First episode of maybe a series!

Let’s play NetHack and write a story along the way. I find nethack to be a wonderful game despite its quite simple graphics. In this game, you can do more actions than in any modern game. I can dip a towel in a fountain to make it wet, and wear it on my head. Maybe it would protect me from heat? Who knows.

As this leaves a lot of room for imagination, in every serious nethack game I play I create a story in my head and try to imagine the various situations, so maybe I could write them down?

Welcome to the underworld Gehennom, you will read the story of Sery the human female neutral tourist and her dog. She has to find the Amulet of Yendor and come back to the surface, for some reasons.

@ is Sery and d is her dog.

Arrival - first floor

{ is a fountain, # a sink, - an open door and + a closed door.

In her inventory, she has 875 gold (tourists are rich!), 24 darts to throw at enemies, 2 fortune cookies, some various food (goblin meat in a tin can, eggs, a carrot, an apple, pancakes…), 4 scrolls of magic mapping, 2 healing potions, an expensive camera and an uncursed credit card.

       ---+---------
       |......{....-
       |@.........#|
       |d..........|
       -------------

She went to the closed door but it resisted; after kicking it three times, the door opened! After walking around in tunnels, she only found empty rooms leading to other tunnels.

# are corridors (when they are not sinks in a room).

                             --------
                            #   ..  |
                            #|  ..  |
                            #|  ..  |
                            #---|----
                            #   ##
                          ###########
                          ##     #
                          #      #
                          #      #
          ----------|---###   ##d@##
          |             #     # ###
          |            |      #---.---------
          |            -#######|..... {    -
          |            |       |<....     #|
          |            |       |.....      |
          --------------       -------------

At the end of a corridor, Sery was stuck, but after searching around for some secret passage, she found a hidden passage to the first room. Back to square one.

                             --------
                            #       |
                            #|      |
                            #|      |
                            #---|----
                            #   ##
                          ###########
                          ##     #           # #
                          #      #       #######
                          #      #       #   #
          ----------|---###   ############  #d
          |             #     # ###         @
          |            |      #--- ---------#
          |            -#######|      {....-#
          |            |       |<   ......#|
          |            |       |   ........|
          --------------       -------------

After hearing some noise in a corridor, she stumbled on a boulder ` but it was impossible to move it to clear the corridor.

A new room was found, with a large box ( in it. What could be in this box?

           ------
           |....|
         ##d.@..+
        ###|....|
        ## |....|
        ##`|.(..|
        #  |....|
        #  ------

While she was walking toward the box, her dog suddenly disappeared, falling into a trap door! Sery cut short her exploration of the first level after opening the box, to go look for her dog.

The large box was locked; without a weapon or tools to unlock it, Sery kicked the large box a dozen times until it opened. What a disappointment when she saw it was empty!

Second floor

            ----------
            |......@.|
            .........|
            |........|
            |....>...|
            |.....$..|
            ----------

Sery jumped into the trap to descend to the level below, but her dog wasn’t in the room. There were five gold pieces to loot and stairs descending to the third level. She needed to find her dog before continuing the exploration to the third level.

In the adjacent corridor, the dog was found safe and sound!

After continuing the exploration, a room was found with enemies!

A lichen F, a goblin o and a newt :! That was a lot of enemies for a simple tourist. She wanted to pull them into a corridor and let her dog take care of them. This was a good Spartan strategy after all!

                                ----------
                                |        |
                               #         |
                               #|        |
                               #|    >   |
                               #|        |
                               #----------
                               #
                               #
         --------              #
         |.......              #
         .......F|      -------#
         |:....o.@d#####......|#
         |.......|      |      #
         |.......       |     |
         |......        |     |
         -------        -------

Unfortunately, when a lichen is in contact with you, you can’t escape. It took a while for Sery to kill the lichen and retreat into the corridor; she received a few hits from the lichen and the goblin (HP 6/10). She heard some noises while staying in the corridor; after coming back into the room, the dog had finished killing the newt and the goblin seemed to have run away.

             -------- 
             |.....o. 
             ........|
             |.....d.@
             |.......|
             |....... 
             |......  
             -------  

The dog then attacked the goblin and killed it rather quickly. It was really fortunate that Sery was in the company of her dog.

After walking a bit to continue the exploration, Sery stumbled on a sewer rat; she got hit rather hard and didn’t have much HP left! While she was retreating to the last room, looking for the dog who had stayed back eating the goblin corpse, the dog came back to her, bringing an iron skull cap certainly found on the dead goblin. In one bite, the dog killed the rat.

After some rest to recover a few HP, Sery went back to exploring. The exploration was quiet and easy, rooms with unlocked doors, and she found the stairs going upstairs. Nothing of interest was to be found, so it was time to go to the third level. A newt and a lichen were encountered in the corridors but put up little resistance against the dog.

    ---------                                                   ----------
    |       |                                                   |........|
    |       |       ----------                                 #.........|
    |       |       |        |                                 #|.d..@...|
    |       |       |        |                                 #|F...>...|
    |       |       |        |                                 #|........|
    - -|--- -#   ###-        |                                 #----------
      ### ####  ##  |        |                                 #
       #  `##`###   --- ------                                 #
       ###     ###    ##                 ---------             #
         #####  #     #####              |       |             #
    ---------|-##      ######          ##        |      -------#
    |         |#      -- ---|-----     # |       -######      |#
    |         |#      |          |   ### |       |      |      #
    |         |#      |          |   #   |       |      |     |
    |         -#      |           ####   |       |      |     |
    | <       |       ------------       ---------      -------
    -----------

Third floor

The room where Sery arrived on the third level had an enemy, a huge bug x, and some money in a corner near a door.

                      --------------
                      |...@........|
                      |....d.......|
                      ....x.......$|
                      |............+
                      --------------

The door required two kicks to be opened.

In the next room, Sery saw a bug before entering, so she immediately swapped places with her dog in the corridor to let her defender do its job.

< are the stairs going up.

                      --------------
                      |   <        |
                      |            |
                                   |
                      |             ##
                      -------------- #
                                     ##    --+-
                                      ##d@.x..|
                                            .$|
                                              .
                                              -

As usual, the dog took care of the enemies. A new room was found with multiple exits, and some openings in the previous rooms weren’t explored yet either. There was a lot of exploration to be done in this area.

                                   --------
                                   |......+
                                   |......|
                                   +>.{...|
              --------------       |......|
              |   <        |       |....@.|
              |            |       -----.--     ...
                           |        ######
              |             ##       #####
              -------------- #       #
                             ##   ---|-
                              ####    |
                                  |   |
                                  |    
                                  -----

While exploring, Sery had to fight a giant rat; she didn’t know where her dog was, so she had to fight for real this time.

                                                           --------
             ----                                          |      +
             ....                                          |      |
              ..                     ######################-> {   |
               r                     #--------------       |      |
              #@#####                #|   <        |       |      |
              #     #              ###|            |       ----- --        
                    ##             ###             |        ######
                     #            ##  |             ##       #####
                     #            ##  -------------- #       #
                     #             #                 ##   ---|-
                     ##        #####                  ####    |
                    #- ------  ####                       |   |
                     +      |  #                          |    
                     | >     ###                          -----
                     |      |###
                     |      |
                     --------

Thinking about her inventory, she panicked and used her camera. The flash blinded the giant rat and it ran away! Unfortunately, another giant rat came from the left corridor. She tried to use her camera again but it didn’t work as expected, as the giant rat was still standing in the corridor. The blinding effect didn’t seem very effective because a few seconds later, the first giant rat was back again!

      ----     
      ....     
       ..      
        r      
       r@##### 
       #     # 
         ##

She had no choice but to run away, or at least fight them one at a time in a corridor. She went backward, suffered a giant rat bite and found her dog on the way, who came to the rescue. While she let her dog fight, a third rat came from behind; this one she really had to fight, no escape was possible with the dog fighting two rats in the corridor on the other side.

Camera flash, it worked! Time to throw darts: one dart was enough to kill the rat, but she missed it a few times. The rat never missed a bite, and Sery was in poor health at this point.

The dog killed the two rats and she was safe, for now.

While walking around to find her way, she got surprised by a giant zombie Z who hit her hard. She had only 1 health point left. Death was close. What could she do? Try the camera flash, drink a potion, or escape until her dog ran in and tried to bite the zombie?

She decided to try the healing potion and then withstand enough hits from the zombie to blind it while the dog behind it killed the undead. It was a good idea: at the moment she drank the healing potion, the zombie hit her for one health point; she would have been dead if she hadn’t drunk that potion. Then the dog killed the monster and our duo leveled up!

It was time to finish exploring and get deeper into the underworld. A ring = was on the ground in the last room. It was a silver ring.

                                                             --------
               --------------                                |      +
              #.            |                                |      |
              #|            |          ######################-> {   |
              #-- -----------          #--------------       |      |
              #########                #|   <        |       |      |
                #     #              ###|            |       ----- --        
                #     ##             ###             |        ######
     -----------#      #            ##  |             ##       #####
     |.......=@.#      #            ##  -------------- #       #
     |.........|       #             #                 ##   ---|-
     |.........        ##        #####                  ####    |
     |....`....|      #- ------  ####                       |   |
     |..  .....|       +      |  #                          |    
     ---  ------       | >     ###                          -----
                       |      |###
                       |      |
                       --------

It would be foolish to wear the ring without identifying it first; it could be a cursed ring you can’t remove, one that makes you blind or provokes some unwanted effects.

Fourth floor

Arriving at the fourth floor, Sery found a green gem. Feeling this floor would be quite complicated, she decided to read one of her magic mapping scrolls.

       -------
      --     |                                                    ---  ---    ---
      |  --  |           ------                       --- ----   -- ---- --  -- --
      | -|-- |           |  | ---                    -- ---  --  |        ----   |
      |  --| |           |      ----                --        |  |        >      |
      |   || ----------  --      | --------------- --         |  ---             |
      | | ||          -------        | --      | ---         --    -- ---        --
      | |--|  -------     ---                                | ---- --- --        |
      | |  | --     ---                                      | |  |---- --       --
      | -- | --       -------     ----       --  - --        ---  --  | |       --
     --  --|  |             |    --  |       |--   --- ---            ---       |
     |    |-- |             ---  |   --     -| ---  --------                    |
     |    | | ---------       ----    |      --  --      --|            ---     |
     | -- | |.....--.@--             --       |   ------   |-- --      -- |     |
     ---| | ----.......|        ------        |        |-  | ---|-    --  |     |
       -- |   --......-|       --  |         --        |   ---  |    --   --   --
     ---  |  --........|      --             |         |     |  |  ---     -----
    --   --  |.........|      |         -- ---         --    |  ----
    |   --   |......--.|      |     --  |---            ---  |
    --  |    --.|.------      ---- ------                 ----
     ----     -----              ---

After the whole map got revealed in her mind, she came face to face with a dwarf h wielding a dagger. He really didn’t seem friendly, but he hadn’t attacked her yet.

The whole area was very dark; without a torch or a light source, exploring this level would be very tedious.

While she was exploring the room, looking for interesting loot on the ground, the dwarf attacked her. This was a very painful stabbing. Sery retreated back to the upper stairs; she wanted to reach the level below through the other stairs on this level. In the room, she found her dog, which had stayed behind, fighting a gecko and a giant rat.

She started to feel hungry; fortunately she had gone into the underworld with a lot of food. She decided to eat a fortune cookie. When cracking it, she found a paper saying: They say that you should never introduce a rope golem to a succubus. This didn’t make much sense to her though.

While walking toward the other stairs, Sery found a graffiti on the ground: ??urist? we?r shirts loud enougn to wake t?e ?e?d.. As for the fortune cookie, this didn’t make much sense.

On her way, she fought various enemies: a red mold, a newt, rats, and found a banana. Descending the stairs, she was surprised to see they didn’t lead to the fourth floor with the dwarves; it was a parallel fourth floor. Could it be possible?? There were a newt and some money in the room, and it wasn’t dark.

             -- -----
             .....@..
             |....d.|
             |...:.$|
             --------

She was angry.

The dog jumped on the newt and killed it. The duo got enough experience to reach level four. The dog, formerly a little dog, grew up into a dog.

After a short rest to eat and recover health, Sery went back into the corridors to find a way forward and continue her quest.

                   --------------
                   |............|
                  #.@...........+
                  #|............|
                  #|..>...$.....|
                  #--------------
                  ###
                    #
                    ##
                     #
                     #
                     #
                     #
             -- -----#
                  <  #
             |      |
             |      |
             --------

In the room she found stairs leading to the level below. Would it be a good idea to descend now, or should she explore the area first? She had a lot of money; finding a merchant to buy armor and weapons would be a good idea.

To be continued

That’s all for today! Please tell me if you enjoyed it!

Full featured Slackware email server with sendmail and cyrus-imapd

Written by Solène, on 14 November 2020.
Tags: #slackware #email

Comments on Fediverse/Mastodon

This article is about making your own mail server using the Slackware linux distribution, sendmail and cyrus-imapd. This choice is because I really love Slackware and I also enjoy non-mainstream stacks. While everyone would recommend postfix/dovecot, I prefer using sendmail/cyrus-imapd. Please note this article contains ironic statements, I will try to write them with some emphasis.

While some people use fossil fuel cars, some people use Slackware.

If you are used to clean, reproducible and automated deployments, the present how-to is the total opposite. This is the /Slackware/ way.

Slackware

Slackware is one of the oldest (maybe the oldest, along with Debian) linux distributions out there and it’s still usable. The last release (14.2) is 4 years old but there are still security updates. I chose to use the development branch slackware-current for this article.

I discovered an alternative to Windows in the early 2000s with a friend showing me a « Linux » magazine, featuring Slackware installation CDs and the instructions to install it. It was my very first contact with Linux and open source ever. I used Slackware multiple times over the years, and it was always a great system for me on my main laptop.

Slackware’s specifics could be summed up as “not changing much” and “quite limited”. Slackware never changes much between releases; from 2010 to 2020, it’s pretty much the same system when you use it. I say it’s rather limited package-wise: the default Slackware installation requires something like 15 GB on your disk because it bundles KDE and all the KDE apps, a bunch of editors (emacs, vim, vs, elvis), lots of compilers/interpreters (gcc, llvm, ada, scheme, python, ruby etc..). While it provides a LOT of things out of the box, what you get is all Slackware can offer. If something isn’t in the packages, you need to install it yourself.

Full Disk Encryption or nothing

I recommend to EVERYONE the practice of having full disk encryption (phone, laptop, workstation, servers). If your system gets stolen, you will only lose hardware when you use full disk encryption.

Without encryption, the thief can access all your data forever.

Slackware provides a file README_CRYPT.txt explaining how to install on an encrypted partition. Don’t forget to tell the bootloader LILO about the initrd, and keep in mind the initrd must be recreated after every kernel upgrade.
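
After a kernel upgrade, the steps look roughly like this; the kernel version and device names below are placeholders, the exact mkinitrd parameters depend on your filesystem and encrypted device (see README_CRYPT.txt and mkinitrd(8)):

# mkinitrd -c -k 5.4.x -f ext4 -r /dev/cryptvg/root -C /dev/sda2 -L  # placeholders, adjust to your setup
# lilo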

Use ntpd

It’s important to have a correct time on your server.

# chmod +x /etc/rc.d/rc.ntpd
# /etc/rc.d/rc.ntpd start

Disable ssh password authentication

In /etc/ssh/sshd_config there are two changes to do:

Turn UsePAM yes into UsePAM no and add PasswordAuthentication no.
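
The relevant part of /etc/ssh/sshd_config should end up looking like this:

UsePAM no
PasswordAuthentication no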

Changes can be applied by restarting ssh with /etc/rc.d/rc.sshd restart.

Before enabling this, don’t forget to deploy your public key to a user who is able to become root.

Get a SSL certificate

We need an SSL certificate for the infrastructure, so we will install certbot. Unfortunately, certbot-auto doesn’t work on Slackware because the system is unsupported. So we will use pip and call certbot in standalone mode so we don’t need a web server.

# pip3 install certbot
# certbot certonly --standalone -d mydomain.foobar -m usernam@example

My domain being kongroo.eu the files are generated under /etc/letsencrypt/live/kongroo.eu/.

Configure the DNS

Four DNS entries have to be added for a working email server.

  1. SPF to tell the world which addresses have the right to send your emails
  2. MX to tell the world which addresses will receive the emails and in which order
  3. DKIM (a public key) to allow recipients to check your emails really come from your servers (signed using a private key)
  4. DMARC to tell recipients what to do with mails not respecting SPF

SPF

Simple: add an entry with v=spf1 mx if you want to allow your MX servers to send emails. Basically, for simple setups, the same server receives and sends emails.

@ 1800 IN SPF "v=spf1 mx"

MX

My server with the address kongroo.eu will receive the emails.

@ 10800 IN MX 50 kongroo.eu.

DKIM

This part will be a bit more complicated. We have to generate a pair of public and private keys and run a daemon that will sign outgoing emails with the private key, so recipients can verify the email signature using the public key available in the DNS. We will use opendkim; I found this very good article explaining how to use opendkim with sendmail.

Opendkim isn’t part of the slackware base packages; fortunately it is available in slackbuilds, you can check my previous article explaining how to set up slackbuilds.

# groupadd -g 305 opendkim
# useradd -r -u 305 -g opendkim -d /var/run/opendkim/ -s /sbin/nologin \
    -c  "OpenDKIM Milter" opendkim
# sboinstall opendkim

We want to enable opendkim at boot; as it’s not a service from the base system, we need to “register” it in rc.local and make both scripts executable.

Add the following to /etc/rc.d/rc.local:

if [ -x /etc/rc.d/rc.opendkim ]; then
  /etc/rc.d/rc.opendkim start
fi

Make the scripts executable so they will be run at boot:

# chmod +x /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.opendkim

Create the key pair:

# mkdir /etc/opendkim
# cd /etc/opendkim
# opendkim-genkey -t -s default -d kongroo.eu

Get the content of default.txt; we will use it as the content of a TXT entry in the DNS. Select only the content between parentheses, without the double quotes: your DNS tool (like on Gandi) may take everything without warning, which would produce an invalid DKIM signature. Been there, done that.

The file should look like:

default._domainkey      IN      TXT     ( "v=DKIM1; k=rsa; t=y; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB" )

But the content I used for my entry at gandi is:

v=DKIM1; k=rsa; t=y; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB

Now we need to configure opendkim to use our keys. Edit /etc/opendkim.conf to change the following lines already there:

Domain                  kongroo.eu
KeyFile /etc/opendkim/default.private
ReportAddress           postmaster@kongroo.eu

DMARC

We have to add a DMARC entry, this may help getting accepted by big corporate mail servers.

_dmarc.kongroo.eu.   IN TXT    "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@kongroo.eu;"

This will tell the recipients that we don’t give specific instructions on what to do with suspicious mails from our domain, and to send the reports to postmaster@kongroo.eu. Expect a daily mail from every mail server reached during the day to arrive at that address.

Install Sendmail

Unfortunately the Slackware team dropped sendmail in favor of postfix in the default install; this may be a good thing but I want sendmail. Good news: sendmail is still in the extra directory.

I wanted to use citadel but it was really complicated, so I went with sendmail.

Installation

Download the two sendmail txz packages on a mirror in the “extra” directory: https://mirrors.slackware.com/slackware/slackware64-current/extra/sendmail/

Run /sbin/installpkg on both packages.

Configuration

We will disable postfix.

# sh /etc/rc.d/rc.postfix stop
# chmod -x /etc/rc.d/rc.postfix

Enable sendmail and saslauthd

# chmod +x /etc/rc.d/rc.sendmail
# chmod +x /etc/rc.d/rc.saslauthd

All the configuration will be done in /usr/share/sendmail/cf/cf; we will use a default template from the package. As explained in the cf files, we need to use a template and rebuild from this directory containing all the macros.

# cp sendmail-slackware-tls-sasl.mc /usr/share/sendmail/cf/cf/config.mc

Every time we want to rebuild the configuration file, we need to apply the m4 macros to have the real configuration file.

# sh Build config.mc
# cp config.cf /etc/mail/sendmail.cf

My config.mc file looks like this (I stripped the comments):

include(`../m4/cf.m4')
VERSIONID(`TLS supporting setup for Slackware Linux')dnl
OSTYPE(`linux')dnl
define(`confCACERT_PATH', `/etc/letsencrypt/live/kongroo.eu/')
define(`confCACERT', `/etc/letsencrypt/live/kongroo.eu/cert.pem')
define(`confSERVER_CERT', `/etc/letsencrypt/live/kongroo.eu/fullchain.pem')
define(`confSERVER_KEY', `/etc/letsencrypt/live/kongroo.eu/privkey.pem')
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
define(`confTO_IDENT', `0')dnl
FEATURE(`use_cw_file')dnl
FEATURE(`use_ct_file')dnl
FEATURE(`mailertable',`hash -o /etc/mail/mailertable.db')dnl
FEATURE(`virtusertable',`hash -o /etc/mail/virtusertable.db')dnl
FEATURE(`access_db', `hash -T<TMPF> /etc/mail/access')dnl
FEATURE(`blocklist_recipients')dnl
FEATURE(`local_procmail',`',`procmail -t -Y -a $h -d $u')dnl
FEATURE(`always_add_domain')dnl
FEATURE(`redirect')dnl
FEATURE(`no_default_msa')dnl
EXPOSED_USER(`root')dnl
LOCAL_DOMAIN(`localhost.localdomain')dnl
INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@localhost')
MAILER(local)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
define(`confAUTH_OPTIONS', `A p y')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
TRUST_AUTH_MECH(`LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtps, Name=MSA-SSL, M=Esa')dnl
LOCAL_CONFIG
O CipherList=ALL:!ADH:!NULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:-LOW:+SSLv3:+TLSv1:-SSLv2:+EXP:+eNULL

Create the file /etc/sasl2/Sendmail.conf with this content:

pwcheck_method:saslauthd

This will tell sendmail to use saslauthd for PLAIN and LOGIN connections. Any SMTP client will have to use either PLAIN or LOGIN.

If you start sendmail and saslauthd, you should be able to send e-mails with authentication.

We need to edit /etc/mail/local-host-names to tell sendmail for which domain it should accept local deliveries.

Simply add your email domain:

kongroo.eu

The mail logs are located under /var/log/maillog; every mail sent and correctly signed with DKIM should appear in a line like this:

[time] [host] sm-mta[2520]: 0AECKet1002520: Milter (opendkim) insert (1): header: DKIM-Signature:  [whole signature]

Configure DKIM

This has been explained in a subsection of the sendmail configuration. If you skipped that step because you don’t want to set up DKIM, you missed information required for the next steps.

Install cyrus-imap

Slackware ships with dovecot in the default installation, but cyrus-imapd is available in slackbuilds.

The bad news is that the slackbuild is outdated, so here is a simple patch to apply in /usr/sbo/repo/network/cyrus-imapd. This patch also fixes a compilation issue.

diff --git a/network/cyrus-imapd/cyrus-imapd.SlackBuild b/network/cyrus-imapd/cyrus-imapd.SlackBuild
index 48e2c54e55..251ca5f207 100644
--- a/network/cyrus-imapd/cyrus-imapd.SlackBuild
+++ b/network/cyrus-imapd/cyrus-imapd.SlackBuild
@@ -23,7 +23,7 @@
 #  ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     
 PRGNAM=cyrus-imapd
-VERSION=${VERSION:-2.5.11}
+VERSION=${VERSION:-2.5.16}
 BUILD=${BUILD:-1}
 TAG=${TAG:-_SBo}
     
@@ -107,6 +107,8 @@ CXXFLAGS="$SLKCFLAGS" \
   $DATABASE \
   --build=$ARCH-slackware-linux
     
+sed -i'' 's/gettid/_gettid/g' lib/cyrusdb_berkeley.c
+
 make PERL_MM_OPT='INSTALLDIRS=vendor'
 make install DESTDIR=$PKG
     
diff --git a/network/cyrus-imapd/cyrus-imapd.info b/network/cyrus-imapd/cyrus-imapd.info
index 99b2c68075..6ae26365dc 100644
--- a/network/cyrus-imapd/cyrus-imapd.info
+++ b/network/cyrus-imapd/cyrus-imapd.info
@@ -1,8 +1,8 @@
 PRGNAM="cyrus-imapd"
 VERSION="2.5.11"
 HOMEPAGE="https://www.cyrusimap.org/"
-DOWNLOAD="ftp://ftp.cyrusimap.org/cyrus-imapd/cyrus-imapd-2.5.11.tar.gz"
-MD5SUM="674083444c36a786d9431b6612969224"
+DOWNLOAD="https://github.com/cyrusimap/cyrus-imapd/releases/download/cyrus-imapd-2.5.16/cyrus-imapd-2.5.16.tar.gz"
+MD5SUM="d5667e91d8e094ef24560a148e39c462"
 DOWNLOAD_x86_64=""
 MD5SUM_x86_64=""
 REQUIRES=""

You can apply it by carefully copying the content into a file and using the patch command.
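
For example, assuming the diff above was saved as /tmp/cyrus-imapd.diff (the file name is arbitrary), it could be applied from the repository root like this:

# cd /usr/sbo/repo
# patch -p1 < /tmp/cyrus-imapd.diff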

We can now proceed with cyrus-imapd compilation and installation.

# env DATABASE=sqlite sboinstall cyrus-imapd

As explained in the README file shown during installation, we need to run a few commands.

# mkdir -m 750 -p /var/imap /var/spool/imap /var/sieve
# chown cyrus:cyrus /var/imap /var/spool/imap /var/sieve
# su - cyrus
# /usr/doc/cyrus-imapd-2.5.16/tools/mkimap
# logout

Add the following to /etc/rc.d/rc.local to enable cyrus-imapd at boot:

if [ -x /etc/rc.d/rc.cyrus-imapd ]; then
  /etc/rc.d/rc.cyrus-imapd start
fi

And make the rc script executable:

# chmod +x /etc/rc.d/rc.cyrus-imapd

The official cyrus documentation is very well done and was very helpful while writing this.

The configuration file is /etc/imapd.conf:

configdirectory: /var/imap
partition-default: /var/spool/imap
sievedir: /var/sieve
admins: cyrus
sasl_pwcheck_method: saslauthd
allowplaintext: yes
tls_server_cert: /etc/letsencrypt/cyrus/fullchain.pem
tls_server_key:  /etc/letsencrypt/cyrus/privkey.pem
tls_client_ca_dir: /etc/ssl/certs

There is another file, /etc/cyrusd.conf, but we don’t need to make changes to it.

We will have to copy the certificates to a separate place and allow the cyrus user to read them. This has to be done every time the certificates are renewed. Let’s add the certbot command so we can run this script from a cron job.

#!/bin/sh
DOMAIN=kongroo.eu
LIVEDIR=/etc/letsencrypt/live/$DOMAIN/
DESTDIR=/etc/letsencrypt/cyrus/

certbot certonly --standalone -d $DOMAIN -m usernam@example
mkdir -p $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/fullchain.pem $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/privkey.pem $DESTDIR
/etc/rc.d/rc.sendmail restart
/etc/rc.d/rc.cyrus-imapd restart

Add a crontab entry to run this script once a day, using crontab -e to edit root’s crontab.

MAILTO=""
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
0 5 * * * sh /root/renew_certs.sh

Starting the mail server

We prepared the mail server to start at boot, but the services aren’t running yet.

# /etc/rc.d/rc.saslauthd start
# /etc/rc.d/rc.sendmail start
# /etc/rc.d/rc.cyrus-imapd start
# /etc/rc.d/rc.opendkim start

Adding a new user

Add a new user to your system.

# useradd $username
# passwd $username

For some reason, the user mailboxes must be initialized. The same password must be typed twice (or passed as a parameter using -w $password).

# USER=foobar
# DOMAIN=kongroo.eu
# echo "cm INBOX" | rlwrap cyradm -u $USER $DOMAIN
Password:
IMAP Password:

Voila! The user should be able to connect using IMAP and receive emails.
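
For scripting, the -w option mentioned above can be used to avoid the password prompts; a minimal sketch (the password is a placeholder) could look like this:

# USER=foobar
# PASSWORD=changeme
# DOMAIN=kongroo.eu
# echo "cm INBOX" | cyradm -u $USER -w $PASSWORD $DOMAIN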

Check your email setup

You can use the web service Mail tester by sending it an email. You could copy/paste a real email to avoid getting a bad mark due to spam recognition (which happens if you send a mail containing only a few words). The bad spam score isn’t relevant anyway as long as it’s only due to the content of your email.

Conclusion

I had real fun writing this article, digging hard into Slackware and playing with unusual programs like sendmail and cyrus-imapd. I hope you will enjoy it as much as I enjoyed writing it!

If you find mistakes or bad configuration settings, please contact me; I will be happy to discuss the changes and fix this how-to.

Nota Bene: slackbuilds aren’t meant to be used on the -current version, but on the latest release. There is a GitHub repository carrying the -current changes: https://github.com/Ponce/slackbuilds/.

How to use Slackware community slackbuilds

Written by Solène, on 13 November 2020.
Tags: #slackware

Comments on Fediverse/Mastodon

In today’s article I will explain how to use the Slackbuilds repository on a Slackware -current system.

You can read the Documentation of slackbuilds for more information.

We will first install the sbotools package, which makes using slackbuilds a lot easier: like a proper ports tree. As it’s preferable to let the tools create the repository, we will install them without downloading the whole slackbuilds repository.

Download the slackbuild from this page, extract it and cd into the new directory.

$ tar xzvf sbotools.tar.gz
$ cd sbotools
$ . ./sbotools.info
$ wget $DOWNLOAD
$ md5sum $(basename $DOWNLOAD)
$ echo $MD5SUM

The two MD5 strings should match.
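
If you prefer to let the shell do the comparison, something like this (run in the same directory, with the variables still set from sbotools.info) should print OK when the checksum is correct:

$ [ "$(md5sum $(basename $DOWNLOAD) | cut -d ' ' -f 1)" = "$MD5SUM" ] && echo OK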

Now, run the build as root

$ sudo sh sbotools.SlackBuild
[lot of text]
Slackware package /tmp/sbotools-2.7-noarch-1_SBo.tgz created.

Now you can install the created package using

$ sudo /sbin/installpkg /tmp/sbotools-2.7-noarch-1_SBo.tgz

We now have a few programs to use the slackbuilds repository; they all have their own man page:

  • sbocheck
  • sboclean
  • sboconfig
  • sbofind
  • sboinstall
  • sboremove
  • sbosnap
  • sboupgrade

Creating the repository

As root, run the following command:

# sbosnap fetch
Pulling SlackBuilds tree...
Cloning into '/usr/sbo/repo'...
remote: Enumerating objects: 59, done.
remote: Counting objects: 100% (59/59), done.
remote: Compressing objects: 100% (59/59), done.
remote: Total 485454 (delta 31), reused 14 (delta 0), pack-reused 485395
Receiving objects: 100% (485454/485454), 134.37 MiB | 1.20 MiB/s, done.
Resolving deltas: 100% (337079/337079), done.
Updating files: 100% (39863/39863), done.

The slackbuilds tree is now installed under /usr/sbo/repo. This location could have been configured beforehand using sboconfig -s /home/solene, which would create /home/solene/repo.
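
Later on, the local tree can presumably be kept up to date with sbocheck (one of the tools listed above; see its man page for details):

# sbocheck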

Searching for a port

One can use the command sbofind to look for a port:

# sbofind nethack
SBo:    nethack 3.6.6
Path:   /usr/sbo/repo/games/nethack
    
SBo:    unnethack 5.2.0
Path:   /usr/sbo/repo/games/unnethack

Install a port

We will install the previously searched port: nethack

# sboinstall nethack
Nethack is a single-player dungeon exploration game. The emphasis is
on discovering the detail of the dungeon. Each game presents a
different landscape - the random number generator provides an
essentially unlimited number of variations of the dungeon and its
denizens to be discovered by the player in one of a number of
characters: you can pick your race, your role, and your gender.
    
User accounts that play this need to be members of the "games" group.
    
Proceed with nethack? [y] y
nethack added to install queue.

Install queue: nethack

Are you sure you wish to continue? [y] y
[... compilation ... ]
+==============================================================================
| Installing new package /tmp/nethack-3.6.6-x86_64-1_SBo.tgz
+==============================================================================
    
Verifying package nethack-3.6.6-x86_64-1_SBo.tgz.
Installing package nethack-3.6.6-x86_64-1_SBo.tgz:
PACKAGE DESCRIPTION:
# nethack (roguelike game)
#
# Nethack is a single-player dungeon exploration game. The emphasis is
# on discovering the detail of the dungeon. Each game presents a
# different landscape - the random number generator provides an
# essentially unlimited number of variations of the dungeon and its
# denizens to be discovered by the player in one of a number of
# characters: you can pick your race, your role, and your gender.
#
# http://nethack.org
#
Package nethack-3.6.6-x86_64-1_SBo.tgz installed.
Cleaning for nethack-3.6.6...

Done, nethack is installed! sboinstall manages dependencies: when required, it will ask you about every other slackbuild that needs to be added to the queue before compilation starts.

Example: getting flatpak

Flatpak is a software distribution system for Linux distributions, mainly providing desktop software that can be complicated to package, like LibreOffice, GIMP, Microsoft Teams etc… On Slackware, this can be a good source of software.

To use flatpak and the official flathub repository, we need to install flatpak first. It’s now as easy as:

# sboinstall flatpak

Answer yes to the questions (you will be asked to agree to every required dependency, and there are a few of them); if you don’t want to answer each time, you can use the -r flag to accept automatically.
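
For example, a fully non-interactive installation (assuming -r behaves as described above) would be:

# sboinstall -r flatpak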

We need to add the official flathub repository using the following command:

# flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

And now you can browse flatpak programs on flathub.
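
Flatpak also provides a search subcommand, so you can look for a program directly from the command line, for example:

$ flatpak search vlc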

For example, if you want to install VLC:

# flatpak install flathub org.videolan.VLC

You will be prompted about all the dependencies required to get VLC installed; those dependencies are system parts that will be shared across all the flatpak software in order to use disk space efficiently. For VLC, some KDE components will be required, as well as Xorg GL/VAAPI/openh264 environments; flatpak manages all this and you don’t have to worry about it.
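
Once installed, a flatpak application is started with flatpak run and its identifier, for example:

$ flatpak run org.videolan.VLC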

The file /usr/sbo/repo/desktop/flatpak/README explains the quirks of flatpak on Slackware, like the pulseaudio instructions or the polkit policy on Slackware not allowing your user to run the global flatpak install command.

I found that the following ~/.xinitrc enables dbus and pulseaudio for me, so flatpak programs work:

start-pulseaudio-x11
eval $(pax11publish -i)
dbus-run-session fvwm2

About the offline laptop project

Written by Solène, on 10 November 2020.
Tags: #life #disconnected

Comments on Fediverse/Mastodon

Third article of the offline laptop series.

Sometimes, network access is required

Having a totally disconnected system isn’t really practical for a few reasons. Sometimes, I really need to connect the offline laptop to the network. I do produce some content on the computer, so I need to make backups. The easiest way for me to have reliable backups is to host them on a remote server, which requires a network connection for the duration of the backup. Of course, backups could be done on external disks or USB memory sticks (I don’t need to back up much), but I never liked this backup solution; don’t get me wrong, I don’t say it’s ineffective, but it doesn’t suit my needs.

Besides the backup, I may need to sync files like my music files. I may have bought new music that I want to get on the offline laptop, so network access is required.

I also require internet access to install new packages or upgrade the system. This isn’t a regular need, but I occasionally require a new program I forgot to install. This could be solved by downloading the whole package repository, but that would require too much disk space for packages I would never use, and it would also waste a lot of network transfer.

Finally, when I work on my blog, I need to publish the files; I use rsync to sync the destination directory from my local computer, and this requires access to the Internet through ssh.

A nice place at the right time

The moments I enjoy using this computer the most are when I take the laptop to a table with nothing around me. I can then focus on what I am doing. I find comfortable setups to be a source of distraction, so a stool and a table are very nice in my opinion.

In addition to having a clean place to use it, I like to dedicate some time to the use of this computer. I can write texts or some code in a given time frame.

On a computer with 24/7 power and internet access, I always feel everything is within reach, and then I tend to slack with it.

Having a rather limited battery life changes the way I experience the computer. Its time is finite: I have N minutes until the computer has to be charged or shut down. This produces the same effect on me as starting to watch a movie; sometimes I pick a movie that fits the time I can spend on it.

Knowing I have some time until the computer stops, I know I must keep focused because time is passing.

Keyboard tweaks to use Xorg on an IBook laptop

Written by Solène, on 09 November 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

A simple article for posterity or future me. I will share here my tweaks to make the iBook G4 laptop (Apple keyboard) suitable for OpenBSD; this should work for Linux too as long as you run X.

Command should be alt+gr

I really need the alt+gr key, which is not present on this keyboard. I solved this by using this line in my ~/.xsession:

xmodmap -e "keycode 115 = ISO_Level3_Shift"
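
The keycode (115 here) depends on the keyboard; if yours reports a different one, running xev in a terminal and pressing the key shows its keycode, for example:

$ xev | grep keycode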

i3 and mod4

As the touchpad is incredibly bad by today’s standards (it only has one button and no scrolling feature!), I am using a window manager that can be entirely keyboard driven. While I’m not familiar with tiling window managers, i3 was easy to understand and light enough. Long time readers may remember I am familiar with stumpwm, but it’s not really a dynamic tiling window manager; I can only tolerate i3 using the tabs mode.

But an issue arises: there is no “super” key on this keyboard, and using “alt” would collide with way too many programs. One solution is to use “caps lock” as a “super” key.

I added this in my ~/.xsession file:

xmodmap ~/.Xmodmap

with ~/.Xmodmap having the following instructions:

clear Lock 
keycode 66 = Hyper_L
add mod4 = Hyper_L
clear Lock

This will disable the “toggling” effect of caps lock and turn it into a “Super” key, referred to as mod4 in i3.
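
In the i3 configuration (~/.config/i3/config), mod4 can then be used as the main modifier; a minimal sketch (the bindings are only examples) could look like this:

set $mod Mod4
bindsym $mod+Return exec xterm
bindsym $mod+d exec dmenu_run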

Connect to Mastodon using HTTP 1.0 with Brutaldon

Written by Solène, on 09 November 2020.
Tags: #openbsd #mastodon

Comments on Fediverse/Mastodon

Today’s post is about Brutaldon, a Mastodon/Pleroma interface in old-fashioned HTML, like in the web 1.0 era. I will explain how it works and how to install it. Tested and approved on a 16-year-old PowerPC laptop, using Mastodon with the w3m or dillo web browsers!

Introduction

Brutaldon is a Mastodon client running as a web server. This means you have to connect to a running Brutaldon server; you can use a public one like Brutaldon.online. You will then have two ways to connect to your account:

  1. using OAuth, which will redirect you through a dedicated API page of your Mastodon instance and give back a token once you have logged in properly; this is totally safe to use, but requires JavaScript to be enabled to work, because of the login page on the instance
  2. the “old login” method, in which you have to provide your instance address, your account login and password. This is not really safe because the Brutaldon instance will know your credentials, but you can use any web browser with it. There aren’t many security issues if you use a local Brutaldon instance

How to install it

The installation is quite easy; I wish it could be this easy more often. You need a python3 interpreter and pipenv. If you don’t have pipenv, you need pip to install it. On OpenBSD this would translate to:

$ pip3.8 install --user pipenv

Note that on some systems, pip3.8 could be pip3, or just pip. Due to the coexistence of python2 and python3 for some time, until we can get rid of python2, most python-related commands have a suffix telling which Python version they use.

If you install pipenv with pip, the path will be ~/.local/bin/pipenv.

Now it’s very easy to proceed! Clone the code, run pipenv to get the dependencies, create a sqlite database and run the server.

$ git clone git://github.com/jfmcbrayer/brutaldon.git
$ cd brutaldon
$ pipenv install
$ pipenv run python ./manage.py migrate
$ pipenv run python ./manage.py runserver

And voilà! Your Brutaldon instance is available at http://localhost:8000; you only need to open it in your web browser and log in to your instance.

As explained in the project’s INSTALL.md file, this method isn’t suitable for a public deployment. The code is a Django webapp and could be served with wsgi and a proper web server. That setup is beyond the scope of this article.

Join the peer to peer social network Scuttlebutt using OpenBSD and Oasis

Written by Solène, on 04 November 2020.
Tags: #openbsd #ssb

Comments on Fediverse/Mastodon

In this article I will tell you about the Scuttlebutt social network, what makes it special and how to join it using OpenBSD. From here, I’ll refer to Scuttlebutt as SSB.

Introduction to the protocol

You can find all the related documentation on the official website. I will present a simplified view of the protocol.

SSB is decentralized, meaning there is no central server with clients around it (think of the Twitter model), nor a constellation of servers federating with each other (the Fediverse: Mastodon, Pleroma, Peertube…). SSB uses a peer-to-peer model, meaning nodes exchange data with other nodes. A device with an account is a node; someone using SSB acts as a node.

The protocol requires people to be mutual followers for the private messaging system to work (messages are encrypted end-to-end).

This peer to peer paradigm has specific implications:

  1. Internet is not required for SSB to work. You could use it with other people on a local network. For example, you could visit a friend’s place and exchange your SSB data over their network.
  2. Nodes own the data: when you join, it can take a very long time to download the content of the nodes close to you (relative to the people you follow), because the SSB client will download the data and then serve everything locally. This means you can use SSB while being offline, but also that, in the case seen previously at your friend’s place, you can