About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(BSD OpenBSD h+ Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on Freenode, solene+www at dataswamp dot org or solene@bsd.network (mastodon). If for some reason you want to give me some money, I accept paypal at the address donate@perso.pw.

How to split a file into small parts

Written by Solène, on 21 March 2021.
Tags: #openbsd #unix

Comments on Mastodon


Today I will present the userland program "split" that is used to split a single file into smaller files.

OpenBSD split(1) manual page

Use case

Split creates new, smaller files from a single file. The original file can be recovered by running the command cat on all the small files (in the correct order) to recreate it.

There are several use cases for this:

- store a single file (like a backup) on multiple media (floppies, 700 MB CDs, DVDs, etc.)

- parallelize a file process, for example: split a huge log file into small parts to run analysis on each part

- distribute a file across a few people (I have no idea about the use but I like the idea)


Its usage is very simple: run split on a file or feed its standard input, and it will create files of 1000 lines each by default. Use -b to set a size in kB or MB for the new files, or -l to change the default of 1000 lines. Split can also start a new file each time a line matches a regex given with -p.
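For example, splitting by line count instead of size could look like this (the file name is made up for illustration):

```shell
# create a sample 5000-line log file, then split it into 2000-line pieces
seq 1 5000 > access.log
split -l 2000 access.log

# split produces xaa and xab with 2000 lines each, and xac with the
# remaining 1000 lines
wc -l xaa xab xac
```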

Here is a simple example splitting a file into 1300kB parts and then reassembling it from the parts, using sha256 to compare the checksums of the original and reconstructed files.

solene@kongroo ~/V/pmenu> split -b 1300k pmenu.mp4
solene@kongroo ~/V/pmenu> ls
pmenu.mp4  xab        xad        xaf        xah        xaj        xal        xan
xaa        xac        xae        xag        xai        xak        xam
solene@kongroo ~/V/pmenu> cat x* > concat.mp4
solene@kongroo ~/V/pmenu> sha256 pmenu.mp4 concat.mp4 
SHA256 (pmenu.mp4)  = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
SHA256 (concat.mp4) = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
solene@kongroo ~/V/pmenu> ls -l x*
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaa
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xab
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xac
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xad
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xae
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaf
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xag
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xah
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xai
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaj
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xak
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xal
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xam
-rw-r--r--  1 solene  wheel    810887 Mar 21 16:50 xan


If you ever need to split files into small parts, think about the command split.

For more advanced splitting requirements, the program csplit can be used; I won't cover it here, but I recommend reading its manual page.

csplit manual page

Full list of services offered by a default OpenBSD installation

Written by Solène, on 16 February 2021.
Tags: #openbsd69 #openbsd #unix



This article is about giving a short description of EVERY service available as part of an OpenBSD default installation (= no package installed).

From this list, the following services are started by default: cron, pflogd, sndiod, openssh, ntpd, syslogd and smtpd. Among them, the network-related daemons smtpd (localhost only), openssh and ntpd (as a client) are running.

Service list

I extracted the list of base install services by looking at /etc/rc.conf.

$ grep _flags /etc/rc.conf | cut -d '_' -f 1


This daemon automatically mounts a remote NFS server when someone wants to access it; it can provide a replacement in case the file system is not reachable. More information is available with "info amd".

amd man page


This is the daemon responsible for frequency scaling. It is important to run it on workstations and especially on laptops; it can also trigger automatic suspend or hibernation in case of low battery.

apmd man page

apm man page


This is a BGP daemon used by network routers to exchange routes with other routers. This is mainly what makes the Internet work: every hosting company announces its IP ranges and how to reach them, and in return receives the paths to connect to all other addresses.

OpenBGPD website


This daemon is used for diskless setups on a network; it provides information to the client such as which NFS mount point to use for the swap or root devices.

Information about a diskless setup


This daemon reads each user's crontab and the system crontabs to run scheduled commands. User crontabs are modified using the crontab command.

Cron man page

Crontab command

Crontab format
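As a quick reminder of the format, a crontab entry is five time fields followed by a command; the script path below is a made-up example:

```
# minute  hour  day-of-month  month  day-of-week  command
# run a (hypothetical) backup script every day at 03:00
0 3 * * * /usr/local/bin/backup.sh
```

Such an entry is added by running crontab -e as the user who should own the job.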


This is a DHCP server used to automatically provide IPv4 addresses on a network for systems using a DHCP client.
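A minimal /etc/dhcpd.conf sketch could look like this (all addresses are examples for a hypothetical 192.168.1.0/24 network):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1;
    range 192.168.1.100 192.168.1.199;
}
```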


This is a DHCP request relay, used on a network interface to relay the requests to another interface.


This is a multicast routing daemon, for cases where you need multicast to span beyond your local LAN. It is mostly replaced by PIM nowadays.


This daemon implements an interior gateway routing protocol; it is similar in purpose to OSPF but compatible with Cisco equipment.


This is an FTP server providing many features. While FTP is becoming abandoned and obsolete (certainly because it doesn't play well with NAT), it can still be used to provide anonymous read/write access to a directory (and many other things).

ftpd man page


This is an FTP proxy daemon meant to run on a NAT system; it automatically adds PF rules to connect an incoming request to the server behind the NAT. This is part of the FTP madness.


Same as above but for IPv6. Using IPv6 behind a NAT makes no sense.


This is the daemon that turns OpenBSD into a WiFi access point.

hostapd man page

hostapd configuration file man page


hotplugd is an amazing daemon that triggers actions when devices are connected or disconnected. It can be scripted, for example to automatically mount a drive or run a backup when a USB disk matching a known name is inserted.

hotplugd man page
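As a sketch of the idea: hotplugd runs /etc/hotplug/attach with the device class and device name as arguments. The device name, mount point and messages below are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical /etc/hotplug/attach sketch. hotplugd(8) runs this script with
# two arguments: $1 = device class, $2 = device name.
handle_attach() {
    devclass=$1
    devname=$2
    case "$devname" in
    sd1)
        # assumption: sd1 is our known backup USB disk; a real script would
        # mount it, e.g. mount /dev/"${devname}"i /mnt/backup, then back up
        echo "backup disk $devname attached"
        ;;
    *)
        echo "ignoring class $devclass device $devname"
        ;;
    esac
}
handle_attach "$1" "$2"
```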


httpd is an HTTP(S) daemon which supports a few features like FastCGI, rewrites and SNI. While it doesn't have all the features of a web server like nginx, it is able to host some PHP programs such as Nextcloud, Roundcube or MediaWiki.

httpd man page

httpd configuration file man page


Identd is a daemon for the Identification Protocol, which returns the login name of a user who initiated a connection; this can be used on IRC to identify which user started an IRC connection.


This daemon monitors the state of network interfaces and can take actions upon changes, for example in case of an interface losing connectivity. I used it to switch the route to a 4G device when a ping over the uplink interface was failing.

ifstated man page

ifstated configuration file man page


This daemon is used to provide IKEv2 authentication for IPSec tunnel establishment.

OpenBSD FAQ about VPN


This daemon is often forgotten but is very useful. Inetd listens on TCP or UDP ports and runs a command upon connection on the related port; incoming data is passed as the program's standard input, and the program's standard output is returned to the client. This is an easy way to turn a program into a network service. It is not widely used because it doesn't scale well: spawning a new process for every connection can push a system to its limits.

inetd man page
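An /etc/inetd.conf entry is one line per service: service name, socket type, protocol, wait behavior, user, and the program to run (with its arguments). For example, the classic built-in daytime service:

```
# service  socket-type  protocol  wait/nowait  user  program (and args)
daytime    stream       tcp       nowait       root  internal
```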


This daemon is used to provide IKEv1 authentication for IPSec tunnel establishment.


This daemon is an iSCSI initiator which connects to an iSCSI target (think of it as a network block device) and exposes it locally through a vscsi device. OpenBSD doesn't provide an iSCSI target daemon in its base system, but there is one in ports.


This is a light LDAP server, offering version 3 of the protocol.

ldap client man page

ldapd daemon man page

ldapd daemon configuration file man page


This daemon allows configuring devices that are exposed as a serial port, such as GPS devices.


This daemon is specific to the sparc64 platform and provides services for the ldom feature.


This daemon is used as part of an NFS environment to support file locking.


This daemon is used by MPLS routers to get labels.


This daemon is used to manage print access to a line printer.


This daemon is used by remote NFS clients to learn what the system is currently offering. The showmount command can be used to see what mountd is currently exposing.

mountd man page

showmount man page


This daemon is used to distribute MOP images, which seem related to the Alpha and VAX architectures.


Similar to dvmrpd.


This server services NFS requests from NFS clients. Statistics about NFS (client or server) can be obtained with the nfsstat command.

nfsd man page

nfsstat man page


This daemon is used to establish connections using PPP, but also to create tunnels with L2TP, PPTP and PPPoE. PPP is used by some modems to connect to the Internet.


This daemon is an authoritative DNS name server, which means it holds all the information about a domain name and its subdomains. It receives queries from recursive servers such as unbound, unwind, etc. If you own a domain name and want to manage it from your own system, this is what you want.

nsd man page

nsd configuration file man page


This daemon is an NTP service that keeps the system clock at the correct time. It can use NTP servers or sensors (like GPS devices) as time sources, and also supports using remote servers to challenge the time sources. It can act as a server to provide time to other NTP clients.

ntpd man page
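A small /etc/ntpd.conf illustrating these features might look like this (the pool name and URL are examples):

```
# use a pool of NTP servers as time source
servers pool.ntp.org

# challenge the time sources against the date of an HTTPS server
constraints from "https://www.example.com"
```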


It is a daemon for the OSPF routing protocol (Open Shortest Path First).


Same as above for IPv6.


This daemon receives packets logged by PF rules carrying the "log" keyword and stores them in a logfile that can later be read with tcpdump. Every packet in the logfile carries information about which rule triggered it, so it is very practical for analysis.

pflogd man page



This daemon is used as part of an NFS environment.


This daemon is used on IPv6 routers to advertise routes so clients can automatically pick them up.


This daemon is used to offer RADIUS protocol authentication.


This daemon is used for diskless setups, in which it helps associating an Ethernet address to an IP and hostname.

Information about a diskless setup


The man page says: « rbootd services boot requests from Hewlett-Packard workstation over LAN ».


This daemon accepts incoming connections and distributes them to backends. It supports many protocols and can act transparently; its purpose is to be a front end that dispatches connections to a list of backends while also checking backend status. It has many uses and can also be used in addition to httpd, to add HTTP headers to a request or to apply conditions on HTTP request headers to choose a backend.

relayd man page

relayd control tool man page

relayd configuration file man page


This is a routing daemon using an old but widely supported protocol (RIP).


Same as above but for IPv6.


This daemon is used to keep IPSec gateways synchronized in case a failover is required. This can be used with carp devices.


This daemon gathers monitoring information from the hardware like temperature or disk status. If a check exceeds a threshold, a command can be run.

sensorsd man page

sensorsd configuration file man page


This daemon automatically picks up IPv6 autoconfiguration on the network.


This daemon is used to expose a CGI program as a FastCGI service, allowing the httpd HTTP server to run CGI programs. It is an equivalent of inetd, but for FastCGI.

slowcgi man page
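As an illustration, a hypothetical httpd.conf serving CGI scripts through slowcgi's default socket could look like this (server name and paths are assumptions; the socket path is relative to the /var/www chroot):

```
server "example.com" {
    listen on * port 80
    location "/cgi-bin/*" {
        fastcgi socket "/run/slowcgi.sock"
        root "/"
    }
}
```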


This daemon is the SMTP server that will be used to deliver mails locally or to remote email servers.

smtpd man page

smtpd configuration file man page

smtpd control command man page


This is the daemon handling sound from various sources. It also supports sending local sound to a remote sndiod server.

sndiod man page

sndiod control command man page

mixerctl man page to control an audio device

OpenBSD FAQ about multimedia devices


This daemon is an SNMP server exposing some system metrics to SNMP clients.

snmpd man page

snmpd configuration file man page


This daemon acts as a fake SMTP server that can delay, block or pass emails depending on some rules. It can be used to add IPs to a block list if they try to send an email to a specific address (like a honeypot), pass emails from servers on an accept list, or delay connections from unknown servers (grey list) to make them reconnect a few times before passing the email to the real SMTP server. This used to be a quite effective way to prevent spam, but it becomes less relevant as senders use whole ranges of IPs to send emails: if a big email provider retries from X.Y.Z.1, then X.Y.Z.2 and so on, no single address ever passes the grey list.


This daemon is dedicated to the update of spamd whitelist.


This is the well-known SSH server, allowing secure shell connections from remote clients. It has many features that deserve to be better known, such as restricting commands per public key in the ~/.ssh/authorized_keys file, or chrooted SFTP-only access.

sshd man page

sshd configuration file man page


This daemon is used in NFS environments together with lockd, in order to check whether remote hosts are still alive.


This daemon is used to control a switch pseudo device.

switch pseudo device man page


This is the logging server that receives messages from local programs and stores them in the corresponding logfiles. It can be configured to pipe some messages to a command (programs like sshlockout use this method to learn about IPs that must be blocked), and it can also listen on the network to aggregate logs from other machines. The program newsyslog is used to rotate files (move a file, compress it, allow a new file to be created and remove archives that are too old). Scripts can use the logger command to send text to syslog.

syslogd man page

syslogd configuration file man page

newsyslog man page

logger man page


This daemon is a TFTP server, used to provide kernels over the network for diskless machines or push files to appliances.

Information about a diskless setup


This daemon is used to manipulate the firewall PF to relay TFTP requests to a TFTP server.


This daemon is a recursive DNS server; this is the kind of server listed in /etc/resolv.conf, whose responsibility is to translate a fully qualified domain name into its IP address, asking one server at a time. For example, to resolve www.dataswamp.org, it asks an .org authoritative server for the authoritative server of dataswamp (within the .org top-level domain), then asks the dataswamp.org DNS server for the address of www.dataswamp.org. It also caches queries and validates queries and replies; it is a good idea to have such a server on a LAN with many clients, to share the cache.

unbound man page

unbound configuration file man page
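A minimal unbound.conf sketch for serving a LAN might look like this (addresses are examples):

```
server:
    interface: 192.168.1.1
    # only answer queries from the local network
    access-control: 192.168.1.0/24 allow
```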


This daemon is a local recursive DNS server that does its best to give valid replies. It is designed for nomadic users who may encounter hostile environments, like captive portals or DHCP-provided DNS servers that prevent DNSSEC from working. Unwind regularly polls a few DNS sources (recursion from the root servers, the DHCP-provided server, stub or DNS-over-TLS servers from its configuration file) and chooses the fastest one. It also acts as a local cache, but it can't listen on the network to be used by other clients. It also supports a list of blocked domains as input.

unwind man page

unwind configuration file man page

unwind control command man page


This is the daemon that allows running virtual machines using vmm. As of OpenBSD 6.9, it is capable of running OpenBSD and Linux guests, without graphical interface and with only one core.

vmd man page

vmd configuration file man page

vmd control command man page

vmm driver man page

OpenBSD FAQ about virtualization


This daemon is used to trigger watchdog timer devices if any.


This daemon is used to provide mouse support in the console.


This daemon is used to start the X server and allow users to authenticate themselves and log into their session.

xenodm man page


This daemon is used with a Yellow Page (YP) server to keep and maintain a binding information file.


This daemon offers a YP service using a LDAP backend.


This daemon is a YP server.

Bandwidth limiting on OpenBSD 6.8

Written by Solène, on 07 February 2021.
Tags: #openbsd68 #openbsd #unix #network


This is a February 2021 update of a text originally published in April 2017.


I will explain how to limit bandwidth on OpenBSD using the queuing capability of its firewall PF (Packet Filter). It is a very powerful feature, but it may be hard to understand at first. What is very important to understand is that it's technically not possible to limit the bandwidth of the whole system: once data arrives on your network interface, it's already there and has passed through your router. What is possible is to limit the upload rate, which indirectly caps the download rate.

OpenBSD pf.conf man page about queuing


My home internet access allows me to download at 1600 kB/s and upload at 95 kB/s. An easy way to limit bandwidth is to take a percentage of your upload rate; the same ratio should then apply to your download speed as well (this may not be very precise and may require tweaks).

PF syntax requires bandwidth to be defined in kilobits (kb), not kilobytes (kB); multiplying by 8 converts kB to kb.
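For instance, with the 95 kB/s upload mentioned above, the shell can do the conversion:

```shell
# convert an upload rate from kilobytes to kilobits per second
upload_kB=95
echo "$((upload_kB * 8))"
# prints 760, so the queue would be defined with "760K"
```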


Edit the file /etc/pf.conf as root and add the following before any pass/match/drop rules, in the example my main interface is em0.

# we define a main queue (requirement)
queue main on em0 bandwidth 1G

# set a queue for everything
queue normal parent main bandwidth 200K max 200K default

And reload with `pfctl -f /etc/pf.conf` as root. You can monitor the queues with `systat queue`:

main on em0  1000M fifo        0        0        0        0    0
 normal      1000M fifo   535424 36032467        0        0   60

More control (per user / protocol)

This is only a global queuing rule that applies to everything on the system, but it can be greatly extended for specific needs. For example, I use the program "oasis", a daemon for a peer-to-peer social network; it sometimes has upload bursts because someone is syncing against my computer, so I use the following rules to limit the upload bandwidth of this user.

# within the queue rules
queue oasis parent main bandwidth 150K max 150K

# in your match rules
match on egress proto tcp from any to any user oasis set queue oasis

Instead of a user, the rule could match a "to" address; I used to have such rules when I wanted to limit my upload bandwidth for uploading videos through the Peertube web interface.
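A sketch of such a rule, with a documentation IP standing in for the real upload destination:

```
# within the queue rules
queue uploads parent main bandwidth 150K max 150K

# in your match rules: queue traffic to a given destination address
match out on egress proto tcp from any to 203.0.113.10 set queue uploads
```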

A few tips about the cd command

Written by Solène, on 04 September 2020.
Tags: #unix


While everyone familiar with a shell knows about the cd command, there are a few tips you should know.

Moving to your $HOME directory

$ pwd
$ cd
$ pwd

Using cd without argument will change your current directory to your $HOME.

Moving into another user's $HOME directory

While this should fail most of the time, because people shouldn’t allow anyone to visit their $HOME, there are still use cases for it.

$ cd ~user1
$ pwd
$ cd ~solene
$ pwd

Using ~user as a parameter will move to that user's $HOME directory; note that cd and cd ~youruser have the same result.

Moving to previous directory

This is a very useful command which allows going back and forth between two directories.

$ pwd
$ cd /tmp
$ pwd
$ cd -
$ pwd

When you use cd - the command will move to the previous directory you were in. There are two special variables in your shell: PWD and OLDPWD. When you move somewhere, OLDPWD holds your location from before the move and PWD holds the new path. When you use cd - the two variables get exchanged, which means you can only jump between two paths by using cd - multiple times.

Please note that when using cd - your new location is displayed.
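A short session shows the swap in action (any two directories work; /tmp and /var are just examples):

```shell
cd /tmp
cd /var
echo "$OLDPWD"   # /tmp
echo "$PWD"      # /var
cd -             # prints the new location: /tmp
echo "$OLDPWD"   # now /var, the two variables were exchanged
```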

Changing directory by modifying current PWD

thfr@ showed me a cd feature I had never heard about, and this is the perfect place to write about it. Note that this works in ksh and zsh but is reported not to work in bash.

One example will explain better than any text.

$ pwd
$ cd 1.2.0 2.4.0

This tells cd to replace the first parameter's pattern with the second parameter in the current PWD, and then cd into the result.

$ pwd
$ cd solene user1

This could be done in a bloated way with the following command:

$ cd $(echo $PWD | sed "s/solene/user1/")

I learned it a few minutes ago, but I already see a lot of use cases where I could use it.

Moving into the current directory after removal

There is a specific case where your shell sits in a directory that existed but was deleted and recreated (this often happens when you work in compilation directories).

A simple trick is to tell cd to go to the current location.

$ cd .


$ cd $PWD

And cd will go into the same path and you can start hacking again in that directory.

BitreichCON 2019 talks available

Written by Solène, on 27 August 2019.
Tags: #unix #drist #automation #awk


Earlier in August 2019, BitreichCON 2019 took place. There were awesome talks during those two days, and there are two I would like to share. You can find all the information about this event at the following address, using the Gopher protocol: gopher://bitreich.org/1/con/2019

BrCON talks happen through an audio stream, an ssh session for viewing the current slide, and IRC for questions. I have the markdown files producing the slides (1 title = 1 slide) and the audio recordings.

Simple solutions

This is a talk I made for this conference. It is about using simple solutions for most problems. Simple solutions come with simple tools: unix tools. I explain with real life examples, like how to retrieve my blog article titles from the website using curl, grep, tr or awk.

Link to the audio

Link to the slides

Experiences with drist

Another talk, from parazyd, is about my deployment tool drist, so I feel obligated to share it with you.

In his talk he makes a comparison with slack (the Debian package, not the online community), explains his workflow with drist and how it saves his precious time.

Link to the audio

Link to the slides

About the bitreich community

If you want to know more about the bitreich community, check gopher://bitreich.org or IRC #bitreich-en on Freenode servers.

There is also the bitreich website, a parody of the worst of what you can see on the web every day.

Minimalistic markdown subset to html converter using awk

Written by Solène, on 26 August 2019.
Tags: #unix #awk



As I use different markup languages on my blog, I would like a simpler markup language that does not require an extra package. To do so, I wrote an awk script handling titles, paragraphs and code blocks the same way markdown does.

16 December 2019 UPDATE: adc sent me a patch to add ordered and unordered lists. The code below contains the addition.

It is very easy to use, like: awk -f mmd file.mmd > output.html

The script is the following:


    # state initialization
    BEGIN {
        in_code = 0
        in_list_unordered = 0
        in_list_ordered = 0
        in_paragraph = 0
    }

    {
        # escape < > characters
        gsub(/</, "\\&lt;", $0)
        gsub(/>/, "\\&gt;", $0)

        # close code blocks
        if(! match($0,/^    /)) {
            if(in_code) {
                in_code = 0
                printf "</code></pre>\n"
            }
        }

        # close unordered list
        if(! match($0,/^- /)) {
            if(in_list_unordered) {
                in_list_unordered = 0
                printf "</ul>\n"
            }
        }

        # close ordered list
        if(! match($0,/^[0-9]+\. /)) {
            if(in_list_ordered) {
                in_list_ordered = 0
                printf "</ol>\n"
            }
        }

        # display titles
        if(match($0,/^#/)) {
            if(match($0,/^(#+)/)) {
                printf "<h%i>%s</h%i>\n", RLENGTH, substr($0,index($0,$2)), RLENGTH
            }

        # display code blocks
        } else if(match($0,/^    /)) {
            if(in_code==0) {
                in_code = 1
                printf "<pre><code>"
                print substr($0,5)
            } else {
                print substr($0,5)
            }

        # display unordered lists
        } else if(match($0,/^- /)) {
            if(in_list_unordered==0) {
                in_list_unordered = 1
                printf "<ul>\n"
            }
            printf "<li>%s</li>\n", substr($0,3)

        # display ordered lists
        } else if(match($0,/^[0-9]+\. /)) {
            n=index($0," ")+1
            if(in_list_ordered==0) {
                in_list_ordered = 1
                printf "<ol>\n"
            }
            printf "<li>%s</li>\n", substr($0,n)

        # handle paragraphs
        } else {
            # close p if current line is empty
            if(length($0) == 0 && in_paragraph == 1 && in_code == 0) {
                in_paragraph = 0
                printf "</p>\n"
            }
            # we are still in a paragraph
            if(length($0) != 0 && in_paragraph == 1) {
                print
            }
            # open a p tag if previous line is empty
            if(length(previous_line)==0 && length($0) != 0 && in_paragraph==0) {
                in_paragraph = 1
                printf "<p>%s\n", $0
            }
        }
        previous_line = $0
    }

    # close any still-open tag at end of input
    END {
        if(in_code==1) {
            printf "</code></pre>\n"
        }
        if(in_list_unordered==1) {
            printf "</ul>\n"
        }
        if(in_list_ordered==1) {
            printf "</ol>\n"
        }
        if(in_paragraph==1) {
            printf "</p>\n"
        }
    }

OpenBSD and iSCSI part2: the initiator (client)

Written by Solène, on 21 February 2019.
Tags: #unix #openbsd #iscsi


This is the second article of the series about iSCSI. In this one, you will learn how to connect to an iSCSI target using the OpenBSD base daemon iscsid.

The configuration file of iscsid doesn’t exist by default, its location is /etc/iscsi.conf. It can be easily written using the following:


# $myaddress and $target1 are macros holding your own IPs, defined
# earlier in the file, e.g. myaddress="192.168.1.2" (example address)
target "disk1" {
    initiatoraddr $myaddress
    targetaddr $target1
    targetname "iqn.1994-04.org.netbsd.iscsi-target:target0"
}

While most lines are really obvious, the initiatoraddr line is mandatory; many thanks to cwen@ for pointing this out when I was stuck on it.

The targetname value depends on the iSCSI target server. If you use netbsd-iscsi-target, then you only need to care about the last part, i.e. target0, and replace it with the name of your target (which is target0 for the default one).

Then we can enable the daemon and start it:

# rcctl enable iscsid
# rcctl start iscsid

In your dmesg, you should see a line like:

sd4 at scsibus0 targ 1 lun 0: <NetBSD, NetBSD iSCSI, 0> SCSI3 0/direct fixed t10.NetBSD_0x5c6cf1b69fc3b38a

If you use netbsd-iscsi-target, the whole line should be identical except for the sd4 part, which can change depending on your hardware.

If you don’t see it, you may need to reload iscsid configuration file with iscsictl reload.

Warning: iSCSI is a bit of pain to debug, if it doesn’t work, double check the IPs in /etc/iscsi.conf, check your PF rules on the initiator and the target. You should be at least able to telnet into the target IP port 3260.

Once you found your new sd device, you can format it and mount it as a regular disk device:

# newfs /dev/rsd4c
# mount /dev/sd4c /mnt

iSCSI is far more efficient and faster than NFS, but it has a totally different purpose. I’m using it on my powerpc machines to build packages. This reduces the usage of their old IDE disks while giving better response times and equivalent speed.

OpenBSD and iSCSI part1: the target (server)

Written by Solène, on 21 February 2019.
Tags: #unix #openbsd #iscsi


This is the first article of a series about iSCSI.

iSCSI is a protocol designed for sharing a block device across the network as if it was a local disk. This doesn’t permit using that disk from multiple places at once though, unless you use a specific filesystem like GFS2 or OCFS2 (Linux only). In this article, we will learn how to create an iSCSI target, which is the “server” part of iSCSI: the target is the system holding the disk and making it available to others on the network.

OpenBSD does not have a target server in base; we will have to use net/netbsd-iscsi-target for this. The setup is really simple.

First, we obviously need to install the package, and we will enable the daemon so it starts automatically at boot, but don’t start it yet:

# pkg_add netbsd-iscsi-target
# rcctl enable iscsi_target

The configuration files are in the /etc/iscsi/ folder, which contains the files auths and targets. The two default configuration files are identical. By looking at the source code, it seems that auths is read, but it appears to have no use at all. We will just overwrite it every time we modify targets, to keep them in sync.

Default /etc/iscsi/targets (with comments stripped):

extent0         /tmp/iscsi-target0      0       100MB
target0         rw      extent0

The first line defines the file holding our disk in the second field, and the last field defines its size. When iscsi-target is started, it will create the files as required, with the size defined here.

The second line defines permissions: in that case, the extent0 disk can be used read/write by the allowed network. For this example, I will only change the netmask to suit my network, then copy targets over auths.

Let’s start the daemon:

# rcctl start iscsi_target
# rcctl check iscsi_target

If you want to restrict access using PF, you only have to allow TCP port 3260 from the network that will connect to the target. The corresponding rule would look like this:

pass in proto tcp to port 3260


Drist release with persistent ssh

Written by Solène, on 18 February 2019.
Tags: #unix #automation #drist


Drist release 1.04 is now available. It adds support for the -p flag, which makes the ssh connection persistent across the run using the ssh ControlMaster feature. This fixes a use case where you modify ssh keys in two operations (copy file + script to change permissions), and it makes drist a lot faster for short tasks.

Drist makes a first ssh connection to get the real hostname of the remote machine, and then runs ssh again for each step (copy, copy-hostname, absent, absent-hostname, script, script-hostname). This means that in the use case where you copy one file and reload a service, it was making 3 connections. Now with the persistent flag, drist keeps the first connection open and reuses it, closing the control socket at the end of the run.

Drist is now 121 lines long.

Download v1.04

SHA512 checksum, split in two to not break the display:


Aspell to check spelling

Written by Solène, on 12 February 2019.
Tags: #unix


I never used a command line utility to check the spelling in my texts because I did not know how to do it. After taking five minutes to learn, I feel guilty about not having used one before, as it is really simple.

First, you want to install the aspell package, which may already be there, pulled in as a dependency. On OpenBSD it’s easy:

# pkg_add aspell

I will only explain how to use it on text files. I think it is possible to have some integration with text editors but then, it would be more relevant to check out the editor documentation.

If I want to check the spelling in my file draft.txt it is as simple as:

$ aspell -l en_EN -c draft.txt

The -l en_EN parameter depends on your locale; I have fr_FR.UTF-8, so aspell uses it by default if I don’t enforce another language. With this command, aspell opens an interactive display in the terminal.

The output looks like this, with the word ful highlighted, which I cannot render in my article.

It's ful of mistakkes!

I dont know how to type corectly!

1) flu                                              6) FL
2) foul                                             7) fl
3) fuel                                             8) UL
4) full                                             9) fol
5) furl                                             0) fur
i) Ignore                                           I) Ignore all
r) Replace                                          R) Replace all
a) Add                                              l) Add Lower
b) Abort                                            x) Exit


I am asked how I want to resolve the issue with ful; as I wanted to write full, I type 4 and aspell replaces the word ful with full. It then automatically jumps to the next error found, mistakkes in my case:

It's full of mistakkes!

I dont know how to type corectly!

1) mistakes                                         6) misstates
2) mistake's                                        7) mistimes
3) mistake                                          8) mistypes
4) mistaken                                         9) stake's
5) stakes                                           0) Mintaka's
i) Ignore                                           I) Ignore all
r) Replace                                          R) Replace all
a) Add                                              l) Add Lower
b) Abort                                            x) Exit


and it will continue until there are no errors left, then the file is saved with the changes.

I will use aspell every day from now on.

Port of the week: sct

Written by Solène, on 07 February 2019.
Tags: #unix #openbsd

Comments on Mastodon

It has been a long time since I last wrote a “port of the week”.

This week, I am happy to present you sct, a very small utility to set the color temperature of your screen. You can install it on OpenBSD with pkg_add sct, and its usage is really simple: just run sct $temp where $temp is the temperature you want for your screen.

The default temperature is 6500; if you lower this value, the screen shifts toward red, meaning it will appear less blue, which may be more comfortable for some people. The temperature you want to use depends on the screen and on your preference: I have one screen which is fine at 5900, but another old screen turns too red below 6200!

You can add sct 5900 to your .xsession file to start it when you start your X11 session.

There is an alternative to sct named redshift; it is more complicated as you need to tell it your location with latitude and longitude and, as a daemon, it will continuously adjust your screen temperature depending on the time. This is possible because when you know your location on earth and the time, you can compute the sunrise and dawn times. sct is not a daemon: you run it once, and it does not change the temperature until you call it again.

How to parallelize Drist

Written by Solène, on 06 February 2019.
Tags: #drist #automation #unix

Comments on Mastodon

This article will show you how to make drist faster by using it on multiple servers at the same time, in a correct way.

What is drist?

It is easy to parallelize drist (this works for everything, actually) using a Makefile. I use this to deploy a configuration on several servers at the same time, which is way faster.

A simple BSD Make compatible Makefile looks like this:

SERVERS=tor-relay.local srvmail.tld srvmail2.tld
install: ${SERVERS}
${SERVERS}:
	drist $*
.PHONY: all install ${SERVERS}

This creates a target for each server in my list which will call drist. Typing make install will iterate over the $SERVERS list, but it is also possible to use make -j 3 to tell make to use 3 parallel jobs. The outputs may be mixed though.

You can also use make tor-relay.local if you don't want make to iterate over all servers. This doesn't do more than typing drist tor-relay.local in this example, but your Makefile may include other logic before/after.

If you want to type make to deploy everything instead of make install you can add the line all: install in the Makefile.

If you use GNU Make (gmake), the file requires a small change:

The part ${SERVERS}: must be changed to ${SERVERS}: %:. I think gmake will print a warning, but I did not manage to get a better result. If you have a solution to remove the warning, please tell me.

If you are not comfortable with Makefiles, the .PHONY line tells make that the targets are not valid files.
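To see how the targets behave without touching real servers, here is a self-contained sketch where echo stands in for the drist call; the server names and the temporary directory are made up for the demonstration:

```shell
# Build a throwaway Makefile following the pattern above,
# with "echo deploying" replacing the real drist invocation.
dir=$(mktemp -d)
cd "$dir"
printf 'SERVERS=host1 host2 host3\n' > Makefile
printf 'all: install\n' >> Makefile
printf 'install: ${SERVERS}\n' >> Makefile
printf '${SERVERS}:\n\techo deploying $@\n' >> Makefile
printf '.PHONY: all install ${SERVERS}\n' >> Makefile
# -j 3 runs the three per-server targets in parallel
make -s -j 3 install
```

With -j 3 the three "deploying" lines may appear in any order, which illustrates the mixed-output caveat mentioned above.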

Make is awesome!

Vincent Delft talk at FOSDEM 2019: OpenBSD as a full-featured NAS

Written by Solène, on 05 February 2019.
Tags: #unix #openbsd

Comments on Mastodon

Hi, I rarely post about external links or other people's work, but at FOSDEM 2019 Vincent Delft gave a talk about running OpenBSD as a full-featured NAS.

I do use OpenBSD on my NAS, and I had wanted to write an article about it for a long time but never did. Thanks to Vincent, I can just share his work, which is very interesting if you plan to build your own NAS.

Videos can be downloaded directly with the following links provided by FOSDEM:

Transfer your files with Kermit

Written by Solène, on 31 January 2019.
Tags: #unix #kermit

Comments on Mastodon

Hi, for a long time I have wanted to write this article. The topic is Kermit, a file transfer protocol from the 80s which solved the problems of that era (text and binary files, poor lines, high latency etc.).

There is a comm/kermit package on OpenBSD, and I am going to show you how to use it. The package provides the program ckermit, which is a client/server implementation of kermit.

Kermit is a lot of things: there is the protocol, but it's also the client/server program. When you type kermit, it opens a kermit shell, where you can type commands or write kermit scripts. This also allows scripts to be written using kermit in the shebang.

I personally use kermit over ssh to retrieve files from my remote server, this requires kermit on both machines. My script is the following:

#!/usr/local/bin/kermit +
set host /pty ssh -t -e none -l solene perso.pw kermit
remote cd /home/ftp/
cd /home/solene/Downloads/
reget /recursive /delete .

This connects to the remote server and starts kermit there. It changes the current directory on the remote server to /home/ftp/, locally goes into /home/solene/Downloads, and then starts retrieving data, resuming previous transfers if not finished (reget command); every finished file is deleted from the remote server. Once done, it closes the ssh connection and exits.

The transfer interface looks like this. It shows how you are connected, which file is currently transferring, its size, the percentage done (0% in the example), time left, speed and some other information.

C-Kermit 9.0.302 OPEN SOURCE:, 20 Aug 2011, solene.perso.local []

   Current Directory: /home/downloads/openbsd
        Network Host: ssh -t -e none -l solene perso.pw kermit (UNIX)
        Network Type: TCP/IP
              Parity: none
         RTT/Timeout: 01 / 03
           RECEIVING: src.tar.gz => src.tar.gz => src.tar.gz
           File Type: BINARY
           File Size: 183640885
        Percent Done:
 Estimated Time Left: 00:43:32
  Transfer Rate, CPS: 70098
        Window Slots: 1 of 30
         Packet Type: D
        Packet Count: 214
       Packet Length: 3998
         Error Count: 0
          Last Error:
        Last Message:

X to cancel file, Z to cancel group, <CR> to resend last packet,
E to send Error packet, ^C to quit immediately, ^L to refresh screen.

What's interesting is that you can skip a file by pressing “X”: kermit stops downloading it (but keeps the partial file for later resuming) and starts downloading the next file. It can be useful when you transfer a bunch of files and one is really big and you don't want it now and don't want to type the command by hand; just press “X” and it is skipped. Z or E will exit the transfer and close the connection.

Speed can be improved by adding the following lines before the reget command:

set reliable
set window 32
set receive packet-length 9024

This improves performance because nowadays our networks are mostly reliable and fast; Kermit was designed at a time when serial lines were used to transfer data. It is also reported that Kermit is in use in the ISS (International Space Station), though I can't verify if it's still the case.

I never had any issue while transferring, even after resuming a file many times or using a poor 4G hot-spot with 20s of latency.

I did some tests and I get the same performance as rsync over the Internet; it's a bit slower over LAN though.

I only described one use case; scripts can be made, and there are a lot of other commands. You can type “help” in the kermit shell to get some hints, and “?” will display the command list.

It can be used interactively, you can queue files by using “add” to create a send-list, and then proceed to transfer the queue.

Another way to use it is to start the local kermit shell, then type “ssh user@remote-server” which will ssh into a remote box. Then you can type “kermit” and type kermit commands; this makes a link between your local kermit and the remote one. You can go back to the local kermit by typing “Ctrl-\” and return to the remote one by entering the command “C”.

This is a piece of software I found by lurking in the ports tree looking for new software, and I fell in love with it. It's really reliable.

It does a different job than rsync; I don't think it can preserve times, permissions etc., but it can be scripted completely, using parameters, and it's an awesome piece of software!

It should support HTTP, HTTPS and FTP transfers too, as a client, but I did not get them to work. On OpenBSD, the HTTPS support is disabled; it requires some work to switch it to LibreSSL.

You can find information on the official website.

Fun tip #3: Split a line using ed

Written by Solène, on 04 December 2018.
Tags: #fun-tip #unix #openbsd68

Comments on Mastodon

In this new article I will explain how to programmatically split a line (with a newline) using ed.

We will do so with commands sent to ed on its stdin. The logic is to locate the part where the newline must be added and to check whether a character needs to be replaced.

this is a file
with a too much line in it that should be split
but not this one.

In order to do so, we will format the command list using printf(1), with a small trick to insert the newline. The command list is the following:

/too much line
s/that /that\
/
,n

This searches for the first line matching “too much line” and then replaces “that ” with “that” followed by a newline. The trick is to escape the newline with a backslash so the substitution command accepts it; at the end we print the whole file (replace ,n with w to write it instead).

The resulting command line is:

$ printf '/too much line\ns/that /that\\\n/\n,n\n' | ed file.txt
81
with a too much line in it that should be split
should be split
1       this is a file
2       with a too much line in it that
3       should be split
4       but not this one.
?
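The same commands can be rehearsed on a scratch copy, replacing ,n with w so the change is saved to disk; the directory and file name are only for the example:

```shell
# Work on a throwaway copy of the sample file
cd "$(mktemp -d)"
printf 'this is a file\nwith a too much line in it that should be split\nbut not this one.\n' > file.txt
# The backslash-escaped newline inside the replacement is what
# splits the matched line in two; -s silences the byte counts.
printf '/too much line\ns/that /that\\\n/\nw\n' | ed -s file.txt
cat file.txt
```

The file now contains four lines, with “should be split” moved onto its own line.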

Configuration deployment made easy with drist

Written by Solène, on 29 November 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Hello, in this article I will present you my deployment tool drist (if you speak Russian, I am already aware of what you think). It reached feature-complete status today, and now I can write about it.

As a system administrator, I started using Salt a few years ago, and honestly, I cannot cope with it anymore. It is slow, it can get very complicated for some tasks like correctly ordering commands, and a configuration file can become a nightmare when you start using conditions in it.

You may already have read and heard a bit about drist as I wrote an article about my presentation of it at bitreichcon 2018.


I also tried alternatives like ansible, puppet, Rex etc. One day, while lurking in the ports tree, I found sysutils/radmind, which caught my interest even though it is really poorly documented. It is a project from 1995 if I remember correctly, but I liked the base idea: radmind works with files. You create a known working set of files for your system, and you can propagate that whole set to other machines, or see the differences between the reference and the current system. Sets can be negative, meaning the listed files should not be present on the system, and it is also possible to add extra sets for specific hosts. The whole thing is really cumbersome and requires a lot of work, and I found little documentation, so I did not use it; but it led me to write my own deployment tool, using ideas from radmind (working with files) and from Rex (using a script for making changes).


drist aims at being simple to understand and pluggable with standard tools. There is no special syntax to learn, no daemon to run, no agent, and it relies on base tools like awk, sed, ssh and rsync.

drist is cross-platform as it has few requirements, but it is not well suited for deploying on too many different operating systems.

When executed, drist runs six steps in a specific order; you can use only the steps you need.

Shamelessly copied from the man page, with explanations after:

  1. If folder files exists, its content is copied to the server using rsync(1).
  2. If folder files-HOSTNAME exists, its content is copied to the server using rsync(1).
  3. If folder absent exists, the filenames in it are deleted on the server.
  4. If folder absent-HOSTNAME exists, the filenames in it are deleted on the server.
  5. If file script exists, it is copied to server and executed there.
  6. If file script-HOSTNAME exists, it is copied to server and executed there.

In the previous list, all the existence checks are done from the current working directory where drist is started. The text HOSTNAME is replaced by the output of uname -n on the remote server, and files are copied starting from the root directory.

drist does not do anything more. In a more literal manner, it copies files to the remote server using a local filesystem tree (folder files), deletes on the remote server all files listed in another local filesystem tree (folder absent), and runs on the remote server a script named script.

Each of these can be customized per host by adding a “-HOSTNAME” suffix to the folder or file name, because experience taught me that some hosts do require specific configuration.

If a folder or a file does not exist, drist will skip it. So it is possible to only copy files, or only execute a script, or delete files and execute a script after.
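As an illustration, here is what a small module could look like on disk; the host name, paths and file contents below are hypothetical, drist only cares about the folder and file names in the current directory:

```shell
# Sketch of a minimal drist module tree (made-up example host "www1.example.com")
cd "$(mktemp -d)"
# files/ is copied to every host; files-HOSTNAME/ only to that host
mkdir -p files/etc files-www1.example.com/etc absent/tmp
printf 'permit persist :wheel\n' > files/etc/doas.conf
printf 'hello from www1\n' > files-www1.example.com/etc/motd
# an empty file under absent/ marks that remote path for deletion
touch absent/tmp/old_config
# script runs on every host after the copy and absent steps
printf '#!/bin/sh\nrcctl reload httpd\n' > script
find . -type f | sort
```

Running drist user@www1.example.com from this directory would copy both file trees, delete /tmp/old_config remotely, and execute the script.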

Drist usage

The usage is pretty simple. drist has 3 flags, all optional.

  • -n flag will show what would happen (simulation mode)
  • -s flag tells drist to use sudo on the remote host
  • -e flag with a parameter will tell drist to use a specific path for the sudo program

The remote server address (ssh format like user@host) is mandatory.

$ drist my_user@my_remote_host

drist looks at the files and folders in the current directory when executed; this allows you to organize them as you want, using your filesystem and a revision control system.

Simple examples

Here are two examples to illustrate its usage. They are kept simple for learning purposes.

Deploying ssh keys

I want to easily copy my users' ssh keys to a remote server.

$ mkdir drist_deploy_ssh_keys
$ cd drist_deploy_ssh_keys
$ mkdir -p files/home/my_user1/.ssh
$ mkdir -p files/home/my_user2/.ssh
$ cp -fr /path/to/key1/id_rsa files/home/my_user1/.ssh/
$ cp -fr /path/to/key2/id_rsa files/home/my_user2/.ssh/
$ drist user@remote-host
Copying files from folder "files":

Deploying authorized_keys file

We can easily create the authorized_keys file using cat.

$ mkdir drist_deploy_ssh_authorized
$ cd drist_deploy_ssh_authorized
$ mkdir -p files/home/user/.ssh/
$ cat /path/to/user/keys/*.pub > files/home/user/.ssh/authorized_keys
$ drist user@remote-host
Copying files from folder "files":

This can be automated using a makefile running the cat command and then drist.

all:
	cat /path/to/keys/*.pub > files/home/user/.ssh/authorized_keys
	drist user@remote-host

Installing nginx on FreeBSD

This module (i.e. a folder which contains material for drist) will install nginx on FreeBSD and start it.

$ mkdir deploy_nginx
$ cd deploy_nginx
$ cat >script <<EOF
test -f /usr/local/bin/nginx
if [ \$? -ne 0 ]; then
    pkg install -y nginx
fi
sysrc nginx_enable=yes
service nginx restart
EOF
$ drist user@remote-host
Executing file "script":
    Updating FreeBSD repository catalogue...
    FreeBSD repository is up to date.
    All repositories are up to date.
    The following 1 package(s) will be affected (of 0 checked):

    New packages to be INSTALLED:
            nginx: 1.14.1,2

    Number of packages to be installed: 1

    The process will require 1 MiB more space.
    421 KiB to be downloaded.
    [1/1] Fetching nginx-1.14.1,2.txz: 100%  421 KiB 430.7kB/s    00:01
    Checking integrity... done (0 conflicting)
    [1/1] Installing nginx-1.14.1,2...
    ===> Creating groups.
    Using existing group 'www'.
    ===> Creating users
    Using existing user 'www'.
    [1/1] Extracting nginx-1.14.1,2: 100%
    Message from nginx-1.14.1,2:

    Recent version of the NGINX introduces dynamic modules support.  In
    FreeBSD ports tree this feature was enabled by default with the DSO
    knob.  Several vendor's and third-party modules have been converted
    to dynamic modules.  Unset the DSO knob builds an NGINX without
    dynamic modules support.

    To load a module at runtime, include the new `load_module'
    directive in the main context, specifying the path to the shared
    object file for the module, enclosed in quotation marks.  When you
    reload the configuration or restart NGINX, the module is loaded in.
    It is possible to specify a path relative to the source directory,
    or a full path, please see
    https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/ and
    http://nginx.org/en/docs/ngx_core_module.html#load_module for

    Default path for the NGINX dynamic modules is

    nginx_enable:  -> yes
    Performing sanity check on nginx configuration:
    nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
    nginx not running? (check /var/run/nginx.pid).
    Performing sanity check on nginx configuration:
    nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
    Starting nginx.

More complex example

Now I will show more complex examples, with host-specific steps. I will not display the output because the previous outputs were sufficient to give a rough idea of what drist does.

Removing someone's ssh access

We will reuse an existing module here: a user should not be able to log in anymore on their account on the servers using their ssh key.

$ cd ssh
$ mkdir -p absent/home/user/.ssh/
$ touch absent/home/user/.ssh/authorized_keys
$ drist user@server

Installing php on FreeBSD

The following module will install php and remove the opcache.ini file; it will also install php72-pdo_pgsql if run on the server production.domain.private.

$ mkdir deploy_php && cd deploy_php
$ mkdir -p files/usr/local/etc
$ cp /some/correct/config.ini files/usr/local/etc/php.ini
$ cat > script <<EOF
test -f /usr/local/etc/php-fpm.conf || pkg install -f php-extensions
sysrc php_fpm_enable=yes
service php-fpm restart
test -f /usr/local/etc/php/opcache.ini && rm /usr/local/etc/php/opcache.ini
EOF
$ cat > script-production.domain.private <<EOF
test -f /usr/local/etc/php/pdo_pgsql.ini || pkg install -f php72-pdo_pgsql
service php-fpm restart
EOF

The monitoring machine

This one is unique, and I would like to avoid applying its configuration to another server (that happened to me once with salt and it was really, really bad). So I will do all the job using the hostname-specific steps.

$ mkdir my_unique_machine && cd my_unique_machine
$ mkdir -p files-unique-machine.private/usr/local/etc/{smokeping,munin}
$ cp /good/config files-unique-machine.private/usr/local/etc/smokeping/config
$ cp /correct/conf files-unique-machine.private/usr/local/etc/munin/munin.conf
$ cat > script-unique-machine.private <<EOF
pkg install -y smokeping munin-master munin-node
munin-node-configure --shell --suggest | sh
sysrc munin_node_enable=yes
sysrc smokeping_enable=yes
service munin-node restart
service smokeping restart
EOF
$ drist user@incorrect-host
$ drist user@unique-machine.private
Copying files from folder "files-unique-machine.private":
Executing file "script-unique-machine.private":

Nothing happened on the wrong system.

Be creative

Everything can be automated easily. I have a makefile in a lot of my drist modules, because then I just need to type “make” to run them correctly. Sometimes it requires concatenating files before running, sometimes I do not want to make a mistake or have to remember which module applies to which server (if it's specific), so the makefile does the job for me.

One of my drist modules looks at all my SSL certificates from another module and generates a reed-alert configuration file using awk, then deploys it on the monitoring server. All I do is type “make” and enjoy my free time.

How to get it and install it

  • Drist can be downloaded at this address.
  • Sources can be cloned using git clone git://bitreich.org/drist

In the sources folder, type “make install” as root; this will copy the drist binary to /usr/bin/drist and its man page to /usr/share/man/man1/drist.1.

For copying files, drist requires rsync on both local and remote hosts.

For running the script file, a sh compatible shell is required (csh is not working).

Fun tip #2: Display trailing spaces using ed

Written by Solène, on 29 November 2018.
Tags: #unix #fun-tip #openbsd68

Comments on Mastodon

This second fun-tip article will explain how to display trailing spaces in a text file, using the ed(1) editor. ed has a special command for showing a dollar character at the end of each line, which means that if a line ends with spaces, the dollar character will be separated from the last visible character of the line.

$ echo ",pl" | ed some-file.txt
453
This second fun-tip article will explain how to display trailing$
spaces in a text file, using the$
ed has a special command for showing a dollar character at the end of$
each line, which mean that if the line has some spaces, the dollar$
character will spaced from the last visible line character.$
.Bd -literal -offset indent$
echo ",pl" | ed some-file.txt$

This is the output of the article file while I am writing it. As you can notice, there is no trailing space here.

The first number shown in the ed output is the file size, because ed starts at the end of the file and then waits for commands.

If I use that very same command on a small text file with trailing spaces, the following result is expected:

49
this is full    $
of trailing  $
spaces      !    $
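This can be reproduced with a throwaway file; the -s flag only hides the byte count that ed prints when loading the file:

```shell
# Create a scratch file with trailing spaces on two of its lines
cd "$(mktemp -d)"
printf 'this is full    \nof trailing  \nno spaces here!\n' > spaces.txt
# ,pl prints every line with a $ marking the real end of line
printf ',pl\n' | ed -s spaces.txt
```

Lines with trailing spaces show a gap before the closing $, while clean lines have the $ glued to the last character.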

It is also possible to display line numbers using the “n” command instead of the “p” command. This would produce the following result for my current article file:

1       .Dd November 29, 2018$
2       .Dt "Show trailing spaces using ed"$
3       This second fun-tip article will explain how to display trailing$
4       spaces in a text file, using the$
5       .Lk https://man.openbsd.org/ed ed(1)$
6       editor.$
7       ed has a special command for showing a dollar character at the end of$
8       each line, which mean that if the line has some spaces, the dollar$
9       character will spaced from the last visible line character.$
10      $
11      .Bd -literal -offset indent$
12      echo ",pl" | ed some-file.txt$
13      453$
14      .Dd November 29, 2018
15      .Dt "Show trailing spaces using ed"
16      This second fun-tip article will explain how to display trailing
17      spaces in a text file, using the
18      .Lk https://man.openbsd.org/ed ed(1)
19      editor.
20      ed has a special command for showing a \$ character at the end of
21      each line, which mean that if the line has some spaces, the \$
22      character will spaced from the last visible line character.
24      \&.Bd \-literal \-offset indent
25      \echo ",pl" | ed some-file.txt
26      .Ed$
27      $
28      This is the output of the article file while I am writing it. As you$
29      can notice, there is no trailing space here.$
30      $
31      The first number shown in the ed output is the file size, because ed$
32      starts at the end of the file and then, wait for commands.$
33      $
34      If I use that very same command on a small text files with trailing$
35      spaces, the following result is expected:$
36      $
37      .Bd -literal -offset indent$
38      49$
39      this is full
40      of trailing
41      spaces      !
42      .Ed$
43      $
44      It is also possible to display line numbers using the "n" command$
45      instead of the "p" command.$
46      This would produce this result for my current article file:$
47      .Bd -literal -offset indent$

This shows my article file with each line numbered, plus the position of the last character of each line; this is awesome!

I have to admit though that including my own article as an example is blowing my mind, especially as I am writing it using ed.

Tor part 6: onionshare for sharing files anonymously

Written by Solène, on 21 November 2018.
Tags: #tor #unix #network #openbsd68

Comments on Mastodon

If for some reason you need to share a file anonymously, this can be done through Tor using the port net/onionshare. Onionshare starts a web server displaying a unique page with a list of shared files and a “Download Files” button leading to a zip file.

While waiting for a download, onionshare displays HTTP logs. By default, onionshare exits upon successful download of the files, but this can be changed with the flag --stay-open.

Its usage is very simple: execute onionshare with the list of files to share, as you can see in the following example:

solene@computer ~ $ onionshare Epictetus-The_Enchiridion.txt
Onionshare 1.3 | https://onionshare.org/
Connecting to the Tor network: 100% - Done
Configuring onion service on port 17616.
Starting ephemeral Tor onion service and awaiting publication
Settings saved to /home/solene/.config/onionshare/onionshare.json
Preparing files to share.
 * Running on (Press CTRL+C to quit)
Give this address to the person you're sending the file to:

Press Ctrl-C to stop server

Now, I need to give the address http://3ngjewzijwb4znjf.onion/hybrid-marbled to the receiver, who will need a web browser with Tor to download it.

Tor part 5: onioncat for IPv6 VPN over tor

Written by Solène, on 13 November 2018.
Tags: #tor #unix #network #openbsd68

Comments on Mastodon

This article is about a software named onioncat; it is available as a package on most Unix and Linux systems. This software allows you to create an IPv6 VPN over Tor, with no restriction on network usage.

First, we need to install onioncat, on OpenBSD:

$ doas pkg_add onioncat

Run a tor hidden service, as explained in one of my previous articles, and get the hostname value. If you run multiple hidden services, pick one hostname.

# cat /var/tor/ssh_hidden_service/hostname

Now that we have the hostname, we just need to run ocat.

# ocat g6adq2w15j1eakzr.onion

If everything works as expected, a tun interface will be created, with a fe80:: link-local IPv6 address and a fd87:: address assigned to it.

Your system is now reachable, via Tor, through its IPv6 address starting with fd87::. It supports every IP protocol. Instead of using the torsocks wrapper and a .onion hostname, you can use the IPv6 address with any software.

Fun tip #1: Apply a diff with ed

Written by Solène, on 13 November 2018.
Tags: #fun-tip #unix #openbsd68

Comments on Mastodon

I am starting a new kind of article that I chose to name “fun facts”. These articles will be about one-liners which can have some kind of use, or that I find interesting from a technical point of view. While not useless, these commands may only be useful in very specific cases.

The first of its kind will explain how to programmatically transform file1 into file2 using the output of diff, from the command line and without a patch file.

First, create a file with some content for the example:

$ printf "first line\nsecond line\nthird line\nfourth line with text\n" > file1
$ cp file1{,.orig}
$ printf "very first line\nsecond line\n third line\nfourth line\n" > file1

We will use diff(1) -e flag with the two files.

$ diff -e file1 file1.orig
4c
fourth line with text
.
3c
third line
.
1c
first line
.

The diff(1) output is a batch of ed(1) commands which will transform file1 into file1.orig. This can be embedded into a script as in the following example. We also add the w command at the end to save the file after the edits.

ed file1 <<EOF
4c
fourth line with text
.
3c
third line
.
1c
first line
.
w
EOF

This is a quite convenient way to transform a file into another file without pushing the entire file. It can be used in a deployment script, and it is more precise and less error-prone than a sed command.
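The whole round trip can be checked in one pipeline: let diff -e generate the commands, append w, and feed everything to ed. A sketch with throwaway files, reusing the sample content from above:

```shell
# Recreate the two files from the earlier example in a scratch directory
cd "$(mktemp -d)"
printf 'first line\nsecond line\nthird line\nfourth line with text\n' > file1
cp file1 file1.orig
printf 'very first line\nsecond line\n third line\nfourth line\n' > file1
# diff -e emits ed commands turning file1 into file1.orig;
# the appended w makes ed save the result
(diff -e file1 file1.orig; echo w) | ed -s file1
# file1 is now identical to file1.orig again
cmp -s file1 file1.orig && echo files match
```

This shows why the technique suits deployment scripts: only the ed command stream has to travel, not the whole file.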

In the same way, we can use ed to alter a configuration file by writing the instructions ourselves, without using diff(1). The following script changes the first line containing “Port 22” into “Port 2222” in /etc/ssh/sshd_config.

ed /etc/ssh/sshd_config <<EOF
/Port 22/c
Port 2222
.
w
EOF

The sed(1) equivalent would be:

sed -i'' 's/.*Port 22.*/Port 2222/' /etc/ssh/sshd_config

Both programs have their use, pros and cons. The most important is to use the right tool for the right job.

Tor part 4: run a relay

Written by Solène, on 08 November 2018.
Tags: #unix #tor

Comments on Mastodon

In this fourth Tor article, I will quickly cover how to run a Tor relay; the Tor project already has a very nice and up-to-date guide for setting up a relay. Those relays are what make Tor usable: with more relays, Tor gets more bandwidth and its users become harder to trace, because that means more traffic to analyze.

A relay server can be an exit node, which relays Tor traffic to the outside world. This implies a lot of legal issues; the Tor Project foundation offers to help you if your exit node gets you in trouble.

Remember that being an exit node is optional. Most relays are not exit nodes: they either relay traffic between relays, or act as a guard, which is an entry point to the Tor network. The guard receives the request over the non-Tor network and sends it to the next relay of the user's circuit.

Running a relay requires a decent CPU (capable of some crypto) and a huge amount of bandwidth: at least 10 Mb/s as a minimal requirement. If you have less, you can still run a bridge with obfs4, but I won't cover it here.

When running a relay, you can set a daily/weekly/monthly traffic limit, so your relay stops relaying when it reaches the quota. It's quite useful if you don't have unmetered bandwidth; you can also limit the bandwidth allowed to Tor.
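Those limits are set in the torrc configuration file. As a sketch (the nickname and numbers are placeholders; check the Tor manual for your version), a non-exit relay with a monthly quota and a bandwidth cap could look like:

```
# torrc fragment: hypothetical non-exit relay with accounting
ORPort 9001
Nickname myrelaynickname
ExitRelay 0
# stop relaying once 500 GB have been used in the accounting period,
# which restarts on the 1st of each month
AccountingStart month 1 00:00
AccountingMax 500 GB
# cap the bandwidth given to Tor
RelayBandwidthRate 5 MB
RelayBandwidthBurst 10 MB
```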

To get real-time information about your relay, the software Nyx (net/nyx) is a top-like Tor front end which shows Tor's CPU usage, bandwidth, connections and logs in real time.

The awesome Official Tor guide

File versioning with rcs

Written by Solène, on 31 October 2018.
Tags: #openbsd68 #highlight #unix

Comments on Mastodon

In this article I will present the rcs tools, and we will use them for versioning files in /etc to track changes between edits. These tools are part of the OpenBSD base install.


You need to create an RCS folder where your files are, so the file versions will be saved in it. I will use /etc in the examples; you can adapt to your needs.

# cd /etc
# mkdir RCS

The following examples use the command ci -u. The reason for this will be explained later.

Tracking a file

We need to add a file to the RCS directory so we can track its revisions. Each time we proceed, we will create a new revision of the file, which contains the whole file at that point in time. This allows us to see the changes between revisions, and the date of each revision (among other information).

I really recommend tracking the files you edit in your system, or even configuration files in your user directory.

In the next example, we will create the first revision of our file with ci, and we will have to write a short message about it, like what that file does. Once we have written the message, we validate it with a single dot on its own line.

# cd /etc
# ci -u fstab
fstab,v  <--  fstab
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> this is the /etc/fstab file
>> .
initial revision: 1.1

Editing a file

Editing a file has multiple steps, using ci and co:

  1. check out the file and lock it; this makes the file available for writing and prevents another co on it (due to the lock)
  2. edit the file
  3. commit the new file + checkout

When using ci to store the new revision, we need to write a small message; try to make it clear and short. The log messages can be seen in the file history, which should help you know which change was made and why. The full process is shown in the following example.

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v  <--  fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
revision 1.4 (unlocked)

View changes since last version

Continuing the previous example, we will use rcsdiff to check the changes since the last revision.

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
# echo "something wrong" >> fstab
# rcsdiff -u fstab
--- fstab   2018/10/28 14:28:29 1.1
+++ fstab   2018/10/28 14:30:41
@@ -9,3 +9,4 @@
 52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong

The -u flag produces a unified diff, which I find easier to read. Lines with + show additions, and lines with - show deletions (there are none in the example).

Use of ci -u

The examples used ci -u because, if you use ci some_file, the file is saved in the RCS folder but disappears from its original place. You would then have to use co some_file to get it back (read-only).

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v  <--  fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
# ls fstab
ls: fstab: No such file or directory
# co fstab
RCS/fstab,v  -->  fstab
revision 1.5
# ls fstab

Using ci -u is very convenient because it prevents the user from forgetting to check out the file after committing the changes.

Show existing revisions of a file

# rlog fstab
RCS file: RCS/fstab,v
Working file: fstab
head: 1.2
locks: strict
access list:
symbolic names:
keyword substitution: kv
total revisions: 2;     selected revisions: 2
new file
revision 1.2
date: 2018/10/28 14:45:34;  author: solene;  state: Exp;  lines: +1 -0;
Adding a disk
revision 1.1
date: 2018/10/28 14:45:18;  author: solene;  state: Exp;
Initial revision

We have revisions 1.1 and 1.2. If we want to display the file as of revision 1.1, we can use the following command:

# co -p1.1 fstab
RCS/fstab,v  -->  standard output
revision 1.1
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2

Note that there is no space between the flag and the revision! This is required.

We can see that the command outputs some extra information about the file, and "done" at the end. This extra information is sent to stderr, while the actual file content is sent to stdout. That means if we redirect stdout to a file, we get only the file content.

# co -p1.1 fstab > a_file
RCS/fstab,v  -->  standard output
revision 1.1
# cat a_file
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2

Show a diff of a file since a revision

We can use rcsdiff with the -r flag to show the changes between the current version and a specific revision.

# rcsdiff -u -r1.1 fstab
--- fstab   2018/10/29 14:45:18 1.1
+++ fstab   2018/10/29 14:45:34
@@ -9,3 +9,4 @@
 52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong

Tor part 3: Tor Browser

Written by Solène, on 24 October 2018.
Tags: #openbsd68 #openbsd #unix #tor

Comments on Mastodon

In this third Tor article, we will discover the web browser Tor Browser.

The Tor Browser is an official Tor project. It is a modified Firefox, including some changed default settings and some extensions. The default changes are all related to privacy and anonymity. It has been made so that it is easy to browse the Internet through Tor without leaving behind any information which could help identify you, because there is much more information than your public IP address that could be used against you.

It requires the tor daemon to be installed and running, as covered in my first Tor article.

Using it is really straightforward.

How to install tor-browser

$ pkg_add tor-browser

How to start tor-browser

$ tor-browser

It will create a ~/TorBrowser-Data folder at launch. You can remove it whenever you want; it doesn't contain anything sensitive but is required for the browser to work.

New cl-yag version

Written by Solène, on 12 October 2018.
Tags: #cl-yag #unix

Comments on Mastodon

My website/gopherhole static generator cl-yag has been updated today and sees its first release!

The new feature added today is that the gopher output now supports an index menu of tags, and a menu for each tag listing the articles carrying that tag. The gopher output was a bit of a second-class citizen before this, only listing articles.

New release v1.00 can be downloaded here (sha512 sum 53839dfb52544c3ac0a3ca78d12161fee9bff628036d8e8d3f54c11e479b3a8c5effe17dd3f21cf6ae4249c61bfbc8585b1aa5b928581a6b257b268f66630819). Code can be cloned with git: git://bitreich.org/cl-yag

Tor part 2: hidden service

Written by Solène, on 11 October 2018.
Tags: #openbsd68 #openbsd #unix #tor #security

Comments on Mastodon

In this second Tor article, I will present an interesting Tor feature named hidden services. The principle of a hidden service is to make a network service available from anywhere, with the only prerequisites being that the computer is powered on, Tor is not blocked, and it has network access.

This service will be available through an address disclosing nothing about the server's internet provider or its IP; instead, a hostname ending in .onion will be provided by tor for connecting. The hidden service will only be accessible through Tor.

There are a few advantages of using hidden services:

  • privacy: the hostname doesn't contain any hint about the server
  • security: secure access to a remote service that doesn't use SSL/TLS
  • no need to run some kind of dynamic DNS updater

The drawbacks are that it's quite slow and it only works for TCP services.

From here, we assume that Tor is installed and working.

Running a hidden service requires modifying the Tor daemon configuration file, located at /etc/tor/torrc on OpenBSD.

Add the following lines in the configuration file to enable a hidden service for SSH:

HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22

The directory /var/tor/ssh_service will be created. The directory /var/tor is owned by the user _tor and not readable by other users. The hidden service directory can be named as you want, but it should be owned by the user _tor with restricted permissions. The tor daemon will take care of creating the directory with correct permissions once you reload it.

Now you can reload the tor daemon to make the hidden service available.

$ doas rcctl reload tor

In the /var/tor/ssh_service directory, two files are created. What we want is the content of the file hostname which contains the hostname to reach our hidden service.

$ doas cat /var/tor/ssh_service/hostname

Now, we can use the following command to connect to the hidden service from anywhere.

$ torsocks ssh piosdnzecmbijclc.onion

Within the Tor network, this feature doesn't use an exit node. Hidden services can expose various services like HTTP, IMAP, SSH, gopher etc.

Using a hidden service isn't illegal, nor does it make the computer relay Tor traffic; as previously, just check that you are allowed to use Tor on your network.

Note: it is possible to have a version 3 .onion address, which prevents hostname collisions, but this produces very long hostnames. This can be done as in the following example:

HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22
HiddenServiceVersion 3

This will produce a really long hostname like tgoyfyp023zikceql5njds65ryzvwei5xvzyeubu2i6am5r5uzxfscad.onion

If you want both the short and long hostnames, you need to declare the hidden service twice, with different folders.
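
A sketch of such a double declaration (the second folder name is only an example):

```
HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22

HiddenServiceDir /var/tor/ssh_service_v3
HiddenServicePort 22
HiddenServiceVersion 3
```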

Take care: if you run an ssh service on your public server and expose this same ssh daemon as the hidden service, the host keys will be the same, implying that someone could theoretically associate both and learn that this public IP runs this hidden service, breaking anonymity.

Tor part 1: how-to use Tor

Written by Solène, on 10 October 2018.
Tags: #openbsd68 #openbsd #unix #tor #security

Comments on Mastodon

Tor is a network service that hides your traffic. People sniffing your network will not be able to know which server you reach, and people on the remote side (like the administrator of a web service) will not know where you are from. Tor helps you keep your anonymity and privacy.

To make it quick, tor uses an entry point that you reach directly, then servers acting as relays that are unable to decrypt the data they relay, up to an exit node which performs the real request for you; the response travels back the opposite way.

You can find more details on the Tor project homepage.

Installing tor is really easy on OpenBSD: we install the package and start its daemon. The daemon listens by default on localhost on port 9050. On other systems it should be quite similar: install the tor package and enable the daemon if it is not enabled by default.

# pkg_add tor
# rcctl enable tor
# rcctl start tor

Now, you can use your favorite program: look at the proxy settings, choose a "SOCKS" proxy, v5 if possible (it handles the DNS queries), and use the default address 127.0.0.1 with port 9050.

If you need to use tor with a program that doesn't support setting a SOCKS proxy, it's still possible to wrap it with torsocks, which works with most programs. It is very easy to use.

# pkg_add torsocks
$ torsocks ssh remoteserver

This will make ssh go through the tor network.

Using tor won't make you relay anything, and it is legal in most countries. Tor is like a VPN, and some countries have laws about VPNs; check your country's laws if you plan to use tor. Also, note that using tor may be forbidden on some networks (companies, schools etc.) because it allows escaping filtering, which may be against some kind of usage agreement for the network.

I will cover later the relaying part, which can lead to legal uncertainty.

Note: torsocks is a bit of a hack, because it uses LD_PRELOAD to wrap network system calls. There is a cleaner way with ssh (or any program supporting a custom command to initialize the connection) using netcat.

ssh -o ProxyCommand='/usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p' address.onion

This can be simplified by adding the following lines to your ~/.ssh/config file, in order to automatically use the proxy command when you connect to a .onion hostname:

Host *.onion
ProxyCommand /usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p

This netcat command is tested under OpenBSD; there are different netcat implementations, and the flags may differ or may not even exist.

Presenting drist at BitreichCON 2018

Written by Solène, on 21 August 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Still about the Bitreich conference 2018: I presented drist, a utility for server deployment (like salt/puppet/ansible…) that I wrote.

drist makes deployments easy to understand and easy to extend. Basically, it has 3 steps:

  1. copy a local file tree to the remote server (for deploying files)
  2. delete files on the remote server if they are listed in a local tree
  3. execute a script on the remote server

Each step is run only if the corresponding file/folder exists, and for each step it's possible to have a general and a per-host setup.

How to fetch drist

git clone git://bitreich.org/drist

It was my very first talk in English, please be indulgent.

Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

Bitreich community is reachable on gopher at gopher://bitreich.org

Presenting Reed-alert at BitreichCON 2018

Written by Solène, on 20 August 2018.
Tags: #unix

Comments on Mastodon

As the author of the reed-alert monitoring tool, I spoke about my software at the Bitreich conference 2018.

For the quick intro: reed-alert is software that notifies you when something is wrong on your server; it's fully customizable and really easy to use.

How to fetch reed-alert

git clone git://bitreich.org/reed-alert

It was my very first talk in English, please be indulgent.

Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

Bitreich community is reachable on gopher at gopher://bitreich.org

Tmux mastery

Written by Solène, on 05 July 2018.
Tags: #unix #shell

Comments on Mastodon

Tips for using Tmux more efficiently

Enter in copy mode

By default Tmux uses the emacs key bindings. To make a selection you need to enter copy-mode by pressing Ctrl+b and then [, with Ctrl+b being the tmux prefix key; if you changed it, substitute yours while reading.

If you need to quit the copy-mode, type Ctrl+C.

Make a selection

While in copy-mode, move to the start (or end) position of your selection and press Ctrl+Space to start selecting. Now, move your cursor to select the text and press Ctrl+w to validate.

Paste a selection

When you want to paste your selection, press Ctrl+b ] (you should not be in copy-mode for this!).

Make a rectangle selection

If you want to make a rectangular selection, press Ctrl+Space to start and immediately press R (capital R), then move your cursor and validate with Ctrl+w.

Output the buffer to X buffer

Make a selection to put the content in the tmux buffer, then type

tmux save-buffer - | xclip

You may want to look at the xclip man page (it's a package).

Output the buffer to a file

tmux save-buffer file

Load a file into buffer

It’s possible to load the content of a file inside the buffer for pasting it somewhere.

tmux load-buffer file

You can also load the output of a command into the buffer, using a pipe and - as the file, as in this example:

echo 'something very interesting' | tmux load-buffer -

Display the battery percentage in the status bar

If you want to display your battery percentage and update it every 40 seconds, you can add the two following lines to ~/.tmux.conf:

set status-interval 40
set -g status-right "#[fg=colour155]#(apm -l)%% | #[fg=colour45]%d %b %R"

This example works on OpenBSD using the apm command. You can adapt it to display other information.

Writing an article using mdoc format

Written by Solène, on 03 July 2018.
Tags: #unix

Comments on Mastodon

I had never written a man page. I had already read the source of a man page, but I barely understood what happened there. But I like having fun and discovering new things (people have been calling me a hipster these last days ;-) ).

I modified cl-yag (the website generator used for this website) to produce the site from mdoc files only. The output was not very nice, as it had too many HTML elements (classes, attributes, tags etc.). The result wasn't that bad, but it looked like concatenated man pages.

I actually enjoyed playing with the mdoc format (the man page format on OpenBSD; I don't know if it's used elsewhere). While it's pretty verbose, it separates the formatting from the paragraphs. As I have been playing with the ed editor these last days, it is easier to have an article written as small pieces of lines rather than a big paragraph including the formatting.

Finally I succeeded in writing a command line which produces usable HTML output to use as a converter in cl-yag. Now I'll be able to write my articles in the mdoc format if I want :D (which is fun). The conversion command is really ugly, but it actually works, as you can see if you read this.

cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT

The trick here was to use markdown as an intermediate format between mdoc and HTML. As markdown is very limited compared to HTML, it will only use simple tags for formatting the HTML output. The sed command is needed to delete the mandoc output with the man page title at the top and the operating system at the bottom.
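
For reference, a minimal mdoc document that such a pipeline could convert looks like this (a hypothetical sketch, not one of my actual articles):

```
.Dd July 3, 2018
.Dt EXAMPLE 7
.Os
.Sh A SECTION TITLE
This is a paragraph of text.
.Pp
.Em some emphasized words
and normal words again.
```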

Having played with this, writing a man page is less obscure to me, and I have a new unusual format to use for writing my articles. Maybe unusual for this use case, but still very powerful!

Trying to move away from emacs

Written by Solène, on 03 July 2018.
Tags: #unix #emacs

Comments on Mastodon


Today I will write about my current process of trying to get rid of emacs. I use it extensively with org-mode for taking notes and turning them into an agenda/todo-list; this helped me a lot to remember tasks to do and what people told me. I also use it for editing, of course, of any kind of text or source code. It is usually the editor I use for writing the blog articles that you can read here; this one is written using ed. I also read my emails in emacs with mu4e (whose latest version doesn't work anymore on powerpc due to a C++14 feature it uses and no compiler available on powerpc to compile it…).

While I like Emacs, I never liked using one big tool for everything. My current quest is to look for portable and efficient replacements for the different Emacs parts. I will not stop using Emacs if the replacements are not good enough to do the job.

So, I identified my Emacs uses:

  • todo-list / agenda / taking notes
  • writing code (perl, C, php, Common LISP)
  • IRC
  • mails
  • writing texts
  • playing chess by mail
  • jabber client

For each topic, I will try to identify alternatives and pit them against Emacs.

Todo-list / Agenda / Notes taking

This is the most important part of my Emacs use and the one I would really like to move out of Emacs. What I need is: quickly write a task, add a deadline to it, add explanations or a description, be able to add sub-tasks, and be able to display everything correctly (like in order of deadline, with the days/hours remaining before each deadline).

I am trying to convert my current todo-list to taskwarrior. The learning curve is not easy, but after spending one hour playing with it while reading the man page, I understood enough to replace org-mode with it. I do not know if it will be as good as org-mode; only time will tell.

By the way, I found vit, a ncurses front-end for taskwarrior.

Writing code

Emacs is actually a good editor. It supports syntax coloring, can evaluate regions of code (depending on the language), the editing is nice etc. I discovered jed, an emacs-like editor written in C with libslang; it's stable and light while providing more features than the mg editor (available in the OpenBSD base installation).

While I am currently playing with ed for various reasons (I will certainly write about it), I am not sure I could use it to write software from scratch.


IRC

There are lots of different IRC clients around; I just need to pick one.


Mails

I really enjoy using mu4e; I can find my mails easily with it, and the query system is very powerful and interesting. I don't know what I could use to replace it. I used alpine some time ago, and I tried mutt before mu4e and did not like it. I have heard about some tools to manage a maildir folder using unix commands; maybe I should try those. I have not done any research on this topic yet.

Writing text

For writing plain text like my articles, or for using $EDITOR for different tasks, I think that ed will do the job perfectly :-) There is ONE feature I really like in Emacs, but I think it's really easy to recreate with a script: the function bound to M-q which wraps text to the correct column width!

Update: meanwhile, I wrote a little Perl script using the Text::Wrap module available in base Perl. It wraps to 70 columns. It could be extended to fill blanks or add a character to the first line of a paragraph.

#!/usr/bin/env perl
use strict;
use warnings;
use Text::Wrap qw(wrap $columns);

# print the file given as first argument, wrapped to 70 columns
open my $in, '<', $ARGV[0] or die "cannot open $ARGV[0]: $!";
$columns = 70;
my @file = <$in>;
close $in;
print wrap("", "", @file);

This script does not modify the file itself though.

Some people pointed out to me that Perl was too much for this task. I have been told about Groff or Par for formatting my files.

Finally, I found a very BARE way to handle this. As I write my text with ed, I added a new alias named "ruled" which spawns ed with a prompt made of 70 # characters, so I get a ruler each time ed displays its prompt!!! :D
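
The alias could be defined like this; ed's -p flag sets the prompt string (a sketch of my setup, with the 70 # characters):

```shell
# spawn ed with a 70-character '#' prompt acting as a column ruler
alias ruled="ed -p '######################################################################'"
```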

It looks like this for the last paragraph:

###################################################################### c
been told about Groff or Par to format my files.

Finally, I found a very **BARE** way to handle this. As I write my
text with ed, I added an new alias named "ruled" with spawn ed with a
prompt of 70 characters #, so I have a rule each time ed displays its
prompt!!! :D
###################################################################### w

Obviously, this way of proceeding only works when writing the content in the first place. If I need to edit a paragraph, I will need a tool to format my document correctly again.

Jabber client

Using jabber inside Emacs is not a very good experience. I switched to profanity (featured some time ago on this blog).

Playing Chess

Well, I stopped playing chess by mail; I have been waiting for my correspondent to play his turn for two years now. We were exchanging the notation of the whole game in each mail, adding our move each time. I was doing the rendering in Emacs, but I do not remember exactly why I had problems with this (replaying the notation).

Easy encrypted backups on OpenBSD with base tools

Written by Solène, on 26 June 2018.
Tags: #unix #openbsd66 #openbsd

Comments on Mastodon

Old article

Hello, it turns out that this article is obsolete. The security used in it is not safe at all, so the goal of this backup system isn't achievable; thus it should not be used, and I need another backup system.

One of the most important features of dump for me was keeping track of the inode numbers. A solution is to save the list of inode numbers and their paths in a file before doing a backup. This can be achieved with the following command.

$ doas ncheck -f "\I \P\n" /var

If you need a backup tool, I would recommend the following:


It supports remote backends like ftp/sftp, which is quite convenient as you don't need any configuration on the other side. It supports compression and incremental backups. I think it has some GUI tools available.


It supports remote backends like cloud storage providers or sftp, and it doesn't require any special tool on the remote side. It supports deduplication of files and is able to manage multiple hosts in the same repository; this means that if you back up multiple computers, the deduplication will work across them. This is the only backup software I know of allowing this (I do not count backuppc, which I find really unusable).


It supports a remote backend like ssh, but only if borg is installed on the other side. It supports compression and deduplication, but it is not possible to save multiple hosts inside the same repository without a lot of hacks (which I won't recommend).

Change default application for xdg-open

Written by Solène, on 25 June 2018.
Tags: #unix

Comments on Mastodon

I write this as a note for myself, and if it can help some other people, that's fine.

Changing the program used by xdg-open to open some kind of file is not that hard.

First, check the type of the file:

$ xdg-mime query filetype file.pdf

Then, choose the right tool for handling this type:

$ xdg-mime default mupdf.desktop application/pdf

Honestly, having firefox opening PDF files with GIMP IS NOT FUN.

Share a tmux session with someone with tmate

Written by Solène, on 01 June 2018.
Tags: #unix

Comments on Mastodon

New port of the week, and it’s about tmate.

If you ever wanted to share a terminal with someone without opening a remote access to your computer, tmate is the right tool for this.

Once started, tmate creates a new tmux instance connected through the tmate public server. By typing tmate show-messages you will get URLs for read-only or read-write access to share with someone, over ssh or a web browser. Don't forget to type clear to hide the URLs after typing show-messages; otherwise, viewers will have access to the write URL (and that's not something you want).

If you don't like needing a third party, you can set up your own server, but we won't cover that in this article.

When you want to end the share, you just need to exit the tmux opened by tmate.

If you want to install it on OpenBSD, just type pkg_add tmate and you are done. I think it’s available on most unix systems.

There is not much more to say about it: it's great, simple, and works out of the box with no configuration needed.

Deploying cron programmatically the unix way

Written by Solène, on 31 May 2018.
Tags: #unix

Comments on Mastodon

Here is a little script to automate your crontab deployment when you don't want to use a configuration tool like ansible/salt/puppet etc. It lets you ship a file in your project containing the crontab content you need, and it will add/update your crontab with that file.

The script works this way:

$ ./install_cron crontab_solene

with the crontab_solene file being a valid crontab, which could look like this:

## TAG ##
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##

It will then include the file into my current user's crontab; the TAG markers in the file make it possible to remove the block and replace it later with a new version. The script could easily be modified to take the tag name as a parameter, if you have multiple deployments using the same user on the same machine.
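
The tag-stripping part can be checked on its own: feeding the awk filter some hypothetical crontab content shows that only the lines between the markers are removed.

```shell
# lines from ## TAG ## to ## END_TAG ## (inclusive) are filtered out,
# everything else passes through untouched
printf '0 * * * * job1\n## TAG ##\nold entry\n## END_TAG ##\n0 1 * * * job2\n' | \
    awk '{ if($0=="## TAG ##") { hide=1 }; if(hide==0) { print }; if($0=="## END_TAG ##") { hide=0 }; }'
```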


$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
$ ./install_cron crontab_solene
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##

If I add the line 0 20 * * * ~/bin/faubackup.sh to crontab_solene, I can now reinstall the crontab file.

$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##
$ ./install_cron crontab_solene
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
0 20 * * * ~/bin/faubackup.sh
## END_TAG ##

Here is the script:


#!/bin/sh

if [ -z "$1" ]; then
    echo "Usage: $0 user_crontab_file"
    exit 1
fi

# both tags must be present in the given file
grep "^## TAG ##$" "$1" >/dev/null
VALIDATION=$?
grep "^## END_TAG ##$" "$1" >/dev/null
VALIDATION=$(( VALIDATION + $? ))

if [ "$VALIDATION" -ne 0 ]; then
    echo "file ./${1} needs \"## TAG ##\" and \"## END_TAG ##\" to be used"
    exit 2
fi

crontab -l | \
    awk '{ if($0=="## TAG ##") { hide=1 };  if(hide==0) { print } ; if($0=="## END_TAG ##") { hide=0 }; }' | \
    cat - "${1}" | \
    crontab -

Faster SSH with multiplexing

Written by Solène, on 22 May 2018.
Tags: #unix #ssh

Comments on Mastodon

I discovered today an OpenSSH feature which doesn't seem to be widely known. The feature is called multiplexing and consists of reusing an opened ssh connection to a server when you want to open another one. This leads to faster connection establishment and fewer running processes.

To reuse an opened connection, we need to use the ControlMaster option, which requires ControlPath to be set. We will also set ControlPersist for convenience.

  • ControlMaster defines whether we create a new multiplexer, use an existing one, or don't use multiplexing at all
  • ControlPath defines where to store the socket used to reuse an opened connection; this should be a path only accessible to your user.
  • ControlPersist defines how long to wait before closing the ssh connection multiplexer after all connections using it are closed. By default it's "no", and once you drop all connections the multiplexer stops.

I chose to use the following parameters in my ~/.ssh/config file:

Host *
ControlMaster auto
ControlPath ~/.ssh/sessions/%h%p%r.sock
ControlPersist 60

This requires the ~/.ssh/sessions/ folder to be restricted to my user only. You can create it with the following command:

install -d -m 700 ~/.ssh/sessions

(you can also do mkdir ~/.ssh/sessions && chmod 700 ~/.ssh/sessions but this requires two commands)

The ControlPath setting creates sockets named "${hostname}${port}${user}.sock", so each is unique per remote server.

Finally, I set ControlPersist to 60 seconds, so if I log out from a remote server, I still have 60 seconds to reconnect to it instantly.

Don’t forget that if for some reason the ssh channel handling the multiplexing dies, all the ssh connections using it will die with it.

Benefits with ProxyJump

Another very useful ssh feature is ProxyJump: it gives access to ssh hosts which are not directly reachable from your current place, like servers with no public ssh daemon. For my job, I have a lot of servers not facing the internet, and I can still connect to them using one of my public-facing servers, which relays my ssh connection to the destination. Using the ControlMaster feature, the ssh relay server no longer has to handle a lot of connections, only one.

In my ~/.ssh/config file:

Host *.private.lan
ProxyJump public-server.com

These two lines allow me to connect to every server in the .private.lan domain (which is known by my local DNS server) by typing ssh some-machine.private.lan. This establishes a connection to public-server.com and then connects to the destination server.

Sending mail with mu4e

Written by Solène, on 22 May 2018.
Tags: #unix #emacs

Comments on Mastodon

In my article about mu4e I said that I would write about sending mails with it. This will be the topic covered in this article.

There are a lot of ways to send mail, with many different use cases. I will only cover a few of them; the documentation of mu4e and emacs are both very good, so I will only give hints about some interesting setups.

I would like to thank Raphael, who made me curious about different ways of sending mail from mu4e and who pointed out some mu4e features I wasn't aware of.

Send mails through your local server

The easiest way is to send mails through your local mail server (which should be OpenSMTPD by default if you are running OpenBSD). This only requires the following line in your ~/.emacs file to work:

(setq message-send-mail-function 'sendmail-send-it)

Basically, the mail would only be delivered to the recipient if your local mail server is well configured, which is not the case for most servers. This requires a reverse DNS record correctly configured (assuming a static IP address), a SPF record in your DNS and DKIM signing for outgoing mail. This is the minimum to be accepted by other SMTP servers. Besides, people usually send mails from their personal computer and not from the mail server itself.

Configure OpenSMTPD to relay to another smtp server

We can bypass this problem by configuring our local SMTP server to relay the mails sent locally to another SMTP server, using credentials for authentication.

This is pretty easy to set up with the following /etc/mail/smtpd.conf configuration; just replace remoteserver with your server.

table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets

listen on lo0

accept for local alias <aliases> deliver to mbox
accept for any relay via secure+auth://label@remoteserver:465 auth <secrets>

You will have to create the file /etc/mail/secrets and add your credentials for authentication on the SMTP server.

From smtpd.conf(5) man page, as root:

# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "label username:password" > /etc/mail/secrets

Then, all mail sent from your computer will be relayed through your mail server. With 'sendmail-send-it, emacs will deliver the mail to your local server, which will relay it to the outgoing SMTP server.

SMTP through SSH

One setup I like and use is relaying the mails directly to the outgoing SMTP server; this requires no authentication except SSH access to the remote server.

It requires the following emacs configuration in ~/.emacs:

(setq message-send-mail-function 'smtpmail-send-it
      smtpmail-smtp-server "localhost"
      smtpmail-smtp-service 2525)

The configuration tells emacs to connect to the SMTP server on localhost port 2525 to send the mails. Of course, no mail daemon runs on this port on the local machine; the following ssh command is required to be able to send mails:

$ ssh -N -L 2525:127.0.0.1:25 remoteserver

This forwards your local port 2525 to port 25 of the remote server, so from the remote server's point of view the mail is delivered locally.

Your mail server should accept deliveries from local users of course.

SMTP authentication from emacs

It's also possible to send mails using regular SMTP authentication directly from emacs. It is tedious to set up: it requires putting credentials into a file named ~/.authinfo, which can be encrypted with GPG but then needs a wrapper to be loaded. It also requires configuring the SMTP authentication correctly. There are plenty of examples for this on the Internet, so I won't cover it.

Queuing mails for sending it later

Mu4e supports a very nice feature: mail queueing from the smtpmail emacs client. Enabling it requires two easy steps:

In ~/.emacs:

(setq smtpmail-queue-mail t
      smtpmail-queue-dir "~/Mail/queue/cur")

In your shell:

$ mu mkdir ~/Mail/queue
$ touch ~/Mail/queue/.noindex

Then, mu4e will be aware of the queue. In the home screen of mu4e, you will be able to switch between queued and direct sending by pressing m, and to flush the queue by pressing f.

Note: there is a bug (not sure it's really a bug): when queueing a mail that contains special characters, you will be asked to send it raw or to add a header containing the encoding.

Autoscrolling text for lazy reading

Written by Solène, on 17 May 2018.
Tags: #unix

Comments on Mastodon

Today I found a software named Lazyread which can read and display a file with autoscrolling at a chosen speed. I had to read its source code to make it work: the documentation isn't very helpful, it doesn't read ebooks (as in epub or mobi format) and doesn't support stdin… This software requires some C code plus a shell wrapper to work; it's complicated for something that only scrolls.

So, after thinking a few minutes, I realized the autoscroll can be reproduced easily with a very simple awk command. Of course, it will not have interactive keys like lazyread to increase/decrease the speed or other options, but the most important part is there: autoscrolling.

If you want to read a file with a rate of 1 line per 700 millisecond, just type the following command:

$ awk '{system("sleep 0.7");print}' file

Do you want to read an html file (documentation file on the disk or from the web), you can use lynx or w3m to convert the html file on the fly to a readable text and pass it to awk stdin.

$ w3m -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ lynx -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ w3m -dump https://dataswamp.org/~solene/ | awk '{system("sleep 0.7");print}'

Maybe you want to read a man page?

$ man awk | awk '{system("sleep 0.7");print}'

If you want to pause the reading, you can use the true unix way: Ctrl+Z to send a signal which stops the command and leaves it paused in the background. You can resume the reading by typing fg.

One could easily write a little script parsing parameters to set the speed, or handling files or urls with the correct command.
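Such a wrapper can be sketched as a small shell function (my own sketch, not lazyread; the function name is invented):

```shell
# autoscroll: print stdin one line at a time, pausing between lines.
# Usage: autoscroll [seconds]   (default: 0.7 seconds per line)
autoscroll() {
    speed="${1:-0.7}"
    while IFS= read -r line; do
        printf '%s\n' "$line"
        sleep "$speed"
    done
}

# examples:
# man awk | autoscroll
# w3m -dump https://dataswamp.org/~solene/ | autoscroll 1
```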

Notes: if for some reason you try to use lazyread, fix the shebang in the file lesspipe.sh and call the lazyread binary with the environment variable LESSOPEN="|./lesspipe.sh %s" (adjust the path of the script if needed). Without this variable, you will get a not-very-helpful “file not found” error.

Port of the week: Sent

Written by Solène, on 15 May 2018.
Tags: #unix

Comments on Mastodon

As the new port of the week, we will discover Sent. While one could think it is mail related, it is not. Sent is a nice piece of software to make presentations from a simple text file. It has been developed by Suckless, a hacker community enjoying writing good software while keeping a small and sane source code; they also made software like st, dwm, slock, surf…

Sent is about simplicity. I will reuse a part of the example file which is also the documentation of the tool.

$ sent FILE1 [FILE2 …]

▸ one slide per paragraph
▸ lines starting with # are ignored
▸ image slide: paragraph containing @FILENAME
▸ empty slide: just use a \ as a paragraph

this text will not be displayed, since the @ at the start of the first line
makes this paragraph an image slide.

The previous text, saved into a file and given to sent, will open a fullscreen window containing three “slides”. Each slide resizes the text to maximize the display usage, which means the font size changes on each slide.

It is really easy to use. To display next slide, you have the choice between pressing space, right arrow, return or clicking any button. Pressing left arrow will go back.

If you want to install it on OpenBSD: pkg_add sent, the package comes from the port misc/sent.

Be careful, Sent does not produce any file; you will need the text file itself for the presentation!

Suckless sent website

Mounting remote samba share through SSH tunnel

Written by Solène, on 04 May 2018.
Tags: #unix

Comments on Mastodon

If for some reason you need to access a Samba share outside of the network, it is possible to access it through ssh and mount the share on your local computer.

Using the ssh command as root is required because you will bind local port 139 which is reserved for root:

# ssh -L 139:127.0.0.1:139 user@remote-server -N

Then you can mount the share as usual but using localhost instead of remote-server.

Example of a mount element for usmb

<mount id="public" credentials="me">

As a reminder, <!--tag>foobar</tag--> is an XML comment.

Extract files from winmail.dat

Written by Solène, on 02 May 2018.
Tags: #unix #email

Comments on Mastodon

If you ever receive a mail with an attachment named “winmail.dat”, you may be disappointed. It is a special format used by Microsoft Exchange; it contains the files attached to the mail and needs some software to extract them.

Hopefully, there is a small and efficient utility named “tnef” to extract the files.

Install it: pkg_add tnef

List files: tnef -t winmail.dat

Extract files: tnef winmail.dat

That’s all !

Port of the week: ledger

Written by Solène, on 02 May 2018.
Tags: #unix

Comments on Mastodon

In this post I will do a short presentation of the port productivity/ledger, a very powerful command line accounting software using plain text as its back-end. Writing about it is not an easy task, so I will use a real life workflow from my own usage as material, even if my use is a bit special.

As I said before, Ledger is very powerful. It can help you manage your bank accounts, bills, rents, shares and other things. It uses a double entry system, which means each time you add an operation (withdrawal, paycheck, …), the entry will also have to contain the state of the account after the operation. This is checked by ledger, which recalculates every operation made since the account was initialized with a starting amount. Ledger can also track the categories where you spend money or statistics about your payment methods (check, credit card, bank transfer, cash…).
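As an illustration of the double entry idea, a money entry could look like this (a sketch with made-up account names and amounts; the trailing “= …” part is a ledger balance assertion stating the account state after the operation):

```
2018/02/05 * Groceries
    Expenses:Food                  42.00 EUR
    Assets:Bank:Checking          -42.00 EUR = 1458.00 EUR
```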

As I am not a native English speaker and I don't work in banking, I am not familiar with accounting terms in English, which makes it very hard for me to understand all the ledger keywords. But I found a special use case, accounting for things instead of money, which is really practical.

My special use case is that I work from home for a company located in a remote place. From time to time, I take the train to go to the office; the full journey is:

[home]   → [underground A] → [train] → [underground B] → [office]
[office] → [underground B] → [train] → [underground A] → [home]

It means I need to buy tickets for both the underground A and underground B systems, and I want to track the tickets I use for going to work. I buy the tickets 10 by 10, but sometimes I use them for personal travel or I give a ticket to someone. So I need to keep track of my tickets to know when I can send a bill to my work to get refunded.

Practical example: I buy 10 tickets of A, and I use 2 tickets on day 1. On day 2, I give 1 ticket to someone and use 2 tickets for personal travel. It means I still have 5 tickets in my bag but, from my work office point of view, I should still have 8 tickets. This is what I am tracking with ledger.

2018/02/01 * tickets stock Initialization + go to work
    Tickets:inv                                   10 City_A
    Tickets:inv                                   10 City_B
    Tickets:inv                                   -2 City_A
    Tickets:inv                                   -2 City_B

2018/02/08 * Work
    Tickets:inv                                    -2 City_A
    Tickets:inv                                    -2 City_B

2018/02/15 * Work + Datacenter access through underground
    Tickets:inv                                    -4 City_B
    Tickets:inv                                    -2 City_A

At this point, running ledger -f tickets.dat balance Tickets shows my remaining tickets:

4 City_A
2 City_B  Tickets:inv

We will add another entry, which requires me to buy tickets:

2018/02/22 * Work + Datacenter access through underground
    Tickets:inv                                    -4 City_B
    Tickets:inv                                    -2 City_A
    Tickets:inv                                    10 City_B

Now, running ledger -f tickets.dat balance Tickets shows my tickets remaining:

2 City_A
8 City_B  Tickets:inv

I hope the example was clear enough and interesting. There is a big tutorial document available on the ledger homepage; I recommend reading it before using ledger, it contains real world accounting examples. Homepage link

Port of the week: dnstop

Written by Solène, on 18 April 2018.
Tags: #unix

Comments on Mastodon

Dnstop is an interactive console application to watch in realtime the DNS queries going through a network interface. It currently only supports UDP DNS requests, the man page says that TCP isn’t supported.

It has a lot of parameters and keybindings for interactive use.

To install it on OpenBSD: doas pkg_add dnstop

We will start dnstop on the wifi interface using a depth of 4 for the domain names: as root, type dnstop -l 4 iwm0 and then press ‘3’ to display up to 3 sublevels. The -l 4 parameter means we want to track domains up to a depth of 4: if a request for the domain my.very.little.fqdn.com happens, it will be truncated to very.little.fqdn.com. If you press ‘2’ in the interactive display, the same name will be counted in the line fqdn.com.

Example of output:

Queries: 0 new, 6 total                           Tue Apr 17 07:17:25 2018

Query Name          Count      %   cum%
--------------- --------- ------ ------
perso.pw                3   50.0   50.0
foo.bar                 1   16.7   66.7
hello.mydns.com         1   16.7   83.3
mydns.com.lan           1   16.7  100.0

If you want to use it, read the man page first; it has a lot of parameters and can filter using specific expressions.

How to read a epub book in a terminal

Written by Solène, on 17 April 2018.
Tags: #unix

Comments on Mastodon

If you ever had to read an ebook in the epub format, you may have found yourself stumbling on the Calibre software. Personally, I don't enjoy reading a book in Calibre at all. Choice is important, and it seems that Calibre is the only choice for this task.

But, as the epub format is very simple, it's possible to easily read it with any web browser, even w3m or lynx.

With a few commands, you can easily extract the xhtml files and open them with a web browser: an epub file is a zip containing mostly xhtml, css and image files. The xhtml files have links to the CSS and images contained in the other unzipped folders.

In the following commands, I prefer to copy the file into a new directory, because unzipping it will create folders in your current working directory.

$ mkdir /tmp/myebook/
$ cd /tmp/myebook
$ cp ~/book.epub .
$ unzip book.epub
$ cd OPS/xhtml
$ ls *xhtml

I tried with different epub files; in most cases you should find a lot of files named chapters-XX.xhtml, with XX being 01, 02, 03 and so forth. Just open the files in the correct order with a web browser, aka an “html viewer”.

Port of the week: tig

Written by Solène, on 10 April 2018.
Tags: #unix #git

Comments on Mastodon

Today we will discover the software named tig whose name stands for Text-mode Interface for Git.

To install it on OpenBSD: pkg_add tig

Tig is a light and easy to use terminal application to browse a git repository in an interactive manner. To use it, just ‘cd’ into a git repository on your filesystem and type tig. You will get the list of all the commits, with the author and the date. By pressing “Enter” key on a commit, you will get the diff. Tig also displays branching and merging in a graphical way.

Tig has some parameters; one I like a lot is blame, which is used like this: tig blame afile. Tig will show the file content and display, for each line, the date of the last commit, its author and the short identifier of the commit. With this function, it gets really easy to find who modified a line or when it was modified.

Tig has a lot of other possibilities; you can discover them in its man page.

Monitor your systems with reed-alert

Written by Solène, on 17 January 2018.
Tags: #unix #lisp

Comments on Mastodon

This article will present my software reed-alert: it checks user-defined states and sends user-defined notifications. I made it really easy to use while still being configurable and extensible.


reed-alert is not a monitoring tool producing graphs or storing values. It does a job sysadmins are looking for, because there is no alternative product (the alternatives come with very huge infrastructures like Zabbix, so they are not comparable).

From its configuration file, reed-alert will check various states and, on failure, trigger a command to send a notification (totally user-defined).

Fetch it

This is an open-source and free software released under the MIT license; you can install it with the following commands:

# git clone git://bitreich.org/reed-alert
# cd reed-alert
# make
# doas make install

This will install a script reed-alert in /usr/local/bin/ with the default Makefile variables. It will try to use ecl and then sbcl if ecl is not installed.

A README file is available as documentation to describe how to use it, but we will see here how to get started quickly.

You will find a few files there. reed-alert is a Common LISP software, and it has been chosen, for (I hope) good reasons, that the configuration file is plain Common LISP.

There is a configuration file looking like a real world example named config.lisp.sample, and another configuration file I use for testing named example.lisp, containing a lot of cases.

Let’s start

In order to use reed-alert we only need to create a new configuration file and then add a cron job.


We are going to see how to configure reed-alert. You can find more explanations or details in the README file.


We have to configure two kinds of parameters. First we need to set up a way to receive alerts; the easiest way to do so is sending a mail with the “mail” command. Alerts are declared with the function alert, taking as parameters the alert name and the command to be executed. Some variables are replaced with values from the probe; in the README file you can find the list of these variables, which look like %date% or %params%.

In Common LISP functions are called by using a parenthesis before its name and until the parenthesis is closed, we are giving its parameters.


(alert mail "echo 'problem on %hostname%' | mail me@example.com")

One should take care about nesting quotes here.

reed-alert will fork a shell to start the command, so pipes and redirections work. You can be creative and write alerts that:

  • use a SMS service
  • write a script to post on a forum
  • publish a file on a server
  • send text to IRC with ii client


Now that we have some alerts, we will configure some checks in order to make reed-alert useful. It uses probes, which are pre-defined checks with parameters; a probe could be “has this file been updated within the last N minutes ?” or “is the disk space usage of partition X more than Y ?”

I chose to name the check function “=>”; it isn't a word, but it evokes an arrow, something going forward. Both previous examples using our mail notifier would look like:

(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage   :limit 90)

It’s also possible to use shell commands and check the return code using the command probe, allowing the user to define useful checks.

(=> mail command :command "echo '/is-this-gopher-server-up?' | nc -w 3 dataswamp.org 70"
                 :desc "dataswamp.org gopher server")

We use echo + netcat to check if a connection to a socket works. The :desc keyword will give a nicer name in the output instead of just “COMMAND”.


We wrote the minimum required to configure reed-alert; your my-config.lisp configuration file should now look like this:

(alert mail "echo 'problem on %hostname%' | mail me@example.com")
(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage   :limit 90)

Now, you can start it every 5 minutes from a crontab with this:

*/5 * * * * ( reed-alert /path/to/my-config.lisp )


The time between each run is up to you, depending on what you monitor.


By default, when a check returns a failure, reed-alert will only trigger the associated notifier once the check reaches its 3rd failure. It will then notify again when the service is back (the variable %state% is replaced by start or end, so you know whether the problem starts or stops).

This is to prevent reed-alert from sending a notification on every check; most users have absolutely no need for that.

The number of failures before triggering can be modified by using the keyword “:try” as in the following example:

(=> mail disk-usage :limit 90 :try 1)

In this case, you will get notified at the first failure.

The failure counter of each check is stored in files (one per check) in the “states/” directory of the reed-alert working directory.

New cl-yag version

Written by Solène, on 16 December 2017.
Tags: #unix #cl-yag

Comments on Mastodon


cl-yag is a static website generator, a software used to publish a website and/or a gopher hole from a list of articles. As the developer of cl-yag, I'm happy to announce that a new version has been released.

New features

The new version, numbered 0.6, brings a lot of new features :

  • supporting different markup language per article
  • date format configurable
  • gopher output format configurable
  • ship with the default theme "clyma", minimalist but responsive (the one used on this website)
  • easier to use
  • full user documentation

The code is available at git://bitreich.org/cl-yag, the program requires sbcl or ecl to work.

Per article markup language

The feature I'm most proud of is allowing a different markup language per article. While on my blog I chose to use markdown, it's sometimes not adapted for more elaborate articles, like the one about LISP containing code, which was written in org-mode and then manually converted to markdown to fit cl-yag. Now, the user can declare a named "converter", which is a command line with pattern replacement, used to produce the html file. We can imagine a lot of things with this, even producing a gallery with a find + awk command. Now, I can use markdown by default and specify when I want to use org-mode or something else.

This is the way to declare a converter, taking org-mode as an example, which is not very simple because emacs is not script friendly :

(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

And here is an easy way to produce a gallery with awk, from a .txt file containing a list of image paths.

(converter :name :gallery :extension ".txt"
	   :command (concatenate 'string
				 "awk 'BEGIN { print \"<div class=\\\"gallery\\\">\"} "
				 "{ print \"<img src=\\\"static/images/\"$1\"\\\" />\" } "
				 " END { print \"</div>\" }' data/%IN | tee %OUT"))

The concatenate function is only used to improve the presentation, splitting the command into multiple lines to make it easier to read. It's possible to write the whole command on a single line.

The patterns %IN and %OUT are replaced by the input file name and the output file name when the command is executed.

For an easier example, the default markdown converter looks like this, calling the multimarkdown command :

(converter :name :markdown :extension ".md"
	   :command "multimarkdown -t html -o %OUT data/%IN")

It's really easy (I hope !) to add the new converters you need with this feature.
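To make the mechanism concrete, here is the converter idea tried by hand in a shell, with a trivial "wrap in pre tags" conversion (file names and paths are made up; in cl-yag, the data/%IN and %OUT patterns are substituted before the shell runs the command):

```shell
# create a sample input file, standing in for data/%IN
mkdir -p /tmp/cl-yag-demo
printf 'hello world\n' > /tmp/cl-yag-demo/article.txt

# the converter command itself: produce the html output file while
# also printing it, exactly like the "| tee %OUT" examples above
{ printf '<pre>\n'; cat /tmp/cl-yag-demo/article.txt; printf '</pre>\n'; } \
    | tee /tmp/cl-yag-demo/article.html
```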

Date format configurable

One problem I had with cl-yag is that it's plain vanilla Common LISP without libraries, so it's easier to fetch and use, but it lacks some elaborate libraries, like one to parse and format dates. Before this release, I was writing plain text like "14 December 2017" in the date field of a blog post. It was easy to use, but not really usable in the RSS feed pubDate attribute, and if I wanted to change the display of the date for some reason, I would have to rewrite everything.

Now, the date is simply in the format "YYYYMMDD", like "20171231" for the 31st of December 2017. And in the configuration, there is a :date-format keyword to define the date display. This variable is a string allowing pattern replacement of the following variables :

  • day of the month in number, from 1 to 31
  • day of the week, from Monday to Sunday; names are written in english in the source code and can be translated
  • month in number, from 1 to 12
  • month name, from January to December; names are written in english in the source code and can be translated

Currently, as of the time of writing, I use the value "%DayNumber %MonthName %Year".

A :gopher-format keyword exists in the configuration file to configure the date format in the gopher export. It can be different from the html one.

More Gopher configuration

There are cases where the gopher server uses an unusual syntax compared to most servers. I wanted to make it configurable, so the user can easily use cl-yag without having to mess with the code. I provide the default for geomyidae, and another syntax is available in comments. There is also a configurable value to indicate where to store the gopher page menu; it's not always gophermap, it could be index.gph or whatever you need.

Easier to use

A comparison of code will make it easier to understand. There was a little change in the way blog posts are declared :


(defparameter *articles*
  (list
    (list :id "third-article"  :title "My third article" :tag "me" :date "20171205")
    (list :id "second-article" :title "Another article"  :tag "me" :date "20171204")
    (list :id "first-article"  :title "My first article" :tag "me" :date "20171201")))


(post :id "third-article"  :title "My third article" :tag "me" :date "20171205")
(post :id "second-article" :title "Another article"  :tag "me" :date "20171204")
(post :id "first-article"  :title "My first article" :tag "me" :date "20171201")

Each post is now independently declared, and I plan to add a "page" function to create static pages, but this is going to be for the next version !

Future work

I am very happy to hack on cl-yag, I want to continue improving it but I should really think about each feature I want to add. I want to keep it really simple even if it limits the features.

I want to allow the creation of static pages like "About me", "Legal" or "websites I liked" that integrates well in the template. The user may not want all the static pages links to go at the same place in the template, or use the same template. I'm thinking about this.

Also, I think the gopher generation could be improved, but I still have no idea how.

Other themes may come in the default configuration, allowing the user to choose between themes. But for now, I don't plan to bring a theme using javascript.

How to type using only one hand: keyboard mirroring

Written by Solène, on 12 December 2017.
Tags: #unix

Comments on Mastodon


Today is a bit special, because I'm writing with a mirror keyboard layout. I use only half my keyboard to type all the characters. To make things harder, the layout is qwerty while I usually use azerty (I'm used to qwerty but it doesn't help).

Here, “caps lock” is a modifier key that must be pressed to obtain characters of the other side. As a mirror, one will find ‘p’ instead of ‘q’ or ‘h’ instead of ‘g’ while pressing caps lock.

It’s even possible to type backspace to delete characters or to achieve a newline. Not all the punctuation is available through this, only ‘.<|¦>’",’.

While typing this I am getting a bit faster, and it becomes easier and easier. It's definitely worth it if you can't use two hands.

This has been made possible by Randall Munroe. To enable it, just download the file Here and type:

xkbcomp mirrorlayout.kbd $DISPLAY

Backspace is typed with tilde and return with space, using the modifier of course.

I've spent approximately 15 minutes writing this, but the time spent hasn't been linear, it's much more fluent now !

Mirrorboard: A one-handed keyboard layout for the lazy by Randall Munroe

Markup languages comparison

Written by Solène, on 13 April 2017.
Tags: #unix

Comments on Mastodon

For the fun, here are a few examples of the same output in different markup languages. The list isn't exhaustive of course.

This is org-mode:

* This is a title level 1

+ first item
+ second item
+ third item with a [[http://dataswamp.org][link]]

** title level 2

Blah blah blah blah blah
blah blah blah *bold* here

#+BEGIN_SRC lisp
(let ((hello (init-string)))
   (format t "~A~%" (+ 1 hello))
   (print hello))
#+END_SRC

This is markdown :

# this is title level 1

+ first item
+ second item
+ third item with a [Link](http://dataswamp.org)

## Title level 2

Blah blah blah blah blah
blah blah blah **bold** here

    (let ((hello (init-string)))
       (format t "~A~%" (+ 1 hello))
       (print hello))



This is HTML :

<h1>This is title level 1</h1>
<ul>
  <li>first item</li>
  <li>second item</li>
  <li>third item with a <a href="http://dataswamp.org">link</a></li>
</ul>

<h2>Title level 2</h2>

<p>Blah blah blah blah blah
  blah blah blah <strong>bold</strong> here

<pre><code>(let ((hello (init-string)))
   (format t "~A~%" (+ 1 hello))
   (print hello))</code></pre>

This is LaTeX :


\section{This is title level 1}

\begin{itemize}
\item First item
\item Second item
\item Third item
\end{itemize}

\subsection{Title level 2}

Blah blah blah blah blah
blah blah blah \textbf{bold} here

\begin{verbatim}
(let ((hello (init-string)))
    (format t "~A~%" (+ 1 hello))
    (print hello))
\end{verbatim}


OpenBSD 6.1 released

Written by Solène, on 11 April 2017.
Tags: #openbsd #unix

Comments on Mastodon

Today OpenBSD 6.1 has been released. I won't copy & paste the change list but, in a few words, it gets better.

Link to the official announcement

I already upgraded a few servers, using both methods. One with a bsd.rd upgrade, but that requires physical access to the server; the other with the method well explained in the upgrade guide, which requires untarring the file sets and moving some files around. I recommend using bsd.rd if possible.

Connect to pfsense box console by usb

Written by Solène, on 10 April 2017.
Tags: #unix #network #openbsd66 #openbsd

Comments on Mastodon


I have a pfsense appliance (Netgate 2440) with a usb console port; while it used to be a serial port, devices now seem to have a usb one. If you plug a usb cable from an OpenBSD box into it, you will see this in your dmesg:

uslcom0 at uhub0 port 5 configuration 1 interface 0 "Silicon Labs CP2104 USB to UART Bridge Controller" rev 2.00/1.00 addr 7
ucom0 at uslcom0 portno 0

To connect to it from OpenBSD, use the following command:

# cu -l /dev/cuaU0 -s 115200

And you’re done

List of useful tools

Written by Solène, on 22 March 2017.
Tags: #unix

Comments on Mastodon

Here is a list of software that I find useful; I will update this list every time I find a new tool. This is not an exhaustive list, these are only software I enjoy using:

Backup Tool

  • duplicity
  • borg
  • restore/dump

File synchronization tool

  • unison
  • rsync
  • lsyncd

File sharing tool / “Cloud”

  • boar
  • nextcloud / owncloud
  • seafile
  • pydio
  • syncthing (works as peer-to-peer without a master)
  • sparkleshare (uses a git repository so I would recommend storing only text files)


Text editor

  • emacs
  • vim
  • jed

Web browsers using keyboard

  • qutebrowser
  • firefox with vimperator extension

Todo list / Personal Agenda…

  • org-mode (within emacs)
  • ledger (accounting)

Mail client

  • mu4e (inside emacs, requires the use of offlineimap or mbsync to fetch mails)


Network tools

  • curl
  • bwm-ng (to see bandwidth usage in real time)
  • mtr (traceroute with a gui that updates every n seconds)

File integrity

  • bitrot
  • par2cmdline
  • aide

Image viewer

  • sxiv
  • feh


Others

  • entr (run a command when a file changes)
  • rdesktop (RDP client to connect to Windows VM)
  • xclip (read/set your X clipboard from a script)
  • autossh (to create tunnels that stay up)
  • mosh (connects to your ssh server with local input and better resilience)
  • ncdu (watch file system usage interactively in cmdline)
  • mupdf (PDF viewer)
  • pdftk (PDF manipulation tool)
  • x2x (share your mouse/keyboard between multiple computers through ssh)
  • profanity (XMPP cmdline client)
  • prosody (XMPP server)
  • pgmodeler (PostgreSQL database visualization tool)

How to check your data integrity?

Written by Solène, on 17 March 2017.
Tags: #unix #security

Comments on Mastodon

Today, the topic is data degradation, bit rot, bitrotting, damaged files or whatever you call it. It’s when your data gets corrupted over time, due to a disk fault or some unknown reason.

What is data degradation ?

I shamelessly paste one line from Wikipedia: “Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. The phenomenon is also known as data decay or data rot.”.

Data degradation on Wikipedia

So, how do we know we encountered bit rot?

bit rot = (checksum changed) && NOT (modification time changed)

While updating a file could be mistaken for bit rot, there is a difference:

update = (checksum changed) && (modification time changed)
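
This rule can be sketched in shell. The snippet below is a self-contained illustration (GNU sha256sum/stat/touch flavor is an assumption for the example; OpenBSD equivalents are sha256 -q and stat -f %m): it records a checksum and mtime, simulates silent corruption, then applies the test above.

```shell
#!/bin/sh
# Record a file's checksum and mtime, corrupt it silently,
# then classify the change with the bitrot-vs-update rule.
dir=$(mktemp -d); cd "$dir"

echo "original content" > file
old_sum=$(sha256sum file | awk '{print $1}')
old_mtime=$(stat -c %Y file)

# Simulate silent corruption: change the content, restore the mtime.
echo "a" >> file
touch -d "@$old_mtime" file

new_sum=$(sha256sum file | awk '{print $1}')
new_mtime=$(stat -c %Y file)

if [ "$new_sum" != "$old_sum" ] && [ "$new_mtime" = "$old_mtime" ]; then
    echo "bitrot"          # checksum changed, mtime did not
elif [ "$new_sum" != "$old_sum" ]; then
    echo "update"          # legitimate modification
fi
```

Run as-is, it prints "bitrot", since the content changed while the modification time was restored.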

How to check if we encounter bitrot?

There is no way to prevent bitrot. But there are ways to detect it, so you can restore a corrupted file from a backup, or repair it with the right tool (you can’t repair a file with a hammer, except if it’s some kind of HammerFS! :D )

In the following I will describe software I found to check for (or even repair) bitrot. If you know other tools which are not in this list, I would be happy to hear about them, please mail me.

In the following examples, I will use this method to generate bitrot on a file:

% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% generate_checksum_database_with_tool
% echo "a" >> my_data/some_file_that_will_be_corrupted
% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% start_tool_for_checking

We generate the checksum database, then we alter a file by adding an “a” at the end of the file and we restore the modification and access time of the file. Then, we start the tool to check for data corruption.

The first touch is only for convenience: we could get the modification time with the stat command and pass the same value to touch after modifying the file.


bitrot

This is a python script and it’s very easy to use. It will scan a directory and create a database with the checksum of the files and their modification date.

Initialization usage:

% cd /home/my_data/
% bitrot
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 189 new, 0 updated, 0 renamed, 0 missing.
Updating bitrot.sha512... done.
% echo $?

Verify usage (case OK):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
% echo $?

Exit status is 0, so our data are not damaged.

Verify usage (case Error):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
error: SHA1 mismatch for ./sometextfile.txt: expected 17b4d7bf382057dc3344ea230a595064b579396f, got db4a8d7e27bb9ad02982c0686cab327b146ba80d. Last good hash checked on 2017-03-16 21:04:39.
Finished. 199.41 MiB of data read. 1 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
error: There were 1 errors found.
% echo $?

This is what happens when something is wrong. As the exit status of bitrot isn’t 0 when it fails, it’s easy to write a script running it every day/week/month.
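
Such a periodic check only needs a few lines of shell. Here is a hypothetical wrapper sketch (the checker command, directory and reporting method are placeholders to adapt); it stays silent when everything is fine, which plays well with cron mailing any output:

```shell
#!/bin/sh
# Hypothetical cron wrapper: run an integrity checker in a directory
# and print something only when it reports errors.
CHECKER="${CHECKER:-bitrot}"
DIR="${DIR:-/home/my_data}"

run_check() {
    cd "$DIR" || return 1
    if output=$("$CHECKER" 2>&1); then
        return 0                    # no corruption: stay silent
    fi
    echo "integrity errors in $DIR:"
    echo "$output"
    return 1
}
```

Ending the script with a call to run_check and adding it to a weekly crontab is enough: cron sends you a mail whenever the script prints something.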

Github page

bitrot is available in OpenBSD ports in sysutils/bitrot since the 6.1 release.


par2cmdline

This tool works with PAR2 archives (see below for more information about what PAR is) and from them, it will be able to check your data integrity AND repair it.

While it has some pros like being able to repair data, the con is that it’s not very easy to use. I would use it for checking the integrity of long-term archives that won’t change. The main drawback comes from the PAR specifications: the archives are created from a file list, so if you add new files to the directory, you will need to recompute ALL the PAR archives because the file list changed, or create new PAR archives only for the new files, which makes the verify process more complicated. That doesn’t seem suitable when new bunches of files are regularly added to the directory.

PAR2 lets you choose the percentage of a file you will be able to repair; by default it will create the archives to be able to repair up to 5% of each file. That means you don’t need a whole backup of the files (though relying only on this would be a bad idea), only an extra of approximately 5% of your data to store.

Create usage:

% cd /home/
% par2 create -a integrity_archive -R my_data
Skipping 0 byte file: /home/my_data/empty_file

Block size: 3812
Source file count: 17
Source block count: 2000
Redundancy: 5%
Recovery block count: 100
Recovery file count: 7

Opening: my_data/[....]
[text cut here]
Opening: my_data/[....]

Computing Reed Solomon matrix.
Constructing: done.
Wrote 381200 bytes to disk
Writing recovery packets
Writing verification packets

% echo $?

% ls -1

Verify usage (OK):

% par2 verify integrity_archive.par2 
Loading "integrity_archive.par2".
Loaded 36 new packets
Loading "integrity_archive.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.

All files are correct, repair is not required.
% echo $?

Verify usage (with error):

% par2 verify integrity_archive.par.par2
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:

Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.

% echo $?

Repair usage:

% par2 repair integrity_archive.par.par2      
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:

Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.

Wrote 361069 bytes to disk

Verifying repaired files:

Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - found.

Repair complete.

% echo $?

par2cmdline is only one implementation doing the job; other tools working with PAR archives exist, and they should all be able to work with the same PAR files.

Parchive on Wikipedia

Github page

par2cmdline is available in OpenBSD ports in archivers/par2cmdline.

If you find a way to add new files to existing archives, please mail me.


mtree

One can write a little script using mtree (in the base system on OpenBSD and FreeBSD) which will create a file with the checksum of every file in the specified directories. If the mtree output differs since last time, we can send a mail with the difference. This process is done in the base install of OpenBSD for /etc and some other files, to warn you if they changed.

While it’s suited for directories like /etc, in my opinion this is not the best tool for doing integrity checks.
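
The idea can still be sketched in a few lines (paths are placeholders, and the checksum keyword varies between mtree implementations):

```shell
#!/bin/sh
# Hypothetical sketch: build an mtree specification of DIR including
# checksums, print the difference with the previous run, keep the new one.
DIR="${DIR:-/home/my_data}"
SPEC="${SPEC:-/var/db/my_data.mtree}"

check_tree() {
    new=$(mktemp) || return 1
    mtree -c -K sha256digest -p "$DIR" > "$new" || return 1
    if [ -f "$SPEC" ] && ! cmp -s "$SPEC" "$new"; then
        diff -u "$SPEC" "$new"     # could be piped to mail(1) instead
    fi
    mv "$new" "$SPEC"
}
```

Any output from check_tree means something changed since the previous run.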


ZFS

I would like to talk about ZFS and data integrity because this is where ZFS is very good. If you are using ZFS, you may not need any other software to take care of your data. When you write a file, ZFS also stores its checksum as metadata. By default, the “checksum” property is activated on datasets, but you may want to disable it for better performance.

There is a command to ask ZFS to check the integrity of the files. Warning: scrub is very I/O intensive and can take from hours to days or even weeks to complete, depending on your CPU, disks and the amount of data to scrub:

# zpool scrub zpool

The scrub command will recompute the checksum of every file on the ZFS pool; if something is wrong, it will try to repair it if possible. A repair is possible in the following cases:

If you have multiple disks in raidz or raid-1 (mirror), ZFS will look on the other disks for a non-corrupted version of the data; if it finds one, it will restore it on the disk(s) where it’s corrupted.

If you have set the ZFS property “copies” to 2 or 3 (1 is the default), the file is written 2 or 3 times on the disk. Each file of the dataset will be allocated 2 or 3 times on the disk, so take care if you want to use it on a dataset containing heavy files! If ZFS finds that a version of a file is corrupted, it will check the other copies and try to restore the corrupted file if possible.

You can see the percentage of the pool already scrubbed with

# zpool status zpool

and the scrub can be stopped with

# zpool scrub -s zpool


aide

Its name is an acronym for “Advanced Intrusion Detection Environment”; it’s a complicated piece of software which can be used to check for bitrot. I would not recommend using it if you only need bitrot detection.

Here are a few hints if you want to use it for checking your file integrity:


# Rule definition
All = m+s+i+sha256
summarize_changes = yes

/home/my_data/ R

The config file will create a database of all files in /home/my_data/ (R for recursive). The “All” line lists the checks we do on each file: for bitrot checking, we want to check the modification time, size, checksum and inode of the files. The summarize_changes line permits having a list of changes if something is wrong.

This is the most basic config file you can have. Then you will have to run aide to create the database, and later run it again to create a new database and compare the two. It doesn’t update its database itself; you will have to move the old database aside and tell it where to find the older one.

My use case

I have different kinds of data. On one side, I have static data like pictures, clips or music that won’t change over time, and on the other side I have my mails, documents and folders where the content changes regularly (creation, deletion, modification). I can afford a backup of 100% of my data with a few days of backup history, so I’m not interested in file repairing.

I want to be warned quickly if a file gets corrupted, so I can still find it in my backup history, as I don’t keep every version of my files for very long. I chose to go with the python tool bitrot; it’s very easy to use and it doesn’t become a mess with my folders getting updated often.

I would go with par2cmdline if I were not able to back up all my data. Having 5% or 10% of redundancy for my files should be enough to restore them in case of corruption without taking too much space.

Port of the week: rss2email

Written by Solène, on 24 January 2017.
Tags: #portoftheweek #unix #email

Comments on Mastodon

This is the kind of Port of the week I like. This is a software I just discovered and fell in love with. The tool r2e, which is the port mail/rss2email on OpenBSD, is a small python utility that solves a problem: how to deal with RSS feeds?

Until last week, I was using a “web app” named selfoss which was aggregating my RSS feeds and displaying them on a web page; I was able to filter by read/unread/marked and also by source. It is a good tool that does the job well, but I wanted something that doesn’t rely on a web browser. Here comes r2e!

This simple software will send you a mail for each new entry in your RSS feeds. It’s really easy to configure and set up. Just look at how I configured mine:

$ r2e new my-address+rss@my-domain.com
$ r2e add "http://undeadly.org/cgi?action=rss"
$ r2e add "https://dataswamp.org/~solene/rss.xml"
$ r2e add "https://www.dragonflydigest.com/feed"
$ r2e add "http://phoronix.com/rss.php"

Add this in your crontab to check new RSS items every 10 minutes:

*/10 * * * * /usr/local/bin/r2e run

Add a rule for my-address+rss to store mails in a separate folder, and you’re done !

NOTE: you can use r2e run --no-send for the first run; it will create the database and won’t send you mails for the items currently in the feeds.

Convert mailbox to maildir with dovecot

Written by Solène, on 17 January 2017.
Tags: #unix #email

Comments on Mastodon

I have been using the mbox format for a few years on my personal mail server. For those who don’t know what mbox is, it consists of only one file per folder you have in your mail client, each file containing all the mails of the corresponding folder. It’s extremely inefficient when you back up the mail directory because everything must be copied each time. Also, it reduces the caching possibilities of the server, because a folder with lots of mails with attachments may not fit in the cache.

Instead, I switched to maildir, which is a format where every mail is a regular file on the file system. This takes a lot of inodes but at least, it’s easier to back up or to deal with for analysis.

Here is how to switch from mbox to maildir with a dovecot tool.

# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox

That’s all! In this case, my mbox folder was ~/mail/ and my INBOX file was ~/mail/inbox. It took me some time to find where my INBOX really was; at first I tried a few things that didn’t work and tried a perl conversion tool named mb2md.pl, which was able to extract some mails, but a lot of them were broken. So I went back to dsync and got it working.

If you want to migrate, the whole process looks like:

# service smtpd stop

modify dovecot/conf.d/10-mail.conf, replace the first line
mail_location = mbox:~/mail:INBOX=/var/mail/%u   # BEFORE
mail_location = maildir:~/maildir                # AFTER

# service dovecot restart
# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox
# service smtpd start

Port of the week: entr

Written by Solène, on 07 January 2017.
Tags: #unix

Comments on Mastodon

entr is a command-line tool that lets you run an arbitrary command on file change. This is useful when you are working on something that requires some processing each time you modify it.

Recently, I have used it while editing a man page. At first, I had to run mandoc each time I modified the file to check the rendering. As it was the first time I edited a man page, I had to modify it a lot to get what I wanted. I remembered about entr; this is how you use it:

$ ls stagit.1 | entr mandoc /_

This simple command will run “mandoc stagit.1” each time stagit.1 is modified. The file names must be given on stdin to entr, and the character sequence /_ is replaced by the file name (like {} in find).

The man page of entr is very well documented if you need more examples.

Port of the week: dnscrypt-proxy

Written by Solène, on 19 October 2016.
Tags: #unix #security #portoftheweek

Comments on Mastodon

2020 Update

Now, unwind on OpenBSD and unbound support DNS over TLS or DNS over HTTPS; dnscrypt lost a bit of relevance, but it’s still usable and a good alternative.


Today I will talk about net/dnscrypt-proxy. It lets you encrypt your DNS traffic between your resolver and the remote DNS recursive server. More and more countries and internet providers use DNS to block some websites, and now they tend to do “man in the middle” with DNS answers, so you can’t just use a random remote DNS server you find on the internet. While a remote dnscrypt DNS server can still be affected by such “man in the middle” hijacking, there is very little chance DNS traffic is altered in datacenters / dedicated server hosting.

This article also deals with unbound as a DNS cache, because dnscrypt is a bit slow and asking for the same domain multiple times in a few minutes is a waste of cpu/network/time for everyone. So I recommend setting up a DNS cache on your side (which also permits using it on a LAN).

At the time I write this article, there is a very good explanation about how to install it in the file named dnscrypt-proxy-1.9.5p3 in the folder /usr/local/share/doc/pkg-readmes/. The following article is made from this file. (Article updated at the time of OpenBSD 6.3)

While I write for OpenBSD, this can easily be adapted to anything else Unix-like.

Install dnscrypt

# pkg_add dnscrypt-proxy


Modify your resolv.conf file to this:

/etc/resolv.conf :

lookup file bind
options edns0

When using dhcp client

If you use dhcp to get an address, you can force having as nameserver with the following line in the dhclient config file. Beware: if you use it, when upgrading the system from bsd.rd, you will get as your DNS server but no service running on it.

/etc/dhclient.conf :

supersede domain-name-servers;


Now, we need to modify the unbound config to tell it to forward DNS queries to on port 40. Please adapt your config; I will just add what is mandatory. The unbound configuration file isn’t in /etc because unbound is chrooted into /var/unbound.


/var/unbound/etc/unbound.conf :

server:
    # this line is MANDATORY
    do-not-query-localhost: no

forward-zone:
    name: "."
    # address dnscrypt listens on
    forward-addr: ""

If you want to allow other to resolv through your unbound daemon, please see parameters interface and access-control. You will need to tell unbound to bind on external interfaces and allow requests on it.
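
For example, a hypothetical LAN setup (the interface address and allowed network are assumptions to adapt) would add this to the server: section:

```
server:
    access-control: allow
```

With this, hosts on can query your unbound, which itself resolves through dnscrypt.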


Now we need to configure dnscrypt. Pick a server in the list in /usr/local/share/dnscrypt-proxy/dnscrypt-resolvers.csv; the name is in the first column.

As root, type the following (or use doas/sudo); in the example we choose dnscrypt.eu-nl as DNS provider:

# rcctl enable dnscrypt_proxy
# rcctl set dnscrypt_proxy flags -E -m1 -R dnscrypt.eu-nl -a
# rcctl start dnscrypt_proxy


You should now be able to resolve addresses through dnscrypt. You can use tcpdump on your external interface to check udp port 53: you should not see any traffic there anymore.

If you want to use dig hostname -p 40 @ to make DNS requests to dnscrypt without unbound, you will need net/isc-bind, which provides /usr/local/bin/dig. The OpenBSD base dig can’t use a port other than 53.

How to publish a git repository on http

Written by Solène, on 07 October 2016.
Tags: #unix #git

Comments on Mastodon

Here is a how-to for making a git repository available for cloning through a simple http server. This method only allows people to fetch the repository, not to push. I wanted to set this up to share my code; I don’t plan to have any commits from other people at this time, so it’s enough.

In a folder publicly available from your http server, clone your repository in bare mode, as explained in the Git book (Git on the Server - The Protocols): https://git-scm.com/book/tr/v2/Git-on-the-Server-The-Protocols

$ cd /var/www/htdocs/some-path/
$ git clone --bare /path/to/git_project gitproject.git
$ cd gitproject.git
$ git update-server-info
$ mv hooks/post-update.sample hooks/post-update
$ chmod o+x hooks/post-update
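
To see why git update-server-info matters: for the “dumb” http protocol, it generates the info/refs file that an http client fetches first to discover the branches. A quick self-contained illustration (temporary repository, names are just for the demo):

```shell
#!/bin/sh
# Build a throwaway repo, clone it bare, and inspect the file
# generated by update-server-info.
tmp=$(mktemp -d); cd "$tmp"
git init -q project; cd project
git config user.email "demo@example.org"
git config user.name "demo"
echo "hello" > README
git add README; git commit -qm "init"
cd ..
git clone -q --bare project project.git
cd project.git
git update-server-info
cat info/refs      # one "<commit-id><TAB>refs/heads/<branch>" line per branch
```

Without that file, a plain http client has no way to list the refs, which is why the clone fails.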

Then you will be able to clone the repository with

$ git clone https://your-hostname/some-path/gitproject.git

I lost some time because I did not execute git update-server-info, so the clone wasn’t possible.

Port of the week: rlwrap

Written by Solène, on 04 October 2016.
Tags: #unix #shell #portoftheweek

Comments on Mastodon

Today I will present misc/rlwrap, which is a utility for command-line software that doesn’t provide a nice readline input. By using rlwrap, you will be able to use telnet, a language REPL or any command-line tool where you input text, with a history of what you type and the ability to use emacs bindings like C-a C-e M-Ret etc… I use it often with telnet or sbcl.

Usage :

$ rlwrap telnet host port

How to kill processes by their name

Written by Solène, on 25 August 2016.
Tags: #unix

Comments on Mastodon

If you want to kill a process by its name instead of its PID number, which is easier if you have to kill several processes spawned from the same binary, here are the commands depending on your operating system:

FreeBSD / Linux

$ killall pid_name


OpenBSD

$ pkill pid_name


Solaris

Be careful with Solaris killall. With no argument, the command will send a signal to every active process, which is not something you want.

$ killall pid_name
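
Whichever system you are on, it can be safer to preview which processes match before sending a signal; pgrep shares its matching syntax with pkill. A small sketch:

```shell
#!/bin/sh
# Spawn a disposable process, preview the matches, then kill them.
sleep 300 &
pgrep -l sleep     # list PID and name of each matching process
pkill sleep        # send SIGTERM to every process named "sleep"
```

If pgrep lists more than you expected, refine the pattern before running pkill with it.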