About me: My name is Solène Rapenne. I like learning and sharing my knowledge related to IT stuff. Hobbies: '(BSD OpenBSD h+ Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on Freenode, solene+www at dataswamp dot org or solene@bsd.network (mastodon). If for some reason you want to give me some money, I accept paypal at the address donate@perso.pw.

# A curated non-violent games list

Written by Solène, on 18 October 2020.
Tags: #gaming

Comments on Mastodon

For a long time I have wanted to share a list of non-violent games I enjoyed, so here it is. Obviously, this list is FAR from complete or exhaustive. It contains games I played and liked. They should all run on Linux, and some on OpenBSD.

Aside from this list, most tycoon and puzzle games should be non-violent.

## Automation / Building games

This game is like Factorio, you have to automate production lines and increase the output of shapes/colors. Very time consuming.

The project is Open source, but you need to buy the game if you don’t want to compile it yourself. Or just use my compiled version at https://perso.pw/shapez.io/ (requires a Chrome-based browser…)

A transport tycoon game, multiplayer possible! Very complex, the community is active and you can find tons of mods.

The game is Open source and you can certainly install it on any distribution with the package manager.

This game is about building equipment to restore nature in a wasteland, improving biodiversity, and then removing all your structures.

The game is not open source but is free of charge. The music seems to be under an open licence. Still, you can pay what you want for it to support the developer.

This is a short game about chaining production buildings into one another, from garbage all the way up to a secret ending :)

The game is not open source but is free of charge.

## Sandbox / Adventure game

This game is a clone of Minecraft. It supports a lot of mods (which can make the game very complex, like adding train tracks with their signals, the pinnacle of complexity :D). As far as I know, the game now supports health but there are no fights involved.

The game is Open source and free of charge.

This game is about exploring a forest. It has nice music, and the gameplay is easy.

The game is not open source but it’s free. Still, you can pay what you want for it to support the developer.

## Action / reflex games

This category contains games that require some reflexes, or at least need the player to be active to play.

This game is about driving a 2D motocross bike through obstacles. It can be very hard and will challenge you for a long time.

It’s Open source and free of charge.

This is a fun game where you need to drive big trucks using only a displayed control panel with your mouse, which makes things very hard.

The game is not open source and not free, but the cost isn’t very high (3.99€ at the moment from France).

This game is about a teenager on vacation in a place with no cell network; you will have to go on a hike and meet people to reach the end. Very relaxing :)

The game isn’t open source and isn’t free, but costs around 8€ at the moment from France.

This game is about adding trains to tracks and preventing them from crashing. I found this game to be more about reflexes than building, simulation or tycoon management. You mostly need to route the trains in real time.

The game isn’t open source and not free but costs around 10€.

## Puzzle games (Zachtronics games)

What’s a Zachtronics game? It’s a game made by Zachtronics! Every game from this studio shares a common pattern: you solve puzzles with more and more complex systems, and you can compare your results in speed / efficiency / steps with other players. They are a mix between automation and puzzles. Those games are really good. There are more than the 3 games I list here, but I didn’t enjoy them all; check the full list.

You play an alchemist who is asked to create products for a rich family. You need to set up devices to transform and combine materials into the expected result.

The game isn’t open source and isn’t free. The average cost is 20€.

This game is in 3D: you receive materials on conveyor belts and you have to rotate and weld them to deliver the expected material.

The game isn’t open source and isn’t free. The average cost is 20€.

This game is about writing assembly code. There are calculation units that add/subtract values in registers and pass them to other units. Even more fun if you print the old-fashioned instruction book!

The game isn’t open source and isn’t free. The average cost is 10€.

## Visual Novel

The Expression Amrilato

This game is about a Japanese girl who ends up in a parallel world where everything seems similar, but in this Japan people speak Esperanto.

The game isn’t open source and isn’t free. The average cost is 20€.

## Not very violent

Way of the Passive Fist

I would like to add this game to the list. It’s a brawler (like Streets of Rage) in which you don’t fight people: you only dodge attacks to exhaust enemies, or counter-attack. It’s still a bit violent because it involves violence toward you, and throwing back a knife would still be violent… But still, I think this is a unique game that deserves to be better known. :)

The game isn’t open source and isn’t free, expect around 15€ for it.

# Making a home NAS using NixOS

Written by Solène, on 18 October 2020.
Tags: #nixos #linux #nas

Comments on Mastodon

Still playing with NixOS, I wanted to see how difficult it would be to write a NixOS configuration file turning a computer into a simple NAS with basic features: samba storage, a dlna server and auto suspend/resume.

What is NixOS? As a reminder for some and an introduction for others, NixOS is a Linux distribution built around the Nix package manager, which makes it very different from any other operating system out there, except Guix, which has a similar approach with its own package manager written in Scheme.

NixOS uses a declarative configuration approach along with lots of other features derived from Nix. What’s big here is that you no longer tweak anything in /etc or install packages: you define the desired state of the system in one configuration file. This system is a totally different beast than other OSes and requires some time to understand how it works. Good news though: everything is documented in the man page configuration.nix, from fstab configuration to user management or how to enable samba!

Here is the /etc/nixos/configuration.nix file on my NAS.

It enables the ssh server, samba, minidlna and vnstat, and sets up a user with my ssh public keys. Ready to work.

Using the rtcwake command (Linux specific), it’s possible to put the system into standby mode and schedule an automatic resume after some time. This is triggered by a cron job at 01:00.

{ config, pkgs, ... }:
{
  # include stuff related to hardware, auto generated at install
  imports = [ ./hardware-configuration.nix ];
  boot.loader.grub.device = "/dev/sda";

  # network configuration
  networking.interfaces.enp3s0.ipv4.addresses = [ {
    address = "192.168.42.150";
    prefixLength = 24;
  } ];
  networking.defaultGateway = "192.168.42.1";
  networking.nameservers = [ "192.168.42.231" ];

  # FR locales and layout
  i18n.defaultLocale = "fr_FR.UTF-8";
  console = { font = "Lat2-Terminus16"; keyMap = "fr"; };
  time.timeZone = "Europe/Paris";

  # Packages management
  environment.systemPackages = with pkgs; [
    kakoune vnstat borgbackup utillinux
  ];

  # firewall disabled (I need to check the ports used first)
  networking.firewall.enable = false;

  # services to enable
  services.openssh.enable = true;
  services.vnstat.enable = true;

  # auto standby
  services.cron.systemCronJobs = [
    "0 1 * * * root rtcwake -m mem --date +6h"
  ];

  # samba service
  services.samba.enable = true;
  services.samba.enableNmbd = true;
  services.samba.extraConfig = ''
    workgroup = WORKGROUP
    server string = Samba Server
    server role = standalone server
    log file = /var/log/samba/smbd.%m
    max log size = 50
    dns proxy = no
    map to guest = Bad User
  '';
  services.samba.shares = {
    public = {
      path = "/home/public";
      browseable = "yes";
      "writable" = "yes";
      "guest ok" = "yes";
      "public" = "yes";
      "force user" = "share";
    };
  };

  # minidlna service
  services.minidlna.enable = true;
  services.minidlna.announceInterval = 60;
  services.minidlna.friendlyName = "Rorqual";
  services.minidlna.mediaDirs = ["A,/home/public/Musique/" "V,/home/public/Videos/"];

  # trick to create a directory with proper ownership
  # note that tmpfiles are not necessarily temporary if you don't
  # set an expiry time. Trick given on irc by someone whose name I forgot.
  systemd.tmpfiles.rules = [ "d /home/public 0755 share users" ];

  # create my user, with sudo rights and my public ssh keys
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "wheel" "sudo" ];
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15viQXHYRjGqE4LLfvETMkjjgSz0mzMzS personal"
      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15vAQXBYRjGqE6L1fvETMkjjgSz0mxMzS pro"
    ];
  };

  # create a dedicated user for the shares
  # I prefer a dedicated one over "nobody"
  # it can't log in
  users.users.share = {
    isNormalUser = false;
  };
}
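
A note on the disabled firewall above: once the ports are confirmed, a sketch like this could keep the firewall on and open only what samba and minidlna need (the port numbers below are my assumptions based on the defaults of those services):

```nix
# assumption: samba uses 139/445 TCP and 137/138 UDP, minidlna serves
# media over 8200 TCP and announces itself with SSDP on 1900 UDP
networking.firewall.enable = true;
networking.firewall.allowedTCPPorts = [ 139 445 8200 ];
networking.firewall.allowedUDPPorts = [ 137 138 1900 ];
```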


# NixOS optional features in packages

Written by Solène, on 14 October 2020.
Tags: #nixos #linux

Comments on Mastodon

As a claws-mail user, I like to have calendar support in the mail client to be able to “accept” invitations. In the default NixOS claws-mail package, the vcalendar module isn’t installed with the package. Still, it is possible to add support for the vcalendar module without an ugly hack.

It turns out that, by default, the claws-mail package in Nixpkgs has an optional build option for the vcalendar module; we just need to tell Nixpkgs we want this module and claws-mail will be recompiled with it.

As stated in the NixOS manual, the optional features can’t be searched yet. So what’s possible is to search for your package in the NixOS packages search, click on the package name to get to the details and click on the link named “Nix expression”, which opens the package definition on GitHub: the claws-mail nix expression.

As you can see in the claws-mail nix expression code, there are a lot of lines with optional; those are features we can enable. Here is a sample:

[..]
++ optional (!enablePluginArchive) "--disable-archive-plugin"
++ optional (!enablePluginLitehtmlViewer) "--disable-litehtml_viewer-plugin"
++ optional (!enablePluginPdf) "--disable-pdf_viewer-plugin"
++ optional (!enablePluginPython) "--disable-python-plugin"
[..]


In your configuration.nix file, where you define the package list you want, you can say you want the vcalendar plugin enabled, as in the following example:

environment.systemPackages = with pkgs; [
  kakoune git firefox irssi minetest
  (pkgs.claws-mail.override { enablePluginVcalendar = true; })
];


When you rebuild your system to match the configuration definition, claws-mail will be compiled with the extra options you defined.
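
If you prefer overriding the package once rather than inline in the package list, the same result can be expressed with packageOverrides; a sketch:

```nix
# every reference to claws-mail in the configuration now points to the
# overridden build with the vcalendar plugin enabled
nixpkgs.config.packageOverrides = pkgs: {
  claws-mail = pkgs.claws-mail.override { enablePluginVcalendar = true; };
};
```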

Now, I have claws-mail with vCalendar support.

# Unlock a full disk encryption NixOS with usb memory stick

Written by Solène, on 06 October 2020.
Tags: #nixos #linux

Comments on Mastodon

Using NixOS on a laptop on which the keyboard isn’t detected when I need to type the password to decrypt the disk, I had to find a solution. This problem is hardware related, not Linux or NixOS related.

I highly recommend using full disk encryption on every computer, if only for the theft threat model. Having your computer stolen is bad, but if the thief also has access to all your data, you will certainly be in trouble.

It was time to figure out how to use a usb memory stick to unlock the full disk encryption when I don’t have a usb keyboard at hand to unlock the computer.

There are 4 steps to enable unlocking the luks volume using a device.

1. Create the key
2. Add the key on the luks volume
3. Write the key on the usb device
4. Configure NixOS

First step: creating the key file. The easiest way is the following:

# dd if=/dev/urandom of=/root/key.bin bs=4096 count=1


This will create a 4096-byte key. You can choose the size you want.

Second step is to register that key in the luks volume; you will be prompted for the luks password when doing so.

# cryptsetup luksAddKey /dev/sda1 /root/key.bin


Then, it’s time to write the key to your usb device, I assume it will be /dev/sdb.

# dd if=/root/key.bin of=/dev/sdb bs=4096 count=1
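
Before relying on the stick, a sanity check of my own (assuming GNU cmp, which supports the -n byte-limit option) can confirm it really holds the key:

```shell
# compare the first N bytes of the device against the key file
check_key() {
  # $1 = key file, $2 = device, $3 = key size in bytes
  cmp -n "$3" "$1" "$2" && echo "key matches"
}
```

For the setup in this article, that would be check_key /root/key.bin /dev/sdb 4096 (run as root).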


And finally, you will need to configure NixOS to give the information about the key. It’s important to give the correct size of the key. Don’t forget to adapt "crypted" to your luks volume name.

boot.initrd.luks.devices."crypted".keyFileSize = 4096;
boot.initrd.luks.devices."crypted".keyFile = "/dev/sdb";


Rebuild your system with nixos-rebuild switch and voilà!

### Going further

I recommend using the fallback-to-password feature so that if you lose or don’t have your memory stick, you can type the password to unlock the disk. Note that you must not plug in anything that would appear as /dev/sdb, because if the device exists but doesn’t hold the key, the system won’t ask for the password and you will need to reboot.

boot.initrd.luks.devices."crypted".fallbackToPassword = true;


It’s also possible to write the key in a partition or at a specific offset on your memory stick. For this, look at the boot.initrd.luks.devices."volume".keyFileOffset entry.

# Playing chess by email

Written by Solène, on 28 September 2020.
Tags: #chess

Comments on Mastodon

It’s possible to play chess using email. This is possible because there are notations like PGN (Portable Game Notation) that describe the state of a game.

By playing on your computer and sending the PGN of the game to your opponent, that person will be able to play their move and send you the new PGN so you can play.

## Using xboard

This is quite easy with xboard (which should be available in most bsd/linux/unix distributions), as long as you are aware of the few keybindings.

When you start a game, press Ctrl+E to enter edit mode; this prevents the AI from playing. Then make your move.

From there, you can press Ctrl+C to copy the state of the game. You will have something like this in your clipboard.

[Event "Edited game"]
[Site "solene.local"]
[Date "2020.09.28"]
[Round "-"]
[White "-"]
[Black "-"]
[Result "*"]

1. d3
*


You can send this to your opponent, but the only needed data is 1. d3, which is the PGN notation of the moves. You can discard the rest.
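
If you want to automate the cleanup, a small helper sketch can strip the bracketed tag lines and blank lines so only the moves remain:

```shell
# drop the [Tag "..."] header lines and empty lines from a PGN file,
# leaving only the move text
strip_pgn() {
  sed '/^\[/d' "$1" | grep -v '^$'
}
```

Run on the game above (saved to a hypothetical game.pgn), it would output only the two lines 1. d3 and *.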

In a more advanced game, you will end up mailing this kind of data:

1. d3 e6 2. e4 f5 3. exf5 exf5 4. Qe2+ Be7 5. Qxe7+ Qxe7+


When you want to play your turn, load that line and press Ctrl+V, you should see the moves happening on the board.

## Using gnuchess

gnuchess allows playing chess on the command line.

When you start a game, you will have a prompt; type manual to not play against the AI. I recommend typing coords to display coordinates on the axes of the board.

When you type show board you will have this display:

  white  KQkq

8 r n b q k b n r
7 p p p p p p p p
6 . . . . . . . .
5 . . . . . . . .
4 . . . . . . . .
3 . . . . . . . .
2 P P P P P P P P
1 R N B Q K B N R
a b c d e f g h


Then, if I type d3, I get this display:

8 r n b q k b n r
7 p p p p p p p p
6 . . . . . . . .
5 . . . . . . . .
4 . . . . . . . .
3 . . . P . . . .
2 P P P . P P P P
1 R N B Q K B N R
a b c d e f g h


From within the game, you can save it using pgnsave FILE and load a game using pgnload FILE.

You can see the list of the moves using show game.

# About pipelining OpenBSD ports contributions

Written by Solène, on 27 September 2020.
Tags: #openbsd #automation

Comments on Mastodon

After modest contributions to the NixOS operating system, which taught me about its contribution process, I found it enjoyable to have automatic reports and feedback about the quality of submitted work. While on NixOS this requires GitHub, I think this could be applied as well to OpenBSD and its mailing-list contribution system.

I made a prototype before starting the real work, and I’m actually happy with the result.

This is what I get after feeding the script with a mail containing a patch:

Determining package path         ✓
Verifying patch isn't committed  ✓
Applying the patch               ✓
Fetching distfiles               ✓
Distfile checksum                ✓
Applying ports patches           ✓
Extracting sources               ✓
Building result                  ✓


It requires a lot of checks to find a patch in the file, because we have patches generated from cvs or git, which have slightly different output. And then, we need to find from where to apply the patch.
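
As an illustration of one such check (my own sketch, not the prototype’s actual code), git and cvs diffs can be told apart by their leading markers:

```shell
# git-formatted patches contain "diff --git" lines, while cvs diff
# output contains "Index:" lines
detect_patch_type() {
  if grep -q '^diff --git' "$1"; then
    echo git
  elif grep -q '^Index: ' "$1"; then
    echo cvs
  else
    echo unknown
  fi
}
```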

The idea would be to retrieve mails sent to ports@openbsd.org by subscribing, then store metadata about that submission into a database:

Sender
Date
Diff (raw text)
Status (already committed, doesn't apply, apply, compile)


Then, another program will pick a diff from the database, prepare a VM using a qcow2 disk derived from a base image so it always starts fresh, clean and ready, and run the checks within the VM.

Once it is finished, a mail could be sent as a reply to the original mail to give the status of each step until error or last check. The database could be reused to make a web page to track what compiles but is not yet committed. As it’s possible to verify if a patch is committed in the tree, this can automatically prune committed patches over time.

I really think this can improve tracking patches sent to ports@ and ease the contribution process.

DISCLAIMER

• This would not be an official part of the project, I do it on my own
• This may be cancelled
• This may be a bad idea
• This could be used “as a service” instead of pulling automatically from ports, meaning people could send mails to it to receive an automatic review. Ideally this should be done in portcheck(1) but I’m not sure how to verify a diff apply on the ports tree without enforcing requirements
• Human work will still be required to check the content and verify the port works correctly!

# Docker cheatsheet

Written by Solène, on 24 September 2020.
Tags: #docker

Comments on Mastodon

Simple Docker cheatsheet. This is a short introduction about Docker usage and common questions I have been asking myself about Docker.

The official documentation for building docker images can be found here

## Build an image

Building an image is really easy. As a requirement, you need to be in a directory that can contain the data you will use for building the image, but most importantly, you need a Dockerfile.

The Dockerfile holds all the instructions to create the container. A simple example would be this description:

FROM busybox
CMD "echo" "hello world"


This will create a docker container using the busybox base image and run echo "hello world" when you run it.

To create the container, use the following command in the same directory in which Dockerfile is:

$ docker build -t your-image-name .


## Advanced image building

If you need to compile sources to distribute a working binary, you have to prepare the environment with the dependencies required to compile, and then build a static binary so you can ship the container without all the build dependencies. In the following example we will use a Debian environment to build the software, downloaded with git.

FROM debian as work
WORKDIR /project
RUN apt-get update
RUN apt-get install -y git make gcc
RUN git clone git://bitreich.org/sacc /project
RUN apt-get install -y libncurses5-dev libncurses5
RUN make LDFLAGS="-static -lncurses -ltinfo"

FROM debian
COPY --from=work /project/sacc /usr/local/bin/sacc
CMD "sacc" "gopherproject.org"


I won’t explain every command here, but you may see that I have split the package installation into two commands. This was to help debugging. The trick here is that the docker build process has a cache feature: every time you use a FROM, COPY, RUN or CMD command, docker caches the current state of the build process, so if you re-run the process, docker can pick up at the most recent state before the change. I wasn’t sure how to compile the software statically at first, and having to install git, make and gcc and run git clone EVERY TIME was very time and bandwidth consuming. If you run this build and it fails, you can re-run the build and docker will catch up directly at the last working step. If you change a line, docker reuses the last cached state from a FROM/COPY/RUN/CMD command before the changed line. Knowing about this is really important for efficient cache use.

## Run an image

With the previously built local image, we can run it with the command:

$ docker run your-image-name
hello world


By default, when you run an image by name and you don’t have a local image matching that name, docker will check on the official docker repository whether such an image exists; if so, it will be pulled and run.

$ docker run hello-world


This is a sample official container that will display some explanations about docker.

If you want to try a gopher client, I made a docker version of it that you can run with the following command:

$ docker run -t -i rapennesolene/sacc


Why the -t and -i parameters? The former tells docker you want a tty, because the program will manipulate a terminal, and the latter asks for an interactive session.

## Persistent data

By default, all data in a docker container gets wiped once it stops, which may be really undesirable if you use docker to deploy a service that has state and requires installation, configuration files, etc…

Docker has two ways to solve it:

1. map a local directory
2. map a docker volume name

This is done with the -v parameter of the docker run command.

$ docker run -v data:/var/www/html/ nextcloud


This will map a persistent storage named “data” on the host to the path /var/www/html in the docker instance. When you use data, docker checks whether /var/lib/docker/volumes/data exists; if so it reuses it, and if not it creates it. This is a convenient way to name volumes and let docker manage them.

The other way is to map a local path to a path in the container environment.

$ docker run -v /home/nextcloud:/var/www/html nextcloud


In this case, the directory /home/nextcloud on the host and /var/www/html in the docker environment will be the same directory.

# A few tips about the command cd

Written by Solène, on 04 September 2020.
Tags: #unix

Comments on Mastodon

While everyone familiar with a shell knows about the command cd, there are a few tips you should know.

### Moving to your $HOME directory

$ pwd
/tmp
$ cd
$ pwd
/home/solene


Using cd without argument will change your current directory to your $HOME.

### Moving into someone else's $HOME directory

While this should fail most of the time because people shouldn’t allow anyone to visit their $HOME, there are use cases for it though.

$ cd ~user1
$ pwd
/home/user1
$ cd ~solene
$ pwd
/home/solene


Using ~user as a parameter will move to that user’s $HOME directory; note that cd and cd ~youruser have the same result.

### Moving to previous directory

This is a very useful command which allows going back and forth between two directories.

$ pwd
/home/solene
$ cd /tmp
$ pwd
/tmp
$ cd -
/home/solene
$ pwd
/home/solene


When you use cd -, the command moves to the previous directory you were in. There are two special variables in your shell: PWD and OLDPWD. When you move somewhere, OLDPWD holds your location before moving, and PWD holds the new path. When you use cd -, the two variables are exchanged, which means you can only jump between two paths using cd - multiple times. Please note that when using cd -, your new location is displayed.

### Changing directory by modifying the current PWD

thfr@ showed me a cd feature I had never heard about, and this is the perfect place to write about it. Note that this works in ksh and zsh but is reported to not work in bash. One example will explain better than any text.

$ pwd
/tmp/pobj/foobar-1.2.0/work
$ cd 1.2.0 2.4.0
/tmp/pobj/foobar-2.4.0/work


This tells cd to replace the first parameter's pattern with the second parameter in the current PWD and then cd into it.

$ pwd
/home/solene
$ cd solene user1
/home/user1


This could be done in a bloated way with the following command:

$ cd $(echo $PWD | sed "s/solene/user1/")
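
For bash users, where this two-argument cd is reported not to work, plain parameter expansion on $PWD gives a similar effect; a sketch (this is ordinary string substitution, not special cd syntax):

```shell
# run under bash explicitly; ${PWD/old/new} replaces "old" with "new"
# in the current path, then cd moves into the result
bash -c 'cd /tmp && cd "${PWD/tmp/usr}" && pwd'   # prints /usr
```

In an interactive bash session you would simply type something like cd "${PWD/1.2.0/2.4.0}".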


I learned it a few minutes ago, but I already see a lot of use cases where I could use it.

### Moving into the current directory after removal

This helps in a specific case: having your shell in a directory that was deleted and then recreated (this often happens when you work in compilation directories).

A simple trick is to tell cd to go to the current location.

$ cd .


or

$ cd $PWD


And cd will go into the same path, and you can start hacking again in that directory.

# Find which package provides a given file in OpenBSD

Written by Solène, on 04 September 2020.
Tags: #openbsd

Comments on Mastodon

There is one very handy package on OpenBSD named pkglocatedb which provides the command pkglocate. If you need to find a file or binary/program and you don’t know which package contains it, use pkglocate.

$ pkglocate */bin/exiftool
p5-Image-ExifTool-12.00:graphics/p5-Image-ExifTool:/usr/local/bin/exiftool


With the result, I know that the package p5-Image-ExifTool will provide me the command exiftool.

Another example looking for files containing the pattern “libc++”

$ pkglocate libc++
base67:/usr/lib/libc++.so.5.0
base67:/usr/lib/libc++abi.so.3.0
comp67:/usr/lib/libc++.a
comp67:/usr/lib/libc++_p.a
comp67:/usr/lib/libc++abi.a
comp67:/usr/lib/libc++abi_p.a
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.app
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.lib
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qmake.conf
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qplatformdefs.h


As you can see, base sets are also in the database used by pkglocate, so you can easily find whether a file comes from a set (that you should have) or from a package.

## Find which package installed a file

Klemens Nanni (kn@) told me it’s possible to find which package installed a file present in the filesystem using the pkg_info command, which comes from the base system. This can be handy to know which package an installed file comes from, without requiring pkglocatedb.

$ pkg_info -E /usr/local/bin/convert
/usr/local/bin/convert: ImageMagick-6.9.10.86p0
ImageMagick-6.9.10.86p0 image processing tools


This tells me the convert binary was installed by the ImageMagick package.

# Download files listed in a http index with wget

Written by Solène, on 16 June 2020.
Tags: #wget #internet

Comments on Mastodon

Sometimes I need to download files over http from a list on an “autoindex” page, and it’s always painful to find the correct command for this.

The easy solution is wget but you need to use the correct parameters because wget has a lot of mirroring options but you only want specific ones to achieve this goal.

I ended up with the following command:

wget --continue --accept "*.tgz" --no-directories --no-parent --recursive http://ftp.fr.openbsd.org/pub/OpenBSD/6.7/amd64/


This will download every tgz file available at the address given as the last parameter.

The parameters given will filter to only download the tgz files, put the files in the current working directory and, most importantly, not escape to the parent directory to start downloading again. The --continue parameter allows interrupting wget and starting again: downloaded files will be skipped and partially downloaded files will be completed.

Do not reuse this command if files changed on the remote server, because the continue feature only works if your local file and the remote file are the same: wget simply looks at the local and remote names and asks the remote server to start downloading at the current size of your local file. If the remote file changed meanwhile, you will end up with a mix of the old and new files.

Obviously ftp protocol would be better suited for this download job but ftp is less and less available so I find wget to be a nice workaround for this.

# Birthdays dates management using calendar

Written by Solène, on 15 June 2020.
Tags: #openbsd #plaintext #automation

Comments on Mastodon

I manage my birthday list in a calendar file, so I don’t forget about them and can use the list in scripts.

The calendar file format is easy but sadly it only works using English month names.

This is an example file with different spacing:

7  August   This is 7 august birthday!
8 August   This is 8 august birthday!
16 August   This is 16 august birthday!


Now that you have a calendar file, you can run the calendar binary on it and show incoming events in the next n days using the -A flag.

calendar -A 20


Note that the default file is ~/.calendar/calendar, so if you use this file you don’t need the -f flag.

Now, I also use it in crontab with xmessage to show a popup once a day with incoming birthdays.

30 13 * * *  calendar -A 7 -f ~/.calendar/birthdays | grep . && calendar -A 7 -f ~/.calendar/birthdays | env DISPLAY=:0 xmessage -file -


You have to set the DISPLAY variable so it appears on the screen.

It’s important to check if calendar will have any output before calling xmessage to prevent having an empty window.
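
The guard from the crontab line in isolation, with echo standing in for calendar: the command after && only runs when grep matched at least one line of output.

```shell
# grep . matches any non-empty line; with no match it exits non-zero
# and the right-hand side of && is skipped
echo "7 August  Some birthday" | grep . >/dev/null && echo "a popup would be shown"
```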

# prose - Blogging with emails

Written by Solène, on 11 June 2020.
Tags: #blog #email #blog #plaintext

Comments on Mastodon

The software developer prx, whose website is available at https://ybad.name/ (en/fr), released new software called prose to publish a blog by sending emails.

I really like this idea, while this doesn’t suit my needs at all, I wanted to write about it.

The code can be downloaded from this address https://dev.ybad.name/prose/ .

I will briefly introduce how it works, but the README file explains it well: prose must run on the mail server; upon receiving an email, an /etc/mail/aliases entry pipes the email into prose, which produces the html output.
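
To illustrate the mechanism, here is what such an aliases entry could look like (the alias name and the install path are my assumptions; check the README for the real invocation):

```
# hypothetical /etc/mail/aliases entry: mail sent to "blog" is piped into prose
blog: "|/usr/local/bin/prose"
```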

On the security side, prose doesn’t use any external command, and on OpenBSD it uses the unveil and pledge features to reduce its privileges: unveil prevents the process from accessing the file system outside of the html output directory.

I would also like to congratulate prx, who demonstrates again that writing good software isn’t exclusive to IT professionals.

# Gaming on OpenBSD

Written by Solène, on 05 June 2020.
Tags: #openbsd #gaming

Comments on Mastodon

While no one would expect this, there is a huge effort from a small team to bring more games to OpenBSD. In fact, some commercial games now work natively, thanks to Mono or Java. There is no wine or Linux emulation layer in OpenBSD.

Here is a small list of most well known games that run on OpenBSD:

• Northguard (RTS)
• Dead Cells (Side scroller action game)
• Stardew Valley (Farming / Roguelike)
• Slay The Spire (Card / Roguelike)
• Axiom Verge (Side scroller, metroidvania)
• Crosscode (top view twin stick shooter)
• Terraria (Side scroller action game with craft)
• Ion Fury (FPS)
• Doom 3 (FPS)
• Minecraft (Sandbox - not working using latest version)
• Tales Of Maj’Eyal (Roguelike with lot of things in it - open source and free)

I would also like to feature the recently compatible games from the Zachtronics developer; those are ingenious puzzle games that reward efficiency. There are games involving assembly code, pseudo-code, molecules, etc.

• Opus Magnum
• Exapunks
• Molek-Syntez

Finally, there are good RPGs running thanks to devoted developers spending their free time on game engine reimplementations:

• The Elder Scrolls III: Morrowind (OpenMW engine)
• Baldur’s Gate 1 and 2 (gemrb engine)
• Planescape: Torment (gemrb engine)

There is a Peertube (open source decentralized YouTube alternative) channel where I started publishing gaming videos recorded on OpenBSD. Videos from other people are now published there too: the OpenBSD Gaming channel

The full list of running games is available on the Shopping guide webpage, including information on how they run, which store you can buy them from and whether they are compatible.

Big thanks to thfr@ who works hard to keep the shopping guide up to date and who made most of this possible. Many thanks to all the other people in the OpenBSD Gaming community :)

Note that the latest Terraria release/update doesn’t seem to work on OpenBSD yet.

# Beautiful background pictures on OpenBSD

Written by Solène, on 20 May 2020.
Tags: #openbsd

Comments on Mastodon

While the title may sound strange, this article is about installing a package that gives you a new random wallpaper every time you start the X session!

First, you need to install a package named openbsd-backgrounds, which is quite large at 144 MB. This package, made by Marc Espie, contains a lot of pictures shot by OpenBSD developers.

You can automatically set a picture as the background when xenodm starts and prompts for your username by uncommenting a few lines in the file /etc/X11/xenodm/Xsetup_0:

Uncomment this part

if test -x /usr/local/bin/openbsd-wallpaper
then
/usr/local/bin/openbsd-wallpaper
fi


The command openbsd-wallpaper displays a different random picture on every screen (if you have multiple screens connected) each time you run it.

# Communauté OpenBSD française

Written by Solène, on 17 May 2020.
Tags: #openbsd

Comments on Mastodon

This article is exceptionally about the French-speaking OpenBSD community; it was originally written in French.

Hello everyone.

Exceptionally, I’m publishing a post in French on my blog because I want to spread the word about the French-speaking community obsd4a.

For example, you can find almost the entire OpenBSD FAQ translated at this address.

On the site’s home page you will find links to the forum, the wiki, the blog, the mailing list, as well as information on how to join the IRC channel (#obsd4* on Freenode).

https://openbsd.fr.eu.org/

# New blog feature: Fediverse comments

Written by Solène, on 16 May 2020.
Tags: #fediverse #automation

Comments on Mastodon

I added a new feature to my blog today: when I post a new article, my dedicated Mastodon user https://bsd.network/@solenepercent publishes a toot, so people can discuss the content there.

Every article now contains a link to the corresponding toot if you want to discuss it.

This is not perfect but a good trade-off I think:

1. the website remains static and light (nothing is embedded, only one more link per blog post)
2. people who want to discuss an article can do so in a known place, instead of writing reactions on reddit or elsewhere without giving me a chance to answer
3. it doesn’t rely on proprietary services

Of course, if you want to give me feedback, I’m still happy to reply to emails or on IRC.

# FreeBSD 12.1 on a laptop

Written by Solène, on 11 May 2020.
Tags: #freebsd #mate #laptop

Comments on Mastodon

# Introduction

I’m using FreeBSD on a laptop again for various reasons, so expect to read more about FreeBSD here. This tutorial explains how to get a graphical desktop on FreeBSD 12.1.

I used a Lenovo Thinkpad T480 for this tutorial.

# Intel graphics hardware support

If you have a recent Intel integrated graphics card (less than about 3 years old), you have to install a package containing the driver:

pkg install drm-kmod


and you also have to tell the system the full path of the module (because another i915kms.ko file exists):

sysrc kld_list="/boot/modules/i915kms.ko"


# Choose your desktop environment

## Install Xfce

pkg install xfce


Then in your user ~/.xsession file you must append:

exec ck-launch-session startxfce4


## Install MATE

pkg install mate


Then in your user ~/.xsession file you must append:

exec ck-launch-session mate-session


## Install KDE5

pkg install kde5


Then in your user ~/.xsession file you must append:

exec ck-launch-session startplasma-x11


# Setting up the graphical interface

You have to enable a few services to have a working graphical session:

• moused to get laptop mouse support
• dbus for hald
• hald for hardware detection
• xdm for display manager where you log-in

You can install them with the command:

pkg install xorg dbus hal xdm


Then you can enable the services at boot using the following commands, order is important:

sysrc moused_enable="yes"
sysrc dbus_enable="yes"
sysrc hald_enable="yes"
sysrc xdm_enable="yes"


Reboot or start the services in the same order:

service moused start
service dbus start
service hald start
service xdm start


Note that xdm will use a QWERTY keyboard layout.

# Power management

The installer should have prompted about the powerd service; if you didn’t activate it then, you can still enable it now.

Check if it’s running

service powerd status


Enabling

sysrc powerd_enable="yes"


Starting the service

service powerd start


# Webcam support

If you have a webcam and want to use it, some configuration is required in order to make it work.

Install the webcamd package; it displays all the instructions written below at install time.

pkg install webcamd


From here, append this line to the file /boot/loader.conf to load webcam support at boot time:

cuse_load="yes"


Add your user to the webcamd group so it will be able to use the device:

pw groupmod webcamd -m YOUR_USER


Enable webcamd at boot:

sysrc webcamd_enable="yes"


Now you have to log out and back in for the group change to take effect. If you want the webcamd daemon to work now rather than waiting for the next reboot:

kldload cuse
service webcamd start
service devd restart


You should have a /dev/video0 device now. You can test it easily with the package pwcview.

# External resources

I found this blog post very interesting; I wish I had found it before struggling with all the configuration, as it explains how to install FreeBSD on the exact same laptop. The author explains how to make a transparent lagg0 interface to switch from ethernet to wifi automatically, using a failover pseudo-device.

https://genneko.github.io/playing-with-bsd/hardware/freebsd-on-thinkpad-t480/

# Enable firefox dark mode

Written by Solène, on 04 May 2020.
Tags: #firefox

Comments on Mastodon

Some websites (like this one) now offer two different themes: light and dark.

Dark themes are considered easier on the eyes and reduce battery usage on mobile devices, because displaying dark pixels requires less energy. The gain is greatest on OLED devices, but it also applies to classic LCD screens.

On Windows and macOS there is a global user-interface setting to choose between light and dark mode, and that setting is honored by many applications supporting both themes. On Linux and the BSDs (and other operating systems) there is no such setting, so your web browser keeps displaying the light theme all the time.

Fortunately, this can be fixed in Firefox, as explained in the documentation.

To make it short: in the about:config special Firefox page, create a new key ui.systemUsesDarkTheme with a number value of 1. The about:config page should turn dark immediately, and from then on Firefox will use dark themes when websites provide them.
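If you prefer a file-based approach, the same preference can be set from a user.js file in your Firefox profile directory (the profile path varies per installation; this is just a sketch using the standard user_pref syntax):

```
// user.js in the Firefox profile directory — applied at startup
user_pref("ui.systemUsesDarkTheme", 1);
```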

Note that, as explained in the Mozilla documentation, if the key privacy.resistFingerprinting is set to true, dark mode can’t be used. It seems dark mode and fingerprinting resistance can’t belong together for some reason.

Many thanks to https://tilde.zone/@andinus who pointed this out to me, after I had overlooked that page and searched for a long time, without result, for how to make Firefox display websites using their dark theme.

# Aggregate internet links with mlvpn

Written by Solène, on 28 March 2020.
Tags: #openbsd67

Comments on Mastodon

In this article I’ll explain how to aggregate internet access bandwidth using the mlvpn software. I struggled a lot to set this up, so I wanted to share a how-to.

## Pre-requisites

mlvpn is meant to be used with DSL / fiber links, not wireless or 4G links with variable bandwidth or packet loss.

mlvpn must run both on a server, which provides the public internet access, and on the client where you want to aggregate the links. It is like establishing multiple VPNs to the same remote server, one per link, and aggregating them.

A multi-WAN round-robin / load balancer doesn’t let you stack bandwidth, but it doesn’t require a remote server either; depending on what you want to do, that may be enough and mlvpn may not be required.

mlvpn should be OS-agnostic between client and server, but I only tried it between two OpenBSD hosts; your setup may differ.

## Some network diagram

Here is a simple network: the client has access to 2 ISPs through two ethernet interfaces.

em0 and em1 will have to be on different rdomains (it’s a feature to separate routing tables).

Let’s say the public ip of the server is 1.2.3.4.

                 [internet]
                     ↑
                     | (public ip on em0)
              #-------------#
              |             |
              |   Server    |
              |             |
              #-------------#
                 |       |
      (internet) |       | (internet)
         +-------+       +-------+
         |                       |
  #-------------#         #-------------#
  |             |         |             |
  |    ISP 1    |         |    ISP 2    |  (you certainly don't control those)
  #-------------#         #-------------#
         |                       |
  (dsl1 via em0)          (dsl2 via em1)
         |                       |
         +-------+       +-------+
                 |       |
              #-------------#
              |             |
              |   Client    |
              |             |
              #-------------#


## Network configuration

As said previously, em0 and em1 must be on different rdomains; this is easily done by adding rdomain 1 and rdomain 2 to the respective interface configurations.

Example in /etc/hostname.em0

rdomain 1
dhcp
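
Assuming the second link also uses DHCP, the counterpart /etc/hostname.em1 would then contain:

```
rdomain 2
dhcp
```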


## mlvpn installation

On OpenBSD the installation is as easy as pkg_add mlvpn (this should work starting from 6.7; earlier releases required patching).

## mlvpn configuration

Once the network configuration is done on the client, there are 3 steps left to get aggregation working:

1. mlvpn configuration on the server
2. mlvpn configuration on the client
3. activating NAT on the client

### Server configuration

On the server we will use the UDP ports 5080 and 5081.

Connection speeds must be defined in bytes per second so mlvpn can correctly balance the traffic over the links; this is really important.

The line bandwidth_upload = 1468006 is the maximum download bandwidth of the client on the given link, in bytes per second. If the link’s download speed is 1.4 MB/s, you can use the value 1.4*1024*1024 => 1468006.

The line bandwidth_download = 102400 is the maximum upload bandwidth of the client on the given link, in bytes per second. If the link’s upload speed is 100 kB/s, you can use the value 100*1024 => 102400.
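Since getting these byte values right matters, here is a small sketch of the conversion using plain shell arithmetic (the speeds are the example values above):

```shell
#!/bin/sh
# Convert the example link speeds into the byte values expected by
# mlvpn's bandwidth_upload / bandwidth_download settings.

# 1.4 MB/s download: 1.4 * 1024 * 1024 (as integers: 14 * 1024 * 1024 / 10)
bandwidth_upload=$(( 14 * 1024 * 1024 / 10 ))

# 100 kB/s upload: 100 * 1024
bandwidth_download=$(( 100 * 1024 ))

echo "bandwidth_upload   = $bandwidth_upload"
echo "bandwidth_download = $bandwidth_download"
```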

The password line must be a very long random string, it’s a shared secret between the client and the server.

# config you don't need to change
[general]
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
protocol = "tcp"
loglevel = 4
mode = "server"
tuntap = "tun"
interface_name = "tun0"
cleartext_data = 0
ip4 = "10.44.43.2/30"
ip4_gateway = "10.44.43.1"

# things you need to change
password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"

[dsl1]
bindhost = "1.2.3.4"
bindport = 5080
bandwidth_upload = 1468006
bandwidth_download = 102400

[dsl2]
bindhost = "1.2.3.4"
bindport = 5081
bandwidth_upload = 1468006
bandwidth_download = 102400


### Client configuration

The password value must match the one on the server, and the values of ip4 and ip4_gateway must be swapped compared to the server configuration (as they are in the following example).

The bindfib lines must correspond to the rdomain values of your interfaces.

# config you don't need to change
[general]
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
loglevel = 4
mode = "client"
tuntap = "tun"
interface_name = "tun0"
ip4 = "10.44.43.1/30"
ip4_gateway = "10.44.43.2"
timeout = 30
cleartext_data = 0

password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"

[dsl1]
remotehost = "1.2.3.4"
remoteport = 5080
bindfib = 1

[dsl2]
remotehost = "1.2.3.4"
remoteport = 5081
bindfib = 2


### NAT configuration (server side)

As with every VPN you must enable packet forwarding and create a pf rule for the NAT.

Enable forwarding

Add this line in /etc/sysctl.conf:

net.inet.ip.forwarding=1


You can enable it now with sysctl net.inet.ip.forwarding=1 instead of waiting for a reboot.

In pf.conf you must allow the UDP ports 5080 and 5081 on the public interface and enable NAT. This can be done with the following lines, but you should obviously adapt them to your configuration.

# allow NAT on VPN
pass in on tun0
pass out quick on em0 from 10.44.43.0/30 to any nat-to em0

# allow mlvpn to be reachable
pass in on egress inet proto udp from any to (egress) port 5080:5081


## Start mlvpn

On both server and client you can run mlvpn with rcctl:

rcctl enable mlvpn
rcctl start mlvpn


You should see a new tun0 device on both systems and be able to ping each end through tun0.

Now, on the client, you have to add a default route through the mlvpn tunnel with the command route add -net default 10.44.43.2 (adapt if you use other addresses). I still haven’t found how to automate this properly.

Your client should now use both WAN links and appear on the internet with the remote server’s public IP address.

mlvpn can aggregate more links; you only need to add new sections. mlvpn also supports IPv6, but I didn’t take the time to make it work, so if you are comfortable with IPv6 it may be easy to set up using the ip6 and ip6_gateway variables in mlvpn.conf.

# OpenBSD -current - Frequent asked questions

Written by Solène, on 27 March 2020.
Tags: #openbsd

Comments on Mastodon

Hello, as there are so many questions about OpenBSD -current on IRC, Mastodon or reddit, I’m writing this FAQ in the hope it will help people.

The official FAQ already contains answers about -current like Following -current and using snapshots and Building the system from sources.

## What is OpenBSD -current?

OpenBSD -current is the development version of OpenBSD. Lots of people use it for everyday tasks.

## How to install OpenBSD -current?

OpenBSD -current refers to the latest version built from the sources in CVS; however, it’s also possible to get a pre-built system (a snapshot), usually built and pushed to the mirrors every day or two.

You can install OpenBSD -current by getting an installation media as usual, but from the path /pub/OpenBSD/snapshots/ on the mirror.

## How do I upgrade from -release to -current?

There are two ways to do so:

1. Download the bsd.rd file from the snapshots directory and boot on it to upgrade, as for a -release to -release upgrade
2. Run the sysupgrade -s command as root; it basically downloads all the sets into /home/_sysupgrade and boots on bsd.rd with an autoinstall(8) configuration

## How do I upgrade my -current snapshot to a newer snapshot?

Exactly the same process as going from -release to -current.

No.

## What issues can I expect in OpenBSD -current?

There are a few possible issues one can expect:

### Out of sync packages

If a library gets updated in the base system, packages linked against it won’t be installable until they are rebuilt with the new library; this usually takes 1 to 3 days.

This only creates issues when you want to install a package you don’t already have.

The other way around, with an old snapshot packages may not be installable because the libraries they link against are newer than what your system provides; in this case you have to upgrade to a newer snapshot.

### Snapshots sets are getting updated on the mirror

If you download the sets from a mirror to update your -current system, you may get a sha256 mismatch. This happens while the mirror is being updated: the sha256 file is transferred first, so the sets you are downloading are not the ones the sha256 file refers to.

### Unexpected system breakage

Sometimes, very rarely (maybe 2 or 3 times a year?), a snapshot is borked and will prevent the system from booting or lead to regular crashes. In that case, it’s important to report the issue with the sendbug utility.

You can fix this by using an older snapshot from the archives server, and prevent it from happening by reading the bugs@ mailing list before updating.

### Broken package

Sometimes a package update breaks it, or breaks other packages. This is often quickly fixed for popular packages, but for some niche package you may be the only one using it on -current and the only one who can report the problem.

If you find breakage in something you use, it may be a good idea to report the problem to the ports@openbsd.org mailing list, if nobody did before. This way the issue gets fixed, and the next -release users will be able to install a working package.

## Is -current stable enough for a server or a workstation?

It’s really up to you. Developers all use -current and are not allowed to break it, so the system should be perfectly usable for everyday use.

What may be complicated on a server is keeping it updated regularly and facing issues that require troubleshooting (like a major database upgrade missing a quirk).

For a workstation I think it’s pretty safe, as long as you can deal with packages that can’t be installed until they are in sync.

# Advice for working remotely from home

Written by Solène, on 17 March 2020.
Tags: #life

Comments on Mastodon

Hello,

A few days ago, as someone who has been working remotely for 3 years, I published some tips to help new remote workers feel more comfortable in their new workplace: home.

I’ve been told I should publish it on my blog so the information is easier to share, so here it is.

• dedicate some space to your work area; if you use a laptop, try to dedicate a table corner to it, so you don’t have to clear away your “work station” all the time

• keep track of time: remember to drink and to stand up / walk every hour. You can set an hourly alarm as a reminder, or use software like http://www.workrave.org/ or https://github.com/hovancik/stretchly, which are very useful. If you are alone at home you may lose track of time, so this is important.

• don’t forget to keep your phone at hand if you use it to communicate with colleagues. Remember that they may only know your phone number, so it’s their only way to reach you

• keep some routine for lunch; you should eat properly and take the time to do so, and avoid eating in front of the computer

• don’t work too much after hours: do as you would at your workplace, leave work when you feel it’s time and shut down everything related to work. It’s a common trap to want to do more and keep an eye on mail; don’t fall into it.

• depending on your social skills, work field and colleagues, speak with others (phone, text whatever), it’s important to keep social links.

Here are some other tips from Jason Robinson:

• after work, distance yourself from the work time by taking a short walk outside, cooking, doing laundry, or anything that gets you away from the work area and cuts the flow.

• take at least one walk outside if possible during the day time to get fresh air.

• get a desk that can be adjusted for both standing and sitting.

I hope this advice will help you get through the crisis. Take care of yourselves.

# A day as an OpenBSD developer

Written by Solène, on 19 February 2020.
Tags: #life #openbsd

Comments on Mastodon

This is a little story that happened a few days ago; it illustrates well how I usually get involved in OpenBSD ports.

## 1 - Lurking into ports/graphics/

At first, I was looking at the various ports in the graphics category, searching for an image editor that would run correctly on my offline laptop. Grafx2 is laggy in zoom mode and GIMP won’t run, so I just opened ports randomly to read their pkg/DESCR files.

This way I often find gems I reuse later; sometimes I have less luck and I try 20 ports that are useless to me. And sometimes, browsing randomly like this, I find issues in ports…

## 2 - Find the port « comix »

Then, the second or third port I look at is « comix », here is the DESCR file.

Comix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer. It
reads images in ZIP, RAR or tar archives (also gzip or bzip2 compressed)
as well as plain image files.


That looked awesome. I have lots of books as PDFs that I want to read, but it’s not convenient in a “normal” PDF reader, so maybe comix would help!

## 3 - Using comix

Once comix was compiled (a mix of python and gtk), I started it and got errors when opening PDFs… I started it again from a console, and the output explained that PDF files are not supported by comix.

Then I read about CBZ and CBT files: they are archives (zip or tar) containing pictures, definitely not what a PDF is.

## 4 - mcomix > comix

After a few searches on the Internet, I found that the last comix release is from 2009 and it never supported PDF, so nothing wrong here; but I also found that comix had a fork named mcomix.

mcomix forked from comix a long time ago to fix issues and add support for new features (like PDF support). While its last release is from 2016, it works and still receives commits (the latest from late 2019). I’m going with mcomix!

## 5 - Installing mcomix from ports

The best way to install a program on OpenBSD is to make a port, so it’s correctly packaged, can be deinstalled, and can be submitted to the ports@ mailing list later.

I copied the comix folder to mcomix, used a brain-dead sed command to replace every occurrence of comix with mcomix, and it mostly worked! I won’t go into the little details, but I got mcomix working within a few minutes and I was quite happy! Fun fact: the comix port Makefile mentioned mcomix as a suggested upgrade.

## 6 - Enjoying a CBR reader

With mcomix installed I was able to read some PDFs; it was a good experience and I was pretty happy with it. I spent a few hours reading shortly after mcomix was installed.

## 7 - mcomix works but not all the time

After reading two long PDFs, I hit issues with the third: some pages were not rendered and not displayed. Digging into the issue, I learned about mcomix internals. PDF reading works by rendering every page of the PDF with the mutool binary from the mupdf software, which is quite CPU intensive; for some reason the command execution fails inside mcomix, while I can run the exact same command a hundred times by hand with no failure. Worse, the issue is not deterministic: sometimes some pages fail to render, sometimes not!

## 8 - Time to debug some python

I really wanted to read those PDFs, so I took my favorite editor and started debugging some python, adding more debug output (mcomix has a -W parameter to enable debug output, which is very nice), to try to understand why it fails to get the output of a working command.

Sadly, my python foo is too weak and I wasn’t able to pinpoint the issue. I just found that it fails, sometimes, without understanding why.

## 9 - mcomix on PowerPC

While mcomix is clunky with PDFs, I wanted to check if it worked on PowerPC. It took some time to get all the dependencies installed on my old computer, but finally I got mcomix displayed on the screen… and dying on PDF loading! The crash seems related to GTK, and I don’t want to touch that; nobody will want to patch GTK for this anyway, so I lost hope there.

## 10 - Looking for alternative

Once I knew about mcomix, I could search the Internet for alternatives to it and for CBR readers in general. A program named zathura seems well known, and we have it in the OpenBSD ports tree.

The weird thing is that it comes with two different PDF plugins, one named mupdf and the other poppler. I tried it quickly on my amd64 machine and zathura worked.

## 11 - Zathura on PowerPC

As zathura worked nicely on my main computer, I installed it on the PowerPC, first with the poppler plugin. I was able to view PDFs, but installing this plugin pulled in so many package dependencies that it was a bit sad. I deinstalled the poppler plugin and installed the mupdf one.

I opened a PDF and… error. I tried again, starting zathura from the terminal, and got the message that PDF is not a supported format, with a lot of lines about the mupdf.so file not being usable. The mupdf plugin works on amd64 but not on powerpc; this is a bug I need to report. I don’t understand why it happens, but it’s there.

## 12 - Back to square one

It seems reading PDFs is a mess, so why couldn’t I convert the PDFs to CBT files, use any CBT reader out there, and not have to deal with that PDF madness?!

## 13 - Use big calibre for the job

I found on the Internet that Calibre is the most used tool to convert a PDF into CBT files (or into something else, but I don’t care here). I installed calibre, which is not lightweight, started it and wanted to change the default library path: the software hung when displaying the file dialog. That won’t stop me; I restarted calibre and kept the default path, clicked on « Add a book », and it hung again on the file dialog. I reported this issue to the ports@ mailing list, but it isn’t solved yet, which means calibre is not usable.

## 14 - Using the command line

After all, CBT files are images in a tar file; it should be easy to reproduce the mcomix process, using mutool to render the pictures and making a tar of them.

IT WORKED.

I found two ways to proceed, one is extremely fast but may not make pages in the correct order, the second requires CPU time.

#### Making CBT files - easiest process

The first way is super easy. It requires mutool (from the mupdf package) and extracts the embedded pictures from the PDF, given it’s not a vector PDF; I’m not sure what would happen with those. The catch is that the embedded pictures have names (numbers, from the few examples I’ve seen) which are not necessarily in the correct page order; I guess this depends on how the PDF was made.

$ mutool extract The_PDF_file.pdf
$ tar cvf The_PDF_file.tar *jpg


That’s all you need to get your CBT file. My PDF contained jpg files, but it may be png in others, I’m not sure.

#### Making CBT files - safest process (slow)

The other way of making pictures out of the PDF is the one used by mcomix: call mutool to render each page as a PNG file with the width/height/DPI you want. That’s the tricky part: you don’t want to produce pictures with a larger resolution than the original pages (and mutool won’t automatically help you here), because you won’t get any benefit. The same goes for the DPI. I think this could be automated with a proper script that checks each PDF page’s resolution and asks mutool to render the page at exactly that resolution.

As a rule of thumb, it seems that rendering using the same width as your screen is enough to produce picture of the correct size. If you use large values, it’s not really an issue, but it will create bigger files and take more time for rendering.

$ mutool draw -w 1920 -o page%d.png The_PDF_file.pdf
$ tar cvf The_PDF_file.tar page*.png


You will get a PNG file for each page, correctly numbered, with a width of 1920 pixels. Note that instead of tar you can use zip to create a zip file.
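To glue the two steps together, a minimal wrapper function could look like this (pdf_to_cbt is my own name for it; it assumes mutool from the mupdf package is installed):

```shell
#!/bin/sh
# Render each page of a PDF at 1920px width and pack the pages into a
# .cbt (tar of images) next to the original file.
pdf_to_cbt() {
    pdf=$1
    base=${pdf%.pdf}               # strip the .pdf extension
    mutool draw -w 1920 -o "${base}-page%d.png" "$pdf" &&
    tar cvf "${base}.cbt" "${base}"-page*.png
}

# usage: pdf_to_cbt The_PDF_file.pdf
```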

## 15 - Finally reading books again

After all this LONG process, I am finally able to read my PDFs with any CBR reader out there (even on a phone), and once the conversion is done, viewing the files uses no CPU, unlike mcomix which renders all the pages when you open a file.

I have to use zathura on PowerPC, even if I like it less due to the continuous page display (which can’t be turned off), but mcomix definitely works great when not dealing with PDFs. I’m still unsure it’s worth committing mcomix to the ports tree if it randomly fails on random pages of PDFs.

## 16 - Being an open source activist is exhausting

All I wanted was to read a PDF book with a warm cup of tea at hand. It ended in learning new things, debugging code, making ports, submitting bugs and writing a story about all of this.

# Daily life with the offline laptop

Written by Solène, on 18 February 2020.
Tags: #life #disconnected

Comments on Mastodon

Last year I wrote a long blog post about an offline laptop attempt. It kind of worked, but I wasn’t really happy with the setup, the needs and the goals.

So, it is back, I use it now, and I am very happy with it. This article explains my experience solving my needs; I would appreciate not receiving advice or judgments here.

## State of the need

### Internet is infinite, my time is not

Having access to the Internet is a gift: I can access anything and anyone. But this comes with drawbacks. I can waste my time on anything, which is not particularly helpful. There is so much content that I only scratch the surface of things, knowing they will still be there when I need them, and jump to something else. The amount of data is impressive; one human can’t absorb that much, and we have to deal with it.

I used to spend time on what I had; now I just spend time on what exists. An example of this: instead of reading the books I own, I look for which book I may want to read someday, and meanwhile no books get read.

### Network socialization requires time

I say “network socialization” to avoid the easy “social network” phrase. I speak with people on IRC (in real time most of the time), I help people on reddit, and I read and write mail most of the time for OpenBSD development.

Don’t get me wrong, I am happy doing this, but I keep an eye on each of them, trying to help people as soon as they ask a question, and this is really time consuming. I spend a lot of time jumping from one thing to another to keep myself updated on everything, and I end up too distracted to do anything.

In my first attempt at the offline laptop, I wanted my mail on it, but it was too painful to download everything and keep the mailboxes in sync. Sending emails would have required network access too; it wouldn’t have been an offline laptop anymore.

### IT as a living and as a hobby

On top of this, I work in IT, so I spend my days doing things over the Internet, and after work I spend my time on open source projects. I can’t really disconnect from the Internet for either.

## How I solved this

The first step was to define « What do I like to do? », and I came up with this short list:

• reading
• listening to music
• playing video games
• writing things
• learning things

One could say I don’t need a computer to read books, but I have lots of ebooks and PDFs about many subjects. The key is to load everything you need onto the computer, because otherwise it’s tempting to connect the device to the Internet for a bit of this or that.

I use a very old computer with a PowerPC CPU (1.3 GHz single core) and 512MB of ram. I like that old computer, and a slower computer forbids doing multiple things at the same time, which helps me keep focused.

### Reading files

For reading, I found zathura and comix (and its fork mcomix) very useful for reading huge PDFs; the scrolling customization makes these tools pleasant to use.

### Listening to music

I buy my music as FLAC files and download them; this doesn’t require any internet access except at purchase time, so nothing special there. I use the moc player, which is easy to use, has a lot of features and supports FLAC (on powerpc).

### Video games

Emulation is a nice way to play lots of games on OpenBSD; on my old computer it works up to game boy advance / super nes / megadrive, which should let me replay lots of games I own.

We also have a lot of nice games in ports, but my computer is too slow to run them or they won’t work on powerpc.

### Encyclopedia - Wikipedia

I’ve set up a local wikipedia replica, as I explained in a previous article, so anytime I need to look something up, I can ask my local wikipedia. It’s always available. This is the best solution I found for a local encyclopedia, and it works well.

### Writing things

Since I started the offline computer experience, I started a diary. I never felt the need to do so but I wanted to give it a try. I have to admit summing up what I achieved in the day before going to bed is a satisfying experience and now I continue to update it.

You can use any text editor you want; there are dedicated programs with specific features, like rednotebook or lifeograph, which support embedded pictures or on-the-fly markdown rendering. But a text file and your favorite editor also do the job.

I also write some articles for this blog. It’s easy to do, as articles are text files in a git repository. When I finish one and need to publish it, I get network access and push the changes to the connected computer, which does the publishing job.

## Technical details

I will go fast on this. My set up is an old Apple Powerbook G4 with a 1024x768 screen (I love that 4:3 ratio) running OpenBSD.

The system firewall pf is configured to prevent any incoming connections, and to only allow outgoing TCP to port 22, because when I need to copy files, I use ssh / sftp. The /home partition is encrypted using the softraid crypto device; full disk encryption is not supported on powerpc.
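A minimal pf.conf sketch of such a policy (my illustration here, not the exact ruleset of this laptop) could be:

```pf
# deny everything by default, including all incoming connections
block all
# allow only outgoing TCP to port 22, for ssh / sftp file copies
pass out proto tcp to any port 22
```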

The experience is even more enjoyable with a warm cup of tea on hand.

# Cycling / bike trips and opensource

Written by Solène, on 06 February 2020.
Tags: #biking

Comments on Mastodon

# Introduction

I started cycling seriously a few months ago; as I love having statistics, I needed a way to gather some. I found a lot of devices on the market, but I preferred using opensource tools and not relying on any vendor.

The best option for me was reusing a 6 year old smartphone whose SIM card bus is broken: that phone loses the sim card when it is shaken a little and requires a reboot to find it again. I am happy I found a way to reuse it.

Tip: turn ON airplane mode on the smartphone while riding; even without a SIM card it will keep trying to find a network, which drains the battery and emits useless radio waves. In case of emergency, just disable airplane mode to get access to your local emergency call number. GPS is a passive receiver and doesn’t require any network.

This smartphone has a GPS receiver, which is enough for recording my position as often as I want. Using a suitable GPS application from the F-droid store and a program for sftp transfers, I can record data and transfer it easily to my computer.

The most common file format for recording GPS positions is the GPX format: a simple XML file containing all positions with their timestamps, sometimes with a bit more information like the speed at that moment. But given you have all the positions, software can calculate the speed between each pair of positions.
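To give an idea of the format, here is a minimal hand-written GPX file (the coordinates and times are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="example">
  <trk>
    <name>Morning ride</name>
    <trkseg>
      <!-- each track point: latitude, longitude, optional elevation and timestamp -->
      <trkpt lat="47.2180" lon="-1.5528">
        <ele>12.0</ele>
        <time>2020-02-06T08:00:00Z</time>
      </trkpt>
      <trkpt lat="47.2190" lon="-1.5540">
        <ele>13.5</ele>
        <time>2020-02-06T08:00:10Z</time>
      </trkpt>
    </trkseg>
  </trk>
</gpx>
```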

# Android GPS Software

It seems GPS software for recording GPX tracks is becoming popular, and in the last months a lot of new applications appeared, which is a good thing. I didn’t test all of them, but they tend to be easier to use and more minimalistic.

## OpenStreetMap app - OSMand~

You can install it from F-droid, an alternative store for Android with only opensource software; there you get the full free (and opensource) version, compared to the one you can find on the Android store.

This is the official OpenStreetMap application. It’s full of features and quite heavy: you can download maps for navigation, record tracks, view track statistics, contribute to OSM, get Wikipedia information for an area, all of this while being OFFLINE. I don’t use it only on my bike; I use it all the time while walking or in my car.

Recorded GPX can be found in the default path Android/data/net.osmand.plus/files/tracks/rec/

## Trekarta

I found another application named Trekarta which is a lot lighter than OSMand and only focuses on recording your tracks. I would recommend it if you don’t want any other feature, or have a really old android-compatible phone or little disk space.

# Analyzing GPX files / keep track of everything

I found Turtlesport, an opensource software written in Java whose last release was years ago but which still works out of the box, given you have a java implementation installed. You can find it at the following link.

/usr/local/bin/jdk-1.8.0/bin/java -jar turtlesport.jar


Turtlesport is a nice tool for viewing tracks. It’s not only for cycling and can be used for various sports; the process is the following:

• define sports you do (bike, skateboard, hiking etc..)
• define equipments you use (bike, sport shoes, skis etc..)
• import GPX files and tell Turtlesport which sport and equipment it’s related to

Then, for each GPX file, you will be able to see it on a map, with the elevation and speed of that track; but you can also make statistics per sport or equipment, like “How many km did I ride with that bike over the last year, per week”.

If you don’t have a GPX file, you can still add a new trip into the database by drawing the path on a map.

In the equipments view, you will see how many kilometers each was used for, with an alert feature if the equipment goes beyond a defined wear limit. I’m not sure about the use of this; maybe you want to know your shoes shouldn’t be used for more than 2000 km? Maybe it’s possible to use it for maintenance purposes: say your bike has a wear limit of 1000 km, when you reach it you get an alert, do your maintenance and set the new limit to 2000 km.

# Viewing GPX files

From OpenBSD 6.7 you can install the package gpxsee to open multiple GPX files: they will be shown on a map, each track with a different colour, with nice charts displaying the elevation or speed over the trip for every track.

Before gpxsee I was using the GIS (Geographical Information System) tool qgis, but it is really heavy and complicated. Still, if you want to work on your recorded data, like doing complex statistics, it’s a powerful tool if you know how to use it.

I like to use it for gamification purposes: I’m trying to ride every road around my home, and viewing all the GPX files at the same time allows me to plan the next trip somewhere I never went.

# Miscellaneous

## Create an unique GPX file from all records

It is possible to merge GPX files into one giant file using gpsbabel. I was using this before having gpxsee, but I have no idea what you can do with the result; it creates one big spaghetti track. I keep the command here in case it’s useful for someone one day:

gpsbabel -s -r -t -i GPX $(ls /path/to/files/*gpx | awk '{ printf "-f %s ",$1 }') -o GPX -F - > sum.gpx


## Cycling using electronic devices

Of course, if you are a true racing cyclist, GPX files will not be enough for you: you will certainly want devices such as a power meter or a cadence meter, and an on-board device to use them. I can’t help much with hardware.

However, you may want to give Golden Cheetah a try, to import all your data from various devices and make complex statistics from it. I tried it, and I had no idea about the purpose of 90% of its features.

## Have fun

Don’t forget to have fun, and do not get obsessed by the numbers!

# Common LISP awk macro for easy text file operations

Written by Solène, on 04 February 2020.
Tags: #awk #lisp

Comments on Mastodon

I like Common LISP and I also like awk. Dealing with text files in Common LISP is often painful, so I wrote a small awk-like Common Lisp macro, which helps a lot when dealing with text files.

Here is the implementation. I used the uiop package for its split-string function; it comes with sbcl. But it’s possible to write your own split-string, or reuse the infamous split-str function shared on the Internet.

(defmacro awk(file separator &body code)
  "allow running code for each line of a text file,
giving access to NF and NR variables, and also to
fields list containing fields, and line containing $0"
  `(progn
     (let ((stream (open ,file :if-does-not-exist nil)))
       (when stream
         (loop for line = (read-line stream nil)
               counting t into NR
               while line
               do
               (let* ((fields (uiop:split-string line :separator ,separator))
                      (NF (length fields)))
                 ,@code))))))


It's interesting that the "do" in the loop could be replaced with a "collect", allowing reuse of the awk output as a list in another function. A quick example I have in mind is this:

;; equivalent of awk '{ print NF }' file | sort | uniq
;; for counting how many different field counts the lines have
(uniq (sort (awk "file" " " NF)))


Now, here are a few examples of usage of this macro; I've written the original awk command in the comments for comparison:

;; numbering lines of a text file with NR
;; awk '{ print NR": "$0 }' file.txt
;;
(awk "file.txt" " "
  (format t "~a: ~a~%" NR line))

;; display NF-1 field (yes it's -2 in the example because -1 is last field in the list)
;; awk -F ';' '{ print NF-1 }' file.csv
;;
(awk "file.csv" ";"
(print (nth (- NF 2) fields)))

;; filtering lines (like grep)
;; awk '/unbound/ { print }' /var/log/messages
;;
(awk "/var/log/messages" " "
(when (search "unbound" line)
(print line)))

;; printing the 4th field
;; awk -F ';' '{ print $4 }' data.csv
;;
(awk "data.csv" ";"
  (print (nth 3 fields)))


# Using the OpenBSD ports tree with dedicated users

Written by Solène, on 11 January 2020.
Tags: #openbsd

Comments on Mastodon

If you want to contribute to the OpenBSD ports collection, you will want to enable the PORTS_PRIVSEP feature. When this variable is set, the ports system will use dedicated users for its tasks: source tarballs will be downloaded by the user _pfetch, and all compilation and packaging will be done by the user _pbuild.

Those users are created at system install time, and pf has a default rule preventing the _pbuild user from accessing the network. This prevents ports from doing network operations during build, and this is what you want: it adds a lot of security to the porting process, and any malicious code run while compiling a port will be harmless.

In order to enable this feature, a few changes must be made. The file /etc/mk.conf must contain:

PORTS_PRIVSEP=yes
SUDO=doas


Then, /etc/doas.conf must allow your user to become _pfetch and _pbuild:

permit keepenv nopass solene as _pbuild
permit keepenv nopass solene as _pfetch
permit keepenv nopass solene as root


If you don’t want to use the last line, there is an explanation in the bsd.port.mk(5) man page.

Finally, within the ports tree, some permissions must be changed:

# chown -R _pfetch:_pfetch /usr/ports/distfiles
# chown -R _pbuild:_pbuild /usr/ports/{packages,plist,pobj,bulk}


If the directories don’t exist yet on your system (this is the case on a fresh ports checkout / untar), you can create them with the commands:

# install -d -o _pfetch -g _pfetch /usr/ports/distfiles
# install -d -o _pbuild -g _pbuild /usr/ports/{packages,plist,pobj,bulk}


Now, when you run a command in the ports tree, privileges should be dropped to the according users.

# Using rsnapshot for easy backups

Written by Solène, on 10 January 2020.
Tags: #openbsd

Comments on Mastodon

## Introduction

rsnapshot is a handy tool to manage backups using rsync and hard links on the filesystem. rsnapshot will copy folders and files, but it skips duplication across backups by using hard links for files which have not changed. This creates snapshot-like copies of the folders you want to back up, using only rsync; it’s very efficient and easy to use, and retrieving files from backups is really simple as they are stored as plain files under the rsnapshot backup directory.

## Installation

Installing rsnapshot is very easy: on most systems it will be in your official package repository. To install it on OpenBSD (as root):

pkg_add rsnapshot


## Configuration

Now you may want to configure it. On OpenBSD you will find a template in /etc/rsnapshot.conf that you can edit for your needs (you can make a backup of it first if you want to start over). As stated in big letters (as big as they can be displayed in a terminal) at the top of the sample configuration file, fields must be separated by TABS and not spaces. I’ve made the mistake more than once, don’t forget to use tabs.

I won’t explain all the options, only the most important ones.

The variable snapshot_root is where you want to store the backups. Don’t put that directory inside a directory you will back up (that would end in an infinite loop).

The variable backup tells rsnapshot what you want to back up from your system, and to which directory inside snapshot_root.

Here are a few examples:

backup	/home/solene/	myfiles/
backup	/home/shera/Documents	shera_files/
backup	/home/shera/Music	shera_files/
backup	/etc/	etc/
backup	/var/	var/	exclude=logs/*


Be careful with trailing slashes in paths, they work the same as with rsync: /home/solene/ means the target directory will contain the content of /home/solene/, while /home/solene will copy the folder solene itself into the target directory, so you end up with target_directory/solene/the_files_here.
The retain variables are very important, they define how rsnapshot keeps your data. In the example you will see alpha, beta, gamma, but it could be hour, day, week, or foo and bar. It’s only a name that rsnapshot will use to name your backups, and that you will use to tell rsnapshot which kind of backup to do. Now, I must explain how rsnapshot actually works.

## How it works

Let’s go for a straightforward configuration. We want a backup every hour for the last 24 hours, a backup every day for the past 7 days, and 3 manual backups that we start by hand. We will have this in our rsnapshot configuration:

retain	hourly	24
retain	daily	7
retain	manual	3


But how does rsnapshot know when to do what? The answer is that it doesn’t. In the root user crontab, you will have to add something like this:

# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly

# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily


And then, when you want to do a manual backup, just run rsnapshot manual.

Every time you run rsnapshot for a “kind” of backup, the latest version will be named in the rsnapshot root directory like hourly.0, and every older backup of that kind is shifted by one. A directory getting a number higher than the number in the retain line is deleted.

## New to crontab?

If you never used crontab, I will share two important things to know about it:

• Use MAILTO="" if you don’t want to receive every output generated from scripts started by cron.
• Use a PATH containing /usr/local/bin/, because it is not present in the default cron PATH. Instead of setting PATH, you can also use full binary paths in the crontab, like /usr/local/bin/rsnapshot daily.

You can edit the current user crontab with the command crontab -e.
Your crontab may then look like:

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin
MAILTO=""

# comments are allowed in crontab
# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly

# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily


# Crop a video using ffmpeg

Written by Solène, on 20 December 2019.
Tags: #ffmpeg

Comments on Mastodon

If you ever need to crop a video, which means reducing the video to a rectangular area of it and trimming away the parts you don’t want, this is possible with ffmpeg using the video filter crop.

To make the example more readable, I replaced values with variable names:

• WIDTH = width of the output video
• HEIGHT = height of the output video
• START_LEFT = horizontal position of the area, relative to the left edge (left being 0)
• START_TOP = vertical position of the area, relative to the top edge (top being 0)

So the actual command looks like:

ffmpeg -i input_video.mp4 -filter:v "crop=$WIDTH:$HEIGHT:$START_LEFT:$START_TOP" output_video.mp4


If you want to crop the video to get a 320x240 video starting at the top-left position 500,100, the command would be:

ffmpeg -i input_video.mp4 -filter:v "crop=320:240:500:100" output_video.mp4


# Separate or merge audio and video using ffmpeg

Written by Solène, on 20 December 2019.
Tags: #ffmpeg

Comments on Mastodon

# Extract audio and video (separation)

If for some reason you want to separate the audio and the video from a file, you can use these commands:

ffmpeg -i input_file.flv -vn -acodec copy audio.aac
ffmpeg -i input_file.flv -an -vcodec copy video.mp4


Short explanation:

• -vn means video null, so you discard the video
• -an means audio null, so you discard the audio
• codec copy means the output keeps the original format from the file. If the audio is mp3, then the output file will be mp3 whatever extension you choose.
Instead of using codec copy you can choose a different codec for the extracted file, but copy is a good choice: it performs really fast because you don’t re-encode, and it is lossless. I use this to rework the audio with audacity.

# Merge audio and video into a single file (merge)

After you reworked the tracks (audio and/or video) of your file, you can combine them into a single file:

ffmpeg -i input_audio.aac -i input_video.mp4 -acodec copy -vcodec copy -f flv merged_video.flv


# Playing CrossCode within a web browser

Written by Solène, on 09 December 2019.
Tags: #gaming #openbsd #openindiana

Comments on Mastodon

Good news for my gamer readers. It’s not really fresh news, but it has never been written anywhere: the commercial video game CrossCode is written in HTML5, making it available on every system with chromium or firefox. The limitation is that it may not support gamepads (unless you find a way to make them work).

A demo is downloadable at this address https://radicalfishgames.itch.io/crosscode and should work using the following instructions. You need to buy the game to be able to play the full version; it’s not free and not opensource.

Once you bought it, the process is easy:

1. Download the linux installer from GOG (the steam one may work too)
2. Extract the data
3. Patch a file if you want to use firefox
4. Serve the files through a http server

The first step is to buy the game and get the installer. Once you get a file named like “crosscode_1_2_0_4_32613.sh”, run unzip on it: it’s a shell script, but also a self-contained archive that can extract itself using the small shell script at the top.
Change directory into data/noarch/game/assets and apply this patch. If you don’t know how to apply a patch or don’t want to, you only need to remove/comment the part you can see removed in the following patch:

--- node-webkit.html.orig	Mon Dec  9 17:27:17 2019
+++ node-webkit.html	Mon Dec  9 17:27:39 2019
@@ -51,12 +51,12 @@
 <script type="text/javascript">
 	// make sure we don't let node-webkit show it's error page
 	// TODO for release mode, there should be an option to write to a file or something.
-	window['process'].once('uncaughtException', function() {
+/*	window['process'].once('uncaughtException', function() {
 		var win = require('nw.gui').Window.get();
 		if(!(win.isDevToolsOpen && win.isDevToolsOpen())) {
 			win.showDevTools && win.showDevTools();
 		}
-	});
+	});*/
 
 	function doStartCrossCodePlz(){
 		if(window.startCrossCode){


Then you need to start a http server in the current path. An easy way to do it is using… php! Because php contains a http server, you can start it with the following command:

$ php -S 127.0.0.1:8080


Now, you can play the game by opening http://localhost:8080/node-webkit.html

I really thank Thomas Frohwein aka thfr@ for finding this out!

Tested on OpenBSD and OpenIndiana, it works fine on an Intel Core 2 Duo T9400 (CPU from 2008).

# Host your own wikipedia backup

Written by Solène, on 13 November 2019.
Tags: #openbsd #wikipedia #life

Comments on Mastodon

## Wikipedia and openzim

If you ever wanted to host your own wikipedia replica, here is the simplest way.

As wikipedia is REALLY huge, you don’t really want to host the php wikimedia software and load the huge database; instead, the project made the openzim format to compress the huge database that wikipedia became, while still allowing fast searches in it.

Sadly, on OpenBSD we have no software reading zim files, and most software requires the openzim library, which requires extra work to get packaged on OpenBSD.

Fortunately, there is a python package implementing everything needed, in pure python, to serve zim files over http, and it’s easy to install.

This tutorial should work on all other unix-like systems, but package or binary names may change.

## Downloading wikipedia

The Kiwix project is responsible for the wikipedia files: they regularly create files from various projects (including stackexchange, gutenberg, wikibooks etc…), but for this tutorial we want wikipedia: https://wiki.kiwix.org/wiki/Content_in_all_languages

You will find a lot of files; the language is contained in the filename. Filenames also tell whether they contain everything or just some categories, and whether they include pictures or not.

The full French file weighs 31.4 GB.

## Running the server

For the next steps, I recommend setting up a new user dedicated to this.

On OpenBSD, we will require python3 and pip:

$ doas pkg_add py3-pip--


Then we can use pip to fetch and install the dependencies for the zimply software. The flag --user is rather important: it allows any user to download and install python libraries in its home folder, instead of polluting the whole system as root.

$ pip3.7 install --user --upgrade zimply


I wrote a small script to start the server using the zim file as a parameter; I rarely write python, so the script may not meet high standards.

File server.py:

from zimply import ZIMServer
import sys
import os.path

if len(sys.argv) == 1:
    print("usage: " + sys.argv[0] + " file")
    exit(1)

if os.path.exists(sys.argv[1]):
    ZIMServer(sys.argv[1])
else:
    print("Can't find file " + sys.argv[1])


And then you can start the server using the command:

$ python3.7 server.py /path/to/wikipedia_fr_all_maxi_2019-08.zim


You will be able to access wikipedia at the url http://localhost:9454/

Note that this is not a “wiki”: you can’t see the history or edit/create pages. This kind of backup is used in places like Cuba or parts of Africa where people don’t have unlimited internet access; the project led by Kiwix allows more people to access knowledge.

# Creating new users dedicated to processes

Written by Solène, on 12 November 2019.
Tags: #openbsd

Comments on Mastodon

## What is this article about?

For some time I have wanted to share how I manage my personal laptop and systems. I got the habit of creating a lot of users, one for just about everything, for security reasons. Creating a new user is fast, I can connect as this user using doas or ssh -X if I need an X app, and this prevents some code from stealing data from my main account.

Maybe I went too far down this path: I have a dedicated irssi user which is only for running irssi, same with mutt. I also have a user with a silly name that I use for testing X apps, whose home directory I can wipe at will (to try fresh firefox profiles in case of a ports update, for example).

## How to proceed?

Creating a new user is as easy as these commands (as root):

# useradd -m newuser
# echo "permit nopass keepenv solene as newuser" >> /etc/doas.conf


Then, from my main user, I can do:

$ doas -u newuser 'mutt'


and it will run mutt as this user.

This way, I can easily manage lots of services from packages which don’t come with dedicated daemon users.

For this to be effective, it’s important to have chmod 700 on your main user’s home directory, so other users can’t browse your files.
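As a quick sketch of that permission change, using $HOME to stay generic:

```shell
# tighten your own home directory so other local users cannot browse it
chmod 700 "$HOME"
# verify: the permission string should show drwx------
ls -ld "$HOME" | cut -c1-10
```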

## Graphicals software with dedicated users

It becomes more tricky for graphical software. There are two options:

• allow another user to use your X session: it has native performance, but in case of a security issue in the software, your whole X session is accessible (recording keys, taking screenshots etc…)
• run the software through ssh -X: this restricts X access for the software, but the rendering will be a bit sluggish and not suitable for some uses.

Example of using ssh -X compared to ssh -Y:

$ ssh -X foobar@localhost scrot
X Error of failed request:  BadAccess (attempt to access private resource denied)
  Major opcode of failed request:  104 (X_Bell)
  Serial number of failed request:  6
  Current serial number in output stream:  8

$ ssh -Y foobar@localhost scrot
(nothing output but it made a screenshot of the whole X area)


## Real world example

On a server I have the following new users running:

• torrents
• idlerpg
• searx
• znc
• minetest
• quake server
• awk cron parsing http

They can have their own crontabs, too.

Maybe I use it too much, but it’s fine to me.

# How to remove a part of a video using ffmpeg

Written by Solène, on 02 October 2019.
Tags: #ffmpeg

Comments on Mastodon

If you want to remove parts of a video, you have to cut it into pieces and then merge the pieces, so you can avoid parts you don’t want.

The commands are not obvious at all (as usual with ffmpeg); I pieced them together from different corners of the Internet.

First, split the video in parts; we want to keep 00:00:00 to 00:30:00 and 00:35:00 to 00:45:00:

ffmpeg -i source_file.mp4 -ss 00:00:00 -t 00:30:00 -acodec copy -vcodec copy part1.mp4
ffmpeg -i source_file.mp4 -ss 00:35:00 -t 00:10:00 -acodec copy -vcodec copy part2.mp4


The -ss parameter tells ffmpeg where to start in the video, and the -t parameter gives the duration of the extract.

Then, merge the files into one file:

printf "file %s\n" part1.mp4 part2.mp4 > file_list.txt
ffmpeg -f concat -i file_list.txt -c copy result.mp4


Instead of using printf, you can write the list of files into file_list.txt like this:

file /path/to/test1.mp4
file /path/to/test2.mp4


# GPG2 cheatsheet

Written by Solène, on 06 September 2019.
Tags: #security

Comments on Mastodon

## Introduction

I don’t use gpg a lot, but it seems to be the only tool out there for encrypting data which “works” and is widely used.

So this is my personal cheatsheet for everyday use of gpg.

In this post, I use the command gpg2, which is the binary of GPG version 2. On your system, the “gpg” command could be gpg2 or gpg1. You can use gpg --version if you want to check the real version behind the gpg binary.

In your ~/.profile file you may need the following line:

export GPG_TTY=$(tty)


## Install GPG

The real name of GPG is GnuPG, so depending on your system the package can be named gpg2, gpg, gnupg, gnupg2 etc…

On OpenBSD, you can install it with:

pkg_add gnupg--%gnupg2


## GPG Principle using private/public keys

• YOU make a private and a public key (associated with a mail address)
• YOU give the public key to people
• PEOPLE import your public key into their keyring
• PEOPLE use your public key from the keyring
• YOU will need your password every time

I think gpg can do much more, but read the manual for that :)

## Initialization

We need to create a public and a private key.

solene$ gpg2 --gen-key
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg2 --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.


In this part, you should put your real name and your email address, and validate with “O” if you are okay with the input. You will be asked for a passphrase afterwards.

Real name: Solene
Email address: solene@domain.example
You selected this USER-ID:
"Solene <solene@domain.example>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 368E580748D5CA75 marked as ultimately trusted
gpg: revocation certificate stored as '/home/solene/.gnupg/openpgp-revocs.d/7914C6A7439EADA52643933B368E580748D5CA75.rev'
public and secret key created and signed.

pub   rsa2048 2019-09-06 [SC] [expires: 2021-09-05]
7914C6A7439EADA52643933B368E580748D5CA75
uid                    Solene <solene@domain.example>
sub   rsa2048 2019-09-06 [E] [expires: 2021-09-05]


The key will expire in 2 years, but this is okay. It is a good thing: if you stop using the key, it will die silently at its expiration time. If you still use it, you will be able to extend the expiration time, and people will be able to notice you still use that key.

## Export the public key

If someone asks your GPG key, this is what they want:

gpg2 --armor --export solene@domain.example > solene.asc


## Import a public key

Import the public key:

gpg2 --import solene.asc


## Delete a public key

In case someone changes their public key, you will want to delete the old one before importing the new one. Replace $FINGERPRINT with the actual fingerprint of the public key:

gpg2 --delete-keys $FINGERPRINT


## Encrypt a file for someone

If you want to send the file picture.jpg to remote@domain.example, use the command:

gpg2 --encrypt --recipient remote@domain.example picture.jpg > picture.jpg.gpg


You can now send picture.jpg.gpg to the recipient, who will be able to read the file with their private key.

You can use the --armor parameter to make the output plaintext, so you can put it into a mail or a text file.

## Decrypt a file

Easy!

gpg2 --decrypt image.jpg.gpg > image.jpg


## Get public key fingerprint

The fingerprint is a short string derived from your public key; it can be embedded in a mail (often as a signature) or published anywhere.

It allows comparing a public key you received from someone with the fingerprint that you may find in mailing list archives, on twitter, on a html page etc., if the person spread it somewhere. This allows cross-checking the authenticity of the public key you received.

It looks like:

4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909


This is my real key fingerprint, so if I send you my public key, you can use the fingerprint from this page to check it matches the key you received!

You can obtain your fingerprint using the following command:

solene@t480 ~ $ gpg2 --fingerprint
pub   rsa4096 2018-06-08 [SC]
      4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909
uid          [ ultime ] XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sub   rsa4096 2018-06-08 [E]


## Add a new mail / identity

If for some reason you need to add another mail address to your GPG key (like personal/work keys), you can create a new identity with the new mail address.

Type gpg2 --edit-key solene@domain.example, then at the prompt type adduid and answer the questions.

You can now export the public key with a different identity.

## List known keys

If you want to get the list of keys you imported, you can use:

gpg2 -k


## Testing

If you want to do some tests, I’d recommend making new users on your system, exchanging their keys and trying to encrypt a message from one user to another. I have a few spare users on my system which I can ssh into locally for various tests; it is always useful.

# BitreichCON 2019 talks available

Written by Solène, on 27 August 2019.
Tags: #unix #drist #automation #awk

Comments on Mastodon

Earlier in August 2019 happened BitreichCON 2019. There were awesome talks there during two days, and there are two I would like to share. You can find all the information about this event at the following address, using the Gopher protocol:

gopher://bitreich.org/1/con/2019

BrCON talks happen through an audio stream, an ssh session for viewing the current slide, and IRC for questions. I have the markdown files producing the slides (1 title = 1 slide) and the audio recordings.

## Simple solutions

This is a talk I made for this conference. It is about using simple solutions for most problems. Simple solutions come with simple tools, unix tools. I explain with real life examples, like how to retrieve my blog articles’ titles from the website using curl, grep, tr or awk.

Link to the audio

Link to the slides

## Experiences with drist

Another talk, from Parazyd, about my deployment tool drist, so I feel obligated to share it with you.
In his talk he makes a comparison with slack (the debian package, not the online communication service), explains his workflow with drist and how it saves his precious time.

Link to the audio

Link to the slides

### About the bitreich community

If you want to know more about the bitreich community, check gopher://bitreich.org or the IRC channel #bitreich-en on Freenode servers.

There is also the bitreich website, which is a parody of the worst of what you can see daily on the web.

# Stream live video using nginx

Written by Solène, on 26 August 2019.
Tags: #openbsd68 #openbsd #gaming #nginx

Comments on Mastodon

This blog post is about an nginx rtmp module turning your nginx server into a video streaming server.

The official website of the project is located on github at: https://github.com/arut/nginx-rtmp-module/

I use it to stream video from my computer to my nginx server, then viewers can use mpv rtmp://perso.pw/gaming in order to view the video stream. But the nginx server will also relay to twitch for more scalability (and some people prefer viewing there for some reasons).

The module is already installed with the nginx package since OpenBSD 6.6 (not yet released at this time); there is no package to install the rtmp module before 6.6. On other operating systems, check for something like “nginx-rtmp” or “rtmp” in an nginx context.

Install nginx on OpenBSD:

pkg_add nginx

Then, add the following to the file /etc/nginx/nginx.conf

load_module modules/ngx_rtmp_module.so;
rtmp {
    server {
        listen 1935;
        buflen 10s;

        application gaming {
            live on;

            allow publish 176.32.212.34;
            allow publish 175.3.194.6;
            deny publish all;
            allow play all;

            record all;
            record_path /htdocs/videos/;
            record_suffix %d-%b-%y_%Hh%M.flv;
        }
    }
}

The previous configuration sample is a simple example allowing 176.32.212.34 and 175.3.194.6 to stream through nginx, and it will record the videos under /htdocs/videos/ (nginx is chrooted in /var/www).
You can add the following line in the “application” block to relay the stream to your Twitch broadcasting server, using your API key:

push rtmp://live-ams.twitch.tv/app/YOUR_API_KEY;

I made simple scripts generating thumbnails of the videos and generating a html index file. Every 10 minutes, a cron checks if files have to be generated, makes thumbnails for videos (it tries at 05:30 of the video and then at 00:03 if that doesn't work, to handle very short videos) and then creates the html.

The script checking for new files and starting the html generation:

#!/bin/sh

cd /var/www/htdocs/videos

for file in $(find . -mmin +1 -name '*.flv')
do
echo $file
PIC=$(echo $file | sed 's/flv$/jpg/')
if [ ! -f "$PIC" ]
then
ffmpeg -ss 00:05:30 -i "$file" -vframes 1 -q:v 2 "$PIC"
if [ ! -f "$PIC" ]
then
ffmpeg -ss 00:00:03 -i "$file" -vframes 1 -q:v 2 "$PIC"
if [ ! -f "$PIC" ]
then
echo "problem with $file" | mail user@my-tld.com
fi
fi
fi
done
cd ~/dev/videos/ && sh html.sh
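The only text transformation in the script above is deriving the thumbnail name from the video name; that sed substitution can be checked in isolation (the file name below is just a sample):

```shell
# The script derives each thumbnail name from the video name by
# replacing the trailing "flv" extension with "jpg".
PIC=$(echo "video.flv" | sed 's/flv$/jpg/')
echo "$PIC"
# prints video.jpg
```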


This one makes the html:

#!/bin/sh

cd /var/www/htdocs/videos

PER_ROW=3
COUNT=0

cat << EOF > index.html
<html>
<body>
<h1>Replays</h1>
<table>
EOF

for file in $(find . -mmin +3 -name '*.flv')
do
if [ $COUNT -eq 0 ]
then
echo "<tr>" >> index.html
INROW=1
fi
COUNT=$(( COUNT + 1 ))
SIZE=$(ls -lh $file | awk '{ print $5 }')
PIC=$(echo $file | sed 's/flv$/jpg/')
echo $file
echo "<td><a href=\"$file\"><img src=\"$PIC\" width=320 height=240 /><br />$file ($SIZE)</a></td>" >> index.html
if [ $COUNT -eq$PER_ROW ]
then
echo "</tr>" >> index.html
COUNT=0
INROW=0
fi
done

if [ $INROW -eq 1 ]
then
echo "</tr>" >> index.html
fi

cat << EOF >> index.html
</table>
</body>
</html>
EOF

# Minimalistic markdown subset to html converter using awk

Written by Solène, on 26 August 2019.
Tags: #unix #awk

Comments on Mastodon

Hello

As I use different markup languages on my blog, I would like to use a simpler markup language not requiring an extra package. To do so, I wrote an awk script handling titles, paragraphs and code blocks the same way markdown does.

16 December 2019 UPDATE: adc sent me a patch to add ordered and unordered lists. The code below contains the addition.

It is very easy to use, like: awk -f mmd file.mmd > output.html

The script is the following:

BEGIN {
in_code=0
in_list_unordered=0
in_list_ordered=0
in_paragraph=0
}

{
# escape < > characters
gsub(/</,"\\&lt;",$0);
gsub(/>/,"\\&gt;",$0);

# close code blocks
if(! match($0,/^    /)) {
if(in_code) {
in_code=0
printf "</code></pre>\n"
}
}

# close unordered list
if(! match($0,/^- /)) {
if(in_list_unordered) {
in_list_unordered=0
printf "</ul>\n"
}
}

# close ordered list
if(! match($0,/^[0-9]+\. /)) {
if(in_list_ordered) {
in_list_ordered=0
printf "</ol>\n"
}
}

# display titles
if(match($0,/^#/)) {
if(match($0,/^(#+)/)) {
printf "<h%i>%s</h%i>\n", RLENGTH, substr($0,index($0,$2)), RLENGTH
}

# display code blocks
} else if(match($0,/^    /)) {
if(in_code==0) {
in_code=1
printf "<pre><code>"
print substr($0,5)
} else {
print substr($0,5)
}

# display unordered lists
} else if(match($0,/^- /)) {
if(in_list_unordered==0) {
in_list_unordered=1
printf "<ul>\n"
printf "<li>%s</li>\n", substr($0,3)
} else {
printf "<li>%s</li>\n", substr($0,3)
}

# display ordered lists
} else if(match($0,/^[0-9]+\. /)) {
n=index($0," ")+1
if(in_list_ordered==0) {
in_list_ordered=1
printf "<ol>\n"
printf "<li>%s</li>\n", substr($0,n)
} else {
printf "<li>%s</li>\n", substr($0,n)
}

# close p if current line is empty
} else {
if(length($0) == 0 && in_paragraph == 1 && in_code == 0) {
in_paragraph=0
printf "</p>"
}

# we are still in a paragraph
if(length($0) != 0 && in_paragraph == 1) {
print
}

# open a p tag if previous line is empty
if(length(previous_line)==0 && in_paragraph==0) {
in_paragraph=1
printf "<p>%s\n", $0
}
}
previous_line = $0
}

END {
if(in_code==1) {
printf "</code></pre>\n"
}
if(in_list_unordered==1) {
printf "</ul>\n"
}
if(in_list_ordered==1) {
printf "</ol>\n"
}
if(in_paragraph==1) {
printf "</p>\n"
}
}

# Life with an offline laptop

Written by Solène, on 23 August 2019.
Tags: #openbsd #life #disconnected

Comments on Mastodon

Hello, for a long time I have wanted to work on a special project using an offline device.

I started using computers before my parents had internet access and I was enjoying it. Would it still be the case if I was using a laptop with no internet access?

When I think about an offline laptop, I immediately think I will miss IRC, mails, file synchronization, Mastodon and remote ssh to my servers. But do I really need it _all the time_?

As I started thinking about preparing an old laptop for the experiment, different ideas with their pros and cons came to my mind.

Over the years, I have produced digital data and I cannot deny this. I don't need all of it, but I still want some (some music, my texts, some of my programs). How would I synchronize data from the offline system to my main system (which has replicated backups and such)?

At first I was thinking about using a serial line between the two laptops to synchronize files, but both laptops lack serial ports and buying gear for that would cost too much for its purpose. I ended up thinking that using an IP network _is fine_, if I connect for a specific purpose. This extended a bit further because I also need to install packages, and using a usb memory stick from another computer to fetch packages for the offline system is _tedious_ and ineffective (downloading packages with their correct dependencies is a hard task on OpenBSD if you only want the files).
I also came across a really specific problem: my offline device is an old Apple PowerPC laptop, which is big-endian, while amd64 is little-endian. While this does not seem to be a problem in itself, the OpenBSD filesystem is dependent on endianness, so I could not share a usb memory device using FFS between the two; the alternatives are fat, ntfs or ext2, so it is a dead end.

Finally, using the super slow wireless network adapter of that offline laptop allows me to connect only when I need to, for a few file transfers. I am using the system firewall pf to limit access to the outside. In my pf.conf, I only have rules for DNS, NTP servers, my remote server, the OpenBSD mirror for packages and my other laptop on the lan. I only enable wifi if I need to push an article to my blog or if I need to pull a bit more music from my laptop. This is not entirely _offline_ then, because I can get access to the internet at any time, but it helps me keep the device offline.

There is no modern web browser on powerpc, and I restricted packages to the minimum. So far, when using this laptop, there is no other distraction than the stuff I do myself.

At the time I write this post, I only use xterm and tmux, with moc as a music player (the audio system of the iBook G4 is surprisingly good!), writing this text with ed and a 72 character long prompt in order to wrap words correctly manually (I already talked about that trick!).

As my laptop has a short battery life, roughly two hours, this also helps having "sessions" of a reasonable duration. (Yes, I can still plug the laptop in somewhere.)

I have not used this laptop a lot so far, I only started the experiment a few days ago; I will write about this sometimes. I plan to work on my gopher space to add new content only available there :)

# OpenBSD -stable packages

Written by Solène, on 14 August 2019.
Tags: #automation #openbsd

Comments on Mastodon

Hi, I'm happy to announce the OpenBSD project will now provide -stable binary packages.
This means that if you run the latest release (syspatch applied or not), pkg_add -u will update packages to get security fixes.

Remember to restart services that may have been updated, to be sure to run the new binaries.

Link to official announcement

# OpenBSD ttyplot examples

Written by Solène, on 29 July 2019.
Tags: #openbsd68 #openbsd

Comments on Mastodon

I said I would rewrite the ttyplot examples to make them work on OpenBSD. Here they are, with a small notice first:

Examples using systat will only work for 10000 seconds; increase that -d parameter if needed, or wrap the command in an infinite loop so it restarts (but don't loop systat one run at a time, it needs to run at least once before producing results).

The systat examples won't work before OpenBSD 6.6, which is not yet released at the time I'm writing this, but they will work on a -current snapshot after 20 July 2019. I made a change to systat so it flushes its output at every cycle; it was not possible to parse its output in realtime before.

Enjoy!

## Examples list

### ping

Replace test.example by the host you want to ping.

ping test.example | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"

### cpu usage

vmstat 1 | awk 'NR>2 { print 100-$(NF); fflush(); }' | ttyplot -t "Cpu usage" -s 100


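The awk part of the ping example above can be checked without the network: feed it a captured ping output line (the sample line below is illustrative) and it should print only the milliseconds value.

```shell
# A typical ping line ends with "time=12.345 ms"; field 7 is "time=12.345"
# and substr($7,6) strips the leading "time=" (5 characters).
line='64 bytes from 93.184.216.34: icmp_seq=0 ttl=56 time=12.345 ms'
echo "$line" | awk '/ms$/ { print substr($7,6) }'
# prints 12.345
```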
### disk io

systat -d 1000 -b iostat 1 | awk '/^sd0/ && NR > 20 { print $2/1024 ; print $3/1024 ; fflush }' | ttyplot -2 -t "Disk read/write in kB/s"


### load average 1 minute

{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($8,1,length($8)-1) ; fflush }' | ttyplot -t "load average 1"


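The field extraction can be checked on a captured uptime line (the sample below is illustrative); field 8 is the 1 minute load average with a trailing comma. Note that substr needs a start index of 1, not 0: with 0, awk shortens the result by one character and the last digit is lost.

```shell
# Sample one-line uptime output (illustrative).
line='10:24AM  up  5:15, 1 user, load averages: 0.52, 0.48, 0.45'
# Start at index 1 so the last digit is kept while the comma is trimmed.
echo "$line" | awk '{ print substr($8,1,length($8)-1) }'
# prints 0.52
```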
### load average 5 minutes

{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($9,1,length($9)-1) ; fflush }' | ttyplot -t "load average 5"


### load average 15 minutes

{ while :; do uptime ; sleep 1 ; done } | awk '{ print $10 ; fflush }' | ttyplot -t "load average 15"

### wifi signal strength

Replace iwm0 by your interface name.

{ while :; do ifconfig iwm0 | tr ' ' '\n' ; sleep 1 ; done } | awk '/%$/ { print ; fflush }' | ttyplot -t "Wifi strength in %" -s 100


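The wifi example relies on ifconfig printing the signal strength as the only word ending with %; the tr + awk filtering can be simulated with a fake ifconfig fragment (the sample values are made up):

```shell
# tr splits the ifconfig output into one word per line; awk keeps only
# the word ending with %, which is the signal strength.
echo 'nwid myssid chan 11 bssid aa:bb:cc:dd:ee:ff 68% wpakey' |
    tr ' ' '\n' | awk '/%$/ { print }'
# prints 68%
```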
### cpu temperature

{ while :; do sysctl -n hw.sensors.cpu0.temp0 ; sleep 1 ; done } | awk '{ print $1 ; fflush }' | ttyplot -t "CPU temperature in °C"

### pf state searches rate

systat -d 10000 -b pf 1 | awk '/state searches/ { print $4 ; fflush }' | ttyplot -t "PF state searches per second"


### pf state inserts rate

systat -d 10000 -b pf 1 | awk '/state inserts/ { print $4 ; fflush }' | ttyplot -t "PF state inserts per second"

### network bandwidth

Replace trunk0 by your interface. This is the same command as in my previous article.

netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"

## Tip

You can easily run those examples over ssh to gather the data remotely while plotting locally, as in the following example:

ssh remote_server "netstat -b -w 1 -I trunk0" | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"

or

ssh remote_server "ping test.example" | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"

# Realtime bandwidth terminal graph visualization

Written by Solène, on 19 July 2019.
Tags: #openbsd68 #openbsd

Comments on Mastodon

If for some reason you want to visualize your bandwidth traffic on an interface (in or out) in a terminal with a nice graph, here is a small script to do so, involving ttyplot, a nice piece of software drawing graphics in a terminal.

The following will work on OpenBSD. You can install ttyplot with pkg_add ttyplot as root; the ttyplot package appeared in OpenBSD 6.5. For Linux, the ttyplot official website contains tons of examples.
### Example

Output example while updating my packages: a ttyplot graph drawn with # characters in the terminal, titled "IN Bandwidth in KB/s", with a summary line reading last=422.0 min=1.3 max=1499.2 avg=352.8 KB/s.

In the following command, we will use trunk0 with INBOUND traffic as the interface to monitor.

At the end of the article, there is a command for displaying both in and out at the same time, and also instructions for customizing it to your needs.

Article update: the following command is extremely long and complicated; at the end of the article you can find a shorter and more efficient version, removing most of the awk code.

You can copy/paste this command in your OpenBSD system shell, it will produce a graph of trunk0 inbound traffic.

{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0)  { print ($5-old)/1024 ; fflush ; old = $5 } if(old==-1) { old=$5 } }' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"

The script will run an infinite loop doing netstat -ibn every second and sending its output to awk. You can quit it with Ctrl+C.
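The delta logic of the awk part can be tested by feeding it two fake netstat snapshots (the numbers are made up): the first sample only primes the old value, the second prints the byte difference divided by 1024.

```shell
# Two fake snapshots of `netstat -i -b -n` output: column 5 is the total
# input bytes. The MAC-address line is skipped thanks to index($4,":").
printf '%s\n' \
  'trunk0 1500 <Link>          aa:bb:cc:dd:ee:ff 1000 2000' \
  'trunk0 1500 192.168.1.0/24  192.168.1.1       1000 2000' \
  'trunk0 1500 192.168.1.0/24  192.168.1.1       3048 4000' |
awk 'BEGIN{old=-1}
/^trunk0/ {
    if(!index($4,":") && old>=0) { print ($5-old)/1024 ; old = $5 }
    if(old==-1) { old=$5 }
}'
# prints 0 then 2 (the second snapshot is 2048 bytes higher)
```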
## Explanations

Netstat output contains the total bytes (in or out) transferred since the system started, so awk needs to remember the last value and display the difference between two outputs, skipping the first value because it would make a huge spike (the total network traffic transferred since boot time).

If I decompose the awk script, it is a lot more readable. Awk is very readable if you take care to format it properly, as with any source code!

#!/bin/sh
{ while :; do
    netstat -i -b -n
    sleep 1
done } | awk '
BEGIN {
    old=-1
}
/^trunk0/ {
    if(!index($4,":") && old>=0) {
        print ($5-old)/1024
        fflush
        old = $5
    }
    if(old==-1) {
        old = $5
    }
}' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"
### Using fauxstream

The fauxstream script comes with a README.md file containing some useful information; you can also check the usage.

View usage:

$ ./fauxstream

### Starting a stream

When you start a stream, take care that your API key isn't displayed on the stream! I redirect stderr to /dev/null so all the output containing the key is not displayed.

Here are the settings I use to stream:

$ ./fauxstream -m -vmic 5.0 -vmon 0.2 -r 1920x1080 -f 20 -b 4000 $TWITCH 2> /dev/null

If you choose a smaller resolution than your screen, imagine a square of that resolution starting at the top left corner of your screen: the content of this square will be streamed.

I recommend the bwm-ng package (I wrote a ports of the week article about it) to view your realtime bandwidth usage. If you see the bandwidth reach a fixed number, this means you reached your bandwidth limit and the stream is certainly not working correctly; you should lower the resolution, fps or bitrate.

I recommend doing a few tries before you want to stream, to be sure it's ok. Note that the flag -a may be required in case of audio/video desynchronization; there is no magic value so you should guess and try.

### Adding webcam

I found an easy trick to display a webcam on top of a video game:

$ mpv --no-config --video-sync=display-vdrop --framedrop=vo --ontop av://v4l2:/dev/video1


The trick is to use mpv to display your webcam video on your screen and use the --ontop flag to make it stay on top of any other window (this won't work with the cwm(1) window manager). Then you can resize it and place it where you want. What you see is what gets streamed.

The other mpv flags reduce the lag between the webcam video stream and the display: mpv slowly builds up a delay, and after 10 minutes your webcam would be lagging by something like 10 seconds, totally out of sync between the action and your face.

Don't forget to use chown to change the ownership of your video device to your user; by default only root has access to video devices. This is reset upon reboot.

### Viewing a stream

For less overhead, people can watch a stream using mpv; I think this also requires the youtube-dl package.

Example to view me streaming:

$mpv https://www.twitch.tv/seriphyde  This would also work with a recorded video: $ mpv https://www.twitch.tv/videos/447271018


# High quality / low latency VOIP server with umurmur/Mumble on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd68

Comments on Mastodon

Hello,

I HATE Discord.

Discord users keep talking about their so-called discord server, which is not dedicated to them at all. And Discord has very bad audio quality and a lot of voice distortion.

Why not run your very own Mumble server, with high voice quality, low latency and respect for privacy? It is very easy to set up on OpenBSD!

Mumble is an open source VoIP system: the client is named Mumble (available on various operating systems, including Android), the server part is murmur, but there is also a lightweight server named umurmur. Authentication is done through a certificate generated locally and automatically accepted on a server; the certificate gets associated with a nickname. Nobody can pick the same nickname as another person without having the same certificate.

### How to install?

# pkg_add umurmur
# rcctl enable umurmurd
# cp /usr/local/share/examples/umurmur/umurmur.conf /etc/umurmur/


We could start it as is, but you may want to tweak the configuration file first to add a password to your server, set an admin password, create static channels, change ports, etc.

You may want to increase the max_bandwidth value to increase audio quality, or choose the right value to fit your bandwidth. Using umurmur on a DSL line is fine up to 1 or 2 remote people. The daemon uses very little CPU and very little memory. Umurmur is meant to be used on a router!
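For reference, this setting lives in /etc/umurmur/umurmur.conf; a fragment with a purely illustrative value (the right number depends on your uplink, 128000 below is an assumption, not a recommendation):

```
# /etc/umurmur/umurmur.conf (fragment; the value is illustrative)
max_bandwidth = 128000;
```

Restart umurmurd after changing the file so the new value is taken into account.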

# rcctl start umurmurd


If you have a restrictive firewall (I hope so), you will have to open the ports TCP and UDP 64738.

### How to connect to it?

The client is named Mumble and is packaged under OpenBSD, we need to install it:

# pkg_add mumble


The first time you run it, you will have a configuration wizard that will take only a couple of minutes.

Don’t forget to set the sysctl kern.audio.record to 1 to enable audio recording, as OpenBSD did disable audio input by default a few releases ago.

You will be able to choose between a push-to-talk mode or voice-level activation, and pick a quality level.

Once the configuration wizard is done, you will have another wizard for generating the certificate. I recommend choosing “Automatically create a certificate”, then validate and it’s done.

You will be prompted for a server: click on “Add new”, enter a server name so you can recognize it easily, type its hostname / IP, its port and your nickname, and click OK.

Congratulations, you are now using your own private VOIP server, for real!

# Nginx and acme-client on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd68 #openbsd #nginx #automation

Comments on Mastodon

I write this blog post as I spent too much time setting up nginx and SSL on OpenBSD with acme-client, due to nginx being chrooted and not stripping the challenge path prefix easily.

First, you need to set up /etc/acme-client.conf correctly. Here is mine for the domain ports.perso.pw:

authority letsencrypt {
api url "https://acme-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-privkey.pem"
}

domain ports.perso.pw {
domain key "/etc/ssl/private/ports.key"
domain full chain certificate "/etc/ssl/ports.fullchain.pem"
sign with letsencrypt
}


This example is for OpenBSD 6.6 (which is -current when I write this) because of the Let's Encrypt API URL. If you are running 6.5 or 6.4, replace v02 with v01 in the api url.
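To keep the certificate renewed, acme-client can be run from cron; a root crontab fragment under the assumptions of this article (the schedule is my choice, not from the article — acme-client exits non-zero when nothing changed, so nginx is only reloaded after an actual renewal):

```
# renew the certificate daily at 03:00, reload nginx when it changed
0 3 * * * acme-client ports.perso.pw && rcctl reload nginx
```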

Then, you have to configure nginx this way; the most important part in the following configuration file is the location block handling the acme-challenge requests. Remember that nginx is chrooted in /var/www, so the path to the acme directory is /acme.

http {
include       mime.types;
default_type  application/octet-stream;
index         index.html index.htm;
keepalive_timeout  65;
server_tokens off;

upstream backendurl {
server unix:tmp/plackup.sock;
}

server {
listen       80;
server_name ports.perso.pw;

access_log logs/access.log;
error_log  logs/error.log info;

root /htdocs/;

location /.well-known/acme-challenge/ {
    rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
    root /acme;
}

location / {
    return 301 https://$server_name$request_uri;
}
}

server {
listen 443 ssl;
server_name ports.perso.pw;

access_log logs/access.log;
error_log logs_error.log info;

root /htdocs/;

ssl_certificate /etc/ssl/ports.fullchain.pem;
ssl_certificate_key /etc/ssl/private/ports.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

[... stuff removed ...]
}
}

That's all! I wish I could have found that on the Internet, so I share it here.

# OpenBSD as an IPv6 router

Written by Solène, on 13 June 2019.
Tags: #openbsd66 #openbsd #network

Comments on Mastodon

This blog post is an update (for OpenBSD 6.5 at that time) of this very same article I published in June 2018. Due to rtadvd being replaced by rad, the old text was not useful anymore.

I subscribed to a VPN service from the french association Grifon (Grifon website[FR]) to get IPv6 access to the world and play with IPv6. I will not talk about the VPN service, it would be pointless.

I now have an IPv6 prefix of 48 bits, which can theoretically hold 2^80 addresses. I would like my computers connected through the VPN to let other computers on my network have IPv6 connectivity.

On OpenBSD, this is very easy to do. If you want to provide IPv6 to Windows devices on your network, you will need one more step.

In my setup, I have a tun0 device which has the IPv6 access and re0 which is my LAN network.

First, configure IPv6 on your lan:

# ifconfig re0 inet6 autoconf

that's all, you can add a new line “inet6 autoconf” to your file /etc/hostname.if to get it at boot.

Now, we have to allow IPv6 to be routed through the different interfaces of the router:

# sysctl net.inet6.ip6.forwarding=1

This change can be made persistent across reboots by adding net.inet6.ip6.forwarding=1 to the file /etc/sysctl.conf.
### Automatic addressing

Now we have to configure the daemon rad to advertise our router, so that devices on the network can get an IPv6 address from its advertisements. The minimal configuration of /etc/rad.conf is the following:

interface re0 {
    prefix 2a00:5414:7311::/48
}

In this configuration file we only define the prefix available; this is equivalent to a dhcp address range. Other attributes could provide DNS servers to use, for example; see the rad.conf man page.

Then enable the service at boot and start it:

# rcctl enable rad
# rcctl start rad

### Tweaking resolv.conf

By default OpenBSD will ask for IPv4 when resolving a hostname (see resolv.conf(5) for more explanations). So, you will never have IPv6 traffic until you use a software requesting an explicit IPv6 connection or a hostname only defined with a AAAA record.

# echo "family inet6 inet4" >> /etc/resolv.conf.tail

The file resolv.conf.tail is appended at the end of resolv.conf when dhclient modifies the file resolv.conf.

### Microsoft Windows

If you have Windows systems on your network, they won't get addresses from rad. You will need to deploy a dhcpv6 daemon.

The configuration file for what we want to achieve here is pretty simple: it consists of telling what range we want to allow on DHCPv6 and a DNS server. Create the file /etc/dhcp6s.conf:

interface re0 {
    address-pool pool1 3600;
};
pool pool1 {
    range 2a00:5414:7311:1111::1000 to 2a00:5414:7311:1111::4000;
};
option domain-name-servers 2001:db8::35;

Note that I added “1111” into the range because it should not be on the same network as the router. You can replace 1111 with what you want, even CAFE or 1337 if you want to bring some fun to network engineers.
Now, you have to install and configure the service:

# pkg_add wide-dhcpv6
# touch /etc/dhcp6sctlkey
# chmod 400 /etc/dhcp6sctlkey
# echo SOME_RANDOM_CHARACTERS | openssl enc -base64 > /etc/dhcp6sctlkey
# echo "dhcp6s -c /etc/dhcp6s.conf re0" >> /etc/rc.local

The OpenBSD package wide-dhcpv6 doesn't provide an rc file to start/stop the service, so it must be started from a command line; a way to do it is to put the command in /etc/rc.local, which is run at boot.

The openssl command is needed for dhcpv6 to start, as it requires a base64 string as a secret key in the file /etc/dhcp6sctlkey.

# RSS feed for OpenBSD stable packages repository (made with XSLT)

Written by Solène, on 05 June 2019.
Tags: #openbsd #automation

Comments on Mastodon

I am happy to announce there is now a RSS feed for getting news about new packages available on my repository https://stable.perso.pw/

The file is available at https://stable.perso.pw/rss.xml.

I take the occasion of this blog post to explain how the file is generated, as I did not find an easy tool for this task, so I ended up doing it myself.

I chose to use XSLT, which is not quite common. Briefly, XSLT allows applying some kind of XML template to an XML data file; it supports loops, filtering, etc. It requires only two parts: the template and the data.

Simple RSS template

The following file is a template for my RSS file; we can see a few tags starting with xsl, like xsl:for-each or xsl:value-of. It's interesting to note that the xsl:for-each can use a condition like position() < 10 in order to limit the loop to the first 10 items.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
      <channel>
        <description></description>

        <!-- BEGIN CONFIGURATION -->
        <title>OpenBSD unofficial stable packages repository</title>
        <link>https://stable.perso.pw/</link>
        <atom:link href="https://stable.perso.pw/rss.xml" rel="self" type="application/rss+xml" />
        <!-- END CONFIGURATION -->

        <!-- Generating items -->
        <xsl:for-each select="feed/news[position()&lt;10]">
          <item>
            <title>
              <xsl:value-of select="title"/>
            </title>
            <description>
              <xsl:value-of select="description"/>
            </description>
            <pubDate>
              <xsl:value-of select="date"/>
            </pubDate>
          </item>
        </xsl:for-each>
      </channel>
    </rss>
  </xsl:template>
</xsl:stylesheet>

Simple data file

Now, we need some data to use with the template. I've added a comment block so I can copy / paste it to add a new entry into the RSS easily. As the date is in a format painful to write for a human, I added to the Makefile starting the commands a call to a script replacing the string DATE with the current date in the correct format.

<feed>
  <news>
    <title>www/mozilla-firefox</title>
    <description>Firefox 67.0.1</description>
    <date>Wed, 05 Jun 2019 06:00:00 GMT</date>
  </news>
  <!-- copy paste for a new item
  <news>
    <title></title>
    <description></description>
    <date></date>
  </news>
  -->
</feed>

Makefile

I love makefiles, so I share it even if this one is really short.

all:
	sh replace_date.sh
	xsltproc template.xml news.xml | xmllint -format - | tee rss.xml
	scp rss.xml perso.pw:/home/stable/

clean:
	rm rss.xml

When I want to add an entry, I copy / paste the comment block in news.xml, add DATE, run make and it's uploaded :)

The command xsltproc is available from the package libxslt on OpenBSD.
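The replace_date.sh script called by the Makefile is not shown in the article; a minimal sketch of what it could look like (the file name news.xml and the DATE placeholder come from the article, the implementation itself is my assumption):

```shell
#!/bin/sh
# Demo input: a news entry whose date is still the DATE placeholder.
printf '<feed><news><date>DATE</date></news></feed>\n' > news.xml

# replace_date.sh core: substitute DATE with the current date in the
# RFC 822 format that RSS expects in <pubDate>.
DATE=$(date -u "+%a, %d %b %Y %H:%M:%S GMT")
sed "s/DATE/$DATE/" news.xml > news.xml.tmp && mv news.xml.tmp news.xml

cat news.xml
```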
And then, after writing this, I realized that manually editing the result file rss.xml is as much work as editing the news.xml file and then processing it with xslt… But I keep this blog post as it can be useful for more complicated cases. :)

# Simple way to use ssh tunnels in scripts

Written by Solène, on 15 May 2019.
Tags: #ssh #automation

Comments on Mastodon

While writing a script to backup a remote database, I did not know how to handle an ssh tunnel inside a script correctly/easily. A quick internet search pointed out this link to me: https://gist.github.com/scy/6781836

I'm not a huge fan of the ControlMaster solution, which consists of starting an ssh connection with ControlMaster activated, then telling ssh to close it — and don't forget to put a timeout on the socket, otherwise it won't close if you interrupt the script. But I really enjoyed a neat solution which is valid for most cases:

$ ssh -f -L 5432:localhost:5432 user@host "sleep 5" && pg_dumpall -p 5432 -h localhost > file.sql


This will create an ssh connection and put it in the background because of the -f flag, but it will close itself after the command (sleep 5 in this case) has run. As we chain it immediately to a command using the tunnel, ssh will only stop once the tunnel is no longer used, keeping it alive only for the time required by the pg_dumpall command, not more. If we interrupt the script, I'm not sure whether ssh stops immediately or only after the sleep command has finished, but in both cases ssh will stop correctly. There is no need to use a long sleep value because, as I said previously, the tunnel will stay up until nothing uses it anymore.

You should note that the ControlMaster way is the only reliable one if you need to use the ssh tunnel for multiple commands inside the script.

# Kermit command line to fetch remote files through ssh

Written by Solène, on 15 May 2019.
Tags: #kermit

Comments on Mastodon

I previously wrote about Kermit for fetching remote files using a kermit script. I found that it’s possible to achieve the same with a single kermit command, without requiring a script file.

Given that I want to download files from my remote server under the path /home/mirror/pub, and that I've set up a kermit server on the other side using inetd:

File /etc/inetd.conf:

7878 stream tcp nowait solene /usr/local/bin/kermit-sshsub kermit-sshsub


I can make an ssh tunnel to it, reaching it locally on port 7878 to download my files.

kermit -I -j localhost:7878 -C "remote cd /home/mirror/pub","reget /recursive .",close,EXIT


Some flags can be added to make it even faster, like -v 31 -e 9042. I insist on kermit because it’s super reliable and there are no security issues if running behind a firewall and accessed through ssh.

Fetching files can be stopped at any time, and it supports very poor connections too; it's really reliable. You can also skip files, because sometimes you need some file first and you don't want to modify your script to fetch a specific file (this only works if you don't have too many files to get, of course, because you can only skip them one by one).

# Simple shared folder with Samba on OpenBSD 6.5

Written by Solène, on 15 May 2019.
Tags: #samba #openbsd

Comments on Mastodon

This article explains how to set up a simple samba server to have a CIFS / Windows shared folder accessible by everyone. This is useful in some cases, but samba configuration is not straightforward when you only need it for a one-shot use or this particular case.

The important case covered here is that no users are needed. The trick comes from the map to guest = Bad User configuration line in the [global] section. This option automatically maps an unknown user, or no provided user, to the guest account.

Here is a simple /etc/samba/smb.conf file to share /home/samba with everyone; except for map to guest and the shared folder, it's the stock file with comments removed.

[global]
workgroup = WORKGROUP
server string = Samba Server
server role = standalone server
log file = /var/log/samba/smbd.%m
max log size = 50
dns proxy = no
map to guest = Bad User

[myfolder]
browseable = yes
path = /home/samba
writable = yes
guest ok = yes
public = yes


If you want to set up this on OpenBSD, it’s really easy:

# pkg_add samba
# rcctl enable smbd nmbd
# vi /etc/samba/smb.conf (you can use previous config)
# mkdir -p /home/samba
# chown nobody:nobody /home/samba
# rcctl start smbd nmbd


And you are done.
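
You can then check the guest share from any client machine with smbclient; a hedged example, where the server name is a placeholder:

```shell
# -N skips the password prompt: the "Bad User" mapping logs us in as guest
smbclient -N //myserver/myfolder -c 'ls'
```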

# Neomutt cheatsheet

Written by Solène, on 23 April 2019.
Tags: #neomutt #openbsd

Comments on Mastodon

I switched from a homemade script using mblaze to neomutt (after having used mutt, alpine and mu4e) and it's difficult to remember everything. So, let's make a cheatsheet!

• Mark as read: Ctrl+R
• Mark to delete: d

# Create a dedicated user for ssh tunneling only

Written by Solène, on 17 April 2019.
Tags: #openbsd #ssh

Comments on Mastodon

I use ssh tunneling A LOT, for everything. Yesterday, I removed the public access of my IMAP server; it's now only available through ssh tunneling to the daemon listening on localhost. I have plenty of daemons listening only on localhost that I can only reach through an ssh tunnel. If you don't want to bother with ssh and redirecting the ports you need, you can also set up a VPN (using ssh, openvpn, iked, tinc…) between your system and your server. I tend to avoid setting up a VPN for this use case, as it requires more work and more maintenance than running an ssh server and an ssh client.

The last change, for my IMAP server, added an issue. I want my phone to access the IMAP server but I don’t want to connect to my main account from my phone for security reasons. So, I need a dedicated user that will only be allowed to forward ports.

This is done very easily on OpenBSD.

The steps are:

1. generate ssh keys for the new user
2. add a user with no password
3. allow the public key for port forwarding

Obviously, you must allow users (or only this one) to do port forwarding in your sshd_config.
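
A minimal sketch of such a restriction in sshd_config, assuming the dedicated user is named tunnel as below (adjust to your setup):

```
# /etc/ssh/sshd_config: only let the "tunnel" user forward ports
Match User tunnel
	AllowTcpForwarding yes
	X11Forwarding no
	PermitTTY no
```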

### Generating ssh keys

Please generate the keys in a safe place, using ssh-keygen

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:SOMETHINGSOMETHINSOMETHINSOMETHINSOMETHING user@myhost
The key's randomart image is:
+---[RSA 3072]----+
|                 |
|   **            |
|  * ** .         |
| *  *            |
| **** *          |
|  ****           |
|                 |
|                 |
|                 |
+----[SHA256]-----+


This will create your public key in ~/.ssh/id_rsa.pub and the private key in ~/.ssh/id_rsa.

### Adding a user

On OpenBSD, we will create a user named tunnel; this is done with the following command as root:

# useradd -m tunnel


This user has no password and can't log in over ssh.

### Allow the public key to port forward only

We will use the command restriction in the authorized_keys file so the previously generated key is only allowed to forward ports. Edit /home/tunnel/.ssh/authorized_keys as follows:

command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE


This will print "Tunnel only!" and abort the connection if the user connects with a shell or a command.

### Connect using ssh

You can connect with ssh(1) as usual, but you will need the -N flag to not start a shell on the remote server.

$ ssh -N -L 10000:localhost:993 tunnel@host


If you want the tunnel to stay up in the most automated way possible, you can use autossh from ports, which will do a great job at keeping ssh up.

$ autossh -M 0 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "TCPKeepAlive yes" -N -v -L 9993:localhost:993 tunnel@host


This command starts autossh, which restarts ssh if the forwarding stops working; this is likely to happen when you lose connectivity, as it takes some time for the remote server to disable the forwarding effectively. It makes keep-alive checks so the tunnel stays up (this is particularly useful on wireless connections like 4G/LTE). The other flags are regular ssh parameters: don't start a shell, and make a local forwarding. Don't forget that as a regular user you can't bind to ports below 1024; that's why I redirect remote port 993 to local port 9993 in the example.

### Making the tunnel on Android

If you want to access your personal services from your Android phone, you can use the ConnectBot ssh client. It's really easy:

1. upload your private key to the phone
2. add it in ConnectBot from the main menu
3. create a new connection with the user and your remote host
4. choose to use public key authentication and choose the registered key
5. uncheck "start a shell session" (this is equivalent to the -N ssh flag)
6. from the main menu, long touch the connection and edit the forwarded ports

Enjoy!

# Deploying munin-node with drist

Written by Solène, on 17 April 2019.
Tags: #drist #automation #openbsd

Comments on Mastodon

The following guide is a real world example of drist usage. We will create a script to deploy munin-node on OpenBSD systems.

We need a script that installs the munin-node package but also configures it using the default proposal. This is done easily using the script file.

#!/bin/sh

# checking munin not installed
pkg_info | grep munin-node
if [ $? -ne 0 ]; then
pkg_add munin-node
munin-node-configure --suggest --shell | sh
rcctl enable munin_node
fi

rcctl restart munin_node


The script contains some simple logic to prevent trying to install munin-node each time we run it, and also to prevent re-configuring it automatically every time. This is done by checking if the pkg_info output contains munin-node.
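
As a variant, the same check can be written with pkg_info -e, which tests for an installed package without grepping; a hedged sketch (the package pattern is an assumption):

```shell
#!/bin/sh
# install and configure munin-node only if it is not already present
if ! pkg_info -e "munin-node-*"; then
	pkg_add munin-node
	munin-node-configure --suggest --shell | sh
	rcctl enable munin_node
fi

rcctl restart munin_node
```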

We also need to provide a munin-node.conf file to allow our munin server to reach the nodes. For this how-to, I'll write the configuration with a cat heredoc, but of course you can use your favorite editor to create the file, or copy an original munin-node.conf file and edit it to suit your needs.

mkdir -p files/etc/munin/

cat <<EOF > files/etc/munin/munin-node.conf
log_level 4
log_file /var/log/munin/munin-node.log
pid_file /var/run/munin/munin-node.pid
background 1
setsid 1
user root
group wheel
ignore_file [\#~]$
ignore_file DEADJOE$
ignore_file \.bak$
ignore_file %$
ignore_file \.dpkg-(tmp|new|old|dist)$
ignore_file \.rpm(save|new)$
ignore_file \.pod$
allow ^127\.0\.0\.1$
allow ^192\.168\.1\.100$
allow ^::1$
host *
port 4949
EOF


Now, we only need to use drist on the remote host:

drist root@myserver


The last version of drist now also supports privilege escalation using doas instead of connecting to root by ssh:

drist -s -e doas user@myserver


# Playing Slay the Spire on OpenBSD

Written by Solène, on 01 April 2019.
Tags: #openbsd #gaming

Comments on Mastodon

Thanks to hard work from thfr@, it is now possible to play the commercial game Slay The Spire on OpenBSD.

A small introduction to the game by myself. It's a solo card game where you need to climb a tower. Each floor may contain enemies, a merchant, an elite (harder enemies) or an event. There are three playable characters, each unlocked after some time. The game is really easy to understand: each run you restart from scratch with your character, earning items and cards to build a deck for this run. When you die, you can unlock some new items per character and unlock cards for the next runs. Every run really starts over from scratch. The goal is to reach the top of the tower. Each character plays really differently and each allows a few obvious types of deck builds.

The game works on OpenBSD 6.5 minimum. For this you will need to:

1. Buy Slay The Spire on GOG or Steam (Steam requires installing the Windows or Linux client)
2. Copy files from a Slay The Spire installation (Windows or Linux) to your OpenBSD system
3. Install some packages with pkg_add(1): apache-ant openal jdk rsync lwjgl xz maven
4. Download this script to build and replace the libraries of the game with new ones for OpenBSD
5. Don’t forget to eat, hydrate yourself and sleep. This game is time consuming :)

The process is easy to achieve: find the file desktop-1.0.jar from the game, and run the previously downloaded script in the same folder as this file. This will download an archive from my server containing the sources of libgdx, modified by thfr@ to compile on OpenBSD. The script takes care of downloading it, compiling a few components and replacing the original files of the game.

Finally, start the game with the following command:

/usr/local/jdk-1.8.0/bin/java -Xmx1G -Dsun.java2d.dpiaware=true com.megacrit.cardcrawl.desktop.DesktopLauncher


All settings and saves are stored in the game folder, so you may want to back it up if you don't want to lose your progression.
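
A minimal backup sketch, assuming the game lives in a SlayTheSpire folder in your home directory (the path is an assumption, and the mkdir is only there so the demo runs standalone):

```shell
# archive the whole game folder, saves and settings included
GAMEDIR="$HOME/SlayTheSpire"
mkdir -p "$GAMEDIR"    # demo only: make sure the folder exists
tar czf "slaythespire-$(date +%Y%m%d).tar.gz" -C "$HOME" "SlayTheSpire"
```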

Again, thanks to thfr@ for his huge work on making games working on OpenBSD!

# Using haproxy for TLS layer

Written by Solène, on 07 March 2019.
Tags: #openbsd

Comments on Mastodon

This article explains how to use haproxy to add a TLS layer to any TCP protocol. This includes http or gopher. The following example shows the minimal setup required to make it work; haproxy has a lot of options and I won't use most of them.

The idea is to let haproxy manage the TLS part and let your http server (or any daemon listening on TCP) reply within the wrapped connection.

You need a simple haproxy.cfg, which can look like this:

defaults
mode    tcp
timeout client 50s
timeout server 50s
timeout connect 50s

frontend haproxy
bind *:7000 ssl crt /etc/ssl/certificat.pem
default_backend gopher

backend gopher
server gopher 127.0.0.1:7070 check


The idea is that haproxy waits on port 7000, uses the file /etc/ssl/certificat.pem as its certificate, and forwards requests to the backend on 127.0.0.1:7070. That is ALL. If you want to do https, you need to listen on port 443 and redirect to your port 80.

The PEM file is made of the private key concatenated with the full certificate chain. If you use a self-signed certificate, you can make it with the following command:

cat secret.key certificate.crt > cert.pem
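
If you don't have a certificate yet, here is a hedged sketch to create a self-signed one and bundle it as above (the subject name is a placeholder):

```shell
# generate a key and a self-signed certificate, then concatenate them into one PEM
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	-subj "/CN=example.local" \
	-keyout secret.key -out certificate.crt
cat secret.key certificate.crt > cert.pem
```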


One can use a folder containing PEM certificate files instead of a single file. This allows haproxy to receive connections for ALL the loaded certificates.

For more security, I recommend using the chroot feature and a dh file, but that's out of scope for the current topic.

# Add a TLS layer to your Gopher server

Written by Solène, on 07 March 2019.
Tags: #gopher #openbsd

Comments on Mastodon

Hi,

In this article I will explain how to set up a gopher server supporting TLS. Gopher TLS support is not "official" as there is currently no RFC defining it. The community has recently settled on how to make it work while keeping compatibility with old servers / clients.

The way to do it is really simple.

Client A tries to connect to Server B with a TLS handshake. If Server B answers the TLS handshake correctly, Client A sends the gopher request and Server B answers it. If Server B doesn't understand the TLS handshake, it will probably output a regular gopher page; that output is thrown away, Client A retries the connection using plaintext gopher, and Server B answers the gopher request.

This is easy to achieve because gopher protocol doesn’t require the server to send anything to the client before the client sends its request.

The way to add the TLS layer and the dispatching can be achieved using sslh and relayd. You could use haproxy instead of relayd, but the latter is in the OpenBSD base system so I will use it. Thanks to parazyd for sharing sslh for this use case.

sslh is a protocol demultiplexer: it listens on a port and, depending on what it receives, guesses the protocol used by the client and sends the connection to the corresponding backend. Its first purpose was to make ssh available on port 443 while still having the https daemon working on that server.

Here is a schema of the setup

                     +→ tls?    → relayd (TLS + forwarding) →+
client → sslh TCP 70 +                                       +→ gopher daemon on localhost
                     +→ not tls → → → → → → → → → → → → → → →+


This method allows wrapping any server to make it TLS compatible. The best case would be to have TLS-compatible servers which do all the work without requiring sslh and something to add the TLS layer. But for now, it's a way to show that TLS for gopher is real.

## Relayd

The relayd(1) part is easy: you first need an x509 certificate for the TLS part. I will not explain here how to get one; there are already plenty of how-tos, and one can use Let's Encrypt with acme-client(1) to get one on OpenBSD.

We will write our configuration in /etc/relayd.conf

log connection
relay "gopher" {
listen on 127.0.0.1 port 7000 tls
forward to 127.0.0.1 port 7070
}


In this example, relayd listens on port 7000 and our gopher daemon listens on port 7070. According to relayd.conf(5), relayd will look for the certificate at the following places: /etc/ssl/private/$LISTEN_ADDRESS:$PORT.key and /etc/ssl/$LISTEN_ADDRESS:$PORT.crt, with the current example you will need the files: /etc/ssl/private/127.0.0.1:7000.key and /etc/ssl/127.0.0.1:7000.crt
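
For example, assuming you already obtained a key and certificate (the file names below are placeholders), you would copy them into place like this:

```shell
# relayd looks up certificates by listen address and port
cp example.org.key /etc/ssl/private/127.0.0.1:7000.key
cp example.org.crt /etc/ssl/127.0.0.1:7000.crt
```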

relayd can be enabled and started using rcctl:

# rcctl enable relayd
# rcctl start relayd


## Gopher daemon

Choose your favorite gopher daemon. I recommend geomyidae, but any other valid daemon will work; just make it listen on the correct address and port combination.

# pkg_add geomyidae
# rcctl enable geomyidae
# rcctl set geomyidae flags -p 7070
# rcctl start geomyidae


## SSLH

We will use sslh_fork (sslh_select would be valid too; they have different pros/cons). The --tls parameter tells where to forward a TLS connection, while --ssh forwards to the gopher daemon. This works because the ssh protocol, already known to sslh, behaves exactly like a gopher daemon: the client doesn't expect the server to send data first.

# pkg_add sslh
# rcctl enable sslh_fork
# rcctl set sslh_fork flags --tls 127.0.0.1:7000 --ssh 127.0.0.1:7070 -p 0.0.0.0:70
# rcctl start sslh_fork


## Client

You can easily test if this works by using openssl to connect by hand to port 70.
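
A hedged sketch of such a manual check (the hostname is a placeholder; s_client performs the TLS handshake for us):

```shell
# request the root gopher menu over TLS on port 70
printf "/\r\n" | openssl s_client -quiet -connect myserver.example:70
```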

	targetaddr	$target1
	targetname	"iqn.1994-04.org.netbsd.iscsi-target:target0"
}


While most lines are really obvious, it is mandatory to have the initiatoraddr line; many thanks to cwen@ for pointing this out when I was stuck on it.

The targetname value will depend on the iSCSI target server. If you use netbsd-iscsi-target, then you only need to care about the last part, aka target0, and replace it with the name of your target (which is target0 for the default one).

Then we can enable the daemon and start it:

# rcctl enable iscsid
# rcctl start iscsid


In your dmesg, you should see a line like:

sd4 at scsibus0 targ 1 lun 0: <NetBSD, NetBSD iSCSI, 0> SCSI3 0/direct fixed t10.NetBSD_0x5c6cf1b69fc3b38a


If you use netbsd-iscsi-target, the whole line should be identical except for the sd4 part, which can change depending on your hardware. If you don't see it, you may need to reload the iscsid configuration file with iscsictl reload.

Warning: iSCSI is a bit of a pain to debug. If it doesn't work, double check the IPs in /etc/iscsi.conf and your PF rules on the initiator and the target. You should at least be able to telnet to the target IP on port 3260.

Once you find your new sd device, you can format it and mount it as a regular disk device:

# newfs /dev/rsd4c
# mount /dev/sd4c /mnt


iSCSI is far more efficient and faster than NFS, but it has a totally different purpose. I'm using it on my powerpc machines to build packages on them. This reduces the usage of their old IDE disks while giving better response times and equivalent speed.

# OpenBSD and iSCSI part1: the target (server)

Written by Solène, on 21 February 2019.
Tags: #unix #openbsd #iscsi

Comments on Mastodon

This is the first article of a series about iSCSI.

iSCSI is a protocol designed for sharing a block device across the network as if it was a local disk. This doesn't permit using that disk from multiple places at once though, except if you use a specific filesystem like GFS2 or OCFS2 (Linux only).
In this article, we will learn how to create an iSCSI target, which is the "server" part of iSCSI; the target is the system holding the disk and making it available to others on the network.

OpenBSD does not have a target server in base, so we will have to use net/netbsd-iscsi-target for this. The setup is really simple.

First, we obviously need to install the package, and we will activate the daemon so it starts automatically at boot, but don't start it yet:

# pkg_add netbsd-iscsi-target
# rcctl enable iscsi_target


The configuration files are in the /etc/iscsi/ folder; it contains the files auths and targets. The default configuration files are the same. By looking at the source code, it seems that auths is loaded but appears to have no use at all. We will just overwrite it every time we modify targets to keep them in sync.

Default /etc/iscsi/targets (with comments stripped):

extent0		/tmp/iscsi-target0	0	100MB
target0		rw	extent0		10.4.0.0/16


The first line defines the file holding our disk in the second field, and the last field defines its size. When iscsi-target is started, it will create the files as required with the size defined here.

The second line defines permissions: in this case, the extent0 disk can be used read/write by the net 10.4.0.0/16. For this example, I will only change the netmask to suit my network, then I copy targets over auths.

Let's start the daemon:

# rcctl start iscsi_target
# rcctl check iscsi_target
iscsi_target(ok)


If you want to restrict ports using PF, you only have to allow TCP port 3260 from the network that will connect to the target. The according line would look like this:

pass in proto tcp to port 3260


Done!

# Drist release with persistent ssh

Written by Solène, on 18 February 2019.
Tags: #unix #automation #drist

Comments on Mastodon

Drist sees its release 1.04 available. This adds support for the flag -p to make the ssh connection persistent across the script, using the ssh ControlMaster feature.
This fixes one use case where you modify ssh keys in two operations: copy file + script to change permissions; it also makes drist a lot faster for quick tasks. Drist makes a first ssh connection to get the real hostname of the remote machine, and then would open a new ssh connection for each step (copy, copy-hostname, absent, absent-hostname, script, script-hostname); this means that in the use case where you copy one file and reload a service, it was making 3 connections. Now with the persistent flag, drist keeps the first connection and reuses it, closing the control socket at the end of the script.

Drist is now 121 lines long.

Download v1.04

SHA512 checksum, split in two to not break the display:

525a7dc1362877021ad2db8025832048d4a469b72e6e534ae4c92cc551b031cd
1fd63c6fa3b74a0fdae86c4311de75dce10601d178fd5f4e213132e07cf77caa


# Aspell to check spelling

Written by Solène, on 12 February 2019.
Tags: #unix

Comments on Mastodon

I never used a command line utility to check the spelling in my texts because I did not know how to do it. After taking five minutes to learn, I feel guilty about not having used one before, as it is really simple.

First, you want to install the aspell package, which may already be there, pulled in as a dependency. On OpenBSD it's easy:

# pkg_add aspell


I will only explain how to use it on text files. I think it is possible to have some integration with text editors, but then it would be more relevant to check out the editor documentation.

If I want to check the spelling in my file draft.txt, it is as simple as:

$ aspell -l en_EN -c draft.txt


The parameter -l en_EN depends on your locale; I have fr_FR.UTF-8, so aspell uses it by default if I don't enforce another language. With this command, aspell opens an interactive display in the terminal.

The output looks like this, with the word ful highlighted (which I cannot render in my article).

It's ful of mistakkes!

I dont know how to type corectly!

1) flu                                              6) FL
2) foul                                             7) fl
3) fuel                                             8) UL
4) full                                             9) fol
5) furl                                             0) fur
i) Ignore                                           I) Ignore all
r) Replace                                          R) Replace all
a) Add                                              l) Add Lower
b) Abort                                            x) Exit

?


I am asked how I want to resolve the issue with ful; as I wanted to write full, I type 4 and aspell replaces ful with full. It then automatically jumps to the next error found, mistakkes in my case:

It's full of mistakkes!

I dont know how to type corectly!

1) mistakes                                         6) misstates
2) mistake's                                        7) mistimes
3) mistake                                          8) mistypes
4) mistaken                                         9) stake's
5) stakes                                           0) Mintaka's
i) Ignore                                           I) Ignore all
r) Replace                                          R) Replace all
a) Add                                              l) Add Lower
b) Abort                                            x) Exit

?


and it continues until no errors are left; then the file is saved with the changes.
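
For a non-interactive check, aspell also has a list mode that just prints the misspelled words found on stdin; a hedged example:

```shell
# print each misspelled word, one per line, without editing anything
echo "It's ful of mistakkes!" | aspell -l en list
```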

I will use aspell every day from now on.

# Port of the week: sct

Written by Solène, on 07 February 2019.
Tags: #unix #openbsd

Comments on Mastodon

It's been a long time since I last wrote a "port of the week".

This week, I am happy to present you sct, a very small utility to set the color temperature of your screen. You can install it on OpenBSD with pkg_add sct and its usage is really simple: just run sct $temp where $temp is the temperature you want on your screen.

The default temperature is 6500; if you lower this value, the screen shifts toward red, meaning it appears less blue, which may be more comfortable for some people. The temperature you want to use depends on the screen and on your feeling: I have one screen which is fine at 5900, but another old screen turns too red below 6200!

You can add sct 5900 to your .xsession file to start it when you start your X11 session.

There is an alternative to sct named redshift. It is more complicated, as you need to tell it your location with latitude and longitude, and, as a daemon, it continuously corrects your screen temperature depending on the time. This is possible because, knowing your location on earth and the time, you can compute the sunrise and dawn times. sct is not a daemon: you run it once and it does not change the temperature again until you call it again.

# How to parallelize Drist

Written by Solène, on 06 February 2019.
Tags: #drist #automation #unix

Comments on Mastodon

This article will show you how to make drist faster by using it on multiple servers at the same time, in a correct way.

What is drist?

It is easy to parallelize drist (this works for everything though) using a Makefile. I use this to deploy a configuration on all my servers at the same time; it's way faster.

A simple BSD Make compatible Makefile looks like this:

SERVERS=tor-relay.local srvmail.tld srvmail2.tld

${SERVERS}:
	drist $*

install: ${SERVERS}

.PHONY: all install ${SERVERS}


This creates a target for each server in my list which will call drist. Typing make install will iterate over the $SERVERS list, but it is also possible to use make -j 3 to tell make to use 3 threads. The output may be mixed though.

You can also use make tor-relay.local if you don't want make to iterate over all servers. This doesn't do more than typing drist tor-relay.local in the example, but your Makefile may carry other logic before/after.

If you want to type make to deploy everything instead of make install, you can add the line all: install in the Makefile.

If you use GNU Make (gmake), the file requires a small change: the part ${SERVERS}: must be changed to ${SERVERS}: %:. I think gmake will print a warning, but I did not succeed with a better result. If you have the solution to remove the warning, please tell me.

If you are not comfortable with Makefiles, the .PHONY line tells make that the targets are not valid files.

Make is awesome!

# Vincent Delft talk at FOSDEM 2019: OpenBSD as a full-featured NAS

Written by Solène, on 05 February 2019.
Tags: #unix #openbsd

Comments on Mastodon

Hi, I rarely post about external links or other people's work, but at FOSDEM 2019 Vincent Delft gave a talk about running OpenBSD as a full-featured NAS. I do use OpenBSD on my NAS; I have wanted to write an article about it for a long time but never did. Thanks to Vincent, I can just share his work, which is very interesting if you plan to build your own NAS.

Videos can be downloaded directly with the following links provided by Fosdem:

# Transfer your files with Kermit

Written by Solène, on 31 January 2019.
Tags: #unix #kermit

Comments on Mastodon

Hi, it's been a long time I have wanted to write this article. The topic is Kermit, a file transfer protocol from the 80's which solved problems of that era (text and binary files, poor lines, high latency etc.). There is a comm/kermit package on OpenBSD and I am going to show you how to use it.
The package is the program ckermit, which is a client/server for kermit. Kermit is a lot of things: there is the protocol, but it's also the client/server; when you type kermit, it opens a kermit shell where you can type commands or write kermit scripts. This allows scripts to be written with kermit in the shebang.

I personally use kermit over ssh to retrieve files from my remote server, which requires kermit on both machines. My script is the following:

#!/usr/local/bin/kermit +
set host /pty ssh -t -e none -l solene perso.pw kermit
remote cd /home/ftp/
cd /home/solene/Downloads/
reget /recursive /delete .
close
exit


This connects to the remote server and starts kermit there. It changes the current directory on the remote server to /home/ftp and locally goes into /home/solene/Downloads; then it starts retrieving data, continuing previous transfers if not finished (the reget command), and every finished file is deleted on the remote server. Once finished, it closes the ssh connection and exits.

The transfer interface looks like this. It shows how you are connected, which file is currently transferring, its size, the percent done (0% in the example), time left, speed and some other information.

C-Kermit 9.0.302 OPEN SOURCE:, 20 Aug 2011, solene.perso.local [192.168.43.56]

   Current Directory: /home/downloads/openbsd
        Network Host: ssh -t -e none -l solene perso.pw kermit (UNIX)
        Network Type: TCP/IP
              Parity: none
         RTT/Timeout: 01 / 03
           RECEIVING: src.tar.gz => src.tar.gz
           File Type: BINARY
           File Size: 183640885
        Percent Done: ...10...20...30...40...50...60...70...80...90..100
 Estimated Time Left: 00:43:32
  Transfer Rate, CPS: 70098
        Window Slots: 1 of 30
         Packet Type: D
        Packet Count: 214
       Packet Length: 3998
         Error Count: 0
          Last Error:
        Last Message:

X to cancel file, Z to cancel group, <CR> to resend last packet,
E to send Error packet, ^C to quit immediately, ^L to refresh screen.
What's interesting is that you can skip a file by pressing "X": kermit will stop the download (but keep the file for later resuming) and start downloading the next file. It can be useful when you transfer a bunch of files and one is really big, you don't want it now, and you don't want to modify your script to fetch a specific file: just press "X" and it skips it. Z or E will end the transfer and close the connection.

Speed can be improved by adding the following lines before the reget command:

set reliable
set window 32
set receive packet-length 9024


This improves performance because nowadays our networks are mostly reliable and fast; Kermit was designed at a time when serial lines were used to transfer data. It's also reported that Kermit is in use in the ISS (International Space Station), though I can't verify if it's still in use there.

I never had any issue while transferring, even when resuming a file many times or using a poor 4G hot-spot with 20s of latency. I did some tests and I get the same performance as rsync over the Internet; it's a bit slower over LAN though.

I only described one use case; scripts can be made and there are a lot of other commands. You can type "help" in the kermit shell to get some hints, and "?" will display the command list.

It can be used interactively too: you can queue files by using "add" to create a send-list, and then proceed to transfer the queue.

Another way to use it is to start the local kermit shell, then type "ssh user@remote-server" to get into the remote box. There you can type "kermit" and enter kermit commands; this makes a link between your local kermit and the remote one. You can go back to the local kermit by typing "Ctrl+\" and return to the remote one by entering the command "C".

This is a piece of software I found by lurking in the ports tree to discover new software, and I fell in love with it. It's really reliable.
It does a different job compared to rsync: I don't think it can preserve times, permissions etc., but it can be scripted completely, using parameters, and it's an awesome piece of software!

It should support HTTP, HTTPS and ftp transfers too, as a client, but I did not get them to work. On OpenBSD, the HTTPS support is disabled; it requires some work to switch it to LibreSSL.

You can find information on the official website.

# Some 2019 news

Written by Solène, on 14 January 2019.
Tags: #blog

Comments on Mastodon

Hi from 2019! Some news about me and this blog.

It's been more than a month since the last article, which is unusual. I don't have much time these days and the ideas in the queue are not easy topics, so I don't publish anything.

I am now on Mastodon at solene@bsd.network, publishing things on the Fediverse. Mostly UNIX propaganda.

This year I plan to work on reed-alert to improve its usage, and maybe write more how-tos or documentation about it too. I am also thinking about moving the non-core probes into a separate repository.

Cl-yag, the blog generator that I use for this blog, deserves some attention too. I would like to make it possible to create static pages outside the index/RSS; this doesn't require much code as I already have a proof of concept, but it requires some changes to integrate better.

Finally, my deployment tool drist should definitely be fixed to support tcsh and csh as remote shells for script execution. This requires a few easy changes. Some better documentation and how-tos would be nice too.

I also revived a project named faubackup, a backup software which is now hosted on Framagit.

And I revived another project of mine, a package statistics website gathering stats about installed OpenBSD packages. The code is not great, the web UI is not great, the filters are not great, but it works. It needs improvements.
I’m thinking about making a package of it for people wishing to participate; it would install the client and add a cron job to update the package list weekly. The web UI is at this address: Pkgstat. That name is not good but I did not find a better one yet. The code can be downloaded here.

Thank you for reading :)

# Fun tip #3: Split a line using ed

Written by Solène, on 04 December 2018.
Tags: #fun-tip #unix #openbsd66

Comments on Mastodon

In this new article I will explain how to programmatically split a line (insert a newline into it) using ed. We will send commands to ed on its stdin to do so. The logic is to locate the place where the newline should be added and whether a character needs to be replaced. Here is the sample file:

this is a file
with a too much line in it that should be split
but not this one.

In order to do so, we will format the command list using printf(1), with a small trick to insert the newline. The command list is the following:

/too much line
s/that /that\
/
,n

This searches for the first line matching “too much line”, then replaces “that ” with “that” followed by a newline; the trick is to escape the newline with a backslash so the substitution command accepts it. At the end we print the file with line numbers (replace ,n with w to write the changes instead). The resulting command line is:

$ printf '/too much line\ns/that /that\\\n/\n,n\n' | ed file.txt
81
with a too much line in it that should be split
should be split
1       this is a file
2       with a too much line in it that
3       should be split
4       but not this one.
?
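A self-contained variant of the same trick (the file path and its contents are arbitrary examples):

```shell
# Build a one-line sample file, then use ed to split it after "file ".
printf 'this is a file with two sentences on one line\n' > /tmp/split-demo.txt

# In the substitution, a backslash-escaped newline inside the replacement
# inserts a real newline; the final "w" writes the change back to disk.
printf '/two sentences\ns/file /file\\\n/\nw\n' | ed -s /tmp/split-demo.txt

# The file now holds two lines: "this is a file" / "with two sentences on one line".
cat /tmp/split-demo.txt
```

The -s flag silences ed's byte counts, which is convenient in scripts.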


# Configuration deployment made easy with drist

Written by Solène, on 29 November 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Hello, in this article I will present my deployment tool drist (if you speak Russian, I am already aware of what you think). It reached a feature-complete status today and now I can write about it.

As a system administrator, I started using Salt a few years ago. And honestly, I cannot cope with it anymore. It is slow, it can get very complicated for some tasks like correctly ordering commands, and a configuration file can become a nightmare when you start using conditions in it.

You may already have read and heard a bit about drist as I wrote an article about my presentation of it at bitreichcon 2018.

### History

I also tried alternatives like ansible, puppet, Rex etc. One day, when lurking in the ports tree, I found sysutils/radmind which caught my interest even though it is really poorly documented. It is a project from 1995 if I remember correctly, but I liked the base idea. Radmind works with files: you create a known working set of files for your system, and you can propagate that whole set to other machines, or see the differences between the reference and the current system. Sets could be negative, meaning that the listed files should not be present on the system, and it was also possible to add extra sets for specific hosts. The whole thing is really cumbersome, requires a lot of work, and I found little documentation, so I did not use it; but it led me to write my own deployment tool using ideas from radmind (working with files) and from Rex (using a script for making changes).

### Concept

drist aims at being simple to understand and pluggable with standard tools. There is no special syntax to learn, no daemon to run, no agent, and it relies on base tools like awk, sed, ssh and rsync.

drist is cross-platform as it has few requirements, but it is not well suited for deploying on too many different operating systems at once.

When executed, drist runs six steps in a specific order; you can use only the steps you need.

Shamelessly copied from the man page, explanations after:

1. If the folder files exists, its content is copied to the server using rsync(1).
2. If the folder files-HOSTNAME exists, its content is copied to the server using rsync(1).
3. If the folder absent exists, the filenames in it are deleted on the server.
4. If the folder absent-HOSTNAME exists, the filenames in it are deleted on the server.
5. If the file script exists, it is copied to the server and executed there.
6. If the file script-HOSTNAME exists, it is copied to the server and executed there.

In the previous list, all the existence checks are done from the current working directory where drist is started. The text HOSTNAME is replaced by the output of uname -n on the remote server, and files are copied starting from the root directory.

drist does not do anything more. In a more literal manner: it copies files to the remote server using a local filesystem tree (folder files), it deletes on the remote server all files listed in another local filesystem tree (folder absent), and it runs on the remote server a script named script.

Each of these can be customized per host by adding a “-HOSTNAME” suffix to the folder or file name, because experience taught me that some hosts do require specific configuration.

If a folder or a file does not exist, drist will skip it. So it is possible to only copy files, or only execute a script, or delete files and then execute a script.
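To make the selection logic concrete, here is a tiny shell sketch (not drist's actual code; the function name and demo paths are made up) that lists which of the six steps would run for a given module directory and remote hostname:

```shell
# Print which drist steps would apply, in drist's order of execution.
drist_plan() {
    dir=$1; host=$2
    for item in files "files-$host" absent "absent-$host" script "script-$host"; do
        if [ -e "$dir/$item" ]; then
            echo "would process: $item"
        fi
    done
}

# Demo module: a generic files tree plus a host-specific script.
mkdir -p /tmp/drist-demo/files/etc
touch /tmp/drist-demo/script-web1.example.com
drist_plan /tmp/drist-demo web1.example.com
```

Here only the files folder and the host-specific script exist, so only steps 1 and 6 would run.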

### Drist usage

The usage is pretty simple. drist has three flags, all of which are optional.

• -n shows what would happen (simulation mode)
• -s tells drist to use sudo on the remote host
• -e takes a parameter telling drist to use a specific path for the sudo program

The remote server address (ssh format like user@host) is mandatory.

$ drist my_user@my_remote_host

drist looks at the files and folders in the current directory when executed; this allows you to organize things as you want using your filesystem and a revision control system.

### Simple examples

Here are two examples to illustrate its usage. The examples are easy, for learning purposes.

#### Deploying ssh keys

I want to easily copy my users’ ssh keys to a remote server.

$ mkdir drist_deploy_ssh_keys
$ cd drist_deploy_ssh_keys
$ mkdir -p files/home/my_user1/.ssh
$ mkdir -p files/home/my_user2/.ssh
$ cp -fr /path/to/key1/id_rsa files/home/my_user1/.ssh/
$ cp -fr /path/to/key2/id_rsa files/home/my_user2/.ssh/
$ drist user@remote-host
Copying files from folder "files":
/home/my_user1/.ssh/id_rsa
/home/my_user2/.ssh/id_rsa


#### Deploying authorized_keys file

We can easily create the authorized_keys file by using cat.

$ mkdir drist_deploy_ssh_authorized
$ cd drist_deploy_ssh_authorized
$ mkdir -p files/home/user/.ssh/
$ cat /path/to/user/keys/*.pub > files/home/user/.ssh/authorized_keys
$ drist user@remote-host
Copying files from folder "files":
/home/user/.ssh/authorized_keys

This can be automated using a makefile running the cat command and then running drist.

all:
	cat /path/to/keys/*.pub > files/home/user/.ssh/authorized_keys
	drist user@remote-host

#### Installing nginx on FreeBSD

This module (aka a folder which contains material for drist) will install nginx on FreeBSD and start it.

$ mkdir deploy_nginx
$ cd deploy_nginx
$ cat > script <<EOF
#!/bin/sh
test -f /usr/local/bin/nginx
if [ $? -ne 0 ]; then
    pkg install -y nginx
fi
sysrc nginx_enable=yes
service nginx restart
EOF
$ drist user@remote-host
Executing file "script":
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
nginx: 1.14.1,2

Number of packages to be installed: 1

The process will require 1 MiB more space.
421 KiB to be downloaded.
[1/1] Fetching nginx-1.14.1,2.txz: 100%  421 KiB 430.7kB/s    00:01
Checking integrity... done (0 conflicting)
[1/1] Installing nginx-1.14.1,2...
===> Creating groups.
Using existing group 'www'.
===> Creating users
Using existing user 'www'.
[1/1] Extracting nginx-1.14.1,2: 100%
Message from nginx-1.14.1,2:

===================================================================
Recent version of the NGINX introduces dynamic modules support.  In
FreeBSD ports tree this feature was enabled by default with the DSO
knob.  Several vendor's and third-party modules have been converted
to dynamic modules.  Unset the DSO knob builds an NGINX without
dynamic modules support.

To load a module at runtime, include the new load_module'
directive in the main context, specifying the path to the shared
object file for the module, enclosed in quotation marks.  When you
reload the configuration or restart NGINX, the module is loaded in.
It is possible to specify a path relative to the source directory,
or a full path, please see
https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/ and
http://nginx.org/en/docs/ngx_core_module.html#load_module for
details.

Default path for the NGINX dynamic modules is

/usr/local/libexec/nginx.
===================================================================
nginx_enable:  -> yes
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
nginx not running? (check /var/run/nginx.pid).
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.


### More complex example

Now I will show more complex examples, with host-specific steps. I will not display the output because the previous outputs were sufficient to give a rough idea of what drist does.

#### Removing someone ssh access

We will reuse the existing module here: a user should no longer be able to log in to their account on the servers using their ssh key.

$ cd ssh
$ mkdir -p absent/home/user/.ssh/
$ touch absent/home/user/.ssh/authorized_keys
$ drist user@server


#### Installing php on FreeBSD

The following module will install php and remove the opcache.ini file; it will also install php72-pdo_pgsql if it is run on the server production.domain.private.

$ mkdir deploy_php && cd deploy_php
$ mkdir -p files/usr/local/etc
$ cp /some/correct/config.ini files/usr/local/etc/php.ini
$ cat > script <<EOF
#!/bin/sh
test -f /usr/local/etc/php-fpm.conf || pkg install -f php-extensions
sysrc php_fpm_enable=yes
service php-fpm restart
test -f /usr/local/etc/php/opcache.ini && rm /usr/local/etc/php/opcache.ini
EOF
$ cat > script-production.domain.private <<EOF
#!/bin/sh
test -f /usr/local/etc/php/pdo_pgsql.ini || pkg install -f php72-pdo_pgsql
service php-fpm restart
EOF

#### The monitoring machine

This machine is unique and I would like to avoid applying its configuration to another server (that happened to me once with Salt and it was really, really bad). So I will do the whole job using hostname-specific cases.

$ mkdir my_unique_machine && cd my_unique_machine
$ mkdir -p files-unique-machine.private/usr/local/etc/{smokeping,munin}
$ cp /good/config files-unique-machine.private/usr/local/etc/smokeping/config
$ cp /correct/conf files-unique-machine.private/usr/local/etc/munin/munin.conf
$ cat > script-unique-machine.private <<EOF
#!/bin/sh
pkg install -y smokeping munin-master munin-node
munin-node-configure --shell --suggest | sh
sysrc munin_node_enable=yes
sysrc smokeping_enable=yes
service munin-node restart
service smokeping restart
EOF
$ drist user@incorrect-host
$ drist user@unique-machine.private
Copying files from folder "files-unique-machine.private":
/usr/local/etc/smokeping/config
/usr/local/etc/munin/munin.conf
Executing file "script-unique-machine.private":
[...]


Nothing happened on the wrong system.

#### Be creative

Everything can be automated easily. I have a makefile in a lot of my drist modules, so I just need to type “make” to run them correctly. Sometimes it requires concatenating files before running, and sometimes I do not want to make a mistake or have to remember which module applies to which server (if it’s specific), so the makefile does the job for me.

One of my drist modules looks at all my SSL certificates from another module, builds a reed-alert configuration file using awk, and deploys it on the monitoring server. All I do is type “make” and enjoy my free time.

### How to get it and install it

• Drist can be downloaded at this address.
• Sources can be cloned using git clone git://bitreich.org/drist

In the sources folder, type “make install” as root; this will copy the drist binary to /usr/bin/drist and its man page to /usr/share/man/man1/drist.1.

For copying files, drist requires rsync on both local and remote hosts.

For running the script file, an sh-compatible shell is required on the remote host (csh does not work).

# Fun tip #2: Display trailing spaces using ed

Written by Solène, on 29 November 2018.
Tags: #unix #fun-tip #openbsd66

Comments on Mastodon

This second fun-tip article will explain how to display trailing spaces in a text file, using the ed(1) editor. ed has a special command for showing a dollar character at the end of each line, which means that if the line ends with spaces, the dollar character will be spaced away from the last visible character of the line.

$echo ",pl" | ed some-file.txt 453 This second fun-tip article will explain how to display trailing$
spaces in a text file, using the$[ed(1)$](https://man.openbsd.org/ed)
editor.$ed has a special command for showing a dollar character at the end of$
each line, which mean that if the line has some spaces, the dollar$character will spaced from the last visible line character.$
$.Bd -literal -offset indent$
echo ",pl" | ed some-file.txt$ This is the output of the article file while I am writing it. As you can notice, there is no trailing space here. The first number shown in the ed output is the file size, because ed starts at the end of the file and then, wait for commands. If I use that very same command on a small text files with trailing spaces, the following result is expected: 49 this is full$
of trailing  $spaces !$


It is also possible to display line numbers using the “n” command instead of the “p” command. This would produce this result for my current article file:

1559
1       .Dd November 29, 2018$
2       .Dt "Show trailing spaces using ed"$
3       This second fun-tip article will explain how to display trailing$
4       spaces in a text file, using the$
5       .Lk https://man.openbsd.org/ed ed(1)$
6       editor.$
7       ed has a special command for showing a dollar character at the end of$
8       each line, which mean that if the line has some spaces, the dollar$
9       character will spaced from the last visible line character.$
10      $
11      .Bd -literal -offset indent$
12      echo ",pl" | ed some-file.txt$
13      453$
14      .Dd November 29, 2018
15      .Dt "Show trailing spaces using ed"
16      This second fun-tip article will explain how to display trailing
17      spaces in a text file, using the
18      .Lk https://man.openbsd.org/ed ed(1)
19      editor.
20      ed has a special command for showing a '\ character at the end of
21      each line, which mean that if the line has some spaces, the '\
22      character will spaced from the last visible line character.
23
24      \&.Bd \-literal \-offset indent
25      \echo ",pl" | ed some-file.txt
26      .Ed$
27      $
28      This is the output of the article file while I am writing it. As you$
29      can notice, there is no trailing space here.$
30      $
31      The first number shown in the ed output is the file size, because ed$
32      starts at the end of the file and then, wait for commands.$
33      $
34      If I use that very same command on a small text files with trailing$
35      spaces, the following result is expected:$
36      $
37      .Bd -literal -offset indent$
38      49$
39      this is full
40      of trailing
41      spaces      !
42      .Ed$
43      $
44      It is also possible to display line numbers using the "n" command$
45      instead of the "p" command.$
46      This would produce this result for my current article file:$
47      .Bd -literal -offset indent$


This shows my article file with each line numbered plus the position of the last character of each line, this is awesome!

I have to admit though that including my own article as example is blowing up my mind, especially as I am writing it using ed.
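As a self-contained demonstration of the trick (the file path and contents are arbitrary):

```shell
# Create a file where the second line ends with a space, then ask ed to
# print every line with the "l" (list) suffix: a $ marks each end of line.
printf 'no trailing space\ntrailing space here \n' > /tmp/trail-demo.txt
echo ',pl' | ed -s /tmp/trail-demo.txt
```

The -s flag suppresses the byte count, leaving only the two listed lines; a quick alternative for locating such lines is grep -n ' $' on the file.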

# Tor part 6: onionshare for sharing files anonymously

Written by Solène, on 21 November 2018.
Tags: #tor #unix #network #openbsd66

Comments on Mastodon

If for some reason you need to share a file anonymously, this can be done through Tor using the port net/onionshare. Onionshare will start a web server displaying a unique page with a list of shared files and a “Download Files” button leading to a zip file.

While waiting for a download, onionshare will display HTTP logs. By default, onionshare will exit upon successful download of the files, but this can be changed with the flag --stay-open.

Its usage is very simple, execute onionshare with the list of files to share, as you can see in the following example:

solene@computer ~ $ onionshare Epictetus-The_Enchiridion.txt
Onionshare 1.3 | https://onionshare.org/
Connecting to the Tor network: 100% - Done
Configuring onion service on port 17616.
Starting ephemeral Tor onion service and awaiting publication
Settings saved to /home/solene/.config/onionshare/onionshare.json
Preparing files to share.
 * Running on http://127.0.0.1:17616/ (Press CTRL+C to quit)
Give this address to the person you're sending the file to:
http://3ngjewzijwb4znjf.onion/hybrid-marbled

Press Ctrl-C to stop server

Now, I need to give the address http://3ngjewzijwb4znjf.onion/hybrid-marbled to the receiver, who will need a web browser with Tor to download it.

# Tor part 5: onioncat for IPv6 VPN over tor

Written by Solène, on 13 November 2018.
Tags: #tor #unix #network #openbsd66

Comments on Mastodon

This article is about a software named onioncat, which is available as a package on most Unix and Linux systems. This software allows you to create an IPv6 VPN over Tor, with no restrictions on network usage.

First, we need to install onioncat. On OpenBSD:

$ doas pkg_add onioncat


Run a tor hidden service, as explained in one of my previous articles, and get the hostname value. If you run multiple hidden services, pick one hostname.

# cat /var/tor/ssh_hidden_service/hostname
g6adq2w15j1eakzr.onion


Now that we have the hostname, we just need to run ocat.

# ocat g6adq2w15j1eakzr.onion


If everything works as expected, a tun interface will be created, with a fe80:: link-local IPv6 address and a fd87:: address assigned to it.

Your system is now reachable, via Tor, through its IPv6 address starting with fd87::. It supports every IP protocol. Instead of using the torsocks wrapper and a .onion hostname, you can use this IPv6 address with any software.

# Moving away from Emacs, 130 days after

Written by Solène, on 13 November 2018.
Tags: #emacs

Comments on Mastodon

It has been more than four months since I wrote my article about leaving Emacs. This article will quickly speak about my journey.

First, I successfully left Emacs. Long story short, I like Emacs and think it’s a great piece of software, but I’m not comfortable being dependent on it for everything I do. I chose to replace all my Emacs usage with other software (agenda, note taking, todo list, IRC client, jabber client, editor etc.).

• agenda is now replaced by when (port productivity/when), but I plan to replace it with calendar(1) as it’s in base and when doesn’t do much
• todo-list: I now use taskwarrior + a kanban board (using kanboard) for team work
• notes: I wrote a small software named “notes” which is a wrapper for editing files and following edition using git. It’s available at git://bitreich.org/notes
• IRC: weechat (not better or worse than emacs circe)
• jabber: profanity
• editor: vim, ed or emacs, depending on what I do. Emacs is excellent for writing Lisp or Scheme code, while I prefer vim for most editing tasks. I now use ed for small edits.
• mail: I wrote some kind of a wrapper on top of mblaze. I plan to share it someday.

I’m happy to have moved out from Emacs.

# Fun tip #1: Apply a diff with ed

Written by Solène, on 13 November 2018.
Tags: #fun-tip #unix #openbsd66

Comments on Mastodon

I am starting a new kind of article that I chose to name “fun facts”. These articles will be about one-liners which can have some kind of use, or that I find interesting from a technical point of view. While not useless, these commands may only be useful in very specific cases.

The first of its kind will explain how to programmatically use diff to transform file1 into file2, using a command line, and without a patch file.

First, create a file, with a small content for the example:

$printf "first line\nsecond line\nthird line\nfourth line with text\n" > file1$ cp file1{,.orig}
$printf "very first line\nsecond line\n third line\nfourth line\n" > file1  We will use diff(1) -e flag with the two files. $ diff -e file1 file1.orig
4c
fourth line
.
1c
very first line
.


The diff(1) output is a batch of ed(1) commands which will transform the original file into the modified one. This can be embedded into a script as in the following example. We also add the w command at the end to save the file after editing.

#!/bin/sh
ed file1 <<EOF
4c
fourth line
.
1c
very first line
.
w
EOF


This is quite a convenient way to transform a file into another file without pushing the entire file, and it can be used in a deployment script. It is more precise and less error-prone than a sed command.
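The generate-and-apply steps can also be combined into a single pipeline; a small sketch with throwaway file names:

```shell
# Make an "old" and a "new" version of a file.
printf 'one\ntwo\nthree\n' > /tmp/old.txt
printf 'one\n2\nthree\n'   > /tmp/new.txt

# diff -e emits ed commands turning old.txt into new.txt; we append the
# "w" command so ed saves the result, then apply everything with ed.
(diff -e /tmp/old.txt /tmp/new.txt; echo w) | ed -s /tmp/old.txt

# old.txt now matches new.txt.
cat /tmp/old.txt
```

In a real deployment the diff would be generated once and shipped to the remote host, where only the short ed script needs to be applied.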

In the same way, we can use ed to alter a configuration file by writing instructions without using diff(1). The following script will change the first line containing “Port 22” into “Port 2222” in /etc/ssh/sshd_config.

#!/bin/sh
ed /etc/ssh/sshd_config <<EOF
/Port 22
c
Port 2222
.
w
EOF


The sed(1) equivalent would be:

sed -i'' 's/.*Port 22.*/Port 2222/' /etc/ssh/sshd_config


Both programs have their use, pros and cons. The most important is to use the right tool for the right job.

# Play Stardew Valley on OpenBSD

Written by Solène, on 09 November 2018.
Tags: #games #openbsd66

Comments on Mastodon

It’s possible to play native Stardew Valley on OpenBSD, and it’s not using a weird trick!

First, you need to buy Stardew Valley; it’s not very expensive and is often available at a lower price. I recommend buying it on GOG.

Now, follow the steps:

1. install packages unzip and fnaify
2. On GOG, download the linux installer
3. unzip the installer (use unzip command on the .sh file)
4. cd into data/noarch/game
5. fnaify -y
6. ./StardewValley

Enjoy!

# Safely restrict commands through SSH

Written by Solène, on 08 November 2018.
Tags: #ssh #security #openbsd66 #highlight

Comments on Mastodon

sshd(8) has a very nice feature that is often overlooked. That feature is the ability to allow a ssh user to run a specified command and nothing else, not even a login shell.

This is really easy to use and the magic happens in the file authorized_keys which can be used to restrict commands per public key.

For example, if you want to allow someone to run the “uptime” command on your server, you can create a user account for that person with no password (so password login is disabled), and add his/her ssh public key to ~/.ssh/authorized_keys of that new user, with the following content.

restrict,command="/usr/bin/uptime" ssh-rsa the_key_content_here


The user will not be able to log in, and running the command ssh remoteserver will return the output of uptime. There is no way to escape this.
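When the forced command must accept arguments, a common variation (sketched below with made-up rules, not taken from this article) is to point command= at a wrapper script that inspects the SSH_ORIGINAL_COMMAND environment variable, which sshd fills with whatever the client asked to run, and rejects anything not on a whitelist:

```shell
# Hypothetical forced-command wrapper: allow only whitelisted commands.
# A real wrapper would run the allowed command; this sketch only prints
# the decision so the logic is easy to follow.
filter_command() {
    case "${1%% *}" in              # look at the first word only
        uptime|date)
            echo "allowed: $1" ;;
        *)
            echo "rejected: $1" >&2
            return 1 ;;
    esac
}

filter_command "uptime"
filter_command "rm -rf /" || true   # rejected with exit status 1
```

In the authorized_keys entry, such a script would be invoked with "$SSH_ORIGINAL_COMMAND" as its argument.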

While running uptime is not really useful, this can be used for a much more interesting use case, like allowing remote users to use vmctl without giving them a shell account. The vmctl command requires parameters, so the configuration will be slightly different.

restrict,pty,command="/usr/sbin/vmctl $SSH_ORIGINAL_COMMAND" ssh-rsa the_key_content_here

The variable SSH_ORIGINAL_COMMAND contains the value of what is passed as parameters to ssh. The pty keyword also makes an appearance; it will be explained later. If the user connects with ssh and no parameters, the vmctl usage will be displayed.

$ ssh remotehost
usage:  vmctl [-v] command [arg ...]
vmctl console id
vmctl create "path" [-b base] [-i disk] [-s size]
vmctl load "path"
vmctl log [verbose|brief]
vmctl reload
vmctl reset [all|vms|switches]
vmctl show [id]
vmctl start "name" [-Lc] [-b image] [-r image] [-m size]
[-n switch] [-i count] [-d disk]* [-t name]
vmctl status [id]
vmctl stop [id|-a] [-fw]
vmctl pause id
vmctl unpause id
vmctl send id
vmctl receive id


If you pass parameters to ssh, it will be passed to vmctl.

$ ssh remotehost show
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
    1     -     1    1.0G       -       -       solene test
$ ssh remotehost start test
vmctl: started vm 1 successfully, tty /dev/ttyp9
$ ssh -t remotehost console test
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?

The ssh connection becomes a call to vmctl, and the ssh parameters become vmctl parameters. Note that in the last example I used “ssh -t”; this is to force the allocation of a pseudo-tty device, which is required for vmctl console to get a fully working console. The restrict keyword does not allow pty allocation, and that is why we have to add pty after restrict, to allow it.

# Tor part 4: run a relay

Written by Solène, on 08 November 2018.
Tags: #unix #tor

Comments on Mastodon

In this fourth Tor article, I will quickly cover how to run a Tor relay; the Tor project already has a very nice and up-to-date Guide for setting up a relay. Those relays are what make Tor usable: with more relays, Tor gets more bandwidth and makes you harder to trace, because that means more traffic to analyze.

A relay server can be an exit node, which will relay Tor traffic to the outside. This implies a lot of legal issues; the Tor project foundation offers to help you if your exit node gets you into trouble. Remember that being an exit node is optional.

Most relays are not exit nodes. They will either relay traffic between relays, or become a guard, which is an entry point to the Tor network. The guard receives the request over the non-Tor network and sends it to the next relay of the user’s circuit.

Running a relay requires a lot of CPU (capable of some crypto) and a huge amount of bandwidth: at least 10 Mb/s is the minimal requirement. If you have less, you can still run a bridge with obfs4, but I won’t cover it here.

When running a relay, you will be able to set a daily/weekly/monthly traffic limit, so your relay will stop relaying when it reaches the quota. It’s quite useful if you don’t have unmetered bandwidth; you can also limit the bandwidth allowed to Tor.
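As an illustration, a non-exit relay with bandwidth caps and a monthly traffic quota could be configured in torrc roughly like this (the nickname and all values are arbitrary examples; check the official guide for current options):

```
# Hypothetical torrc excerpt for a non-exit relay.
ORPort 9001
Nickname myrelay
# Sustained rate and short burst allowed to Tor.
RelayBandwidthRate 1 MBytes
RelayBandwidthBurst 2 MBytes
# Stop relaying after 200 GB of traffic per period, starting on the 1st.
AccountingStart month 1 00:00
AccountingMax 200 GBytes
# Do not act as an exit node.
ExitRelay 0
```

When the accounting quota is reached, tor hibernates until the next period instead of relaying traffic.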
To get real-time information about your relay, the software Nyx (net/nyx) is a top-like front end for Tor which shows CPU usage, bandwidth, connections and logs in real time.

The awesome Official Tor guide

# File versioning with rcs

Written by Solène, on 31 October 2018.
Tags: #openbsd66 #highlight #unix

Comments on Mastodon

In this article I will present the rcs tools, and we will use them for versioning files in /etc to track changes between edits. These tools are part of the OpenBSD base install.

## Prerequisites

You need to create an RCS folder where your files are, so the file versions will be saved in it. I will use /etc in the examples; you can adapt them to your needs.

# cd /etc
# mkdir RCS

The following examples use the command ci -u; the reason for the -u flag will be explained later.

## Tracking a file

We need to add a file to the RCS directory so we can track its revisions. Each time we proceed, we will create a new revision of the file which contains the whole file at that point in time. This allows us to see the changes between revisions, the date of each revision, and some other information. I really recommend tracking the files you edit in your system, or even configuration files in your user directory.

In the next example, we will create the first revision of our file with ci, and we will have to write a short message about it, like what the file is for. Once the message is written, we validate it with a single dot on its own line.

# cd /etc
# ci -u fstab
fstab,v  <--  fstab
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> this is the /etc/fstab file
>> .
initial revision: 1.1
done

## Editing a file

The editing process has multiple steps, using ci and co:

1. check out the file and lock it; this makes the file available for writing and prevents using co on it again (due to the lock)
2. edit the file
3.
commit the new file + checkout

When using ci to store the new revision, we need to write a small message; try to make it clear and short. The log messages can be seen in the file history, and they should help you know which change was made and why. The full process is shown in the following example.

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v  <--  fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
revision 1.4 (unlocked)
done

## View changes since last version

Continuing the previous example, we will use rcsdiff to check the changes since the last revision.

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# rcsdiff -u fstab
--- fstab	2018/10/28 14:28:29	1.1
+++ fstab	2018/10/28 14:30:41
@@ -9,3 +9,4 @@
 52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong

The -u flag produces a unified diff, which I find easier to read. Lines starting with + show additions and lines starting with - show deletions (there are none in this example).

## Use of ci -u

The examples were using ci -u; this is because if you use plain ci some_file, the file is saved in the RCS folder but goes missing from its place, and you have to use co some_file to get it back (in read-only mode).

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# ci fstab
RCS/fstab,v  <--  fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
done
# ls fstab
ls: fstab: No such file or directory
# co fstab
RCS/fstab,v  -->  fstab
revision 1.5
done
# ls fstab
fstab

Using ci -u is very convenient because it prevents the user from forgetting to check out the file after committing the changes.

## Show existing revisions of a file

# rlog fstab
RCS file: RCS/fstab,v
Working file: fstab
head: 1.2
branch:
locks: strict
access list:
symbolic names:
keyword substitution: kv
total revisions: 2;	selected revisions: 2
description:
new file
----------------------------
revision 1.2
date: 2018/10/28 14:45:34;  author: solene;  state: Exp;  lines: +1 -0;
Adding a disk
----------------------------
revision 1.1
date: 2018/10/28 14:45:18;  author: solene;  state: Exp;
Initial revision
=============================================================================

We have revisions 1.1 and 1.2. If we want to display the file as it was in revision 1.1, we can use the following command:

# co -p1.1 fstab
RCS/fstab,v  -->  standard output
revision 1.1
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
done

Note that there is no space between the flag and the revision! This is required.

We can see that the command printed some extra information about the file and “done” at the end. This extra information is sent to stderr, while the actual file content is sent to stdout. That means if we redirect stdout to a file, we get only the file content.
# co -p1.1 fstab > a_file
RCS/fstab,v  -->  standard output
revision 1.1
done
# cat a_file
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2

## Show a diff of a file since a revision

We can use rcsdiff with the -r flag to show the changes between the current version and a specific revision.

# rcsdiff -u -r1.1 fstab
--- fstab	2018/10/29 14:45:18	1.1
+++ fstab	2018/10/29 14:45:34
@@ -9,3 +9,4 @@
 52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong

# Configure OpenSMTPD to relay on a network

Written by Solène, on 29 October 2018.
Tags: #openbsd66 #highlight #opensmtpd

Comments on Mastodon

With the new OpenSMTPD syntax change which landed with the OpenBSD 6.4 release, changes are needed to make opensmtpd act as a LAN relay to an smtp server. This case wasn’t covered in my previous article about opensmtpd; I only wrote about relaying from the local machine, not for a network. Mike (a reader of the blog) shared that it would be nice to have an article about it. Here it is!
A simple configuration would look like the following:

    listen on em0
    listen on lo0

    table aliases db:/etc/mail/aliases.db
    table secrets db:/etc/mail/secrets.db

    action "local" mbox alias <aliases>
    action "relay" relay host smtps://myrelay@remote-smtpd.tld auth <secrets>

    match for local action "local"
    match from local for any action "relay"
    match from src 192.168.1.0/24 for any action "relay"

The daemon will listen on the em0 interface, and mail received from the network will be relayed to remote-smtpd.tld.

For a relay using authentication, the login and password must be defined in the file /etc/mail/secrets like this:

    myrelay login:Pa$$W0rd

smtpd.conf(5) explains the creation of /etc/mail/secrets like this:

    touch /etc/mail/secrets
    chmod 640 /etc/mail/secrets
    chown root:_smtpd /etc/mail/secrets

# Tor part 3: Tor Browser

Written by Solène, on 24 October 2018.
Tags: #openbsd66 #openbsd #unix #tor

Comments on Mastodon

In this third Tor article, we will discover the web browser Tor Browser.

The Tor Browser is an official Tor project. It is a modified Firefox with some default settings changed and some extensions added. The changes are all related to privacy and anonymity. It has been made to let you browse the Internet through Tor without leaving behind any information which could help identify you, because there is much more information than your public IP address that could be used against you. It requires the tor daemon to be installed and running, as covered in my first Tor article. Using it is really straightforward.

#### How to install tor-browser

    pkg_add tor-browser

#### How to start tor-browser

    tor-browser

It will create a ~/TorBrowser-Data folder at launch. You can remove it whenever you want; it doesn't contain anything sensitive, but it is required for the browser to work.

# Show OpenSMTPD queue and force sending queued mails

Written by Solène, on 24 October 2018.
Tags: #opensmtpd #highlight #openbsd66 #openbsd

Comments on Mastodon

If you are using OpenSMTPD on a device not always connected to the Internet, you may want to see which mails were not delivered, and force them to be delivered NOW when you are finally connected to the Internet. We can use smtpctl to show the current queue.

    doas smtpctl show queue
    1de69809e7a84423|local|mta|auth|so@tld|dest@tld|dest@tld|1540362112|1540362112|0|2|pending|406|No MX found for domain

The previous command will report nothing if the queue is empty. In the output above, we see that there is one mail from me to dest@tld which is pending due to “No MX found for domain” (which is normal, as I had no Internet access when I sent the mail).

We need to extract the first field, which is 1de69809e7a84423 in the current example. In order to tell OpenSMTPD to deliver it now, we will use the following command:

    doas smtpctl schedule 1de69809e7a84423
    1 envelope scheduled
    doas smtpctl show queue

My mail was delivered; it's not in the queue anymore.

If you wish to deliver all envelopes in the queue, this is as simple as:

    doas smtpctl schedule all

# New cl-yag version

Written by Solène, on 12 October 2018.
Tags: #cl-yag #unix

Comments on Mastodon

My website/gopherhole static generator cl-yag has been updated today and sees its first release!

The new feature added today is that the gopher output now supports an index menu of tags, and a menu for each tag displaying the articles tagged with it. The gopher output was a bit of a second-class citizen before this, only listing articles.

The new release v1.00 can be downloaded here (sha512 sum 53839dfb52544c3ac0a3ca78d12161fee9bff628036d8e8d3f54c11e479b3a8c5effe17dd3f21cf6ae4249c61bfbc8585b1aa5b928581a6b257b268f66630819). The code can be cloned with git: git://bitreich.org/cl-yag

# Tor part 2: hidden service

Written by Solène, on 11 October 2018.
Tags: #openbsd66 #openbsd #unix #tor #security

Comments on Mastodon

In this second Tor article, I will present an interesting Tor feature named hidden services. The principle of a hidden service is to make a network service reachable from anywhere; the only prerequisites are that the computer is powered on, Tor is not blocked, and it has network access. The service is made available through an address that discloses nothing about the server's Internet provider or its IP; instead, a hostname ending in .onion is provided by Tor for connecting. A hidden service is only accessible through Tor.

There are a few advantages of using hidden services:

• privacy, the hostname doesn't contain any hint
• security, secure access to a remote service without using SSL/TLS
• no need to run some kind of dynamic DNS updater

The drawback is that it's quite slow and it only works for TCP services.

From here, we assume that Tor is installed and working. Running a hidden service requires modifying the Tor daemon configuration file, located at /etc/tor/torrc on OpenBSD.

Add the following lines to the configuration file to enable a hidden service for SSH:

    HiddenServiceDir /var/tor/ssh_service
    HiddenServicePort 22 127.0.0.1:22

The directory /var/tor/ssh_service will be created. The directory /var/tor is owned by the user _tor and not readable by other users. The hidden service directory can be named as you want, but it should be owned by the user _tor with restricted permissions. The Tor daemon will take care of creating the directory with correct permissions once you reload it.

Now you can reload the tor daemon to make the hidden service available.

    doas rcctl reload tor

In the /var/tor/ssh_service directory, two files are created. What we want is the content of the file hostname, which contains the hostname to reach our hidden service.

    doas cat /var/tor/ssh_service/hostname
    piosdnzecmbijclc.onion

Now, we can use the following command to connect to the hidden service from anywhere.
    torsocks ssh piosdnzecmbijclc.onion

In the Tor network, this feature doesn't use an exit node. Hidden services can be used for various services: http, imap, ssh, gopher etc. Using a hidden service isn't illegal, nor does it make your computer relay Tor traffic; as previously, just check whether you can use Tor on your network.

Note: it is possible to have a version 3 .onion address, which prevents hostname collisions, but this produces very long hostnames. This can be done like in the following example:

    HiddenServiceDir /var/tor/ssh_service
    HiddenServicePort 22 127.0.0.1:22
    HiddenServiceVersion 3

This will produce a really long hostname like tgoyfyp023zikceql5njds65ryzvwei5xvzyeubu2i6am5r5uzxfscad.onion

If you want both the short and the long hostnames, you need to specify the hidden service twice, with different folders.

Take care: if you run an SSH service on your public website and use this same SSH daemon for the hidden service, the host keys will be the same, implying that someone could theoretically associate both and know that this public IP runs this hidden service, breaking anonymity.

# Tor part 1: how-to use Tor

Written by Solène, on 10 October 2018.
Tags: #openbsd66 #openbsd #unix #tor #security

Comments on Mastodon

Tor is a network service that hides your traffic. People sniffing your network will not be able to know which server you reach, and people on the remote side (like the administrator of a web service) will not know where you are from. Tor helps keep your anonymity and privacy.

To make it quick: Tor makes use of an entry point that you reach directly, then servers acting as relays which cannot decrypt the data they forward, up to an exit node which makes the real request for you; the network response travels the opposite way. You can find more details on the Tor project homepage.

Installing tor is really easy on OpenBSD. We need to install it and start its daemon. The daemon will listen by default on localhost on port 9050.
On other systems, it may be quite similar: install the tor package and enable the daemon if it is not enabled by default.

    # pkg_add tor
    # rcctl enable tor
    # rcctl start tor

Now, you can take your favorite program, look at the proxy settings and choose “SOCKS” proxy, v5 if possible (v5 handles the DNS queries), and use the default address 127.0.0.1 with port 9050.

If you need to use tor with a program that doesn't support setting a SOCKS proxy, it's still possible to use torsocks to wrap it, which works with most programs. It is very easy to use.

    # pkg_add torsocks
    torsocks ssh remoteserver

This will make ssh go through the Tor network. Using Tor won't make you relay anything, and is legal in most countries. Tor is like a VPN; some countries have laws about VPNs, so check your country's laws if you plan to use Tor. Also, note that using Tor may be forbidden on some networks (companies, schools etc.) because it allows escaping filtering, which may be against some kind of “usage agreement” for the network.

I will cover the relaying part later, which can lead to legal uncertainty.

Note: as torsocks is a bit of a hack, because it uses LD_PRELOAD to wrap network system calls, there is a cleaner way to do it with ssh (or any program supporting a custom command to initialize the connection) using netcat.

    ssh -o ProxyCommand='/usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p' address.onion

This can be simplified by adding the following lines to your ~/.ssh/config file, in order to automatically use the proxy command when you connect to a .onion hostname:

    Host *.onion
        ProxyCommand /usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p

This netcat command is tested under OpenBSD; there are different netcat implementations, so the flags may differ or may not even exist.

# Add a new OpenBSD partition from unused space

Written by Solène, on 20 September 2018.
Tags: #openbsd66 #openbsd #highlight

Comments on Mastodon

The default OpenBSD partition layout uses a pre-defined template.
If you have a disk larger than 356 GB you will have unused space with the default layout (346 GB before 6.4). It's possible to create a new partition to use that space if you did not modify the default layout at installation. You only need to start disklabel with the -E flag and type a to add a partition; the defaults will use all remaining space for the partition.

    # disklabel -E sd0
    Label editor (enter '?' for help at any prompt)
    > a
    partition: [m]
    offset: [741349952]
    size: [258863586]
    FS type: [4.2BSD]
    > w
    > q
    No label changes.

The new partition here is m. We can format it with:

    # newfs /dev/rsd0m

Then, you should add it to your /etc/fstab; for that, use the same disklabel DUID as the other partitions, which would look something like 52fdd1ce48744600:

    52fdd1ce48744600.m /data ffs rw,nodev,nosuid 1 2

It will be auto-mounted at boot; you only need to create the folder /data. Now you can do

    # mkdir /data
    # mount /data

and /data is usable right now. You can read disklabel(8) and newfs(8) for more information.

# Display the size of installed packages ordered by size

Written by Solène, on 11 September 2018.
Tags: #openbsd66 #openbsd #highlight

Comments on Mastodon

A simple command line to display your installed packages listed by size, from smallest to biggest:

    pkg_info -sa | paste - - - - | sort -n -k 5

Thanks to sthen@ for the command; I was previously using one involving awk which was less readable. paste is often forgotten; it has very specific uses which can't be mimicked easily with other tools. Its purpose is to join multiple lines into one following some specific rules.

You can easily modify the output to convert the size from bytes to megabytes with awk:

    pkg_info -sa | paste - - - - | sort -n -k 5 | awk '{ $NF=$NF/1024/1024 ; print }'

This divides the last field of each line (using the space separator) twice by 1024 and displays the line.

# News about the blog

Written by Solène, on 11 September 2018.
Tags: #highlight

Comments on Mastodon

Today I will write about my blog itself.
While I started it as my own documentation for some specific things I always forget (like “How to add a route through a specific interface on FreeBSD”) or to publish my dotfiles, I enjoyed it and wanted to share about some specific topics. Then I started the “port of the week” series, but as time goes by, I find fewer of those programs and so I don't always have something to write about.

Then, as I run multiple servers, sometimes when I feel that the way I did something is clean and useful, I share it here; while it is a reminder for me, I also write it to be helpful for others. Doing things right is time consuming, but I always want to deliver a polished write-up. In my opinion, doing things right includes the following:

• explain why something is needed
• explain code examples
• give hints about potential traps
• say where to look for official documentation
• provide environment information like the operating system version used at writing time
• make the reader think and get inspired instead of providing material ready to be copy/pasted brainlessly

I try to stay as close as possible to those guidelines. I even update my previous articles from time to time to check that they still work on the latest operating system version, so the content stays relevant. And until it's updated, having the system version lets the reader think “oh, it may have changed” (or not, but then it becomes the reader's problem).

Now, I want to share about some OpenBSD-specific features, in a way that highlights them. In OpenBSD everything is documented correctly, but as a human, one can't read and understand every man page to know what is possible. Here come the highlighting articles, trying to show features, how to use them and where they are documented.

I hope you, reader, like what I write. I have been writing here for two years and I still like it.

# Manage “nice” priority of daemons on OpenBSD

Written by Solène, on 11 September 2018.
Tags: #openbsd66 #openbsd #highlight

Comments on Mastodon

Following a discussion on the OpenBSD mailing list misc, today I will write about how to manage the priority (as in nice priority) of your daemons or services.

In the man page rc(8), one can read:

    Before init(8) starts rc, it sets the process priority, umask, and
    resource limits according to the "daemon" login class as described in
    login.conf(5).  It then starts rc and attempts to execute the sequence
    of commands therein.

Using /etc/login.conf we can manage some limits for services and daemons, using their rc script name. For example, to run jenkins at the lowest priority (so it doesn't cause trouble when it builds), this line will set it to nice 20:

    jenkins:priority=20

If you have a file /etc/login.conf.db, you have to update it from /etc/login.conf using the cap_mkdb tool. This creates a hashed database for faster information retrieval when the file is big. By default, that file doesn't exist and you don't have to run cap_mkdb. See login.conf(5) for more information.

# Configuration of OpenSMTPD to relay mails to outbound smtp server

Written by Solène, on 06 September 2018.
Tags: #openbsd66 #openbsd #opensmtpd #highlight

Comments on Mastodon

In this article I will show how to configure OpenSMTPD, the default mail server on OpenBSD, to relay mail sent locally to your SMTP server. In practice, this allows sending mail through “localhost” via the right relay, so it also makes it possible to queue mail even when your computer isn't connected to the Internet. Once connected, OpenSMTPD will send the mails. All you need to understand the configuration and write your own is in the man page smtpd.conf(5). This is only a highlight of what is possible and how to achieve it.

In the OpenBSD 6.4 release, the configuration of OpenSMTPD changed drastically: now you have to define rules matching mails, and the actions to run when a mail matches.
In the following example, we will see two kinds of relay. The first goes through SMTP over the Internet; it's the one you will most likely want to set up. The other one shows how to relay to a remote server that doesn't allow relaying from outside.

/etc/mail/smtpd.conf

    table aliases file:/etc/mail/aliases
    table secrets file:/etc/mail/secrets

    listen on lo0

    action "local" mbox alias <aliases>
    action "relay" relay
    action "myserver" relay host smtps://myrelay@perso.pw auth <secrets>
    action "openbsd" relay host localhost:2525

    match mail-from "@perso.pw" for any action "myserver"
    match mail-from "@openbsd.org" for any action "openbsd"
    match for local action "local"
    match for any action "relay"

I defined two actions. One is “myserver”: its relay URL carries the label “myrelay”, and auth <secrets> tells OpenSMTPD it needs authentication. The other action is “openbsd”: it will only relay to localhost on port 2525.

To use them, I define two matching rules of the very same kind. If the mail I want to send matches the @domain-name, then the relay “myserver” or “openbsd” is chosen. The “openbsd” relay is only available when I create an SSH tunnel, binding the local port 25 of the remote server to my port 2525, with the flags -L 2525:127.0.0.1:25.

For a relay using authentication, the login and password must be defined in the file /etc/mail/secrets like this:

    myrelay login:Pa$$W0rd

smtpd.conf(5) explains the creation of /etc/mail/secrets like this:

    touch /etc/mail/secrets
    chmod 640 /etc/mail/secrets
    chown root:_smtpd /etc/mail/secrets

Now, restart your server. Then, when you need to send mails, just use the “mail” command or localhost as an SMTP server. Depending on your From address, a different relay will be used. Deliveries can be checked in the /var/log/maillog log file.

### See mails in queue

    doas smtpctl show queue

### Try to deliver now

    doas smtpctl schedule all

# Automatic switch wifi/ethernet on OpenBSD

Written by Solène, on 30 August 2018.
Tags: #openbsd66 #openbsd #network #highlight

Comments on Mastodon

Today I will cover a specific topic on OpenBSD networking. If you are using a laptop, you may switch from ethernet to wireless network from time to time. There is a simple way to keep the network up instead of having to disconnect / reconnect every time.

It's possible to aggregate your wireless and ethernet devices into one trunk pseudo-device in failover mode, which gives ethernet the priority when it is connected.

To achieve this, it's quite simple. If you have the devices em0 and iwm0, create the following files.

/etc/hostname.em0

    up

/etc/hostname.iwm0

    join "office_network" wpakey "mypassword"
    join "my_home_network" wpakey "9charshere"
    join "roaming phone" wpakey "something"
    join "Public Wifi"
    up

/etc/hostname.trunk0

    trunkproto failover trunkport em0
    trunkport iwm0
    dhcp

As you can see in the wireless device configuration, we can specify multiple networks to join; this is a new feature that will be available starting with the 6.4 release.

You can enable the new configuration by running sh /etc/netstart as root.

This setup is explained in the trunk(4) man page and in the OpenBSD FAQ as well.

# Presenting drist at BitreichCON 2018

Written by Solène, on 21 August 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Still about the bitreich conference 2018: I presented drist, a utility for server deployment (like salt/puppet/ansible…) that I wrote. drist makes deployments easy to understand and easy to extend. Basically, it has 3 steps:

1. copy a local file tree onto the remote server (for deploying files)
2. delete files on the remote server if they are present in a local tree
3. execute a script on the remote server

Each step is run only if the corresponding file/folder exists, and for each step, it's possible to have a general and a per-host setup.

How to fetch drist:

    git clone git://bitreich.org/drist

It was my very first talk in English, please be indulgent.
Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

The Bitreich community is reachable on gopher at gopher://bitreich.org

# Presenting Reed-alert at BitreichCON 2018

Written by Solène, on 20 August 2018.
Tags: #unix

Comments on Mastodon

As the author of the reed-alert monitoring tool, I spoke about my software at the bitreich conference 2018. As a quick intro: reed-alert is a program that notifies you when something is wrong on your server; it's fully customizable and really easy to use.

How to fetch reed-alert:

    git clone git://bitreich.org/reed-alert

It was my very first talk in English, please be indulgent.

Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

The Bitreich community is reachable on gopher at gopher://bitreich.org

# Generate qrcode using command line

Written by Solène, on 14 July 2018.
Tags: #unix

Comments on Mastodon

If you need to generate a QR code picture using a command line tool, I would recommend libqrencode.

    qrencode -o file.png 'some text'

It's also possible to display the QR code inside the terminal with the following command:

    qrencode -t ANSI256 'some text'

Official qrencode website

# Tmux mastery

Written by Solène, on 05 July 2018.
Tags: #unix #shell

Comments on Mastodon

Tips for using Tmux more efficiently.

### Enter copy-mode

By default Tmux uses the emacs key-bindings. To make a selection you need to enter copy-mode by pressing Ctrl+b and then [, with Ctrl+b being the tmux prefix key; if you changed it, do the replacement while reading. If you need to quit copy-mode, type Ctrl+C.

### Make a selection

While in copy-mode, move to the start or end position of your selection, then press Ctrl+Space to start selecting. Now, move your cursor to select the text and press Ctrl+w to validate.

### Paste a selection

When you want to paste your selection, press Ctrl+b ] (you should not be in copy-mode for this!).
### Make a rectangle selection

If you want to make a rectangular selection, press Ctrl+Space to start and, immediately after, press R (capital R); then move your cursor and validate with Ctrl+w.

### Output the buffer to the X buffer

Make a selection to put the content in the tmux buffer, then type:

    tmux save-buffer - | xclip

You may want to look at the xclip (it's a package) man page.

### Output the buffer to a file

    tmux save-buffer file

### Load a file into the buffer

It's possible to load the content of a file into the buffer for pasting it somewhere:

    tmux load-buffer file

You can also load the output of a command into the buffer, using a pipe and - as the file, like in this example:

    echo 'something very interesting' | tmux load-buffer -

### Display the battery percentage in the status bar

If you want to display your battery percentage and update it every 40 seconds, you can add the two following lines in ~/.tmux.conf:

    set status-interval 40
    set -g status-right "#[fg=colour155]#(apm -l)%% | #[fg=colour45]%d %b %R"

This example works on OpenBSD using the apm command. You can reuse it to display other information.

# Writing an article using mdoc format

Written by Solène, on 03 July 2018.
Tags: #unix

Comments on Mastodon

I had never written a man page. I had already had to look at the source of a man page, but I barely understood what happened there. As I like having fun and discovering new things (people have been calling me a hipster these days ;-) ), I modified cl-yag (the website generator used for this website) to be produced only from mdoc files. The output was not very nice as it had too many html items (classes, attributes, tags etc…). The result wasn't that bad, but it looked like concatenated man pages.

I actually enjoyed playing with the mdoc format (the man page format on OpenBSD; I don't know if it's used somewhere else). While it's pretty verbose, it allows separating the formatting from the paragraphs.
As I have been playing with the ed editor these days, it is easier to have an article written with small pieces of lines rather than one big paragraph including the formatting.

Finally, I succeeded at writing a command line which produces usable html output, used as a converter in cl-yag. Now, I'll be able to write my articles in the mdoc format if I want :D (which is fun). The convert command is really ugly, but it actually works, as you can see if you read this.

    cat data/%IN | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT


The trick here was to use markdown as an intermediate format between mdoc and html. As markdown is very weak compared to html (in terms of possibilities), it will only use simple tags when formatting the html output. The sed command is needed to delete from the mandoc output the man page title at the top and the operating system at the bottom.
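The effect of those two sed expressions can be seen on a small stand-in for the mandoc output (the sample lines are made up for the demo):

```shell
# '1,2d' drops the first two lines (the man page title block),
# '$d' drops the last line (the operating system footer)
printf 'TITLE(7)  Misc  TITLE(7)\n\nactual article body\nOpenBSD 6.3\n' \
    | sed -e '1,2d' -e '$d'
# actual article body
```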

By having played with this, writing a man page is less obscure to me and I have a new unusual format to use for writing my articles. Maybe unusual for this use case, but still very powerful!

# Trying to move away from emacs

Written by Solène, on 03 July 2018.
Tags: #unix #emacs

Comments on Mastodon

Hello

Today I will write about my current process of trying to get rid of emacs. I use it extensively with org-mode for taking notes and making them into an agenda/todo-list; this has helped me a lot to remember tasks to do and what people tell me. I also use it for editing of course, any kind of text or source code. It is usually the editor I use for writing the blog articles that you can read here; this one is written using ed. I also read my emails in emacs with mu4e (whose latest version no longer works on powerpc, due to a C++14 feature used and no compiler available on powerpc to compile it…).

While I like Emacs, I never liked using one big tool for everything. My current quest is to look for a portable and efficient way to replace the different parts of Emacs I use. I will not stop using Emacs if the replacements are not good enough to do the job.

So, I identified my Emacs uses:

• todo-list / agenda / taking notes
• writing code (perl, C, php, Common LISP)
• IRC
• mails
• writing texts
• playing chess by mail
• jabber client

I will try for each topic to identify alternatives and challenge them to Emacs.

## Todo-list / Agenda / Notes taking

This is the most important part of my emacs use and it is the one I would really like to get out of Emacs. What I need is: writing quickly a task, add a deadline to it, add explanations or a description to it, be able to add sub-tasks for a task and be able to display it correctly (like in order of deadline with days / hours before deadline).

I am trying to convert my current todo-list to taskwarrior, the learning curve is not easy but after spending one hour playing with it while reading the man page, I have understood enough to replace org-mode with it. I do not know if it will be as good as org-mode but only time will let us know.

By the way, I found vit, a ncurses front-end for taskwarrior.

## Writing code

Actually, Emacs is a good editor. It supports syntax coloring, it can evaluate regions of code (depending on the language), the editing is nice etc… I discovered jed, an emacs-like editor written in C plus libslang; it's stable and light while providing more features than the mg editor (available in the OpenBSD base installation).

While I am currently playing with ed for some reason (I will certainly write about it), I am not sure I could use it for writing software from scratch.

## IRC

There are lots of different IRC clients around; I just need to pick one.

## Mails

I really enjoy using mu4e; I can find my mails easily with it, and the query system is very powerful and interesting. I don't know what I could use to replace it. I used alpine some time ago, and I tried mutt before mu4e and did not like it. I have heard about some tools to manage a maildir folder using unix commands; maybe I should try that. I have not done any research on this topic yet.

## Writing text

For writing plain text like my articles, or when using $EDITOR for different tasks, I think that ed will do the job perfectly :-) There is ONE feature I really like in Emacs, but I think it's really easy to recreate with a script: the function bound to M-q which wraps a text to the correct column number!

Update: meanwhile, I wrote a little perl script using the Text::Wrap module available in base Perl. It wraps to 70 columns. It could be extended to fill blanks or add a character at the first line of a paragraph.

#!/usr/bin/env perl
use strict;
use warnings;
use Text::Wrap qw(wrap $columns);

$columns = 70;
open(my $in, '<', $ARGV[0]) or die "cannot open $ARGV[0]: $!";
my @file = <$in>;
print wrap("", "", @file);


This script does not modify the file itself though.

Some people pointed out to me that Perl was too much for this task. I have been told about Groff or Par to format my files.

Finally, I found a very BARE way to handle this. As I write my text with ed, I added a new alias named “ruled” which spawns ed with a prompt of 70 “#” characters, so I have a ruler each time ed displays its prompt!!! :D

It looks like this for the last paragraph:

###################################################################### c
been told about Groff or Par to format my files.

Finally, I found a very **BARE** way to handle this. As I write my
text with ed, I added an new alias named "ruled" with spawn ed with a
prompt of 70 characters #, so I have a rule each time ed displays its
prompt!!! :D
.
###################################################################### w


Obviously, this way of proceeding only works when writing the content in the first place. If I need to edit a paragraph, I will need a tool to format my document correctly again.
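For re-wrapping an existing paragraph, the standard fmt(1) utility can do this from inside ed or the shell; a quick demo at width 20 (note that flag syntax differs between implementations, the GNU form is shown here while BSD fmt takes the width as a plain argument):

```shell
# re-wrap an already-written line so no output line exceeds 20 columns
printf 'aa bb cc dd ee ff gg hh ii jj\n' | fmt -w 20
```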

## Jabber client

Using jabber inside Emacs is not a very good experience. I switched to profanity (featured some time ago on this blog).

## Playing Chess

Well, I stopped playing chess by mail; I have been waiting for my opponent to play his turn for two years now. We were exchanging the notation of the whole game in each mail, adding our move each time, and I was doing the rendering in Emacs, but I do not remember exactly why I had problems with this (replaying the string).

# Easy encrypted backups on OpenBSD with base tools

Written by Solène, on 26 June 2018.
Tags: #unix #openbsd66 #openbsd

Comments on Mastodon

# Old article

Hello, it turned out that this article is obsolete. The security used in it is not safe at all, so the goal of this backup system isn't achievable; thus it should not be used, and I need another backup system.

One of the most important features of dump for me was keeping track of the inode numbers. A solution is to save the list of the inode numbers and their paths in a file before doing a backup. This can be achieved with the following command.

$ doas ncheck -f "\I \P\n" /var

If you need a backup tool, I would recommend the following:

# Duplicity

It supports remote backends like ftp/sftp, which is quite convenient as you don't need any configuration on the other side. It supports compression and incremental backup. I think it has some GUI tools available.

# Restic

It supports remote backends like cloud storage providers or sftp; it doesn't require any special tool on the remote side. It supports deduplication of the files and is able to manage multiple hosts in the same repository. This means that if you back up multiple computers, deduplication will work across them. This is the only backup software I know allowing this (I do not count backuppc, which I find really unusable).

# Borg

It supports a remote backend over ssh, but only if borg is installed on the other side. It supports compression and deduplication, but it is not possible to save multiple hosts inside the same repository without doing a lot of hacks (which I won't recommend).

# Change default application for xdg-open

Written by Solène, on 25 June 2018.
Tags: #unix

Comments on Mastodon

I write this as a note for myself, and if it can help some other people, that's fine.

To change the program used by xdg-open to open some kind of file, it's not that hard. First, check the type of the file:

$ xdg-mime query filetype file.pdf
application/pdf


Then, choose the right tool for handling this type:

$ xdg-mime default mupdf.desktop application/pdf

Honestly, having firefox opening PDF files with GIMP IS NOT FUN.

# Share a tmux session with someone with tmate

Written by Solène, on 01 June 2018.
Tags: #unix

Comments on Mastodon

New port of the week, and it's about tmate. If you ever wanted to share a terminal with someone without opening remote access to your computer, tmate is the right tool for this.

Once started, tmate creates a new tmux instance connected through the tmate public server. By typing tmate show-messages you will get URLs for read-only or read-write links to share with someone, over ssh or a web browser. Don't forget to type clear to hide the URLs after typing show-messages, otherwise viewers will have access to the write URL (and that's not something you want).

If you don't like the need for a third party, you can set up your own server, but we won't cover that in this article. When you want to end the sharing, just exit the tmux opened by tmate.

If you want to install it on OpenBSD, just type pkg_add tmate and you are done. I think it's available on most unix systems.

There is not much more to say about it: it's great, simple, and works out-of-the-box with no configuration needed.

# Deploying cron programmatically the unix way

Written by Solène, on 31 May 2018.
Tags: #unix

Comments on Mastodon

Here is a little script to automate, in some way, your crontab deployment when you don't want to use a configuration tool like ansible/salt/puppet etc… It lets you package, in your project, a file containing the crontab content you need, and it will add or update your crontab with that file.

The script works this way:

$ ./install_cron crontab_solene


with the crontab_solene file being a valid crontab, which could look like this:

## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##


Then it will include the file into my current user's crontab; the TAG markers in the file are there so the block can be removed and replaced later with a new version. The script could easily be modified to take the tag name as a parameter, if you have multiple deployments using the same user on the same machine.

Example:

$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20

$ ./install_cron crontab_solene
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##

If I add to crontab_solene the line 0 20 * * * ~/bin/faubackup.sh I can now reinstall the crontab file.

$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##
$ ./install_cron crontab_solene
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
0 20 * * * ~/bin/faubackup.sh
## END_TAG ##


Here is the script:

#!/bin/sh

if [ -z "$1" ]; then
    echo "Usage: $0 user_crontab_file"
    exit 1
fi

VALIDATION=0
grep "^## TAG ##$" "$1" >/dev/null
VALIDATION=$?
grep "^## END_TAG ##$" "$1" >/dev/null
VALIDATION=$(( VALIDATION + $? ))

if [ "$VALIDATION" -ne 0 ]
then
    echo "file ./${1} needs \"## TAG ##\" and \"## END_TAG ##\" to be used"
    exit 2
fi

crontab -l | \
    awk '{ if($0=="## TAG ##") { hide=1 }; if(hide==0) { print }; if($0=="## END_TAG ##") { hide=0 }; }' | \
    cat - "${1}" | \
    crontab -
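The awk filter at the heart of the script can be tried on its own; here is a standalone sketch feeding it a fake crontab on stdin (the sample entries are made up):

```shell
# Everything between "## TAG ##" and "## END_TAG ##" (inclusive) is dropped,
# the rest is printed untouched -- exactly what install_cron does before
# appending the new tagged block.
printf '%s\n' 'keep me' '## TAG ##' 'old entry' '## END_TAG ##' 'keep me too' | \
    awk '{ if($0=="## TAG ##") { hide=1 }; if(hide==0) { print }; if($0=="## END_TAG ##") { hide=0 }; }'
```

Only the two "keep" lines survive the filter.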


# Mount a folder on another folder

Written by Solène, on 22 May 2018.
Tags: #openbsd66 #openbsd

Comments on Mastodon

This article will explain quickly how to bind a folder to access it from another path. It can be useful to give access to a specific folder from a chroot without moving or duplicating the data into the chroot.

Real world example: “I want to be able to access my 100GB folder /home/my_data/ from my httpd web server chrooted in /var/www/”.

The trick on OpenBSD is to use NFS on localhost. It’s pretty simple.

# rcctl enable portmap nfsd mountd
# echo "/home/my_data -network=127.0.0.1 -mask=255.255.255.255" > /etc/exports
# rcctl start portmap nfsd mountd
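As a quick sanity check, rcctl's check action reports whether each daemon is actually running:

```shell
# Each command prints the daemon name followed by (ok) or (failed)
rcctl check portmap
rcctl check nfsd
rcctl check mountd
```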


The order is really important. You can check that the folder is available through NFS with the following command:

$ showmount -e
Exports list on localhost:
/home/my_data                      127.0.0.1

If you don't have any line after "Exports list on localhost:", you should kill mountd with pkill -9 mountd and start mountd again. I experienced it twice when starting all the daemons from the same command, but I'm not able to reproduce it. By the way, mountd only supports reload.

If you modify /etc/exports, you only need to reload mountd using rcctl reload mountd.

Once you have checked that everything is alright, you can mount the exported folder on another folder with the command:

# mount localhost:/home/my_data /var/www/htdocs/my_data

You can add the -ro parameter on the export line in the /etc/exports file if you want it to be read-only where you mount it.

Note: On FreeBSD/DragonflyBSD, you can use mount_nullfs /from /to, there is no need to set up a local NFS server. And on Linux you can use mount --bind /from /to and some other ways that I won't cover here.

# Faster SSH with multiplexing

Written by Solène, on 22 May 2018.
Tags: #unix #ssh

Comments on Mastodon

I discovered today an OpenSSH feature which doesn't seem to be widely known. The feature is called multiplexing and consists of reusing an opened ssh connection to a server when you want to open another one. This leads to faster connection establishment and fewer processes running.

To reuse an opened connection, we need to use the ControlMaster option, which requires ControlPath to be set. We will also set ControlPersist for convenience.

• ControlMaster defines whether we create a multiplexer, use an existing one, or do nothing about multiplexing
• ControlPath defines where to store the socket used to reuse an opened connection, this should be a path only available to your user.
• ControlPersist defines how much time to wait before closing a ssh connection multiplexer after all connections using it are closed. By default it's "no", and once you drop all connections the multiplexer stops.
I chose to use the following parameters in my ~/.ssh/config file:

Host *
ControlMaster auto
ControlPath ~/.ssh/sessions/%h%p%r.sock
ControlPersist 60

This requires the ~/.ssh/sessions/ folder to be restricted to my user only. You can create it with the following command:

install -d -m 700 ~/.ssh/sessions

(you can also do mkdir ~/.ssh/sessions && chmod 700 ~/.ssh/sessions but this requires two commands)

The ControlPath variable will create sockets with the name "${hostname}${port}${user}.sock", so it will be unique per remote server.

Finally, I set ControlPersist to 60 seconds, so if I log out from a remote server, I still have 60 seconds to reconnect to it instantly.

Don’t forget that if for some reason the ssh channel handling the multiplexing dies, all the ssh connections using it will die with it.
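When something looks stuck, the master connection can be queried or closed by hand with ssh's -O control commands (the host name here is an example):

```shell
# Is there a live master connection for this host?
ssh -O check myserver.example.com
# Ask the master to exit; the sessions multiplexed over it will be closed too
ssh -O exit myserver.example.com
```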

## Benefits with ProxyJump

Another very useful ssh feature is ProxyJump; it allows access to ssh hosts which are not directly reachable from your current place, like servers with no public ssh access. For my job, I have a lot of servers not facing the internet, and I can still connect to them using one of my public facing servers, which relays my ssh connection to the destination. Using the ControlMaster feature, the ssh relay server doesn't have to handle a lot of connections anymore, but only one.

In my ~/.ssh/config file:

Host *.private.lan
ProxyJump public-server.com


Those two lines allow me to connect to every server with a .private.lan domain (which is known by my local DNS server) by typing ssh some-machine.private.lan. This establishes a connection to public-server.com, which then connects to the destination server.
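The same jump can be done as a one-off from the command line with the -J flag, without any configuration (using the host names from the example above):

```shell
ssh -J public-server.com some-machine.private.lan
```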

# Sending mail with mu4e

Written by Solène, on 22 May 2018.
Tags: #unix #emacs

Comments on Mastodon

In my article about mu4e I said that I would write about sending mails with it. This will be the topic covered in this article.

There are a lot of ways to send mails, with a lot of different use cases. I will only cover a few of them; the documentation of mu4e and emacs are both very good, so I will only give hints about some interesting setups.

I would like to thank Raphael, who made me curious about different ways of sending mails from mu4e and who pointed out some mu4e features I wasn't aware of.

## Send mails through your local server

The easiest way is to send mails through your local mail server (which should be OpenSMTPD by default if you are running OpenBSD). This only requires the following line to work in your ~/.emacs file:

(setq message-send-mail-function 'sendmail-send-it)


Basically, the mail will only be relayed to the recipient if your local mail server is well configured, which is not the case for most servers. This requires a reverse DNS correctly configured (assuming a static IP address), a SPF record in your DNS and DKIM signing for outgoing mail. This is the minimum to be accepted by other SMTP servers. Usually people send mails from their personal computer and not from the mail server.

### Configure OpenSMTPD to relay to another smtp server

We can bypass this problem by configuring our local SMTP server to relay our mails sent locally to another SMTP server using credentials for authentication.

This is pretty easy to set up by using the following /etc/mail/smtpd.conf configuration, just replace remoteserver with your server.

table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets

listen on lo0

accept for local alias <aliases> deliver to mbox
accept for any relay via secure+auth://label@remoteserver:465 auth <secrets>


You will have to create the file /etc/mail/secrets and add your credentials for authentication on the SMTP server.

From smtpd.conf(5) man page, as root:

# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "label username:password" > /etc/mail/secrets
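Before restarting smtpd, the new configuration can be checked for syntax errors; smtpd's -n flag only parses the configuration file and exits:

```shell
# Configtest mode: check /etc/mail/smtpd.conf for validity, then restart
doas smtpd -n
doas rcctl restart smtpd
```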


Then, all mail sent from your computer will be relayed through your mail server. With 'sendmail-send-it, emacs will deliver the mail to your local server, which will relay it to the outgoing SMTP server.

## SMTP through SSH

One setup I like and use is to relay the mails directly to the outgoing SMTP server; this requires no authentication except SSH access to the remote server.

It requires the following emacs configuration in ~/.emacs:

(setq
message-send-mail-function 'smtpmail-send-it
smtpmail-smtp-server "localhost"
smtpmail-smtp-service 2525)


The configuration tells emacs to connect to the SMTP server on localhost port 2525 to send the mails. Of course, no mail daemon runs on this port on the local machine, it requires the following ssh command to be able to send mails.
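Assuming the remote mail server accepts submissions on its local port 25, such a tunnel can be opened with an OpenSSH local port forward (replace remoteserver with your own host):

```shell
# Forward local port 2525 to port 25 on the mail server's loopback;
# -N keeps the tunnel open without running a remote command
ssh -N -L 2525:localhost:25 remoteserver
```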

$ touch ~/Mail/queue/.noindex

Then, mu4e will be aware of the queueing. On the home screen of mu4e, you will be able to switch from queuing to direct sending by pressing m, and flush the queue by pressing f.

Note: there is a bug (not sure it's really a bug). When sending a mail to the queue, if your mail contains special characters, you will be asked to send it raw or to add a header containing the encoding.

# Autoscrolling text for lazy reading

Written by Solène, on 17 May 2018.
Tags: #unix

Comments on Mastodon

Today I found a software named Lazyread which can read and display a file with autoscroll at a chosen speed. I had to read its source code to make it work, the documentation isn't very helpful: it doesn't read ebooks (as in epub or mobi format) and doesn't support stdin… This software requires some C code plus a shell wrapper to work, which is complicated for only scrolling.

So, after thinking a few minutes, the autoscroll can be reproduced easily with a very simple awk command. Of course, it will not have interactive keys like lazyread to increase/decrease the speed or some other options, but the most important part is there: autoscrolling.

If you want to read a file at a rate of one line per 700 milliseconds, just type the following command:

$ awk '{system("sleep 0.7");print}' file


If you want to read an html file (a documentation file on disk or from the web), you can use lynx or w3m to convert the html file on the fly to readable text and pass it to awk on stdin.

$ w3m -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ lynx -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ w3m -dump https://dataswamp.org/~solene/ | awk '{system("sleep 0.7");print}'

Maybe you want to read a man page?

$ man awk | awk '{system("sleep 0.7");print}'


If you want to pause the reading, you can use the true unix way, Ctrl+Z to send a signal which will stop the command and let it paused in background. You can resume the reading by typing fg.

One could easily write a little script parsing parameters for setting the speed or handling files or url with the correct command.
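For instance, a tiny wrapper (a hypothetical sketch; the script name and the default delay are mine) could take the delay as an optional first argument:

```shell
#!/bin/sh
# autoscroll.sh -- print a file (or stdin) one line at a time at a fixed rate
# usage: autoscroll.sh [delay_in_seconds] [file ...]
delay="${1:-0.7}"
[ $# -gt 0 ] && shift
awk -v d="$delay" '{system("sleep " d); print}' "$@"
```

Used as ./autoscroll.sh 0.5 somefile, or at the end of a pipe: w3m -dump https://dataswamp.org/~solene/ | ./autoscroll.sh 0.3.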

Notes: if for some reason you try to use lazyread, fix the shebang in the file lesspipe.sh, and you will need to call the lazyread binary with the environment variable LESSOPEN="|./lesspipe.sh %s" (adapt the path of the script if needed). Without this variable, you will get a very helpful error: "file not found".

# Port of the week: Sent

Written by Solène, on 15 May 2018.
Tags: #unix

Comments on Mastodon

As the new port of the week, we will discover Sent. While one could think it is mail related, it is not. Sent is a nice software to make presentations from a simple text file. It has been developed by Suckless, a hacker community enjoying writing good software while keeping a small and sane source code; they also made software like st, dwm, slock, surf…

Sent is about simplicity. I will reuse a part of the example file which is also the documentation of the tool.

usage:
$ sent FILE1 [FILE2 …]

▸ one slide per paragraph
▸ lines starting with # are ignored
▸ image slide: paragraph containing @FILENAME
▸ empty slide: just use a \ as a paragraph

@nyan.png

this text will not be displayed, since the @ at the start
of the first line makes this paragraph an image slide.

The previous text, saved into a file and used with sent, will open a fullscreen window containing three "slides". Each slide will resize the text to maximize the display usage, which means the font size will change on each slide. It is really easy to use.

To display the next slide, you have the choice between pressing space, right arrow, return or clicking any button. Pressing left arrow will go back.

If you want to install it on OpenBSD: pkg_add sent, the package comes from the port misc/sent.

Be careful, Sent does not produce any file, you will need the text file for the presentation!

Suckless sent website

# Use ramdisk on /tmp on OpenBSD

Written by Solène, on 08 May 2018.
Tags: #openbsd66 #openbsd

Comments on Mastodon

If you have enough memory on your system and can afford to use a few hundred megabytes to store temporary files, you may want to mount a mfs filesystem on /tmp. That will help saving your SSD drive, and if you use an old hard drive or a memory stick, that will reduce your disk load and improve performance. You may also want to mount a ramdisk on other mount points like ~/.cache/ or a database for some reason, but I will just explain how to achieve this for /tmp, which is a very common use case.

First, you may have heard about tmpfs, but it has been disabled in OpenBSD years ago because it wasn't stable enough and nobody fixed it. So, OpenBSD has a special filesystem named mfs, which is a FFS filesystem on a reserved memory space. When you mount a mfs filesystem, the size of the partition is reserved and can't be used for anything else (tmpfs, like on Linux, doesn't reserve the memory).
Add the following line in /etc/fstab (following fstab(5)):

swap /tmp mfs rw,nodev,nosuid,-s=300m 0 0

The permissions of the mountpoint /tmp should be fixed before mounting it, meaning that the /tmp folder on the / partition should be changed to 1777:

# umount /tmp
# chmod 1777 /tmp
# mount /tmp

This is required because mount_mfs inherits permissions from the mountpoint.

# Mounting remote samba share through SSH tunnel

Written by Solène, on 04 May 2018.
Tags: #unix

Comments on Mastodon

If for some reason you need to access a Samba share outside of the network, it is possible to access it through ssh and mount the share on your local computer.

Using the ssh command as root is required because you will bind local port 139, which is reserved for root:

# ssh -L 139:127.0.0.1:139 user@remote-server -N

Then you can mount the share as usual, but using localhost instead of remote-server.

Example of a mount element for usmb:

<mount id="public" credentials="me">
  <server>127.0.0.1</server>
  <!--server>192.168.12.4</server-->
  <share>public</share>
  <mountpoint>/mnt/share</mountpoint>
  <options>allow_other,uid=1000</options>
</mount>

As a reminder, <!--tag>foobar</tag--> is a XML comment.

# Extract files from winmail.dat

Written by Solène, on 02 May 2018.
Tags: #unix #email

Comments on Mastodon

If you ever receive a mail with an attachment named "winmail.dat", you may be disappointed. It is a special format used by Microsoft Exchange; it contains the files attached to the mail and you need some software to extract them.

Hopefully, there is a little and efficient utility named "tnef" to extract the files.

Install it: pkg_add tnef
List files: tnef -t winmail.dat
Extract files: tnef winmail.dat

That's all!

# Port of the week: ledger

Written by Solène, on 02 May 2018.
Tags: #unix

Comments on Mastodon

In this post I will do a short presentation of the port productivity/ledger, a very powerful command line accounting software, using plain text as back-end.
Writing about it is not an easy task, so I will use a real life workflow of my usage as material, even if my use is special.

As I said before, Ledger is very powerful. It can help you manage your bank accounts, bills, rents, shares and other things. It uses a double entry system, which means each time you add an operation (withdrawal, paycheck, …), this entry will also have to contain the state of the account after the operation. This will be checked by ledger by recalculating every operation made since it was initialized with a custom amount as a start. Ledger can also track the categories where you spend money, or statistics about your payment methods (check, credit card, bank transfer, cash…).

As I am not an English native speaker and I don't work in banking, I am not very familiar with accounting words in English, which makes it very hard for me to understand all the ledger keywords, but I found a special use case for accounting things, and not money, which is really practical.

My special use case is that I work from home for a company in a remote location. From time to time, I take the train to go to the office, the full travel being:

[home] → [underground A] → [train] → [underground B] → [office]
[office] → [underground B] → [train] → [underground A] → [home]

It means I need to buy tickets for both the underground A and underground B systems, and I want to track the tickets I use for going to work. I buy the tickets 10 by 10, but sometimes I use one for personal use or I give a ticket to someone. So I need to keep track of my tickets to know when I can send a bill to my employer to be refunded.

Practical example: I buy 10 tickets of A, I use 2 tickets on day 1. On day 2, I give 1 ticket to someone and I use 2 tickets in the day for personal use. It means I still have 5 tickets in my bag but, from my work office point of view, I should still have 8 tickets. This is what I am tracking with ledger.
2018/02/01 * tickets stock Initialization + go to work
    Tickets:inv  10 City_A
    Tickets:inv  10 City_B
    Tickets:inv  -2 City_A
    Tickets:inv  -2 City_B
    Tickets

2018/02/08 * Work
    Tickets:inv  -2 City_A
    Tickets:inv  -2 City_B
    Tickets

2018/02/15 * Work + Datacenter access through underground
    Tickets:inv  -4 City_B
    Tickets:inv  -2 City_A
    Tickets

At this point, running ledger -f tickets.dat balance Tickets shows my remaining tickets:

    4 City_A
    2 City_B  Tickets:inv

I will add another entry, which requires me to buy tickets:

2018/02/22 * Work + Datacenter access through underground
    Tickets:inv  -4 City_B
    Tickets:inv  -2 City_A
    Tickets:inv  10 City_B
    Tickets

Now, running ledger -f tickets.dat balance Tickets shows my remaining tickets:

    2 City_A
    8 City_B  Tickets:inv

I hope the example was clear enough and interesting. There is a big tutorial document available on the ledger homepage; I recommend reading it before using ledger, it contains real world examples with accounting.

Homepage link

# Port of the week: dnstop

Written by Solène, on 18 April 2018.
Tags: #unix

Comments on Mastodon

Dnstop is an interactive console application to watch in realtime the DNS queries going through a network interface. It currently only supports UDP DNS requests; the man page says that TCP isn't supported. It has a lot of parameters and keybindings for interactive use.

To install it on OpenBSD: doas pkg_add dnstop

We will start dnstop on the wifi interface using a depth of 4 for the domain names: as root type dnstop -l 4 iwm0 and then press '3' to display up to 3 sublevels. The -l 4 parameter means we want to know domains with a depth of 4: if a request for the domain my.very.little.fqdn.com happens, it will be truncated as very.little.fqdn.com. If you press '2' in the interactive display, the earlier name will be counted in the line fqdn.com.
Example of output:

Queries: 0 new, 6 total                     Tue Apr 17 07:17:25 2018

Query Name          Count      %   cum%
--------------- --------- ------ ------
perso.pw                3   50.0   50.0
foo.bar                 1   16.7   66.7
hello.mydns.com         1   16.7   83.3
mydns.com.lan           1   16.7  100.0

If you want to use it, read the man page first, it has a lot of parameters and can filter using specific expressions.

# How to read a epub book in a terminal

Written by Solène, on 17 April 2018.
Tags: #unix

Comments on Mastodon

If you ever had to read an ebook in the epub format, you may have found yourself stumbling on the Calibre software. Personally, I don't enjoy reading a book in Calibre at all. Choice is important, and it seems that Calibre is the only choice for this task.

But, as the epub format is very simple, it's possible to read it easily with any web browser, even w3m or lynx.

With a few commands, you can easily find the xhtml files that can be opened with a web browser: an epub file is a zip containing mostly xhtml, css and image files. The xhtml files have links to the CSS and images contained in the other unzipped folders. In the following commands, I prefer to copy the file into a new directory because unzipping it will create folders in your current working directory.

$ mkdir /tmp/myebook/
$ cd /tmp/myebook
$ cp ~/book.epub .
$ unzip book.epub
$ cd OPS/xhtml
$ ls *xhtml

I tried with different epub files; in most cases you should find a lot of files named chapters-XX.xhtml with XX being 01, 02, 03 and so forth. Just open the files in the correct order with a web browser, aka an "html viewer".

# Port of the week: tig

Written by Solène, on 10 April 2018.
Tags: #unix #git

Comments on Mastodon

Today we will discover the software named tig, whose name stands for Text-mode Interface for Git.

To install it on OpenBSD: pkg_add tig

Tig is a light and easy to use terminal application to browse a git repository in an interactive manner. To use it, just 'cd' into a git repository on your filesystem and type tig. You will get the list of all the commits, with the author and the date. By pressing the "Enter" key on a commit, you will get the diff. Tig also displays branching and merging in a graphical way.

Tig has some parameters; one I like a lot is blame, which is used like this: tig blame afile. Tig will show the file content and display for each line the date of the last commit, its author and the short identifier of the commit. With this function, it gets really easy to find who modified a line or when it was modified.

Tig has a lot of other possibilities, you can discover them in its man pages.

# Unofficial OpenBSD FAQ

Written by Solène, on 16 March 2018.
Tags: #openbsd66 #openbsd

Comments on Mastodon

Frequently asked questions (with answers) on #openbsd IRC channel

Please read the official OpenBSD FAQ

I am writing this to answer questions asked too many times. If some answers get good enough, maybe we could try to merge them into the OpenBSD FAQ if the topic isn't covered. If the topic is covered, then a link to the official FAQ should be used.

If you want to participate, you can fetch the page using the gopher protocol and send me a diff:

$ printf '/~solene/article-openbsd-faq.txt\r\n' | nc dataswamp.org 70 > faq.md


## OpenBSD features / not features

Here is a list for newcomers of what OpenBSD is and what it is not:

• Packet Filter : super awesome firewall

• Sane defaults : you install, it works, no tweak

• Stability : upgrades go smooth and are easy

• pledge and unveil : security features to reduce privileges of software, lots of ports are patched

• W^X security

• Microphone muted by default, unlockable by root only

• Video devices owned by root by default, not usable by users until permission change

• Has only FFS file system which is slow and has no “feature”

• No wine for windows compatibility

• No linux compatibility

• No bluetooth support

• No usb3 full speed performance

• No VM guest additions

• Only in-house VMM for being a VM host, only supports OpenBSD and some Linux

• Poor fuse support (it crashes quite often)

• No nvidia support (nvidia’s fault)

• No container / docker / jails

## Does OpenBSD has a Code Of Conduct?

No and there is no known plan of having one.

This is a topic upsetting OpenBSD people, just don’t ask about it and send patches.

## What is the OpenBSD release process?

OpenBSD FAQ official information

The last two releases are called “-release” and are officially supported (patches for security issues are provided).

-stable version is the latest release with the base system patches applied, the -stable ports tree has some patches backported from -current, mainly to fix security issues. Official packages for -stable are built and are picked up automatically by pkg_add(1).

## What is -current?

It’s the development version with latest packages and latest code. You shouldn’t use it only to get latest package versions.

## How do I install -current ?

OpenBSD FAQ about current

• download the latest snapshot install .iso or .fs file from your favorite mirror under /snapshots/ directory
• boot from it

## How do I upgrade to -current

OpenBSD FAQ about current

You can use the script sysupgrade -s, note that the flag is only useful if you are not running -current right now but harmless otherwise.

# Monitor your systems with reed-alert

Written by Solène, on 17 January 2018.
Tags: #unix #lisp

Comments on Mastodon

This article will present my software reed-alert; it checks user-defined states and sends user-defined notifications. I made it really easy to use but still configurable and extensible.

## Description

reed-alert is not a monitoring tool producing graphs or storing values. It does a job sysadmins are looking for because there is no alternative product (the alternatives come with a very huge infrastructure like Zabbix, so it's not comparable).

From its configuration file, reed-alert will check various states and then, if one fails, will trigger a command to send a notification (totally user-defined).

## Fetch it

This is an open-source and free software released under the MIT license; you can install it with the following command:

# git clone git://bitreich.org/reed-alert
# cd reed-alert
# make
# doas make install


This will install a script reed-alert in /usr/local/bin/ with the default Makefile variables. It will try to use ecl and then sbcl if ecl is not installed.

A README file is available as documentation to describe how to use it, but we will see here how to get started quickly.

You will find a few files there; reed-alert is a Common LISP software, and it has been chosen for (I hope) good reasons that the configuration file is plain Common LISP.

There is a configuration file looking like a real world example named config.lisp.sample and another configuration file I use for testing named example.lisp containing lot of cases.

## Let’s start

In order to use reed-alert we only need to create a new configuration file and then add a cron job.

### Configuration

We are going to see how to configure reed-alert. You can find more explanations or details in the README file.

#### Alerts

We have to configure two kinds of parameters. First we need to set up a way to receive alerts; the easiest way to do so is by sending a mail with the "mail" command. Alerts are declared with the function alert, with the alert name and the command to be executed as parameters. Some variables are replaced with values from the probe; in the README file you can find the list of these variables, they look like %date% or %params%.

In Common LISP, a function is called by opening a parenthesis, giving its name and then its parameters, until the parenthesis is closed.

Example:

(alert mail "echo 'problem on %hostname%' | mail me@example.com")


One should take care about nesting quotes here.

reed-alert will fork a shell to start the command, so pipes and redirections work. You can be creative when writing alerts that:

• use a SMS service
• write a script to post on a forum
• publish a file on a server
• send text to IRC with ii client

#### Checks

Now that we have some alerts, we will configure some checks in order to make reed-alert useful. It uses probes, which are pre-defined checks with parameters; a probe could be "has this file not been updated for N minutes?" or "is the disk space usage of partition X more than Y?"

I chose to name the check function "=>": it isn't a real name, but it evokes an arrow, something going forward. The two previous examples, using our previously defined mail notifier, would look like this:

(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage   :limit 90)


It’s also possible to use shell commands and check the return code using the command probe, allowing the user to define useful checks.

(=> mail command :command "echo '/is-this-gopher-server-up?' | nc -w 3 dataswamp.org 70"
:desc "dataswamp.org gopher server")


We use echo + netcat to check if a connection to a socket works. The :desc keyword will give a nicer name in the output instead of just “COMMAND”.

#### Garniture

We wrote the minimum required to configure reed-alert; your my-config.lisp file should now look like this:

(alert mail "echo 'problem on %hostname%' | mail me@example.com")
(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage   :limit 90)


Now, you can start it every 5 minutes from a crontab with this:

*/5 * * * * ( reed-alert /path/to/my-config.lisp )


If you prefer to use ecl:

*/5 * * * * ( reed-alert /path/to/my-config.lisp )


The time between each run is up to you, depending on what you monitor.

#### Important

By default, when a check returns a failure, reed-alert will only trigger the associated notifier once it reaches the 3rd failure. It will then notify again when the service is back (the variable %state% is replaced by start or end so you know whether the alarm starts or stops).

This is to prevent reed-alert from sending a notification on each check; there is absolutely no need for that for most users.

The number of failures before triggering can be modified by using the keyword “:try” as in the following example:

(=> mail disk-usage :limit 90 :try 1)


In this case, you will get notified at the first failure of it.

The number of failures of failed checks is stored in files (1 per check) in the “states/” directory of reed-alert working directory.

# How to merge changes with git when you are a noob

Written by Solène, on 13 December 2017.
Tags: #git

Comments on Mastodon

I'm very much a noob with git and I always screw everything up when someone clones one of my repos, contributes and asks me to merge the changes.

Now I have found an easy way to merge commits from another repository. Here is a simple way to handle this. We will get changes from project1_modified and merge them into our project1 repository. This is not the fastest or the optimal way, but I found it to work reliably.

$ cd /path/to/projects
$ git clone git://remote/project1_modified
$ cd my_project1
$ git checkout master
$ git remote add modified ../project1_modified/
$ git remote update
$ git checkout -b new_code
$ git merge modified/master
$ git checkout master
$ git merge new_code
$ git branch -d new_code

This process makes you download the repository of the people who contributed to the code, then you add it as a remote source in your project, and you create a new branch where you will do the merge; if something is wrong you will be able to manage conflicts easily. Once you have tried the code and you are fine with it, you merge this branch into master and then, when you are done, you can delete the branch.

If later you need to get new commits from the other repo, it becomes easier:

$ cd /path/to/projects
$ cd project1_modified
$ git pull
$ cd ../my_project1
$ git pull modified
$ git merge modified/master

And you are done!

# How to type using only one hand: keyboard mirroring

Written by Solène, on 12 December 2017.
Tags: #unix

Comments on Mastodon

Hello

Today is a bit special because I'm writing with a mirror keyboard layout. I use only half of my keyboard to type all characters. To make things harder, the layout is qwerty while I usually use azerty (I'm used to qwerty but it doesn't help).

Here, "caps lock" is a modifier key that must be pressed to obtain the characters of the other side. As a mirror, one will find 'p' instead of 'q' or 'h' instead of 'g' while pressing caps lock. It's even possible to type backspace to delete characters or to achieve a newline. Not all the punctuation is available through this, only '.<|¦>'",'. While I type this I get a bit faster and it becomes easier and easier. It's definitely worth it if you can't use both hands.

This has been made possible by Randall Munroe. To enable it, just download the file here and type:

xkbcomp mirrorlayout.kbd $DISPLAY


Backspace is typed with tilde and return with space, using the modifier of course.

I’ve spent approximately 15 minutes writing this, but the time spent hasn’t been linear, it’s much more fluent now!

Mirrorboard: A one-handed keyboard layout for the lazy by Randall Munroe
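The mirroring idea above can be sketched in a few lines of Python. This is only an illustration of the concept: the key pairs below are my own guesses at horizontal mirrors on a qwerty keyboard, not Munroe's actual layout file.

```python
# Hypothetical mirror pairs on a qwerty keyboard (illustration only,
# not the real mirrorlayout.kbd data).
MIRROR_PAIRS = [
    ("q", "p"), ("w", "o"), ("e", "i"), ("r", "u"), ("t", "y"),
    ("a", ";"), ("s", "l"), ("d", "k"), ("f", "j"), ("g", "h"),
]

# Build a symmetric lookup table: 'q' -> 'p' and also 'p' -> 'q'.
MIRROR = {}
for left, right in MIRROR_PAIRS:
    MIRROR[left] = right
    MIRROR[right] = left

def type_key(key, modifier_held):
    """Return the character produced: the mirrored one when the
    caps lock modifier is held, the plain one otherwise."""
    return MIRROR.get(key, key) if modifier_held else key
```

With such a table, holding the modifier and pressing 'q' on the left half produces 'p' from the right half, which is exactly what the layout does at the X keyboard level.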

# Showing some Common Lisp features

Written by Solène, on 05 December 2017.
Tags: #lisp

Comments on Mastodon

# Introduction: comparing LISP to Perl and Python

We will refer to Common LISP as CL in the following article.

I wrote it to share what I like about CL. I’m using Perl to compare CL features, with real world cases for the average programmer. If you are a CL or Perl expert, you may say that some examples could be rewritten with very specific syntax to make them smaller or faster, but the point here is to show usual and readable examples for usual programmers.

This article is aimed at people with an interest in programming; some basic programming knowledge is needed to understand the following. If you can read C, Php, Python or Perl it should be enough. The examples have been chosen to be easy.

I thank my friend killruana for his contribution as he wrote the python code.

## Variables

### Scope: global

Common Lisp code

(defparameter *variable* "value")


Defining a variable with defparameter at top-level (= outside of a function) will make it global. It is common to surround the name of global variables with the \* character in CL code. This is only for the programmer’s readability; the use of \* has no effect.

Perl code

my $variable = "value";


Python code

variable = "value"


### Scope: local

This is where it begins to be interesting in CL. Declaring a local variable with let creates a new scope delimited by parentheses, where the variable isn’t known outside of it. This prevents doing bad things with variables not set or already freed. let can define multiple variables at once, or even variables depending on previously declared variables using let\*

Common Lisp code

(let ((value (http-request)))
  (when value
    (let* ((page-title (get-title value))
           (title-size (length page-title)))
      (when page-title
        (let ((first-char (subseq page-title 0 1)))
          (format t "First char of page title is ~a~%" first-char))))))


Perl code

{
    local $value = http_request;
    if($value) {
        local $page_title = get_title $value;
        local $title_size = get_size $page_title;
        if($page_title) {
            local $first_char = substr $page_title, 0, 1;
            printf "First char of page title is %s\n", $first_char;
        }
    }
}


The scope of a local value is limited to the parent curly brackets, of an if/while/for/foreach or plain brackets.

Python code

if True:
    hello = 'World'
print(hello) # displays World


There is no way to define a local variable in python, the scope of the variable is limited to the parent function.

## Printing and formatting text

CL has a VERY powerful function to print and format text, it’s even named format. It can even manage plurals of words (in english only)!

Common Lisp code

(let ((words (list "hello" "Dave" "How are you" "today ?")))
  (format t "~{~a ~}~%" words))


format can loop over lists using ~{ as start and ~} as end.

Perl code

my @words = @{["hello", "Dave", "How are you", "today ?"]};
foreach my $element (@words) {
    printf "%s ", $element;
}
print "\n";


Python code

# Printing and format text
# Loop version
words = ["hello", "Dave", "How are you", "today ?"]
for word in words:
    print(word, end=' ')
print()

# list expansion version
words = ["hello", "Dave", "How are you", "today ?"]
print(*words)


## Functions

### Function parameters: rest

Sometimes we need to pass an unknown number of arguments to a function. CL supports this with the &rest keyword in the function declaration, while Perl supports it using the @_ sigil.

Common Lisp code

(defun my-function(parameter1 parameter2 &rest rest)
  (format t "My first and second parameters are ~a and ~a.~%Others parameters are~%~{ - ~a~%~}~%"
          parameter1 parameter2 rest))

(my-function "hello" "world" 1 2 3)


Perl code

sub my_function {
    my $parameter1 = shift;
    my $parameter2 = shift;
    my @rest = @_;

    printf "My first and second parameters are %s and %s.\nOthers parameters are\n",
        $parameter1, $parameter2;

    foreach my $element (@rest) {
        printf "    - %s\n", $element;
    }
}

my_function "hello", "world", 0, 1, 2, 3;


Python code

def my_function(parameter1, parameter2, *rest):
    print("My first and second parameters are {} and {}".format(parameter1, parameter2))
    print("Others parameters are")
    for parameter in rest:
        print(" - {}".format(parameter))

my_function("hello", "world", 0, 1, 2, 3)


The trick in python to handle rest arguments is the wildcard character in the function definition.

### Function parameters: named parameters

CL supports named parameters using a keyword to specify the name. This is not possible at all in Perl, though using a hash as parameter can do the job. CL allows choosing a default value if a parameter isn’t set; it’s harder to do in Perl, where we must check if the key is already set in the hash and give it a value in the function.

Common Lisp code

(defun my-function(&key (key1 "default") (key2 0))
  (format t "Key1 is ~a and key2 (~a) has a default of 0.~%" key1 key2))

(my-function :key1 "nice" :key2 ".Y.")


There is no way to pass named parameters to a Perl function. The best way is to pass a hash variable, check the keys needed and assign a default value if they are undefined.

Perl code

sub my_function {
my $hash = shift;

    if(! exists $hash->{key1}) {
        $hash->{key1} = "default";
    }

    if(! exists $hash->{key2}) {
        $hash->{key2} = 0;
    }

    printf "My key1 is %s and key2 (%s) default to 0.\n",
        $hash->{key1}, $hash->{key2};
}

my_function { key1 => "nice", key2 => ".Y." };


Python code

def my_function(key1="default", key2=0):
print("My key1 is {} and key2 ({}) default to 0.".format(key1, key2))

my_function(key1="nice", key2=".Y.")
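As a side note, Python can also emulate the Perl hash technique when the set of named parameters is open-ended, by accepting **kwargs and filling in defaults for missing keys. This sketch (with a hypothetical my_function_kwargs, not from the original comparison) mirrors the Perl version above:

```python
# Emulating the Perl "hash as parameter" approach with **kwargs:
# defaults are applied only for keys the caller did not provide.
def my_function_kwargs(**kwargs):
    defaults = {"key1": "default", "key2": 0}
    params = {**defaults, **kwargs}  # caller's values override the defaults
    return "My key1 is {} and key2 ({}) default to 0.".format(
        params["key1"], params["key2"])

print(my_function_kwargs(key1="nice", key2=".Y."))
```

The difference with the def-signature version is that **kwargs accepts any keyword, so unexpected keys are silently kept; explicit named parameters with defaults are usually preferable when the parameter set is known.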


## Loop

CL has only one loop operator, named loop, which could be seen as an entire language in itself. Perl has do while, while, for and foreach.

### loop: for

Common Lisp code

(loop for i from 1 to 100
do
(format t "Hello ~a~%" i))


Perl code

for(my $i=1; $i <= 100; $i++) {
    printf "Hello %i\n", $i;
}


Python code

for i in range(1, 101):
    print("Hello {}".format(i))


### loop: foreach

Common Lisp code

(let ((elements '(a b c d e f)))
  (loop for element in elements
        counting element into count
        do
        (format t "Element number ~s : ~s~%" count element)))


Perl code

# verbose and readable version
my @elements = @{['a', 'b', 'c', 'd', 'e', 'f']};
my $count = 0;
foreach my $element (@elements) {
    $count++;
    printf "Element number %i : %s\n", $count, $element;
}

# compact version
for(my $i=0;$i<$#elements+1;$i++) {
printf "Element number %i : %s\n", $i+1,$elements[$i]; }  Python code # Loop foreach elements = ['a', 'b', 'c', 'd', 'e', 'f'] count = 0 for element in elements: count += 1 print("Element number {} : {}".format(count, element)) # Pythonic version elements = ['a', 'b', 'c', 'd', 'e', 'f'] for index, element in enumerate(elements): print("Element number {} : {}".format(index, element))  ## LISP only tricks ### Store/restore data on disk The simplest way to store data in LISP is to write a data structure into a file, using print function. The code output with print can be evaluated later with read. Common Lisp code (defun restore-data(file) (when (probe-file file) (with-open-file (x file :direction :input) (read x)))) (defun save-data(file data) (with-open-file (x file :direction :output :if-does-not-exist :create :if-exists :supersede) (print data x))) ;; using the functions (save-data "books.lisp" *books*) (defparameter *books* (restore-data "books.lisp"))  This permit to skip the use of a data storage format like XML or JSON. Common LISP can read Common LISP, this is all it needs. It can store objets like arrays, lists or structures using plain text format. It can’t dump hash tables directly. ### Creating a new syntax with a simple macro Sometimes we have cases where we need to repeat code and there is no way to reduce it because it’s too much specific or because it’s due to the language itself. Here is an example where we can use a simple macro to reduce the written code in a succession of conditions doing the same check. 
We will start from this

Common Lisp code

(when value
  (when (string= line-type "3")
    (progn
      (print-with-color "error" 'red line-number)
      (log-to-file "error")))
  (when (string= line-type "4")
    (print-with-color text))
  (when (string= line-type "5")
    (print-with-color "nothing")))


to this, using a macro

Common Lisp code

(defmacro check(identifier &body code)
  `(when (string= line-type ,identifier)
     ,@code))

(when value
  (check "3"
         (print-with-color "error" 'red line-number)
         (log-to-file "error"))
  (check "4"
         (print-with-color text))
  (check "5"
         (print-with-color "nothing")))


The code is much more readable and the macro is easy to understand. One could argue that in another language a switch/case could work here; I chose a simple example to illustrate the use of a macro, but they can achieve much more.

### Create powerful wrappers with macros

I’m using macros when I need to repeat code that affects variables. A lot of CL modules offer a structure like with-something: a wrapper macro that will do some logic like opening a database, checking it’s opened, closing it at the end and executing your code inside. Here I will write a tiny http request wrapper, allowing me to write http requests very easily, my code being able to use variables from the macro.

Common Lisp code

(defmacro with-http(url &body code)
  `(multiple-value-bind (content status head)
       (drakma:http-request ,url :connection-timeout 3)
     (when content
       ,@code)))

(with-http "https://dataswamp.org/"
  (format t "We fetched headers ~a with status ~a. Content size is ~d bytes.~%"
          head status (length content)))


In Perl, the following would be written like this

Perl code

sub get_http {
    my $url = shift;
    my %http = magic_http_get $url;
    if($http{content}) {
        return %http;
    } else {
        return undef;
    }
}

{
    local %data = get_http "https://dataswamp.org/";
    if(%data) {
        printf "We fetched headers %s with status %d. Content size is %d bytes.\n",
            $data{headers}, $data{status}, length($data{content});
    }
}


The curly brackets are important there: I want to emphasize that the local %data variable is only available inside the curly brackets. Lisp is written as a succession of local scopes and this is something I really like.

Python code

import requests
with requests.get("https://dataswamp.org/") as fd:
    print("We fetched headers %s with status %d. Content size is %s bytes." \
        % (list(fd.headers.keys()), fd.status_code, len(fd.content)))


# Allow wide resolution on intel graphics laptop

Written by Solène, on 22 November 2017.
Tags: #hardware

Comments on Mastodon

I just received a wide screen with a 2560x1080 resolution but xrandr wasn’t allowing me to use it. The intel graphics specifications say that I should be able to go up to 4096xsomething, so it’s a software problem.

Generate the information you need with gtf:

$ gtf 2560 1080 59.9


Take only the numbers after the resolution name between quotes, so in

Modeline "2560x1080_59.90" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync

keep only

230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync

Now add the new resolution and make it available to your output (mine is HDMI2):

$ xrandr --newmode "2560x1080" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
$ xrandr --addmode HDMI2 2560x1080


You can now use this mode with arandr using the GUI or with xrandr by typing

xrandr --output HDMI2 --mode 2560x1080

You will need to set the new mode each time the system starts. I added the 2 lines in my ~/.xsession file, which starts stumpwm.

# Low bandwidth: Fetch OpenBSD sources

Written by Solène, on 09 November 2017.
Tags: #openbsd66 #openbsd

Comments on Mastodon

When you fetch OpenBSD src or ports from CVS and you want to save bandwidth during the process, there is a little trick that changes everything: compression.

Just add -z9 to the parameters of your cvs command line and the remote server will send you compressed files, saving 10 times the bandwidth, or speeding up the transfer 10 times, or both (I’m in the case where I have different users on my network and I’m limiting my incoming bandwidth so other people can have bandwidth too, so it is important to reduce the packets transferred if possible).

The command line should look like:

$ cvs -z9 -qd anoncvs@anoncvs.fr.openbsd.org:/cvs checkout -P src


Don’t abuse this, this consumes CPU on the mirror.

# Gentoo port of the week: slrn

Written by Solène, on 08 November 2017.
Tags: #portoftheweek

Comments on Mastodon

## Introduction

Hello,

Today I will speak about slrn, an nntp client. I’m using it to fetch mailing lists I follow (without necessarily subscribing to them) and read them offline. I’ll speak about using nntp to read news-groups; in a more general way, nntp is used to access usenet. I’m not sure what usenet exactly is, so we will stick here to connecting to the mailing-list archives offered by gmane.org (which offers access to mailing-lists and newsgroups through nntp).

Long story short, recently I moved and now I have a very poor DSL connection. Plus, I’m often moving by train with nearly no 4G/LTE support during the trip. I’m going to write about getting things done offline and about reducing bandwidth usage. This is a really interesting topic in our hyper-connected world.

So, back to slrn, I want to be able to fetch lots of news and read them later. Every nntp client I tried was getting the articles list (in nntp, an article = a mail, a forum = a mailing list) and then downloading each article when we want to read it. Some can cache the result when you fetch an article, so if you want to read it later it is already fetched. While slrn doesn’t support caching at all, it comes with the utility slrnpull which will create a local copy of the forums you want, and slrn can be configured to fetch data from there. slrnpull needs to be configured to tell it what to fetch, what to keep etc… and a cron job will start it periodically to fetch the new articles.

## Configuration

The following configuration is made to be simple to use; it runs with your regular user. This is for Gentoo; another system may provide a dedicated user and everything pre-configured.

Create the folder for slrnpull and change the owner:

$ sudo mkdir /var/spool/slrnpull
$ sudo chown user /var/spool/slrnpull


slrnpull’s configuration file must be placed in the folder it will use. So edit /var/spool/slrnpull/slrnpull.conf as you want; my configuration file follows.

default 200 45 0
# indicates a default value of 200 articles to be retrieved from the server and
# that such an article will expire after 45 days.

gmane.network.gopher.general
gmane.os.freebsd.questions
gmane.os.freebsd.devel.ports
gmane.os.openbsd.misc
gmane.os.openbsd.ports
gmane.os.openbsd.bugs


The client slrn needs to be configured to find the information from slrnpull.

File ~/.slrnrc:

set hostname "your.hostname.domain"
set spool_inn_root "/var/spool/slrnpull"
set spool_root "/var/spool/slrnpull/news"
set spool_nov_root "/var/spool/slrnpull/news"
set read_active 1
set use_slrnpull 1
set post_object "slrnpull"
set server_object "spool"


Add this to your crontab to fetch news once per hour (at HH:00 minutes):

0 * * * * NNTPSERVER=news.gmane.org slrnpull -d /var/spool/slrnpull/


Now, just type slrn and enjoy.

## Cheat Sheet

Quick cheat sheet for using slrn. There is help available with “?” but it is not very easy to understand at first.

• h : hide/display the article view
• space : scroll to next page in the article, go to next at the end
• enter : scroll one line
• tab : scroll to the end of quotes
• c : mark all as read

## Tips

• when a forum is empty, it is not shown by default

I found that a software named slrnconf exists which provides a GUI to configure slrn; I didn’t try it.

## Going further

It seems nntp clients support a score file that can mark interesting articles using user defined rules.

The nntp protocol allows submitting articles (reply or new thread) but I have no idea how it works. Someone told me to forget about this and use mail to the mailing-lists when it is possible.

The leafnode daemon can be used instead of slrnpull in a more generic way. It is an nntp server that one would use locally as a proxy to nntp servers. It will mirror the forums you want and serve them back through nntp, allowing you to use any nntp client (slrnpull enforces the use of slrn). leafnode seems old; a v2 is still in development but seems rather inactive. Leafnode is old and complicated; I wanted something KISS (Keep It Simple Stupid) and it is not.

## Other clients you may want to try

Console nntp clients

• gnus (in emacs)
• wanderlust (in emacs too)
• alpine

GUI clients

• pan (may be able to download, but I failed using it)
• seamonkey (the whole mozilla suite supports nntp)

# Zooming with emacs, tmux or stumpwm

Written by Solène, on 25 October 2017.
Tags: #emacs #window-manager #tmux

Comments on Mastodon

Hey! You use stumpwm, emacs or tmux and your screen (not the GNU screen) is split into lots of parts? There is a solution to improve that: ZOOMING!

Each of them works with a screen divided into panes/windows (the meaning of these words changes between the programs); sometimes you want to have the one where you work in fullscreen. An option exists in each of them to get a window fullscreen temporarily.

## Emacs: (not native)

This is not native in emacs, you will need to install zoom-window from your favorite repository.

Add those lines in your ~/.emacs:

(require 'zoom-window)
(global-set-key (kbd "C-x C-z") 'zoom-window-zoom)


Type C-x C-z to zoom/unzoom your current frame

## Tmux

Toggle zoom (in or out):

C-b z


## Stumpwm

Add this to your ~/.stumpwmrc

(define-key *root-map* (kbd "z")            "fullscreen")


Using “prefix z” the current window will toggle fullscreen.

# Gentoo port of the week: Nethogs

Written by Solène, on 17 October 2017.
Tags: #portoftheweek

Comments on Mastodon

Today I will present a nice port (from Gentoo this time, not from FreeBSD) and this port is even Linux only.

nethogs is a console program which shows the bandwidth usage of each running application consuming network. This can be particularly helpful to find which application is sending traffic and at which rate.

It can be installed with emerge as simple as emerge -av net-analyzer/nethogs.

It is very simple to use: just type nethogs in a terminal (as root). There are some parameters and it’s a bit interactive; I recommend reading the manual if you need some details about them.

I am currently running Gentoo on my main workstation, which makes me discover new things, so maybe I will write more regularly about Gentoo ports.

# How to limit bandwidth usage of emerge in Gentoo

Written by Solène, on 16 October 2017.
Tags: #linux

Comments on Mastodon

If for some reason you need to reduce the download speed of emerge when downloading sources you can use a tweak in portage’s make.conf as explained in the handbook.

To keep wget and just add the bandwidth limit, add this to /etc/portage/make.conf:

FETCHCOMMAND="${FETCHCOMMAND} --limit-rate=200k"


Of course, adjust the rate to your needs.

# Display manually installed packages on FreeBSD 11

Written by Solène, on 16 August 2017.
Tags: #freebsd11

Comments on Mastodon

If you want to show the packages installed manually (and not installed as a dependency of another package), you have to use “pkg query” and compare if %a (automatically installed == 1) isn’t 1. The second string will format the output to display the package name:

$ pkg query -e "%a != 1" "%n"


# Using firefox on Guix distribution

Written by Solène, on 16 August 2017.
Tags: #linux

Comments on Mastodon

Update 2020: This method may certainly not work anymore but I don’t have a Guix installation to try.

I’m new to Guix. It’s a wonderful system, but it’s so different from any other usual linux distribution that it’s hard to achieve some basic tasks. As Guix is 100% free/libre software, Firefox has been removed and replaced by icecat. This is nearly the same software but some “features” have been removed (like webRTC) for some reasons (security, freedom). I don’t blame the Guix team for that, I understand the choice.

But my problem is that I need Firefox. I finally managed to get it working from the official binary downloaded from the mozilla website.

You need to install some packages to get the libraries, which will become available under your profile directory. Then, tell firefox to load the libraries from there and it will start.

guix package -i glibc glib gcc gtk+ libxcomposite dbus-glib libxt
LD_LIBRARY_PATH=~/.guix-profile/lib/ ~/.guix-profile/lib/ld-linux-x86-64.so.2 ~/firefox_directory/firefox


Also, it seems that running icecat and firefox simultaneously works; they store data in ~/.mozilla/icecat and ~/.mozilla/firefox so they are separated.

# Using emacs to manage mails with mu4e

Written by Solène, on 15 June 2017.
Tags: #emacs #email

Comments on Mastodon

In this article we will see how to fetch, read and manage your emails from Emacs using mu4e. The process is the following: the mbsync command (while mbsync is the command name, the software name is isync) creates a mirror of an imap account in Maildir format on your filesystem; mu from mu4e creates a database from the Maildir directory using the xapian library (a full text search database); then mu4e (mu for emacs) is the GUI which queries the xapian database to manipulate your mails.

Mu4e handles dynamic bookmarks, so you can have some predefined filters instead of having classic folders. You can also do a query and reduce the results with successive queries.

You may have heard about using notmuch with emacs to manage mails; mu4e and notmuch don’t do the same job. While notmuch is a nice tool to find messages from queries and create filters, it operates as a read-only tool and can’t do anything with your mail. mu4e lets you write, move, delete and flag mails etc… AND still allows making complex queries.

I wrote this article to allow people to try mu4e quickly; you may want to read both the isync and mu4e manuals to have a better configuration suiting your needs.

## Installation

On OpenBSD you need to install 2 packages:

# pkg_add mu isync


## isync configuration

We need to configure isync to connect to the IMAP server:

Edit the file ~/.mbsyncrc; there is a trick to avoid having the password in clear text in the configuration file, see the isync configuration manual for this:

IMAPAccount my_imap
Host my_host_domain.info
User imap_user
Pass my_pass_in_clear_text
SSLType IMAPS

IMAPStore my_imap-remote
Account my_imap

MailDirStore my_imap-local
Path ~/Maildir/my_imap/
Inbox ~/Maildir/my_imap/Inbox
SubFolders Legacy

Channel my_imap
Master :my_imap-remote:
Slave :my_imap-local:
Patterns *
Create Slave
Expunge Both


## mu4e / emacs configuration

We need to configure mu4e in order to tell it where to find the mail folder. Add this to your ~/.emacs file.

(require 'mu4e)
(setq mu4e-maildir "~/Maildir/my_imap/"
mu4e-sent-folder "/Sent Messages/"
mu4e-trash-folder "/Trash"
mu4e-drafts-folder "/Drafts")


## First start

A few commands are needed in order to make everything work. We need to create the base folder as the mbsync command won’t do the job for some reason, and we need mu to index the mails the first time.

mbsync can take a moment because it will download ALL your mails.

$ mkdir -p ~/Maildir/my_imap
$ mbsync -aC
$ mu index --maildir=~/Maildir/my_imap


## How to use mu4e

Start emacs, run M-x mu4e RET and enjoy, the documentation of mu4e is well done. Press “U” at the mu4e screen to synchronize with the imap server.

A query for mu4e looks like this:

list:misc.openbsd.org flag:unread avahi


This query will search mails having the list header “misc.openbsd.org”, which are unread and which contain the “avahi” pattern.

date:20140101..20150215 urgent


This one will look for mails within the date range of 1st january 2014 to 15th february 2015 containing the word “urgent”.

## Additional notes

The current setup doesn’t handle sending mails; I’ll write another article about this. This requires configuring smtp authentication and an identity for mu4e.

Also, you may need to tweak the mbsync or mu4e configuration; some settings must be changed depending on the imap server, this is particularly important for deleted mails.

# Fold functions in emacs

Written by Solène, on 16 May 2017.
Tags: #emacs

Comments on Mastodon

You want to fold (hide) code between brackets like an if statement, a function, a loop etc..? Use the HideShow minor-mode which is part of emacs. All you need is to enable hs-minor-mode. Now you can fold/unfold by cycling with C-c @ C-c.

HideShow on EmacsWiki

# How to change Firefox locale to ... esperanto?

Written by Solène, on 14 May 2017.
Tags: #firefox

Comments on Mastodon

Hello! Today I felt the need to change the language of my Firefox browser to esperanto but I haven’t been able to do this easily, it is not straightforward…

First, you need to install your language pack, depending on whether you use the official Mozilla Firefox or Icecat, the rebranded firefox with non-free stuff removed.

Then, open about:config in firefox, we will need to change 2 keys. Firefox needs to know that we don’t want to use our user’s locale as the Firefox language and which language we want to set.
• set intl.locale.matchOS to false
• set general.useragent.locale to the language code you want (eo for esperanto)
• restart firefox/icecat

You’re done! Bonan tagon

# Bandwidth limit / queue on OpenBSD 6.6

Written by Solène, on 25 April 2017.
Tags: #openbsd66 #openbsd #unix #network

Comments on Mastodon

Today I will explain how to do traffic limiting with OpenBSD and PF. This is not hard at all if you want something easy; the man page pf.conf(5) in the QUEUEING section is pretty good, but it may be disturbing when you don’t understand how it works. This is not something I master, I’m not sure of the behaviour in some cases, but the following example works as I tested it! :)

### Use case

Internet is down at home, I want to use my phone as a 4G router through my OpenBSD laptop which will act as a router. I don’t want the quota (some Gb) to be eaten in a few seconds; this connection allows downloading at up to 10 Mb/s so it can go quickly! We will limit the total bandwidth to 1M (~ 110 kB/s) for people behind the NAT. It will be slow, but we will be sure that nothing behind the NAT like a program updating, cloud stuff synchronizing or videos in auto play will consume our quota.

Edit /etc/pf.conf according to your network

internet="urndis0"
lan="em0"

# we define our available bandwidth
queue main on $lan bandwidth 100M

# we will let 1M but we will allow
# 3M during 200 ms when initiating connection to keep the web a bit interactive
queue limited parent main bandwidth 1M min 0K max 1M burst 3M for 200ms default

set skip on lo

# we do NAT here
match out on egress inet from !(egress:network) to any nat-to (egress:0)

block all
pass out quick inet

# we apply the queue here, on web traffic as an example
pass in on $lan proto tcp port www set queue limited


### Per host?

As before, you can apply queues on an IP host/range rather than on protocols, or you can even mix both if you want.

### Warning

The queueing system changed in OpenBSD 5.5; everything you can read on the internet about ALTQ isn’t working anymore.

# Markup languages comparison

Written by Solène, on 13 April 2017.
Tags: #unix

Comments on Mastodon

For fun, here are a few examples of the same output in different markup languages. The list isn’t exhaustive of course.

This is org-mode:

* This is a title level 1

  + first item
  + second item
  + third item with a [[http://dataswamp.org][link]]

** title level 2

   Blah blah blah blah blah blah
   blah blah *bold* here

   #+BEGIN_SRC lisp
   (let ((hello (init-string)))
     (format t "~A~%" (+ 1 hello))
     (print hello))
   #+END_SRC


This is markdown:

# this is title level 1

+ first item
+ second item
+ third item with a [Link](http://dataswamp.org)

## Title level 2

Blah blah blah blah blah blah
blah blah **bold** here

    (let ((hello (init-string)))
      (format t "~A~%" (+ 1 hello))
      (print hello))

or

```
(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))
```


This is HTML:

<h1>This is title level 1</h1>
<ul>
  <li>first item</li>
  <li>second item</li>
  <li>third item with a <a href="http://dataswamp.org">link</a></li>
</ul>
<h2>Title level 2</h2>
<p>Blah blah blah blah blah blah blah blah <strong>bold</strong> here</p>
<pre><code>(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))</code></pre>


This is LaTeX:

\begin{document}
\section{This is title level 1}
\begin{itemize}
  \item First item
  \item Second item
  \item Third item
\end{itemize}
\subsection{Title level 2}

Blah blah blah blah blah blah blah blah \textbf{bold} here

\begin{verbatim}
(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))
\end{verbatim}
\end{document}


# OpenBSD 6.1 released

Written by Solène, on 11 April 2017.
Tags: #openbsd #unix

Comments on Mastodon

Today OpenBSD 6.1 has been released. I won’t copy & paste the change list but, in a few words, it gets better.

Link to the official announce

I already upgraded a few servers, with both methods. One with a bsd.rd upgrade, which requires physical access to the server, and the other with the method well explained in the upgrade guide, which requires untarring the files and moving some files around. I recommend using bsd.rd if possible.

# Connect to pfsense box console by usb

Written by Solène, on 10 April 2017.
Tags: #unix #network #openbsd66 #openbsd

Comments on Mastodon

Hello,

I have a pfsense appliance (Netgate 2440) with a usb console port. While it used to be a serial port, devices now seem to have a usb one. If you plug a usb wire from an openbsd box to it, you will see this in your dmesg:

uslcom0 at uhub0 port 5 configuration 1 interface 0 "Silicon Labs CP2104 USB to UART Bridge Controller" rev 2.00/1.00 addr 7
ucom0 at uslcom0 portno 0


To connect to it from OpenBSD, use the following command:

# cu -l /dev/cuaU0 -s 115200


And you’re done!

# List of useful tools

Written by Solène, on 22 March 2017.
Tags: #unix

Comments on Mastodon

Here is a list of software that I find useful; I will update this list every time I find a new tool.
This is not an exhaustive list; these are only software I enjoy using:

## Backup tools

• duplicity
• borg
• restore/dump

## File synchronization tools

• unison
• rsync
• lsyncd

## File sharing tools / “Cloud”

• boar
• nextcloud / owncloud
• seafile
• pydio
• syncthing (works as peer-to-peer without a master)
• sparkleshare (uses a git repository so I would recommend storing only text files)

## Editors

• emacs
• vim
• jed

## Web browsers using keyboard

• qutebrowser
• firefox with vimperator extension

## Todo list / Personal Agenda…

• org-mode (within emacs)
• ledger (accounting)

## Mail clients

• mu4e (inside emacs, requires the use of offlineimap or mbsync to fetch mails)

## Network

• curl
• bwm-ng (to see bandwidth usage in real time)
• mtr (traceroute with a gui that updates every n seconds)

## File integrity

• bitrot
• par2cmdline
• aide

## Image viewers

• sxiv
• feh

## Stuff

• entr (run a command when a file changes)
• rdesktop (RDP client to connect to Windows VM)
• xclip (read/set your X clipboard from a script)
• autossh (to create tunnels that stay up)
• mosh (connects to your ssh server with local input and better resilience)
• ncdu (watch file system usage interactively in cmdline)
• mupdf (PDF viewer)
• pdftk (PDF manipulation tool)
• x2x (share your mouse/keyboard between multiple computers through ssh)
• profanity (XMPP cmdline client)
• prosody (XMPP server)
• pgmodeler (PostgreSQL database visualization tool)

# How to check your data integrity?

Written by Solène, on 17 March 2017.
Tags: #unix #security

Comments on Mastodon

Today, the topic is data degradation, bit rot, bitrotting, damaged files or whatever you call it. It’s when your data gets corrupted over time, due to a disk fault or some unknown reason.

# What is data degradation?

I shamelessly paste one line from wikipedia: “Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device.
The phenomenon is also known as data decay or data rot.”

Data degradation on Wikipedia

So, how do we know we encountered bit rot?

bit rot = (checksum changed) && NOT (modification time changed)

While updating a file could be mistaken for bit rot, there is a difference:

update = (checksum changed) && (modification time changed)

# How to check if we encounter bitrot ?

There is no way to prevent bitrot. But there are some ways to detect it, so you can restore a corrupted file from a backup, or repair it with the right tool (you can’t repair a file with a hammer, except if it’s some kind of HammerFS ! :D )

In the following I will describe software I found to check (or even repair) bitrot. If you know other tools which are not in this list, I would be happy to hear about them, please mail me.

In the following examples, I will use this method to generate bitrot on a file:

% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% generate_checksum_database_with_tool
% echo "a" >> my_data/some_file_that_will_be_corrupted
% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% start_tool_for_checking

We generate the checksum database, then we alter a file by adding an “a” at the end and we restore the modification and access time of the file. Then, we start the tool to check for data corruption. The first touch is only for convenience; we could get the modification time with the stat command and pass the same value to touch after modifying the file.

## bitrot

This is a python script and it’s very easy to use. It will scan a directory and create a database with the checksum of the files and their modification date.

Initialization usage:

% cd /home/my_data/
% bitrot
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 189 new, 0 updated, 0 renamed, 0 missing.
Updating bitrot.sha512... done.
% echo $?
0


Verify usage (case OK):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
% echo $?
0

Exit status is 0, so our data are not damaged.

Verify usage (case Error):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
error: SHA1 mismatch for ./sometextfile.txt: expected 17b4d7bf382057dc3344ea230a595064b579396f, got db4a8d7e27bb9ad02982c0686cab327b146ba80d. Last good hash checked on 2017-03-16 21:04:39.
Finished. 199.41 MiB of data read. 1 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
error: There were 1 errors found.
% echo $?
1


This is what it looks like when something is wrong. As the exit status of bitrot isn’t 0 when it finds an error, it’s easy to write a script running it every day/week/month.
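Such a periodic job can be sketched as a tiny wrapper (the mail command and names are hypothetical) that stays silent while the data is intact and forwards the report when the checker exits non-zero:

```shell
#!/bin/sh
# Hypothetical cron wrapper: run a checker, print its report only on failure.
run_check() {
    log=$(mktemp)
    if "$@" > "$log" 2>&1; then
        rm -f "$log"     # everything fine: stay silent
        return 0
    else
        cat "$log"       # or: mail -s "bitrot report" you@example.com < "$log"
        rm -f "$log"
        return 1
    fi
}
```

Called from a weekly cron script in the data directory (run_check bitrot), it only produces output — and therefore cron mail — when corruption is found.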

Github page

bitrot is available in OpenBSD ports in sysutils/bitrot since 6.1 release.

## par2cmdline

This tool works with PAR2 archives (see the links below for more information about the PAR format) and, from them, it is able to check your data integrity AND repair it.

While it has some pros, like being able to repair data, the con is that it’s not very easy to use. I would use it for checking the integrity of long-term archives that won’t change. The main drawback comes from the PAR specifications: the archives are created from a file list, so if you have a directory with your files and you add new files, you need to recompute ALL the PAR archives because the file list changed, or create new PAR archives only for the new files, which makes the verify process more complicated. It doesn’t seem practical to create new archives for every bunch of files added to the directory.

PAR2 lets you choose the percentage of a file you will be able to repair; by default it creates the archives to be able to repair up to 5% of each file. That means you don’t need a whole copy of the files (although skipping backups would be a bad idea), only approximately an extra 5% of your data to store.

Create usage:

% cd /home/
% par2 create -a integrity_archive -R my_data
Skipping 0 byte file: /home/my_data/empty_file

Block size: 3812
Source file count: 17
Source block count: 2000
Redundancy: 5%
Recovery block count: 100
Recovery file count: 7

Opening: my_data/[....]
[text cut here]
Opening: my_data/[....]

Computing Reed Solomon matrix.
Constructing: done.
Wrote 381200 bytes to disk
Writing recovery packets
Writing verification packets
Done

% echo $?
0
% ls -1
integrity_archive.par2
integrity_archive.vol000+01.par2
integrity_archive.vol001+02.par2
integrity_archive.vol003+04.par2
integrity_archive.vol007+08.par2
integrity_archive.vol015+16.par2
integrity_archive.vol031+32.par2
integrity_archive.vol063+37.par2
my_data

Verify usage (OK):

% par2 verify integrity_archive.par2
Loading "integrity_archive.par2".
Loaded 36 new packets
Loading "integrity_archive.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.

All files are correct, repair is not required.
% echo $?
0


Verify usage (with error):

par2 verify integrity_archive.par.par2
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:

Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.

% echo $?
1

Repair usage:

% par2 repair integrity_archive.par.par2
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:

Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.

Wrote 361069 bytes to disk

Verifying repaired files:

Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - found.

Repair complete.
% echo $?
0


par2cmdline is only one implementation doing the job; other tools working with PAR archives exist, and they should all be able to work with the same PAR files.

Parchive on Wikipedia

Github page

par2cmdline is available in OpenBSD ports in archivers/par2cmdline.

If you find a way to add new files to existing archives, please mail me.

## mtree

One can write a little script using mtree (in the base system on OpenBSD and FreeBSD) which creates a file with the checksum of every file in the specified directories. If the mtree output differs since last time, we can send a mail with the difference. This process is done in the base install of OpenBSD for /etc and some other files, to warn you if they changed.
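The manifest idea can be sketched portably with POSIX tools (mtree does this natively and better; the paths here are throwaway ones created just for the demonstration):

```shell
# Build a checksum manifest of a directory, then diff a later run
# against it: any difference means a file changed behind our back.
dir=$(mktemp -d)
echo "hello" > "$dir/file"

# Snapshot: one cksum line per file, sorted for stable comparisons.
(cd "$dir" && find . -type f -exec cksum {} + | sort) > "$dir.manifest"

echo "tampered" >> "$dir/file"      # simulate a corrupted file
(cd "$dir" && find . -type f -exec cksum {} + | sort) > "$dir.new"

cmp -s "$dir.manifest" "$dir.new" || echo "files changed"
```

In a real script, a mail command would replace the echo to get the warning described above.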

While it’s suited for directories like /etc, in my opinion this is not the best tool for integrity checking.

## ZFS

I would like to talk about ZFS and data integrity because this is where ZFS is very good. If you are using ZFS, you may not need any other software to take care of your data. When you write a file, ZFS also stores its checksum as metadata. By default, the “checksum” option is activated on datasets, but you may want to disable it for better performance.

There is a command to ask ZFS to check the integrity of the files. Warning: scrub is very I/O intensive and can take from hours to days or even weeks to complete, depending on your CPU, disks and the amount of data to scrub:

# zpool scrub zpool


The scrub command recomputes the checksum of every block in the ZFS pool; if something is wrong, it tries to repair it if possible. A repair is possible in the following cases:

If you have multiple disks, like raid-Z or raid-1 (mirror), ZFS will look on the different disks for a non-corrupted version of the data; if it finds one, it restores it on the disk(s) where it’s corrupted.

If you have set the ZFS option “copies” to 2 or 3 (1 = default), each file of the dataset is written 2 or 3 times on the disk, so take care if you want to use it on a dataset containing heavy files! If ZFS finds that a version of a file is corrupted, it checks the other copies and tries to restore the corrupted file if possible.
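Setting it could look like this (the pool/dataset names are hypothetical, and the property only applies to data written after it is set); the sketch skips silently where the zfs tools are absent:

```shell
# Hypothetical: store every block of this dataset twice on the same pool.
command -v zfs >/dev/null 2>&1 || exit 0
zfs set copies=2 tank/important    # future writes get two copies
zfs get copies tank/important      # confirm the property value
```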

You can see the percentage of filesystem already scrubbed with

zpool status zpool


and the scrub can be stopped with

zpool scrub -s zpool


### AIDE

Its name is an acronym for “Advanced Intrusion Detection Environment”; it’s a complicated piece of software which can be used to check for bitrot. I would not recommend using it if you only need bitrot detection.

Here are a few hints if you want to use it for checking your file integrity:

/etc/aide.conf

/home/my_data/ R
# Rule definition
All=m+s+i+sha256
summarize_changes=yes


This config file will create a database of all files in /home/my_data/ (R means recursive). The “All” line lists the checks done on each file. For bitrot checking, we want to check the modification time, size, checksum and inode of the files. The summarize_changes line permits getting a list of changes if something is wrong.

This is the most basic config file you can have. Then you run aide once to create the database, and later run it again to create a new database and compare the two. It doesn’t update its database itself: you have to move the old database aside and tell aide where to find it.
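The cycle could be sketched as follows (flags per aide’s manual; the database locations vary by build, so the paths here are assumptions). The sketch skips silently where aide is not installed:

```shell
# Hypothetical aide rotation: init, promote the database, then check.
command -v aide >/dev/null 2>&1 || exit 0
aide --init                               # writes a fresh database
mv /var/db/aide.db.new /var/db/aide.db    # promote it as the reference
aide --check                              # compare filesystem vs reference
```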

# My use case

I have different kinds of data. On one side, I have static data like pictures, clips or music, things that won’t change over time; on the other side I have my mails, documents and folders whose content changes regularly (creation, deletion, modification). I can afford a backup of 100% of my data with a few days of history, so I am not interested in file repairing.

I want to be warned quickly if a file gets corrupted, so I can still find it in my backup history, as I don’t keep every version of my files for long. I chose the python tool bitrot: it’s very easy to use and it doesn’t become a mess with my folders being updated often.

I would go with par2cmdline if I could not back up all my data. Having 5% or 10% of redundancy should be enough to restore files in case of corruption without taking too much space.

# Port of the week: rss2email

Written by Solène, on 24 January 2017.
Tags: #portoftheweek #unix #email

Comments on Mastodon

This is the kind of Port of the week I like: a piece of software I just discovered and fell in love with. The tool r2e, which is the port mail/rss2email on OpenBSD, is a small python utility that solves a problem: how to deal with RSS feeds?

Until last week, I was using a “web app” named selfoss which aggregated my RSS feeds and displayed them on a web page; I was able to filter by read/unread/marked and also by source. It is a good tool that does the job well, but I wanted something that doesn’t rely on a web browser. Here comes r2e !

This simple software will send you a mail for each new entry in your RSS feeds. It’s really easy to configure and set-up. Just look at how I configured mine:

$ r2e new my-address+rss@my-domain.com
$ r2e add "http://undeadly.org/cgi?action=rss"
$ r2e add "https://dataswamp.org/~solene/rss.xml"
$ r2e add "https://www.dragonflydigest.com/feed"
$ r2e add "http://phoronix.com/rss.php"

Add this in your crontab to check for new RSS items every 10 minutes:

*/10 * * * * /usr/local/bin/r2e run

Add a rule for my-address+rss to store those mails in a separate folder, and you’re done !

NOTE: you can use r2e run --no-send the first time, it will create the database without sending you mails for the items currently in the feeds.

# Dovecot: folder appears empty

Written by Solène, on 23 January 2017.
Tags: #email

Comments on Mastodon

Today I encountered an issue unknown to me with my IMAP server dovecot. In the roundcube mail web client, my Inbox folder appeared empty after reading a mail. My Android mail client K9-Mail was displaying “IOException:readStringUnti….” when trying to synchronize this folder.

I solved it easily by connecting to my server with SSH, cd-ing into the maildir directory and, in the Inbox folder, renaming dovecot.index.log to dovecot.index.log.bak (you can remove it if that fixes the problem). And now, the mails are back.

This is the very first time I have a problem of this kind with dovecot…

# New cl-yag version

Written by Solène, on 21 January 2017.
Tags: #lisp #cl-yag

Comments on Mastodon

Today I updated my tool cl-yag, which implies a slight change on my website. Now, at the top of this blog, you can see a link “Index of articles”. This page only displays article titles, without any text from the articles.

Cl-yag is a tool to generate static websites like this one. It’s written in Common LISP. As a reminder, it’s also capable of producing both html and gopher output now.

If you don’t know what Gopher is, you will learn a lot reading the following links: Wikipedia : Gopher (Protocol) and Why is gopher still relevant.

# Let's encrypt on OpenBSD in 5 minutes

Written by Solène, on 20 January 2017.
Tags: #security #openbsd66 #openbsd

Comments on Mastodon

Let’s Encrypt is a free service which provides free SSL certificates. It is fully automated and there are a few tools to generate your certificates with it.
In the following lines, I will explain how to get a certificate in a few minutes. You can find more information on the Let’s Encrypt website.

To make it simple, the tool we will use generates some keys on the computer and sends a request to the Let’s Encrypt service, which uses an http challenge (there are also dns and other kinds of challenges) to check that you really own the domain for which you want the certificate. If the challenge process succeeds, you get the certificate.

Please, if you don’t understand the following commands, don’t type them.

While the following is right for OpenBSD, it may change slightly for other systems. Acme-client is part of the base system; you can read the man page acme-client(1).

## Prepare your http server

For each domain of the certificate you request, you will be challenged on port 80. A file must be made available under the path “/.well-known/acme-challenge/”. You must have this in your httpd config file; if you use another web server, you need to adapt:

server "mydomain.com" {
	root "/empty"
	listen on * port 80
	location "/.well-known/acme-challenge/*" {
		root { "/acme/" , request strip 2 }
	}
}

The request strip 2 part is IMPORTANT. (I’ve lost 45 minutes figuring out why root “/acme/” wasn’t working.)

## Prepare the folders

As stated in the acme-client man page, if you don’t need to change the paths, you can do the following commands with root privileges:

# mkdir /var/www/acme
# mkdir -p /etc/ssl/acme/private /etc/acme
# chmod 0700 /etc/ssl/acme/private /etc/acme

## Request the certificates

As root, type the following to generate the certificates. The verbose flag is interesting: you will see if the challenge step works. If it doesn’t, try to fetch manually a file at the same path Let’s Encrypt tried, and run the command again once that succeeds.

$ acme-client -vNn mydomain.com www.mydomain.com mail.mydomain.com


## Use the certificates

Now, you can use your SSL certificates for your mail server, imap server, ftp server, http server… There is a little drawback: if you generate one certificate for a lot of domains, they are all written in the certificate. This implies that if someone visits one page and looks at the certificate, this person will know every domain you have under SSL. I think it’s possible to request every certificate independently, but you will have to play with acme-client flags and write some kind of script to automate this.
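I have not tested it, but splitting per domain could be sketched as a loop (domains illustrative; the echo makes the sketch only print the commands it would run, and each certificate would also need its own output paths, see acme-client(1)):

```shell
# Hypothetical: one certificate per domain instead of one SAN certificate.
domains="mydomain.com www.mydomain.com mail.mydomain.com"
for d in $domains; do
    echo acme-client -vNn "$d"   # drop 'echo' to actually request it
done
```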

The certificate file is located at /etc/ssl/acme/fullchain.pem and contains the full certification chain (as its name suggests). The private key is located at /etc/ssl/acme/private/privkey.pem.

Restart the services using the certificates.

## Renew certificates

Certificates are valid for 3 months. Just type

./acme-client mydomain.com www.mydomain.com mail.mydomain.com


Restart your ssl services

EASY !
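Renewal can go in cron; a hypothetical weekly crontab entry (domains illustrative) would look like the following. As far as I know, acme-client exits non-zero when the certificate did not change, so the reload only happens after an actual renewal:

```shell
# Hypothetical /etc/crontab line: try a renewal every Sunday at 03:00
# and reload httpd only if the certificate was actually updated.
0 3 * * 0 acme-client mydomain.com www.mydomain.com mail.mydomain.com && rcctl reload httpd
```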

# How to use ssh tramp on Emacs Windows?

Written by Solène, on 18 January 2017.
Tags: #emacs #windows

Comments on Mastodon

If you are using emacs under Microsoft Windows and you want to edit remote files through SSH, it’s possible to do it without Cygwin. Tramp can use the tool “plink” from the putty tools to do ssh.

What you need is to get “plink.exe” from the following page and put it into your $PATH, or choose the installer which will install all the putty tools.

Putty official website

Then, edit your emacs file to add the following lines, to tell it that you want to use plink with tramp:

(require 'tramp)
(set-default 'tramp-default-method "plink")

Now, you can edit your remote files, but you will need to type your password. I think that in order to get password-less access with ssh keys, you would need to use the putty key agent.

# Convert mailbox to maildir with dovecot

Written by Solène, on 17 January 2017.
Tags: #unix #email

Comments on Mastodon

I have been using the mbox format for a few years on my personal mail server. For those who don’t know what mbox is, it consists of only one file per folder you have in your mail client, each file containing all the mails of the corresponding folder. It’s extremely inefficient when you back up the mail directory, because everything must be copied each time. It also reduces the caching possibilities of the server: folders containing lots of mails with attachments may not fit in the cache.

Instead, I switched to maildir, a format where every mail is a regular file on the file system. This takes a lot of inodes but, at least, it’s easier to back up or to deal with for analysis.

Here is how to switch from mbox to maildir with a dovecot tool:

# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox

That’s all ! In this case, my mbox folder was ~/mail/ and my INBOX file was ~/mail/inbox. It took me some time to find where my INBOX really was; at first I tried a few things that didn’t work, then tried a perl conversion tool named mb2md.pl which was able to extract some stuff, but a lot of mails were broken. So I went back to getting dsync working.
If you want to migrate, the whole process looks like:

# service smtpd stop

Modify dovecot/conf.d/10-mail.conf, replace the first line:

mail_location = mbox:~/mail:INBOX=/var/mail/%u # BEFORE
mail_location = maildir:~/maildir              # AFTER

# service dovecot restart
# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox
# service smtpd start

# Port of the week: entr

Written by Solène, on 07 January 2017.
Tags: #unix

Comments on Mastodon

entr is a command line tool that lets you run an arbitrary command on file change. This is useful when you are doing something that requires some processing when you modify it.

Recently, I have used it to edit a man page. At first, I had to run mandoc each time I modified the file to check the rendering. This was the first time I edited a man page, so I had to modify it a lot to get what I wanted. I remembered about entr, and this is how you use it:

$ ls stagit.1 | entr mandoc /_


This simple command runs “mandoc stagit.1” each time stagit.1 is modified. The file names must be given to entr on stdin, and the character sequence /_ is replaced by the file that changed (like {} in find).

The man page of entr is very well documented if you need more examples.

# Emacs 25: save cursor position

Written by Solène, on 08 December 2016.
Tags: #emacs

Comments on Mastodon

Since I upgraded to Emacs 25 it no longer saved my last cursor position in edited files. This is a feature I really like because I often open and close emacs rather than keeping it running.

Before (< emacs 25)

(setq save-place-file "~/.emacs.d/saveplace")
(setq-default save-place t)
(require 'saveplace)


Emacs 25

(save-place-mode t)
(setq save-place-file "~/.emacs.d/saveplace")
(setq-default save-place t)


That’s all :)

# Port of the week: dnscrypt-proxy

Written by Solène, on 19 October 2016.
Tags: #unix #security #portoftheweek

Comments on Mastodon

### 2020 Update

Now that unwind on OpenBSD and unbound support DNS over TLS or DNS over HTTPS, dnscrypt has lost a bit of relevance, but it’s still usable and a good alternative.

### Dnscrypt

Today I will talk about net/dnscrypt-proxy. It lets you encrypt your DNS traffic between your resolver and the remote DNS recursive server. More and more countries and internet providers use DNS to block some websites, and now they tend to do “man in the middle” with DNS answers, so you can’t just use a remote DNS server you found on the internet. While a remote dnscrypt DNS server can still be affected by such a “man in the middle” hijack, there is very little chance DNS traffic gets altered in datacenters / dedicated server hosting.

This article also deals with unbound as a DNS cache, because dnscrypt is a bit slow and asking for the same domain multiple times in a few minutes is a waste of cpu/network/time for everyone. So I recommend setting up a DNS cache on your side (which also permits using it on a LAN).

At the time I write this article, there is a very good explanation about how to install it in the file named dnscrypt-proxy-1.9.5p3 in the folder /usr/local/share/doc/pkg-readmes/. The following article is based on this file. (Article updated at the time of OpenBSD 6.3)

While I write for OpenBSD, this can easily be adapted to anything else Unix-like.

### Install dnscrypt

# pkg_add dnscrypt-proxy


### Resolv.conf

Modify your resolv.conf file to this

/etc/resolv.conf :

nameserver 127.0.0.1
lookup file bind
options edns0


### When using dhcp client

If you use dhcp to get an address, you can add the following line to the dhclient config file to force 127.0.0.1 as the nameserver. Beware: if you use it, when upgrading the system from bsd.rd you will get 127.0.0.1 as your DNS server but no service running.

/etc/dhclient.conf :

supersede domain-name-servers 127.0.0.1;


### Unbound

Now, we need to modify the unbound config to tell it to forward DNS queries to 127.0.0.1 port 40. Please adapt your config, I will just add what is mandatory. The unbound configuration file isn’t in /etc because unbound is chrooted.

/var/unbound/etc/unbound.conf:

server:
	# this line is MANDATORY
	do-not-query-localhost: no

forward-zone:
	name: "."
	forward-addr: 127.0.0.1@40   # address dnscrypt listens on


If you want to allow others to resolve through your unbound daemon, please see the interface and access-control parameters. You will need to tell unbound to bind on external interfaces and to allow requests on them.

### Dnscrypt-proxy

Now we need to configure dnscrypt. Pick a server in the list /usr/local/share/dnscrypt-proxy/dnscrypt-resolvers.csv; the name is the first column.

As root, type the following (or use doas/sudo); in this example we choose dnscrypt.eu-nl as the DNS provider:

# rcctl enable dnscrypt_proxy
# rcctl set dnscrypt_proxy flags -E -m1 -R dnscrypt.eu-nl -a 127.0.0.1:40
# rcctl start dnscrypt_proxy


### Conclusion

You should now be able to resolve addresses through dnscrypt. You can use tcpdump on your external interface to check that no traffic shows up on udp port 53 anymore.

If you want to use dig hostname -p 40 @127.0.0.1 to make DNS requests to dnscrypt without unbound, you will need net/isc-bind, which provides /usr/local/bin/dig. OpenBSD base dig can’t use a port other than 53.

# How to publish a git repository on http

Written by Solène, on 07 October 2016.
Tags: #unix #git

Comments on Mastodon

Here is a how-to for making a git repository available for cloning through a simple http server. This method only allows people to fetch the repository, not push to it. I set this up to share my code; I don’t plan to receive commits from other people at this time, so it’s enough.

In a folder publicly available from your http server, clone your repository in bare mode. As explained in the [man page](https://git-scm.com/book/tr/v2/Git-on-the-Server-The-Protocols):

$ cd /var/www/htdocs/some-path/
$ git clone --bare /path/to/git_project gitproject.git
$ cd gitproject.git
$ git update-server-info
$ mv hooks/post-update.sample hooks/post-update
$ chmod o+x hooks/post-update


Then you will be able to clone the repository with

$ git clone https://your-hostname/some-path/gitproject.git

I’ve lost time because I did not execute git update-server-info, so the clone wasn’t possible.

# Port of the week: rlwrap

Written by Solène, on 04 October 2016.
Tags: #unix #shell #portoftheweek

Comments on Mastodon

Today I will present misc/rlwrap, a utility for command-line software which doesn’t provide a nice readline input. By using rlwrap, you can use telnet, a language REPL or any command-line tool where you input text, with a history of what you type and the ability to use emacs bindings like C-a, C-e, M-Ret, etc… I use it often with telnet or sbcl.

Usage:

$ rlwrap telnet host port


# Common LISP: How to open an SSL / TLS stream

Written by Solène, on 26 September 2016.
Tags: #lisp #network

Comments on Mastodon

Here is a tiny piece of code to get a connection to an SSL/TLS server. I am writing an IRC client and an IRC bot too, and it’s better to connect through a secure channel.

This requires usocket and cl+ssl:

(usocket:with-client-socket (socket stream *server* *port*)
  (let ((ssl-stream (cl+ssl:make-ssl-client-stream stream
                      :external-format '(:iso-8859-1 :eol-style :lf)
                      :unwrap-stream-p t
                      :hostname *server*)))
    (format ssl-stream "hello there !~%")
    (force-output ssl-stream)))


# Port of the week: stumpwm

Written by Solène, on 21 September 2016.
Tags: #window-manager #portoftheweek #lisp

Comments on Mastodon

When I started the Port of the week articles, I was planning to write one every week, but now I don’t have many ports to speak about.

Today is about x11/stumpwm ! I wrote about this window manager earlier. It’s now available in OpenBSD since 6.1 release.

# Redirect stdin into a variable in shell

Written by Solène, on 12 September 2016.
Tags: #shell #unix

Comments on Mastodon

If you want to write a script reading stdin and putting it into a variable, there is a very easy way to proceed:

#!/bin/sh
var=$(cat)
echo "$var"

That’s all !

# Android phone and Unix

Written by Solène, on 06 September 2016.
Tags: #android #emacs

Comments on Mastodon

If you have an Android phone, here are two things you may like:

### Org-mode <=> Android

First is the MobileOrg app, to synchronize your calendar/tasks between your computer org-mode files and your phone. I have been using org-mode for a few months; I do pretty basic things with it, like keeping a todo list with a deadline for each item. Having it in my phone calendar is a good enhancement. I can also add todo items from my phone and have them show up on my computer.

The phone and your computer get synced by publishing a special format of org files for the mobile on a remote server. MobileOrg supports ssh, webdav, dropbox or the sdcard. I’m using ssh because I own a server and can reliably have my things connected together there on a dedicated account. Emacs then uses tramp to publish/retrieve the files.

Official MobileOrg website

MobileOrg on Google Play

### Read/Write sms from a remote place

The second useful thing I like with my Android phone is being able to write and send sms (+ some other things, but I was mostly interested in SMS) from my computer. A few services already exist, but they work with a “cloud” logic and I don’t want my phone connected to one more service. The MAXS app provides what I need: the ability to read/write the sms of my phone from the computer, without a web browser, relying on my own services.

MAXS connects the phone to an XMPP account, and you set a whitelist of XMPP addresses able to send commands, that’s all. Here are a few examples of use. To write an SMS, I just need to speak to the jabber account of my phone and write:

sms send firstname lastname  hello how are you ?

Be careful, there are 2 spaces after the lastname ! I think it’s like this so MAXS can easily make the difference between the name and the message.
I can also reply quickly to the last contacted person:

reply to Yes I'm answering from my computer

To read the last n sms:

sms read n

It’s still not perfect, because sometimes it loses connectivity and you can’t speak with it anymore, but according to the project author it’s not a problem seen on every phone. I did not have time yet to report the problem precisely (I need to play with the Android Debug Bridge for that).

If you want to install MAXS, you will need a few apps from the store to get it working. First, you will need MAXS main and MAXS transport (a plugin to use XMPP), and then plugins for the different commands you want, so, maybe, smsread and smswrite. Check their website for more information.

As presented earlier on my website, I use profanity as my XMPP client. It’s a light and easy to configure/use console client.

Official MAXS Website

MAXS on Google Play

# How to kill processes by their name

Written by Solène, on 25 August 2016.
Tags: #unix

Comments on Mastodon

If you want to kill a process by its name instead of its PID number, which is easier when you have to kill several processes of the same binary, here are the commands depending on your operating system:

FreeBSD / Linux

$ killall pid_name


OpenBSD

$ pkill pid_name

Solaris

Be careful with Solaris killall: with no argument, the command will send a signal to every active process, which is not something you want.

$ killall pid_name
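Whatever the system, I find it safer to list the matching processes before signalling them. A small sketch using pgrep/pkill (the sleep process is just a stand-in):

```shell
# Start a throwaway process to act on
sleep 60 &

# First list the matching processes to see what would be hit
pgrep -l sleep

# Then kill every process whose name matches exactly
pkill -x sleep
```

The -x flag asks for an exact name match, which avoids killing processes whose names merely contain the pattern.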


# Automatically mute your Firefox tab

Written by Solène, on 17 August 2016.
Tags: #firefox

Comments on Mastodon

At work, the sound of my laptop is not muted because I need sound from time to time. But browsing the internet with Firefox can sometimes trigger undesired sounds, which is very annoying in the office. The extension Mute Tab auto-mutes new tabs in Firefox so they won't play sound. The auto-mute must be activated in the plugin options; it's unchecked by default.

You can find it here, no restart required: Firefox Mute Tab addon

I also use FlashStopper, which blocks Flash and HTML5 videos by default so they don't autoplay; you click on a video to activate it.

Firefox FlashStopper addon

# Port of the week: pwgen

Written by Solène, on 12 August 2016.
Tags: #security #portoftheweek

Comments on Mastodon

I will talk about security/pwgen for the current port of the week. It’s a very light executable to generate passwords. But it’s not just a dumb password generator, it has options to choose what kind of password you want.

Here is a list of options with their flag, you will find a lot more in the nice man page of pwgen:

• -A : don’t use capital letters
• -B : don’t use characters which could be misread (O/0, I/l/1 …)
• -v : don’t use vowels
• etc…

You can also use a seed to generate your “random” passwords (which aren’t very random in this case); you may need it to be able to reproduce a password you lost for an ftp/http access, for example.
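On the pwgen versions I have seen, seeding goes through the -H flag, which derives the randomness from the sha1 of a file plus an optional seed word; a sketch, with the file path and seed word as examples (check the exact syntax in your man page):

```shell
# Same file + same seed word should reproduce the same "random" passwords
pwgen -H /etc/motd#mysite 10 5
```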

Example of pwgen output generating 5 passwords of 10 characters. Using the -1 parameter so it displays only one password per line; otherwise it displays a grid (columns and multiple lines) of passwords.

$ pwgen -1 10 5
fohchah9oP
haNgeik0ee
meiceeW8ae
OReejoi5oo
ohdae2Eisu

# Website now compatible gopher !

Written by Solène, on 11 August 2016.
Tags: #gopher #network #lisp

Comments on Mastodon

My website is now available with the Gopher protocol! I really like this protocol. If you don’t know it, I encourage you to read this page: Why is Gopher still relevant?.

This has been made possible by modifying the tool generating the website pages to make it generate gopher-compatible pages. This was a bit of work but I am now proud to have it working. I have also made a “big” change in the generator: it now relies on a “markdown-to-html” tool, which saddens me a bit. Before that, I was using ham-mode in emacs, which converted html on the fly to markdown so I could edit in markdown, and exported back to html on save. This had pros and cons. Nothing more than a lisp interpreter was needed on the system generating the files, but I sometimes struggled with ham-mode because the conversion was destructive. Multiple edits in a row of the same file would break code blocks, because they weren’t exported the same way each time, until it wasn’t a code block anymore. There are some articles that I update from time to time to keep them up-to-date or to fix an error, and it was boring to fix the code every time. Having the original markdown text was mandatory for the gopher export, and it is now easier to edit with any tool.

There is a link to my gopher site on the right of this page. You will need a gopher client to connect to it. There is a working Android client, and Firefox can use an extension to become compatible (gopher support was native before it was dropped). You can find a list of clients on Wikipedia.

Gopher is nice, don’t let it die.

# Port of the week: feh

Written by Solène, on 08 August 2016.
Tags: #portoftheweek

Comments on Mastodon

Today I will talk about graphics/feh, a tool to view pictures which can also be used to set an image as background.
I use this command line, invoked by stumpwm when my session starts, so I can have a nice background with cubes :)

$ feh --bg-scale /home/solene/Downloads/cubes.jpg


feh has a lot of options and is really easy to use. I still prefer sxiv for viewing, but I use feh for my background.

# Port of the week: Puddletag

Written by Solène, on 20 July 2016.
Tags: #portoftheweek

Comments on Mastodon

If you ever need to modify the tags of your music library (made of MP3s), I would recommend audio/puddletag. This tool lets you see all your music metadata like a spreadsheet; just modify the cells to change the artist name, title, etc. You can also select multiple cells, type one text, and it will be applied to all the selected cells. There is also a tool to extract data from the filename with a regex. This tool is very easy and pleasant to use.

There is an option in the configuration panel that is good to be aware of: by default, when you change the tag of a file, the modification time isn't changed, so if you use some kind of backup relying on the modification time, the file won't be synchronized. In the configuration panel, you will find an option to check which will bump the modification timestamp when you change a tag on a song.

# Port of the week: Profanity

Written by Solène, on 12 July 2016.
Tags: #portoftheweek #network

Comments on Mastodon

Profanity is a command-line ncurses-based XMPP (Jabber) client. It's easy to use and its interface seems inspired by irssi. It's available in net/profanity.

It’s really easy to use and the documentation on its website is really clear.

To log in, just type /connect myusername@mydomain and, after the password prompt, you will be connected. Easy.

Profanity official website

# Stop being tracked by Google search with Firefox

Written by Solène, on 04 July 2016.
Tags: #security #web

Comments on Mastodon

When you use Google search and you click on a link, you are redirected to a Google server that saves your navigation choice from their search engine into their database.

1. This is bad for your privacy
2. This slows down the use of the search engine, because you go through a redirection (that you don’t see) when you want to visit a link

There is a firefox extension that will fix the links in the results of the search engine so when you click, you just go on the website without saying “hello Google I clicked there”: Google Search Link Fix

You can also use another search engine if you don’t like Google. I keep it because I get the best results when searching technical topics. I tried Yahoo, Bing, Exalead, Qwant and DuckDuckGo, each one for a few days, and Google has the best results so far.

# Port of the week: OpenSCAD

Written by Solène, on 04 July 2016.
Tags: #portoftheweek

Comments on Mastodon

OpenSCAD is a software for creating 3D objects with a programming language, with the possibility to preview your creation.

I am personally interested in 3D things; I have been playing with 3ds Max and Blender for creating 3D objects, but I never felt really comfortable with them. I discovered pov-ray a few years ago, which is used to create rendered pictures instead of objects. Pov-ray uses its own “programming language” to describe the scene and make the render. Now, I have a 3D printer and I would like to create things to print, but I don’t like the GUI stuff of Blender, and pov-ray doesn’t create objects, so… OpenSCAD! This is the pov-ray of objects!

Here is a simple example that creates an empty box (difference of 2 cubes) and a screw propeller:

width = 3;
height = 3;
depth = 6;
thickness = 0.2;

difference() {
    cube( [width,depth,height], true);

    translate( [0,0,thickness] )
        cube( [width-thickness, depth-thickness, height], true);
}

translate( [ width , 0 , 0 ])
linear_extrude(twist = 400, height = height*2)
square(2,true);


The following picture is made from the code above:

There are scad-mode and scad-preview for emacs for editing OpenSCAD files. scad-mode provides syntax coloration and checking, and scad-preview renders the OpenSCAD scene inside an Emacs pane. Personally, I keep OpenSCAD open in a corner of the screen with the option set to render on file change, and I edit with emacs. Of course you can use any editor, or the embedded editor, which is a Scintilla one and pretty usable.

OpenSCAD website

OpenSCAD gallery

# Port of the week: arandr

Written by Solène, on 27 June 2016.
Tags: #portoftheweek

Comments on Mastodon

Today the Port of the week is x11/arandr, a very simple tool to set up your screen display when using multiple monitors. It's very handy when you want to make something complicated or don't want to use xrandr on the command line. There is not much more to say because it's very easy to use!

It can generate your current configuration as a script that you will find under the ~/.screenlayout/ directory. This is quite useful to configure your screens from your ~/.xsession file in case a monitor is connected.

xrandr | grep "HDMI-2 connected" && .screenlayout/dual-monitor.sh


If HDMI-2 has a screen connected when I log in to my session, I will have my dual-monitor setup!
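As a sketch, that ~/.xsession line could be made a bit more defensive by falling back to a single-screen layout when the monitor is absent (the script name is the one generated above; the fallback is my assumption):

```shell
# Hypothetical ~/.xsession fragment: pick a layout depending on what is plugged in
if xrandr | grep -q "HDMI-2 connected"; then
    ~/.screenlayout/dual-monitor.sh
else
    xrandr --auto    # sane single-screen default
fi
```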

# Port of the week: x2x

Written by Solène, on 23 June 2016.
Tags: #portoftheweek

Comments on Mastodon

Port of the week is now presenting x2x, which stands for X to X connection. This is a really tiny tool, a single executable file, that lets you move your mouse and use your keyboard on another X server than yours. It's like the tool synergy, but easier to use and open source (I think synergy isn't open source anymore).

If you want to use the computer on your left, just use the following command (x2x must be installed on it and ssh available):

$ ssh -CX the_host_address "x2x -west -to :0.0"

and then you can move your cursor to the left of your screen, and you will see that you can use your cursor or type with the keyboard on your other computer! I am using it to manage a wall of screens made of first-generation Raspberry Pis. I used to connect to them with VNC but it was very, very slow.

# Git cheat sheet

Written by Solène, on 08 June 2016.
Tags: #cheatsheet #git

Comments on Mastodon

Here is my git cheat sheet! Because I don’t like git, I never remember how to do X or Y with it, so I need to write down simple commands! (I am used to darcs and mercurial, but with the “git trend” I need to learn it and use it.)

### Undo uncommitted changes on a tracked file

$ git reset --hard


$ git pull

### Make a commit containing all tracked files

$ git commit -m "Commit message" -a


$ git push

# How to send html signature in mu4e

Written by Solène, on 07 June 2016.
Tags: #email #emacs

Comments on Mastodon

I switched to mu4e to manage my mails at work, and also to send mails. But in our corporation we all have a signature that includes our logo and some hypertext links, so I couldn’t just insert my signature and be done with it. There is a simple way to deal with this problem: I fetched the html part of my signature (which includes an image in base64) and pasted it into my emacs config file this way:

(setq mu4e-compose-signature
      "<#part type=text/html><html><body><p>Hello ! I am the html signature which can contains anything in html !</p></body></html><#/part>")

I pasted my signature instead of the hello world text of course, but you only have to use the part tag and you are done! The rest of your mails will be plain text, except this part.

# My Stumpwm config on OpenBSD

Written by Solène, on 06 June 2016.
Tags: #window-manager #lisp

Comments on Mastodon

I want to talk about stumpwm, a window manager written in Common LISP. I think one must at least like emacs to like stumpwm. Stumpwm is a tiling window manager on which you create “panes” on the screen, like windows in Emacs. A single pane takes 100% of the screen; you can then split it into 2 panes vertically or horizontally, resize them, and split again and again. There is no “automatic” tiling. By default, if you have ONE pane, you will only have ONE window displayed, which is a bit different from the other tiling wms I had tried. Also, virtual desktops are named groups; nothing special here, you can create/delete groups and rename them. Finally, stumpwm is not minimalistic.
To install it, you need to get the sources of stumpwm, install a common lisp interpreter (sbcl, clisp, ecl etc…), install quicklisp (which is not in packages), install the quicklisp packages cl-ppcre and clx, and then you can compile stumpwm. That will produce a huge binary which embeds a common lisp interpreter (that's a way to share common lisp executables: the interpreter can create an executable from itself and include the files you want to execute). I would like to make a package for OpenBSD, but packaging quicklisp and its packages seems too difficult for me at the moment.

Here is my config file in ~/.stumpwmrc. Updated: 23rd January 2018

(defun chomp(text)
  (subseq text 0 (- (length text) 1)))

(defmacro cmd(command)
  `(progn
     (:eval (chomp (stumpwm:run-shell-command ,command t)))))

(defun get-latence()
  (let ((now (get-universal-time)))
    (when (> (- now *latence-last-update*) 30)
      (setf *latence-last-update* now)
      (when (probe-file "/tmp/latenceresult")
        (with-open-file (x "/tmp/latenceresult" :direction :input)
          (setf *latence* (read-line x))))))
  *latence*)

(defvar *latence-last-update* (get-universal-time))
(defvar *latence* "nil")

(set-module-dir "~/dev/stumpwm-contrib/")

(stumpwm:run-shell-command "setxkbmap fr")
(stumpwm:run-shell-command "feh --bg-fill red_damask-wallpaper-1920x1080.jpg")

(defvar color1 "#886666")
(defvar color2 "#222222")

(setf stumpwm:*mode-line-background-color* color2
      stumpwm:*mode-line-foreground-color* color1
      stumpwm:*mode-line-border-color* "#555555"
      stumpwm:*screen-mode-line-format*
      (list "%g | %v ^>^7 %B | " '(:eval (get-latence)) "ms %d ")
      stumpwm:*mode-line-border-width* 1
      stumpwm:*mode-line-pad-x* 6
      stumpwm:*mode-line-pad-y* 1
      stumpwm:*mode-line-timeout* 5
      stumpwm:*mouse-focus-policy* :click
      ;;stumpwm:*group-format* "%n·%t"
      stumpwm:*group-format* "%n"
      stumpwm:*time-modeline-string* "%H:%M"
      stumpwm:*window-format* "^b^(:fg \"#7799AA\")<%25t>"
      stumpwm:*window-border-style* :tight
      stumpwm:*normal-border-width* 1)
(stumpwm:set-focus-color "#7799CC")

(stumpwm:grename "Alpha")
(stumpwm:gnewbg "Beta")
(stumpwm:gnewbg "Tau")
(stumpwm:gnewbg "Pi")
(stumpwm:gnewbg "Zeta")
(stumpwm:gnewbg "Teta")
(stumpwm:gnewbg "Phi")
(stumpwm:gnewbg "Rho")

(stumpwm:toggle-mode-line (stumpwm:current-screen) (stumpwm:current-head))

(set-prefix-key (kbd "M-a"))

(define-key *root-map* (kbd "c") "exec urxvtc")
(define-key *root-map* (kbd "RET") "move-window down")
(define-key *root-map* (kbd "z") "fullscreen")
(define-key *top-map* (kbd "M-&") "gselect 1")
(define-key *top-map* (kbd "M-eacute") "gselect 2")
(define-key *top-map* (kbd "M-\"") "gselect 3")
(define-key *top-map* (kbd "M-quoteright") "gselect 4")
(define-key *top-map* (kbd "M-(") "gselect 5")
(define-key *top-map* (kbd "M--") "gselect 6")
(define-key *top-map* (kbd "M-egrave") "gselect 7")
(define-key *top-map* (kbd "M-underscore") "gselect 8")
(define-key *top-map* (kbd "s-l") "exec slock")
(define-key *top-map* (kbd "s-t") "exec urxvtc")
(define-key *top-map* (kbd "M-S-RET") "exec urxvtc")
(define-key *top-map* (kbd "M-C") "exec urxvtc")
(define-key *top-map* (kbd "s-s") "exec /home/solene/dev/screen_up.sh")
(define-key *top-map* (kbd "s-Left") "gprev")
(define-key *top-map* (kbd "s-Right") "gnext")
(define-key *top-map* (kbd "M-ISO_Left_Tab") "other")
(define-key *top-map* (kbd "M-TAB") "fnext")
(define-key *top-map* (kbd "M-twosuperior") "next-in-frame")

(load-module "battery-portable")
(load-module "stumptray")

I use a function to get the latency from a script that runs every 20 seconds, to display the network latency, or nil if I don't have internet access.

I use the rxvt-unicode daemon (urxvtd) as a terminal emulator, so the terminal command is urxvtc (for client); it's lighter and faster to load.
I also use a weird “alt+tab” combination:

• Alt+Tab switches between panes
• Alt+² (the key above Tab) circles windows in the current pane
• Alt+Shift+Tab switches to the previously selected window

StumpWM website

# Port of the week: mbuffer

Written by Solène, on 31 May 2016.
Tags: #portoftheweek #network

Comments on Mastodon

This Port of the week is a bit special because, sadly, the port isn't available on OpenBSD. The port is mbuffer (which you can find in misc/mbuffer). I discovered it while looking for a way to enhance one of my network stream scripts. I have some scripts that get a dump of a postgresql database through SSH, copy it from stdin to a file with tee and send it to the local postgres; the command line looks like:

$ ssh remote-base-server "pg_dump my_base | gzip -c -f -" | gunzip -f | tee dumps/my_base.dump | psql my_base


I also use the same kind of command to receive a ZFS snapshot from another server.

But there is an issue: the receiving server is relatively slow. postgresql and ZFS will eat a lot of data from stdin, then stop for some time to write to the disk, and when they are ready to take new data again, it's slow to fill them. This is where mbuffer takes place. This tool adds a buffer that takes data from stdin and fills its memory (whose size you set on the command line), so when the slowest part of the pipeline is ready to take data, mbuffer empties its memory into the pipe, and the slowest command isn't waiting to get filled before working again.

The new command looks like this, for a buffer of 300 MB:

ssh remote-base-server "pg_dump my_base | gzip -c -f -" |  gunzip -f | tee dumps/my_base.dump | mbuffer -s 8192 -m 300M | psql my_base


mbuffer also comes with a nice console output, showing:

• bandwidth in
• bandwidth out
• percentage/consumption of memory filled
• total transferred

in @ 1219 KiB/s, out @ 1219 KiB/s, 906 MiB total, buffer 0% full

In this example the receiving side is fast enough, so there is no waiting and the buffer isn't used (0% full).

mbuffer can also listen on TCP or a unix socket, and it has a lot of parameters that I didn't try; if you think it can be useful for you, just go for it!
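For example, the TCP mode can replace the ssh pipe when encryption isn't needed; a sketch assuming mbuffer's -I (listen on a port) and -O (connect to host:port) options, with hypothetical host names:

```shell
# On the receiving host: listen on TCP port 9090, buffer 300M, feed psql
mbuffer -s 8192 -m 300M -I 9090 | psql my_base

# On the sending host: stream the dump into the buffer over TCP
pg_dump my_base | mbuffer -s 8192 -m 300M -O receiving-host:9090
```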

# FreeBSD 11 and PERC H730P Mini raid controller

Written by Solène, on 25 May 2016.
Tags: #freebsd11 #hardware

Comments on Mastodon

I had a problem with my 3 latest R430 Dell servers, which all have a PERC H730P Mini raid controller. The installer could barely work, and slowly; 2 servers were booting and crashing with filesystem corruption, while the last one just didn't boot and the raid was cleared.

It is a problem with the driver of the raid controller. I don't understand exactly the problem, but I found a fix.

From the mfi(4) man page:

A tunable is provided to adjust the mfi driver's behaviour when attaching
to a card.  By default the driver will attach to all known cards with
high probe priority.  If the tunable hw.mfi.mrsas_enable is set to 1,
then the driver will reduce its probe priority to allow mrsas to attach
to the card instead of mfi.


In order to install the system, you have to set hw.mfi.mrsas_enable=1 on the install media, and set this on the installed system before booting it.

There are two ways for that:

• if you use a usb media, you can mount it and edit /boot/loader.conf and add hw.mfi.mrsas_enable=1
• at the boot screen with the FreeBSD logo, choose 3) Escape to boot prompt, type set hw.mfi.mrsas_enable=1 and boot

You will have to edit /boot/loader.conf to add the line on the installed system from the live system of the installer.

I struggled a long time before understanding the problem. I hope this message can save somebody else some time.

# Port of the week: rdesktop

Written by Solène, on 20 May 2016.
Tags: #portoftheweek

Comments on Mastodon

This week we will have a quick look at the tool rdesktop. Rdesktop is an RDP client (RDP stands for Remote Desktop Protocol), used to access the desktop of a remote machine. RDP is a Microsoft thing and it's mostly used on Windows.

I am personally using it because sometimes I need to use Microsoft Word/Excel or Windows-only software, and I have a dedicated virtual machine for this. So I use rdesktop to connect in fullscreen to the virtual machine and I can work on Windows. The RDP protocol is very efficient; on a LAN there is no lag. I enjoy using the VM with RDP much more than with VNC.

You can also have RDP servers within virtual machines. VirtualBox lets you have (with an additional package to add on the host) an RDP server for a VM. Maybe VMware provides RDP servers too. I know that Xen and KVM can give access through VNC or Spice, but not RDP.

For its usage, if you want to connect to a RDP server whose IP address is 192.168.1.100 in fullscreen with max quality, type:
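A plausible invocation, assuming rdesktop's -f (fullscreen) flag and the -x option for the experience level as documented in its man page (the host address is the example one):

```shell
# Fullscreen session with LAN-quality experience settings
rdesktop -f -x lan 192.168.1.100
```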

$ ls -a repo2
.  ..  .git

You may use this one for local use, but you may want to clone it later and work with this repository doing push/pull. That's how gitit works: it has a folder “wikidata” that should be initiated as git, and it will work locally. But if you want to clone it on your computer, work on the documentation and then push your changes to gitit, you may get this error when pushing:

### Problem when pushing

I cloned the repository, made changes, committed and now I want to push, but no…

Counting objects: 3, done.
Writing objects: 100% (3/3), 232 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
! [remote rejected] master -> master (branch is currently checked out)

git is unhappy, I can't push.

### Solution

You can fix this “problem” by changing a config in the server repository with this command:

$ git config --local receive.denyCurrentBranch updateInstead


Now you should be able to push to your non-bare repository.
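Here is a self-contained sketch of the behaviour, with throwaway repositories (the names demo-server and demo-client are examples): after setting updateInstead on the non-bare server repository, a push from a clone is accepted and the server's work tree is updated too.

```shell
# Create a non-bare "server" repository and allow pushes to its checked-out branch
git init -q demo-server
git -C demo-server config user.email "demo@example.com"
git -C demo-server config user.name "Demo"
git -C demo-server config receive.denyCurrentBranch updateInstead
git -C demo-server commit -q --allow-empty -m "init"

# Clone it, commit a change, push it back
git clone -q demo-server demo-client
git -C demo-client config user.email "demo@example.com"
git -C demo-client config user.name "Demo"
echo "new content" > demo-client/page.md
git -C demo-client add page.md
git -C demo-client commit -q -m "edit from clone"
git -C demo-client push -q origin HEAD

# The push is accepted, and demo-server's work tree now contains page.md
```

Note that updateInstead only updates the server's work tree when it is clean; if someone edited files there without committing, the push is refused.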

# Port of the week: sxiv

Written by Solène, on 13 May 2016.
Tags: #portoftheweek

Comments on Mastodon

This week I will talk about the command line image viewer sxiv. While it's a command line tool, it of course spawns an X window to display the pictures. It's very light and easy to use; it's my favorite image viewer.

Quick start: (you should read the man page for more informations)

• sxiv file1 file2… : sxiv opens only the files given as parameters, or filenames from stdin
• p/n : previous/next
• f : fullscreen
• 12 G : go to 12th image of the list
• Return : switch to the thumbnails mode / select the image from the thumbnails mode
• q : quit
• a lot more in the well written man page !

For power users who have a LOT of pictures to sort: Sxiv has a nice function that let you mark images you see and dump the list of marked images in a file (see parameter -o).
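A sketch of that workflow (the path and output filename are examples; check the man page for the marking key):

```shell
# Browse the pictures, mark the keepers, and collect the marked
# filenames that sxiv -o prints on stdout when quitting
sxiv -o ~/Photos/*.jpg > keepers.txt
```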

Tip for zsh users: if you want to view every jpg file in a tree, you can use the sxiv **/*.jpg globbing, as seen in the Zsh cheat sheet.

In OpenBSD ports tree, check graphics/sxiv.

# Port of the week: bwm-ng

Written by Solène, on 06 May 2016.
Tags: #portoftheweek #network

Comments on Mastodon

I am starting a periodic posting for something I have wanted to do for a long time: take a port from the tree and introduce it quickly. There are tons of ports in the tree that we don't know about. So, I will write frequently about ports that I use frequently and find useful; if you read this, maybe you will add a new tool to your collection of "useful programs". :-)

For the first one, I would like to present net/bwm-ng. Its name stands for “_BandWidth Monitor next-generation_”; it allows the user to watch in real time the bandwidth usage of the different network interfaces. By default, it will update the display every 0.5 second. You can change the refresh frequency by pressing the keys ‘+’ and ‘-’.

Let's see the bindings of the interactive mode:

• ‘t’ will cycle between current rate, maximum peak, sum, average on 30 seconds.
• ‘n’ will cycle between data sources, on OpenBSD it defaults to “getifaddrs” and you can also choose “sysctl” or “netstat -i”.
• ‘d’ will change the unit; by default it shows KB, but you can change to another unit that suits your current data better.

Summary output after downloading a file

bwm-ng v0.6.1 (probing every 5.700s), press 'h' for help
input: getifaddrs type: sum
-         iface                   Rx                   Tx                Total
==============================================================================
lo0:           0.00  B              0.00  B              0.00  B
em0:          19.89 MB            662.82 KB             20.54 MB
pflog0:           0.00  B              0.00  B              0.00  B
------------------------------------------------------------------------------
total:          19.89 MB            662.82 KB             20.54 MB


It’s available on *BSD, Linux and maybe others.

In OpenBSD ports tree, look for net/bwm-ng.
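bwm-ng also has a non-interactive side; assuming the -o (output method) and -c (sample count) options from its man page, a one-shot plain-text sample could look like:

```shell
# Print one plain-text sample of all interfaces, then exit
bwm-ng -o plain -c 1
```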

# My mutt cheat sheet

Written by Solène, on 03 May 2016.
Tags: #cheatsheet #mutt #email

Comments on Mastodon

I am learning mutt and I am lost. If you are like me, you may like the following cheat sheet!

I am using it through imap, it may be different with local mailbox.

Case is important !

• Change folder : Y
• Filter the display : l (for limit) and then a filter like this
• ~d <2w : ~d for date and <2w for “less than 2 weeks” no space in <2w !
• ~b “hello mate” : ~b is for body and the string is something to find in the body
• ~f somebody@zxy.abc : ~f for from and you can make an expression
• ~s “Urgent” : ~s stands for subject and use a pattern
• Delete messages with filter : D with a filter; if you used limit before, it will propose the limit filter by default
• Delete a message : d (it will be marked as Deleted)

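Patterns can also be combined; for example, to limit the display to recent messages from one sender (the address is an example):

```
l ~f alice@example.com ~d <2w
```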
Deleted messages will be removed when you change the folder or when you exit. Pressing $ can do it manually.

# My zsh cheat sheet

Written by Solène, on 03 May 2016.
Tags: #cheatsheet #zsh

Comments on Mastodon

I may add new things in the future, as they come to me, if I find new useful features.

### How to repeat a command n times

repeat 5 curl http://localhost/counter_add.php

### How to expand recursively

If you want to find every file ending in .lisp in the folder and subfolders, you can use the following syntax. Using ** inside a pattern will do recursive globbing.

ls **/*.lisp

### Work with temp files

If you want to work on some command output without having to manage temporary files, zsh can do it for you with the following syntax: =(command that produces stdout). In the example we will use emacs to open the list of the files in our personal folder.

emacs =(find ~ -type f)

This syntax will produce a temp file that will be removed when emacs exits.

### My ~/.zshrc

Here is my ~/.zshrc, very simple (I didn't paste the aliases I have); I have a 1000-line history that skips duplicates.

HISTFILE=~/.histfile
HISTSIZE=1000
SAVEHIST=1000
setopt hist_ignore_all_dups
setopt appendhistory
bindkey -e
zstyle :compinstall filename '/home/solene/.zshrc'
autoload -Uz compinit
compinit
export LANGUAGE=fr_FR.UTF-8
export LANG=fr_FR.UTF-8
export LC_ALL=fr_FR.UTF-8
export LC_CTYPE=fr_FR.UTF-8
export LC_MESSAGES=fr_FR.UTF-8

# Simple emacs config

Written by Solène, on 02 May 2016.
Tags: #emacs #cheatsheet

Comments on Mastodon

Here is a dump of my emacs config file. It may be useful for emacs users who are beginning.
If you don't want to have your_filename.txt~ files with a tilde at the end (these are default backup files), add this:

; I don't want to have backup files everywhere with filename~ name
(setq backup-inhibited t)
(setq auto-save-default nil)

To have parenthesis highlighting on match, which is very useful, you will need this:

; show match parenthesis
(show-paren-mode 1)

I really like this one. It will save the cursor position in every file you edit. When you edit it again, you start exactly where you left it the last time.

; keep the position of the cursor after editing
(setq save-place-file "~/.emacs.d/saveplace")
(setq-default save-place t)
(require 'saveplace)

If you write in utf-8 (which is very common now) you should add this:

; utf8
(prefer-coding-system 'utf-8)

Emacs modes are chosen depending on the extension of a file. Sometimes you need to edit files with a custom extension but you want to use a specific mode for them. So, you just need to add some lines like these to get your mode automatically when you load the file:

; associate extension - mode
(add-to-list 'auto-mode-alist '("\\.md\\'" . markdown-mode))
(add-to-list 'auto-mode-alist '("\\.tpl$" . html-mode))


My Org-mode part in the config file

(require 'org)
(define-key global-map "\C-ca" 'org-agenda)
(setq org-log-done t)
(setq org-agenda-files (list "~/Org/work.org" "~/Org/home.org"))


Stop mixing tabs and spaces when indenting:

(setq indent-tabs-mode nil)


# How to add a route through a specific interface on FreeBSD 10

Written by Solène, on 02 May 2016.
Tags: #freebsd10 #network

Comments on Mastodon

If someday under FreeBSD you have a system with multiple IP addresses on the same network and you need to use a specific IP for a route, you have to use the -ifa parameter in the route command.

In our example, we have to use the address 192.168.1.140 to access the network 192.168.30.0 through the router 192.168.1.1; this is as easy as the following:

route add -net 192.168.30.0 192.168.1.1 -ifa 192.168.1.140

You can add this specific route like any other route in your rc.conf as usual, just add the -ifa X.X.X.X parameter.
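As a sketch, the rc.conf entries could look like this (the route name net30 is arbitrary):

```shell
# /etc/rc.conf: persistent static route bound to a specific source address
static_routes="net30"
route_net30="-net 192.168.30.0 192.168.1.1 -ifa 192.168.1.140"
```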