About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(BSD OpenBSD Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on libera.chat, solene+www at dataswamp dot org or @solene@bsd.network (mastodon). If for some reason you want to support my work, this is my paypal address: donate@perso.pw.

Studying the impact of being on Hacker News first page

Written by Solène, on 27 July 2021.
Tags: #network #openbsd #blog

Comments on Mastodon

Introduction §

Since the beginning of 2021, my blog has appeared a few times on the website Hacker News, drawing a lot of traffic each time. This is a report of the traffic generated by Hacker News, because I found the topic quite interesting.

Hacker News website: a portal where people submit interesting URLs and members can vote and comment on the links

Data §

From data gathered in the HTTP server access logs, my blog has an average of 1200 visitors and 1100 hits every day.

The blog was featured on Hacker News on 16th February, 10th May, 7th July and 24th July. On the following diagram, each spike is an appearance on Hacker News.

What's really interesting is the difference between 24th July and the other spikes: only the 24th July appearance made it to the front page of Hacker News. That day, the server received 36 000 visitors and 132 000 hits, and traffic continued the next day at a slower rate, still a lot more noticeable than the other spikes.

Visitors/Hits of the blog (generated using goaccess)

The following diagram comes from the tool pfstat, which gathers data from the OpenBSD firewall to produce images. The firewall usually runs at a rate of ~35 new TCP states per second; on 24th July it increased very fast to 230 states per second for at least 12 hours, and the load stayed above the usual traffic for days.

Firewall states per second

Conclusion §

I don't have much more data than this, but it's already interesting to see the insane amount of traffic and audience that Hacker News can generate. Having a static website and enough bandwidth made it easy to absorb the load, but if you run a dynamic website executing code for each request, being featured on Hacker News could certainly trigger a denial of service.

Wikipedia article on the "Slashdot effect" explaining this phenomenon

The Old Computer Challenge: 10 days later, what changed?

Written by Solène, on 26 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Mastodon

Introduction §

Ten days ago the Old Computer Challenge I started came to an end. It gathered a dozen people over the days and we had a great week of fun, restricting ourselves to a 1 CPU / 512 MB old computer and trying to manage our daily tasks with it.

In my last article about it, I noticed many things about my computer use and reported them. Did it change my habits?

How it changed me §

Noticing that using an old computer improved my life because I was using it less made me realize it was all about self-discipline.

Checking news once a day is enough §

I have accounts on some specialized news websites (bike, video games) and I used to check them far too often whenever I was clueless about what to do. I'm trying to reduce the number of times I look for news there; if I miss a news item I can still read it the next day. I'm also relying more on RSS feeds when available, so I can stop visiting the websites entirely.

Forums with low traffic §

Same as for news: I only check the forums I participate in a few times a day for replies or new messages, instead of every 10 minutes.

Shutdown instead of suspend §

I started to shut down my computer in the evening after my news routine check. If nothing has to be done on the computer, I find it better to shut it down so I'm not tempted to reuse it. I was using suspend/resume before, and it was too easy to just resume the computer to look for a new IRC message. I realized IRC messages can wait.

Read NOW §

The biggest change on the old computer was that when browsing the internet and blogs, I was actually reading the content, instead of bookmarking it and never coming back, or skimming the text for some keywords to get a vague idea of it.

On my laptop, when reading content in Firefox, I find it very hard to focus on text; maybe because of the font, the size, the spacing or the screen contrast, I don't know. Using the Reader mode in Firefox drastically helps me focus on the text. When I land on a page with some interesting text, I switch to reader mode and read it. HUGE WIN for me here.

I really don't know why I find text easier to read in w3m. I should try it on my regular computer, but it's quite a pain to reach a page on some websites; maybe I should open w3m to read the content after finding it with Firefox.

Slow is slow §

Sometimes I found my OpenBSD computer to be slow; using a very old computer helped me put that into perspective. Using my time more efficiently, with less task switching, doesn't require as much performance as one would think.

Driving development ideas §

I recently wrote the software "potcasse" to manage podcast distribution. I came to it thinking I wanted to record my podcasts and publish them from the old computer, so I needed a simple and fast method that worked on that old system.

Conclusion §

The challenge was not always easy but it brought a lot of fun for a week and, in the end, it changed the way I use computers. No regrets!

OpenBSD full Tor setup

Written by Solène, on 25 July 2021.
Tags: #openbsd #tor #privacy #security

Comments on Mastodon

Introduction §

If for some reason you want to block all your traffic except the traffic going through Tor, here is how to proceed on OpenBSD.

The setup is simple and consists of installing Tor, running the service and configuring the firewall to block every request that doesn't come from the _tor user used by the Tor daemon.

Setup §

Modify /etc/pf.conf to make it look like the following:

set skip on lo

# block OUT traffic
block out

# block IN traffic and allow response to our OUT requests
block return

# allow TCP requests made by _tor user
pass out on egress proto tcp user _tor

If you didn't save your previous pf.conf file, the default one is available in /etc/examples/pf.conf if you want to go back to a standard PF configuration.

Here are the commands to type as root to install tor and reload PF:

pkg_add tor
rcctl enable tor
rcctl start tor
pfctl -f /etc/pf.conf

Configure your programs to use the SOCKS5 proxy localhost:9050. If you need to reach a remote server / service of yours, that server will need to run Tor too and define HiddenServices so you can access them through Tor.
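
For example, here is how to point two common programs at the Tor SOCKS proxy (a sketch; the remote hosts are placeholders):

# curl: the --socks5-hostname variant also resolves DNS through the proxy
curl --socks5-hostname localhost:9050 https://example.com/

# ssh: relay the TCP connection through the proxy using nc(1)
ssh -o ProxyCommand='nc -X 5 -x 127.0.0.1:9050 %h %p' user@example.com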

Privacy considerations §

Please consider that if you are using DHCP to obtain an IP address on the network, the hostname of your system is shared, and so is its MAC address.

As for the MAC address, you can use "lladdr random" in your interface configuration file to get a new random MAC address on every boot.
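
For example, a hostname.if(5) file could look like this (a sketch; em0 is a placeholder interface name):

# /etc/hostname.em0
lladdr random
dhcp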

As for the hostname, I didn't test it but it should work: rewrite your /etc/myname file with a new value at each boot, so the next boot uses the new value. To do so, you could run this script from /etc/rc.local:

#!/bin/sh

# pick a random airport name from the base system list as the new hostname
grep -v ^# /usr/share/misc/airport | cut -d ':' -f 1 | sort -R | head -n 1 > /etc/myname

The script takes a random name out of the 2000+ entries of the airport list (every airport in the list has been visited by an OpenBSD developer before being added). This still means you have a 1/2000 chance of getting the same name upon reboot; if you prefer more entropy you can make a script generating a long random string.
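
A sketch of such a script, using openssl from the base system to generate a 16 character random hostname:

#!/bin/sh
# write a random hexadecimal hostname for more entropy
openssl rand -hex 8 > /etc/myname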

Potential issues §

The only issue I can imagine right now is connecting to a network with a captive portal: to reach the Internet you would have to disable the PF rule (or PF entirely), at the risk of some programs leaking data.
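
For example, PF can be temporarily disabled and re-enabled like this:

pfctl -d    # disable PF (all traffic allowed, including leaks)
pfctl -e    # enable PF again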

Same setup with I2P §

If you prefer using I2P only to reach external services, replace _tor by _i2p or _i2pd in the pf.conf rule, depending on which implementation you use.
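
For example, with the i2pd implementation the rule becomes:

pass out on egress proto tcp user _i2pd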

Conclusion §

I'm not a huge Tor user, but for people who need to be sure non-Tor traffic can't go out, this is a simple setup to make.

Why self hosting is important

Written by Solène, on 23 July 2021.
Tags: #fediverse #selfhosting #chatons #life #internet

Comments on Mastodon

Introduction §

Computers are amazing tools and the Internet is an amazing network, we can share everything we want with anyone connected. As of now, most of the Internet is neutral, meaning ISPs have to give their customers access to the Internet without making choices depending on the destination (like faster access for some websites).

This is important to understand: it means you can have your own website, your own chat server or your own gaming server, hosted at home or on a dedicated server you rent. This is called self hosting. I suppose putting the label self hosting on dedicated servers may not make everyone agree; it's true this is a grey area. The opposite of self hosting is to rely on a company to do the job for you, under their conditions, free or not.

What is self hosting exactly? §

Self hosting is about freedom: you can choose which server software you want to run, which version, which features and which configuration. If you self host at home, you can also pick the hardware to match your needs (more RAM? more disk? RAID?).

Self hosting is not a perfect solution: you have to buy the hardware, replace faulty components and do the system maintenance to keep the software part alive.

Why does it matter? §

When you rely on a company or a third party offering services, you become tied to their ecosystem and their decisions. A company can stop what you rely on at any time, and they can suspend your account at any time without explanation. Companies will try to make their services good and appealing, no doubt about it, and then lock you into their ecosystem. For example, if you move all your projects to GitHub and start using GitHub services deeply (more than a simple git repository), moving away will be complicated because you don't have _reversibility_, which means the right to get out and receive help from your provider to move away without losing data or information.

Self hosting empowers users instead of making profit from them. Self hosting is better when it's done as a community: a common mail server for a group of people and a communication server federated into a bigger network (such as XMPP or Matrix) are a good way to create a resilient Internet while not giving away your rights to capitalist companies.

Community hosting §

Asking everyone to host their own services is not even utopia but rather nonsense: we don't need everyone to run their own server for their own services. We should rather build a constellation of communities connected through federated protocols such as Email, XMPP, Matrix or ActivityPub (the protocol used by Mastodon, Pleroma and Peertube).

In France, there is a great initiative named CHATONS (which is the French word for KITTENS) gathering associative hosters meeting some prerequisites, like having multiple sysadmins to avoid relying on one person.

[English] CHATONS website

[French] Site internet du collectif CHATONS

In Catalonia, a similar initiative started:

[Catalan] Mixetess website

Quality of service §

I suppose most of my readers will argue that self hosting is nice but can't compete with "cloud" services; I admit this is true. Companies put a lot of money into making great services to get customers and earn money; if their services were bad, they wouldn't last long.

But avoiding open source and self hosting won't make the alternatives to your service provider any better; you become part of the problem by feeding the system. For example, Google Mail (GMAIL) is now so big that they can decide which domains are allowed to reach them and which aren't. It is such a problem that most small email servers can't send emails to Gmail without being treated as spam, and we can't do anything about it: the more users they have, the less they care about other providers.

Great achievements can be done with open source federated services like Peertube: one can host videos on a Peertube instance and follow the local rules of that instance, while some big company could just disable your video because an automatic detection script found a piece of music or an inappropriate picture.

Giving your data to a company and relying on their services makes you lose your freedom. If you don't think that's true, this is okay; freedom is a vague concept and it comes in many degrees.

Tips for self hosting §

Here are a few tips if you want to learn more about hosting your own services.

  • ask people you trust if they want to participate, it's better to have more than one person to manage the servers.
  • you don't need to be an IT professional, but you need to understand that you will have to learn.
  • backups are not a luxury, they are mandatory.
  • asking for money (as a contribution or as a requirement) is fine as long as you can justify why (a Peertube server can be very expensive to run, for example).
  • people often throw away old hardware, so ask friends or relatives if they have unused machines. You can easily repair "that old Windows laptop I replaced because wifi stopped working" and use it as a server.
  • electricity usage must be considered, but on the other hand, buying brand new hardware to save 20W is not necessarily more ecological.
  • some services such as email servers can't be hosted on most ISP connections due to specific requirements.
  • you will certainly need to buy a domain name.
  • redundancy is overkill most of the time; shit happens, but with redundant servers shit happens twice as often.

IndieWeb website: a community proposing alternatives to the "corporate web".

There is a Linux distribution dedicated to self hosting named "Yunohost" (Y U No Host) that makes the task really easy and gives you a beginner friendly interface to manage your own services.

Yunohost website

Yunohost documentation "What is Yunohost ?"

Conclusion §

I've been self hosting since I first understood, 15 years ago, that running a web server was all I needed to have my own PHP forum. I mostly keep this blog alive to show and share my experiments, which most of the time happen when playing with my self hosted servers.

I have a strong opinion on the subject: hosting your own services is a fantastic way to learn new skills or perfect them, but it also matters for freedom. In France we even have associative ISPs, and even though they are small, their existence forces the big ISP companies to be transparent about their processes and interoperability.

If you disagree with me, this is fine.

Self host your Podcast easily with potcasse

Written by Solène, on 21 July 2021.
Tags: #openbsd #scripts #podcast

Comments on Mastodon

Introduction §

I wrote « potcasse », pronounced "pot kas", a tool to help people publish and self host a podcast easily without using a third party service. I found it very hard to find information about self hosting a podcast and making it easily available in podcast players / "apps", so I wrote potcasse.

Where to get it §

Get the code from git and run "make install", or just copy the script "potcasse" somewhere in your $PATH. Note that rsync is a required dependency.

Gitea access to potcasse

direct git url to the sources

What is it doing? §

Potcasse gathers your audio files with some metadata (date, title) and some information about your podcast (name, address, language), and creates an output directory ready to be synced to your web server.

Potcasse creates an RSS feed compatible with podcast players, but also a simple HTML page with a summary of your episodes, your logo and the podcast title.

Why potcasse? §

I wanted to self host my podcast and I only found Wordpress, Nextcloud or complex PHP programs for the job; I wanted something static, like my static blog, that would work securely on any hosting platform.

How to use it §

The process is simple for initialization:

  • init the project directory using "potcasse init"
  • edit the metadata.sh file to configure your Podcast

Then, for every new episode:

  • import audio files using "potcasse episode" with the required arguments
  • generate the html output directory using "potcasse gen"
  • use rsync to push the output directory to your web server

There is a README file in the project that explains how to configure it. Once you deploy, you should have an index.html file with links to your episodes and also a link to the RSS feed that can be used in podcast applications.
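
Put together, a publishing session could look like this (a sketch: the episode arguments, file name and server path are placeholders, check the README for the exact parameters):

potcasse init
vi metadata.sh
potcasse episode "My first episode" episode1.mp3      # illustrative arguments
potcasse gen
rsync -av output/ myserver:/var/www/htdocs/podcast/   # illustrative paths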

Conclusion §

This was a few hours of work to get the job done; I'm quite proud of the result and switched my podcast (only 2 episodes at the moment...) to it in a few minutes. I wrote the command lines and parameters while trying to use the tool as if it was finished; this helped me a lot to choose what is required, what is optional, in which order things happen, and how I would like to manually make changes as an author, etc.

I hope you will enjoy this simple tool as much as I do.

Simple scripts I made over time

Written by Solène, on 19 July 2021.
Tags: #openbsd #scripts #shell

Comments on Mastodon

Introduction §

I wanted to share a few scripts of mine for some time, here they are!

Scripts §

Over time I've written a few scripts to help me in some tasks; they are often bound to a key or at least placed in my ~/bin/ directory, which I add to my $PATH.

Screenshot of a region and upload §

When I want to share something displayed on my screen, I use my simple "screen_up.sh" script (bound to super+r) that does the following:

  • use scrot and let me select an area on the screen
  • convert the file to jpg and also compress the png using pngquant, then pick the smallest file
  • upload the file to my remote server in a directory where files older than 3 days are cleaned (using find -ctime +3 -type f -delete)
  • put the link in the clipboard and show a notification

This simple script has been improved a lot over time, like getting feedback about the result or picking the smallest file from various combinations.

#!/bin/sh
# select a region with scrot, then create a jpg version and a
# pngquant-compressed png version of the capture
test -f /tmp/capture.png && rm /tmp/capture.png
scrot -s /tmp/capture.png
pngquant -f /tmp/capture.png
convert /tmp/capture-fs8.png /tmp/capture.jpg

# keep the smallest of the generated files
FILE=$(ls -1Sr /tmp/capture* | head -n 1)
EXTENSION=${FILE##*.}

# name the upload after the file checksum (base64, sanitized for URLs)
MD5=$(md5 -b "$FILE" | awk '{ print $4 }' | tr -d '/+=' )

scp "$FILE" perso.pw:/var/www/htdocs/solene/i/${MD5}.${EXTENSION}
URL="https://perso.pw/i/${MD5}.${EXTENSION}"
echo "$URL" | xclip -selection clipboard

notify-send -u low "$URL"
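
On the server side, the cleanup mentioned above can be a simple crontab entry like this one (a sketch; the path matches the script above and the schedule is an assumption):

# delete uploaded screenshots older than 3 days, every day at 4am
0 4 * * * find /var/www/htdocs/solene/i/ -type f -ctime +3 -delete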

Uploading a file temporarily §

My second most used script is a file uploading utility. It renames a file using the md5 hash of its content while keeping the extension, and uploads it to a directory on my server where files are deleted after a few days by a crontab. Once the transfer is finished, I get a notification and the URL in my clipboard.

#!/bin/sh
FILE="$1"

if [ -z "$1" ]
then
        echo "usage: [file]"
        exit 1
fi

# rename the file to the md5 hash of its content, keeping the extension
MD5=$(md5 -b "$1" | awk '{ print $NF }' | tr -d '/+=' )
NAME=${MD5}.${FILE##*.}

scp "$FILE" perso.pw:/var/www/htdocs/solene/f/${NAME}

# put the URL in the clipboard and notify when done
URL="https://perso.pw/f/${NAME}"
echo -n "$URL" | xclip -selection clipboard

notify-send -u low "$URL"

Sharing some text or code snippets §

While I can easily transfer files, sometimes I need to share a snippet of code or a whole file, but I want to ease the reader's work and display the content in an HTML page instead of serving a file that would be downloaded. I don't put those files in a cleaned directory, and I require a name to give potential readers some clues about the content. The remote directory contains the highlight.js library used for syntax highlighting, hence I pass the language of the text to enable the coloration.

#!/bin/sh

if [ "$#" -eq 0 ]
then
        echo "usage: language [name] [path]"
        exit 1
fi

# write the HTML header with the highlight.js setup
cat > /tmp/paste_upload <<EOF
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
</head>
<body>
        <link rel="stylesheet" href="default.min.css">
        <script src="highlight.min.js"></script>
        <script>hljs.initHighlightingOnLoad();</script>

        <pre><code class="$1">
EOF

# remove newlines from the header so they don't add blank
# lines inside the <pre> block (ugly but it works)
tr -d '\n' < /tmp/paste_upload > /tmp/paste_upload_tmp
mv /tmp/paste_upload_tmp /tmp/paste_upload

# append the content, HTML-escaped, from a file or from the clipboard
if [ -f "$3" ]
then
    sed 's/</\&lt;/g; s/>/\&gt;/g' "$3" >> /tmp/paste_upload
else
    xclip -o | sed 's/</\&lt;/g; s/>/\&gt;/g' >> /tmp/paste_upload
fi

cat >> /tmp/paste_upload <<EOF


</code></pre> </body> </html>
EOF

# build a unique file name from the timestamp, language and optional name
if [ -n "$2" ]
then
    NAME="$2"
else
    NAME=temp
fi

FILE=$(date +%s)_${1}_${NAME}.html

scp /tmp/paste_upload perso.pw:/var/www/htdocs/solene/prog/${FILE}

echo -n "https://perso.pw/prog/${FILE}" | xclip -selection clipboard
notify-send -u low "https://perso.pw/prog/${FILE}"

Resize a picture §

I never remember how to resize a picture, so I made a one-line script to avoid having to remember it. I could have used a shell function for this kind of job.

#!/bin/sh

# resize to 40% by default, or to the percentage given as second argument
if [ -z "$2" ]
then
	PERCENT="40%"
else
	PERCENT="$2"
fi

convert -resize "$PERCENT" "$1" "tn_${1}"

Latency meter using DNS §

Because UDP requests are not reliable (a lost packet is simply lost, there is no retransmission), they make a good probe for testing network reliability and performance. I used this as part of my stumpwm window manager bar to get a history of my internet access quality while on a high speed train.

The output uses three characters to tell whether the latency is under a first threshold (the network works fine), between the two thresholds (not good quality) or higher than the second one (high latency), plus another character for network failure.

The default timeout is 1s. If the query succeeds, under 60ms you get a "_", between 60ms and 150ms you get a "-" and beyond 150ms you get a "¯"; if the query fails you get a "N".

For example, if your quality degrades until the network breaks and then recovers, it may look like this: _-¯¯NNNNN-____-_______. My LISP code took care of accumulating the values and only retaining the last n values I wanted as history.

Why would you want to do that? Because I was bored on a train. But also, when the network is fine, it's time to sync mails or retry that failed web request to get an important documentation page.

#!/bin/sh

# query a DNS server and classify the latency; checking dig directly
# (instead of $? after a pipe) tells us whether the network works at all
if dig perso.pw @9.9.9.9 +timeout=1 > /tmp/latencecheck
then
        time=$(awk '/Query time/{
                if($4 < 60) { print "_";}
                if($4 >= 60 && $4 <= 150) { print "-"; }
                if($4 > 150) { print "¯"; }
        }' /tmp/latencecheck)
        echo "$time" | tee /tmp/latenceresult
else
        echo "N" | tee /tmp/latenceresult
        exit 1
fi

Conclusion §

Those scripts are part of my habits; I'm a bit lost when I don't have them because I always expect them to be at hand. While they don't bring huge benefits, they are quality of life, and it's fun to hack on small, easy pieces of programs to achieve a simple purpose. I'm glad to share them.

The Old Computer Challenge: day 7

Written by Solène, on 16 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Mastodon

Report of the last day of the old computer challenge.

A journey §

I'm writing this text during the last hours of the challenge. I may repeat some thoughts and observations already reported in earlier posts, but never mind, this is the end of the journey.

Technical §

Let's talk about tech! My computer is 16 years old, but I've been able to accomplish most of what I enjoy on a computer: IRC, reading my emails, hacking on code and reading interesting content on the internet. So far I've been quite happy with this computer, it worked without any trouble.

On the other hand, there were many tasks that didn't work at all:

  • Browsing "modern" websites relying on JavaScript: JavaScript-capable browsers don't work on my combination of operating system and CPU architecture. I'm quite sure the challenge would have been easier with an old amd64 computer, even with low memory.
  • Watching videos: for some reason, mplayer in full screen produced a weird issue where the computer stopped responding except for the moving cursor. However, it worked correctly for most videos.
  • Listening to my big FLAC music files: the CPU usage prevented me from doing anything else, and sitting at my desk just to listen to music was not an interesting option.
  • Using Go, Rust and Node programs, because there are no implementations of these languages on OpenBSD PowerPC 32-bit.

On the hardware side, here is what I noticed:

  • 512MB is quite enough as long as you stay focused on one task; I rarely needed swap even with multiple programs open.
  • I don't miss spinning hard drives: in terms of speed and noise, I'm happy they are gone from my newer computers.
  • Using an external pointing device (mouse/trackball) is so much better than the bad touchpad.
  • Modern screens are so much better in terms of resolution, colours and contrast!
  • The keyboard is pleasant but lacks a "Super" modifier key, which leads to key bindings overlapping between the window manager and programs.
  • Suspend and resume don't work on OpenBSD, so I had to boot the computer each time; it takes a few minutes to do so and requires a manual step to unlock /home, which delays the boot sequence.

Despite everything, the computer was solid, but modern hardware is so much more pleasant to use in many ways, not only in terms of raw speed. When you buy a laptop especially, you should pay attention to the specs beyond CPU/memory, like the case, the keyboard, the touchpad and the screen; if you use your laptop a lot, they are as important as the CPU itself in my opinion.

Thanks to the programs w3m, catgirl, luakit, links, neomutt, claws-mail, ls, make, sbcl, git, rednotebook, keepassxc, gimp, sxiv, feh, windowmaker, fvwm, ratpoison, ksh, fish, mplayer, openttd, mednafen, rsync, pngquant, ncdu, nethack, goffice, gnumeric, scrot, sct, lxappearance, tootstream, toot, OpenBSD and all the other programs I used for this challenge.

Human §

Because I always felt this challenge was a journey to understand my use of computer, I'm happy of the journey.

To make things simple, here is a bullet list of what I noticed:

  • Going to sleep earlier instead of waiting for something to happen.
  • I've spent a lot less time on my computer, but at the same time I don't notice much difference in what I've done with it; this means I was more "productive" (writing blog posts, reading content, hacking) and not idling.
  • I didn't participate in the web forums of my communities :(
  • I cleared things from my todo list on my server (such as replacing Spamassassin with rspamd and writing about it).
  • I've read more blogs and interesting texts than usual, and I did it without switching to another task.
  • JavaScript is not ecological because it prevents older hardware from being usable. If I didn't need JavaScript, I guess I could continue using this laptop.
  • I got time to discover and practice meditation.
  • Less open source contribution, because compiling was too slow.

I'm sad and disappointed to notice that I need to work on my self-discipline (that's why I started to learn about meditation) to waste less time on my computer. I will really work on it; I see I can still do the same tasks but spend less time doing nothing, idling and switching tasks.

I will take care to support old systems with my contributions, like keeping my blog working perfectly fine in console web browsers, but also by trying to educate people about this.

I've met a lot of interesting people on the IRC channel and for this sole reason I'm happy I made the challenge.

Conclusion §

Good hardware is nice but not always necessary; it's up to the developers to make good use of the hardware. While some requirements legitimately evolve over time, like cryptography or video codecs, programs shouldn't become more and more resource hungry just because more resources are available. We have to learn how to do MORE with LESS with computers, and that is something I wanted to highlight with this challenge.

The Old Computer Challenge: day 6

Written by Solène, on 15 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Mastodon

Report §

This is the 6th day of the challenge! Time went quite fast.

Mood §

I got quite bored two days ago because it was very frustrating not to be able to do everything I want. I wanted to contribute to OpenBSD, but the computer is way too slow to do anything useful beyond editing files.

However, it got better yesterday, the 5th day of the challenge, when I decided to move away from claws-mail and switch to neomutt for my emails. I updated claws-mail to the freshly released version 4.0.0 and started updating the OpenBSD package, but claws-mail switched to GTK3 and became too slow for this computer.

I started using a mouse on the laptop and it made some tasks more enjoyable. I don't need it much because most of my programs run in a console, but every time I need the cursor it's more pleasant to use a mouse with three buttons and a wheel.

Software §

The computer is the sum of its software. Here is a list of the software I'm using right now:

  • fvwm2: window manager; it doesn't bug with full screen programs, is light enough, and I like it.
  • neomutt: mail reader. I always hated mutt/neomutt because of the complexity of their config files; fortunately I had some memories from when I used it, so I've been able to build a nice simple configuration, and I took the opportunity to update my Neomutt cheatsheet article.
  • w3m: in my opinion it's the best web browser in a terminal :) the bookmark feature works great and using https://lite.duckduckgo.com/lite for searches works perfectly fine. I use the flavor with image rendering support; however, I have mixed feelings about it because pictures take time to download and always render at their original size, which is a pain most of the time.
  • keepassxc: my usual password manager; it has a command line interface to manage the entries from a shell after unlocking the database (see the example after this list).
  • openttd: a game of legend that is relaxing and also very fun to play; it runs fine after a few tweaks.
  • mastodon: tootstream, but it's quite limited sometimes, so I also access Mastodon on my phone with Tusky from F-Droid; they make a great combination.
  • rednotebook: I was already using it on this computer when it was known as the "offline computer". This program is a diary where I write about my day when I feel bad (angry, depressed, bored); it doesn't have many entries but it really helps me to write things down. While the program is very heavy and could be considered bloated for the purpose of writing about your day, I just like it because it works and looks nice.
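
As an example of the keepassxc command line interface mentioned in the list (a sketch; the database path and entry name are placeholders):

# copy the password of an entry to the clipboard, cleared after 10 seconds
keepassxc-cli clip ~/passwords.kdbx my-entry 10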

I'm often asked how I deal with YouTube: I just don't, I don't use YouTube so the problem is solved :-) I use no streaming services at home.

Breaking the challenge §

I had to use my regular computer to order a pizza because the stupid pizza company doesn't take orders by phone and they are the only pizza shop around... :( I could have done it with my phone, but I don't really trust my phone's web browser to support all the steps of the process.

I could easily keep using this computer for more time if I didn't have so many requirements on web services, mostly for ordering products I can't find locally (pizza doesn't count here), and I hate using my phone for web access because I hate smartphones most of the time.

If I had used an old i386 / amd64 computer, I would have been able to use a webkit browser even if it was slow, but on PowerPC the state of JavaScript-capable web browsers is complicated and currently none works for me on OpenBSD.

Filtering spam using Rspamd and OpenSMTPD on OpenBSD

Written by Solène, on 13 July 2021.
Tags: #openbsd #mail #spam

Comments on Mastodon

Introduction §

I recently used Spamassassin to get rid of the spam I started to receive, but it proved quite useless against some kinds of spam, so I decided to give rspamd a try and write about it.

rspamd can filter spam but can also sign outgoing messages with DKIM; I will only care about the anti-spam aspect here.

rspamd project website

Setup §

The rspamd setup for spam filtering is incredibly easy on OpenBSD (6.9 for me at the time of writing). We need to install the rspamd service, the connector for OpenSMTPD, and also redis, which is mandatory for rspamd to work.

pkg_add opensmtpd-filter-rspamd rspamd redis
rcctl enable redis rspamd
rcctl start redis rspamd

Modify your /etc/mail/smtpd.conf file to add this new line:

filter rspamd proc-exec "filter-rspamd"

And modify your "listen on ..." lines to append filter "rspamd", like in this example:

listen on em0 pki perso.pw tls auth-optional   filter "rspamd"
listen on em0 pki perso.pw smtps auth-optional filter "rspamd"

Restart smtpd with "rcctl restart smtpd" and you should have rspamd working!

Using rspamd §

Rspamd automatically checks multiple criteria to assign a score to each incoming email: above a high threshold the email is rejected, and between a lower threshold and that high one, it is tagged with an "X-Spam" header set to "yes".

If you want to automatically move tagged emails to your Junk directory, either use a sieve filter on the server side or a local filter in your email client. A sieve filter would look like this:


if header :contains "X-Spam" "yes" {
        fileinto "Junk";
        stop;
}

Feeding rspamd §

If you want better results, the filter needs to learn what is spam and what is not (called ham). You need to regularly scan new emails to increase the effectiveness of the filter. In my case I have a single user with a Junk directory and an Archives directory within the maildir storage; I use crontab to run learning on mails newer than 24h:

0  1 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec rspamc learn_ham {} +
10 1 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec rspamc learn_spam {} +

Getting statistics §

rspamd comes with very nice reporting tools. You can get a WebUI on port 11334, which listens on localhost by default, so you would need to tune rspamd to listen on other addresses or use an SSH tunnel.
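
For example, an SSH tunnel to reach the WebUI from your own machine could look like this (user and host are placeholders):

ssh -L 11334:127.0.0.1:11334 user@mailserver
# then browse http://localhost:11334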

You can get the same statistics on the command line using "rspamc stat", which should produce output similar to this:

Results for command: stat (0.031 seconds)
Messages scanned: 615
Messages with action reject: 15, 2.43%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 9, 1.46%
Messages with action greylist: 6, 0.97%
Messages with action no action: 585, 95.12%
Messages treated as spam: 24, 3.90%
Messages treated as ham: 591, 96.09%
Messages learned: 4167
Connections count: 611
Control connections count: 5190
Pools allocated: 5824
Pools freed: 5801
Bytes allocated: 31.17MiB
Memory chunks allocated: 158
Shared chunks allocated: 16
Chunks freed: 0
Oversized chunks: 575
Fuzzy hashes in storage "rspamd.com": 2936336370
Fuzzy hashes stored: 2936336370
Statfile: BAYES_SPAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 344; users: 1; languages: 0
Statfile: BAYES_HAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 3822; users: 1; languages: 0
Total learns: 4166

Conclusion §

rspamd is a huge improvement for me in terms of efficiency: when I tag an email as spam, the next similar-looking one goes straight to Junk after the learning cron runs; it uses less memory than Spamassassin and reports nice statistics. My Spamassassin setup was directly rejecting emails, so I didn't have a good picture of its effectiveness, but I got too many identical messages over weeks that were never filtered; so far rspamd has proved better here.

I recommend looking at the configuration files: they are all disabled by default but offer many comments and explanations, which is a nice introduction to the features of rspamd. I preferred to keep the defaults and see how it goes before tweaking further.

The Old Computer Challenge: day 3

Written by Solène, on 12 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Mastodon

Report of the third day of the old computer challenge.

Community §

I got a lot of feedback from the community; the IRC channel #old-computer-challenge is quite active and it seems a small community may start here. I received help for various questions I had regarding the programs I'm now using.

Changes §

Web is a pity §

The computer I use has a different processor architecture than the ones we are used to. Our computers are now amd64 (even the Intel ones; amd64 is the name of the instruction set of those processors) or arm64 for most tablets/smartphones or small boards like the Raspberry Pi. My computer is a PowerPC, which disappeared from the market around 2007. It is important to know this because most language virtual machines (for interpreted languages) require some architecture-specific instructions to work, and nobody cares much about PowerPC in the JavaScript world (supporting it could be considered wasted time given the user base), so I'm left without a JS-capable web browser because they would instantly crash. cwen@ at the OpenBSD project is pushing hard to fix many programs on PowerPC and she is doing awesome work; she got JS browsers to work through WebKit, but for some reason they are broken again, so I have to do without them.

w3m works very well. I learned about using bookmarks in it, which makes w3m a lot more usable for daily stuff. I've been able to log in on most websites, but I faced some buttons not working because they trigger a JavaScript action. I'm using it with built-in support for images, but it makes loading times longer and images are displayed at their real size, which can screw up the display; I think I'll disable the image support...

Long live to the smolnet §

What is the smolnet? It's a word for what is not on the Web; this mostly includes content from Gopher and Gemini. I like that word because it represents an alternative I've been contributing to for years, and the word carries a lot of meaning.

Gopher and Gemini are way saner to browse: thanks to a standard layout of one item per line and no styling, visiting one page feels like all the others, and I don't have to look for where the menu is or even wait for the page to render. I've been recommended the av-98 terminal browser and it has a lovely feature named "tour": you can accumulate links from pages you visit, add them to the tour, and then visit the accumulated links one by one (like a first in, first out queue). This avoids cumbersome tabs or adding bookmarks for later viewing and forgetting about them.

Working on OpenBSD ports §

I'm working on updating the claws-mail package on OpenBSD; a new major release was published on the first day of the challenge, and unfortunately working on it is extremely painful on my old computer. Compiling was long, but that was done only once; now I need to sort out library includes, and using the built-in check of the ports tree takes like 15 minutes, which is really not fun.

I hate the old hardware §

While I like this old laptop, I'm starting to hate it too. The touchpad is extremely bad and moves by increments of 5px or so, which is extremely imprecise, especially for copy/pasting text or playing OpenTTD, not to mention again that it only has a left click button. (Update: it has been fixed thanks to anthk_ on IRC, using the command xinput set-prop /dev/wsmouse "Device Accel Constant Deceleration" 1.5)

The screen has very poor contrast. I can deal with a 1024x768 resolution and I love the 4:3 ratio, but the lack of contrast is really painful to deal with.

The mechanical hard drive is slow, I can cope with that, but it's also extremely noisy; I had forgotten the crispy noises of old HDDs. It's so annoying to my ears... And talking about noise, I often limit the CPU speed of my computer to avoid the temperature rising too high and triggering the super loud small CPU fan. It is really loud and it doesn't seem very effective; maybe the thermal paste is old...

A few months ago I wanted to replace the HDD, but I looked up the replacement procedure for this laptop on the iFixit website and there are like 40 steps to follow, plus an Apple-specific screwdriver. The procedure basically consists of removing all the parts of the laptop to access the HDD, which seems to be the piece of hardware in the most remote place of the case. This is insane. I'm used to working on Thinkpad laptops, where after removing 4 usual screws you get access to everything; even my T470's internal battery is removable.

All these annoying facts are not even related to the computer's power; it's simply that modern hardware has evolved. They are quality of life improvements: they don't make the computer more or less usable, but more pleasant. Silence, good and larger screens, and multi-finger touchpad gestures make for a more comfortable use of the computer.

Taking my time §

Because context switching costs a lot of time, I take my time to read content and appreciate it in one shot, instead of bookmarking after reading a few lines and never opening the bookmark again. I was quite happy to see I'm able to focus for more than 2 minutes on something, and I'm a bit relieved in that regard.

Psychological effect §

I'm quite sad to see that an older system forcing restrictions on me can improve my focus; this means I lack self-discipline and that I've wasted too much of my life on useless context/task switching. I don't want to rely on limitations to guard my sanity, I have to work on this on my own; maybe meditation could help me get my patience back.

End of report of day 3 §

I'm meeting friendly people sharing what I like, and I'm realizing my dependency on services and my lack of mental self-discipline. The challenge is a lot harder than I expected, but if it were too easy, it wouldn't be a challenge. I already know I'll be happy to get back to my regular laptop, but I also think I'll change some habits.

The Old Computer Challenge: day 1

Written by Solène, on 10 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Mastodon

Report of my first day of the old computer challenge

My setup §

I'm using an Apple iBook G4 running the development version of OpenBSD macppc. Its specs are: 1 G4 1.3GHz CPU, 512 MB of memory and an old 40 GB IDE HDD. The screen has a 4:3 ratio and a 1024x768 resolution. The touchpad has only one button, doing left click, and doesn't support multi-finger gestures (can't scroll, can't right click). The battery still holds a 1h40 charge, which is very surprising.

About the software: I was using the ratpoison window manager, but I had issues with two GUI applications, so I moved to cwm, and now I have other issues with cwm. I may switch to Window Maker, or return to ratpoison, which worked very well except for those 2 programs, and switch to cwm when I need them... I use xterm as my terminal emulator because "it works" and it doesn't use much memory; usually I use Sakura, but with 32 MB of memory per instance vs 4 MB for xterm, it's important to save memory now. I usually run only one xterm with a tmux inside.

Same for the shell: I've been using fish since the beginning of 2021, but each instance of fish uses 9 MB, which is quite a lot, because every time I split my tmux a new shell spawns and an extra 9 MB is used. ksh uses only 1 MB per instance, 9x less than fish; however, for some operations I still switch to fish manually because it's a lot more comfortable thanks to its lovely completion.

Tasks §

Tasks on the day and how I complete them.

Searching on the internet §

My favorite browser on such an old system is w3m with image support in the terminal; it's super fast and the rendering is very good. I use https://html.duckduckgo.com/html/ as my search engine.

The only issue with w3m, and not a real one, is that the key bindings are absolutely not straightforward; but you only need to know a few of them to use it, and they are all listed in the help.

Using mastodon §

I spend a lot of time on Mastodon communicating with people. I usually use my web browser to access Mastodon, but I can't here, because JavaScript-capable web browsers take all the memory and often crash, so I can only use them as a last resort. I'm using the terminal user interface tootstream, but it has some limitations and my high-traffic account doesn't match well with it. I'm setting up brutaldon, a local program that gives access to Mastodon through an old-style website; I already wrote about it on my blog if you want more information.

Listening to music §

Most of my files are FLAC encoded and extremely big. The computer can decode them fine, but this uses most of the CPU. As OpenBSD doesn't support mounting Samba shares and my music is on my NAS (in addition to locally on my usual computer), I have to copy the files locally before playing them.

One solution is to use musikcube on my NAS and my laptop in a server/client setup, which makes the NAS transcode the music on the fly for the laptop. Unfortunately there is no musikcube package yet; I started compiling it on my old laptop and I suppose it will take a few hours to complete.

Reading emails §

My favorite email client at the moment is claws-mail and fortunately it runs perfectly fine on this old computer. The lack of right click is sometimes a problem, but a clever workaround is to run "xdotool click 3" to tell X to do a right click where the cursor is; it's not ideal but I rarely need it, so it's OK. The small screen is not ideal for dealing with huge piles of mail, but it works so far.

IRC §

My IRC setup is a tmux with as many catgirl (IRC client) instances as networks I'm connected to, running on a remote server; I just connect there with ssh and attach to the local tmux. No problem here.

Writing my blog §

The process is exactly the same as usual: I open a terminal, start my favorite text editor, create the file and write in it. Then I run aspell to check for typos, and run "make" so my blog generator creates the HTML/gopher/gemini versions and dispatches them to the various servers where they belong.

How I feel §

It's not that easy! My reliance on web services hurts here; I found a website providing weather forecasts that works in w3m.

I easily focus on a task because switching to something else is painful (screen redrawing takes some time, the HDD is noisy). I found a blog from a reader linking to other blogs, and I enjoyed reading them all, while I'm pretty sure I would usually just bookmark it in Firefox and switch to opening 10 tabs to see what's new on some websites.

Obsolete in the IT crossfire

Written by Solène, on 09 July 2021.
Tags: #life #linux #unix #openbsd

Comments on Mastodon

Preamble §

This is not an article about some tech, but rather me sharing feelings about my job, my passion and IT. I first met a Linux system in the early 2000s and I didn't really understand what it was. I learned it the hard way by wiping Windows on the family computer (which was quite an issue), and since that time I've had a passion for computers. I made a lot of mistakes that made me progress and learn more, and the more I learned, the more I saw the amount of knowledge I was missing.

Anyway, I finally reached a decent skill level, if I may say so, but I started early, so my skills are tied to that early Linux ecosystem. Tools are evolving, Linux is morphing into something a bit more different every year, and practices are evolving with the "Cloud". I feel lost.

Within the crossfire §

I've met many people along my ride in open source and I think we can distinguish two schools (of course I know it's not that black and white): the people (like me) who enjoy the traditional ecosystem, and the other group that comes from the Cloud era. It is quite easy to bash the opposite group, and I feel sad when I witness such disputes.

I can't tell which group is right and which is wrong; there is certainly good and bad in both. While I like to understand and control how my systems work, the other group just cares about the produced service, not the underlying layers. Nowadays, you want your service uptime to have as many nines as you can afford (99.999999), at the cost of complex setups with services automatically respawning on failure, automatic routing between VMs and things like that. This is not necessarily something I enjoy. I think a good service should have a good foundation, and restarting the whole system upon failure seems wrong, although I can't deny it's effective for availability.

I know how a package manager works, but the other group will certainly prefer a tool that hides all of the package manager's complexity to get the job done. Telling Ansible to pop a new virtual machine on Amazon using Terraform, with a full nginx-php-mysql stack installed, is the new way to manage servers. It seems a sane option because it gets the job done, but still, I can't find myself in there. Where is the fun? I can't get the fun out of this. You can install the system and the services without ever seeing the installer of the OS you are deploying; this is amazing and insane at the same time.

I feel lost in this new era. I used to manage dozens of systems (most bare-metal, without virtualization); I knew each of them, which I had bought and installed myself. I knew which processes should be running and their usual CPU/memory usage; I had an acquaintance with all my systems. I was not only the system administrator, I was the IT gardener, working all the time to get the most out of our servers, optimizing network transfers, memory usage and backup scripts. Nowadays you just pop a larger VM if you need more resources, and backups are just snapshots of the whole virtual disk; their lives are ephemeral and anonymous.

To the future §

I would like to understand that other group better, get more confident with their tools and logic, but at the same time I feel some aversion to doing so, because I feel I would be renouncing what I like, what I want, what made me who I am now. I suppose the group I belong to will slowly fade away to give room to the new era. I want to be prepared to join that new era, but at the same time I don't want to abandon the people of my own group by accelerating the process.

I'm a bit lost in this crossfire. Should a resistance organize against this? I don't know, I wouldn't see the point. The way we do computing is very young, we are still looking for our way. Humanity has been making buildings for thousands of years and yet we still improve the way we build houses, bridges and roads; I guess the IT industry is following the same process but, as usual with computers, at an insane rate that humans can barely follow.

Next §

Please share with me by email, Mastodon or even IRC if you feel something similar or if you got past that issue; I would be really interested in talking about this topic with other people.

Readers reactions §

ew.srht.site reply

After thoughts (UPDATE post publication) §

I got many many readers giving me their thoughts about this article and I'm really thankful for this.

Now I think it's important to realize that when you want to deploy systems at scale, you need to automate your whole infrastructure, and then you lose that feeling with your servers. However, it's still possible to have fun, because we need tooling, proper tooling that works and brings huge benefits. We are still very young with regard to automation and a lot of improvements can be made.

We will still need all those gardeners enjoying their small area of computing, because all the cloud services rely on their work to create reproducible systems, in quantity, that you can rely on. They are making the first and most important bricks required to build the "Cloud"; without them you wouldn't have a working Alpine/CentOS/FreeBSD/etc... to deploy automatically.

Both can coexist, and both should know each other better, because they will have to live together to continue the fantastic computer journey, although the first group will certainly remain small compared to the other.

So, not everything is lost! The Cloud industry can be avoided by self-hosting at home or in associative datacenters/colocations, and it's still possible to enjoy some parts of the great shift without giving up everything we believe in. A certain balance can be found, I'm quite sure of it.

OpenBSD: pkg_add performance analysis

Written by Solène, on 08 July 2021.
Tags: #bandwidth #openbsd #unix

Comments on Mastodon

Introduction §

OpenBSD's package manager pkg_add is known to be quite slow and to use a lot of bandwidth. I'm trying to figure out easy ways to improve it, and I may have nailed something today by replacing the ftp(1) HTTP client with curl.

Testing protocol §

On an OpenBSD -current amd64, I used the command "pkg_add -u -v | head -n 70", which checks for updates of the first 70 packages and then stops. The packages tested are always the same, so the test is reproducible.

The traditional "ftp" will be tested, but also "curl" and "curl -N".

The bandwidth usage was measured using "pfctl -s labels", with a match rule matching the mirror IP, reset after each test.
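
For example, the accounting rule in pf.conf could look like this (a sketch; the mirror IP is a placeholder), with "pfctl -z" used to reset the counters between tests:

# count bytes exchanged with the mirror
match out to 203.0.113.10 label "pkg_add_test"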

What happens when pkg_add runs §

Here is a quick intro to what happens in the code when you run pkg_add -u over http://:

  • pkg_add downloads the package list from the mirror (which could be considered an index.html file) weighing ~2.5 MB; if you add two packages separately, the index is downloaded twice.
  • pkg_add runs /usr/bin/ftp on the first package to upgrade, reads its first bytes, pipes them to gunzip (done from perl, within pkg_add) and then to signify to check the package signature. The signature is the list of dependencies and their versions, which pkg_add uses to know whether the package requires an update; the package's signify signature is stored in the gzip header if the whole package is downloaded (there are 2 signatures: signify and the package dependencies, don't be misled!).
  • if everything is fine, the package is downloaded and the old one is replaced.
  • if there is no need to update, the package is skipped.
  • each new package means a new ftp(1) connection and new pipes to set up.

Using the FETCH_CMD variable it's possible to tell pkg_add to use another command than /usr/bin/ftp, as long as it understands the "-o -" parameter and also "-S session" for https:// connections. Because curl doesn't support the "-S session=..." parameter, I used a shell wrapper that discards this parameter.
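
Such a wrapper could look like this (a sketch of the approach, assuming the parameter arrives as "-S" followed by "session=..."; it rebuilds the argument list without it and hands everything else to curl):

#!/bin/sh
# drop the "-S" flag and its "session=..." value, keep all other arguments
for arg in "$@"
do
        shift
        case "$arg" in
                -S|session=*) ;;              # discard
                *) set -- "$@" "$arg" ;;      # keep
        esac
done
exec /usr/local/bin/curl -L -s -q -N "$@"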

Raw results §

I measured the whole execution time and the total bytes downloaded for each combination. I didn't include all the raw results, but I ran the tests multiple times and the standard deviation is near 0, meaning a test run multiple times gave the same result at each run.

operation               time to run (s)     data transferred (MB)
---------               ---------------     ---------------------
ftp http://             39.01               26
curl -N http://         28.74               12
curl http://            31.76               14
ftp https://            76.55               26
curl -N https://        55.62               15
curl https://           54.51               15

Charts with results

Analysis §

There are a few surprising facts in the results.

  • ftp(1) doesn't take the same time over http and https, while it is supposed to reuse the same TLS socket to avoid a handshake for every package.
  • ftp(1) bandwidth usage is drastically higher than curl's; the time difference seems proportional to the bandwidth difference.
  • curl -N and curl perform exactly the same over https.

Conclusion §

Using http:// is way faster than https://. The risk is about privacy: in case of a man in the middle, the downloaded packages would be known, but the signify signature will prevent any maliciously modified package from being installed. Using 'FETCH_CMD="/usr/local/bin/curl -L -s -q -N"' gave the best results.

However I can't explain yet the very different behaviors between ftp and curl or between http and https.

Extra: set a download speed limit to pkg_add operations §

By using curl as FETCH_CMD you can use the "--limit-rate 900k" parameter to limit the transfer speed to the given rate.
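
For example, assuming FETCH_CMD is read from the environment, an update with a bandwidth cap could be run like this:

env FETCH_CMD="/usr/local/bin/curl -L -s -q -N --limit-rate 900k" pkg_add -u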

The Old Computer Challenge

Written by Solène, on 07 July 2021.
Tags: #linux #oldcomputerchallenge

Comments on Mastodon

Introduction §

For some time I have wanted to start a personal challenge; after some thought, I want to share it with you and invite you to join me in this journey.

The point of the challenge is to replace your daily computer with a very old computer and share your feelings for the week.

The challenge §

Here are the *rules* of the challenge. There is no prize to win, but I'm convinced we will have feelings to share along the week and that it will change the way we interact with computers.

  • 1 CPU maximum, whatever the model. This means only 1 CPU/core/thread. Some BIOSes allow disabling extra cores.
  • 512 MB of memory (if you have more it's not a big deal; if you want to reduce your RAM, create a tmpfs and put a big file in it, see the sketch after this list)
  • using USB dongles is allowed (storage, wifi, Bluetooth whatever)
  • only for your personal computer, during work time use your usual stuff
  • relying on services hosted remotely is allowed (VNC, file sharing, whatever help you)
  • using a smartphone to replace your computer may work, please share if you move habits to your smartphone during the challenge
  • if you absolutely need your regular computer for something really important please use it. The goal is to have fun but not make your week a nightmare.
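
For the memory rule, here is a minimal sketch of the tmpfs trick on Linux, assuming an 8 GB machine (adjust count so roughly 512 MB remain free; disable swap first, or the tmpfs pages may simply get swapped out):

mkdir -p /mnt/eatram
mount -t tmpfs -o size=8G tmpfs /mnt/eatram
dd if=/dev/zero of=/mnt/eatram/fill bs=1M count=7680   # locks away 7.5 GB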

If you don't have an old computer, don't worry! You can still use your regular computer and create a virtual machine with low specs. You would still be more comfortable with a good screen, fast disk access and a not too old CPU, but you can participate.
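
If you go the virtual machine route, a QEMU sketch matching the challenge rules could be as simple as this, assuming challenge.qcow2 is a disk image you installed beforehand:

# 1 CPU, 512 MB of memory
qemu-system-x86_64 -m 512M -smp 1 -hda challenge.qcow2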

Date §

The challenge will take place from 10th July morning until 17th July morning.

Social medias §

Because I want this event to be a nice moment to share with others, you can contact me so I can add your blog (including gopher/gemini space) to the future list below.

You can also join #old-computer-challenge on libera.chat IRC server.

prahou's blog, running a T42 with OpenBSD 6.9 i386 with hostname brouk

Joe's blog about the challenge and why they need it

Solene (this blog) running an iBook G4 with OpenBSD -current macppc with hostname jeefour

(gopher link) matto's report using FreeBSD 13 on an Acer aspire one

cel's blog using Void Linux PPC on an Apple Powerbook G4

Keith Burnett's blog using a T42 with an emphasis on using GUI software to see how it goes

Kuchikuu's blog using a T60 running Debian (but specs out of the challenge)

Ohio Quilbio Olarte's blog using an MSI Wind netbook with OpenBSD

carcosa's blog using an ASUS eeePC netbook with Fedora i386 downgraded with kernel command line

Tekk's website, using a Dell Latitude D400 (2003) running Slackware 14.2

My setup §

I use an old iBook G4 laptop (the one I already use "offline"); it has a single PowerPC G4 1.3 GHz CPU, 512 MB of RAM and a slow 40 GB HDD. The wifi is broken so I would have to use a wifi dongle, but I will certainly rely on ethernet. The screen has a 1024x768 resolution but the colors are pretty bad.

In regards to software, it runs OpenBSD 6.9 with /home/ encrypted, which makes performance worse. I use ratpoison as the window manager because it saves screen space, requires little memory and CPU, and is entirely keyboard driven; that laptop only has a left click touchpad button :).

I love that laptop, and initially I wanted to see how far I could go using it as my daily driver!

Picture of the laptop

Screenshot of the laptop

Track changes in /etc with etckeeper

Written by Solène, on 06 July 2021.
Tags: #linux

Comments on Mastodon

Introduction §

Today I will introduce you to etckeeper, a simple tool that tracks changes in your /etc/ directory in a version control system (git, mercurial, darcs, bazaar...).

etckeeper project website

Installation §

Your system almost certainly packages it; you will then have to run "etckeeper init" in /etc/ the first time. A cron job or systemd timer should be set up by your package manager to automatically run etckeeper every day.

In some cases, etckeeper can integrate with the package manager to automatically run after a package installation.
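
If your package doesn't do it for you, the first run looks like this (a sketch assuming the default git backend):

cd /etc
etckeeper init
etckeeper commit "initial import"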

Benefits §

While it can easily be replicated by running "git init" in /etc/ and then "git commit" when you make changes, etckeeper does it automatically as a safety net, because it's easy to forget to commit after a change. It also integrates with other system tools and can use hooks, like sending an email when a change is found.

It's really a convenience tool, but given how light it is and how useful it can be, I think it's a must for most sysadmins.

Gentoo cheatsheet

Written by Solène, on 05 July 2021.
Tags: #linux #gentoo #cheatsheet

Comments on Mastodon

Introduction §

This is a simple cheatsheet to manage my Gentoo systems. Gentoo is a source-based Linux distribution, meaning everything installed on the computer must be compiled locally.

Gentoo project website

Upgrade system §

I use the following command to update my system; it downloads the latest portage tree and then rebuilds @world (the whole set of manually installed packages).

#!/bin/sh
# sync the portage tree; emerge-webrsync prints a line containing
# "The current local" when the tree is already up to date, in which
# case there is nothing more to do
emerge-webrsync 2>&1 | grep "The current local"
if [ $? -eq 0 ]
then
	exit
fi

# rebuild @world, including build dependencies and packages whose
# USE flags changed
emerge -auDv --with-bdeps=y --changed-use --newuse @world

Use ccache §

As you may rebuild the same program many times (especially on a new install), I highly recommend using ccache to reuse previously built objects; it reduced my build times by around 80% when I changed a USE flag.

It's quite easy: install the ccache package, add 'FEATURES="ccache"' to your make.conf and run "install -d -o root -g portage -m 775 /var/cache/ccache", and it should be working (you should see files appearing in the ccache directory).
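
Put together, the steps above boil down to this (package name as found in the Gentoo tree):

emerge --ask dev-util/ccache
echo 'FEATURES="ccache"' >> /etc/portage/make.conf
install -d -o root -g portage -m 775 /var/cache/ccache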

Gentoo wiki about ccache

Use genlop to view / calculate build time from past builds §

Genlop can tell you how much time a build will need, or how much remains, based on information from previous builds. I find it quite fun to see how long an upgrade will take.

Gentoo wiki about Genlop

View compilation time §

From the package genlop

# genlop -c

 Currently merging 1 out of 1

 * app-editors/vim-8.2.0814-r100 

       current merge time: 4 seconds.
       ETA: 1 minute and 5 seconds.

Simulate compilation §

Add -p to the emerge command for "pretend" and pipe the output to genlop -p like this:

# emerge -av -p kakoune | genlop -p
These are the pretended packages: (this may take a while; wait...)

[ebuild   R   ~] app-editors/kakoune-2020.01.16_p20200601::gentoo  0 KiB


Estimated update time: 1 minute.

Using gentoolkit §

The gentoolkit package provides a few commands to find information about packages.

Gentoo wiki page about Gentoolkit

Find a package §

You can use "equery" from the package gentoolkit like this "equery l -p '*package name*" globbing with * is mandatory if you are not looking for a perfect match.

Example of usage:

# equery l -p '*firefox*'
 * Searching for *firefox* ...
[-P-] [  ] www-client/firefox-78.11.0:0/esr78
[-P-] [ ~] www-client/firefox-89.0:0/89
[-P-] [ ~] www-client/firefox-89.0.1:0/89
[-P-] [ ~] www-client/firefox-89.0.2:0/89
[-P-] [  ] www-client/firefox-bin-78.11.0:0/esr78
[-P-] [  ] www-client/firefox-bin-89.0:0/89
[-P-] [  ] www-client/firefox-bin-89.0.1:0/89
[IP-] [  ] www-client/firefox-bin-89.0.2:0/89

Get the package name providing a file §

Use "equery b /path/to/file" like this

# equery b /usr/bin/2to3
 * Searching for /usr/bin/2to3 ... 
dev-lang/python-exec-2.4.6-r4 (/usr/lib/python-exec/python-exec2)
dev-lang/python-exec-2.4.6-r4 (/usr/bin/2to3 -> ../lib/python-exec/python-exec2)

Listing every system I used

Written by Solène, on 02 July 2021.
Tags: #linux #unix #bsd

Comments on Mastodon

Introduction §

Nobody asked for it, but I wanted to share the list of the systems I have used in my life (on a computer) and a few words about them. This is obviously not very accurate, but I'm happy to write it down somewhere.

You may wonder why I made some of these choices in the past; I was young and had little experience during many of these experiments, and a nice looking distribution was very appealing to me.

One has to know (or remember) that 10 years ago, Linux distributions were very different from one another, and they became more and more standardized over time. To the point that I no longer consider distro hopping (switching from one distribution to another regularly) interesting, because most distributions are derivatives of a main one, and most have systemd and the same defaults.

Disclaimer: my opinions about each system are personal and driven by feelings and memories, so they may be totally inaccurate (outdated or damaged memories) or even wrong (misunderstanding, bad luck). If I had issues with a system, this doesn't mean it is BAD and that you shouldn't use it; I recommend making your own opinion about them.

The list (alphabetically) §

This includes Linux distributions but also BSD or Solaris derived system.

Alpine §

  • Duration: a few hours
  • Role: workstation
  • Opinion: interesting but lack of documentation
  • Date of use: June 2021

I wanted to use it on my workstation, but the documentation for full disk encryption, and the documentation in general, was outdated and inaccurate, so I gave up.

However the extreme minimalism is interesting and without full disk encryption it worked fine. It was surprising to see how packages were split in such small parts, I understand why it's used to build containers.

I really want to like it, maybe in a few years it will be mature enough.

BackTrack §

  • Duration: occasionally
  • Role: playing with wifi devices
  • Opinion: useful
  • Date of use: occasionally between 2006 and 2012

Worked well with a wifi dongle supporting monitor mode.

CentOS §

  • Duration: not much
  • Role: local server
  • Opinion: old packages
  • Date of use: 2014

Nothing much to say; I had to use it temporarily to try a program we were delivering to a client using Red Hat.

Crux §

  • Duration: a few months maybe
  • Role: workstation
  • Opinion: it was blazing fast to install
  • Date of use: around 2009

I don't remember much about it to be honest.

Debian §

  • Duration: multiple years
  • Role: workstation (at least 1 year accumulated) and servers
  • Opinion: I don't like it
  • Date of use: from 2006 to now

It's not really possible to do Linux without having to deal with Debian some day. It works fine once installed, but I always had a painful time with upgrades. As for using it as a workstation, it was at the time of GNOME 2, and software was often already obsolete, so I was using testing.

DragonflyBSD §

  • Duration: months
  • Role: server and workstation
  • Opinion: interesting
  • Date of use: ~2009-2011

The system worked quite well; I had hardware compatibility issues at that time, but it worked well for my laptop. HAMMER was stable when I used it on my server and I really enjoyed working with this file system; the server was my NAS and Mumble server at that time and it never failed me. I really think it makes a good alternative to ZFS.

Edubuntu §

  • Duration: months
  • Role: laptop
  • Opinion: shame
  • Date of use: 2006

I was trying to be a good student at that time and Edubuntu seemed interesting; I didn't understand it was just an Ubuntu with a few packages pre-installed. It was installed on my very first laptop (a very crappy one, but eh, I loved it).

Elementary §

  • Duration: months
  • Role: laptop
  • Opinion: good
  • Date of use: 2019-now

I have an old multimedia laptop (the case is falling apart) that runs Elementary OS, mainly for their own desktop environment Pantheon that I really like. The distribution itself is solid and well done; it never failed me even after major upgrades. I could do everything using the GUI. I would recommend it to a Linux beginner or someone enjoying GUI tools.

EndeavourOS §

  • Duration: months
  • Role: testing stuff
  • Opinion: good project
  • Date of use: 2021

I had never been into Arch, but I got my first contact with it through EndeavourOS, a distribution based on Arch Linux that proposes an installer with many options to install Arch Linux, plus a few helper tools to manage your system. This is clearly an Arch Linux and they don't hide it; they just facilitate the use and administration of the system. I'm totally capable of installing Arch, but I have to admit that if a GUI can save me a lot of time installing it with full disk encryption, I'm all for it. As an Arch Linux noob, the little "welcome" GUI provided by EndeavourOS was very useful to learn how to use the package manager and a few other things. I'd totally recommend it over Arch Linux because it doesn't denature Arch while still providing useful additions.

Fedora §

  • Duration: months
  • Role: workstation
  • Opinion: hazardous
  • Date of use: 2006 and around 2014

I started with Fedora Core 6 in 2006; at that time it was amazing, with lots of new and up to date software, while the alternatives were Debian or Mandrake (Ubuntu was not very popular yet), and I used it a long time. I used it again later, but I stumbled on many quality issues and I don't have good memories of it.

FreeBSD §

  • Duration: years
  • Role: workstation, server
  • Opinion: pretty good
  • Date of use: 2009 to 2020

This is the first BSD I tried. I had heard a lot about it, so I downloaded the 3 or 5 CDs of the release with my 16 kB/s DSL line, burned the CDs and installed it on my computer. The installer proposed installing packages at that time, but it did so in a crazy way: you had to switch CDs a lot between the sets, because the packages were scattered, sometimes on CD 2, then CD 3, then CD 1 and back again... For some reason, I destroyed my system a few times by mixing ports and packages, which ended up dooming the system. I learned a lot from my destroy and retry method.

For my first job (which I held for 10 years) I switched all the Debian servers to FreeBSD servers and started playing with jails to provide security for web servers. FreeBSD never let me down on servers. The biggest pain I had with FreeBSD was freebsd-update updating RCS tags, so I sometimes had to merge a hundred files manually... To the point that I preferred reinstalling my servers (with Salt Stack) over upgrading them.

On my workstation it always worked well. I regret that package quality can sometimes be inconsistent, but I'm also part of the problem because I don't think I ever reported such issues.

Frugalware §

  • Duration: weeks
  • Role: workstation
  • Opinion: I can't remember
  • Date of use: 2006?

I remember I've run a computer with that but that's all...

Gentoo §

  • Duration: months
  • Role: workstation
  • Opinion: i love it
  • Date of use: 2005, 2017, 2020 to now

My first encounter with Gentoo was during my early Linux discovery. I remember following the instructions and compiling X for like A DAY to get a weird result: the resolution was totally wrong and it was in greyscale, so I gave up.

I tried it again in 2017: I successfully installed it with full disk encryption and used it as my work laptop, and I don't remember breaking it once. The only issue was the compilation time when I needed a program that wasn't installed yet.

I'm back on Gentoo regularly for one laptop that requires many tweaks to work correctly and I also use it as my main Linux at home.

gNewSense §

  • Duration: months
  • Role: workstation
  • Opinion: it worked
  • Date of use: 2006

It was my first encounter with a 100% free system, I remember it wasn't able to play MP3 files :) It was an Ubuntu derivative and the community was friendly. I see the project is abandoned now.

Guix §

  • Duration: months
  • Role: workstation
  • Opinion: interesting ideas but raw
  • Date of use: 2016 and 2021

I like Guix a lot; it has very good ideas, and the consistent use of the Scheme language to define the packages and write the tools is something I enjoy a lot. However, the system doesn't feel very great for desktop usage with a GUI: it appears quite raw and required many workarounds to work correctly.

Note that Guix is a distribution but also a package manager that can be installed on any Linux distribution alongside the original package manager; in that case we refer to it as Foreign Guix.

Mandrake §

  • Duration: weeks?
  • Role: workstation
  • Opinion: one of my first
  • Date of use: 2004 or something

This was one of my first distributions and it came with a graphical installer! I remember packages had to be installed with the command "urpmi", but that's all. I think I didn't have access to the internet with my USB modem, so I was limited to packages from the CDs I burned.

NetBSD §

  • Duration: years
  • Role: workstation and server
  • Opinion: good
  • Date of use: 2009 to 2015

I first used NetBSD on a laptop (in 2009), but it was not very stable and programs were core dumping a lot; I also found the software in pkgsrc was not really up to date. However, I used it for years as my first email server and I never had a single issue.

I didn't try it seriously for a workstation recently but from what I've heard it became a good choice for a daily driver.

NixOS §

  • Duration: years
  • Role: workstation and server
  • Opinion: awesome but different
  • Date of use: 2016 to now

I have been using NixOS daily on my professional workstation since 2020, and it has never failed me, even when I'm on the development channel. I already wrote about it: it's an amazing piece of work, but radically different from other Linux distributions or Unix-like systems.

I'm also using it on my NAS and it has been absolutely flawless since I installed it. But I am not sure how easy or hard it would be to run a full featured mail server on it (my best example of a complex setup).

NuTyX §

  • Duration: months
  • Role: workstation
  • Opinion: it worked
  • Date of use: 2010

I don't remember much about this distribution, but I remember the awesome community and the creator of the distro, who is a very helpful and committed person. This is a distribution made from scratch that works very well and is still alive and dynamic, kudos to the team.

OpenBSD §

  • Duration: years
  • Role: workstation and server
  • Opinion: boring because it just works
  • Date of use: 2015 to now

I already wrote a few times about why I like OpenBSD, so I will make it short: it just works, and it works fine. However, hardware compatibility can be limited; when the hardware is supported, everything just works out of the box without any tweak.

I've been using it daily for years now; it started when my NetBSD mail server had to be replaced by a newer machine at Online, so I chose to try OpenBSD. I've been part of the team since 2018, and apart from occasional ports changes, my big contribution was to set up the infrastructure to build binary packages for ports changes in the stable branch.

I wish performance were better though.

OpenIndiana §

  • Duration: weeks
  • Role: workstation
  • Opinion: sadness but hope?
  • Date of use: 2019

I was a huge fan of OpenSolaris, but Oracle killed it. OpenIndiana is the resurrection of the open source Solaris, but it is now a bit abandoned by contributors and the community isn't as dynamic as it used to be. Hardware support is lagging; however, the system performs very well and all the Solaris features are still there if you know what to do with them.

I really hope for this project to get back on track again and being as dynamic as it used to be!

OpenSolaris §

  • Duration: years
  • Role: workstation
  • Opinion: sadness
  • Date of use: 2009-2010

I loved OpenSolaris, it was such an amazing system; every new release had a ton of improvements (package updates, features, hardware support) and I really thought it would compete with Linux at that rate. It was possible to get free CDs by snail mail and they looked amazing.

It was my main workstation on my big computer (I built it in 2007 and it had two Xeon E5420 CPUs and 32 GB of memory with 6x 500 GB SATA drives!!!), and it was totally amazing to play with virtualization on it. The desktop was super fast, and using Wine I was able to play Windows video games.

OpenSuse §

  • Duration: months
  • Role: pro workstation
  • Opinion: meh
  • Date of use: something like 2015

I don't have strong memories of OpenSuse. I think it worked well on my workstation at first, but after some time the package manager went mad and did weird things like removing half the packages just to reinstall them... I never wanted to give it another try after this few-months experiment.

Paldo §

  • Duration: weeks? months?
  • Role: workstation
  • Opinion: the install was fast
  • Date of use: 2008?

I remember having played with it and contributed a bit to packages over IRC; all I remember is the kind community and that it was super fast to install. It's a distribution from scratch and it's still alive and updated, bravo!

PC-BSD §

  • Duration: months
  • Role: workstation
  • Opinion: many attempts, too bad
  • Date of use: 2005-2017

PC-BSD (and more recently TrueOS) was the idea of bringing FreeBSD to everyone. Each release was either good or bad; it was possible to use FreeBSD packages but also "pbi" packages that looked like Mac OS installers (a huge file that you double click to install). I definitely liked it because it was my first real success with FreeBSD, but sometimes the tools proposed were half-baked or badly documented. The project is dead now.

PCLinuxOS §

  • Duration: weeks?
  • Role: laptop
  • Opinion: it worked
  • Date of use: around 2008?

I remember installing it was working fine and I liked it.

Pop!_OS §

  • Duration: months
  • Role: gaming computer
  • Opinion: works!!
  • Date of use: 2020-2021

I use this distribution on my gaming computer and I have to admit it can easily replace Windows! :) Upgrades are painless and everything works out of the box (including the Nvidia driver).

Scientific Linux §

  • Duration: months
  • Role: workstation
  • Opinion: worked well
  • Date of use: ??

I remember I used Scientific Linux as my main distribution at work for some time; it worked well and reminded me of my old Fedora Core.

Skywave §

  • Duration: occasionally
  • Role: laptop for listening to radio waves
  • Opinion: a must
  • Date of use: 2018-now

This distribution is really focused on providing tools for radio hardware. I bought a simple and cheap RTL-SDR USB device and was able to use it with the pre-installed software, really a plug and play experience. It works as a live CD, so you don't even need to install it to benefit from its power.

Slackware §

  • Duration: years
  • Role: workstation and server
  • Opinion: Still Loving You....
  • Date of use: multiple times since 2002

It is very hard for me to explain how deeply I love Slackware Linux. I just love it. As the dates say, I started with it in 2002: it was my very first encounter with Linux. A friend bought a Linux magazine with Slackware CDs and explanations about the installation; it worked, and many programs were available to play with! (I also erased Windows on the family computer because I had no idea what I was doing.)

Since that time, I have used Slackware multiple times, and I think it's the system that survived the longest every time it got installed; every new Slackware release was a day of celebration for me.

I can't explain why I like it so much; I guess it's because over time you deeply know how your system works. Packages didn't manage dependencies at that time and it was a real pain to get new programs, but it has improved a lot since.

I really can't wait for Slackware 15.0 to be out!

Solaris §

  • Duration: months
  • Role: workstation
  • Opinion: fine but not open source
  • Date of use: 2008

I remember the first time I heard that Solaris was a system I could install on my machine. I started installing it after downloading the 2 parts of the ISO (which had to be joined using cat), began the install on my laptop, and went to school with the laptop on battery, the installation continuing on the way (it was very long) and finishing in class (I was in a computer science university, so it was fine :P).

I discovered a whole new world with it; I even used it on a netbook to write a Java SCTP university project. It was my very first introduction to ZFS, a brand new filesystem with many features.

Solus §

  • Duration: days
  • Role: workstation
  • Opinion: good job team
  • Date of use: 2020

I didn't try Solus much because I'm quite busy nowadays, but it's a good alternative to the major distributions: it's totally independent from other main projects and they even have their own package manager. My small experiment was good and it felt like a quality product; it follows a rolling release model, but the packages are curated for quality before being pushed to the mass of users.

I wish them a long and prosper life.

Ubuntu §

  • Duration: months
  • Role: workstation and server
  • Opinion: it works fine
  • Date of use: 2006 to 2014

I used Ubuntu on laptops a lot, and I recommended Ubuntu to many people who wanted to try Linux. Whatever we say, they helped to get Linux known and brought Linux to the masses. Some choices, like the non-free integration, are definitely not great though. I started with Dapper Drake (Ubuntu 6.06!) on an old Pentium 1 server I had under my dresser in my student room.

I used it daily a few times, mainly at the time the default window manager was Unity. For some reason, I loved Unity; it's really a pity the project is now abandoned and lost, because it worked very well for me and looked nice.

I don't want to use it anymore as it became very complex internally; for example, understanding how domain names are resolved is quite complicated...

Void §

  • Duration: days?
  • Role: workstation
  • Opinion: interesting distribution, not enough time to try
  • Date of use: 2018

Void is an interesting distribution. I used it a little on a netbook with their musl libc edition and ran into many issues, both at install time and during use. The glibc version worked a lot better, but I can't remember why it didn't catch me more than that.

I wish I could have a lot of time to try it more seriously. I recommend everyone giving it a try.

Windows §

  • Duration: years
  • Role: gaming computer
  • Opinion: it works
  • Date of use: 1995 to now

My first encounter with a computer was with Windows 3.11 on a 486dx computer, I think I was 6. Since then I have always had a Windows computer, at first because I didn't know there were alternatives, and then because I always had it as a hard requirement for some hardware, software or video games. Now, my gaming computer runs Windows and is dedicated to games only; I do not trust this system enough to do anything else. I'm slowly trying to move away from it and the efforts are paying off: more and more games work fine on Linux.

Zenwalk §

  • Duration: months
  • Role: workstation
  • Opinion: it's like slackware but lighter
  • Date of use: 2009?

I don't remember much; it was like Slackware but without the giant DVD install that requires 15 GB of space, it used Xfce by default and looked nice.

How to choose a communication protocol

Written by Solène, on 25 June 2021.
Tags: #internet

Comments on Mastodon

Introduction §

As a human being I have to communicate with other people, and nowadays we have so many ways to speak to each other that it's hard to choose one. This is a simple list of communication protocols and why you would use them. This is an opinionated text.

Protocols §

We rely on protocols to speak to each other; the natural way would be spoken language using our vocal cords, but we could imagine other ways, like emitting sounds in Morse code. With computers we need to define how to send a message from A to B, and there are many, many possibilities for such a simple task.

  • 1. The protocol could be open source, meaning anyone can create a client or a server for this protocol.
  • 2. The protocol can be centralized, federated or peer to peer. In a centralized situation, there is only one service provider and people must be on the same server to communicate. In a federated or peer-to-peer architecture, people can join the communication network with their own infrastructure, without relying on a service provider (federated and peer to peer are different in implementation but their end result is very close)
  • 3. The protocol can provide many features in addition to contact someone.

IRC §

The simplest communication protocol and an old one. It's open source and you can easily host your own server. It works very well and doesn't require a lot of resources (bandwidth, CPU, memory) to run, although it is quite limited in features.

  • you need to stay connected to know what happens
  • you can't stay connected if you don't keep a session opened 24/7
  • multi device (computer / phone for instance) is not possible without an extra setup (bouncer or tmux session)

I like to use it to communicate with many people on some topic, I find they are a good equivalent of forums. IRC has a strong culture and limitations but I love it.

XMPP (ex Jabber) §

Behind this acronym stands a long lived protocol that supports many features and has proven to work; unfortunately, XMPP clients have never really shone with their user interfaces. Recently the protocol has seen a good adoption rate: clients are getting better, and servers are easy to deploy and don't draw many resources (I/O, CPU, memory).

XMPP uses a federation model: anyone can host their server and communicate with people from other servers. You can share files, create rooms and send private messages. Audio and video are supported depending on the client. It's also able to bridge to IRC or some other protocols using the right software. Multiple options for end-to-end encryption are available, but the most recent one, named OMEMO, is definitely the best choice.

The free/open source Android client « Conversations » is really good; on a computer you can use Gajim or Dino for a nice graphical interface, and profanity or poezio for a console client.

XMPP on Wikipedia

Matrix §

Matrix is a recent protocol in this list, although it has seen an incredible adoption rate, and since the recent Freenode drama many projects switched to their own Matrix room. It's fully open source on the client and server side, and it's federated, so anyone can be independent with their own server.

As it's young, Matrix has only one client proposing all the features, Element, a very resource hungry web program (a web page, or run "natively" using Electron, a framework to turn a website into a desktop application), and a Python server named Synapse that requires a lot of CPU to work correctly.

In regards to features, Matrix proposes end-to-end encryption done well, rooms, direct chats, file sharing, audio/video etc...

While it's a good alternative to XMPP, I prefer XMPP because of the poor choice of clients and servers in Matrix at the moment. Hopefully it may get better in the future.

Matrix protocol on Wikipedia

Email §

This one is well known: most people have an email address, and it may have been your first touch with the Internet. Email works well, it's federated, and anyone can host an email server, although it's not an easy task.

Mails are not instant, but with performant servers it only takes a few seconds for an email to be sent and delivered. They support end-to-end encryption using GPG, which is not always easy to use. You have a huge choice of email clients and most of them offer an incredible amount of settings.

I really like emails, it's a very practical way to communicate ideas or thoughts to someone.

Delta Chat §

I found a nice program named Delta Chat that is built on top of email to communicate "instantly" with your friends who also use Delta Chat; messages are automatically encrypted.

The client user interface looks like an instant messaging program, but it uses emails to transport the messages. While the program is open source and free, it requires Electron on the desktop, and I didn't find a way to participate in an encrypted thread using a regular email client (even with the corresponding GPG key). I really find this software practical because your recipients don't need to create a new account: it reuses their existing email address. You can also use it without encryption to write to someone who will reply using their own mail client while you use Delta Chat.

Delta Chat website

Telegram §

Open source client but proprietary server; I don't recommend anyone use a system that locks you into their server. You would have to rely on a company, and you empower them by using their service.

Telegram on Wikipedia

Signal §

Open source client and server, but the main server where everybody is doesn't allow federation. So far, hosting your own server doesn't seem to be a viable solution. I don't recommend using it because you rely on a company offering a service.

Signal on Wikipedia

WhatsApp §

Proprietary software and service, please don't use it.

Conclusion §

I use IRC, email and XMPP daily to communicate with friends, family, crews from open source projects, or to meet new people sharing my interests. My main requirements for private messages are end-to-end encryption and being independent, so I absolutely require federated protocols.

How to use the Open Graph Protocol for your website

Written by Solène, on 21 June 2021.
Tags: #blog

Comments on Mastodon

Introduction §

Today I made a small change to my blog: I added some more HTML metadata for the Open Graph protocol.

Basically, when you share a URL on most social networks or instant messengers, if some Open Graph headers are present the software will display the website name, the page title, a logo and some other information. Without them, only the link is displayed.

Implementation §

You need to add a few tags to your HTML pages in the "head" tag.

    <meta property="og:site_name" content="Solene's Percent %" />
    <meta property="og:title"     content="How to cook without burning your eyebrows" />
    <meta property="og:image"     content="static/my-super-pony-logo.png" />
    <meta property="og:url"       content="https://dataswamp.org/~solene/some-url.html" />
    <meta property="og:type"      content="website" />
    <meta property="og:locale"    content="en_EN" />

There are more metadata than this but it was enough for my blog.

Open Graph Protocol website

Using the I2P network with OpenBSD and NixOS

Written by Solène, on 20 June 2021.
Tags: #i2p #tor #openbsd #nixos #network

Comments on Mastodon

Introduction §

In this text I will explain what the I2P network is, how to provide a service over I2P on OpenBSD, and how to connect to an I2P service from NixOS.

I2P §

This acronym stands for Invisible Internet Project; it is a network on top of the network (the Internet). It is quite an old project, from 2003, and is considered stable and reliable. The idea of I2P is to build a network of relays (people running an I2P daemon) to make tunnels from a client to a server; a single TCP (or UDP) session between a client and a server may use many tunnels of n hops across relays. Basically, when you start your I2P service, the program gets some information about the available relays and prepares many tunnels in advance that will be used to reach a destination when you connect.

Some benefits from I2P network:

  • your network is reliable because it doesn't depend on a single operator's peering
  • your network is secure because packets are encrypted, and you can even use usual encryption to reach your remote services (TLS, SSH)
  • provides privacy because nobody can tell where you are connecting to
  • it can protect against habit tracking (if you also relay data to participate in I2P, your allocated bandwidth is used at 100% all the time, and any traffic you generate over I2P can't be discriminated from standard relaying!)
  • you can allow only declared I2P nodes to access a server, if you don't want just anyone connecting to a port you expose

It is possible to host a website on I2P (by exposing your web server port); it is called an eepsite and can be accessed using the SOCKS proxy provided by your I2P daemon. I never played with eepsites, but this is a thing and you may be interested in looking at it more in depth.

I2P project and I2P implementation (java) page

i2pd project (a recent C++ implementation that I use for this tutorial)

Wikipedia page about I2P

I2P vs Tor §

Obviously, many people would question why not use Tor, which seems similar. While I2P can seem very close to Tor hidden services, the implementation is really different. Tor is designed to reach the outside world, while I2P is meant to build a reliable and anonymous network. When started, Tor creates a path of relays named a circuit that remains static for approximately 12 hours; everything you do over Tor passes through this circuit (usually 3 relays). On the other hand, I2P creates many tunnels all the time, each with a very short lifespan. Another small difference: I2P can relay UDP while Tor only supports TCP.

Tor is very widespread, and using a Tor hidden service for hosting a private website (if you don't have a public IP or a domain name, for example) would be better to reach an audience; I2P is not very well known, and that's partially why I'm writing this. It is a fantastic piece of software that only needs more users.

Relays in I2P don't have any weight, and the network can be seen as a huge P2P network, while the Tor network is built using scores (consensus) of relaying servers depending on their throughput and availability. The fastest and most reliable relays are elected as "guard servers", which are the entry points to the Tor network.

I ran a test over 10 hours to compare the bandwidth used by I2P and Tor just to keep a tunnel / hidden service available (they were not used). Please note that relaying/transit was deactivated, so this only counts the data uploaded to keep the service working.

  • I2P sent 55.47 MB of data in 114 430 packets. Total / 10 hours = 1.58 kB/s average.
  • Tor sent 6.98 MB of data in 14 759 packets. Total / 10 hours = 0.20 kB/s average.

Tor was a lot more bandwidth efficient than I2P for the same task: keeping the network access (tor or i2p) alive.

Quick explanation about how it works §

There are three components in an I2P usage.

- a computer running an I2P daemon configured with a server tunnel (to expose a TCP/UDP port from this machine, not necessarily from localhost though)

- a computer running an I2P daemon configured with a client tunnel (with information matching the server tunnel)

- computers running I2P and allowing relaying; they receive data from other I2P daemons and pass the encrypted packets along. They are the core of the network.

In this text we will use an OpenBSD system to share its localhost ssh access over I2P and a NixOS client to reach the OpenBSD ssh port.

OpenBSD §

The setup is quite simple, we will use i2pd and not the i2p java program.

pkg_add i2pd

# read /usr/local/share/doc/pkg-readmes/i2pd for open files limits

cat <<EOF > /etc/i2pd/tunnels.conf
[SSH]
type = server
port = 22
host = 127.0.0.1
keys = ssh.dat
EOF

rcctl enable i2pd
rcctl start i2pd

You can edit the file /etc/i2pd/i2pd.conf and uncomment the line "notransit = true" if you don't want to relay. I would encourage people to contribute to the network by relaying packets, but this would require some explanation about proper tuning to limit the bandwidth correctly. If you disable transit, you won't participate in the network, but I2P won't use any CPU and virtually no data unless your tunnel is in use.

Visit http://localhost:7070/ for the admin interface and check the menu "I2P Tunnels"; you should see a line "SSH => " with a long address ending in .i2p with :22 appended. This is the address of your tunnel on I2P; we will need it (without the :22) to configure the client.

Nixos §

As usual, on NixOS we will only configure the /etc/nixos/configuration.nix file to declare the service and its configuration.

We will name the tunnel "ssh-solene" and use the destination seen on the administration interface on the OpenBSD server and expose that port to 127.0.0.1:2222 on our NixOS box.

services.i2pd.enable = true;
services.i2pd.notransit = true;

services.i2pd.outTunnels = {
  ssh-solene = {
    enable = true;
    name = "ssh";
    destination = "gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p";
    address = "127.0.0.1";
    port = 2222;
    };
};

Now you can use "nixos-rebuild switch" as root to apply changes.

Note that, on any other OS, the equivalent of this NixOS configuration would look like this in the i2pd "tunnels.conf" file (on OpenBSD, /etc/i2pd/tunnels.conf):

[ssh-solene]
type = client
address = 127.0.0.1  # optional, default is 127.0.0.1
port = 2222
destination = gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p

Test the setup §

From the NixOS client you should be able to run "ssh -p 2222 localhost" and get access to the OpenBSD ssh server.

Both systems expose the http://localhost:7070/ interface because it's a default setting, which is not a bad thing (except if multiple people can access the box).

Conclusion §

I2P is a nice way to share services on a reliable and privacy friendly network. It may not be fast, but it shouldn't drop you when you need it. Because it easily bypasses NAT or dynamic IPs, it's perfectly fine for a remote system you need to access when you can't use port forwarding or a VPN.

Run your Gemini server on Guix with Agate

Written by Solène, on 17 June 2021.
Tags: #guix #gemini

Comments on Mastodon

Introduction §

This article is about deploying the Gemini server Agate on the Guix Linux distribution.

Gemini quickstart to explain Gemini to beginners

Guix website

Configuration §

Guix manual about web services, search for Agate.

Add the agate-service definition to your /etc/config.scm file; we will store the Gemini content in /srv/gemini/content and the certificate and its private key in the parent directory.

(service agate-service-type
         (agate-configuration
          (content "/srv/gemini/content")
          (cert "/srv/gemini/cert.pem")
          (key "/srv/gemini/key.rsa")))

If you have something like %desktop-services or %base-services, you need to wrap your services in a list using the "list" function and add the %something-services to it using the "append" function, like this:

(services
  (append
    (list (service openssh-service-type)
          (service agate-service-type
                   (agate-configuration
                    (content "/srv/gemini/content")
                    (cert "/srv/gemini/cert.pem")
                    (key "/srv/gemini/key.rsa"))))
    %desktop-services))

Generating the certificate §

- Create the directory /srv/gemini/content

- Run the following command in /srv/gemini/

openssl req -x509 -newkey rsa:4096 -keyout key.rsa -out cert.pem -days 3650 -nodes -subj "/CN=YOUR_DOMAIN.TLD"

- Apply a chmod 400 on both files cert.pem and key.rsa

- Use "guix system reconfigure /etc/config.scm" to install agate

- Use "chown agate:agate cert.pem key.rsa" to allow agate user to read the certificates

- Use "herd restart agate" to restart the service, you should have a working gemini server on port 1965 now

Conclusion §

You are now ready to publish content on Gemini by adding files in /srv/gemini/content , enjoy!

How to use Tor only for onion addresses in a web browser

Written by Solène, on 12 June 2021.
Tags: #tor #openbsd #network #security #privacy

Comments on Mastodon

Introduction §

A while ago I published about Tor and Tor hidden services. As a quick reminder, hidden services are TCP ports exposed into the Tor network using a long .onion address and that doesn't go through an exit node (it never leaves the Tor network).

If you want to browse .onion websites you need Tor, but you may not want to use Tor for everything, so here are two solutions to use Tor only for specific domains. Note that I use Tor here, but these methods work for any SOCKS proxy (including ssh dynamic tunneling with ssh -D).

I assume you have tor running and listening on port 127.0.0.1:9050 ready to accept connections.

Firefox extension §

The easiest way is to use a web browser extension (I personally use Firefox) that allows defining rules based on the URL to choose a proxy (or no proxy). I found FoxyProxy to do the job, but there are certainly other extensions proposing the same features.

FoxyProxy for Firefox

Install that extension, configure it:

- add a proxy of type SOCKS5 on ip 127.0.0.1 and port 9050 (adapt if you have a non standard setup), enable "Send DNS through SOCKS5 proxy" and give it a name like "Tor"

- click on Save and edit patterns

- Replace "*" by "*.onion" and save

In Firefox, click on the extension icon and enable "Proxies by pattern and order", then visit a .onion URL: you should see the extension icon display the proxy name. Done!

Using privoxy §

Privoxy is a fantastic tool that I had forgotten over time; it's an HTTP proxy with built-in filtering to protect users' privacy. Marcin Cieślak shared his setup using privoxy to dispatch between Tor or no proxy depending on the URL.

The setup is quite easy, install privoxy and edit its main configuration file, on OpenBSD it's /etc/privoxy/config, and add the following line at the end of the file:

forward-socks4a   .onion               127.0.0.1:9050 .

Enable the service and start/reload/restart it.

Configure your web browser to use the HTTP proxy 127.0.0.1:8080 for every protocol (in Firefox you need to check a box to also use the proxy for HTTPS and FTP) and you are done.

Marcin Cieślak mastodon account (thanks for the idea!).

Conclusion §

We have seen two ways to use a proxy depending on the destination; this can be quite useful for Tor but also for some other use cases. I may write about privoxy in the future, but it has many options and it will take time to dig into that topic.

Going further §

Duckduck Go official Tor hidden service access

Check if you use Tor, this is a simple but handy service when you play with proxies

Official Duckduck Go about their Tor hidden service

TL;DR on OpenBSD §

If you are lazy, here are instructions as root to setup tor and privoxy on OpenBSD.

pkg_add privoxy tor
echo "forward-socks4a   .onion               127.0.0.1:9050 ." >> /etc/privoxy/config
rcctl enable privoxy tor
rcctl start privoxy tor

Tor may take a few minutes the first time to build a circuit (finding other nodes).

Guix: easily run Linux binaries

Written by Solène, on 10 June 2021.
Tags: #guix

Comments on Mastodon

Introduction §

If you have used Guix or NixOS, you may know that running a binary downloaded from the internet usually fails; this is because most of the expected paths differ from those of the usual Linux distributions.

I wrote a simple utility to help fix that; I called it "guix-linux-run", inspired by the "steam-run" command from NixOS (although it has no relation to Steam).

Gitlab project guix-linux-run

How to use §

Clone the git repository and make the command linux-run executable, then install the packages gcc-objc++:lib and gtk+ (more may be required later).

Call "~/guix-linux-run/linux-run ./some_binary" and enjoy.

If you get an error message like "libfoobar" is not available, try installing it with the package manager and try again: this simply means the binary needs a library that is not available in your library path.

In the project I wrote a simple compatibility list from a few experiments; unfortunately it doesn't run everything and I still have to understand why, but it allowed me to play a few games from itch.io, so it's a start.

Guix: fetch packages from other Guix in the LAN

Written by Solène, on 07 June 2021.
Tags: #guix

Comments on Mastodon

Introduction §

In this how-to I will explain how to configure two Guix systems to share packages from one to the other. Most of the time packages are downloaded from ci.guix.gnu.org, but sometimes you compile local packages too; in both cases you will certainly prefer computers on your network to fetch packages from a machine that already has them, to save some bandwidth. This is quite easy to achieve in Guix.

We need at least two Guix systems; I'll name the one with the packages the "server" and the system that will install packages the "client".

Prepare the server §

On the server, edit your /etc/config.scm file and add this service:

(service guix-publish-service-type
         (guix-publish-configuration
             (host "0.0.0.0")
             (port 8080)
             (advertise? #t)))

Guix Manual: guix-publish service

Run "guix archive --generate-key" as root to create a public key and then reconfigure the system. Your system is now publishing packages on port 8080 and advertising it with mDNS (involving avahi).

Your port 8080 should be reachable now with a link to a public key.

Prepare the client §

On the client, edit your /etc/config.scm file and modify your services list ("%desktop-services" or "%base-services") with "modify-services", like this:

(modify-services %desktop-services
  (guix-service-type
    config =>
      (guix-configuration
        (inherit config)
        (discover? #t)
        (authorized-keys
          (append (list (local-file "/etc/key.pub"))
                  %default-authorized-guix-keys)))))

Guix Manual: Getting substitutes from other servers

Download the public key from the server (visiting its IP on port 8080 gives you a link) and store it in "/etc/key.pub", then reconfigure your system.
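
For example, assuming the server is 192.168.1.66 and that the key link on its port 8080 page points to /signing-key.pub (check the page to confirm):

# as root on the client
curl -o /etc/key.pub http://192.168.1.66:8080/signing-key.pub
guix system reconfigure /etc/config.scm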

Now, when you install a package, you should see where the substitutes (the Guix name for pre-built packages) are downloaded from.

Declaring a repository (not dynamic) §

In the previous example, we used advertising on the server and discovery on the client; this may not be desired, and it won't work from a different network.

You can manually register a remote substitute server instead of using discovery, by using "substitute-urls" like this:

(modify-services %desktop-services
  (guix-service-type
    config =>
      (guix-configuration
        (inherit config)
        (discover? #f) ; the server is declared below, no mDNS discovery
        (substitute-urls
          (append (list "http://192.168.1.66:8080")
                  %default-substitute-urls))
        (authorized-keys
          (append (list (local-file "/etc/key.pub"))
                  %default-authorized-guix-keys)))))

Conclusion §

I'm doing my best to avoid wasting bandwidth and resources in general, and I really like this feature because it doesn't require much configuration or infrastructure and works in a sort of peer-to-peer fashion.

Other projects like Debian prefer using a proxy that keeps the downloaded packages in cache and acts as a repository itself, proxying the real service.

If you doubt the validity of the substitutes provided by a URL, the challenge feature can be used to check whether reproducible builds done locally match the packages provided by that source.

Guix Manual: guix challenge documentation

Guix Manual: guix weather, a command to get information from a repository

GearBSD: managing your packages on OpenBSD

Written by Solène, on 02 June 2021.
Tags: #rex #openbsd #gearbsd

Comments on Mastodon

Introduction §

I added a new module to GearBSD: it lets you define the exact list of packages you want on the system, and GearBSD will take care of removing extra packages and installing missing ones. This is a huge step for me towards managing the system from code.

Note that this is an improvement over feeding pkg_add a package list, because that method doesn't remove extra packages.

GearBSD packages in action on asciinema

How to use §

In the openbsd/packages/ directory of the GearBSD git repository, edit the Rexfile and list the packages you want in the variable @packages.

This is the packages set I want on my server.

my @packages = qw/
bwm-ng checkrestart colorls curl dkimproxy dovecot dovecot-pigeonhole
duplicity ecl geomyidae git gnupg go-ipfs goaccess kermit lftp mosh
mtr munin-node munin-server ncdu nginx nginx-stream
opensmtpd-filter-spamassassin p5-Mail-SpamAssassin  postgresql-server
prosody redis rss2email rsync
/;

Then, run "rex -h localhost show" to see what changes will be done like which packages will be removed and which packages will be installed.

Run "rex -h localhost configure" to apply the changes for real. I use "rex -h localhost" using a local ssh connection to root but you could run rex as root with doas with the same effect.

How does it work §

Installing missing packages was easy, but removing extra packages was harder, because you could delete packages that are still required as dependencies.

Basically, the module looks at the packages you manually installed (the ones you directly installed with the pkg_add command); if they are not part of the list of packages you want installed, they are marked as automatically installed, and then "pkg_delete -a" removes them if they are not required by any other package.
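
For the curious, the underlying OpenBSD commands look roughly like this ("some_package" being a placeholder name):

pkg_info -m                # list manually installed packages
pkg_add -aa some_package   # re-mark a package as automatically installed
pkg_delete -a              # delete auto-installed packages nothing depends on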

Where GearBSD is going §

This is a project I started yesterday, but one I have long thought about. I really want to be able to manage my OpenBSD system with a single configuration file. I currently wrote two modules that are configured independently; the issue is that this doesn't allow one module to alter another.

For example, if I create a module to install gnome3 and configure it correctly, it will require the gnome3 packages, but if you don't have them in your packages list, they will get deleted. GearBSD needs a single configuration file with all the information required by all modules, which would permit something like this:

$module{pf}{TCPports} = [ 22 ];
$module{gnome}{enable} = 1;
$module{gnome}{lang} = "fr_FR.UTF-8";
@packages = qw/catgirl firefox keepassxc/;

The gnome module will know it's enabled and that @packages has to receive the gnome3 and gnome3-extras packages in order to work.

Such a main configuration file will allow catching incompatibilities, like enabling gdm and xenodm at the same time.

GearBSD: a project to help automating your OpenBSD

Written by Solène, on 01 June 2021.
Tags: #gearbsd #rex #openbsd

Comments on Mastodon

Introduction §

I love NixOS and Guix for their easy system configuration and the ability to jump from one machine to another by reusing your configuration file. To some extent, I want to make this possible on OpenBSD with a collection of parametrized Rex modules, allowing you to configure your system piece by piece from templates that you feed with variables.

Let me introduce you to GearBSD, my project to do so.

GearBSD gitlab page

How to use §

You need to clone https://tildegit.org/solene/gearbsd using git, and you also need to install Rex with "pkg_add p5-Rex".

Use cd to enter a directory like openbsd/pf (the only module at this time), edit the Rexfile to change the variables as you want and run "doas rex configure" to apply.

Video example (asciinema recording)

Example with PF §

The PF module has a few variables: in TCPports and UDPports you can list ports or port ranges that will be allowed; if a list is empty, the "pass" rules for that protocol won't be generated.

If you want to enable NAT on em0 for your wg0 interface, set "nat" to 1, "nat_from_interface" to "wg0" and "nat_to_interface" to "em0", and the code will take care of everything, even enabling the sysctl for IP forwarding.

More work required §

It's only a start but I want to work hard on it to make OpenBSD a more accessible system for everyone, and more pleasant to use.

(R)?ex automation for deploying Matrix synapse on OpenBSD

Written by Solène, on 31 May 2021.
Tags: #rex #matrix #openbsd

Comments on Mastodon

Introduction §

Today I will introduce you to Rex, an automation tool written in Perl and using SSH; it's an alternative to Salt, Ansible or drist.

(R)?ex project website

Setup §

You need to install Rex on the management system; this can be done using cpan or your package manager, on OpenBSD "pkg_add p5-Rex" will install it. You will get an executable script named "rex".

To make things easier, we will use ssh from the management machine (your own computer) to a remote server, using your ssh key to access the root account (escalation with sudo is possible but complicates things).

Get Rex

Simple steps §

Create a text file named "Rexfile" in a directory; it will contain all the instructions and available tasks.

We will declare in it that we want the features up to syntax version 1.4 (the latest at this time, it doesn't change often), that the default user to connect to remote hosts is root, and that our servers group has only one address.

use Rex -feature => ['1.4'];

user "root";
group servers => "myremoteserver.com";

We can go further now.

Rex commands cheat sheet §

Here are some commands; you don't need much more to use Rex.

- rex -T : display the list of tasks defined in Rexfile

- rex -h : display help

- rex -d : when you need some debug

- rex -g : run a task on group

Installing Munin-master §

An example I like is deploying Munin on a computer: it requires a package and a cron entry.

The following task will install a package and add a crontab entry for root.

desc "Munin-cron installation";
task "install_munin_cron", sub {
	pkg "munin-server", ensure => "present";
	
	cron add => "root", {
		ensure => "present",
		command = > "su -s /bin/sh _munin /usr/local/bin/munin-cron",
		on_change => sub {
			say "Munin cron modified";
		}
	};
};

Now, let's say we want to configure this munin cron by providing it a /etc/munin/munin.conf file that we have locally. This can be done by adding the following code:

	file "/etc/munin/munin.conf",
	source => "local_munin.conf",
	owner => "root",
	group => "wheel",
	mode => 644,
	on_change => sub {
		say "munin.conf has been modified";
	};

This will install the local file "local_munin.conf" into "/etc/munin/munin.conf" on the remote host, owned by root:wheel with a chmod 644.

Now you can try "rex -g servers install_munin_cron" to deploy.

Real world tasks §

Configuring PF §

This task deploys a local pf.conf file to /etc/pf.conf and reloads the configuration on changes.

desc "Configuration PF";
task "prepare_pf", sub {

    file "/etc/pf.conf",
    source => "pf.conf",
    owner => "root",
    group => "wheel",
    mode => 400,
    on_change => sub {
        say "pf.conf modified";
        run "Restart pf", command => "pfctl -f /etc/pf.conf";
    };
};

Deploying Matrix Synapse §

A task can call multiple tasks for bigger deployments. In this one, the "synapse_deploy" task runs synapse_install(), then synapse_configure(), synapse_service() and finally prepare_pf() to ensure the firewall rules are correct.

As synapse generates a working config file on first run, there is no reason to push one from the local system.

desc "Deploy synapse";
task "synapse_deploy", sub {
    synapse_install();
    synapse_configure();
    synapse_service();
    prepare_pf();
};

desc "Install synapse";
task "synapse_install", sub {
    pkg "synapse", ensure => "present";
    
    run "Init synapse",
    	command => 'su -s /bin/sh _synapse -c "/usr/local/bin/python3 -m synapse.app.homeserver -c /var/synapse/
    	cwd => "/tmp/",
    	only_if => is_file("/var/synapse/homeserver.yaml");
};

desc "Configure synapse";
task "synapse_configure", sub {
    file "/etc/nginx/sites-enabled/synapse.conf",
    	source => "nginx_synapse.conf",
    	owner => "root",
    	group => "wheel",
    	mode => "444",
    	on_change => sub {
    		service nginx => "reload";
    	};
};

desc "Service for synapse";
task "synapse_service", sub {
    service synapse => "ensure", "started";
};

Going further §

Rex offers many features because the configuration is real Perl code: you can write loops and conditions, and extend Rex by writing local modules.

Instead of pushing a hard-coded local configuration file, I could write a template of the configuration file and let Rex generate the final file on the fly by giving it the needed variables.
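A minimal sketch of that idea, reusing the munin example from earlier; the template path and the dbdir variable are assumptions for illustration, see the Rex template documentation for the exact placeholder syntax:

	# render templates/munin.conf.tpl with the given variables,
	# then install the result on the remote host
	file "/etc/munin/munin.conf",
		content => template("templates/munin.conf.tpl", dbdir => "/var/munin"),
		owner => "root",
		group => "wheel",
		mode => 644;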

Rex has many functions to directly alter text files, like "append_if_no_such_line" to add a line if it doesn't already exist, or to replace/add/update a line matching a regex (which can be handy to uncomment some lines).
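For example, here is a sketch enabling IPv4 forwarding on an OpenBSD host; the file and sysctl are only an illustration:

	# the line is appended only if no identical line is present
	append_if_no_such_line "/etc/sysctl.conf",
		"net.inet.ip.forwarding=1";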

Full list of Rex commands

Rex guides

Rex FAQ

Conclusion §

Rex is a fantastic tool if you want to programmatically configure a system, it can even be used on your local machine to allow reproducible configuration or to keep track of all the changes in one place.

I really like it because it's simple to work with, it's Perl code doing real things, it's easy to hack on (I contributed some changes and the process was easy) and it only requires working ssh access to a server (and Perl on the remote host). While Salt Stack also works agent-less, it's painfully slow compared to Rex.

Kakoune: filetype based on filename

Written by Solène, on 30 May 2021.
Tags: #kakoune #editor

Comments on Mastodon

Introduction §

I will explain how to configure Kakoune to automatically use a filetype (for completion/highlighting...) depending on the filename or its extension.

Setup §

The file we want to change is ~/.config/kak/kakrc; in case of issue you can use ":buffer *debug*" in kakoune to display the debug output.

Filetype based on the filename §

I had a case in which the file doesn't have any extension. This snippet will assign the filetype perl to files named Rexfile.

hook global BufCreate (.*/)?Rexfile %{
	set buffer filetype perl
}

Filetype based on the extension §

While this is pretty similar to the previous example, here we match any file ending in ".gmi" to assign it the markdown filetype (I know Gemini text isn't markdown, but the syntax is quite similar).

hook global BufCreate .*\.gmi %{
	set buffer filetype markdown
}

Using dpb on OpenBSD for package compilation cluster

Written by Solène, on 30 May 2021.
Tags: #openbsd

Comments on Mastodon

Introduction §

Today I will explain how to easily set up your own OpenBSD dpb infrastructure. dpb is a tool to manage port building and can use a chroot to provide a sane environment for building packages.

This is particularly useful when you want to test packages or build your own; it can parallelize package compilation in two ways: multiple packages at once and multiple processes for one package.

dpb man page

proot man page

The dpb and proot executable files are available under the infrastructure/bin directory of the ports tree.

Building your packages provides absolutely NOTHING compared to using binary packages, except wasting CPU time, disk space and bandwidth.

Setup §

You need a ports tree and a partition that you accept to mount with the wxallowed,nosuid,dev options. I use /home/ for that. To simplify the setup, we will create a chroot in /home/build/ and put our ports tree in /home/build/usr/ports (your /usr/ports can then be a symlink).

Create a text file that will be used as a configuration file for proot:

chroot=/home/build
WRKOBJDIR=/tmp/pobj
LOCKDIR=/tmp/locks
PLIST_REPOSITORY=/data/plist
DISTDIR=/data/distfiles
PACKAGE_REPOSITORY=/data/packages
actions=unpopulate
sets=base comp etc xbase xfont xshare xetc xserver

This tells proot to create a chroot in /home/build, preconfigure some variables for /etc/mk.conf, use all the sets listed in "sets" and clean everything when run (this is what actions=unpopulate does). Running proot is as easy as "proot -c proot_config".

Then, you should be able to run "dpb -B /home/build/ some/port" and it will work.

Ease of use §

I wrote a script to clean the locks from dpb and the ports system as well as the pobj directories, while also taking care of adding the mount options.

Options -p and -j tell dpb how many cores can be used for parallel compilation. Note that dpb is smart: if you tell it 3 ports in parallel and 3 threads in parallel, it won't use 3x3; it will compile three ports at a time and, once it's stuck with only one port left, it will add cores to that build to make it faster.

#!/bin/sh

CHROOT=/home/build/
CORES=3

rm -fr ${CHROOT}/usr/ports/logs/amd64/locks/*
rm -fr ${CHROOT}/tmp/locks/*
rm -fr ${CHROOT}/tmp/pobj/*
mount -o dev -u /home
mount -o nosuid -u /home
mount -o wxallowed -u /home
/usr/ports/infrastructure/bin/dpb -B $CHROOT -c -p $CORES -j $CORES "$@"

Then I use "doas ./my_dpb.sh sysutils/p5-Rex lang/guile" to run the build process.

It's important to use -c in the dpb command line: it clears the compilation logs of the packages but retains their sizes, which are used to estimate the progress of further builds by comparing the current log size with previous log sizes.

You can harvest your packages from /home/build/data/packages/. I even use a symlink from /usr/ports/packages/ to the dpb packages directory because sometimes I use make in ports and sometimes I use dpb; this allows recompiling packages in both areas. I do the same for distfiles.

Going further §

dpb can spread the compilation load over remote hosts (or even manage compilation for a different architecture); it's not complicated to set up but it's out of scope for the current guide. It requires setting up ssh keys and NFS shares; the difficulty is to think with the correct paths depending on chroot/not chroot and local/NFS.

I highly recommend reading the dpb man page, as dpb supports many options such as feeding it a list of pkgpaths (a package address such as editors/vim or www/nginx) or building ports in random order.

Here is a simple command to generate a list of pkgpaths of outdated packages on your system compared to the ports tree; the -q parameter makes it a lot quicker but less accurate for shared libraries.

/usr/ports/infrastructure/bin/pkg_outdated -q | awk '/\// { print $1 }'
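That list can then be fed straight back to dpb; here is a small sketch reusing the chroot path from above:

/usr/ports/infrastructure/bin/dpb -B /home/build/ \
    $(/usr/ports/infrastructure/bin/pkg_outdated -q | awk '/\// { print $1 }')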

Conclusion §

I use dpb when I want to update my packages from sources because the binary packages are not yet available, or when I want to build a new package in a clean environment to check for missing dependencies; however I use a simple "make" when I work on a port.

Extend Guix Linux with the nonguix repository

Written by Solène, on 27 May 2021.
Tags: #guix

Comments on Mastodon

Introduction §

Guix is a fully open source Linux distribution approved by the FSF, meaning it's entirely free. However, for many people this means the drivers requiring firmware won't work and their usual software won't be present (for instance Firefox, which isn't considered free because of trademark issues).

A group of people maintains a parallel repository for Guix adding some not-100%-free pieces, like a kernel with firmware loading capability or packages such as Firefox; it can be added to any Guix installation quite easily.

nonguix git repository

Guix project website

Configuration §

Most of the code and instructions you will find here come from the nonguix README. You need to add the new channel to download the packages, or their definitions to build them if they are not available as binary packages (called substitutes) yet.

Create a new file /etc/guix/channels.scm with this content:

(cons* (channel
        (name 'nonguix)
        (url "https://gitlab.com/nonguix/nonguix")
        ;; Enable signature verification:
        (introduction
         (make-channel-introduction
          "897c1a470da759236cc11798f4e0a5f7d4d59fbc"
          (openpgp-fingerprint
           "2A39 3FFF 68F4 EF7A 3D29  12AF 6F51 20A0 22FB B2D5"))))
       %default-channels)

And then run "guix pull" to get the new repository, you have to restart "guix-daemon" using the command "herd restart guix-daemon" to make it accounted.

Deploy a new kernel §

If you use this repository you certainly want the provided kernel that allows loading firmware, along with the firmware itself, so edit your /etc/config.scm:

(use-modules (nongnu packages linux)
             (nongnu system linux-initrd))

(operating-system ;; you should already have this line
  (kernel linux)
  (initrd microcode-initrd)
  (firmware (list linux-firmware))
  ;; ...

Then you use "guix system reconfigure /etc/config.scm" to rebuild the system with the new kernel, you will certainly have to rebuild the kernel but it's not that long. Once it's done, reboot and enjoy.

Installing packages §

You should also have packages available now. You can enable the channel for your user only by modifying ~/.config/guix/channels.scm instead of the system-wide /etc/guix/channels.scm file. Note that you may have to build the packages you want, because the repository doesn't build all the derivations but only a few packages (like firefox, keepassxc and a few others).

Note that Guix provides flatpak in its official repository; this is a workaround for many packages like "desktop apps" for instant messaging or even Firefox, but it doesn't integrate well with the system.

Gaming §

There is also a dedicated gaming channel!

Guix gaming channel

Conclusion §

The nonguix repository is a nice illustration that it's possible to contribute to a project without forking it entirely when you don't fully agree with its ideas. It integrates well with Guix while being totally separate from it, as a side project.

If you have any issues related to this repository, you should seek help from the nonguix project and not Guix because they are not affiliated.

How to use Wireguard VPN on Guix

Written by Solène, on 22 May 2021.
Tags: #guix #vpn

Comments on Mastodon

Introduction §

Today I had to set up a Wireguard tunnel on my Guix computer (my email server is only reachable through Wireguard) and I struggled a bit to understand from the official documentation how to put the pieces together.

In Guix (the operating system, not foreign Guix on an existing distribution) you certainly have a /etc/config.scm file that defines your system. You will have to add the Wireguard configuration to it after generating a private/public key pair for Wireguard.

Guix project website

Guix Wireguard VPN documentation

Key generation §

In order to generate the Wireguard keys, install the wireguard package with "guix install wireguard".

# umask 077 # this is so to make files only readable by root
# install -d -o root -g root -m 700 /etc/wireguard
# wg genkey > /etc/wireguard/private.key
# wg pubkey < /etc/wireguard/private.key > /etc/wireguard/public

Configuration §

Edit your /etc/config.scm file; in your "(services)" definition, you will define your VPN service. In this example, my Wireguard server is hosted at 192.168.10.120 on port 4433, my system has the IP address 192.168.5.1, and I also define the peer's public key; my private key is automatically picked up from /etc/wireguard/private.key.

(services (append (list
      (service wireguard-service-type
             (wireguard-configuration
              (addresses '("192.168.5.1/24"))
              (peers
               (list
                (wireguard-peer
                 (name "myserver")
                 (endpoint "192.168.10.120:4433")
                 (public-key "z+SCmAMgNNvkeaD0nfBu4fCrhk8FaNCa1/HnnbD21wE=")
                 (allowed-ips '("192.168.5.0/24"))))))))
      %desktop-services))

If you have the default "(services %desktop-services)" you need to use "(append ...)" to merge %desktop-services with the new services, all defined in a "(list ...)" form.

The "allowed-ips" field is important, Guix will automatically make routes to these networks through the Wireguard interface, if you want to route everything then use "0.0.0.0/0" (you will require a NAT on the other side) and Guix will make the required work to pass all your traffic through the VPN.

At the top of the config.scm file, you must add "vpn" in the services modules, like this:

# I added vpn to the list
(use-service-modules vpn desktop networking ssh xorg)

Once you made the changes, use "guix system reconfigure" to apply them. If you run multiple reconfigures, it seems Wireguard doesn't reload correctly; you may have to use "herd restart wireguard-wg0" to properly load the new settings (this looks like a bug).

Conclusion §

As usual, setting up Wireguard is easy, but the functional way makes it a bit different. It took me some time to figure out where I had to define the Wireguard service in the configuration file.

Backup software: borg vs restic

Written by Solène, on 21 May 2021.
Tags: #backup #openbsd #unix

Comments on Mastodon

Introduction §

Backups are important: a lot of our lives is now digital data and it's important to take care of it, because computers are unreliable, can be stolen, and mistakes happen. I really like two programs, restic and borg; they have nearly the same features but it's hard to decide between the two, so this is an attempt at understanding the differences for my use case.

Restic §

Restic is a backup program written in Go with a "push" workflow; it supports encryption, data deduplication within a repository, and multiple systems sharing the same repository.

Restic can backup to a remote sftp server but also to many network storage services like S3/Minio, and even more when used with rclone (which can turn any backend rclone supports into a compatible restic backend). Restic seems to work on Windows (I didn't try).

restic website

Borg §

Borg is a backup program written in Python with a "push" workflow; it supports encryption, data deduplication within a repository and compression. You can backup to a remote server over ssh, but the remote server needs borg installed.

It's a very good and reliable backup program. It has a companion app named "borgmatic" to automate the backup process and snapshot management (daily/hourly/monthly retention and integrity checking).

*BSD specific note: borg can honor the "nodump" flag in the filesystem to skip saving those files.

borgbackup website

borgmatic website

Experiment §

I've been making backups of my /home/ partition (minus some directories that have been excluded in both cases) using borg and restic. I always performed the restic backup first and then the borg backup, measuring bandwidth and execution time for each.

There are five steps: an init step for the first backup of a lot of data; two "little changes" steps, which basically consist of opening firefox, browsing a few pages, closing it, refreshing my emails in claws-mail (this changes a lot of small files) and using the computer for an hour; a massive change as the fourth step, where I unzipped a few game installers I had found, producing lots of small files instead of one big file; and finally 24h of normal use between the fourth and last step, which is a good representation of a daily backup.
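For reference, the kind of commands involved looks like this; the repository locations are hypothetical:

# restic: initialize a repository on a remote sftp server, then backup
restic -r sftp:backup@server:/backups/laptop init
restic -r sftp:backup@server:/backups/laptop backup /home/solene

# borg: the remote server must have borg installed
borg init --encryption=repokey backup@server:/backups/laptop
borg create backup@server:/backups/laptop::{now} /home/solene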

Data §

                                restic    borg
Data transmitted (MB)
---------------------
Backup 1 (init)                  62860   53730
Backup 2 (little changes)           15      26
Backup 3 (little changes)          168     171
Backup 4 (massive changes)        4820    3910
Backup 5 (typical day of use)       66      44

Local cache size (MB)
---------------------
Backup 1 (init)                    161      45
Backup 2 (little changes)          163      45
Backup 3 (little changes)          207      46
Backup 4 (massive changes)         211      47
Backup 5 (typical day of use)      216      47

Backup time (seconds)
---------------------
Backup 1 (init)                   2139    2999
Backup 2 (little changes)           38     131
Backup 3 (little changes)           43     114
Backup 4 (massive changes)         201     355
Backup 5 (typical day of use)       50     110

Repository size (GB)                65      56

Analysis §

Borg was a lot slower than restic, but in my experiment the remote ssh server is a dual core Atom system, and borg runs a process on the remote end to manage the data, so maybe that CPU was slowing down the backup process. Nevertheless, in my real use case, borg is effectively slower.

Most of the time, borg was more bandwidth efficient than restic: it saved 15% of bandwidth on the first backup and 18% after some big changes, but in some cases it used a bit more bandwidth. I have no explanation for this; I guess it depends on how file chunks are calculated: if a big database file changes, one tool may be able to send only the difference and not the whole file. Borg also compresses the data (using lz4 by default), which may explain the bandwidth saving, except on binary data where compression doesn't help.

The local cache (typically in /root/.cache/) was a lot bigger for restic than for borg, and it increased slightly at each new backup while the borg cache never changed much.

Finally, the whole repository holding all the snapshots has a different size for restic and borg, respectively 65 GB and 56 GB, a 14% difference which may be due to the compression done by borg.

Other backup software §

I tested restic and borg because they are both good programs using the "push" workflow (the local computer sends the data) and making full snapshots at every backup, but there are many other backup solutions available.

- duplicity: fully scriptable, works over many remote protocols, but requires a full snapshot followed by incremental snapshots; when you need to make a new full snapshot, it takes a lot of space, which is not always convenient. Supports GPG encrypted backups stored over FTP, which is useful for some dedicated servers offering 100GB of free FTP.

- burp: not very well known, the setup uses TLS certificates for encryption, requires a burp server and a burp client

- rsnapshot: based on rsync, automates the rotation of backups, uses hard links to avoid duplicating files that didn't change between two backups; it pulls data from servers to a central backup system.

- backuppc: a Perl app that pulls data from servers to its repository, not really easy to use

- bacula: an enterprise grade solution that I never got to work because it's really complicated, but it can support many things, even saving to tapes

Conclusion §

In this benchmark, borg is clearly slower but was the most storage and bandwidth efficient. On the other hand, restic is easier to deploy (static binary) and a simple sftp server is enough for it, while borg requires borg installed on both sides.

The biggest difference between restic and borg is that restic supports backing up multiple systems to the same repository, allowing massive data deduplication gains across machines, while a borg repository is for a single system (it could work with multiple systems, but they should not backup at the same time and they would have to rebuild the local cache every time, which is slow).

I'll stick with borg because the backup time isn't a real issue given it's not dramatically slower than restic, and because I really enjoy using borgmatic to automatically manage the backups.

For backups to a remote server over the Internet, bandwidth efficiency would be my main concern among all the differences; borg seems a clear winner here.

How to setup wireguard on NixOS

Written by Solène, on 18 May 2021.
Tags: #nixos #network

Comments on Mastodon

Introduction §

Today I will share my simple wireguard setup using NixOS as a wireguard server. The official documentation is actually very good but it didn't really fit my use case: I have a server with multiple services, some of which must only be reachable through wireguard, but I don't want to open all ports to wireguard either.

As a quick introduction to Wireguard: it's a UDP based VPN protocol with the specificity that it's stateless, meaning it doesn't use any bandwidth when not in use and doesn't rely on your IP either. If you switch from one IP to another to connect to the other wireguard peer, it will be seamless as far as wireguard is concerned.

NixOS wireguard documentation

Wireguard setup §

The setup is actually easy if you use the wireguard tools to generate the keys. You can use "nix-shell -p wireguard" to get the "wg" command and run the following:

umask 077 # this is so to make files only readable by root
wg genkey > /root/wg-private
wg pubkey < /root/wg-private > /root/wg-public

Congratulations, you generated a wireguard private key in /root/wg-private and a wireguard public key in /root/wg-public. As usual, you can share the public key with other peers, but the private key must be kept secret on this machine.

Now edit your /etc/nixos/configuration.nix file; we will create a network 192.168.100.0/24 in which the wireguard server will be 192.168.100.1 and a laptop peer will be 192.168.100.2, with 5553 as the wireguard UDP port.

networking.wireguard.interfaces = {
      wg0 = {
              ips = [ "192.168.100.1/24" ];
              listenPort = 5553;
              privateKeyFile = "/root/wg-private";
              peers = [
              { # laptop
               publicKey = "uPfe4VBmYjnKaaqdDT1A2PMFldUQUreqGz6v2VWjwXA=";
               allowedIPs = [ "192.168.100.2/32" ];
              }];
      };
};

Firewall configuration §

Now you will also want to enable your firewall and open UDP port 5553 on your ethernet device (eth0 here). On the wireguard tunnel, we will only allow TCP port 993.

networking.firewall.enable = true;

networking.firewall.interfaces.eth0.allowedTCPPorts = [ 22 25 465 587 ];
networking.firewall.interfaces.eth0.allowedUDPPorts = [ 5553 ];

networking.firewall.interfaces.wg0.allowedTCPPorts = [ 993 ];

Defining firewall rules specifically for eth0 is only useful if you don't want to allow the same ports on wireguard; otherwise you could define the ports globally (plus some extra ports specific to wg0), or declare the whole wg0 interface as trusted (no firewall applied to it).
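For the "wg0 entirely trusted" variant, the standard NixOS option is a one-liner:

# skip all filtering for packets arriving on the wireguard interface
networking.firewall.trustedInterfaces = [ "wg0" ];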

Building §

When you are done with the changes, run "nixos-rebuild switch" to apply them; you will see a new network interface wg0.

Conclusion §

I obviously stripped down my real world use case, but if for some reason you want a wireguard tunnel with stricter rules than what's exposed on the public network interfaces, this is how you do it.

How to switch to NixOS development version

Written by Solène, on 17 May 2021.
Tags: #nixos

Comments on Mastodon

This short guide will explain how to switch a NixOS installation to the unstable channel, that is, the development version.

nix-channel --add https://channels.nixos.org/nixos-unstable nixos

You will have to reload the channel list using the command "nix-channel --update" and then you can upgrade your system using "nixos-rebuild switch".

If you have issues, you can rollback using "nix-channel --rollback", which will set the channel list back to its state before "--update".
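Put together, the whole procedure looks like this:

nix-channel --add https://channels.nixos.org/nixos-unstable nixos
nix-channel --update
nixos-rebuild switch
# and if something goes wrong:
nix-channel --rollback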

Nix channels wiki page

Nix-channel man page

Turn your Xorg in black and white

Written by Solène, on 15 May 2021.
Tags: #unix

Comments on Mastodon

Introduction §

If for some reason you want to switch your display to black and white mode and you can't control this on the display itself (typically a laptop display won't let you change this), there are solutions.

Compositor way §

The best way I found is to use a compositor. Fortunately I'm already using "picom" as a compositor along with fvwm2, because I found that windows get drawn faster when I switch between desktops with the compositor on. You will want to run the compositor in your ~/.xsession file before running your window manager.

The idea is to run picom with a shader that turns the colors into a gray scale; restart picom with no parameter if you want your colors back.

picom -b --backend glx --glx-fshader-win  "uniform sampler2D tex; uniform float opacity; void main() { vec4 c = texture2D(tex, gl_TexCoord[0].xy); float y = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722)); gl_FragColor = opacity*vec4(y, y, y, c.a); }"

It was surprisingly complicated to find how to do that. I stumbled on the "toggle-monitor-grayscale" project on github, which is a long script automating this depending on your graphics card; I only took the part I needed for picom.

toggle-monitor-grayscale project on Github

Conclusion §

I have no idea why someone would want to turn their screen black and white, but I was curious to see what it would look like and whether it would be nicer on the eyes. It's an interesting experience I have to admit, but I prefer to keep my colors.

Why do I write this blog?

Written by Solène, on 14 May 2021.
Tags: #blog

Comments on Mastodon

Why do I write this blog? §

I decided to have a blog when I started gathering personal notes while playing with FreeBSD. I wanted my notes to be easy to read and understand, and I chose to publish them online so I could read them even at work.

The earlier articles were more about how to do X or Y; they were reminders for myself that I shared with the world, and I never expected readers at that time. I enjoyed writing and sharing, I had a few friends who were happy to subscribe to the RSS feed, and they would proof-read after my publications.

Over time, I wanted to make it a place to speak about unusual topics like StumpWM, Common LISP, Guix and weird Unix tricks. It made me very happy because I got feedback from more people over time, so I kept going.

At some point, I got a lot more involved in the OpenBSD community and I think most of my audience is related to OpenBSD now. I want to share what you can do with OpenBSD and how it differs from other systems, with step-by-step guides. I hope it helped some people jump to OpenBSD and that they enjoy it as well now. At the same time, I try to be as honest as possible when I publish about something: this blog makes absolutely no money, there are no ads, so I would have absolutely nothing to gain from being dishonest in my articles. I value precision and accuracy, and I try to link to official documentation most of the time instead of doing a copy/paste that will become obsolete over time.

Speaking of obsolescence, I usually re-read all my texts (and it takes a long time) once a year, to check that everything still seems correct. I may find packages that no longer exist, configuration syntax that has changed, or just a software version that is really old. This takes a lot of time because I value all my publications, not only the most recent ones.

I write because I have fun writing and I'm happy to make my readers happy. I often get emails from people I don't know giving me their thoughts about an article; I'm always surprised but very happy when this happens, and I always reply to those people.

I have no schedule when I write; sometimes I plan texts but I can't get them right, so I delete them. Sometimes months pass between two publications. I do not really care, I'm not targeting any publication rate, that would be against the fun.

Why not you? §

This may sound odd, but I wanted to write this text mainly to encourage other people to write and publish their own blog. Why not you? On the technical side, there are many free hosting options available in the opensource community, and you have plenty of awesome static website generators available nowadays.

If you want to start the adventure, just write and publish. Offer a way to contact you; I think it's important for readers to be able to reach you, and they are very nice (at least I never had any issue): they may report mistakes or give you links to things you could enjoy on the same topic as your publication.

Don't think about money, styling, hit rate or visitor numbers, it doesn't matter. The true gems of the Internet are those old-fashioned websites of the early 2000s with many ugly jpgs and wrong colors, but with insane content about unusual and highly specific topics. I have in mind the example of a website about a French movie: the author had found every spot in France where the movie was filmed, had contacted every cast member, even the most insignificant ones, to ask for stories, and had gathered many pictures and anecdotes about the making of the film. None of this would ever happen in a web driven by money, ranking and visitors.

Simple solution VS over-engineering

Written by Solène, on 13 May 2021.
Tags: #software #opensource

Comments on Mastodon

Introduction §

I wanted to share my thoughts about software in general. I've been using and writing software for a long time and I've seen some patterns over time.

Simple solutions §

I am a true adept of the "KISS" philosophy, in which KISS stands for Keep It Simple Stupid, meaning: make your software easy to understand and don't try to make it smart. It works most of the time, but after you reach your goal with your software, you may be tempted to add features on top, or make it faster, or make it smarter; it usually doesn't work.

Over-engineering §

In the opensource world, we have many bricks of software that we can put together to build better tools, but at some point you may use too many of them and the service becomes unbearable in terms of maintenance and operation. The current trend is to automate this by providing those huge stacks of software through docker. It may be good enough for users, it certainly does the job and it works, so why should we worry?

Failure and reversibility §

When you use a complicated piece of software, ALWAYS make sure you have a way out: either replacing product A with product B, or making sure the code is easy to fix. If you plan to invest yourself in deploying a complex program that will store data (like Nextcloud or Paperless-ng), the first question you should ask is: how can I move away from it?

Why would you move away from something you are deploying right now because it's good? Software can become unmaintained after some time, and you certainly don't want to run an obsolete network-facing program. Due to dependency hell, it may not work in the future because it relies on some component that is no longer available (think python2 here), or you may hit bugs after long use that nobody wants to fix and that prevent you from using the software correctly (scalability issues due to data growth).

There are tons of reasons that something can fail, so it's always important to think about replacements.

- is the data stored in a way you can extract? Data could be saved as plain files on the file system, but it could also be stored in some complicated repository format (ipfs)

- if data is encrypted, can you decrypt it? If it's GPG based, you can always work with it, but if it's custom per-chunk encryption like Seafile does, it's a lot harder without the original program.

- if the software is packaged for your system, it may not be forever; you may have to package it yourself in a few years if you want to keep it up to date

- if you rely on an external API, it may not be available indefinitely. Web browser extensions are a good example: browsers have tightened what extensions can do over time, and many tricks had to be used to migrate from API to API. When you rely on an extension, it's a real issue when the extension can't work anymore.

Build your own replacement? §

There are many situations in which you may prefer to build your own service with your own code rather than using software ready off the shelf. There are always pros and cons: you gain control and reliability but lose features and ease of use. Not everyone is able to write such scripts, and you may fail and have to deal with the consequences when you do so; this is something to keep in mind.

- backups: you could use rsync instead of a complex backup system

- "cloud" file storage: rsync/sftp are still a viable option to upload a file "to the cloud" if you have a server, a simple https server would be enough to share the file, the checksum of the file could be used as an unique and very long file name.

- automation: a shell script executed over ssh could replace ansible or salt-stack to some extent
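Here is a minimal sketch of the checksum naming idea from the file storage item above, assuming an OpenBSD machine serving /var/www/htdocs over https; the host and paths are hypothetical:

# name the uploaded file after its SHA-256 digest, then share the URL
sum=$(sha256 -q report.pdf)
scp report.pdf myserver:/var/www/htdocs/files/$sum
echo "https://example.com/files/$sum"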

There are many use cases in which an administrator may prefer a home-made solution, but in a company context you may end up relying on that very person instead of relying on a complex piece of software, which moves the problem to another level.

Conclusion §

There are many reasons a piece of software could fail, be abandoned or stop working; you should always assess such situations if you don't want to build a fragile service. The simplest solutions have fewer features but are a lot more reliable and resistant to time than complex implementations. The more code you involve, the more issues you will have.

We are free to use what we want, and in open source we are even free to make changes to the code we use; this is fantastic. Choices always come with pros and cons, and it's always better to think beforehand than to face unwise consequences.

Introduction to git-annex (Port Of The Week)

Written by Solène, on 12 May 2021.
Tags: #git #openbsd

Comments on Mastodon

Introduction §

Now that git-annex is available as a package on OpenBSD, I can use it again. I relied on it a few years ago, but it was really complicated for me to compile and I gave up. Since I really missed it, I'm now back to it, and I think it's time to share about this wonderful piece of software.

git-annex is meant to help you manage your data like you would manage books in a library: you have a database telling you where the books are, and you can find them on the shelves, or at least you can know who borrowed each book. We are working with digital files that can be copied, so the analogy doesn't fully work, but you may want to put some of your data on an external hard drive (not all of it), and you may want some data on multiple devices for safety reasons; git-annex automates this.

It works very well for files that don't change much, which I call "static files": music, videos, pictures, documents. You don't really want to use git-annex with files you edit every day; it doesn't work well because the process can be a bit tedious.

git-annex may not be easy to understand at first; I suggest you try it locally to grasp its purpose.

git-annex official website

what git-annex is not

Cheat sheet §

Let's create a cheat sheet first. Most git-annex commands have a dedicated man page, but they can also provide simpler help via "git annex help somecommand".

Create the repository §

The first step is to create a repository, which is based on git; then we tell git-annex to init it too.

mkdir ~/MyDataLibrary && cd ~/MyDataLibrary
git init
git annex init "my-computer"

Add a file §

When you want to register a file in git-annex, you need to use "git annex add" and then "git commit" to make it permanent. The file contents are not stored in the git repository; it only contains metadata.

git annex add Something
git commit -m "I added something"

Example:

$ echo "hello there" > hello
$ ls -l hello
-rw-r--r--  1 solene  wheel  12 May 12 18:38 hello
$ git annex add hello
add hello
ok
(recording state in git...)
$ ls -l hello
lrwxr-xr-x  1 solene  wheel  180 May 12 18:38 hello -> .git/annex/objects/qj/g5/SHA256E-s12--aadc1955c030f723e9d89ed9d486b4eef5b0d1c6945be0dd6b7b340d42928ec9/SHA256E-s12--aadc1955c030f723e9d89ed9d486b4eef5b0d1c6945be0dd6b7b340d42928ec9
$  git status hello
On branch master
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   hello

Make changes to a file §

If you want to make changes to a file, you first need to "unlock" it in git-annex, which means the symbolic link is replaced by the file itself and it's no longer read-only. Then, after your changes, you need to add it to git-annex again and commit your changes.

git annex unlock file
vi file
git annex add file
git commit -m "I changed something" file

Add a remote encrypted repository §

If you want to store data (for duplication) on a remote server using ssh, you can use a remote of type "rsync" and encrypt the data in several fashions (GPG with hybrid being the best). This allows you to store data on remote untrusted devices.

git annex initremote my-remote-server type=rsync rsyncurl=remote-server.com:/home/solene/git-annex-data keyid=my-gpg@address encryption=hybrid

After this command, I can send files to my-remote-server.

git-annex website about encryption

git-annex website about special remotes

Manage data from multiple computers (with ssh) §

**This is a way to have a central git repository for many computers, this is not the best way to store data on remote servers**.

If you want to use a remote server through ssh, there are two ways: mounting the remote file system using sshfs, or using plain ssh. If you use sshfs, it behaves like a standard local file system, such as an external usb drive, but if you go through ssh, it's different.

You need key-based authentication for the remote ssh, and you also need git-annex on the remote server. It's important to use a bare git repo.

cd /home/data/
git init --bare
git annex init "remote-server"

On your computer:

git remote add remote-server ssh://hostname/home/data/
git fetch remote-server

You will now be able to use the repository related commands!
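For example, to exchange the git-annex metadata with that remote (add --content if you also want to transfer the file contents):

git annex sync remote-server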

List files and where they are stored §

You can use the "git annex list" command to list where your files are physically stored.

In the following example you can see which files are on my computer and which are available on my remote server called "network"; "web" and "bittorrent" are special remotes.

here
|network
||web
|||bittorrent
||||
X___ Documentation/Nim/Dominik Picheta - Nim in Action-Manning Publications (2017).pdf
X___ Documentation/ada/Ada-Distilled-24-January-2011-Ada-2005-Version.pdf
X___ Documentation/ada/courseada1.pdf
X___ Documentation/ada/courseada2.pdf
X___ Documentation/ada/courseada3.pdf
X___ Documentation/scheme/artanis.pdf
X___ Documentation/scheme/guix.pdf
X___ Documentation/scheme/manual_guix.pdf
X___ Documentation/skribilo/skribilo.pdf
X___ Documentation/uck2ep1.pdf
X___ Documentation/uck2ep2.pdf
X___ Documentation/usingckermit3e.pdf
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/01 - Daftendirekt.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/02 - Wdpk 83.7 fm.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/03 - Revolution 909.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/04 - Da Funk.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/05 - Phoenix.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/01 - Alan Walker - Intro.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/02 - Alan Walker, Sorana - Lost Control.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/03 - Alan Walker, Julie Bergan - I Don_t Wanna Go.flac

List files locally available §

If you want to list the files whose content is available locally, you can use the "list" command from git-annex restricted to "here", which represents your local repository.

git annex list --in here

Work with a remote repository §

Copy files to a remote §

If you want to duplicate files between repositories to have multiple copies, you can use "git annex copy".

git annex copy Music -t remote-server

Move files to a remote §

If you want to move files from one repository to another (removing the content from the origin), you can use "git annex move", which copies to the destination and removes from the origin.

git annex move Music -t remote-server

Get a file content §

If you don't have a file locally, you can fetch it from a remote to get the content.

git annex get Music/Queen

Forget a file locally §

If you don't want a file locally, because you lack disk space or simply don't want it, you can use the "drop" command. Note that "drop" is safe because git-annex won't let you drop files that have only one copy (unless you use --force, of course).

git annex drop Music/Queen

Real life example: I have a huge music library but my laptop SSD is too small, so I get the music I want and drop the files I don't want to listen to for a while.

Use mincopies to enforce multi repository data duplication §

The numcopies and mincopies settings tell git-annex you want exactly or at least "n" copies of your files, so it will be able to protect you from accidental deletions and also help uploading files to other repositories to match the requirements.

Enable per directory recursively §

echo "* annex.mincopies=2" > .gitattributes

Only upload files not matching the num copies §

If you have multiple repositories and some files don't match the copies requirements, you can use the following command to only push the files missing copies.

git annex copy --auto -t remote-server

Real life example: I want my salary PDFs to be really safe, so I can ask for 2 copies of those and then run a sync to the remote server, which will upload them if there is only one copy of a file so far.

Verifying integrity and requirements §

The git-annex fsck command checks the integrity of every file in the local repository and reports whether they are sane (or not), but it will also tell you which files don't meet the mincopies requirement.

git annex fsck

Reversibility §

If for some reason you want to give up git-annex, you can easily get all your files back as a normal file system by using "git annex unlock ." at the top directory of your repository: every locally available file will be replaced by its content instead of a symlink. Reversibility is very important when you deal with your data, because it means you are not stuck forever with a tool if it breaks or if you want to switch to another process.

My workflow §

I have a ~/DATA/ directory containing the sub-directories {documents,documentation,pictures,videos,music,images}; documents are personal or legal papers, documentation is mostly PDFs. Pictures are family pictures, and images are wallpapers or stupid images I want to keep.

I've set mincopies to 2 for documents and pictures, and my music is not on my computer but on a remote; I get the music files I want to listen to when I'm on the local network with the computer holding the files, and I drop them locally when I'm bored of them.

Conclusion §

git-annex separates content from indexation; it can be used in many ways but it implies an archivist philosophy: redundancy, safety, immutability (sort of). It is not meant for backup: you can backup your directory managed by git-annex, but that only saves the data you have locally, and you will have to backup your other data as well.

I love that tool, it's a very nice piece of software. It's unique; I didn't find any other program achieving this.

More resources §

git-annex official walkthrough

git-annex special remotes (S3, webdav, bittorrent etc..)

git-annex encryption

Introduction to security good practices

Written by Solène, on 09 May 2021.
Tags: #security

Comments on Mastodon

Introduction §

I wanted to share my thoughts about security in regards to computers. Let's try to summarize them as a list of rules.

If you read it and you disagree, please let me know, I can be wrong.

Good practices §

Here is a list of good practices I've found over time.

Passwords policy §

Passwords are a mess: we need many of them every day, but they are not practical. I highly recommend using a unique random password for every place a password is needed. I switched to "keepassxc" to manage my passwords; there are many password managers on the market.

When I need to register a password, I use the longest one allowed and I keep it in my password database.

If my password database gets compromised, all my passwords are leaked; but if I didn't use one and had a single password everywhere, there's a good chance it would already be registered somewhere, and the hacker would have access to everything too. The best situation would be to have a really effective memory, but I don't want to rely on it.

I still recommend keeping a few passwords in your memory, like the one for your backups, your user session, and the one unlocking the password database.

When possible, use multi factor authentication. I like the TOTP (Time-based One-Time Password) method because it works without any third party service, and the secret can be stored securely in a backup.

Devices trust §

It's important to define a level of trust for the devices you use. I do not trust my Windows gaming computer; I would not let it have access to my password database. I do not trust my phone enough for that job either.

If my phone requires a password, I generate one, keep it in my password database, and create a QR code to scan with the phone instead of copying that very long password. The phone ends up with that one password locally but not the entire database, and it remains quite usable.

Define your threat model §

When you think about security, you need to think about what kind of security you want; sometimes this will also imply thinking about privacy.

Let's think about my home file server: it's a small device with only one disk and no access to the internet. It could be hacked remotely, which is possible but very unlikely. On the other hand, a thief could come into my house and steal a few things, like this server and its data. It makes a lot of sense to use disk encryption for devices that could be stolen (to make it short, that means all devices).

On the other hand, if I had to manage a mail server with IMAP / SMTP services on it, I would harden it a lot from external attacks and I would have to make some extra security policies for it.

Think about usability §

Most of the time, security and usability don't play well together: if you increase security, it will be at the expense of usability, and vice-versa. Back to my IMAP server: I could enforce connecting over TLS for my users, which would prevent their connections from being eavesdropped. I could also enforce a VPN (one I manage myself, not a commercial VPN that can see all my traffic...) to connect to the IMAP server, which would prevent anyone without the VPN from connecting to the server. I could also restrict that VPN connection to a list of public IPs. I could require the VPN access from an allowed IP to be unlocked by an SSH connection requiring TOTP + password + public key to succeed.

At this point, I'm pretty sure my users would give up and set up an automatic redirection of their emails to another mail server that is usable to them; I'd be defeated by my own users because of too much security.

Don't lock yourself out §

When you get to encrypting everything or locking everything down on the network, it can be complicated to avoid data loss or being locked out of a service.

If you have important passwords, you could use Shamir's Secret Sharing (I wrote about it a while back) to split a password into multiple pieces that you convert to QR codes and give to a few people you know, to help you recover the data if you ever forget that password.

Backups §

It's important to make backups, but it's even more important to encrypt them and store them away from your main storage. My practice here is to backup all my computer data daily (which is quite huge), but also to backup only my most important data to remote servers. I can afford losing my music files, but I'd prefer to be able to recover my GPG and SSH keys in case of a huge disaster at home.

User management §

If a hacker gets control of your user account, it may be over for you. It's important to only run programs you trust and to avoid running network-facing services as your user.

If you need to run something you are unsure about, use a virtual machine or at least a dedicated user that won't have access to your own user's data. My $HOME has a chmod 700 so only root and me can access it. If I need to run a service, I use a dedicated user for it. It's not always convenient but it's effective.
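On OpenBSD, a minimal sketch of the dedicated user approach; the user name and program are examples:

# create an unprivileged user with its own home directory
doas useradd -m untrusted
# run the program as that user instead of yours
doas -u untrusted /usr/local/bin/someprogram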

Conclusion §

Good software with a good design is important for security, but it doesn't do the whole job. Users must be aware of the risks and act accordingly.

How to run a NixOS VM as an OpenBSD guest

Written by Solène, on 08 May 2021.
Tags: #openbsd #nixos

Comments on Mastodon

Introduction §

This guide is to help people install the NixOS Linux distribution as a virtual machine guest hosted on the OpenBSD VMM hypervisor.

Preparation §

Some operations are required on the host, but specific instructions will be needed on the guest as well.

Create the disk §

We will create a qcow2 disk; this format doesn't allocate all the reserved space upon creation, its size grows as the virtual disk fills up with data.

vmctl create -s 20G nixos.qcow2

Configure vmd §

We have to configure the hypervisor to run the VM. I've chosen to define a new MAC address for the VM interface to avoid a collision with the host MAC.

vm "nixos" {
       memory 2G
       disk "/home/virt/nixos.qcow2"
       cdrom "/home/virt/latest-nixos-minimal-x86_64-linux.iso"
       interface { lladdr "aa:bb:cc:dd:ee:ff"  switch "uplink" }
       owner solene
       disable
}

switch "uplink" {
	interface bridge0
}

vm.conf man page

Configure network §

We need to create a bridge to which I add my computer's network interface "em0". Virtual machines will be attached to this bridge and will be seen on the network.

echo "add em0" > /etc/hostname.bridge0
sh /etc/netstart bridge0

Start vmd §

We want to enable and then start vmd to use the virtual machine.

rcctl enable vmd
rcctl start vmd

NixOS and serial console §

When you are ready to start the VM, type "vmctl start -c nixos"; you will automatically be attached to the serial console. Be sure to read this whole chapter first, because you will have a time frame of approximately 10 seconds before it boots automatically (if you don't type anything).

If you see the grub display with letters shown more than once, this is perfectly fine. We have to tell the kernel to enable console output at the desired speed.

On the first grub choice, press "tab" and append this text to the command line: "console=ttyS0,115200" (without the quotes). Press Enter to validate and boot; you should see the boot sequence.

For me it hung a long time on starting sshd; keep waiting, it will continue after a few minutes at most.

Installation §

There is an excellent installation guide for NixOS in their official documentation.

Official installation guide

I had issues with DHCP, so I set up the network manually; my network is 192.168.1.0/24 and my router 192.168.1.254 also offers DNS.

systemctl stop NetworkManager
ifconfig enp0s2 192.168.1.151/24 up
route add -net default gw 192.168.1.254
echo "nameserver 192.168.1.254" >> /etc/resolv.conf

The installation process can be summarized with these instructions:

sudo -i
parted /dev/vda -- mklabel msdos
parted /dev/vda -- mkpart primary 1MiB -1GiB # use every space for root except 1 GB for swap
parted /dev/vda -- mkpart primary linux-swap -1GiB 100%
mkfs.xfs -L nixos /dev/vda1
mkswap -L swap /dev/vda2
mount /dev/disk/by-label/nixos /mnt
swapon /dev/vda2
nixos-generate-config --root /mnt
nano /mnt/etc/nixos/configuration.nix
nixos-install
shutdown now

Here is the configuration.nix file of my VM guest; it's the most basic I could want, and I stripped all the comments from the base example generated before the install.

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  networking.hostName = "my-little-vm"; # Define your hostname.
  networking.useDHCP = false;

  # COMMENT THIS LINE IF YOU DON'T WANT DHCP
  # networking.interfaces.enp0s2.useDHCP = true;


  # BEGIN ADDITION
  # all of these variables were added or uncommented
  boot.loader.grub.device = "/dev/vda";

  # required for serial console to work!
  boot.kernelParams = [
    "console=ttyS0,115200n8"
  ];

  # use what you want
  time.timeZone = "Europe/Paris";

  # BEGIN NETWORK
  # define network here
  networking.interfaces.enp0s2.ipv4.addresses = [ {
        address = "192.168.1.151";
        prefixLength = 24;
  } ];
  networking.defaultGateway = "192.168.1.254";
  networking.nameservers = [ "192.168.1.254" ];
  # END NETWORK

  # disable X server, we don't need it
  services.xserver.enable = false;

  # enable SSH and allow X11 Forwarding to work
  services.openssh.enable = true;
  services.openssh.forwardX11 = true;

  # Declare a user that can use sudo
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "wheel" ]; # Enable ‘sudo’ for the user.
  };

  # declare the list of packages you want installed globally
  environment.systemPackages = with pkgs; [
     wget vim
  ];

  # firewall configuration, only allow inbound TCP 22
  networking.firewall.allowedTCPPorts = [ 22 ];
  networking.firewall.enable = true;
  # END ADDITION

  # DONT TOUCH THIS EVER EVEN WHEN UPGRADING
  system.stateVersion = "20.09"; # Did you read the comment?

}

Edit /etc/vm.conf to comment out the cdrom line and reload the vmd service. If you want the virtual machine to start automatically with vmd, you can remove the "disable" keyword.

Once your virtual machine is started again with "vmctl start nixos", you should be able to connect to it over ssh. If you forgot to add users, you will have to access the VM console with "vmctl console", log in as root, modify the configuration file, type "nixos-rebuild switch" to apply the changes, and then "passwd user" to define the user's password. You can set a public key when declaring a user if you prefer (I recommend it).

Install packages §

There are three ways to install packages on NixOS: globally, per-user or for a single run.

- globally: edit /etc/nixos/configuration.nix, add your package names to the variable "environment.systemPackages" and then rebuild the system

- per-user: type "nix-env -i nixos.firefox" to install Firefox for that user

- for single run: type "nix-shell -p firefox" to create a shell with Firefox available in it

Note that "for a single run" doesn't mean the package disappears afterwards; it's just no longer "hooked" into your PATH, so you can't use it. This is mostly useful when you develop and need specific libraries to build a project without wanting them always available for your user.

Conclusion §

While I had never used a Linux system as a guest in OpenBSD, it may be useful to run Linux specific software occasionally. With X forwarding, you can run Linux GUI programs that you couldn't run on OpenBSD; even if it's not really smooth, it may be enough for some situations.

I chose NixOS because it's a Linux distribution I like, and it's quite easy to use in the sense that it has only one configuration file to manage the whole system.

How to install Gnome on OpenBSD

Written by Solène, on 07 May 2021.
Tags: #openbsd #unix #gnome

Comments on Mastodon

Introduction §

This article will explain how to install the Gnome desktop on OpenBSD. You need access to the root user to proceed.

Instructions §

As root, run "pkg_add gnome gnome-extras" which will install the meta-package gnome listing all the required dependencies to have a full working Gnome installation and the -extras package containing all gnome related programs.

You should see this output after "pkg_add" has finished installing the packages; it's important to read the "pkg-readme" files, which are instructions specific to packages.

New and changed readme(s):
        /usr/local/share/doc/pkg-readmes/gnome
        /usr/local/share/doc/pkg-readmes/upower

The most important file is the pkg-readme about Gnome, which contains clear instructions about the configuration required to run Gnome. That file has a "Too long didn't read" section at the end for people in a hurry, with instructions to copy/paste.
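
From memory, the copy/paste section boils down to a few rcctl commands like the following sketch, but check the actual pkg-readme on your system because the exact list of services can change between releases:

# as root, switch the display manager and enable Gnome's requirements
rcctl disable xenodm
rcctl enable multicast messagebus avahi_daemon gdm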

Tweaks §

There is an "app" named Tweaks that allow further customization than Gnome3 is allowing, like virtual desktop being horizontal, add menus on the top panel or change various behavior of Gnome.

Conclusion §

While the Gnome installation is not fully automated, it requires only a few instructions to get it installed and fully operational.

Gnome3 after the first start wizard

Gnome3 desktop with a few customizations

Synchronization files software

Written by Solène, on 04 May 2021.
Tags: #unix

Comments on Mastodon

Introduction §

In this article I will introduce you to various opensource file synchronization programs and their corresponding workflows. I may not know them all, obviously.

I can't give a full explanation of each of them, but I will tell you enough so you can know if it could be of any interest to you.

Software §

There are many programs out there, with pros and cons, to match our file synchronization requirements.

rsync §

rsync is the leader for simple file replication; it makes sure the destination exactly matches the source data. It's available almost everywhere, and using ssh as a transport, it's also secure.

rsync is really the reference for one-way synchronization.
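
A minimal one-way replication over ssh could look like this (the host and paths are examples); the --delete flag makes the destination strictly match the source:

rsync -avz --delete ~/documents/ user@backup.example.com:backups/documents/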

rsync website

lsyncd §

lsyncd is meant to be used in environments requiring near-realtime synchronization. It will watch for changes in the monitored directories and replicate the changes on a remote system (using rsync by default).

lsyncd website

unison §

unison is like rsync but can synchronize both ways, meaning you can keep two directories synchronized without having to think about which direction to transfer. Obviously, in case of conflict you will have to resolve it and pick which file you want to keep. This is a well established software that is very reliable.
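
A two-way synchronization between a local directory and a remote one over ssh could look like this (the paths and host are examples):

unison ~/documents ssh://user@example.com//home/user/documents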

unison website

rclone §

rclone is like rsync but supports many backends instead of relying on ssh to connect to a remote source. It's mostly used to transfer files from or to cloud services, acting as a glue between the rclone core and the service's API.

I covered rclone in a previous article if you want more information.

rclone website

syncthing §

syncthing is a fantastic tool to keep directories synchronized between computers/phones. It's a service you run on which you define the directories you want to export; on other syncthing instances you can add those exports and they will be kept synchronized without further tuning. It uses a public tracker to find peers so you don't have to mess with NAT or redirections, and if you want full privacy you can use direct IPs. Data is encrypted during transfers.

It has the advantage of working in a fully automatic mode and can exchange both ways within a same directory shared across multiple instances; it can also keep previous copies of deleted / replaced files, and it supports many other features.

syncthing website

sparkleshare §

SparkleShare isn't well known but still does the job very efficiently. It offers automatic synchronization of a directory with other peers based on a git repository: basically, if you add a file or make a change, it's committed and pushed to the remote repositories; if someone makes a change, you will receive it too.

While it works very well, it's mostly suited to non-binary data because of the git backend. You can't really delete old data, so the sparkleshare share will grow over time.

SparkleShare website

nextcloud §

Nextcloud has a file synchronization capability; it's mostly used to upload your data to a remote server and be able to access it remotely, but also to share a file or a directory, read-only or read/write, with other people. It's really a huge toolbox that requires a 24/7 server but provides many features for sharing files. A not so well known feature is the ability to share a directory between Nextcloud instances.

Nextcloud has its core in PHP for web access, but there are also phone and desktop applications.

Nextcloud can encrypt stored data.

Nextcloud website

seafile §

Seafile is a centralized server to store data, like Nextcloud. It's more focused on file storage than Nextcloud, but provides solid features and also companion apps for phones and desktops.

seafile website

git-annex §

I kept the best for the end. Git-annex is a special beast that would deserve a full article of its own, but I never found how to approach it.

git-annex is a command line tool to manage a library of data, delegating the actual transfers to the appropriate protocol.

WHAT DOES IT MEAN? Let's try an analogy.

You are in a house, and you have many things in it: movies, music, books, papers. If you want to keep track of where something is stored, you need an inventory, in which you label where you stored this paper, this DVD, this book, etc... This is what git-annex is doing.

git-annex will let you entirely manage your data, spread it across different locations (with possible redundancy) and access it natively (or at least tell you where to get it). A real life example would be to use an external hard drive to store big files like music or movies, but use a remote server to back up important documents. You may want your documents to also be on the external hard drive, or even on two hard drives: you can tell git-annex to manage that.

git-annex can give you the current state of your library without having the files locally: it replaces the whole hierarchy with symlinks to the real files when they are on your computer, meaning you can get the files when you need them, or simply work on that index to remove files and then tell git-annex to proceed to deletion when it can (like when you get internet access or you connect that external hard drive).

The drawback is that all the tracked files are symbolic links to potentially non-existing files, and that you need a specific workflow of unlocking files in order to make changes and then store them again.

I've been using it for years for data that doesn't change much (administrative documents, music, pictures) but it's certainly not suitable for tracking logs or often modified files.

The name contains "git" but git-annex only use gits to store the whole metadata, the data themselves are not in git.

git-annex website

Conclusion §

There are different strategies to synchronize files between computers: they can be one-way, two-way, allow other people to use the files, work at huge scale, in realtime, etc...

From my experience, we all manage our files in very different ways so I'm glad we have that many ways to synchronize them.

PS: don't forget to back up; it's not because you replicate your data that you don't need backups, sometimes it's easy to destroy all the copies at once with a simple mistake.

OpenBSD: getting started

Written by Solène, on 03 May 2021.
Tags: #openbsd

Comments on Mastodon

Introduction §

This is a guide for OpenBSD beginners; I hope it will turn out to be a useful resource helping people get acquainted with this operating system I love. I will use a lot of links because I prefer to refer to the official documentation.

If you are new to OpenBSD, welcome aboard, this guide is for you. If you are not new, well, you may still learn a few things.

Installation step §

This article is not about installing OpenBSD. There is enough official documentation for this.

OpenBSD FAQ about Installation

Booting the first time §

So, you installed OpenBSD, you chose to enable X (the graphical interface at boot) and now you face a terminal on a gray background. Things are getting interesting here.

Become super user (root) §

You will often have to use the root account for commands or modifying system files.

su -l

You will have to type the root user's password (defined at install time) to change to that user. If you type "whoami" you should see "root" as the output.

You got a mail! §

When you install the system (or upgrade it) you will receive an email for the root user; you can read it using the "mail" command. It will be an email from Theo de Raadt (founder of OpenBSD) greeting you.

You will notice this email contains hints and has basically the same purpose as the article you are currently reading. One important man page to read is afterboot(8).

afterboot(8) man page

What is a man page? §

If you don't know what a man page is, it's really time to learn because you will need it. When someone says "man page", it means "a manual page". Documentation in OpenBSD comes as manual pages related to various programs, concepts or C functions.

To read a man page, in a terminal type "man afterboot" and use the arrows or page up/down to navigate within the man page. You can read the "man man" page to learn about man itself.

Previously I wrote "afterboot(8)" but the real man page name is "afterboot"; the "(8)" specifies the man page section. Some words can be used in various contexts, and that's where man page sections come into play. For instance, sysctl(2) documents the system call "sysctl()" while sysctl(8) gives you information about the sysctl command used to change kernel settings. You can specify which section you want to read by typing the number before the page name, like in "man 2 sysctl" or "man 8 sysctl".

Man pages are structured in the same order: NAME, SYNOPSIS, DESCRIPTION... and SEE ALSO. The "SEE ALSO" section is an important one: it gives you references to other man pages you may want to read. For example, afterboot(8) will point you to doas(1), pkg_add(1), hier(7) and many other pages.

Now, you should be able to use the manual pages.

Install a desktop environment §

When you want to install a desktop environment, there will often be a "meta package" which will pull in every package required for the environment to work.

OpenBSD provides a few desktop environments like:

- Gnome 3 => pkg_add gnome

- Xfce => pkg_add xfce

- MATE => pkg_add mate

When you install a package using "pkg_add", you may find a message at the end of the pkg_add output telling you there is a file in /usr/local/share/doc/pkg-readmes/ to read. Those files are specific to packages and contain instructions that should be read before using a package.

The instructions can be about performance, potential limits, configuration snippets, how to init the service, etc... They are very important to read, and for desktop environments, they will tell you everything you need to know to get started.

Graphical session §

When you log in from the xenodm screen (the one with a puffer fish and the OpenBSD logo asking for login/password), the program xenodm will read your ~/.xsession file; this is where you prepare your desktop and execute commands. Usually, the first blocking command (the one that keeps running in the foreground) is your window manager, and you can put commands before it to customize your system or run programs in the background.

# disable bell
xset b off

# auto blank after 10 minutes
xset s 600 600

# run xclock and xload
xclock -geometry 75x75-70-0 -padding 1 &
xload -nolabel -update 5 -geometry 75x75-145-0 & 

# load my ~/.profile file to define ENV
. ~/.profile

# display notifications
dunst &

# load changes in X settings
xrdb -merge ~/.Xresources

# turn the screen reddish to reduce blue color
sct 5600

# synchronize copy buffers
autocutsel &

# kdeconnect to control android phone
kdeconnect-indicator &

# reduce sound to not destroy my ears
sndioctl -f snd/1 output.level=0.3 

# compositor for faster windows drawing
picom &

# something for my mouse setup (I can't remember)
xset mouse 1 1
xinput set-prop 8 273 1.1

# run my window manager
fvwm2

Configure your shell §

This is a very recurrent question: how do you get your shell aliases to work once you have logged in? In bash, sh and ksh (and maybe other shells), every time you spawn a new interactive shell (in which you can enter commands), the environment variable ENV is read, and if its value matches a file path, that file is loaded.

The way to get your beloved shell environment set is the following:

- ~/.xsession will source ~/.profile when starting X, making its content inherited by everything run from X

- ~/.profile will export ENV like in "export ENV=~/.myshellfile"
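
A minimal sketch of this setup (the file name ~/.myshellfile is just an example):

# in ~/.profile
export ENV=~/.myshellfile

# in ~/.myshellfile: aliases and functions for every new interactive shell
alias ll='ls -l'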

CPU frequency auto scaling §

If you run a regular computer (amd64 arch) you will want to run the service "apmd" in automatic mode: it keeps your CPU at the lowest frequency and increases the frequency when there is some load, which reduces heat, power usage and noise.

Here are commands to run as root:

rcctl enable apmd
rcctl set apmd flags -A
rcctl start apmd

What are -release and -stable? §

To make things simple, the "-release" version is the whole set of files to install OpenBSD as of that release when it's out. Further updates for that release form the -stable branch: if you run "pkg_add -u" to update your packages and "syspatch" to update your base system, you automatically follow -stable (which is fine!). A release is a single point in time of the state of OpenBSD.

Quick FAQ §

Where is steam? §

No Steam, it's proprietary and can't run on OpenBSD.

Where is wine? §

No wine, it would require changes into the kernel.

Does my recent NVIDIA card work? §

There is no NVIDIA driver. The card would work with the VESA driver, but it will be sluggish and very slow.

Does the linux emulation work? §

There is no linux emulation.

I want my favorite program to run on OpenBSD §

If it's not opensource and doesn't use a language like Java or C# that runs on a language virtual machine providing an abstraction layer, it won't work (and most programs are not like that).

If it's opensource, it may be possible if all its dependencies are available on OpenBSD.

Get into the ports tree to make things run on OpenBSD

Can I have sudo? §

OpenBSD ships a sudo alternative named "doas" in the base system but sudo can be installed from packages.

doas man page

doas.conf man page

How to view the package list? §

You can check the package directory on a mirror or visit:

Openports.pl (using the development version of the ports tree)

What can the virtualization tool do? §

The virtualization system of OpenBSD can run OpenBSD or some Linux distributions, but without a graphical interface and with only 1 CPU. This means you will have to configure a serial console to proceed with the installation and then use ssh or the serial console to use your system.

There is qemu in ports but it's not accelerated and won't suit most people's needs because it's terribly slow.

OpenBSD 6.9 packages using IPFS

Written by Solène, on 01 May 2021.
Tags: #openbsd #ipfs

Comments on Mastodon

Update 15/07/2021 §

I disabled the IPFS service because it was barely used and drew too much CPU on my server. It was a nice experiment, thank you very much for the support and suggestions.

Introduction §

OpenBSD 6.9 has been released and I decided to extend my IPFS experiment to the latest release. This means you can now fetch packages and base sets for 6.9 amd64 over IPFS.

If you don't know what IPFS is, I recommend reading my previous articles about it.

Note that it also works for -current / amd64, the server automatically checks for new updates of 6.9 and -current every 8 hours.

Benefits §

The benefit is to play with IPFS and understand how it works with a real world use case. Instead of using mirrors to distribute packages, my server provides the packages, and everyone downloading them can also participate in providing data to other IPFS clients. This can be seen as a dynamic BitTorrent CDN (Content Delivery Network): instead of making a torrent per file, it's automatic. You certainly wouldn't download each package as a separate torrent, nor would you download all the packages in a single torrent.

This could reduce the need for mirrors and potentially give faster package access to people who are far from a mirror, if many people close to them use IPFS and have downloaded the data. This is a great technology that can only be beneficial once it reaches a critical mass of adopters.

Installing IPFS on OpenBSD §

To make it brief, there are instructions in the provided pkg-readme, but I will give a few pieces of advice (that I may add to the pkg-readme later).

pkg_add go-ipfs
su -l -s /bin/sh _go-ipfs -c "IPFS_PATH=/var/go-ipfs /usr/local/bin/ipfs init"
rcctl enable go_ipfs

# recommended settings
rcctl set go_ipfs flags --routing=dhtclient --enable-namesys-pubsub

cat <<EOF >> /etc/login.conf
go_ipfs:\
	:openfiles=2048:\
	:tc=daemon:
EOF

rcctl start go_ipfs

Put this in /etc/installurl:

http://k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns.localhost:8080/pub/OpenBSD

Conclusion §

Now, pkg_add will automatically download the packages over IPFS; the more people use it, the faster and more resilient it will be compared to my server alone distributing the packages.

Have fun and enjoy 6.9 !

If you are worried about security: the packages distributed are the same as the ones on the mirrors, and pkg_add automatically checks the signature of the files against the signify keys available in /etc/signify/, so if pkg_add works, the packages are legitimate.

Use Libreoffice Calc to make 3D models

Written by Solène, on 27 April 2021.
Tags: #fun

Comments on Mastodon

Introduction §

Today I will share with you a simple python script turning a 2D picture defined by numbers and colors in a spreadsheet into a 3D model in OpenSCAD.

Project webpage

How to install §

Short instructions on how to install sheetstruder; I will send some documentation upstream. You need git and python, and later you will need openscad and a spreadsheet tool.

git clone https://git.hackers.town/seachaint/sheetstruder.git
cd sheetstruder
python3 -m venv sandbox
. sandbox/bin/activate
python3 -m pip install -r requirements.txt

You will need to be in this shell (at least the activate command is required) to make it work.

How to use §

Open a spreadsheet tool that is able to export to the xlsx format, type a number to create a solid object of this width (1 = 1 pixel, 2 = 3 pixels because it's mirrored) and put a background color in your cell. Save your file as xlsx.

Run "python3 ./sheetstruder.py yourfile.xlsx > file.scad" and open the file in OpenSCAD, enjoy!

Examples §

I made a simple house with grass around it, an antenna, a chimney with smoke, a door and a window.

House in Libreoffice Calc

House rendered in OpenSCAD from the sheetstruder export

More resources §

OpenSCAD website

Port of the week: pup

Written by Solène, on 22 April 2021.
Tags: #internet

Comments on Mastodon

Introduction §

Today I will introduce you to the utility "pup", which provides CSS selector filtering for HTML documents. It is a perfect companion to curl to fetch only specific data from an HTML page.

On OpenBSD you can install it with `pkg_add pup` and check its documentation at /usr/local/share/doc/pup/README.md

pup official project

Examples §

pup is quite easy to use once you understand the filters. Let's see a few examples to illustrate practical uses.

Fetch my blog titles list to a JSON format §

The following command will return a JSON structure with an array of data from "a" tags found within "h4" tags.

curl https://dataswamp.org/~solene/index.html | pup "h4 a json{}"

The output (only an extract here) looks like this:

[
 {
  "href": "2021-04-18-ipfs-bandwidth-mgmt.html",
  "tag": "a",
  "text": "Bandwidth management in go-IPFS"
 },
 {
  "href": "2021-04-17-ipfs-openbsd.html",
  "tag": "a",
  "text": "Introduction to IPFS"
 },
 [truncated]
 {
  "href": "2016-05-02-3.html",
  "tag": "a",
  "text": "How to add a route through a specific interface on FreeBSD 10"
 }
]

Fetch OpenBSD -current specific changes §

The page https://www.openbsd.org/faq/current.html contains specific instructions required for people using OpenBSD -current, and you may want to be notified of changes. Using pup it's easy to make a script comparing the new data against the previous fetch to see what has been appended.

curl https://www.openbsd.org/faq/current.html | pup "h3 json{}"

Output sample as JSON, perfect for further processing with a scripting language.

[
 {
  "id": "r20201107",
  "tag": "h3",
  "text": "2020/11/07 - iked.conf \u0026#34;to dynamic\u0026#34;"
 },
 {
  "id": "r20210312",
  "tag": "h3",
  "text": "2021/03/12 - IPv6 privacy addresses renamed to temporary addresses"
 },
 {
  "id": "r20210329",
  "tag": "h3",
  "text": "2021/03/29 - [packages] yubiserve replaced with yubikeyedup"
 }
]
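
Building on this, a small script could show what changed since the last run; a sketch, with example file paths:

#!/bin/sh
new=/tmp/current-new.json
old=/tmp/current-old.json
curl -s https://www.openbsd.org/faq/current.html | pup "h3 json{}" > "$new"
# show the differences against the previous fetch, if any
[ -f "$old" ] && diff "$old" "$new"
mv "$new" "$old"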

I provide a RSS feed for that

Conclusion §

There are many possibilities with pup and I won't list them all. I highly recommend reading the README.md file from the project: it's the documentation and it explains the filtering syntax.

Bandwidth management in go-IPFS

Written by Solène, on 18 April 2021.
Tags: #ipfs

Comments on Mastodon

Introduction §

In this article I will explain a few important parameters of the reference IPFS node server, go-ipfs, in order to manage bandwidth correctly for your usage.

Configuration File §

The configuration file of go-ipfs is $HOME/.ipfs/config by default, but if IPFS_PATH is set it will be $IPFS_PATH/config.

Tweaks §

There are many tweaks possible in the configuration file, but each one has pros and cons, so I can't tell you which values you want. I will rather explain what you can change and in which situation you would want to.

Connections number §

By default, go-ipfs keeps between 600 and 900 connections to peers, and new connections last at least 20 seconds. Having to manage that quantity of TCP sessions may totally overwhelm your router.

The HighWater value defines the maximum number of sessions you want to exist, so it may be the most important setting here. On the other hand, the LowWater value defines the number of connections you want to keep at all times, so a high value will constantly drain bandwidth.

I would say that if you care about your bandwidth usage, keep LowWater low, like 50, and have HighWater quite high with a short GracePeriod; this will allow go-ipfs to be quiet when unused but responsive (able to connect to many peers to find content) when you need it.
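
For example, applying this advice with the ipfs command line (the numbers are only a starting point, tune them for your connection):

ipfs config --json Swarm.ConnMgr.LowWater 50
ipfs config --json Swarm.ConnMgr.HighWater 600
ipfs config Swarm.ConnMgr.GracePeriod 15s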

Documentation about Swarm.ConnMgr

DHT Routing §

IPFS uses a distributed hash table to find peers (it's the common way to proceed in P2P networks), but your node can either act as a client and only fetch the DHT from other peers, or be active and distribute it to other peers.

If you have a low power server (CPU) and limited bandwidth, you should use the value "dhtclient" to not distribute the DHT. You can configure this in the configuration file or use --routing=dhtclient on the command line.

Documentation about Routing.type

Reprovider §

Strategy §

This may be the most important choice you have to make for your IPFS node. With the Reprovider.Strategy setting you can choose to be part of the IPFS network and upload data you have locally, only upload data you pinned or upload nothing.

If you want to actively contribute to the network and you have enough bandwidth, keep the default "all" value, so all the data available in your data store will be served to clients over IPFS.

If you self-host data on your IPFS node but you don't have much bandwidth, I would recommend setting this value to "pinned" so only the data pinned in your IPFS store will be available. Remember that pinned data is never removed from the store by the garbage collector, and files you add to IPFS from the command line or the web GUI are automatically pinned; the pinned data is usually the data we care about and want to keep and/or distribute.

Finally, you can set it to empty and your IPFS node will never upload any data to anyone, which could be considered unfair in a peer to peer network, but with a quota-limited or high-latency connection it makes sense to not upload anything.

Documentation about Reprovider.Strategy

Interval §

While you can choose what kind of data your node relays as part of the IPFS network, you can also choose how often your node publishes the list of data held in its data store.

The default is 12 hours, meaning every 12 hours your node publishes the list of everything available for upload to the other peers. If you care about bandwidth and your content doesn't change often, you can increase this value; on the other hand, you may want to publish more often if your data store changes rapidly.

If you don't want to publish your content, you can set it to "0"; you would then still be able to publish manually using the IPFS command line.
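
For example, with the command line (values matching the advice above):

ipfs config Reprovider.Strategy pinned
ipfs config Reprovider.Interval 24h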

Documentation about Reprovider.Interval

Gateway management §

If you run a public gateway, you may not want everyone to use it to download arbitrary IPFS content, because of legal concerns, resource limits, or simply because you don't want that.

You can set Gateway.NoFetch to make your gateway only distribute files available in the node's data store. This means it will act as an http·s server for your own data, but the gateway can't be used to fetch any other data. It's a convenient way to publish content over IPFS and make it available from a gateway you trust while keeping control over the data relayed.
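
This is a boolean that can be toggled from the command line:

ipfs config --json Gateway.NoFetch true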

Documentation about Gateway.NoFetch

Conclusion §

There are many settings here for various use cases. I'm running an IPFS node on a dedicated server but also another one at home, and they have very different configurations.

My home connection is limited to 900 kb/s, which makes IPFS very unfriendly to my ISP router and bandwidth usage.

Unfortunately, go-ipfs doesn't provide an easy way to set download and upload limits, which would be very useful.

Introduction to IPFS

Written by Solène, on 17 April 2021.
Tags: #openbsd #ipfs

Comments on Mastodon

Introduction §

IPFS is a distributed storage network protocol that comes with a public network. Anyone can run a peer to access content from IPFS, and then relay that content while it's in their cache.

Gateways are websites allowing access to IPFS content through http; there are several public gateways you can use to get data from IPFS without being a peer.

Every published piece of content has a unique CID to identify it; we usually prefix it with /ipfs/, like in /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1. The CID is unique, and if someone adds the same file from another peer, they will get the same hash as you.

If you add a whole directory to IPFS, the top directory hash depends on the hash of its content; this means that if you want to share a directory like a blog, you will need to publish a new CID every time the content changes. As this is not practical at all, there is an alternative making the process more dynamic.

A peer can publish data under a long name called an IPNS. The IPNS string never changes (it's tied to a private key), but you can associate a CID to it, update the value when you want, and then tell other peers the value changed (this is called publishing). The IPNS notation looks like /ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns, and you can access IPNS content through public gateways with a different notation.

- IPNS gateway use example: https://k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns.dweb.link/

- IPFS gateway use example: https://ipfs.io/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1/

The IPFS link will ALWAYS return the same content because it's a defined hash of a specific resource. The IPNS link can be updated to point to a newer CID over time, allowing people to bookmark the location and browse it later for updates.

Using a public gateway §

There are many public gateways you can use to fetch content.

Health check of public gateways, useful to pick one

You will find two kinds of gateway urls: one like "https://$domain/" and another like "https://$something_very_long.ipfs.$domain/". For the first kind, you need to append your /ipfs/something or /ipns/something request, like in the previous examples. The latter only works with ipns in web browsers, because browsers think the CID is a domain, change the case of the letters, and the CID is then no longer valid. When using an ipns url like this, be careful to change the .ipfs. into .ipns. in the url to tell the gateway what kind of request you are doing.

Using your own node §

First, be aware that there is no real bandwidth control mechanism and that IPFS is known to create more connections than small routers can handle. On OpenBSD it's possible to mitigate this behavior using queuing. It's also possible to use a "lowpower" profile that is less demanding on network and resources, but be aware this degrades IPFS performance. I found that after a few hours of bootstrapping and reaching many peers, the bandwidth usage becomes less significant, but it may be an issue for DSL connections like mine.

When you create your own node, you can use its gateway or the command line client. When you request data that doesn't belong to your node, it is downloaded from known peers able to distribute the blocks, and then kept in cache until your cache reaches the defined limit and the garbage collector comes to make some room. This means that when you fetch content, you start distributing it, but nobody will use your node for content you never fetched first.

When you have data, you can "pin" it so it will never be removed from the cache; if you pin a directory CID, the content will be downloaded so you have a whole mirror of it. When you add data to your node, it's automatically pinned by default.

The default ports are 4001 (the one you need to expose over the internet and potentially forwarding if you are behind a NAT), the Web GUI is available at http://localhost:5001/ and the gateway is available at http://localhost:8080/

Installing the node on OpenBSD §

To make it brief, there are instructions in the provided pkg-readme, but I will give a few pieces of advice (that I may add to the pkg-readme later).

pkg_add go-ipfs
su -l -s /bin/sh _go-ipfs -c "IPFS_PATH=/var/go-ipfs /usr/local/bin/ipfs init"
rcctl enable go_ipfs

# recommended settings
rcctl set go_ipfs flags --routing=dhtclient --enable-namesys-pubsub

cat <<EOF >> /etc/login.conf
go_ipfs:\
	:openfiles=2048:\
	:tc=daemon:
EOF
rcctl start go_ipfs

You can change the profile to lowpower with "env IPFS_PATH=/var/go-ipfs/ ipfs config profile apply lowpower", you can also list profiles with the ipfs command.

I recommend using queues in PF to limit the bandwidth usage; for my DSL connection I've set a maximum of 450K and it doesn't disrupt my network anymore. I explained how to proceed with queuing and bandwidth limitations in a previous article.
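
As a hedged sketch (the interface name and the total bandwidth are examples; refer to pf.conf(5) and the previous article for details), the pf.conf part could look like this:

# cap IPFS swarm traffic (TCP port 4001) to 450K
queue main on em0 bandwidth 10M
queue std parent main bandwidth 9M default
queue ipfs parent main bandwidth 450K max 450K
match out on em0 proto tcp to port 4001 set queue ipfs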

Installing the node on NixOS §

Installing IPFS is easy on NixOS thanks to its declarative configuration. The system has a local IPv4 of 192.168.1.150 and a public IP of 136.214.64.44 (a fake IP here). It is started with a 50GB cache maximum. The gateway will be available on the local network at http://192.168.1.150:8080/.

services.ipfs.enable = true;
services.ipfs.enableGC = true;
services.ipfs.gatewayAddress = "/ip4/192.168.1.150/tcp/8080";
services.ipfs.extraFlags = [ "--enable-namesys-pubsub" ];
services.ipfs.extraConfig = {
    Datastore = { StorageMax = "50GB"; };
    Routing = { Type = "dhtclient"; };
};
services.ipfs.swarmAddress = [
        "/ip4/0.0.0.0/tcp/4001"
        "/ip4/136.214.64.44/tcp/4001"
        "/ip4/136.214.64.44/udp/4001/quic"
        "/ip4/0.0.0.0/udp/4001/quic"
];

Testing your gateway §

Let's say your gateway is http://localhost:8080/ to make the following examples simpler. If you want to request the data /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1, you just have to append it to your gateway url, like this: http://localhost:8080/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1, and you will get access to your file.

When using ipns, it's quite the same: for /ipns/blog.perso.pw/ you can request http://localhost:8080/ipns/blog.perso.pw/ and then browse my blog.

OpenBSD experiment §

To make all of this really useful, I started an experiment: distributing OpenBSD amd64 -current and 6.9, both with sets and packages, over IPFS. Basically, I have a server making an rsync of both sets once a day, adding them to the local IPFS node, getting the CID of the top directory and then publishing the CID under an IPNS. Note that I have to create an index.html file in the packages sets because IPFS doesn't handle directory listing very well.

The following examples will have to be changed if you don't use a local gateway, replace localhost:8080 by your favorite IPFS gateway.

You can upgrade your packages with this command:

env PKG_PATH=http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/snapshots/packages/amd64/ pkg_add -Dsnap -u

You can switch to latest snapshot:

sysupgrade -s http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/

While it may be slow to update at first, if you have many systems, running a local gateway used by all your computers will give you a cache of downloaded packages, making the whole process faster.

I made a "versions.txt" file in the top directory of the repository; it contains the date and CID of every publication. This can be used to fetch a package from an older set if it's still available on the network (because I don't plan to keep all sets, I have limited disk space).

You can simply use the url http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/ in the file /etc/installurl to globally use IPFS for pkg_add or sysupgrade without specifying the url every time.

Using DNS §

It's possible to use a DNS entry to associate an IPFS resource to a domain name by using dnslink. The entry would look like:

_dnslink.blog	IN	TXT	"dnslink=/ipfs/somehashhere"

Using an /ipfs/ syntax will be faster to resolve for IPFS nodes but you will need to update your DNS every time you update your content over IPFS.

To avoid manipulating your DNS every so often (you could use an API to automate this by the way), you can use an /ipns/ record.

_dnslink.blog	IN	TXT	"dnslink=/ipns/something"

This way, I made my blog available under the hostname blog.perso.pw, but it has no A or CNAME record, so it works only in an IPFS context (like a web browser with the IPFS companion extension). Using a public gateway, the url becomes https://ipfs.io/ipns/blog.perso.pw/ and it will download the latest CID associated with blog.perso.pw.

Conclusion §

IPFS is a wonderful piece of technology, but in practice it's quite slow for DSL users and may not work well if you don't have a local cache. I really love it though, so I will continue running the OpenBSD experiment.

Please write to me if you have any feedback or if you use my OpenBSD IPFS repository. I would be interested to hear about people's experiences.

Interesting IPFS resources §

dweb-primer tutorials for IPFS (very well written)

Official IPFS documentation

IPFS companion for Firefox and Chrom·ium·e

Pinata.cloud is offering IPFS hosting (up to 1 GB for free) for pinned content

Wikipedia over IPFS

OpenBSD website/faq over IPFS (maintained by solene@)

Port of the week: musikcube

Written by Solène, on 15 April 2021.
Tags: #portoftheweek

Comments on Mastodon

Introduction §

Today I will share about the console oriented audio player "musikcube", because I really like it. It has many features while being easy to use for a console player. The feature that really sold me on it is the library management and the rating system, allowing me to rate my files and filter by score. The library is nice to browse, it's easy to filter by pattern and the whole UI is easy to use.

Unfortunately it doesn't come with a man page, so you have to check the key bindings by typing "?" in it, or look at the key bindings menu in the main menu.

Official user guide

Official project website

The package is not yet available on OpenBSD but should arrive after 6.9 release (so it will be in 7.0 release).

Picture of Musikcube playing music from a directory mode display

A terminal client §

Musikcube is a console client, meaning you start it in a terminal. You can easily switch between menus with Tab, Shift+Tab, Enter and the keyboard arrows, but you should also check the key bindings for full controls. Note that the mouse is supported!

Once you have told musikcube where to look for files, you will have access to your library. Using numbers from 1 to 6, you can choose how you want the library filtered; 6 will ask which criteria to use, and choosing "directory" will display the file hierarchy, which is sometimes nicer for badly tagged music files.

You can access the whole track list using "t" and then filter by pattern or sort the list using "Ctrl + s".

A server §

When run as musikcube, a daemon is also started, accepting incoming connections on TCP ports 7905 and 7906 for remote API control and transcoding/streaming. This behavior can be disabled in the main menu under the "server setup" choice.

Running the musikcubed binary instead starts no UI, only a background daemon listening on those ports.

Android companion app §

Musikcube has a companion app for Android named musikdroid, but it is only available for download as a file on the github project.

The app has multiple features: it can control the musikcube server playing music on the remote system, but you can also use it to stream music to your Android device. The songs playing on the musikcube server and on the Android device can be different. Even better, songs played on the Android device are automatically stored for offline use (you can tune the cache), and the server can even transcode files to smaller versions for the device.

Look for a .apk file in the assets list of the releases

Easy text transmission from computer to smartphone

Written by Solène, on 25 March 2021.
Tags: #opensource

Comments on Mastodon

Introduction §

Today I will share with you a simple way I found to transmit text from my computer to my phone. I often have to do it: to type a password, enter an url, copy/paste a message, or for whatever other reason.

Using QR codes §

The best way to get text from a computer to a smartphone (that I am aware of) is scanning a QR code using the camera. By using the commands qrencode (I already wrote about this one), xclip and feh (a picture viewer), it is possible to generate QR codes on the fly on the screen.

It is as simple as running the following command, from a menu or a key binding:

xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z - 

Using this command, xclip gives the clipboard content to qrencode, which creates a PNG file on stdout, and then feh displays it in a 600 by 600 window; no temporary file is involved here.

Once the picture is displayed on the screen, you can use a scanner program on your phone to gather the content. I found "QR & Barcode Scanner" to be really light, fast and usable, with a history; it's available on F-Droid.

QR & Barcode Scanner on F-Droid

Composing a quite long text on your computer and sharing it to the phone can be done by sending the text to xclip and then generating the QR code.
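
For example, a text prepared in a terminal can be pushed to the selection and displayed the same way (the text is a placeholder):

echo "some long text to transmit to the phone" | xclip
xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z -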

Going further §

When it comes to sharing data between my phone and my computer, I love "primitive ftpd", an SFTP/FTP server for Android; it works out of the box and allows secure transfers over Wifi (use SFTP please!).

primitive ftpd on F-Droid

For simple transfers, I use "Share to Computer", which shares a file or a group of files as a zip on a temporary http server; it is then easy to connect to it and save the files.

Share to Computer on F-Droid

For sending SMS through my phone from my computer, I use the program KDE Connect (it has to be installed on both the phone and the computer). I have wanted to write about it for a long time, but it's not easy to explain how to get it to work nor how it's used. It allows me to receive phone notifications on my computer and also to send SMS. I have simple aliases in my shell like "mom-sms hello are you ?" to ease my use of SMS. When possible, don't use SMS, it's not secure. The program does a lot more than sending SMS, like using the smartphone as a remote touchpad, as one example.

KDE Connect on F-Droid

Opensource from an author point of view

Written by Solène, on 23 March 2021.
Tags: #opensource

Comments on Mastodon

Hi, today's article will be a bit different from what you are used to. I am writing about my experience as an open source author and "project manager". I recently created a project that, while being extremely small, has seen some people getting involved at various levels. I didn't know what it was like to be in this position.

Having to deal with multiple people contributing to a project I started for myself, on one architecture, with a limited set of features, is surprisingly hard. I don't say it's boring and that no one should ever do it, but I think I wasn't really prepared to handle this.

I did my best to integrate people's wishes while keeping the helm of the project in the right direction, but I had to ask myself many questions.

Many questions §

Should I care about what other people need? I could say no to everything proposed if I see no benefit for my use case. I chose to accept some changes that I don't use because they made sense in some contexts. But I have to be really careful not to accept everything if I want to keep the program sane.

Should I care about other platforms I don't use? Someone proposed adding some code to support Linux targets, which I don't use, meaning more code I can't test. For the sake of compatibility, and to avoid extra work for packagers, I found a very simple solution for that case, but if someone wanted to port my program to Windows or a platform that would require many, many changes, I don't know how I would react.

Then there is the situation of too much code changing at once. My program changed A LOT since my initial commits, and now a git blame mostly shows no lines from me. This doesn't mean I didn't review the changes made by contributors, but I am not as comfortable with the current code as I was with my own. That doesn't mean the new code is wrong, but it doesn't hold my logic in it. I think it's the biggest deal in this situation: I, as the project manager, must say what can go in, what can't, and when. It's fine to receive contributions, but they shouldn't add complexity or weird algorithms.

Accepting changes §

I am not an expert programmer, I don't often write code, and when I do, it's for my own benefit. Opening our work to others implies making it accessible to outsiders, accepting changes and explaining choices.

Many times I reviewed submitted code and replied that it wasn't fine: while it compiled and applied correctly, it wasn't the right way to do it, so please rework it to make it better or discard it, because it won't get into the repository as is. It's not always easy; people sometimes submit code I don't understand, and I still have to review it thoroughly because I can't accept everything sent.

In some way, once people get involved in my projects, the projects get denatured, because they receive thoughts from others: their ideas, their logic, their needs. It's wonderful and scary at the same time. When I publish code, I never expect it to be useful to someone, and even less that I could receive new features by email from strangers.

Being prepared for this is important when you start a project and make it open source. I could refuse everything, but then I would cut myself off from a potential community around my own code, and that would be a shame.

Responsibility §

This part is not related to my projects (or at least not in this situation) but this is a debate I often think about when reading dramas in open source: is an open source author responsible toward the users?

One way to answer this is that if you publish your content online and accept contributions, it means you care about users (who then contribute back), but where do you draw the limit of what is acceptable? If someone writes an awesome program for themselves and gathers a community around it, and then chooses to make breaking changes or remove important features, what then? The users are free to fork, and the author is free to do whatever they want.

There is no clear responsibility binding contributors and end users. I hope that most of the time contributors think about the end users, but with different philosophies in play we can sometimes end up in dilemmas between the two groups.

Epilogue §

I am very happy to publish open source code and to have contributors; coordinating people, goals and features is not something I expected :)

Please be cautious with this writing: I only had to face this situation with a couple of contributors, and I can't imagine how complicated it can become at a bigger scale!

Securely share a secret using Shamir's secret sharing

Written by Solène, on 21 March 2021.
Tags: #openbsd #security

Comments on Mastodon

Introduction §

I will present you the program ssss (for Shamir's Secret Sharing Scheme), a cryptographic program to split a secret into n parts, requiring at least t parts to recover it (with t <= n).

Shamir Secret Sharing (method is mathematically proven to be secure)

Use case §

The project website lists a few real life use cases and I like them, but I will share another one.

ssss project website

I used to run a community, but there was no person in charge apart from me, which made me a single point of failure. I decided to make the encrypted backup available to a few somewhat-trustable community members, and I gave each of them a secret. There were four members, and I made the backup password recoverable only if the four members agreed to share their secrets. For privacy reasons, I didn't want any of these people to be able to lurk into the backup alone; at least, if something had happened to me, they could recover the database, but only if all four persons agreed on it.

How to use §

ssss-split is easy to use; you can only share text with it. So you can use a very long passphrase to encrypt files and split this passphrase into many shares that you distribute.

You can install it on OpenBSD using pkg_add ssss.

In the following examples, I will create a simple passphrase and then use the generated secrets to get the original passphrase back.

$ ssss-split -t 3 -n 3
Generating shares using a (3,3) scheme with dynamic security level.
Enter the secret, at most 128 ASCII characters: [Note=>hidden input where I typed "this is a very very long password"] Using a 264 bit security level.
1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353

When you want to recover a secret, you will have to run ssss-combine and tell it how many shares you have; they can be provided in any order.

$ ssss-combine -t 3
Enter 3 shares separated by newlines:
Share [1/3]: 2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
Share [2/3]: 3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353
Share [3/3]: 1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
Resulting secret: this is a very very long password

Tips §

If you want to easily store a secret or give it to a non-IT person (or put it in a vault), you can create a QR code and then print the picture. QR codes have redundancy, so if the paper is damaged you can still recover the content; it's quite big on paper, so even if it fades you may not lose data, and it also checks integrity.
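
For example, qrencode can read a share on its standard input and produce the picture to print (the output file name is an example):

echo "2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8" | qrencode -o share2.png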

Conclusion §

ssss is a wonderful program to share a secret among a few people, or to put a few shares here and there for a recovery situation. The program can receive the passphrase on its standard input, allowing it to be scripted.

Interesting fact: if you run ssss-split multiple times on the same text, you always get different shares, so given a share, no brute force can be used to find which input produced it.

How to split a file into small parts

Written by Solène, on 21 March 2021.
Tags: #openbsd #unix

Comments on Mastodon

Introduction §

Today I will present the userland program "split" that is used to split a single file into smaller files.

OpenBSD split(1) manual page

Use case §

Split will create new, smaller files from a single file. The original file can be recovered by running the command cat on all the small files (in the correct order) to recreate it.

There are several use cases for this:

- store a single file (like a backup) on multiple medias (floppies, 700MB CD, DVDs etc..)

- parallelize a file process, for example: split a huge log file into small parts to run analysis on each part

- distribute a file across a few people (I have no idea about the use but I like the idea)

Usage §

Its usage is very simple: run split on a file or feed its standard input; it will create files of 1000 lines each by default. -b can be used to give a size in kB or MB for the new files, or use -l to change the default of 1000 lines. Split can also create a new file each time a line matches a regex given with -p.
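
For instance, a hypothetical log file could be cut at every line matching a date pattern:

split -p '^2021-03-21' big.log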

Here is a simple example splitting a file into 1300kB parts and then reassembling the file from the parts, using sha256 to compare the checksums of the original and reconstructed files.

solene@kongroo ~/V/pmenu> split -b 1300k pmenu.mp4
solene@kongroo ~/V/pmenu> ls
pmenu.mp4  xab        xad        xaf        xah        xaj        xal        xan
xaa        xac        xae        xag        xai        xak        xam
solene@kongroo ~/V/pmenu> cat x* > concat.mp4
solene@kongroo ~/V/pmenu> sha256 pmenu.mp4 concat.mp4 
SHA256 (pmenu.mp4)  = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
SHA256 (concat.mp4) = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
solene@kongroo ~/V/pmenu> ls -l x*
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaa
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xab
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xac
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xad
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xae
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaf
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xag
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xah
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xai
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaj
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xak
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xal
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xam
-rw-r--r--  1 solene  wheel    810887 Mar 21 16:50 xan

Conclusion §

If you ever need to split files into small parts, think about the command split.

For more advanced splitting requirements, the program csplit can be used, I won't cover it here but I recommend reading the manual page for its usage.

csplit manual page

Port of the week: diffoscope

Written by Solène, on 20 March 2021.
Tags: #openbsd

Comments on Mastodon

Introduction §

Today I will introduce you to Diffoscope, a command line tool to compare two directories. I find it very useful when looking for changes between two extracted tarballs; I use it to compare two versions of a program to see what changed.

Diffoscope project website

How to install §

On OpenBSD you can use "pkg_add diffoscope", on other systems you may have a package for it, but it could be installed via pip too.

Usage §

It is really easy to use: give the two directories you want to compare as parameters, and diffoscope will show the uid, gid, permissions and modification/creation/access time changes between the two directories, in addition to content changes.
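
The invocation producing the kind of output shown below is simply:

diffoscope t/ a/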

The output on a simple example looks like the following:

--- t/
+++ a/
│   --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello
│ ├── stat {}
│ │ @@ -1 +1 @@
│ │ -1043 492483 -rw-r--r-- 1 solene wheel 1973218 6 "Mar 20 18:31:08 2021" "Mar 20 18:31:14 2021" "Mar 20 18:31:14 2021" 16384 4 0 t/foo
│ │ +1043 77762 -rw-r--r-- 1 solene wheel 314338 10 "Mar 20 18:31:08 2021" "Mar 20 18:31:18 2021" "Mar 20 18:31:18 2021" 16384 4 0 a/foo

Diffoscope has many flags; if you want to compare only the directories' content, you have to use "--exclude-directory-metadata yes".

Using the same example as previously with --exclude-directory-metadata yes, it looks like:

--- t/
+++ a/
│   --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello

Port of the week: pmenu

Written by Solène, on 12 March 2021.
Tags: #openbsd

Comments on Mastodon

Introduction §

This Port of the week will introduce you to a pie menu for X11, available on OpenBSD since 6.9 (not released yet). A pie menu is a circle with items spread inside it, where items can open other circles with more items. I find it very effective for me because I am more comfortable with spatially organized information (my memory is based on spatialization). I think pmenu was designed for a tablet input device, using a pen to trigger it.

Pmenu github page

Installation §

On OpenBSD, a pkg_add pmenu is enough; on other systems you should be able to compile it out of the box with a C compiler and the X headers.

Configuration §

This part is a bit tricky because the configuration is not obvious. Pmenu takes its configuration on standard input and prints the chosen entry, which must then be piped to a shell.

My configuration file looks like this:

#!/bin/sh

cat <<ENDOFFILE | pmenu | sh &
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/utilities-terminal.png	sakura
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/applets-screenshooter.png	screen_up.sh
Apps
	IMG:/usr/local/share/icons/hicolor/48x48/apps/gimp.png	gimp
	IMG:/home/solene/dev/pmenu/claws-mail.png	claws-mail
	IMG:/usr/local/share/pixmaps/firefox.png	firefox
	IMG:/usr/local/share/icons/hicolor/256x256/apps/keepassxc.png	keepassxc
	IMG:/usr/local/share/icons/hicolor/48x48/apps/chrome.png	chrome
	IMG:/usr/local/share/icons/hicolor/128x128/apps/rclone-browser.png	rclone-browser
Games
	IMG:/home/jeux/slay_the_spire/sts.png	cd /home/jeux/slay_the_spire/ && libgdx-run
	IMG:/home/jeux/Delver/unjar/a/Delver-Logo.png	cd /home/jeux/Delver/unjar/ && /usr/local/jdk-1.8.0/bin/java -Dsun.java2d.dpiaware=true com.interrupt.dungeoneer.DesktopStarter
	IMG:/home/jeux/Dead_Cells/deadcells.png	cd /home/jeux/Dead_Cells/ && hl hlboot.dat
	IMG:/home/jeux/brutal_doom/Doom-The-Ultimate-1-icon.png	cd /home/jeux/doom2/ && gzdoom /home/jeux/brutal_doom/bd21RC4.pk3
Volume
	0%	sndioctl output.level=0
	10%	sndioctl output.level=0.1
	20%	sndioctl output.level=0.2
	30%	sndioctl output.level=0.3
	40%	sndioctl output.level=0.4
ENDOFFILE

The configuration supports levels, like "Apps" or "Games" in this example, which open a second level of shortcuts. Text can be used, like in Volume, but you can also use images, like in the other categories. Every indentation appearing in the configuration is made of tabs.

The pmenu itself can be customized by using X attributes, you can learn more about this on the official project page.

Video §

I made a short video to show how it looks with the configuration shown here.

Note that pmenu is entirely browseable with the keyboard by using tab / enter / escape to switch to next / validate / exit.

Video demonstrating pmenu in action

Easy SpamAssassin with OpenSMTPD

Written by Solène, on 10 March 2021.
Tags: #openbsd #mail

Comments on Mastodon

Introduction §

Today I will explain how to set up very easily the anti-spam SpamAssassin and make it work with the OpenSMTPD mail server (OpenBSD's default mail server). I will suppose you are already familiar with mail servers.

Installation §

We will need two packages: opensmtpd-filter-spamassassin and p5-Mail-SpamAssassin. The first one is a "filter" for OpenSMTPD ("filter" has a special meaning in the smtpd context); it will run spamassassin on incoming emails. The latter is the spamassassin daemon itself.

Filter §

As explained in the pkg-readme file from the filter package, /usr/local/share/doc/pkg-readmes/opensmtpd-filter-spamassassin, a few changes must be made to the smtpd.conf file: mostly a new line to define the filter, and adding "filter "spamassassin"" to the lines starting with "listen".
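
As a hedged sketch of those changes in smtpd.conf (adapt the listen line to your existing configuration, and verify the filter command name against the pkg-readme):

filter "spamassassin" proc-exec "filter-spamassassin"
listen on all filter "spamassassin"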

Website of the filter author who made other filters

SpamAssassin §

SpamAssassin works perfectly fine out of the box, "rcctl enable spamassassin" and "rcctl start spamassassin" is enough to make it work.

Official SpamAssassin project website

Usage §

It should really work out of the box, but you can teach SpamAssassin what good mail ("ham") and spam look like by running "sa-learn --ham" or "sa-learn --spam" on directories containing that kind of mail; this makes SpamAssassin more efficient at filtering by content. Be careful: this command should be run as the same user as the SpamAssassin daemon.

In /var/log/maillog, SpamAssassin will log information about scoring; above a score of 5.0 (the default threshold), a mail is rejected. For legitimate mails, headers are added by SpamAssassin.

Learning §

I use a crontab to run sa-learn once a day on my "Archives" directory holding all my good mails and on my "Junk" directory which holds spam.

0 2 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec sa-learn --spam {} +
5 2 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec sa-learn --ham  {} +

Extra configuration §

SpamAssassin is quite slow but can be sped up by using redis (an in-memory key/value database) to store the tokens that help analyze the content of emails. With redis, you no longer have to care about which user runs sa-learn.

You can install and run redis using "pkg_add redis", "rcctl enable redis" and "rcctl start redis"; make sure that port TCP/6379 is blocked from the outside. You can add authentication to your redis server if you feel it's necessary. I only have one user on my email server and it's me.
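
A minimal PF sketch for blocking redis from the outside (assuming your ruleset doesn't already block it):

# in /etc/pf.conf: refuse redis connections coming from the network
block in quick on egress inet proto tcp from any to any port 6379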

You then have to add some content to /etc/mail/spamassassin/local.cf , you may want to adapt to your redis configuration if you changed something.

bayes_store_module  Mail::SpamAssassin::BayesStore::Redis
bayes_sql_dsn       server=127.0.0.1:6379;database=4
bayes_token_ttl 300d
bayes_seen_ttl   8d
bayes_auto_expire 1

Configure a Bayes backend (like redis or SQL)

Conclusion §

Restart spamassassin after this change and enjoy. SpamAssassin has many options, I only shared the simplest way to set it up with OpenSMTPD.

Implement a «Command not found» handler in OpenBSD

Written by Solène, on 09 March 2021.
Tags: #openbsd

Comments on Mastodon

Introduction §

On many Linux systems, there is a special program run by the shell (configured by default) that will tell you which package provides a command you tried to run that is not available in $PATH. Let's do the same for OpenBSD!

Prerequisites §

We will need to install the package pkglocate to find binaries.

# pkg_add pkglocate

We will also need a file /usr/local/bin/command-not-found executable with this content:

#!/bin/sh

CMD="$1"

RESULT=$(pkglocate */bin/${CMD} */sbin/${CMD} | cut -d ':' -f 1)

if [ -n "$RESULT" ]
then
    echo "The following package(s) contain program ${CMD}"
    for result in $RESULT
    do
        echo "    - $result"
    done
else
    echo "pkglocate didn't find a package providing program ${CMD}"
fi

Configuration §

Now, we need to configure the shell to run this command when it detects an error corresponding to an unknown command. This is possible with bash, zsh or fish at least.

Bash configuration §

Let's go with bash; add this to your bash configuration file:

command_not_found_handle()
{
    /usr/local/bin/command-not-found "$1"
}

Fish configuration §

function fish_command_not_found
    /usr/local/bin/command-not-found $argv[1]
end

ZSH configuration §

function command_not_found_handler()
{
    /usr/local/bin/command-not-found "$1"
}

Trying it §

Now that you configured your shell correctly, if you run a command that isn't available in your PATH, you will either get a list of packages providing the command, or a message telling you the command can't be found in any package (unlucky).

This is a successful output that found the program we were trying to run.

$ pup
The following package(s) contain program pup
    - pup-0.4.0p0

This is a result showing that no package provides a program named "steam".

$ steam
pkglocate didn't find a package providing program steam

Top 12 best opensource games available on OpenBSD

Written by Solène, on 07 March 2021.
Tags: #openbsd #gaming

Comments on Mastodon

Introduction §

This article features the 12 best games (in my opinion) in terms of quality and fun available in OpenBSD packages. The list only contains open source games that you can install out of the box. This means that game engines requiring proprietary (or paid) game assets are not part of this list.

Tales of Maj'Eyal §

Tome4 is a rogue-like game with many classes, many races and lots of areas to explore. There are fun pieces of lore to find and read if it's your thing, and you have to play it many times to unlock everything. Note that while the game is open source, there are paid extensions requiring an online account on the official website; this is not mandatory to play or finish the game.

# pkg_add tome4
$ tome4

Tales of Maj'Eyal official website

Tales of Maj'Eyal screenshot

OpenTTD §

This famous game is a free reimplementation of the Transport Tycoon game. Build roads and rails, make huge train networks with signals, transport materials from extraction sites to industries and then deliver goods to cities to make them grow. There is a huge community and many mods, and the game can be played in multiplayer. Also available on Android.

# pkg_add openttd
$ openttd

OpenTTD official website

[Peertube video] OpenTTD

OpenTTD screenshot

The Battle for Wesnoth §

Wesnoth is a turn based strategy game played on hexagons. There are many races, each with their own units. The game features a full set of campaigns for playing solo but also includes multiplayer. Also available on Android.

# pkg_add wesnoth
$ wesnoth

The Battle for Wesnoth official website

Wesnoth screenshot

Endless Sky §

This game is about space exploration: you are the captain of a ship and you can take missions, enhance your ship, trade goods across the galaxy or fight enemies. There is a learning curve to enjoy it because it's quite hard to understand at first.

# pkg_add endless-sky
$ endless-sky

Endless Sky official website

Endless sky screenshot

OpenRA §

Open Red Alert, the 100% free reimplementation of the engine AND assets of Red Alert, Command and Conquer and Dune. You can play all these games from OpenRA, including in multiplayer. Note that there are no campaigns: you can play skirmishes alone with bots or in multiplayer. Campaigns (and cinematics) can be played using the original game files (from the OpenRA launcher); as the games were published as freeware a few years ago, one can find them for free and legally.

# pkg_add openra
$ openra
wait for instructions to download the assets of the game you want to play

OpenRA official website

[Peertube video] Red Alert

Red Alert screenshot

Cataclysm: Dark Days Ahead §

Cataclysm DDA is a game in which you awake in a zombie apocalypse and have to survive. The game is extremely complete and allows many actions/combinations, like driving vehicles or disassembling electronics to build your own devices, and many things I haven't tried yet. The game is turn based and viewed in 2D from the top. I highly recommend reading the manual and how-to because the game is hard. You can also create your character when you start a game, which totally changes the game experience depending on your character's attributes and knowledge.

# pkg_add cataclysm-dda
$ cataclysm-dda

Cataclysm: Dark Days Ahead official website

Cataclysm DDA screenshot

Taisei §

Taisei is a bullet hell game in the Touhou universe. Very well done, extremely fun, with multiple playable characters, each having an alternative mechanic.

# pkg_add taisei
$ taisei

Taisei official website

[Peertube video] Taisei

Taisei screenshot

The Legend of Zelda: Return of the Hylian SE §

There is a game engine named Solarus dedicated to writing Zelda-like games, and Zelda RotH is a game based on it. Nothing special to say, it's a 2D Zelda game, very well done, with a new adventure.

# pkg_add zelda_roth_se
$ zelda_roth_se

Zelda RotH official website

ROTH screenshot

Shapez.io §

This game is about building industries out of shapes and colors in order to deliver what you are asked to produce in the most efficient manner. This game is addictive and easy to understand thanks to the tutorial shown when you start the game.

# pkg_add shapezio
$ /usr/local/bin/electron /usr/local/share/shapez.io/index.html

Shapez.io official website

Shapez.io screenshot

OpenArena §

OpenArena is a Quake 3 reimplementation, including assets. It's like Quake 3 but it's not Quake 3 :)

# pkg_add openarena
$ openarena

OpenArena official website

Openarena screenshot

Xonotic §

This is a fast paced arena FPS game with beautiful graphics, many weapons with two fire modes and many game modes. It reminds me a lot of Unreal Tournament 2003.

# pkg_add xonotic
$ xonotic

Xonotic official website

Xonotic screenshot

Hyperrogue §

This game is a rogue-like (every run is different from the last one) in which you move from hexagon to hexagon to get points; each biome has its own characteristics, like a sand biome where you have to gather spice and escape sand worms :-) . The game is easy to play, turn by turn, and has unusual graphics because of the non-Euclidean nature of its world. I recommend reading the game manual because the first time I played it I really disliked it, having missed most of the game mechanics... Also available on Android!

Hyperrogue official website

Hyperrogue screenshot

And many others §

Here is a list of games I didn't include but that are also worth playing: 0ad, Xmoto, Freedoom, The Dark Mod, Freedink, crack-attack, witchblast, flare, vegastrike and many others.

List of games available on OpenBSD

Port of the week: checkrestart

Written by Solène, on 02 March 2021.
Tags: #openbsd #portoftheweek

Comments on Mastodon

Introduction §

This article features the very useful OpenBSD-specific program "checkrestart". Its purpose is to display the programs (and their PIDs) whose binaries don't exist anymore on disk.

Why would a binary be absent? The obvious case is that the program was removed, but what checkrestart is really good at is spotting upgrades: when you upgrade a package whose binaries are running, the old binary is deleted and the new binary installed. In that case, you have to stop the running processes and restart them. Hence the name "checkrestart".

Installation §

Installing it is as simple as running pkg_add checkrestart

Usage §

This is simple too: when you run checkrestart, you get a list of PID numbers along with the binary names.

For example, on my system, checkrestart tells me which updated programs I should restart to run the new binaries.

69575	lagrange
16033	lagrange
9664	lagrange
77211	dhcpleased
6134	dhcpleased
21860	dhcpleased

Real world usage §

If you run OpenBSD -stable, you will want to use checkrestart after running pkg_add -u. After a package update, most often for daemons, you will have to restart the related services.

On my server, in my daily script updating packages and running syspatch, I use it to automatically restart some services.

checkrestart | grep php && rcctl restart php-fpm
checkrestart | grep postgres && rcctl restart postgresql
checkrestart | grep nginx && rcctl restart nginx

Other Operating System §

I've been told that checkrestart is also available on FreeBSD as a package! The output may differ but the use is the same.

On Linux, a similar tool exists under the name "needrestart", at least on Debian and Gentoo.

Port of the week: shapez.io - a libre factory gaming

Written by Solène, on 26 February 2021.
Tags: #openbsd #openbsd69 #gaming #portoftheweek

Comments on Mastodon

Introduction §

I would like to introduce you to a very nice game I discovered a few months ago. Its name is Shapez.io and it is a "factory" game, a genre popularized by the famous game Factorio. In this game you have to extract shapes and colors, rework the shapes, mix colors and combine the whole thing to produce the pieces you are asked for.

The game §

The gameplay is very cool: the early game is an introduction to the game mechanics, you can extract shapes, cut them, rotate pieces, merge conveyor belts into one, paint shapes etc... and logic circuits!

In this kind of game, you have to learn how to make efficient factories and mostly "tile-able" installations. A tile-able setup means that if you copy a setup and paste it next to the original, the whole remains functional at a bigger scale, meaning you can extend it to infinity (except that the input conveyors will starve at some point).

It can be quite addictive to improve your setups over and over. This game is non violent and doesn't require any reflexes, but you need to think. You can't lose; it sits somewhere between a puzzle and a management game.

Compact tile-able painting setup (may spoil if you want to learn yourself)

Where to get it §

On OpenBSD, since version 6.9 (not released yet as I publish this), you can install the package shapezio and find a launcher in your desktop environment's Game menu.

I also compiled a web version that you can play in your web browser (I discourage using Firefox due to performance..) without installing it; this is legal because the game is open source :)

Play shapez.io in the web browser

The game is also sold on Steam, pre-compiled and ready to run, if you prefer it, it's also a nice way to support the developer.

shapez.io on Steam

More content §

Official website

Youtube video of "Real civil engineer" explaining the game

Nginx as a TCP/UDP relay

Written by Solène, on 24 February 2021.
Tags: #openbsd #nginx #network

Comments on Mastodon

Introduction §

In this tutorial I will explain how to use Nginx as a TCP or UDP relay as an alternative to Haproxy or Relayd. This means Nginx will be able to accept requests on a TCP or UDP port and relay them to a backend without knowing anything about the content. It can also negotiate a TLS session with the client and relay to a non-TLS backend. In this example I will explain how to configure Nginx to accept TLS requests and forward them to my Gemini server Vger, the Gemini protocol having TLS as a requirement.

I will explain how to install and configure Nginx and how to parse logs to obtain useful information. I will use an OpenBSD system for the examples.

It is important to understand that in this context Nginx is not doing anything related to HTTP.

Installation §

On OpenBSD we need the package nginx-stream; if you are unsure about which package is required on your system, search for the package providing the file ngx_stream_module.so. To enable Nginx at boot, you can use rcctl enable nginx.
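
On OpenBSD this translates to:

# pkg_add nginx-stream
# rcctl enable nginx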

Nginx stream module core documentation

Nginx stream module log documentation

Configuration §

The default configuration file for nginx is /etc/nginx/nginx.conf , we will want it to listen on port 1965 and relay to 127.0.0.1:11965.

worker_processes  1;

load_module modules/ngx_stream_module.so;

events {
   worker_connections 5;
}

stream {
    log_format basic '$remote_addr $upstream_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time';

    access_log logs/nginx-access.log basic;

    upstream backend {
        hash $remote_addr consistent;
        server 127.0.0.1:11965;
    }
    server {
        listen 1965 ssl;
        ssl_certificate /etc/ssl/perso.pw:1965.crt;
        ssl_certificate_key /etc/ssl/private/perso.pw:1965.key;
        proxy_pass backend;
    }
}

In the previous configuration file, the upstream block defines the destination; multiple servers could be defined there, with weights and timeouts, but there is only one in this example.

The server block tells on which port Nginx should listen and whether it has to handle TLS (which is named ssl for historical reasons); the usual TLS configuration can be used here. Then, for each request, we tell Nginx which backend to relay the connection to.

The configuration file defines a custom log format that is useful for TLS connections: it includes the remote host, backend destination, connection status, bytes transferred and duration.

Log parsing §

Using awk to calculate time performance §

I wrote a quite long shell command that parses the log defined earlier and displays the number of requests and the median/min/max session times.

$ awk '{ print $NF }' /var/www/logs/nginx-access.log | sort -n | awk '{ data[NR] = $1 } END { print "Total: "NR" Median:"data[int(NR/2)]" Min:"data[1]" Max:"data[NR] }'
Total: 566 Median:0.212 Min:0.000 Max:600.487

Find bad clients using awk §

Sometimes the logs show clients that obtain a status 500, meaning the TLS connection wasn't established correctly. It may be some scanner that doesn't even try a TLS connection; if you want statistics about those clients, to see whether blocking them after too many attempts would be worth it, awk makes it easy to get the list.

awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log
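
To count the attempts per client address and spot the worst offenders, the same extraction can be piped through sort and uniq:

awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log | sort | uniq -c | sort -rn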

Using goaccess for real time log visualization §

It is also possible to use the program Goaccess to view logs in real time with a lot of information; it is really an awesome program.

goaccess --date-format="%d/%b/%Y" \
         --time-format="%H:%M:%S" \
         --log-format="%h %r [%d:%t %^] TCP %s %^ %b %L" /var/www/logs/nginx-access.log

Goaccess official website

Conclusion §

I was using relayd before trying Nginx with the stream module; while relayd worked fine, it doesn't provide any of the logs Nginx offers. I am really happy with this use of Nginx because it is a very versatile program that has shown itself over time to be more than an HTTP server. For a minimal setup I would still recommend a lighter daemon such as relayd.

Port of the week: catgirl irc client

Written by Solène, on 22 February 2021.
Tags: #openbsd69 #openbsd #irc #catgirl #portoftheweek

Comments on Mastodon

Introduction §

In this Port of the Week I will introduce you to the IRC client catgirl. While there are already many IRC clients available (and good ones), there was a niche that wasn't filled yet in the terminal world, between minimalism (ii, ircII) and full featured clients (irssi, weechat). Here comes catgirl, a simple IRC client with enough features to be comfortable for heavy IRC users.

Catgirl has the following features: tab completion, split scrolling, URL detection, nick coloring, ignore filters. On the other hand, it doesn't support non-TLS networks, CTCP, multiple networks or dynamic configuration. If you want to use catgirl with multiple networks, you have to run one instance per network.

Catgirl will be available as a package in OpenBSD starting with version 6.9.

OpenBSD security bonus: catgirl makes very good use of unveil to reduce file system access to the minimum required (configuration+logs+certs), reducing the severity of an exploit. It also has a restricted mode, enabled with the -R parameter, that disables features like notifications or URL handling and tightens the pledge list (allowed system calls).

Catgirl official website

Catgirl screenshot

Configuration §

A simple configuration file to connect to the irc.tilde.chat server would look like the following; it must be stored under ~/.config/catgirl/tilde

nick = solene_nickname
real = Solene
host = irc.tilde.chat
join = #foobar-channel

You can then run catgirl with this configuration by passing the configuration file name as a parameter.

$ catgirl tilde

Usage and tips §

I recommend reading catgirl man page, everything is well explained there. I will cover most basics needs here.

Catgirl man page

Catgirl only displays one window at a time; it is not possible to split the display. However, if you scroll up, the screen splits: the upper part shows the history while the bottom keeps displaying the live text stream. It is a neat way to browse the history without cutting yourself off from what's going on in the channel.

Channels can be browsed from keyboard using Ctrl+N or Ctrl+P like in Irssi or by typing /window NUMBER, with number being the buffer number. Alt+NUMBER could also be used to switch directly to buffer NUMBER.

You can search in a buffer by typing a word in your input and using Ctrl+R to search backward or Ctrl+S to search forward (given you are in the history, of course).

Finally, my favorite feature, which is missing in minimal clients, is Alt+A: it jumps to the next buffer that has unread messages (yes, catgirl keeps a line showing how many messages arrived in each channel since you last read it). Even better, pressing Alt+A when there is nothing left to read jumps back to the channel you last selected manually; this allows quickly catching up on what you missed and returning to the channel where you spend all your time.

Conclusion §

I really love this IRC client. It easily replaced Irssi, which I had used for years, because most of the key bindings are the same, and I am also very happy to use a client that is a lot safer (on OpenBSD). It can be used within tmux for persistence, which also makes connecting to multiple servers manageable.

Full list of services offered by a default OpenBSD installation

Written by Solène, on 16 February 2021.
Tags: #openbsd69 #openbsd #unix

Comments on Mastodon

Introduction §

This article gives a short description of EVERY service available as part of an OpenBSD default installation (= no package installed).

Out of this list, the following services are started by default: cron, pflogd, sndiod, openssh, ntpd, syslogd and smtpd. Among them, the network related daemons smtpd (localhost only), openssh and ntpd (as a client) are running.

Service list §

I extracted the list of base install services by looking at /etc/rc.conf.

$ grep _flags /etc/rc.conf | cut -d '_' -f 1

amd §

This daemon is used to automatically mount a remote NFS server when someone wants to access it, it can provide a replacement in case the file system is not reachable. More information using "info amd".

amd man page

apmd §

This is the daemon responsible for frequency scaling. It is important to run it on workstations and especially on laptops; it can also trigger automatic suspend or hibernation in case of a low battery.

apmd man page

apm man page

bgpd §

This is a BGP daemon used by network routers to exchange routes with other routers. This is mainly what makes the Internet work: every hosting company announces their IP ranges and how to reach them, and in return they receive the paths to connect to all other addresses.

OpenBGPD website

bootparamd §

This daemon is used for diskless setups on a network, it provides information about the client such as which NFS mount point to use for swap or root devices.

Information about a diskless setup

cron §

This daemon reads each user's crontab and the system crontabs to run scheduled commands. User crontabs are modified using the crontab command.

Cron man page

Crontab command

Crontab format

dhcpd §

This is a DHCP server used to automatically provide IPv4 addresses on a network for systems using a DHCP client.

dhcrelay §

This is a DHCP request relay, used on a network interface to relay DHCP requests to another interface.

dvmrpd §

This daemon is a multicast routing daemon, in case you need multicast spanning outside of your local LAN. This is mostly replaced by PIM nowadays.

eigrpd §

This daemon implements an internal gateway link-state routing protocol; it is like OSPF but compatible with CISCO.

ftpd §

This is an FTP server providing many features. While FTP is getting abandoned and obsolete (certainly because it doesn't really play well with NAT), it can be used to provide anonymous read/write access to a directory (and many other things).

ftpd man page

ftpproxy §

This is an FTP proxy daemon that one is supposed to run on a NAT system; it will automatically add PF rules to connect an incoming request to the server behind the NAT. This is part of the FTP madness.

ftpproxy6 §

Same as above but for IPv6. Using IPv6 behind NAT makes no sense.

hostapd §

This is the daemon that turns OpenBSD into a WiFi access point.

hostapd man page

hostapd configuration file man page

hotplugd §

hotplugd is an amazing daemon that triggers actions when devices are connected or disconnected. It can be scripted, for example, to automatically run a backup when a USB disk matching a known name is inserted, or to mount a drive.

hotplugd man page
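
As a minimal sketch, hotplugd runs the /etc/hotplug/attach script with the device class and name as arguments (the disk name sd1 and the backup script are hypothetical here):

#!/bin/sh
# /etc/hotplug/attach: $1 is the device class, $2 the device name
case "$2" in
sd1)
	# hypothetical USB disk: mount it and run a backup script
	mount /dev/sd1i /mnt/backup && /usr/local/bin/my-backup.sh
	;;
esac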

httpd §

httpd is an HTTP(S) daemon supporting a few features like FastCGI, rewrites and SNI. While it doesn't have all the features of a web server like nginx, it is able to host some PHP programs such as Nextcloud, Roundcube mail or MediaWiki.

httpd man page

httpd configuration file man page

identd §

Identd is a daemon for the Identification Protocol, which returns the login name of the user who initiated a connection; this can be used on IRC to authenticate which user started an IRC connection.

ifstated §

This daemon monitors the state of network interfaces and can take actions upon changes. This can be used to react to an interface losing connectivity; I used it to trigger a route change to a 4G device when pings over the uplink interface were failing.

ifstated man page

ifstated configuration file man page

iked §

This daemon is used to provide IKEv2 authentication for IPSec tunnel establishment.

OpenBSD FAQ about VPN

inetd §

This daemon is often forgotten but is very useful. Inetd can listen on TCP or UDP ports and run a command upon connection on the related port: incoming data is passed as the program's standard input and the program's standard output is returned to the client. This is an easy way to turn a program into a network service. It is not widely used because it doesn't scale well: running a new program for every connection can push a system to its limits.

inetd man page

isakmpd §

This daemon is used to provide IKEv1 authentication for IPSec tunnel establishment.

iscsid §

This daemon is an iSCSI initiator which will connect to an iSCSI target (think of it as a network block device) and expose it locally as a /dev/vscsi device. OpenBSD doesn't provide an iSCSI target daemon in its base system but there is one in ports.

ldapd §

This is a light LDAP server, offering version 3 of the protocol.

ldap client man page

ldapd daemon man page

ldapd daemon configuration file man page

ldattach §

This daemon allows attaching a line discipline to a serial line, for devices exposed as a serial port such as GPS receivers.

ldomd §

This daemon is specific to the sparc64 platform and provides services for the logical domains (ldom) feature.

lockd §

This daemon is used as part of an NFS environment to support file locking.

ldpd §

This daemon is used by MPLS routers to get labels.

lpd §

This daemon is used to manage print access to a line printer.

mountd §

This daemon is used by remote NFS clients to learn what the system is currently offering. The showmount command can be used to see what mountd currently exposes.

mountd man page

showmount man page

mopd §

This daemon is used to distribute MOP images, which seems related to Alpha and VAX architectures.

mrouted §

Similar to dvmrpd.

nfsd §

This server services NFS requests from NFS clients. Statistics about NFS (client or server) can be obtained from the nfsstat command.

nfsd man page

nfsstat man page

npppd §

This daemon is used to establish connection using PPP but also to create tunnels with L2TP, PPTP and PPPoE. PPP is used by some modems to connect to the Internet.

nsd §

This daemon is an authoritative DNS nameserver, which means it holds all the information about a domain name and its subdomains. It receives queries from recursive servers such as unbound / unwind etc... If you own a domain name and want to manage it from your own system, this is what you want.

nsd man page

nsd configuration file man page

ntpd §

This daemon is an NTP service that keeps the system clock at the correct time; it can use NTP servers or sensors (like GPS) as time sources, and also supports using remote servers to challenge the time sources. It can also act as a server to provide time to other NTP clients.

ntpd man page

ospfd §

It is a daemon for the OSPF routing protocol (Open Shortest Path First).

ospf6d §

Same as above for IPv6.

pflogd §

This daemon receives packets logged by PF rules using the "log" keyword and stores the data in a logfile that can later be replayed with tcpdump. Every packet in the logfile contains information about which rule triggered it, which is very practical for analysis.

pflogd man page

tcpdump

portmap §

This daemon is used as part of an NFS environment.

rad §

This daemon is used on IPv6 routers to advertise routes so that clients can automatically pick them up.

radiusd §

This daemon is used to offer RADIUS protocol authentication.

rarpd §

This daemon is used for diskless setups, helping associate an Ethernet address with an IP address and hostname.

Information about a diskless setup

rbootd §

Per the man page, it says « rbootd services boot requests from Hewlett-Packard workstation over LAN ».

relayd §

This daemon is used to accept incoming connections and distribute them to backends. It supports many protocols and can act transparently; its purpose is to be a front end that dispatches connections to a list of backends while checking the backends' health. It has many uses and can also be used in addition to httpd to add HTTP headers to a request, or to apply conditions on HTTP request headers to choose a backend.

relayd man page

relayd control tool man page

relayd configuration file man page

ripd §

This is a routing daemon using RIP, an old but widely supported protocol.

route6d §

Same as above but for IPv6.

sasyncd §

This daemon is used to keep IPSec gateways synchronized in case a failover is required. It can be used with carp devices.

sensorsd §

This daemon gathers monitoring information from the hardware like temperature or disk status. If a check exceeds a threshold, a command can be run.

sensorsd man page

sensorsd configuration file man page

slaacd §

This daemon automatically picks up IPv6 auto-configuration (SLAAC) on the network.

slowcgi §

This daemon exposes a CGI program as a FastCGI service, allowing the httpd HTTP server to run CGI programs. It is an equivalent of inetd but for FastCGI.

slowcgi man page

smtpd §

This daemon is the SMTP server used to deliver mail locally or to remote email servers.

smtpd man page

smtpd configuration file man page

smtpd control command man page

sndiod §

This is the daemon handling sound from various sources. It also supports sending local sound to a remote sndiod server.

sndiod man page

sndiod control command man page

mixerctl man page to control an audio device

OpenBSD FAQ about multimedia devices

snmpd §

This daemon is an SNMP server exposing some system metrics to SNMP clients.

snmpd man page

snmpd configuration file man page

spamd §

This daemon acts as a fake SMTP server that will delay, block or pass emails depending on some rules. It can be used to add IPs to a block list if they try to send an email to a specific address (like a honeypot), to pass emails from servers within an accept list, or to delay connections from unknown servers (grey list) to make them retry a few times before passing the email to the real SMTP server. This used to be a quite effective way to prevent spam, but it becomes less relevant as senders use whole ranges of IPs to send emails: if you want to receive an email from a big email provider, you will delay server X.Y.Z.1 but then X.Y.Z.2 will retry, and so on, so none will ever pass the grey list.

spamlogd §

This daemon is dedicated to the update of spamd whitelist.

sshd §

This is the well known ssh server, allowing secure shell connections from remote clients. It has many features that deserve to be better known, such as restricting commands per public key in the ~/.ssh/authorized_keys file or providing SFTP-only chrooted access.

sshd man page

sshd configuration file man page

statd §

This daemon is used in NFS environments together with lockd in order to check if remote hosts are still alive.

switchd §

This daemon is used to control a switch pseudo device.

switch pseudo device man page

syslogd §

This is the logging server that receives messages from local programs and stores them in the corresponding logfiles. It can be configured to pipe some messages to a command (programs like sshlockout use this method to learn about IPs that must be blocked), and it can also listen on the network to aggregate logs from other machines. The program newsyslog is used to rotate files (move a file, compress it, allow a new file to be created and remove archives that are too old). Scripts can use the logger command to send text to syslog.

syslogd man page

syslogd configuration file man page

newsyslog man page

logger man page

tftpd §

This daemon is a TFTP server, used to provide kernels over the network for diskless machines or push files to appliances.

Information about a diskless setup

tftpproxy §

This daemon is used to manipulate the firewall PF to relay TFTP requests to a TFTP server.

unbound §

This daemon is a recursive DNS server, the kind of server listed in /etc/resolv.conf, whose responsibility is to translate a fully qualified domain name into the IP address behind it, asking one server at a time. For example, to resolve www.dataswamp.org, it asks a .org authoritative server for the authoritative server of dataswamp (within the .org top domain), then asks the dataswamp.org DNS server for the address of www.dataswamp.org. It also keeps queries in cache and validates queries and replies; it is a good idea to have such a server on a LAN with many clients in order to share the query cache.

unbound man page

unbound configuration file man page

unwind §

This daemon is a local recursive DNS server that does its best to give valid replies. It is designed for nomad users that may encounter hostile environments like captive portals or DHCP-provided DNS servers preventing DNSSEC from working, etc.. Unwind regularly polls a few DNS sources (recursive resolution from the root servers, servers provided by DHCP, or stub and DNS over TLS servers from the configuration file) and chooses the fastest one. It also acts as a local cache, and it can't listen on the network to be used by other clients. It also supports a list of blocked domains as input.

unwind man page

unwind configuration file man page

unwind control command man page

vmd §

This is the daemon that allows running virtual machines using vmm. As of OpenBSD 6.9 it is capable of running OpenBSD and Linux guests, without a graphical interface and with only one core.

vmd man page

vmd configuration file man page

vmd control command man page

vmm driver man page

OpenBSD FAQ about virtualization

watchdogd §

This daemon is used to trigger watchdog timer devices if any.

wsmoused §

This daemon is used to provide a mouse support to the console.

xenodm §

This daemon is used to start the X server and allow users to authenticate themselves and log in their session.

xenodm man page

ypbind §

This daemon is used with a Yellow Pages (YP) server to keep and maintain a binding information file.

ypldap §

This daemon offers a YP service using a LDAP backend.

ypserv §

This daemon is a YP server.

What security does a default OpenBSD installation offer?

Written by Solène, on 14 February 2021.
Tags: #openbsd69 #openbsd #security

Comments on Mastodon

Introduction §

In this text I will explain what makes OpenBSD secure by default when you install it. Don't take this as a security analysis; it is more of a guide to help you understand what OpenBSD does to provide a secure environment. The purpose of this text is not to compare OpenBSD to other OSes but to state what you can honestly expect from OpenBSD.

There is no security without a threat model; I always consider the following cases: a computer stolen at home by a thief, remote attacks trying to exploit running services, and exploitation of user network clients.

Security matters §

Here is a list of features that I consider important for operating system security. While not every item in the following list is strictly a security feature, they help having a strict system that prevents software from misbehaving and leading to unknown lands.

In my opinion, security is not only about preventing remote attackers from penetrating the system, but also about preventing programs or users from making the system unusable.

Pledge / unveil on userland §

Pledge and unveil are often mentioned together although they can be used independently. Pledge is a system call that restricts the permissions of a program at some point in its source code; permissions can't be regained once pledge has been called. Unveil is a system call that hides the whole file system from the process except the paths that are unveiled; it is possible to choose which permissions are allowed on those paths.

Both are very effective and powerful surgical security tools, but they require modifications to the source code of a program, and adding them requires a deep understanding of what the software does. It is not always possible to forbid system calls to a program that needs to do almost anything; software designed with privilege separation is a better candidate for a proper pledge addition because each part has its own job.

Some software in packages has received pledge or/and unveil support, Chromium and Firefox being the most well known.

OpenBSD presentation about Unveil (BSDCan2019)

OpenBSD presentation of Pledge and Unveil (BSDCan2018)

Privilege separation §

Most of the base system services in OpenBSD run using a privilege separation pattern, with each part of a daemon restricted to the minimum required. A monolithic daemon that has to read/write files, accept network connections and send messages to the log offers a huge attack surface in case of a security breach. Separating a daemon into multiple parts allows finer-grained control of each worker, and using the pledge and unveil system calls it is possible to set limits that highly reduce the damage when a worker is hacked.

Clock synchronization §

The ntpd daemon is started by default to keep the clock synchronized with time servers. A reference TLS server is used to challenge the time servers. Keeping a computer's clock synchronized is very important. This is not really a security feature, but you can't be serious using a computer on a network without its time synchronized.

X display not as root §

If you use X, it drops privileges to the _x11 user: it runs as an unprivileged user instead of root, so in case of a security issue this prevents an attacker from gaining more access through an X11 bug than it should.

Resources limits §

Default resource limits prevent a program from using too much memory, too many open files or too many processes. While this can prevent some huge programs from running with the default settings, it also helps finding file descriptor leaks and prevents a fork bomb or a simple daemon from stealing all the memory and crashing the system.

Genuine full disk encryption §

When you install OpenBSD with a full disk encryption setup, everything is locked behind the passphrase at the bootloader step; you can't access the kernel or anything on the system without the passphrase.

W^X §

Most programs on OpenBSD aren't allowed to map memory with the Write AND Execute bits at the same time (W^X means Write XOR Exec); this prevents an interpreter from having its memory modified and executed. Some packages aren't compliant and must be linked with a specific library to bypass this restriction AND must be run from a partition mounted with the "wxallowed" option.

OpenBSD presentation « Kernel W^X Improvements In OpenBSD »

Only one reliable randomness source §

When your system requires a random number (and it does very often), OpenBSD provides only one API to get one, and the numbers are really random and can't be exhausted. A good random number generator (RNG) is important for many cryptographic requirements.

OpenBSD presentation about arc4random

Accurate documentation §

OpenBSD comes with full documentation in its man pages. One should be able to fully configure their system using only the man pages. Man pages sometimes come with CAVEATS or BUGS sections; it's important to pay attention to them. It is better to read the documentation and understand what has to be done to configure a system than to follow an outdated and anonymous text found on the Internet.

OpenBSD man pages online

EuroBSDcon 2018 about « Better documentation »

IPSec and Wireguard out of the box §

If you need to setup a VPN, you can use IPSec or Wireguard protocols only using the base system, no package required.

Memory safeties §

OpenBSD has many safety measures around memory allocation and will very aggressively prevent use-after-free or unsafe memory usage. This is often a source of crashes for some packaged software because OpenBSD is very strict about memory use; it helps finding memory misuse and will kill misbehaving software.

Dedicated root account §

When you install the system, a root account is created and its password is asked, then you create a user that is a member of the "wheel" group, allowing it to switch to root with root's password. doas (the OpenBSD base system equivalent of sudo) isn't configured by default. With the default installation, the root password is required for any root action. I think a dedicated root account that can be logged into without doas/sudo is better than a misconfigured doas/sudo allowing everything to anyone who knows the user password.

Small network attack surface §

The only services that may be enabled at installation time and listen on the network are OpenSSH (asked at install time, default = yes), dhclient (if you choose DHCP) and slaacd (if you use IPv6 in automatic configuration).

Encrypted swap §

By default the OpenBSD swap is encrypted, meaning that if program memory is sent to the swap, nobody can recover it later.

SMT disabled §

Due to the large number of security breaches related to SMT (like hyperthreading), the default installation disables the logical cores to prevent any data leak.

Meltdown: one of the first security issue related to speculative execution in the CPU

Micro and Webcam disabled §

With the default installation, both the microphone and the webcam won't actually record anything, producing blank video/sound until you set a sysctl to allow recording.
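
If I recall correctly, the sysctls involved are kern.audio.record and kern.video.record; recording can be enabled with the following commands (and made persistent in /etc/sysctl.conf):

# sysctl kern.audio.record=1
# sysctl kern.video.record=1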

Maintainability, release often, update often §

The OpenBSD team publishes a new release every six months and only the last two releases receive security updates. This allows upgrading often and without pain: the upgrade process is a small step twice a year that helps keep the whole system up to date. This avoids the fear of a huge upgrade that never gets done, and I consider it a huge security bonus. Most OpenBSD systems around are running the latest versions.

Signify chain of trust §

The installer, archives and packages are signed using signify public/private keys. OpenBSD installations come with the keys for the current release and release n+1 to check the packages' authenticity. A key is used for only six months, and new keys are received with each new release, building a chain of trust. Signify keys are very small and are published in many places so you can double check when you need to bootstrap this chain of trust.

Signify at BSDCan 2015

Packages §

While most of the previous items were about the base system or the kernel, the packages also have a few tricks to offer.

Chroot by default when available §

Most daemons offering a chroot feature will have it enabled by default. In some cases, like the Nginx web server, the software is patched by the OpenBSD team to enable chroot, which is not an official feature.

Dedicated users for services §

Most packages providing a server also create a new dedicated user for that exact service, allowing more privilege separation in case of a security issue in one service.

Installing a service doesn't enable it §

When you install a service, it doesn't get enabled by default; you have to configure the system to enable it at boot. There is a single /etc/rc.conf.local file that shows what is enabled at boot, and it can be manipulated using the rcctl command. Forcing the user to enable services makes the system administrator fully aware of what is running on the system, which is a good point for security.
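
For example, with a freshly installed nginx package (taken here only as an illustration):

# rcctl enable nginx
# rcctl start nginx
# rcctl ls on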

rcctl man page

Conclusion §

Most of the previous "security features" should be considered good practices rather than features. Many of them could easily be implemented in most systems: limiting user resources, reducing daemon privileges, memory usage strictness, providing good documentation, starting only the least required services and providing the user a clean default installation.

There are also many other features that have been added which I don't fully understand, and that I prefer letting the reader discover.

« Mitigations and other real security features » by Theo De Raadt

OpenBSD innovations

OpenBSD events, often including slides or videos

Firejail on Linux to sandbox all the things

Written by Solène, on 14 February 2021.
Tags: #linux #security #sandbox

Comments on Mastodon

Introduction §

Firejail is a program that can prepare sandboxes to run other programs. This is an efficient way to keep a software isolated from the rest of the system without need of changing its source code, it works for network, graphical or daemons programs.

You may want to sandbox programs you run in order to protect your system from any issue that could happen within the program (security breach, code mistake, unknown errors). Steam once had a "rm -fr /" issue; a sandbox would have saved at least part of the user directory. Web browsers are major tools nowadays, yet they have access to the whole system and regularly have security issues discovered and exploited in the wild; running one in a sandbox can reduce the data a hacker could exfiltrate from the computer. Of course, sandboxing comes with a usability tradeoff: if you only allow access to the ~/Downloads/ directory, you need to put files there if you want to upload them, and you can only download files into this directory and then move them later to where you really want to keep them.

Installation §

On most Linux systems you will find a Firejail package that you can install. If your distribution doesn't provide one, installing from source seems quite easy, and as the project is written in C with few dependencies, the build process should be straightforward.

There is no service to enable and no kernel parameters to add. AppArmor or SELinux kernel features can be used to integrate with Firejail profiles if you want to.

Usage §

Start a program §

The simplest usage is to run a command by adding firejail before the command name.

$ firejail firefox

Firejail has a neat feature allowing software to be started by name without calling firejail explicitly: if you create a symbolic link in your $PATH named after a program but targeting firejail, when you call that name firejail will automatically know what you want to start. The following example will run firefox when you call the symbolic link.

$ export PATH=~/bin/:$PATH
$ ln -s /usr/bin/firejail ~/bin/firefox
$ firefox

Listing sandboxes §

There is a firejail --list command that will tell you about all running sandboxes and their parameters. The first column is an identifier used by other firejail features.

$ firejail --list
6108:solene::/usr/bin/firejail /usr/bin/firefox 

Limit bandwidth per program §

Firejail also has a neat feature that allows limiting the bandwidth available to a single sandbox environment. Reusing the previous list output, I will reduce the firefox bandwidth; the numbers are in kB/s.

$ firejail --bandwidth=6108 set wlan0 1000 40

You can find more information about this feature in the "TRAFFIC SHAPING" section of the Firejail man page.

Restrict network access §

If for some reason you want to start a program with absolutely no network access, you can run a program and deny it any network.

$ firejail --net=none libreoffice

Conclusion §

Firejail is a neat way to start software in sandboxes without requiring any particular setup. It may be more limited and perhaps less reliable than OpenBSD programs that received unveil() support, but it's a nice trade off between safety and the work required within source code (literally none). It is a very interesting project that proves to work easily on any Linux system, with simple C source code and few dependencies. I am not really familiar with the Linux kernel and its features, but Firejail seems to use seccomp-bpf and namespaces; I guess they are complicated to use but powerful, and Firejail comes as a wrapper to automate all of this.

Firejail has proven USABLE and RELIABLE for me, while my attempts at sandboxing Firefox with AppArmor were tedious and not optimal. I really recommend it.

More resources §

Official project website with releases and security information

Firejail sources and documentation

Community profiles 1

Community profiles 2

Bandwidth limiting on OpenBSD 6.8

Written by Solène, on 07 February 2021.
Tags: #openbsd68 #openbsd #unix #network

Comments on Mastodon

This is a February 2021 update of a text originally published in April 2017.

Introduction §

I will explain how to limit bandwidth on OpenBSD using the queuing capability of its firewall, PF (Packet Filter). It is a very powerful feature but it may be hard to understand at first. What is very important to understand is that it's technically not possible to limit the download bandwidth of the whole system: once data arrives on your network interface, it has already traveled through your link. What is possible is to limit the upload rate, which indirectly caps the download rate.

OpenBSD pf.conf man page about queuing

Prerequisites §

My home internet access allows me to download at 1600 kB/s and upload at 95 kB/s. An easy way to limit bandwidth is to pick a percentage of your upload rate and apply the same ratio to your download speed (this may not be very precise and may require tweaks).

PF syntax requires bandwidth to be defined in kilobits (kb) and not kilobytes (kB); multiplying by 8 converts kB to kb (for example, 95 kB/s × 8 = 760 kb/s).

Configuration §

Edit the file /etc/pf.conf as root and add the following before any pass/match/drop rules, in the example my main interface is em0.

# we define a main queue (requirement)
queue main on em0 bandwidth 1G

# set a queue for everything
queue normal parent main bandwidth 200K max 200K default

And reload with `pfctl -f /etc/pf.conf` as root. You can monitor the queue working with `systat queue`

QUEUE        BW/FL SCH      PKTS    BYTES   DROP_P   DROP_B QLEN
main on em0  1000M fifo        0        0        0        0    0
 normal      1000M fifo   535424 36032467        0        0   60

More control (per user / protocol) §

This is only a global queuing rule that will apply to everything on the system, but it can be greatly extended for specific needs. For example, I use the program "oasis", a daemon for a peer to peer social network; sometimes it has upload bursts because someone is syncing against my computer, so I use the following rules to limit the upload bandwidth of this user.

# within the queue rules
queue oasis parent main bandwidth 150K max 150K

# in your match rules
match on egress proto tcp from any to any user oasis set queue oasis

Instead of a user, the rule could match a "to" address; I used such rules when I wanted to limit my upload bandwidth while uploading videos through a PeerTube web interface. A sketch of such a setup follows.
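
Here the queue name, rate and destination address are hypothetical:

# within the queue rules
queue videos parent main bandwidth 80K max 80K

# in your match rules: queue by destination instead of user
match on egress proto tcp from any to 203.0.113.10 set queue videos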

How to set a system wide bandwidth limit on Linux systems

Written by Solène, on 06 February 2021.
Tags: #linux #bandwidth

Comments on Mastodon

In these times of remote work / home office, you may have limited bandwidth shared with other people/devices. Not all software provides a way to limit bandwidth usage (package managers, Youtube video players etc...).

Fortunately, Linux has a very nice program that makes it easy to limit your bandwidth in one command. This program is « Wondershaper », which uses the Linux QoS framework usually manipulated with "tc", but it makes setting limits VERY easy.

What are QoS, TC and Filters on Linux

On most distributions, wondershaper will be available as a package under its own name. I found a few distributions that didn't provide it (NixOS at least), and some provide different wondershaper versions.

To know if you have the newer version, "wondershaper --help" should mention the "-d" and "-u" flags; the older version doesn't have them.

Wondershaper requires the download and upload bandwidths to be set in kb/s (kilobits per second, not kilobytes). I personally only know my bandwidth in kB/s, which is 1/8 of its kb/s equivalent. My home connection is 1600 kB/s max in download and 95 kB/s max in upload; I can use wondershaper to limit to 1000 / 50 so it won't affect my other devices on the network much.

# my network device is enp3s0
# new wondershaper
sudo wondershaper -a enp3s0 -d $(( 1000 * 8 )) -u $(( 50 * 8 ))

# old wondershaper
sudo wondershaper enp3s0 $(( 1000 * 8 )) $(( 50 * 8 ))

I use a multiplication to convert from kB/s to kb/s and still keep the command understandable to me. Once a limit is set, wondershaper can be used to clear it and get the full bandwidth available again.

# new wondershaper
sudo wondershaper -c -a enp3s0

# old wondershaper
sudo wondershaper clear enp3s0

There are so many programs that don't allow limiting download/upload speeds; wondershaper's effectiveness and ease of use are a blessing.

Filtering TCP connections by operating system on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Mastodon

Introduction §

In this text I will explain how to filter TCP connections by operating system using OpenBSD Packet filter.

OpenBSD pf.conf man page about OS Fingerprinting

Explanations §

Every operating system constructs its SYN packets in its own way; this is called fingerprinting because it permits identifying which OS sent which packet. To be clear, it's not a perfect filter and can easily be bypassed if you want to.

Because specific packets are required to identify the operating system, only TCP connections can be filtered by OS. The OS list and SYN values can be found in the file /etc/pf.os.
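
You can list the fingerprints known to your system with pfctl:

# pfctl -s osfp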

How to setup §

The keyword "os $value" must be used within the "from $address" keyword. I use it to restrict the ssh connection to my server only to OpenBSD systems (in addition to key authentication).

# only allow OpenBSD hosts to connect
pass in on egress inet proto tcp from any os OpenBSD to (egress) port 22

# allow connections from $home IP whatever the OS is
pass in on egress inet proto tcp from $home to (egress) port 22

This can be a very good way to stop unwanted traffic from spamming logs, but it should be used with caution because you may accidentally block legitimate traffic.

Using pkgsrc on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #pkgsrc

Comments on Mastodon

This quick article will explain how to install pkgsrc packages on an OpenBSD installation. This is something regularly asked on the #openbsd freenode IRC channel. I am not convinced of the relevance of pkgsrc under OpenBSD, but why not :)

I will cover an unprivileged installation that doesn't require root. I will use packages from the 2020Q4 release; I may not update this text regularly, so you will have to adapt to the current quarterly release.

$ cd ~/
$ ftp https://cdn.NetBSD.org/pub/pkgsrc/pkgsrc-2020Q4/pkgsrc.tar.gz
$ tar -xzf pkgsrc.tar.gz
$ cd pkgsrc/bootstrap
$ ./bootstrap --unprivileged

From now on you must add the path ~/pkg/bin to your $PATH environment variable. The pkgsrc tree is in ~/pkgsrc/ and all the relevant files for it to work are in ~/pkg/.

You can install programs by going into the directory of the software you want inside ~/pkgsrc/ and running "bmake install"; for example, in ~/pkgsrc/chat/irssi/ to install the irssi IRC client.
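
For example, to build and install irssi:

$ cd ~/pkgsrc/chat/irssi
$ bmake install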

I'm not sure X11 software compiles well; I tried compiling dbus as a dependency of x11/xterm and got compilation errors, maybe clashing with Xenocara from the base system... I don't really want to investigate more about this though.

Enable multi-factor authentication on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Mastodon

Introduction §

In this article I will explain how to add a bit more security to your OpenBSD system by adding a requirement for users logging into the system, locally or by ssh. I will explain how to set up two-factor authentication (2FA) using TOTP on OpenBSD.

What is TOTP (Time-based One time Password)

When do you want or need this? It adds a burden in terms of usability: in addition to your password, you will need a device pre-configured to generate the one-time passwords; if you don't have it, you won't be able to login (that's the whole point). Let's say you activated 2FA for ssh connections on an important server: if your private ssh key gets stolen (and it has no passphrase, ouch!), the attacker will not be able to connect to the SSH server without access to your TOTP generator.

TOTP software §

Here is a quick list of TOTP software

- command line: oathtool from package oath-toolkit

- GUI and multiplatform: KeepassXC

- Android: FreeOTP+, andOTP, OneTimePass etc. (found on F-Droid)

Setup §

A package is required in order to provide the various necessary programs. The package comes with a README file available at /usr/local/share/doc/pkg-readmes/login_oath with many explanations about how to use it. I will take a lot of information from there for the local login setup.

# pkg_add login_oath

You will have to add a new login class, depending on the kind of authentication you want. You can either allow password OR TOTP, or require password AND TOTP (in the form of TOTP_CODE/password as the password to type). From the README file, add the one you want to use:

# totp OR password
totp:\
        :auth=-totp,passwd:\
        :tc=default:

# totp AND password
totppw:\
        :auth=-totp-and-pwd:\
        :tc=default:

If you have a /etc/login.conf.db file, you have to run cap_mkdb on /etc/login.conf to update it. Most people don't need this; it only helps a bit with performance when you have many rules in /etc/login.conf.
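
If that is your case, it's a single command:

# cap_mkdb /etc/login.conf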

Local login §

Local login means logging in on a TTY, into your X session or anything else requiring your system password. You can then switch the users you want to TOTP by adding them to the corresponding login class with this command.

# usermod -L totp some_user

In the user's home directory, you have to generate a key and give it the correct permissions.

$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 ~/.totp-key

The .totp-key file contains the secret that will be used by the TOTP generator, but most generators will only accept it encoded as base32. You can use the following python3 command to convert the secret into base32.

python3 -c "import base64; print(base64.b32encode(bytes.fromhex('YOUR SECRET HERE')).decode('utf-8'))"
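
To check everything matches, you can generate a code on the command line with oathtool (from the oath-toolkit package listed earlier), passing the base32-encoded secret; the secret below is a placeholder:

$ oathtool --totp -b "YOUR_BASE32_SECRET_HERE"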

SSH login §

It is possible to require your users to use TOTP or a public key + TOTP. When you refer to "password" in ssh, it is the same password as for login: the plain password for a regular user, the TOTP code for users in the totp class, and TOTP/password for users in the totppw class.

This allows fine-grained tuning of login options. The password requirement in SSH can be enabled per user or globally by modifying the file /etc/ssh/sshd_config.

sshd_config man page about AuthenticationMethods

# enable for everyone
AuthenticationMethods publickey,password

# for one user
Match User solene
	AuthenticationMethods publickey,password

Let's say you enabled the totppw class for your user and you use "publickey,password" in AuthenticationMethods in ssh: you will need your ssh private key AND your password AND your TOTP generator.

Even without any TOTP, this SSH setting lets you require users to present their key and their system password in order to login. TOTP only adds more strength to the connection requirements, but also more complexity for people who may not be comfortable with such security levels.

Conclusion §

In this text we have seen how to enable 2FA for local login and for login over ssh. Be careful not to lock yourself out of your system by losing the 2FA generator.

NixOS review: pros and cons

Written by Solène, on 22 January 2021.
Tags: #nixos #linux

Comments on Mastodon

Hello, in this article I would like to share my thoughts about the NixOS Linux distribution. I've been using it daily for more than six months as my main workstation at work and on some computers at home too. I also made modest contributions to the git repository.

NixOS official website

Introduction §

NixOS is a Linux distribution built around the Nix tool. I'll try to explain quickly what Nix is, but if you want more accurate explanations I recommend visiting the project website. Nix is the package manager of the system; Nix can also be used on any Linux distribution on top of the distribution's package manager. NixOS is built from top to bottom with Nix.

This makes NixOS an entirely different system from what one can expect of a regular Linux/Unix system (with the exception of Guix, which shares the same idea with a different implementation). The NixOS system configuration is stateless: most of the system is read-only and most of the paths you know don't exist. The directory /bin only contains "sh", which is a symlink.

The whole system configuration: fstab, packages, users, services, crontab, firewall... is configured from a global configuration file that defines the state of the system.

An example from my configuration file to enable a graphical interface with Mate as the desktop and a French keyboard layout:

services.xserver.enable = true;
services.xserver.layout = "fr";
services.xserver.libinput.enable = true;
services.xserver.displayManager.lightdm.enable = true;
services.xserver.desktopManager.mate.enable = true;

I could add the following lines into the configuration to add auto login into my graphical session.

services.xserver.displayManager.autoLogin.enable = true;
services.xserver.displayManager.autoLogin.user = "solene";

Pros §

There are a lot of pros. The system is really easy to set up; installing a system (for a reinstall or to replicate an installation) is very easy, you only need to get the configuration.nix file from the other/previous system. Everything is very fast to set up, often only a few lines to add to the configuration.

Every time the system is rebuilt from the configuration file, a new grub entry is made, so at boot you can choose which environment to boot into. This makes upgrades or experiments safe and very easy to roll back.
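
For illustration, the usual cycle relies on the standard nixos-rebuild command:

# rebuild and activate the system from the configuration file,
# creating a new boot entry
nixos-rebuild switch

# return to the previous generation if the new one misbehaves
nixos-rebuild switch --rollback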

Documentation! The NixOS documentation is very nice and is part of the code. There is a special man page "configuration.nix" on the system that contains all the variables you can define, the values to expect, the defaults and what each one does. You can literally search for "steam", "mediawiki" or "luks" to get information to configure your system.

All the documentation

Builds are reproducible. I don't consider it a huge advantage but it's nice to have. It allows challenging a package mirror by building packages locally and verifying the mirror provides the exact same package.

It has a lot of packages. I think the NixOS team is pretty happy to share their statistics because, if I got it right, Nixpkgs is the biggest and most up-to-date repository around.

Search for a package

Cons §

When you download a pre-compiled Linux program that isn't statically built, it's a huge pain to make it work on NixOS. The binary will expect some paths to exist at the usual places, but they won't exist on NixOS. There are some tricks to get them to work but it's not always easy. If the program you want isn't in the packages, it may not be easy to use. Flatpak can help to get some programs if they are not packaged though.

Running binaries

It takes disk space: some libraries can exist several times with small compilation differences, and a program can exist in different versions at the same time because previous builds are still available for boot in grub. If you forget to clean them, this takes a lot of disk space.
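
Cleaning is a single standard command though, to be run as root for system generations:

# remove old generations and unreferenced store paths
nix-collect-garbage -d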

The whole system (especially for graphical environments) may not feel as polished as more mainstream distributions that put a lot of effort into branding and customization. NixOS will only install everything, leaving you with a quite raw environment to configure. It's not a real con, but compared to other desktop-oriented distributions, NixOS may not look as good out of the box.

Conclusion §

NixOS is an awesome piece of software. It works very well and I never had any reliability issue with it. Some services like xrdp are usually quite complex to set up, but it worked out of the box for me here.

I see it as a huge Lego© box with which you can automate the building of the super system you want, given you have the schematics of its parts. But once you need a block that isn't in your recipe list, you will have a hard time.

I really put it in its own category: alongside Linux/BSD distributions and Windows, there is the NixOS / Guix category of stateless systems whose configuration is their code.

Vger security analysis

Written by Solène, on 14 January 2021.
Tags: #vger #gemini #security

Comments on Mastodon

I would like to share about Vger's internals with regard to how security was thought out to protect vger users and host systems.

Vger code repository

Thinking about security first §

I claim security as Vger's main feature; I even wrote Vger to have a secure gemini server I can trust. Why so? It's written in C and I'm a beginner developer in this language, this looks like a scam.

I chose to follow the best practices I'm aware of from the very first line. My goal is to be sure Vger can't be used to exfiltrate data from the host on which it runs or be tricked into running arbitrary commands. While I may have missed corner cases in which it could crash, I think a crash is the worst that can happen with Vger.

Smallest code possible §

Vger doesn't have to manage connections or TLS; this design choice already removed a lot of code. There are better tools made exactly for this purpose, so it's time to reuse other people's good work.

Inetd and user §

Vger is run by the inetd daemon, which allows choosing the user running vger. Using a dedicated user is always a good idea to limit harm in case of an issue, but it's really not sufficient on its own to prevent vger from behaving badly.

Another kind of security benefit is that vger's runtime isn't looping like a daemon awaiting new connections. Vger accepts a request, reads the file if it exists, gives the result and terminates. This is less error prone because no variable can be reused or tampered with after a loop that could leave the code in an inconsistent or vulnerable state.

Chroot §

A critical vger feature is the ability to chroot into a directory, meaning the directory is then seen as the root of the file system (/var/gemini would be seen as /), preventing vger from escaping it. In addition to chrooting, vger drops privileges to an unprivileged user.

     /*
      * if a user is specified, use chroot(): this requires the
      * program to run as root, then privileges are dropped
      */
     if (strlen(user) > 0) {

             /* is root? */
             if (getuid() != 0) {
                     syslog(LOG_DAEMON, "chroot requires program to be run as root");
                     errx(1, "chroot requires root user");
             }
             /* search user uid from name */
             if ((pw = getpwnam(user)) == NULL) {
                     syslog(LOG_DAEMON, "the user %s can't be found on the system", user);
                     err(1, "finding user");
             }
             /* chroot worked? */
             if (chroot(path) != 0) {
                     syslog(LOG_DAEMON, "the chroot_dir %s can't be used for chroot", path);
                     err(1, "chroot");
             }
             chrooted = 1;
             if (chdir("/") == -1) {
                     syslog(LOG_DAEMON, "failed to chdir(\"/\")");
                     err(1, "chdir");
             }
             /* drop privileges */
             if (setgroups(1, &pw->pw_gid) ||
                 setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) ||
                 setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid)) {
                     syslog(LOG_DAEMON, "dropping privileges to user %s (uid=%i) failed",
                            user, pw->pw_uid);
                     err(1, "Can't drop privileges");
             }
     }

No use of third party libs §

Vger only requires standard C includes; this avoids placing trust in dozens of developers or in fragile, barely tested code.

OpenBSD specific code §

In addition to all the previous security practices, OpenBSD offers a few functions that greatly restrict what Vger can do.

The first function is pledge, which restricts the system calls that can happen within the code itself. The syscalls currently allowed in vger are in the categories "rpath" and "stdio": basically standard input/output and reading files/directories only. This means that after pledge() is called, if any syscall outside those two categories is used, vger will be killed and a pledge error will be reported in the logs.

The second function is unveil, which restricts filesystem access to only the paths you list, with the permissions you give. Currently, vger only allows read-only file access in the base directory used to serve files.

Here is an extract of the OpenBSD specific code. With unveil available everywhere, chroot wouldn't be required.

 #ifdef __OpenBSD__
         /* 
          * prevent access to files other than the one in path 
          */
         if (chrooted) {
                 eunveil("/", "r");
         } else {
                 eunveil(path, "r");
         }
         /*
          * prevent system calls other than reading files and
          * writing to stdio
          */
         if (pledge("stdio rpath", NULL) == -1) {
                 syslog(LOG_DAEMON, "pledge call failed");
                 err(1, "pledge");
         }
 #endif

The least code before dropping privileges §

I did my best to use the least code possible before reducing Vger's capabilities. Only the code managing the parameters runs before chroot and/or unveil/pledge are activated.

int
main(int argc, char **argv)
{
     char            request  [GEMINI_REQUEST_MAX] = {'\0'};
     char            hostname [GEMINI_REQUEST_MAX] = {'\0'};
     char            uri      [PATH_MAX]           = {'\0'};
     char            user     [_SC_LOGIN_NAME_MAX] = "";
     int             virtualhost = 0;
     int             option = 0;
     char           *pos = NULL;

     while ((option = getopt(argc, argv, ":d:l:m:u:vi")) != -1) {
             switch (option) {
             case 'd':
                     estrlcpy(chroot_dir, optarg, sizeof(chroot_dir));
                     break;
             case 'l':
                     estrlcpy(lang, "lang=", sizeof(lang));
                     estrlcat(lang, optarg, sizeof(lang));
                     break;
             case 'm':
                     estrlcpy(default_mime, optarg, sizeof(default_mime));
                     break;
             case 'u':
                     estrlcpy(user, optarg, sizeof(user));
                     break;
             case 'v':
                     virtualhost = 1;
                     break;
             case 'i':
                     doautoidx = 1;
                     break;
             }
     }

     /* 
      * do chroot if a user is supplied run pledge/unveil if OpenBSD 
      */
     drop_privileges(user, chroot_dir); 

The Unix way §

Unix is made of small components that can work together like bricks to build something more complex. Vger is based on this idea by delegating the listening daemon handling incoming requests to another software (say relayd or haproxy). Once you delegate TLS, what's left of the gemini specs is to take a request and return some content, which is well suited for a program accepting a request on its standard input and giving the result on standard output. Inetd is key here to make such a program compatible with a daemon like relayd or haproxy: when a connection is made to the TLS listening daemon, it is forwarded to a local port that triggers inetd, which runs the command and passes the network content to the binary on its stdin.

Fine grained CGI §

CGI support was added to allow Vger to serve dynamic content instead of only static files. It has fine-grained control: you can allow a single file to be executable as a CGI, or a whole directory of files. When serving a CGI, vger forks, a pipe is opened between the two processes, and the child process uses execlp to run the CGI and transmit its output to vger.

Using tests §

From the beginning, I wrote a set of tests to be sure that once a kind of request or use case works, I can easily check I won't break it. This isn't about security but about reliability. When I push a new version to the git repository, I am absolutely confident it will work for the users. It was also an invaluable help for writing Vger.

As vger is a simple binary that accepts data on stdin and outputs data on stdout, it is simple to write tests like this. The following example runs vger with a request; as the content is local and within the git repository, the output is predictable and known.

printf "gemini://host.name/autoidx/\r\n" | vger -d var/gemini/

From here, it's possible to build an automatic test by comparing the checksum of the output with the checksum of the known correct output. Of course, when you add a new use case, this requires manually generating the checksum to use as a reference later.

OUT=$(printf "gemini://host.name/autoidx/\r\n" | ../vger -d var/gemini/ -i | md5)
if ! [ $OUT = "770a987b8f5cf7169e6bc3c6563e1570" ]
then
	echo "error"
	exit 1
fi

At this time, vger has 19 use cases in its test suite.

By using the program `entr` and a Makefile to manage the build process, it was very easy to trigger the testing process while working on the source code, letting me run the test suite just by saving my current changes. Any time a .c file is modified, entr triggers a make test command that is displayed in a dedicated terminal.

ls *.c | entr make test

Realtime integration tests? :)

Conclusion §

By using best practices, reducing the amount of code and using only system libraries, I am quite confident about Vger's security. The only real issue could be too many connections leading to a quite high load due to inetd spawning new processes, amounting to a denial of service. This could be avoided by throttling simultaneous connections in the TLS daemon.

If you want to contribute, please do, and if you find a security issue please contact me, I'll be glad to examine the issue.

Free time partitionning

Written by Solène, on 06 January 2021.
Tags: #life

Comments on Mastodon

Lately I wanted to change the way I use my free time. I define my free time as: not working, not sleeping, not eating. So, I estimate it at six hours a day on work days and fourteen hours on non-worked days.

With the year 2020 being quite unusual, I was staying at home most of the time without seeing the time pass. At the end of the year, I started to mix up the lengths of weeks and months, which disturbed me a lot.

For a few weeks now, I have been changing the way I spend my free time. I thought it would be nice to have a few separate activities in the same day to help me realize how time is passing.

Activity list §

Here is the way I chose to distribute my free time. It's not a strict approach, I measure nothing. But I try to keep a simple ratio of 3/6, 2/6 and 1/6.

Recreation: 3/6 §

I spend a lot of my time on recreation. A few activities I've put into recreation:

  • video games
  • movies
  • reading novels
  • sports

Creativity: 2/6 §

These activities require creativity, work and knowledge:

  • writing code
  • reading technical books
  • playing music
  • creating content (texts, video, audio etc..)

Chores: 1/6 §

Yes, obviously this has to be done in your free time... And it's always better to do a bit every day than to accumulate it until you are forced to deal with it.

Conclusion §

I only started a few weeks ago but I really enjoy it. As I said previously, it's not something I strictly apply; it's more a general way to spend my time and not stick to writing code for six hours in a row from after work until going to sleep. I really feel my life is better balanced now and I feel a sense of accomplishment from the few activities done every day.

Questions / Answers §

Some people asked me if I plan in advance how I will spend my time.

The answer is no. I don't plan anything, but when I tend to lose focus on what I'm doing (and this happens often), I think about this time partitioning method, realize it may be time to jump to another activity, and pick something from another category. Now that I think about it, I very often used to do something just because I was bored and lacked ideas of activities to occupy myself; with this list I no longer have that issue.

Toward a simpler lifestyle

Written by Solène, on 04 January 2021.
Tags: #life

Comments on Mastodon

I don't often give my own opinion on this blog but I really feel it is important here.

The matter is about ecology, fair money distribution and civilization. I feel I need to share a bit about my lifestyle, in hope it will have a positive impact on some of my readers. I really think one person can make a change. I changed myself, only by spending a few moments with a member of my family a few years ago. That person never tried to convince me of anything; they simply lived by their own standards without ever offending me. It was simple things, nothing that would make that person a pariah in our society. But I got curious about the reasons and figured it out myself way later; now I understand why.

My philosophy is simple: in modern civilized life where everything goes fast, everyone cares about what others think of them, and communication is constant, step back.

Here are the various statements I follow. They are self-defined; these are not absolute rules.

  • Be yourself and be prepared to own who you are. If you don't have the latest gadget you are not a "has-been"; if you don't live in a giant house, you didn't fail your career; if you don't have a top-notch shiny car, nobody should ever care.
  • Reuse what you have. It's not because a piece of clothing has a little tear that you can't wear it anymore. It's not because an electronic device is old that you should replace it.
  • Opensource is a great way to revive old computers
  • Reduce your food waste to 0 and eat less meat, because feeding the animals we eat requires a huge food production, more than what we finally get back in the meat.
  • Travel less; there is a lot to see around where I live without going to the other side of the planet. Certainly don't go on vacation far from home only to enjoy a beach under the sun. This also means no car if it can be avoided, and if I use a car, why not carpool?
  • Avoid gadgets (electronic devices that bring nothing useful) at all costs. Buy good gear (kitchen tools, workshop tools, furniture etc...) that can be repaired. If possible buy second hand. For non-essential gear, second hand is mandatory.
  • In winter, heat at 19°C maximum, with warm clothes while at home.
  • In summer, no A/C, but external insulation and vines along the house to help cool it down, plus fans and water while wearing light clothes to keep cool.

While some people look for more and more, I seek less. There is not enough for everyone on the planet, so it's important to make sacrifices.

Of course, it is how I am and I don't expect anyone to apply this, that would be insane :)

Be safe and enjoy this new year! <3

Lowtech Magazine, articles about doing things using simple technology

[FR] Why I use OpenBSD

Written by Solène, on 04 January 2021.
Tags: #openbsd #francais

Comments on Mastodon

In this post I will share my feelings about what I like in OpenBSD.

Privacy §

There is no telemetry in OpenBSD, so I don't have to worry about my privacy. As a reminder, telemetry is a mechanism that reports information about the user in order to analyze how the product is used.

Moreover, the system's default is to disable the microphone entirely: unless root intervenes, the microphone records silence (which avoids having to block it through usage permissions). Coming in 6.9, the camera follows the same path and will be disabled by default. To me this is a strong signal about the necessity of protecting the user.

Secure web browsers §

With the security features (pledge and especially unveil) added to the Firefox and Chromium sources, I am more serene about using them daily. Nowadays, using a web browser is almost unavoidable, but browsers have become both extremely complex and poorly controlled. With client-side code execution via JavaScript gaining more and more capabilities, performance and uses, adding a bit of security to the equation was necessary. Although these additions are sometimes a bit inconvenient to use, I am really happy to benefit from them.

With these protections added (by default), the aforementioned browsers cannot browse directories beyond what is necessary for them to work, plus the ~/Downloads/ and /tmp/ folders. Thus, locations like ~/Documents or ~/.gnupg are completely inaccessible, which greatly limits the risk of data exfiltration by the browser.

One could roughly recreate the same feature on Linux using AppArmor, but the integration is extremely complicated (whereas it's the default on OpenBSD) and a bit less effective; it is easier to act at the right time from within the code rather than wrapping the whole program in a set of rules.

PF firewall §

With PF, it is very simple to check the configuration file to understand the rules in place on a server or a desktop computer. Centralizing the rules in one file and the macro system allow writing simple and readable rules.

I use the bandwidth management feature a lot to limit the throughput of some applications that don't offer this setting. It's very important to me since I am not the only user of the network and my connection is rather slow.

On Linux, it is possible to use the programs trickle or wondershaper to set up bandwidth limits; on the other hand, iptables is a nightmare to use as a firewall!

It's stable §

Apart from use on uncommon hardware, OpenBSD is very stable and reliable. I can easily reach two weeks of uptime on my desktop computer with several suspends per day. My OpenBSD servers have been running 24/7 without problems for years.

I rarely exceed two weeks because I have to update the system from time to time to continue development on OpenBSD :)

Little maintenance §

Keeping an OpenBSD system up to date is very simple. I run the commands syspatch and pkg_add -u every day to keep my servers updated. An upgrade every six months is required to move to the next release, but apart from a few specific instructions that may sometimes apply, an upgrade looks like this:

# sysupgrade
[..wait a bit..]
# pkg_add -u
# reboot

Quality documentation §

Installing OpenBSD with full disk encryption is very easy (I should write a post about the importance of encrypting disks and phones).

The official documentation explaining how to install a router with NAT is a perfect step-by-step guide; it's a reference whenever a router needs to be set up.

Every binary in the base system (this doesn't count packages) has documentation, as do their configuration files.

The website, the official FAQ and the man pages are the only resources needed to get by. They are a big chunk, and it's not always easy to find your way around, but everything is there.

If I had to manage for a while without internet, I would much rather be on an OpenBSD system. The man page documentation is usually enough to get by.

Imagine setting up a router doing traffic shaping on OpenBSD or Linux without any documents external to the system. Personally, I choose OpenBSD 100% for that :)

Ease of contribution §

I really love the way OpenBSD handles contributions. I fetch the sources on my system, make my changes, generate a diff file (the difference between before/after) and send it to the mailing list. All of this can be done in a console with tools I already know (git/cvs) and emails.

Sometimes, new contributors may think that the people replying are really unfriendly. **This is not true**. If you send a diff and receive criticism, it already means someone is giving you their time to explain what can be improved. I can understand it may seem harsh to some people, but that's not it at all.

This year, I made a few modest contributions to the OpenIndiana and NixOS projects; it was an opportunity to discover how these projects handle contributions. Both use GitHub and their way of doing things is very interesting, but understanding it requires a lot of work because it's relatively complicated.

OpenIndiana official website

NixOS official website

The contribution method requires a GitHub account, forking the project, cloning the fork locally, creating a branch, making the changes locally, pushing the fork to your GitHub account and using the GitHub web interface to make a "pull request". That's the short version. On NixOS, my first pull request attempt ended up as a request containing six months of commits in addition to my small change. With good documentation and practice this is entirely manageable. This way of working has advantages such as contributor tracking, continuous integration and easier code review, but it is as off-putting as can be for newcomers.

Top quality packages §

My opinion is surely biased here (much more than for the previous points) but I sincerely think OpenBSD packages are of very good quality. Most of them work out of the box with correct default settings.

Packages that require particular instructions come with a "readme" file explaining what is needed, for example creating certain directories with specific permissions or how to upgrade from a previous version.

Even with the lack of contributors and time (in addition to some programs using too many Linuxisms to be easy to port), most major free software is available and works very well.

I take the opportunity of this post to criticize a trend in the Open Source world.

  • programs distributed with flatpak / docker / snap work very well on Linux but are hostile to other systems. They often use Linux-specific features and their build methods are Linux-oriented. This greatly complicates porting these applications to other systems.
  • programs using nodeJS: they sometimes require hundreds or even thousands of libs, and some of them are even a bit wonky. It's really complicated to get these programs working on OpenBSD. Some libs even go as far as embedding rust code or downloading a static binary from a remote server, with no way to build it if needed and without checking whether that binary is available in $PATH. You find incredible aberrations in there.
  • programs requiring git to compile: the build system in the OpenBSD ports tree does its best to stay clean. The user dedicated to building packages has no internet access at all (blocked by the firewall with a default rule) and cannot run git commands to fetch code. There is no reason for a program's build to require downloading code in the middle of the compilation step!

Obviously I understand that these three points exist because they make developers' lives easier, but if you write a program and publish it, it would be really nice to think about non-Linux systems. Don't hesitate to ask on social networks whether someone wants to test your code on a system other than Linux. We love "BSD friendly" developers who accept our patches to improve OpenBSD support.

What I would like to see improve §

There are some areas where I would like to see OpenBSD improve. This list is personal and does not reflect the opinion of OpenBSD project members.

  • Better ARM support
  • Wifi throughput
  • Better performance (but it improves a bit with every release)
  • FFS improvements (after crashes I sometimes end up with files in lost+found)
  • A faster pkg_add -u
  • Hardware video decoding support
  • Better FUSE support with the ability to mount CIFS/samba filesystems
  • More contributors

I am aware of all the work required here, and it's certainly not me who is going to do anything about it. I would like things to improve without complaining about the current situation :)

Unfortunately, everyone knows that OpenBSD evolves through hard work and not by sending wishlists to the developers :)

When you think about what a small team (around 150 developers involved in recent releases) manages to do compared to other major systems, I think we are quite efficient!

[FR] How I publish my blog on several media

Written by Solène, on 03 January 2021.
Tags: #life #blog #francais

Comments on Mastodon

I am often asked how I publish my blog, how I write my texts and how they get published on three different media. This article is the opportunity for me to answer these questions.

For my publications I use the static site generator "cl-yag" that I developed. Its main job is to generate the home and per-tag index files for each distribution medium: HTML for http, gophermap for gopher and gemtext for gemini. After generating the indexes, for every article published in HTML, a converter is called to turn the source file into HTML so it can be read with a web browser. For gemini and gopher, the source article is simply copied with a few metadata added at the top of the file, such as the title, the date, the author and the keywords.

Publishing in these three formats at once from a single source file is a challenge that unfortunately requires sacrifices on the rendering if you don't want to write three versions of the same text. For gopher, I chose to distribute the texts as-is, as text files; the content may be markdown, org-mode, mandoc or something else, but gopher gives no way to tell. For gemini, texts are distributed as .gmi files matching the gemtext type, even though older publications are markdown content. For http, it's simply HTML obtained via a command depending on the input data type.

I recently decided to use the gemtext format by default instead of markdown to write my articles. It certainly has fewer possibilities than markdown, but its rendering contains no ambiguity, whereas the rendering of markdown can vary depending on the implementation and the kind of markdown (tables, no tables? Image syntax? etc...)

When the site generator runs, all indexes are regenerated; for published files, their modification time is compared to the source file's, and if the source is newer the published file is generated again because there was a change. This saves a huge amount of time since my site is getting close to 200 articles, and copying 200 files for gopher, 200 for gemini and running 200 conversion programs for HTML would make generation extremely long.

After generating all the files, the rsync command is used to push the output directories for each protocol to the corresponding server. I use one server for http, two servers for gopher (the main one wasn't especially stable at the time) and one server for gemini.

I added an announcement system for Mastodon by calling the local program "toot" configured with a dedicated account. These changes were not deployed into cl-yag because they are very specific to my personal use. This kind of modification makes me think that a static site generator can be a very personal tool, configured for a hyper specific need, and that it may be difficult for someone else to use it. I decided to publish it back then; I don't know if anyone actively uses it, but at least the code is there for the most adventurous who would want to take a look.

My blog generator can support mixing different source file types to be converted to HTML. This lets me use whatever formatting I want without having to redo everything.

Here are some commands used to convert the input files (the raw articles as I write them) into HTML. You can see that the org-mode to HTML conversion is not the simplest. The cl-yag configuration file is LISP code loaded at runtime, so I can put comments in it but also code if I want, which turns out handy sometimes.

(converter :name :gemini    :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown  :extension ".md"  :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md"  :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd       :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc    :extension ".man"
           :command "cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

When I declare a new article in the configuration file holding the metadata of all publications, I have the option to choose the HTML converter to use if it's not the default one.

;; using the default converter
(post :title "Minimalistic markdown subset to html converter using awk"
      :id "minimal-markdown" :tag "unix awk" :date "20190826")

;; using the mmd converter, a very simple awk script I made to convert a few markdown features to html
(post :title "Life with an offline laptop"
      :id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)

Some statistics about the syntax of my various publications; over http you only see the HTML, but on gopher or gemini you will see the source as-is.

  • markdown :: 183
  • gemini :: 12
  • mandoc :: 4
  • mmd :: 2
  • org-mode :: 1

My blog workflow

Written by Solène, on 03 January 2021.
Tags: #life #blog

Comments on Mastodon

I often get questions about how I write my articles, which format I use and how I publish on various media. This article is the opportunity to highlight the whole process.

So, I use my own static generator cl-yag, which generates indexes for the whole article list but also for every tag, in html, gophermap format and gemini gemtext. After the generation of indexes, for html every article is converted into html by running a "converter" command. For gopher and gemini the original text is picked up, some metadata are added at the top of the file and that's all.

Publishing in all three formats is complicated and sacrifices must be made if I want to avoid extra work (like writing a version for each). For gopher, I chose to distribute articles as simple text files; the content can be markdown, org-mode, mandoc or other formats, you can't know. For gemini, the gemtext format is distributed, and for http it's html.

Recently, I decided to switch to the gemtext format instead of markdown as the main format for writing new texts. It has a bit fewer features than markdown, but markdown has so many implementations that the result can differ greatly from one renderer to another.

When I run the generator, all the indexes are regenerated, and each destination file's modification time is compared to the original file's; if the destination file (the published gopher/html/gemini file) is newer than the original, there is no need to rewrite it, which saves a lot of time. After generation, the Makefile running the program runs rsync to various servers to publish the new directories. One server has gopher and html, another server only gemini, and another has only gopher as a backup.
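
The publish step boils down to a few rsync invocations; this is a sketch, the host names and directory layout are made up for the example:

rsync -a output/html/   webserver:/var/www/htdocs/
rsync -a output/gopher/ gopherserver:/var/gopher/
rsync -a output/gemini/ geminiserver:/var/gemini/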

I added a Mastodon announcement calling a local script to publish links to new publications on Mastodon. This wasn't merged into the cl-yag git repository because the code is too custom and depends on local programs. I think a blog generator is as personal as the blog itself; I decided to publish its code at first, but I am not sure it makes much sense because nobody may have the same mindset as mine to appropriate this tool. At least it's available if someone wants to use it.

My blog software supports mixing input formats, so I am not tied to a specific format for its whole life.

Here are the various commands used to convert a file from its original format to html. One can see that converting from org-mode to html on the command line isn't an easy task. As my blog software is written in Common LISP, the configuration file is also a valid Common LISP file, so I can write some code in it if required.

(converter :name :gemini    :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown  :extension ".md"  :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md"  :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd       :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc    :extension ".man"
           :command "cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

When I define a new article to generate from a main file holding the metadata, I can specify the converter if it's not the default one configured.

;; using default converter
(post :title "Minimalistic markdown subset to html converter using awk"
      :id "minimal-markdown" :tag "unix awk" :date "20190826")

;; using mmd converter, a simple markdown to html converter written in awk
(post :title "Life with an offline laptop"
      :id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)

Some statistics about the various formats used in my blog.

  • markdown :: 183
  • gemini :: 12
  • mandoc :: 4
  • mmd :: 2
  • org-mode :: 1

Port of the week: Lagrange

Written by Solène, on 02 January 2021.
Tags: #portoftheweek #gemini

Comments on Mastodon

Today's Port of the Week is about Lagrange, a Gemini browser.

Lagrange official website

Information about the Gemini protocol

Curated list of Gemini clients

Lagrange is the finest browser I ever used and it's still brand new. I imported it into OpenBSD, so it will be available starting from the OpenBSD 6.9 release.

Screenshot of the web browser in action with dark mode, it supports left and right side panels.

Lagrange is fantastic in the way it helps the user with the content browsed.

  • Links already visited display the last visited date
  • Subscribing to pages without RSS is possible for pages respecting a specific format (most of the gemini space does)
  • Easy management of client certificates, used for authentication
  • In-page image loading, video watching and sound playing
  • Gopher support
  • Table of contents generated from headings
  • Keyboard navigation
  • Very light (dependencies, memory footprint, cpu usage)
  • Smooth scrolling
  • Dark and light modes
  • Much more

If you are interested in Gemini, I highly recommend this piece of software as a browser.

In case you would like to host your own Gemini content without maintaining infrastructure, some community servers offer hosting through secure sftp transfers.

Si3t.ch community Gemini hosting

Un bon café !

Once you get into Gemini space, I recommend the following resources:

CAPCOM feed aggregator, a great place to meet new authors

GUS: a search engine

Vger gemini server can now redirect

Written by Solène, on 02 January 2021.
Tags: #gemini

Comments on Mastodon

I added a new feature to Vger gemini server.

Vger git repository

The protocol supports status codes including redirections, but Vger had no way to know if a user wanted to redirect a page to another. A redirection literally means "You asked for this content but it is now at that place, load it from there".

To keep it in line with vger's Unix way, a redirection is done using a symbolic link:

The following command would redirect requests from gemini://perso.pw/blog/index.gmi to gemini://perso.pw/capsule/index.gmi:

ln -s "gemini://perso.pw/capsule/index.gmi" blog/index.gmi

Unfortunately, this doesn't support globbing; in other words, it is not possible to redirect everything from `/blog/` to `/capsule/` without creating a symlink for every previous resource pointing to its new location.
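
A shell loop can create the symlinks in bulk though. A hypothetical sketch, assuming the files were moved from /var/gemini/blog/ to /var/gemini/capsule/:

cd /var/gemini/capsule
for f in *.gmi; do
        ln -s "gemini://perso.pw/capsule/$f" "../blog/$f"
done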

Host your Cryptpad web office suite with OpenBSD

Written by Solène, on 14 December 2020.
Tags: #web #openbsd

Comments on Mastodon

In this article I will explain how to deploy your own cryptpad instance with OpenBSD.

Cryptpad official website

Cryptpad is a web office suite featuring easy real-time collaboration on documents. Cryptpad is written in JavaScript and the daemon acts as a web server.

Pre-requisites §

You need to install the packages git, node, automake and autoconf to be able to fetch the sources and run the program.

# pkg_add node git autoconf--%2.69 automake--%1.16

Another web front-end will be required to allow TLS connections and secure network access to the Cryptpad instance. This can be relayd, haproxy, nginx or lighttpd. I'll cover the setup using httpd and relayd. Note that Cryptpad developers provide support only to Nginx users.

Installation §

I really recommend using dedicated users for daemons. We will create a new user with the command:

# useradd -m _cryptpad

Then we will continue the software installation as the `_cryptpad` user.

# su -l _cryptpad

We will mainly follow the official instructions, with some exceptions to adapt them to OpenBSD:

Official installation guide

$ git clone https://github.com/xwiki-labs/cryptpad
$ cd cryptpad
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install bower
$ node_modules/.bin/bower install
$ cp config/config.example.js config/config.js

Configuration §

There are a few important variables to customize:

  • "httpUnsafeOrigin" should be set to the public address on which cryptpad will be available. This will certainly be a HTTPS link with an hostname. I will use https://cryptpad.kongroo.eu
  • "httpSafeOrigin" should be set to a public address which is different than the previous one. Cryptpad requires two different addresses to work. I will use https://api.cryptpad.kongroo.eu
  • "adminEmail" must be set to a valid email used by the admin (certainly you)

Make a rc file to start the service §

We need to automatically start the service properly with the system.

Create the file /etc/rc.d/cryptpad

#!/bin/ksh

daemon="/usr/local/bin/node"
daemon_flags="server"
daemon_user="_cryptpad"
location="/home/_cryptpad/cryptpad"

. /etc/rc.d/rc.subr

rc_start() {
	${rcexec} "cd ${location}; ${daemon} ${daemon_flags}"
}

rc_bg=YES
rc_cmd $1

Enable the service and start it with rcctl

# rcctl enable cryptpad
# rcctl start cryptpad

Operating §

Make an admin account §

Register yourself on your Cryptpad instance then visit the *Settings* page of your profile: copy your public signing key.

Edit the Cryptpad file config.js and search for the pattern "adminKeys". Uncomment it by removing the "/* */" around it, delete the example key and paste your key as follows:

adminKeys: [
    "[solene@cryptpad.kongroo.eu/YzfbEYwZq6Xhl7ET6AHD01w3QqOE7STYgGglgSTgWfk=]",
],

Restart Cryptpad, the user is now admin and has access to a new administration panel from the web application.

Backups §

In the cryptpad directory, you need to back up the `data` and `datastore` directories.

Extra configuration §

In this section I will explain how to generate your TLS certificate with acme-client and how to configure httpd and relayd to publish cryptpad. I consider it an aside to the current article because if you already have nginx and a setup to generate certificates, you don't need it. If you start from scratch, it's the easiest way to get the job done.

Acme client man page

Httpd man page and

Relayd man page

From here, I assume you use OpenBSD and have blank configuration files.

I'll use the domain **kongroo.eu** as an example.

httpd §

We will use httpd in a very simple way. It will only listen on port 80 for all domains, to allow acme-client to work and also to automatically redirect http requests to https.

# cp /etc/examples/httpd.conf /etc/httpd.conf
# rcctl enable httpd
# rcctl start httpd

acme-client §

We will use the example file as a default:

# cp /etc/examples/acme-client.conf /etc/acme-client.conf

Edit `/etc/acme-client.conf` and change the last domain block: replace `example.com` and `secure.example.com` with your domains, like `cryptpad.kongroo.eu` with `api.cryptpad.kongroo.eu` as an alternative name.

For convenience, you will want to replace the path for the full chain certificate to have `hostname.crt` instead of `hostname.fullchain.pem` to match relayd expectations.

This looks like this paragraph on my setup:

domain kongroo.eu {
        alternative names { api.cryptpad.kongroo.eu cryptpad.kongroo.eu }
        domain key "/etc/ssl/private/kongroo.eu.key"
        domain full chain certificate "/etc/ssl/kongroo.eu.crt"
        sign with buypass
}

Note that with the default acme-client.conf file, you can use *letsencrypt* or *buypass* as a certification authority.

acme-client.conf man page

You should be able to create your certificates now.

# acme-client kongroo.eu

Done!

You will want the certificate to be renewed automatically and relayd to restart upon certificate change. As stated by acme-client.conf man page, add this to your root crontab using `crontab -e`:

~ * * * * acme-client kongroo.eu && rcctl reload relayd

relayd §

This configuration is quite easy, replace `kongroo.eu` with your domain.

Create a /etc/relayd.conf file with the following content:

relayd.conf man page

tcp protocol "https" {
        tls keypair kongroo.eu
}

relay "https" {
        listen on egress port 443 tls
        protocol https
        forward to 127.0.0.1 port 3000
}

Enable and start relayd using rcctl:

# rcctl enable relayd
# rcctl start relayd

Conclusion §

You should be able to reach your Cryptpad instance using the public URL now. Congratulations!

Kakoune editor cheatsheet

Written by Solène, on 02 December 2020.
Tags: #kakoune #editor #cheatsheet

Comments on Mastodon

This is a simple kakoune cheat sheet to help me (and readers) remember some very useful features.

To see kakoune in action:

Video showing various features, made with asciinema.

Official kakoune website (it has a video)

Commands (in command mode) §

Select from START to END position. §

Use `Z` to mark the start and `alt+z i` to select until the current position.

Add a vertical cursor (useful to mimic rectangle operation) §

Type `C` to add a new cursor below your current cursor.

Clear all cursors §

Type `space` to remove all cursors except one.

Pasting text verbatim (without completion/indentation) §

You have to disable hooks before inserting text. This is done with `\i`, where `\` disables hooks.

Split selection into cursors §

When you make a selection, you can use `s` and type a pattern, this will create a new cursor at the start of every pattern match.

This is useful to make replacements for words or characters.

A pattern can be a word, a letter, or even `^` to tell the beginning of each line.

How-to §

In kakoune there are often multiple ways to do an operation.

Select multiples lines §

Multiples cursors §

Go to the first line, press `J` to create cursors below and press `X` to select the whole line of every cursor.

Using start / end markers §

Press `Z` on the first line, `alt+z i` on the last line, then press `X` to select all the whole lines.

Using selections §

Press `X` until you reach the last line.

Replace characters or words §

Make a selection and type `|`; you are then asked for a shell command, for example `sed`.

Sed can be used, but you can also select the lines and split the selection with the `s` command to make a new cursor before each word, then replace the content by typing it.

Format lines §

For my blog I format paragraphs so lines are not longer than 80 characters. This can be done by selecting lines and running `fmt` through a pipe command. You can use other software if fmt doesn't please you.

How to deploy Vger gemini server on OpenBSD

Written by Solène, on 30 November 2020.
Tags: #gemini #openbsd

Comments on Mastodon

Introduction §

In this article I will explain how to install and configure Vger, a gemini server.

What is the gemini protocol

Short introduction about Gemini: it's a very recent protocol that is deliberately simplistic and limited. Key features are: pages written in a markdown-like format, mandatory TLS, no headers, UTF-8 encoding only.

Vger program §

Vger source code

I wrote Vger to discover the protocol and the Gemini space. I had a lot of fun with it; it was the opportunity for me to rediscover the C language with a better approach. The sources include a full test suite, which was invaluable for the development process.

Vger was really built with security in mind from the first lines of code; it now offers the following features:

  • chroot and privilege dropping, and on OpenBSD it uses unveil/pledge all the time
  • virtualhost support
  • language selection
  • MIME detection
  • handcrafted man page, OpenBSD quality!

The name Vger is a reference to the 1979 first Star Trek movie.

Star Trek: The Motion Picture

Install Vger §

Compile vger.c using clang or gcc

$ make
# install -o root -g bin -m 755 vger /usr/local/bin/vger

Vger receives requests on stdin and returns the result on stdout. It doesn't take the given hostname into account, but a request MUST start with `gemini://`.

vger official homepage

Setup on OpenBSD §

Create the directory /var/gemini/; files will be served from there.

Create the `_gemini` user:

useradd -s /sbin/nologin _gemini

Configure vger in /etc/inetd.conf

11965 stream tcp nowait _gemini /usr/local/bin/vger vger

Inetd will run vger with the _gemini user. You need to take care that /var/gemini/ is readable by this user.

inetd is a wonderful daemon listening on ports and running commands upon connections. This means that when someone connects to port 11965, inetd runs vger as _gemini and passes the network data to its standard input; vger sends the result to its standard output, which inetd captures and transmits back to the TCP client.

Tell relayd to forward connections in relayd.conf

log connection
relay "gemini" {
    listen on 163.172.223.238 port 1965 tls
    forward to 127.0.0.1 port 11965
}

Make links to the certificate and key files according to the relayd.conf documentation. You can use acme / certbot / dehydrated or any "Let's Encrypt" client to get certificates. You can also generate your own certificates, but that's beyond the scope of this article.

# ln -s /etc/ssl/acme/cert.pem /etc/ssl/163.172.223.238\:1965.crt
# ln -s /etc/ssl/acme/private/privkey.pem /etc/ssl/private/163.172.223.238\:1965.key

Enable inetd and relayd at boot and start them

# rcctl enable relayd inetd
# rcctl start relayd inetd

From here, what's left is populating /var/gemini/ with the files you want to publish. The `index.md` file is special: it is served by default when no file is requested.
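A first page can be as simple as this sketch (the content is up to you):

# echo '# Hello Gemini space' > /var/gemini/index.md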

About Language Server Protocol and Kakoune text editor

Written by Solène, on 24 November 2020.
Tags: #kakoune #editor #openbsd

Comments on Mastodon

In this article I will explain how to install an LSP plugin for kakoune to add language-specific features such as autocompletion, syntax error reporting, easier navigation to definitions and more.

The principle is to use the "Language Server Protocol" (LSP) to communicate between the editor and a daemon specific to a programming language. This can also be done with emacs, vim and neovim using the corresponding plugins.

Language Server Protocol on Wikipedia

For python, _pyls_ would be used while for C or C++ it would be _clangd_.

The how-to will use OpenBSD as a base. The package names will certainly vary for other systems.

Pre-requisites §

We need _kak-lsp_, which requires rust and cargo. We also need git to fetch the sources, and obviously kakoune.

# pkg_add kakoune rust git

Building §

Official building steps documentation

I recommend using a dedicated build user when building programs from source: without a real audit you can't know exactly what happens during the build process, and a mistake could do nasty things with your data.

$ git clone https://github.com/kak-lsp/kak-lsp
$ cd kak-lsp
$ cargo install --locked --force --path .

Configuration §

There are a few steps: kak-lsp has its own configuration file, but the default one is good enough; kakoune then must be configured to run the kak-lsp program when needed.

Take care with the second command: if you built from another user, you have to fix the path.

$ mkdir -p ~/.config/kak-lsp
$ cp kak-lsp.toml ~/.config/kak-lsp/

This configuration file tells which program must be used depending on the programming language.

[language.python]
filetypes = ["python"]
roots = ["requirements.txt", "setup.py", ".git", ".hg"]
command = "pyls"
offset_encoding = "utf-8"

Taking the configuration block for python, we can see the command used is _pyls_.

For kakoune configuration, we need a simple configuration in ~/.config/kak/kakrc

eval %sh{/usr/local/bin/kak-lsp --kakoune -s $kak_session}
hook global WinSetOption filetype=(rust|python|go|javascript|typescript|c|cpp) %{
        lsp-enable-window
}

Note that I used the full path of kak-lsp binary in the configuration file, this is due to a rust issue on OpenBSD.

Link to Rust issue on github

Trying with python §

To support python programs you need to install python-language-server, which is available through pip; there is no package for it on OpenBSD. If you install the program with pip, take care to have the binary in your $PATH (either by extending $PATH to ~/.local/bin/, by copying the binary to /usr/local/bin/, or whatever suits you).

The pip command would be the following (your pip binary name may change):

$ pip3.8 install --user 'python-language-server[all]'

Then, opening a python source file should activate the analyzer automatically. If you introduce a mistake, you should see `!` or `*` in the leftmost column.

Trying with C §

To support C programs, the clangd binary is required. On OpenBSD it is provided by the clang-tools-extra package. If clangd is in your $PATH then you should have working support.
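The default kak-lsp.toml should already know about clangd; as a sketch mirroring the python block above (not copied from the actual file), the entry would look something like:

[language.c_cpp]
filetypes = ["c", "cpp"]
roots = ["compile_commands.json", ".git", ".hg"]
command = "clangd"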

Using kak-lsp §

Now that it is installed and working, you may want to read the documentation.

kak-lsp usage

I didn't dig deep for now; the autocompletion triggers automatically but may be slow in some situations.

Default keybindings "gr" and "gd" are mapped respectively to "jump to reference" and "jump to definition".

Typing "diag" in the command prompt runs "lsp-diagnostics" which will open a new buffer explaining where errors are warnings are located in your source file. This is very useful to fix errors before compiling or running the program.

Debugging §

The official documentation explains well how to check what is wrong with the setup. It consists of starting kak-lsp in a terminal and kakoune separately, and checking the kak-lsp output. This helped me a lot.

Official troubleshooting guide

[7th floor] Nethack story of Sery the tourist

Written by Solène, on 24 November 2020.
Tags: #nethack #gaming

Comments on Mastodon

Sery is back on the fourth floor of the underworld. What mysteries are to be discovered? What enemies will be slain so we can make our path?

Everything is awesome

Sery is on the fourth floor. She found stairs to go deeper, but she also heard coins flipping. Maybe a merchant is around? That would be the right opportunity to buy weapons, armor and food.

               --------------
               |............|
              #.@...........+
              #|............|
              #|..>...$.....|
              #--------------
              ###
                #
                ##
                 #
                 #
                 #
                 #
         -- -----#
              <  #
         |      |
         |      |
         --------

Walking south-east, she found a large room with a hobbit statue h and a potion on the floor. The potion was not identified, so using it would be very risky.

The large room was a dead end. Back in the previous room, Sery was now surrounded by enemies. A gas spore e, a green mold F and a giant bug :! She also felt hungry at that moment, but she had to fight. Eggs and pancakes would be for another time.

           --------------
           |.F..........|
          #.:.....@..e..-#
          #|............|#
          #|..>...d.....|#
          #--------------#
          ###            #

While fleeing toward the ascending stairs to search for a merchant on this floor, a gecko blocked the way. Sery had to fight with her fists, and fortunately the gecko didn’t put up much resistance. But a few steps later, a goblin was also in the path. Sery’s dog’s location was unknown; it was certainly fighting in the previous room. With only 2 HP left, Sery decided to drink a potion to recover and go back to the room, hoping the dog could help her.

It worked! The dog was just behind and charged the goblin, which died instantly. The starving dog ate the freshly killed goblin. Sery was hungry too but preferred eating some pancake that wasn’t fresh; it tasted better than the remaining goblin meat tin can she had in her purse.

                               --------------
                               |            |
                              #.............-#
                              #|            |#
      ---------------         #|  >         |#
      .........o....|         #--------------#
      |.............|         ###            #
      |.......$....@d##         #            #
      --------------- ###       ##           #
                        #        #           #
                        #        #   `##################
                        #        #           #--------- --
                        #        #           #|         h|
                        #-- -----#           #|          |
                        #     <  #           #           |
                         |      |             |          |
                         |      |             |          |
                         --------             ------------

On her first steps in the room, she found a graffiti on the ground:

Atta?king a? ec| vhere the?c is rone i? usually a ?a?al mistakc!

The message didn’t make any sense. The room had a goblin statue and some gold on the ground; that’s all Sery needed to know. The room was calm and nothing happened while crossing it. Sery seemed to be blessed!

        -----
        |....##  
        |@..| ###
        -----   #

Nearby she found a very small room with no other way than the entrance. This looked very suspicious and she decided to spend some time looking around for a clue about a secret door. She was right! A few minutes after she started to search, she found a hidden door! The door was not locked, which was surprising. Who knows what was waiting on the other side?

After walking a bit in a small and dark corridor, a new room was there, with an empty box along a wall and a grave in a corner on the opposite side of the room.

             -----
             |    ##                           --------------
            #-   | ###                         |            |
            #-----   #                        #             -#
            ##       #                        #|            |#
             ##      #---------------         #|  >         |#
              ##     #         o    |         #--------------#
      ---------#      |             |         ###            #
      |.......|#      |              ##         #            #
      |........#      --------------- ###       ##           #
      |.......|                         #        #           #
      |(@......                         #        #   `##################
      |......||                         #        #           #--------- --
      ---------                         #        #           #|         h|
                                        #-- -----#           #|          |
                                        #     <  #           #           |
                                         |      |             |          |
                                         |      |             |          |
                                         --------             ------------

The large box was locked! Without a lock pick she wasn’t able to open it. After all she had been through in the dungeon, anger gave her the strength to break the box padlock after a few kicks.

The box contained the following objects:

  • a pyramidal amulet
  • a food ration
  • a black gem
  • two green gems

She still had some room in her bag, and it wasn’t too heavy for now, so she decided to take everything from the box.

Kicking the box consumed energy, so she decided to rest a little and eat something. The food ration from the box looked very tasty, but it might be poisoned or toxic, so she avoided it and ate the goblin meat from the tin can. It wasn’t good, but it did the job.

She looked at the grave; it was old and only had words engraved on it, which read:

Yes Dear, just a few more minutes…

A corridor in the room led to a dead end. There was nothing. Even after searching for a long time, Sery didn’t find any passage there, so she decided to go back and descend to the next floor.

On the way back, she had to fight monsters: a newt, a sewer rat, a gas spore! After the fights, hunger was back again! It was time for a good meal: goblin meat and a food ration. It hit the spot and Sery felt a lot better.

Fifth floor

On the fifth floor, a potion ! was lying on the ground. There was some light, so it wasn’t completely dark; without a lamp or a torch, darkness would be a real problem.

    ---------
    |.......+
    |.......|
    |@......|
    |..d.!..|
    |........
    ------- -

In a corridor leading to a room in the south, she had to kill a coyote on the way. The room had a teleportation trap and an apple %, food!

Going east, she walked through a long corridor until a dead end. After searching for some time, she found a way to squeeze her body through a hole and get to the other side. A boulder was in the tunnel, but she was able to push it; fortunately the boulder rolled fine.

    ---------
    |       +
    |       |
    |<      |
    |       |
    |        
    ------- -
           #
           #
           ##
            #
            ##
             #
             #      #           #                    ##
          --- ------#           #             #      @
          |         #################################`
          |    ^   |
          ----------

Sery found a new room with two potions and a gnome. It was hard for Sery to know if the gnome was hostile.

                -.--|--
                +..!G.|
       #        |...!.|
        ########d@....|
        #       |.....|
    ####`       -------

The dog got triggered by the gnome’s presence and ran to fight it. The gnome was definitely hostile. Sery quickly ended up in hand-to-hand combat with the gnome.

The camera’s flash! She thought it should work, after all the camera still had forty seven pictures to take, or enemies to blind.

It worked, the poor creature got blinded, the dog was biting its back. After a few hits, the gnome died, leaving a bow on the ground.

Continuing her way, Sery found the room with the descending stairs. A homunculus i and a sewer rat r were waiting there. She knew the rat was an easy target, but the other enemy was unknown. It didn’t appear friendly and she doubted she could kill it without risking her life.

    ---------
    |       +                                               -------------
    |       |                                               |...........|
    |<      |                                               -....>!.....|
    |       |                                               |...........|
    |                                                       ....i....r..|
    ------- -                                               -- -------@--
           #                                                         ##
           #                                                       ###
           ##                                                    ###
            #                                                - --)--
            ##                                               +     |
             #                                      #        |  )  |
             #      #           #                    ########      |
          --- ------#           #             #      #       |     |
          |         #################################`       -------
          |    ^   |
          ----------

Sery decided to go back to the long corridor which had crossing paths.

    ---------
    |       +                                               -------------
    |       |                                               |           |
    |<      |                                               -    >!     |
    |       |                                               |           |
    |                                                                   |
    ------- -                                               -- ------- --
           #                                                         ##
           #                                                       ###
           ##                                                    ###
            #                                                -.--|--
            ##                                      #########i@....|
             #                                #######        |..)..|
             #      #           #             #      ########......|
          --- ------#           #             #      #       |)....|
          |         #################################`       -------
          |    ^   |
          ----------

The homunculus was fast! It found Sery back where they had met. Sery was in trouble. The homunculus seemed hard to escape, and while she was fleeing through a corridor, a dwarf zombie Z blocked the way.

She tried to fight it, but she lost 9 HP in 2 hits; the beast was very powerful. It was time to drink the random potions she had gathered over the journey. They were unidentified, but there was no choice, except praying maybe.

Praying! Sery wasn’t a believer, but praying was the best she could do. Her prayer was deep and pure; she only wanted some hope for her future and her quest.

The Lady heard her prayer, and Sery got surrounded by a shimmering light. The dwarf zombie attacked Sery but got pushed back by some energy field. Sery felt a lot better; her health was fully recovered and even increased.

                #########-.....|
          #######        |..)..|
          #      #Z@#####......|
          #      #       |)....|
        #########`       -------

Sery got a second chance, and she certainly wanted to make good use of it. At this moment, the only thought in her mind was: RUN AWAY

She did run, very fast, to the stairs leading deeper. No enemies troubled her retreat.

Sixth floor

No time to look in the room she arrived, Sery got attacked by a brown mold, which in turn was killed by her dog.

    ------
    |....|
    |....|
    |.d@.|
    |....|
    |....|
    |....|
    --.---

The room had only one exit, to the south. Finding a merchant was becoming urgent; her food supplies were running out. She had a lot of money, but that is not helpful deep underground among the monsters.

In the south room there was a lichen F, but it seemed peaceful, or maybe it was guarding the stairs descending to the seventh floor, who knows? The room had no other entrance than the one by which Sery came, but after examining the walls, she found a door.

     ------
     |    |
     |    |
     |  < |
     |    |
     |    |
     |    |
     -- ---
       ####
          #
          #
          ##
      ----- -      -----
      |     |     |....|
      |.F...-#####@....|
      |>    |     |....|
      -------      .!...
                   -----

Nothing unusual on this floor. Continuing her progress through the tunnels, she ended up in a dark room where she wasn’t able to see further than a meter away.

     ------
     |    |              -------------
     |    |             |          .d|
     |  < |            #-          .@|
     |    |            #----       -.-
     |    |            #
     |    |            ##
     -- ---             #
       ####             #
          #             #
          #             #
          ##            #
      ----- -     ------#
      |     |     |    |#
      |     -#####     |#
      |>    |     |    |#
      -------     |     #
                  ------

One more step and she came face to face with a homunculus. Fortunately the dog was just behind and not fighting any other aggressive animals. The dog killed it fast. But then another homunculus came, which also got killed by the dog.

In the end, those homunculi are pretty weak.

Room after room, with only emptiness as a friend, Sery walked for a long time. And then he appeared! The merchant!

     ------
     |    |              -------------                                      ------
     |    |             |            |                                      |????|
     |  < |            #-            |                                      |????|
     |    |            #----       - -                                      |???+|
     |    |            #            ##                                      |??+?|
     |    |            ##            #                                      |+??+|
     -- ---             #            #                                      |.@.
       ####             #        ---- -#                                    -@-
          #             #        |    -#                                     #
          #             #        |    |      |            -- ------        ###
          ##            #        |    -######|                    |        #
      ----- -     ------#        |    |     #|            |                #
      |     |     |    |#        |  <      ##         #### `      |        #
      |     -#####     |#        ------    ######     #   |        ###### - ----
      |>    |     |    |#                       #######   |     _ |     # |    |
      -------     |     #                                 |       |     ##     |
                  ------                                  ---------       ------

He was a bookseller, selling scrolls… Sery was so disappointed that she felt helpless for a moment.

FuguITA: OpenBSD live-cd

Written by Solène, on 18 November 2020.
Tags: #openbsd

Comments on Mastodon

In this article I will explain how to download and run the FuguITA OpenBSD live-cd. It is not an official OpenBSD project (it is not endorsed by the OpenBSD project), but it has been available for a long time and is carefully updated after every release and published errata.

FuguITA official homepage

I do like this project and I am running their European mirror; downloading it from Europe used to take really long.

Please note that if you have issues with FuguITA, you must report it to the FuguITA team and not report it to the OpenBSD project.

Preparing §

Download the img or iso file on a mirror.

Mirror list from official project page

The file is gzipped; run gunzip on the img file FuguIta-6.8-amd64-202010251.img.gz (the name may change over time because the images get updated to include new errata).
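In practice:

$ gunzip FuguIta-6.8-amd64-202010251.img.gz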

Then, copy the file to your usb memory stick. This can be dangerous if you don't write the file to the correct disk!

To avoid mistakes, I plug in the memory stick when I need it, then I check the last lines of the dmesg command output, which look like:

sd1 at scsibus2 targ 1 lun 0: <Corsair, Voyager 3.0, 1.00> removable serial.1b1c1a03800000000060
sd1: 15280MB, 512 bytes/sector, 31293440 sectors

This tells me my memory stick is the sd1 device.

Now I can copy the image to the memory stick:

# dd if=FuguIta-6.8-amd64-202010251.img of=/dev/rsd1c bs=10M

Note that I use /dev/rsd1c for the sd1 device. I've added a r to use the raw mode (as opposed to buffered mode) so it gets faster, and the c stands for the whole disk (there is a historical explanation).

Starting the system §

Boot on your usb memory stick. You will be prompted for a kernel; you can wait or type enter. The default is to use the multiprocessor kernel and there is no reason to use anything else.

You will see a prompt "scanning partitions: sd0i sd1a sd1d sd1i" and be asked which device is the FuguIta operating device, with a proposed default that should be the correct one.

FROM HERE, YOUR KEYBOARD IS IN QWERTY.

Just type enter.

The second question will be the memory disk allowed size (using TMPFS), just press enter for "automatic".

Then, a boot mode will be shown: the best is mode 0 for a livecd experience.

Official documentation in regards to FuguITA specifics options

You will be asked for the keyboard type; just type the layout you want. Then answer the questions:

  • root password
  • hostname (you can just press enter)
  • IP to use (v4, v6, both [default])

When prompted for your network interfaces, WIFI may not work because the livecd doesn't have any firmware.

Finally, you will be prompted for C for console or X for xenodm. THERE IS NO USER except root, so if you start X you can only use root as a user, which I STRONGLY discourage.

You can log in on the console as root, use the two commands "useradd -m username" and "passwd username" to create a user with a password, and then start xenodm.
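As a sketch of the sequence, from the root console (pick your own user name):

# useradd -m username
# passwd username
# xenodm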

The livecd can restore data from a local hard drive; this is explained in the FuguITA start guide.

Conclusion §

Having FuguITA around is very handy. You can use it to check your hardware compatibility with OpenBSD without installing it. Packages can be installed so it's perfect to check how OpenBSD performs for you and if you really want to install it on your computer.

You can also use it as a usb live system to transport OpenBSD anywhere (the hardware must be compatible) by using the persistent mode, encryption being a feature! This may be very useful for people who travel a lot and don't necessarily want to travel with an OpenBSD laptop.

As I said in the introduction, the team is doing a very good job at producing FuguITA releases shortly after each OpenBSD release, and they continuously update every release with new errata.

Why I use OpenBSD

Written by Solène, on 16 November 2020.
Tags: #openbsd #life

Comments on Mastodon

Introduction §

In this article I will share my opinion about things I like in OpenBSD, including a short rant about recent open source practices that don't help non-Linux support.

Features §

Privacy §

There is no telemetry on OpenBSD. It's good for privacy: there is nothing to turn off to disable reporting, because nothing reports information in the first place.

The default system settings prevent the microphone from recording sound, and the webcam can't be accessed without user consent because the device belongs to root by default.
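For the curious, opting back in looks something like this sketch (from memory, double-check the FAQ before relying on it):

# sysctl kern.audio.record=1   # allow the microphone to actually record
# chown username /dev/video0   # hand the webcam device to your user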

Secure firefox / chromium §

While the security features added to the market-dominating web browsers (pledge and mainly unveil) can sometimes be cumbersome, this is really a game changer compared to using them on other operating systems.

With those security features enabled (by default), the web browsers are only able to access files in a few user-defined directories like ~/Downloads or /tmp/, plus some other directories required for the browsers to work.

This means your ~/.ssh or ~/Documents and everything else can't be read by an exploit in a web browser or a malicious extension.

It's possible to replicate this on Linux using AppArmor, but it's absolutely not out of the box and requires a lot of tweaks from the user to get a usable Firefox. I did try; it worked, but it requires a very good understanding of Firefox's needs and of the AppArmor profile syntax.

PF firewall §

With this firewall, I can quickly check the rules of my desktop or server and understand what they are doing.

I also use the bandwidth management feature a lot, to throttle programs which don't provide any rate limiting of their own. This is very important to me.

Linux users could use software such as trickle or wondershaper for this.
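As an illustration, a minimal pf.conf sketch throttling outgoing HTTPS traffic to 1 Mb/s; the interface name and the rates are made up:

queue main on em0 bandwidth 10M
queue std parent main bandwidth 9M default
queue slow parent main bandwidth 1M max 1M
match out on em0 proto tcp to port 443 set queue slow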

It's stable §

Apart from some funky hardware, OpenBSD has proven to be very stable and reliable for me. I can easily reach two weeks of uptime on my desktop with a few suspends/resumes every day. My servers have been running 24/7 without incident for years.

I rarely go further than two weeks on my workstation because I use the development version -current and I need to upgrade once in a while.

Low maintenance §

Keeping my OpenBSD up to date is very easy: I run syspatch and pkg_add -u twice a day. A release upgrade every six months requires a bit of work.

Basically, upgrading every six months looks like this, apart from some specific instructions explained in the upgrade guide (a database server major upgrade for example):

# sysupgrade
[..wait..]
# pkg_add -u
# reboot

Documentation is accurate §

Setting up an OpenBSD system with full disk encryption is easy.

Documentation to create a router with NAT is explained step by step.

Every binary or configuration file has its own up-to-date man page.

The FAQ, the website and the man pages should contain everything one needs. This represents a lot of information, it may not be easy to find what you need, but it's there.

If I had to be without internet for some time, I would prefer an OpenBSD system. The embedded documentation (man pages) should help me achieve what I want.

Consider configuring a router with traffic shaping on OpenBSD and another one with Linux, both without Internet access. I'd 100% prefer reading the PF man page.

Contributing is easy §

This has been a hot topic recently. I really enjoy the way OpenBSD manages contributions. I download the sources on my system, anywhere I want, modify them, generate a diff and send it to the mailing list. All of this can be done from a console with tools I already use (git/cvs) and email.

There could be an entry barrier for new contributors: you may feel people replying are not kind to you. **This is not true.** If you sent a diff and received criticism (reviews) of your code, it means some people spent time teaching you how to improve your work. I do understand some people may find it rude, but it's not.

This year I modestly contributed to the OpenIndiana and NixOS projects; this was the opportunity to compare how contributions are handled. Both projects use github. The workflow is interesting, but understanding and mastering it is quite complicated.

OpenIndiana official website

NixOS official website

One has to make a github account, fork the project, create a branch, make the changes, commit locally, push to the fork, and use the github interface to open a merge request. And this is only the short story. On NixOS, my first attempt ended in a pull request involving 6 months of old commits. With good documentation and training this can be overcome, and I think this method has some advantages, like easy continuous integration of the commits and easy code review, but it's a real entry barrier for new people.

High quality packages §

My opinion may be biased on this (even more than for the previous items), but I really think the quality of OpenBSD packages is very high. Most packages should work out of the box with sane defaults.

Packages requiring specific instructions come with a README file explaining how to set up the service and what quirks to expect.

Even if we lack some packages due to a lack of contributors and time (in addition to some packages relying too much on Linux to be easy to port), major packages are up to date and working very well.

I will take the opportunity of this article to publish a complaint about some general trends in open source:

  • programs distributed only as flatpak / docker / snap are really Linux friendly, but this is hostile to non-Linux systems. They often make use of Linux-only features, and their build systems are made for the Linux distribution methods.
  • nodeJS programs: they are made out of hundreds or even thousands of libraries and are often fragile even on Linux. It is a real pain to get them working on OpenBSD. Some node libraries embed rust programs, some will download a static binary and use it with no fallback solution, and some will even try to compile source code instead of using that library/binary from the system when installed.
  • programs using git to build: our build process does its best to be clean, and the dedicated build user **HAS NO NETWORK ACCESS** so it won't run those git commands. There is no reason a build system has to run git to download sources in the middle of the build.

I do understand that the three items above exist because it is easy for developers. But if you write software and publish it, it would be very kind of you to think about how it works on non-Linux systems. If you want to improve support, don't hesitate to ask on social media whether someone is willing to build your software on a platform different from yours. We do love BSD-friendly developers who won't reject OpenBSD-specific patches.

What I would like to see improved §

This is my own opinion and doesn't represent the opinions of the OpenBSD team members. There are some things I wish OpenBSD could improve:

  • Better ARM support
  • Better performance (gently improving every release)
  • FFS improvements in regards to reliability (I often get files in lost+found)
  • Faster pkg_add -u
  • hardware video decoding/encoding support
  • better FUSE support and mount cifs/smb support
  • scaling up the contributions (more contributors and reviewers for ports@)

I am aware of all the work required here, and I'm certainly not the person who will improve those. These are not complaints but wishes.

Unfortunately, everyone knows OpenBSD features come from hard work and not from wishes submitted to the developers :)

When you consider how small the team is in comparison to the other major OSes, I really think a good and efficient job is done there.

Toward an automated tracking of OpenBSD ports contributions

Written by Solène, on 15 November 2020.
Tags: #openbsd #automation

Comments on Mastodon

Since my previous article about a continuous integration service to track OpenBSD ports contributions, I made a simple proof of concept that allowed me to identify what works and what doesn't.

The continuous integration goal §

A first step for the CI service would be to create a database of diffs sent to ports. This would allow people to track what has been sent and not yet committed, and what the state of each contribution is (builds/doesn't build, applies/doesn't apply). I would proceed following this logic:

  • a mail arrive and is sent to the pipeline
  • it's possible to find a pkgpath out of the file
  • the diff applies
  • distfiles can be fetched
  • portcheck is happy

Step 1 is easy: it could be mails dumped into a directory that gets scanned every X minutes.

Step 2 is already done in my POC using a shell script. It's quite hard and required tuning. Submitted diffs are made with diff(1), cvs diff or git diff. The important part is to retrieve the pkgpath, like "lang/php/7.4". This allows testing that the port exists.
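A naive sketch of the idea, handling only diffs with a cvs-style "Index:" header relative to the ports tree (real submissions need the tuning mentioned above):

# take the path from the first "Index:" header and strip the file name
pkgpath=$(awk '/^Index: / { print $2; exit }' contribution.diff | sed 's,/[^/]*$,,')
test -d "/usr/ports/$pkgpath" && echo "valid pkgpath: $pkgpath"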

Step 3 is important, I found three cases so far when applying a diff:

  • it works: we can then register in the database that it can be used for a build
  • it doesn't work: human investigation required
  • the diff is already applied and patch thinks you want to reverse it: it's already committed!

Being able to check if a diff is applied is really useful. When building the contributions database, a daily check of patches that are known to apply can be done. If a reverse patch is detected, this means it has been committed, and the entry can be deleted from the database. This would be rather useful for keeping the database clean automatically over time.
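With patch(1) and its -C dry-run flag, the check could look like this sketch:

cd /usr/ports
# still applies? the contribution is pending
patch -C -p0 < contribution.diff && echo "pending"
# applies in reverse? it has been committed
patch -C -R -p0 < contribution.diff && echo "committed"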

Step 4 is an inexpensive extra check to be sure the distfiles can be downloaded over the internet.

Step 5 is also an inexpensive check: running portcheck can report easy-to-fix mistakes.

All the steps only require a ports tree. Only step 4 could be abused by someone malicious, using a patch to make the system download huge files or files with legal concerns, but that message would also appear on the mailing list, so the risk is quite limited.

To go further in the automation, building the port is required, but it must be done in a clean virtual machine. We could then report into the database whether the diff produced a package correctly and, if not, provide the compilation log.

Automatic VM creation §

Automatically creating an OpenBSD-current virtual machine was tricky but I've been able to sort this out using vmm, rsync and upobsd.

The script downloads the latest sets using rsync; that directory is served by a local web server. I use upobsd to create an automatic installation bsd.rd including my autoinstall file. Then it gets tricky :)

vmm must be started with its storage disk AND the bsd.rd; as it's an auto install, the VM would reboot after the install finishes and then install again and again.

I found that using the parameter "-B disk" makes the VM shut down after installation, for some reason. I can then wait for the VM to stop and start it again without bsd.rd.

My vmm VM creation sequence:

upobsd -i autoinstall-vmm-openbsd -m http://localhost:8080/pub/OpenBSD/
vmctl stop -f -w integration
vmctl start -B disk -m 1G -L -i 1 -d main.qcow2 -b autobuild_vm/bsd.rd integration
vmctl wait integration
vmctl start -m 1G -L -i 1 -d main.qcow2 integration

The whole process is long though. A derived qcow2 image could be used after creation to test each port faster, until we want to update the VM again.
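Something along these lines, creating a throwaway disk backed by the freshly installed image (the file names are mine):

vmctl create -b main.qcow2 throwaway.qcow2
vmctl start -m 1G -L -i 1 -d throwaway.qcow2 integration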

Multiple VMs could be used at once for parallel testing, making good use of the host resources.

What's done so far §

I'm currently able to deposit emails as files in a directory and run a script that extracts the pkgpath, tries to apply the patch, downloads distfiles, runs portcheck and runs the build on the host using PORTS_PRIVSEP. If the port compiles fine, the email file is deleted, and a proper diff is made from the port and moved into a staging directory where I review the diffs known to work.

The script stops on blocking errors and writes a short text report for each port. I intended to send this as a reply to the mailing list at first, but maintaining a parallel website for people working on ports seems a better idea.

The Nethack story of Sery the tourist

Written by Solène, on 15 November 2020.
Tags: #nethack #gaming

Comments on Mastodon

First episode of maybe a series!

Let’s play NetHack and write a story along the way. I find nethack to be a wonderful game despite its quite simple graphics. In this game, you can do more actions than in any modern game. I can dip a towel in a fountain to make it wet, and wear it on my head. Maybe it would protect me from heat? Who knows.

As this leaves a lot of room for imagination, in every serious nethack game I play I create a story in my head and try to imagine the various situations, so why not write them down?

Welcome to the underworld Gehennom, you will read the story of Sery the human female neutral tourist and her dog. She has to find the Amulet of Yendor and come back to the surface, for some reason.

@ is Sery and d is her dog.

Arrival - first floor

{ is a fountain, # a sink, - an open door and + a closed door.

In her inventory, she has 875 gold (tourists are rich!), 24 darts to throw at enemies, 2 fortune cookies, various food (goblin meat in a tin can, eggs, a carrot, an apple, pancakes…), 4 scrolls of magic mapping, 2 healing potions, an expensive camera and an uncursed credit card.

       ---+---------
       |......{....-
       |@.........#|
       |d..........|
       -------------

She went to the closed door but it resisted; after kicking it three times, the door opened! Walking around in the tunnels, she only found empty rooms leading to other tunnels.

# are corridors (when they are not sinks in a room).

                             --------
                            #   ..  |
                            #|  ..  |
                            #|  ..  |
                            #---|----
                            #   ##
                          ###########
                          ##     #
                          #      #
                          #      #
          ----------|---###   ##d@##
          |             #     # ###
          |            |      #---.---------
          |            -#######|..... {    -
          |            |       |<....     #|
          |            |       |.....      |
          --------------       -------------

At the end of a corridor, Sery was stuck, but after searching around for some secret passage, she found a hidden passage to the first room. Back to square one.

                             --------
                            #       |
                            #|      |
                            #|      |
                            #---|----
                            #   ##
                          ###########
                          ##     #           # #
                          #      #       #######
                          #      #       #   #
          ----------|---###   ############  #d
          |             #     # ###         @
          |            |      #--- ---------#
          |            -#######|      {....-#
          |            |       |<   ......#|
          |            |       |   ........|
          --------------       -------------

After she heard some noise in a corridor, she stumbled on a boulder ` but it was impossible to move it to clear the corridor.

A new room was found, with a large box ( in it. What could be in this box?

           ------
           |....|
         ##d.@..+
        ###|....|
        ## |....|
        ##`|.(..|
        #  |....|
        #  ------

While walking toward the box, her dog suddenly disappeared, falling through a trap door! Sery shortened her exploration of the first level, just opening the box before going to look for her dog.

The large box was locked; without a weapon or tools to unlock it, Sery kicked it a dozen times until it opened. What a disappointment when she saw it was empty!

Second floor

            ----------
            |......@.|
            .........|
            |........|
            |....>...|
            |.....$..|
            ----------

Sery jumped into the trap to descend to the level below; her dog wasn’t in the room though. There were five gold to loot and stairs descending to the third level. She needed to find her dog before continuing to the third level.

In the adjacent corridor, the dog was found safe and sound!

After continuing the exploration, a room was found with enemies!

F lichen, o goblin and a : newt! That was a lot of enemies for a simple tourist. She wanted to pull them into a corridor and let her dog take care of them. A good Spartan strategy after all!

                                ----------
                                |        |
                               #         |
                               #|        |
                               #|    >   |
                               #|        |
                               #----------
                               #
                               #
         --------              #
         |.......              #
         .......F|      -------#
         |:....o.@d#####......|#
         |.......|      |      #
         |.......       |     |
         |......        |     |
         -------        -------

Unfortunately, when a lichen is in contact with you, you can’t escape. It took a while for Sery to kill the lichen and retreat into the corridor; she received a few hits from the lichen and the goblin (HP 6/10). She heard some noises while staying in the corridor; after coming back into the room, the dog had finished killing the newt and the goblin seemed to have run away.

             -------- 
             |.....o. 
             ........|
             |.....d.@
             |.......|
             |....... 
             |......  
             -------  

The dog then attacked the goblin and killed it rather quickly. It was really fortunate that Sery was in the company of her dog.

After walking a bit to continue the exploration, Sery stumbled on a sewer rat; she got hit rather hard and didn’t have much HP left! While retreating to the last room, looking for the dog who had stayed back eating the goblin corpse, the dog came back to her bringing an iron skull cap, certainly found on the dead goblin. In one bite, the dog killed the rat.

After some rest to recover a few HP, Sery went back to exploring. The exploration was quiet and easy: rooms with unlocked doors, and she found the stairs going up. Nothing of interest was to be found, so it was time to go to the third level. A newt and a lichen were encountered in the corridors but offered little resistance to the dog.

    ---------                                                   ----------
    |       |                                                   |........|
    |       |       ----------                                 #.........|
    |       |       |        |                                 #|.d..@...|
    |       |       |        |                                 #|F...>...|
    |       |       |        |                                 #|........|
    - -|--- -#   ###-        |                                 #----------
      ### ####  ##  |        |                                 #
       #  `##`###   --- ------                                 #
       ###     ###    ##                 ---------             #
         #####  #     #####              |       |             #
    ---------|-##      ######          ##        |      -------#
    |         |#      -- ---|-----     # |       -######      |#
    |         |#      |          |   ### |       |      |      #
    |         |#      |          |   #   |       |      |     |
    |         -#      |           ####   |       |      |     |
    | <       |       ------------       ---------      -------
    -----------

Third floor

The room where Sery arrived on the third level had an enemy, a huge bug x, and some money in a corner near a door.

                      --------------
                      |...@........|
                      |....d.......|
                      ....x.......$|
                      |............+
                      --------------

The door required two kicks to be opened.

In the next room, Sery saw a bug before entering, so she immediately swapped places with her dog in the corridor to let her defender do its job.

< are stairs going up.

                      --------------
                      |   <        |
                      |            |
                                   |
                      |             ##
                      -------------- #
                                     ##    --+-
                                      ##d@.x..|
                                            .$|
                                              .
                                              -

As usual, the dog took care of the enemies. A new room was found with multiple exits; some openings in previous rooms weren’t explored yet either. There was a lot of exploration to be done in this area.

                                   --------
                                   |......+
                                   |......|
                                   +>.{...|
              --------------       |......|
              |   <        |       |....@.|
              |            |       -----.--     ...
                           |        ######
              |             ##       #####
              -------------- #       #
                             ##   ---|-
                              ####    |
                                  |   |
                                  |    
                                  -----

While exploring, Sery got to fight a giant rat; she didn’t know where her dog was, so she had to fight for real this time.

                                                           --------
             ----                                          |      +
             ....                                          |      |
              ..                     ######################-> {   |
               r                     #--------------       |      |
              #@#####                #|   <        |       |      |
              #     #              ###|            |       ----- --        
                    ##             ###             |        ######
                     #            ##  |             ##       #####
                     #            ##  -------------- #       #
                     #             #                 ##   ---|-
                     ##        #####                  ####    |
                    #- ------  ####                       |   |
                     +      |  #                          |    
                     | >     ###                          -----
                     |      |###
                     |      |
                     --------

Thinking about her inventory, she panicked and used her camera. The flash blinded the giant rat and it ran away! Unfortunately, another giant rat came from the left corridor. She tried to use her camera again, but it didn’t work as expected: the giant rat kept standing in the corridor. The blinding effect didn’t seem very effective, because a few seconds later the first giant rat was back again!

      ----     
      ....     
       ..      
        r      
       r@##### 
       #     # 
         ##

She had no choice but to run away, or at least fight them one at a time in a corridor. She went backward, suffered a giant rat bite, and found her dog on the way, who came to the rescue. While she let her dog fight, a third rat came from behind; this one she really had to fight, as no escape was possible with the dog fighting two rats in the corridor on the other side.

Camera flash: it worked! Time to throw darts; one dart was enough to kill the rat, but she missed it a few times. The rat never missed a bite, and Sery was in poor health at this point.

The dog killed the two rats and she was safe, for now.

While walking around to find her way, she got surprised by a giant zombie Z who hit her hard. She had only 1 health point left. Death was close. What could she do? Try the camera flash, drink a potion, flee until her dog could run in and bite the zombie?

She decided to drink the healing potion and then take the zombie’s hits while blinding it, the dog behind it slaying the undead. It was a good idea: the moment she drank the healing potion, the zombie hit her for one health point; she would have been dead had she not drunk that potion. Then the dog killed the monster and our duo leveled up!

It was time to finish exploring and get deeper into the underworld. A ring = was on the ground in the last room. It was a silver ring.

                                                             --------
               --------------                                |      +
              #.            |                                |      |
              #|            |          ######################-> {   |
              #-- -----------          #--------------       |      |
              #########                #|   <        |       |      |
                #     #              ###|            |       ----- --        
                #     ##             ###             |        ######
     -----------#      #            ##  |             ##       #####
     |.......=@.#      #            ##  -------------- #       #
     |.........|       #             #                 ##   ---|-
     |.........        ##        #####                  ####    |
     |....`....|      #- ------  ####                       |   |
     |..  .....|       +      |  #                          |    
     ---  ------       | >     ###                          -----
                       |      |###
                       |      |
                       --------

It would be foolish to wear the ring without identifying it first: it could be a cursed ring you can’t remove, one that makes you blind or provokes some other unwanted effect.

Fourth floor

Arriving on the fourth floor, Sery found a green gem. Feeling this floor would be quite complicated, she decided to read one of her mapping scrolls.

       -------
      --     |                                                    ---  ---    ---
      |  --  |           ------                       --- ----   -- ---- --  -- --
      | -|-- |           |  | ---                    -- ---  --  |        ----   |
      |  --| |           |      ----                --        |  |        >      |
      |   || ----------  --      | --------------- --         |  ---             |
      | | ||          -------        | --      | ---         --    -- ---        --
      | |--|  -------     ---                                | ---- --- --        |
      | |  | --     ---                                      | |  |---- --       --
      | -- | --       -------     ----       --  - --        ---  --  | |       --
     --  --|  |             |    --  |       |--   --- ---            ---       |
     |    |-- |             ---  |   --     -| ---  --------                    |
     |    | | ---------       ----    |      --  --      --|            ---     |
     | -- | |.....--.@--             --       |   ------   |-- --      -- |     |
     ---| | ----.......|        ------        |        |-  | ---|-    --  |     |
       -- |   --......-|       --  |         --        |   ---  |    --   --   --
     ---  |  --........|      --             |         |     |  |  ---     -----
    --   --  |.........|      |         -- ---         --    |  ----
    |   --   |......--.|      |     --  |---            ---  |
    --  |    --.|.------      ---- ------                 ----
     ----     -----              ---

After the whole map was revealed in her mind, she came face to face with a dwarf h wielding a dagger. He really didn’t seem friendly, but he didn’t attack her yet.

The whole area was very dark, without a torch or a light source, exploring this level would be very tedious.

While she was exploring the room, looking for interesting loot on the ground, the dwarf attacked her. It was a very dolorous stabbing. Sery retreated to the up stairs; she wanted to reach the level below through the other stairs on this level. In the room, she found her dog, which had stayed behind fighting a gecko and a giant rat.

She started to feel hungry; fortunately she had come to the underworld with a lot of food. She decided to eat a fortune cookie. When cracking it, she found a paper saying: They say that you should never introduce a rope golem to a succubus. This didn’t make much sense to her though.

While walking toward the other stairs, Sery found a graffiti on the ground: ??urist? we?r shirts loud enougn to wake t?e ?e?d.. As for the fortune cookie, this didn’t make much sense.

On her way, she fought various enemies (a red mold, a newt, rats) and found a banana. Descending the stairs, she was surprised to see they didn’t lead to the fourth floor with the dwarves; it was a parallel fourth floor. Could it be possible?? There were a newt and money in the room, and it wasn’t dark.

             -- -----
             .....@..
             |....d.|
             |...:.$|
             --------

She was angry.

The dog jumped on the newt and killed it. The duo got enough experience to reach level four. The dog, formerly a little dog, grew up into a dog.

After a short rest to eat and recover health, Sery went back in corridors to find a way and continue her quest.

                   --------------
                   |............|
                  #.@...........+
                  #|............|
                  #|..>...$.....|
                  #--------------
                  ###
                    #
                    ##
                     #
                     #
                     #
                     #
             -- -----#
                  <  #
             |      |
             |      |
             --------

In the room she found stairs going to the level below. Would it be a good idea to descend now, or should she explore the area first? She had a lot of money; finding a merchant to buy armor and weapons would be a good idea.

To be continued

It’s all for today! Please tell me if you enjoyed it!

Full featured Slackware email server with sendmail and cyrus-imapd

Written by Solène, on 14 November 2020.
Tags: #slackware #email

Comments on Mastodon

This article is about making your own mail server using the Slackware linux distribution, sendmail and cyrus-imap. I made this choice because I really love Slackware and I also enjoy non-mainstream stacks. While everyone would recommend postfix/dovecot, I prefer sendmail/cyrus-imap. Please note this article contains ironic statements; I will try to write them with some emphasis.

While some people use fossil fuel cars, some people use Slackware.

If you are used to clean, reproducible and automated deployments, the present how-to is the total opposite. This is the /Slackware/ way.

Slackware

Slackware is one of the oldest linux distributions out there (maybe the oldest along with Debian) and it's still usable. The last release (14.2) is 4 years old, but there are still security updates. I chose to use the development branch slackware-current for this article.

I discovered an alternative to Windows in the early 2000s when a friend showed me a « Linux » magazine featuring Slackware installation CDs and the installation instructions. It was my very first contact with Linux and open source ever. I used Slackware multiple times over the years, and it was always a great system for me on my main laptop.

Slackware could be summed up as "not changing much" and "quite limited". Slackware never changes much between releases; from 2010 to 2020, it's pretty much the same system when you use it. I say it's rather limited package-wise: the default Slackware installation requires something like 15 GB on your disk because it bundles KDE and all the KDE apps, a bunch of editors (emacs, vim, vs, elvis), and lots of compilers/interpreters (gcc, llvm, ada, scheme, python, ruby etc.). While it provides a LOT of things out of the box, you really get all Slackware can offer. If something isn't in the packages, you need to install it yourself.

Full Disk Encryption or nothing

I recommend to EVERYONE the practice of using full disk encryption (phone, laptop, workstation, servers). If your system gets stolen, you will only lose hardware when you use full disk encryption.

Without encryption, the thief can access all your data forever.

Slackware provides a file README_CRYPT.txt explaining how to install on an encrypted partition. Don’t forget to tell the bootloader LILO about the initrd, and keep in mind the initrd must be recreated after every kernel upgrade.
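
For reference, the lilo.conf line declaring the initrd typically looks like the following (the exact path may differ on your system, check README_CRYPT.txt), and lilo must be rerun after the initrd is regenerated:

initrd = /boot/initrd.gz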

Use ntpd

It’s important to have a correct time on your server.

# chmod +x /etc/rc.d/rc.ntpd
# /etc/rc.d/rc.ntpd start

Disable ssh password authentication

In /etc/ssh/sshd_config there are two changes to make:

Turn UsePAM yes into UsePAM no and add PasswordAuthentication no.
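
The two resulting lines in sshd_config should look like this:

UsePAM no
PasswordAuthentication no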

Changes can be applied by restarting ssh with /etc/rc.d/rc.sshd restart.

Before enabling this, don’t forget to deploy your public key to a user who is able to become root.

Get a SSL certificate

We need an SSL certificate for the infrastructure, so we will install certbot. Unfortunately, certbot-auto doesn’t work on Slackware because the system is unsupported. So we will use pip and call certbot in standalone mode so we don’t need a web server.

# pip3 install certbot
# certbot certonly --standalone -d mydomain.foobar -m usernam@example

My domain being kongroo.eu the files are generated under /etc/letsencrypt/live/kongroo.eu/.

Configure the DNS

Four DNS entries have to be added for a working email server.

  1. SPF to tell the world which addresses have the right to send your emails
  2. MX to tell the world which addresses will receive the emails and in which order
  3. DKIM (a public key) to allow recipients to check your emails really come from your servers (signed using a private key)
  4. DMARC to tell recipients what to do with mails not respecting SPF

SPF

Simple: add an entry with v=spf1 mx if you want to allow your MX servers to send emails. Basically, for simple setups, the same server receives and sends emails.

@ 1800 IN SPF "v=spf1 mx"

MX

My server with the address kongroo.eu will receive the emails.

@ 10800 IN MX 50 kongroo.eu.

DKIM

This part will be a bit more complicated. We have to generate a pair of public and private keys and run a daemon that will sign outgoing emails with the private key, so recipients can verify the email signatures using the public key available in the DNS. We will use opendkim; I found this very good article explaining how to use opendkim with sendmail.

Opendkim isn’t part of the Slackware base packages; fortunately it is available in slackbuilds, you can check my previous article explaining how to set up slackbuilds.

# groupadd -g 305 opendkim
# useradd -r -u 305 -g opendkim -d /var/run/opendkim/ -s /sbin/nologin \
    -c  "OpenDKIM Milter" opendkim
# sboinstall opendkim

We want to enable opendkim at boot, as it’s not a service from the base system, so we need to “register” it in rc.local and enable both.

Add the following to /etc/rc.d/rc.local:

if [ -x /etc/rc.d/rc.opendkim ]; then
  /etc/rc.d/rc.opendkim start
fi

Make the scripts executable so they will be run at boot:

# chmod +x /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.opendkim

Create the key pair:

# mkdir /etc/opendkim
# cd /etc/opendkim
# opendkim-genkey -t -s default -d kongroo.eu

Get the content of default.txt, we will use it as the content for a TXT entry in the DNS. Select only the content between parentheses, without the double quotes: your DNS tool (like on Gandi) may take everything without warning, which would produce an invalid DKIM signature. Been there, done that.

The file should look like:

default._domainkey      IN      TXT     ( "v=DKIM1; k=rsa; t=y; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB" )

But the content I used for my entry at gandi is:

v=DKIM1; k=rsa; t=y; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB

Now we need to configure opendkim to use our keys. Edit /etc/opendkim.conf to change the following lines already there:

Domain                  kongroo.eu
KeyFile /etc/opendkim/default.private
ReportAddress           postmaster@kongroo.eu

DMARC

We have to set up DMARC; this may help our emails being accepted by big corporate mail servers.

_dmarc.kongroo.eu.   IN TXT    "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@kongroo.eu;"

This will tell recipients that we don’t give specific instructions on what to do with suspicious mails from our domain, and to send the reports to postmaster@kongroo.eu. Expect a daily mail from every mail server reached that day to arrive at that address.

Install Sendmail

Unfortunately the Slackware team dropped sendmail in favor of postfix in the default install. This may be a good thing, but I want sendmail. Good news: sendmail is still in the extra directory.

I wanted to use citadel but it was really complicated, so I went with sendmail.

Installation

Download the two sendmail txz packages on a mirror in the “extra” directory: https://mirrors.slackware.com/slackware/slackware64-current/extra/sendmail/

Run /sbin/installpkg on both packages.

Configuration

We will disable postfix.

# sh /etc/rc.d/rc.postfix stop
# chmod -x /etc/rc.d/rc.postfix

Enable sendmail and saslauthd

# chmod +x /etc/rc.d/rc.sendmail
# chmod +x /etc/rc.d/rc.saslauthd

All the configuration will be done in /usr/share/sendmail/cf/cf, we will use a default template from the package. As explained in the cf files, we need to use a template and rebuild from this directory containing all the macros.

# cp sendmail-slackware-tls-sasl.mc /usr/share/sendmail/cf/cf/config.mc

Every time we want to rebuild the configuration file, we need to apply the m4 macros to have the real configuration file.

# sh Build config.mc
# cp config.cf /etc/mail/sendmail.cf

My config.mc file looks like this (I stripped the comments):

include(`../m4/cf.m4')
VERSIONID(`TLS supporting setup for Slackware Linux')dnl
OSTYPE(`linux')dnl
define(`confCACERT_PATH', `/etc/letsencrypt/live/kongroo.eu/')
define(`confCACERT', `/etc/letsencrypt/live/kongroo.eu/cert.pem')
define(`confSERVER_CERT', `/etc/letsencrypt/live/kongroo.eu/fullchain.pem')
define(`confSERVER_KEY', `/etc/letsencrypt/live/kongroo.eu/privkey.pem')
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
define(`confTO_IDENT', `0')dnl
FEATURE(`use_cw_file')dnl
FEATURE(`use_ct_file')dnl
FEATURE(`mailertable',`hash -o /etc/mail/mailertable.db')dnl
FEATURE(`virtusertable',`hash -o /etc/mail/virtusertable.db')dnl
FEATURE(`access_db', `hash -T<TMPF> /etc/mail/access')dnl
FEATURE(`blocklist_recipients')dnl
FEATURE(`local_procmail',`',`procmail -t -Y -a $h -d $u')dnl
FEATURE(`always_add_domain')dnl
FEATURE(`redirect')dnl
FEATURE(`no_default_msa')dnl
EXPOSED_USER(`root')dnl
LOCAL_DOMAIN(`localhost.localdomain')dnl
INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@localhost')
MAILER(local)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
define(`confAUTH_OPTIONS', `A p y')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
TRUST_AUTH_MECH(`LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtps, Name=MSA-SSL, M=Esa')dnl
LOCAL_CONFIG
O CipherList=ALL:!ADH:!NULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:-LOW:+SSLv3:+TLSv1:-SSLv2:+EXP:+eNULL

Create the file /etc/sasl2/Sendmail.conf with this content:

pwcheck_method:saslauthd

This will tell sendmail to use saslauthd for PLAIN and LOGIN connections. Any SMTP client will have to use either PLAIN or LOGIN.

If you start sendmail and saslauthd, you should be able to send e-mails with authentication.
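
As a quick sanity check of the smtps listener declared in the DAEMON_OPTIONS above (port 465), assuming the openssl command is available, you can open a TLS connection and you should see the certificate details followed by the sendmail 220 banner:

$ openssl s_client -connect kongroo.eu:465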

We need to edit /etc/mail/local-host-names to tell sendmail for which domain it should accept local deliveries.

Simply add your email domain:

kongroo.eu

The mail logs are located under /var/log/maillog; every mail sent and correctly signed with DKIM should appear under a line like this:

[time] [host] sm-mta[2520]: 0AECKet1002520: Milter (opendkim) insert (1): header: DKIM-Signature:  [whole signature]

Configure DKIM

This has been explained in a subsection of the sendmail configuration. If you skipped that step because you don’t want to set up DKIM, you missed information required for the next steps.

Install cyrus-imap

Slackware ships with dovecot in the default installation, but cyrus-imapd is available in slackbuilds.

The bad news is that the slackbuild is outdated, so here is a simple patch to apply in /usr/sbo/repo/network/cyrus-imapd. This patch also fixes a compilation issue.

diff --git a/network/cyrus-imapd/cyrus-imapd.SlackBuild b/network/cyrus-imapd/cyrus-imapd.SlackBuild
index 48e2c54e55..251ca5f207 100644
--- a/network/cyrus-imapd/cyrus-imapd.SlackBuild
+++ b/network/cyrus-imapd/cyrus-imapd.SlackBuild
@@ -23,7 +23,7 @@
 #  ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

 PRGNAM=cyrus-imapd
-VERSION=${VERSION:-2.5.11}
+VERSION=${VERSION:-2.5.16}
 BUILD=${BUILD:-1}
 TAG=${TAG:-_SBo}

@@ -107,6 +107,8 @@ CXXFLAGS="$SLKCFLAGS" \
   $DATABASE \
   --build=$ARCH-slackware-linux

+sed -i'' 's/gettid/_gettid/g' lib/cyrusdb_berkeley.c
+
 make PERL_MM_OPT='INSTALLDIRS=vendor'
 make install DESTDIR=$PKG

diff --git a/network/cyrus-imapd/cyrus-imapd.info b/network/cyrus-imapd/cyrus-imapd.info
index 99b2c68075..6ae26365dc 100644
--- a/network/cyrus-imapd/cyrus-imapd.info
+++ b/network/cyrus-imapd/cyrus-imapd.info
@@ -1,8 +1,8 @@
 PRGNAM="cyrus-imapd"
 VERSION="2.5.11"
 HOMEPAGE="https://www.cyrusimap.org/"
-DOWNLOAD="ftp://ftp.cyrusimap.org/cyrus-imapd/cyrus-imapd-2.5.11.tar.gz"
-MD5SUM="674083444c36a786d9431b6612969224"
+DOWNLOAD="https://github.com/cyrusimap/cyrus-imapd/releases/download/cyrus-imapd-2.5.16/cyrus-imapd-2.5.16.tar.gz"
+MD5SUM="d5667e91d8e094ef24560a148e39c462"
 DOWNLOAD_x86_64=""
 MD5SUM_x86_64=""
 REQUIRES=""

You can apply it by carefully copying the content into a file and using the command patch.
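
For example, assuming you saved the patch as /root/cyrus-imapd.patch (the file name is arbitrary), the paths in the diff are relative to the slackbuilds repository root:

# cd /usr/sbo/repo
# patch -p1 < /root/cyrus-imapd.patch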

We can now proceed with cyrus-imapd compilation and installation.

# env DATABASE=sqlite sboinstall cyrus-imapd

As explained in the README file shown during installation, we need to run a few commands.

# mkdir -m 750 -p /var/imap /var/spool/imap /var/sieve
# chown cyrus:cyrus /var/imap /var/spool/imap /var/sieve
# su - cyrus
# /usr/doc/cyrus-imapd-2.5.16/tools/mkimap
# logout

Add the following to /etc/rc.d/rc.local to enable cyrus-imapd at boot:

if [ -x /etc/rc.d/rc.cyrus-imapd ]; then
  /etc/rc.d/rc.cyrus-imapd start
fi

And make the rc script executable:

# chmod +x /etc/rc.d/rc.cyrus-imapd

The official cyrus documentation is very well done and was very helpful while writing this.

The configuration file is /etc/imapd.conf:

configdirectory: /var/imap
partition-default: /var/spool/imap
sievedir: /var/sieve
admins: cyrus
sasl_pwcheck_method: saslauthd
allowplaintext: yes
tls_server_cert: /etc/letsencrypt/cyrus/fullchain.pem
tls_server_key:  /etc/letsencrypt/cyrus/privkey.pem
tls_client_ca_dir: /etc/ssl/certs

There is another file /etc/cyrusd.conf used but we don’t need to make changes in it.

We will have to copy the certificates into a separate place and allow the cyrus user to read them. This will have to be done every time the certificates are renewed. Let’s add the certbot command so we can use this script as a cron job.

#!/bin/sh
DOMAIN=kongroo.eu
LIVEDIR=/etc/letsencrypt/live/$DOMAIN/
DESTDIR=/etc/letsencrypt/cyrus/

certbot certonly --standalone -d $DOMAIN -m usernam@example
mkdir -p $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/fullchain.pem $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/privkey.pem $DESTDIR
/etc/rc.d/rc.sendmail restart
/etc/rc.d/rc.cyrus-imapd restart

Add a crontab entry to run this script once a day, using crontab -e to change root crontab.

MAILTO=""
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
0 5 * * * sh /root/renew_certs.sh

Starting the mail server

We prepared the mail server to be working on reboot, but the services aren’t started yet.

# /etc/rc.d/rc.saslauthd start
# /etc/rc.d/rc.sendmail start
# /etc/rc.d/rc.cyrus-imapd start
# /etc/rc.d/rc.opendkim start

Adding a new user

Add a new user to your system.

# useradd $username
# passwd $username

For some reason the user mailboxes must be initialized. The same password must be typed twice (or passed as a parameter using -w $password).

# USER=foobar
# DOMAIN=kongroo.eu
# echo "cm INBOX" | rlwrap cyradm -u $USER $DOMAIN
Password:
IMAP Password:

Voila! The user should be able to connect using IMAP and receive emails.

Check your email setup

You can use the web service Mail tester by sending an email. You could copy/paste a real email to avoid having a bad mark due to spam recognition (which happens if you send a mail with only a few words). The bad spam score isn’t relevant anyway as long as it’s due to the content of your email.

Conclusion

I had real fun writing this article, digging hard into Slackware and playing with unusual programs like sendmail and cyrus-imapd. I hope you will enjoy it as much as I enjoyed writing it!

If you find mistakes or bad configuration settings, please contact me; I will be happy to discuss the changes and fix this how-to.

Nota Bene: Slackbuilds aren’t meant to be used on the current version, but on the latest release. There is a github repository carrying the -current changes at https://github.com/Ponce/slackbuilds/.

How to use Slackware community slackbuilds

Written by Solène, on 13 November 2020.
Tags: #slackware

Comments on Mastodon

In today’s article I will explain how to use the Slackbuilds repository on a Slackware current system.

You can read the Documentation of slackbuilds for more information.

We will first install the sbotools package, which makes using slackbuilds a lot easier: like a proper ports tree. As it’s preferable to let the tools create the repository, we will install them without downloading the whole slackbuild repository.

Download the slackbuild from this page, extract it and cd into the new directory.

$ tar xzvf sbotools.tar.gz
$ cd sbotools
$ . ./sbotools.info
$ wget $DOWNLOAD
$ md5sum $(basename $DOWNLOAD)
$ echo $MD5SUM

The two md5 strings should match.

Now, run the build as root

$ sudo sh sbotools.SlackBuild
[lot of text]
Slackware package /tmp/sbotools-2.7-noarch-1_SBo.tgz created.

Now you can install the created package using

$ sudo /sbin/installpkg /tmp/sbotools-2.7-noarch-1_SBo.tgz

We now have a few programs to use the slackbuilds repository, they all have their own man page:

  • sbocheck
  • sboclean
  • sboconfig
  • sbofind
  • sboinstall
  • sboremove
  • sbosnap
  • sboupgrade

Creating the repository

As root, run the following command:

# sbosnap fetch
Pulling SlackBuilds tree...
Cloning into '/usr/sbo/repo'...
remote: Enumerating objects: 59, done.
remote: Counting objects: 100% (59/59), done.
remote: Compressing objects: 100% (59/59), done.
remote: Total 485454 (delta 31), reused 14 (delta 0), pack-reused 485395
Receiving objects: 100% (485454/485454), 134.37 MiB | 1.20 MiB/s, done.
Resolving deltas: 100% (337079/337079), done.
Updating files: 100% (39863/39863), done.

The slackbuilds tree is now installed under /usr/sbo/repo. This could have been configured beforehand using sboconfig -s /home/solene, which would create a /home/solene/repo.

Searching a port

One can use the command sbofind to look for a port:

# sbofind nethack
SBo:    nethack 3.6.6
Path:   /usr/sbo/repo/games/nethack

SBo:    unnethack 5.2.0
Path:   /usr/sbo/repo/games/unnethack

Install a port

We will install the previously searched port: nethack

# sboinstall nethack
Nethack is a single-player dungeon exploration game. The emphasis is
on discovering the detail of the dungeon. Each game presents a
different landscape - the random number generator provides an
essentially unlimited number of variations of the dungeon and its
denizens to be discovered by the player in one of a number of
characters: you can pick your race, your role, and your gender.

User accounts that play this need to be members of the "games" group.

Proceed with nethack? [y] y
nethack added to install queue.

Install queue: nethack

Are you sure you wish to continue? [y] y
[... compilation ... ]
+==============================================================================
| Installing new package /tmp/nethack-3.6.6-x86_64-1_SBo.tgz
+==============================================================================

Verifying package nethack-3.6.6-x86_64-1_SBo.tgz.
Installing package nethack-3.6.6-x86_64-1_SBo.tgz:
PACKAGE DESCRIPTION:
# nethack (roguelike game)
#
# Nethack is a single-player dungeon exploration game. The emphasis is
# on discovering the detail of the dungeon. Each game presents a
# different landscape - the random number generator provides an
# essentially unlimited number of variations of the dungeon and its
# denizens to be discovered by the player in one of a number of
# characters: you can pick your race, your role, and your gender.
#
# http://nethack.org
#
Package nethack-3.6.6-x86_64-1_SBo.tgz installed.
Cleaning for nethack-3.6.6...

Done, nethack is installed! sboinstall manages dependencies and, if required, will ask you about every other slackbuild that must be added to the install queue before compilation starts.

Example: getting flatpak

Flatpak is a software distribution system for linux distributions, mainly providing desktop software that could be complicated to package, like Libreoffice, GIMP, Microsoft Teams etc… On Slackware, this can be a good source of software.

To use flatpak and the official flathub repository, we need to install flatpak first. It’s now as easy as:

# sboinstall flatpak

Answer yes to the questions (you will be asked to agree for every dependency required, and there are a few of them); if you don’t want to answer each one, you can use the -r flag to automatically accept.

We need to add the official repository flathub using the following command:

# flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

And now you can browse flatpak programs on flathub

For example, if you want to install VLC

# flatpak install flathub org.videolan.VLC

You will be prompted about all the dependencies required in order to get VLC installed; those dependencies are system parts that will be shared across all the flatpak software in order to use disk space efficiently. For VLC, some kde components will be required and also Xorg GL/VAAPI/openh264 environments; flatpak manages all this and you don’t have to worry about it.

The file /usr/sbo/repo/desktop/flatpak/README explains quirks of flatpak on Slackware, like pulseaudio instructions or the polkit policy on slackware not allowing your user to use the global flatpak install command.

The following ~/.xinitrc enables dbus and pulseaudio for me, so flatpak programs work:

start-pulseaudio-x11
eval $(pax11publish -i)
dbus-run-session fvwm2

About the offline laptop project

Written by Solène, on 10 November 2020.
Tags: #life #disconnected

Comments on Mastodon

Third article of the offline laptop series.

Sometimes, network access is required

Having a totally disconnected system isn’t really practical, for a few reasons. Sometimes, I really need to connect the offline laptop to the network. I do produce some content on the computer, so I need to do backups. The easiest way for me to have reliable backups is to host them on a remote server holding the data, which requires a network connection for the duration of the backup. Of course, backups could be done on external disks or usb memory sticks (I don’t need to backup much), but I never liked this backup solution; don’t get me wrong, I don’t say it’s ineffective, but it doesn’t suit my needs.

Besides the backup, I may need to sync files like my music files. I may have bought new music that I want to get on the offline laptop, so network access is required.

I also require internet access to install new packages or upgrade the system. This isn’t a regular need but I occasionally require a new program I forgot to install. This could be solved by downloading the whole package repository, but that would require too much disk space for packages I would never use. It would also waste a lot of network transfer.

Finally, when I work on my blog, I need to publish the files; I use rsync to sync the destination directory from my local computer, and this requires access to the Internet through ssh.

A nice place at the right time

The moments I enjoy using this computer the most are when I take the laptop to a table with nothing around me. I can then focus on what I am doing. I find comfortable setups to be a source of distraction, so a stool and a table are very nice in my opinion.

In addition to having a clean place to use it, I like to dedicate some time to the use of this computer. I can write texts or some code in a given time frame.

On a computer with 24/7 power and internet access I always feel everything is at reach, then I tend to slack with it.

Having a rather limited battery life changes the way I experience the computer. Its time is finite: I have N minutes until the computer has to be charged or shut down. This produces the same effect for me as when I start watching a movie; sometimes I pick a movie that fits the time I can spend on it.

Knowing I have some time until the computer stops, I know I must keep focused because time is passing.

Keyboard tweaks to use Xorg on an IBook laptop

Written by Solène, on 09 November 2020.
Tags: #openbsd

Comments on Mastodon

Simple article for posterity or future-me. I will share here my tweaks to make the IBook G4 laptop (apple keyboard) suitable for OpenBSD; this should work for Linux too as long as you run X.

Command should be alt+gr

I really need the alt+gr key, which is not there on the keyboard. I solved this by using this line in my ~/.xsession:

xmodmap -e "keycode 115 = ISO_Level3_Shift"

i3 and mod4

As the touchpad is incredibly bad by today’s standards (and it only has 1 button and no scrolling feature!), I am using a window manager that can be entirely keyboard driven. While I’m not familiar with tiling window managers, i3 was easy to understand and light enough. Long time readers may remember I am familiar with stumpwm, but it’s not really a dynamic tiling window manager; I can only tolerate i3 using the tabs mode.

But an issue arises: there is no “super” key on the keyboard, and using “alt” would collide with way too many programs. One solution is to use “caps lock” as a “super” key.

I added this in my ~/.xsession file:

xmodmap ~/.Xmodmap

with ~/.Xmodmap having the following instructions:

clear Lock 
keycode 66 = Hyper_L
add mod4 = Hyper_L
clear Lock

This will disable the “toggling” effect of caps lock, and will turn it into a “Super” key that will be referred to as mod4 in i3.

Connect to Mastodon using HTTP 1.0 with Brutaldon

Written by Solène, on 09 November 2020.
Tags: #openbsd68 #openbsd #mastodon

Comments on Mastodon

Today’s post is about Brutaldon, a Mastodon/Pleroma interface in old-fashioned HTML like in the web 1.0 era. I will explain how it works and how to install it. Tested and approved on a 16 year old powerpc laptop, using Mastodon with the w3m or dillo web browsers!

Introduction

Brutaldon is a mastodon client running as a web server. This means you have to connect to a running brutaldon server; you can use a public one like Brutaldon.online and then you will have two ways to connect to your account:

  1. using oauth, which will redirect through a dedicated API page of your mastodon instance and give back a token once you have logged in properly; this is totally safe to use, but requires javascript to be enabled to work due to the login page on the instance
  2. the “old login” method, in which you have to provide your instance address, your account login and password. This is not really safe because the brutaldon instance will know your credentials, but you can use any web browser with it. There aren’t many security issues if you use a local brutaldon instance

How to install it

The installation is quite easy, I wish it could be this easy more often. You need a python3 interpreter and pipenv. If you don’t have pipenv, you need pip to install it. On OpenBSD this would translate as:

$ pip3.8 install --user pipenv

Note that on some systems, pip3.8 could be pip3, or pip. Due to the coexistence of python2 and python3 for some time, until we can get rid of python2, most python related commands have a suffix to tell which python version they use.

If you install pipenv with pip, the path will be ~/.local/bin/pipenv.

Now, very easy to proceed! Clone the code, run pipenv to get the dependencies, create a sqlite database and run the server.

$ git clone git://github.com/jfmcbrayer/brutaldon.git
$ cd brutaldon
$ pipenv install
$ pipenv run python ./manage.py migrate
$ pipenv run python ./manage.py runserver

And voilà! Your brutaldon instance is available on http://localhost:8000, you only need to open it in your web browser and log in to your instance.

As explained in the INSTALL.md file of the project, this method isn’t suitable for a public deployment. The code is a Django webapp and could be used with wsgi and a proper web server. This setup is beyond the scope of this article.

Join the peer to peer social network Scuttlebutt using OpenBSD and Oasis

Written by Solène, on 04 November 2020.
Tags: #openbsd68 #openbsd #ssb

Comments on Mastodon

In this article I will tell you about the Scuttlebutt social network, what makes it special and how to join it using OpenBSD. From here, I’ll refer to Scuttlebutt as SSB.

Introduction to the protocol

You can find all the related documentation on the official website. I will make a simplification of the protocol to present it.

SSB is decentralized, meaning there is no central server with clients around it (think about the Twitter model), nor a constellation of servers federating with each other (Fediverse: mastodon, plemora, peertube…). SSB uses a peer to peer model, meaning nodes exchange data with other nodes. A device with an account is a node; someone using SSB acts as a node.

The protocol requires people to be mutual followers for the private messaging system to work (messages are encrypted end-to-end).

This peer to peer paradigm has specific implications:

  1. Internet is not required for SSB to work. You could use it with other people on a local network. For example, you could visit a friend’s place and exchange your SSB data over their network.
  2. Nodes own the data: when you join, it can take very long to download the content of the nodes close to you (relative to the people you follow) because the SSB client downloads the data and then serves everything locally. This means you can use SSB while being offline, but also that, as in the friend’s place case seen previously, you can exchange data from mutual friends. Example: if A visits B, B receives A’s updates. When you visit B, you will receive B’s updates but also A’s updates if you follow B on the network.
  3. Data is immutable: when you publish something on the network, it will be spread across nodes and you can’t modify it. It’s important to think twice before publishing.
  4. Moderation: there is no moderation as there is no authority in control, but people can block nodes they don’t want to get data from, and this blocking is published, so other people can easily see who gets blocked and block them too. It seems to work; I don’t have an opinion about this.
  5. You discover parts of the network by following people, which gives you access to the people they follow. This makes the discovery of the network quite organic and should create some communities by itself. Birds of a feather flock together!
  6. It’s complicated to share an account across multiple devices because you need to share all your data between the devices, so most people use one account per device.

SSB clients

There are different clients; the top clients I found were Patchwork, Oasis and Manyverse.

There are also a lot of applications using the protocol, you can find a list on this link. One particularly interesting project is git-ssb, hosting git repositories on the network.

Most of the code related to SSB is written in NodeJS.

In my opinion, Patchwork is the most user-friendly client but Oasis is very nice too. Patchwork has more features, like being able to publish pictures within your messages which is not currently possible with Oasis.

Manyverse works fine but is rather limited in terms of features.

The developer community working on the projects seems rather small and would be happy to receive some help.

How to install Oasis on OpenBSD

I’ve been able to get the Oasis client to run on OpenBSD. The NodeJS ecosystem is quite hostile to anything non-linux, but following the path of qbit (who fixed a few libs years ago), this piece of software works.

$ doas pkg_add libvips git node autoconf--%2.69 automake--%1.16 libtool
$ git clone https://github.com/fraction/oasis
$ cd oasis
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install --only=prod

There is currently ONE issue that requires a hack to start Oasis: the lo0 interface must not have any IPv6 address.

You can use the following command as root to remove the IPv6 addresses.

# ifconfig lo0 -inet6

I reported this bug as I’ve not been able to fix it myself.

How to use Oasis on OpenBSD

When you want to use Oasis, you have to run

$ node /path/to/oasis_sources

You can add --help to see the usage output; there are flags like --offline if you don’t want oasis to do networking.

When you start oasis, you can then open http://localhost:3000 to access the network. Beware that this address is available to anyone having access to your system.

You have to use an invitation from someone to connect to a node and start following people to increase your range in this small world.

You can use a public server which acts as a 24/7 node to connect people together on https://github.com/ssbc/ssb-server/wiki/Pub-Servers.

How to backup your account

You absolutely need to backup your ~/.ssb/ directory if you don’t want to lose your account. There is no central server able to help you recover your account in case of data loss.

If you want to use another client on another computer, you have to copy this directory to the new place.

I don’t think the whole directory is required, but I have not been able to find more precise information.

How the OpenBSD -stable packages are built

Written by Solène, on 29 October 2020.
Tags: #openbsd

Comments on Mastodon

In this long blog post, I will write about the technical details of the OpenBSD stable packages building infrastructure. I set up the infrastructure with the help of Theo de Raadt, who provided me the hardware in summer 2019; since then, OpenBSD users can upgrade their packages using pkg_add -u for critical updates that have been backported by the contributors. Many thanks to them; without their work there would be no packages to build. Thanks also to pea@ who is my backup for operating this infrastructure in case something happens to me.

In total, around 110 lines of shell are used.

Original design

In the original design, the process was the following. It was done separately on each machine (amd64, arm64, i386, sparc64).

Updating ports

The first step is to update the ports tree using cvs up from a cron job and capture its output. If there is any output, the process continues to the next steps (the output itself is then discarded).

With CVS being per-directory and not using a database like git or svn, it is not possible to “poll” for an update except by checking every directory for new versions of the files. This check is done three times a day.

Make a list of ports to compile

This step is the most complicated of the process and weighs in at a third of the total lines of code.

The script uses cvs rdiff between the cvs release and stable branches to show what changed since release, and its output is passed through a few grep and awk scripts to only retrieve the “pkgpaths” (the pkgpath of curl is net/curl) of the packages that were updated since the last release. A sketch of such a filter is shown after the example below.

From this raw output of cvs rdiff:

File ports/net/dhcpcd/Makefile changed from revision 1.80 to 1.80.2.1
File ports/net/dhcpcd/distinfo changed from revision 1.48 to 1.48.2.1
File ports/net/dnsdist/Makefile changed from revision 1.19 to 1.19.2.1
File ports/net/dnsdist/distinfo changed from revision 1.7 to 1.7.2.1
File ports/net/icinga/core2/Makefile changed from revision 1.104 to 1.104.2.1
File ports/net/icinga/core2/distinfo changed from revision 1.40 to 1.40.2.1
File ports/net/synapse/Makefile changed from revision 1.13 to 1.13.2.1
File ports/net/synapse/distinfo changed from revision 1.11 to 1.11.2.1
File ports/net/synapse/pkg/PLIST changed from revision 1.10 to 1.10.2.1

The script will produce:

net/dhcpcd
net/dnsdist
net/icinga/core2
net/synapse
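
A minimal sketch of such a filter (not the actual script; the branch tags are just examples): it keeps the second field of each File line, strips the ports/ prefix and the file name, drops a trailing pkg/ directory, and deduplicates.

cvs -q rdiff -r OPENBSD_6_7_BASE -r OPENBSD_6_7 ports | \
    awk '/^File ports\// { print $2 }' | \
    sed -e 's,^ports/,,' -e 's,/[^/]*$,,' -e 's,/pkg$,,' | \
    sort -u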

From here, for each pkgpath we sorted out, the sqlports database is queried to get the full list of pkgpaths of each package; this will include all packages like flavors, subpackages and multipackages (a query sketch follows the list below).

This is important because an update in editors/vim pkgpath will trigger this long list of packages:

editors/vim,-lang
editors/vim,-main
editors/vim,gtk2
editors/vim,gtk2,-lang
[...40 results hidden for readability...]
editors/vim,no_x11,ruby
editors/vim,no_x11,ruby,-lang
editors/vim,no_x11,ruby,-main
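
As a sketch, such an expansion could be done by querying the sqlports database with sqlite3; the table and column names below are assumptions and may need adjusting to the actual sqlports schema:

$ sqlite3 /usr/local/share/sqlports "SELECT fullpkgpath FROM paths WHERE fullpkgpath = 'editors/vim' OR fullpkgpath LIKE 'editors/vim,%'"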

Once we gathered all the pkgpaths to build and stored them in a file, next step can start.

Preparing the environment

As the compilation is done on the real system (using PORTS_PRIVSEP though) and not in a chroot, we need to remove all installed packages except the minimum required for the build infrastructure, which is rsync and sqlports.

dpb(1) can’t be used because it didn’t give good results for building the delta of packages between release and stable.

The various temporary directories used by the ports infrastructure are cleaned to be sure the build starts in a clean environment.

Compiling and creating the packages

This step is really simple. The ports infrastructure is used to build the packages list we produced at step 2.

env SUBDIRLIST=package_list BULK=yes make package

In the script there is some code to manage the logs of the previous batch but there is nothing more.

Every new run of the process will pass over all the packages which received a commit, but the ports infrastructure is smart enough to avoid rebuilding ports which already have a package with the correct version.

Transfer the package to the signing team

Once the packages are built, we need to pass only the built packages to the person who will manually sign the packages before publishing them and have the mirrors to sync.

From the package list, the package file names are generated and used by rsync to copy only the packages that were just built.

env SUBDIRLIST=package_list show=PKGNAMES make | grep -v "^=" | \
      grep ^. | tr ' ' '\n' | sed 's,$,\.tgz,' | sort -u

The system keeps all the -release packages in ${PACKAGE_REPOSITORY}/${MACHINE_ARCH}/all/ (like /usr/ports/packages/amd64/all) to avoid rebuilding all the dependencies required for building a package update; thus we can’t just copy all the packages from the directory where packages end up after compilation.

Send a notification

The last step is to send an email with the output of rsync, telling the people signing the packages which machine built which packages and that they are available.

As this process is done on each machine, and they don’t necessarily build the same packages (no firefox on sparc64) nor at the same speed (arm64 is slower), mails from the four machines could arrive at very different times, which led to a small design change.

The whole process is automatic from building to delivering the packages for signature. The signature step requires a human to be done though, but this is the price for security and privilege separation.

Current design

In the original design, all the servers were running their separate cron job, updating their own cvs ports tree and doing a very long cvs diff. The result was working but not very practical for the people signing who were receiving mails from each machine for each batch.

The new design only changed one thing: one machine was chosen to run the cron job, produce the package list and then copy that list to the other machines, which update their ports tree and run the build. Once all machines have finished building, the initiator machine gathers the outputs and sends a single mail with a summary from each machine. This makes it easier to compare the output of each architecture, and once you receive the email it means every machine has finished its job and the signing can be done.

Having the summary of all the building machines resulted in another improvement: in the logic of the script, it is possible to send an email telling that absolutely no package has been built while the process was triggered, which means something went wrong. From there, I need to check the logs to understand why the last commit didn’t produce a package. This can be a failure like a distinfo file update forgotten in the commit.

Also, this permitted fixing one issue: as the distfiles are shared through a common NFS mount point, if multiple machines try to fetch a distfile at the same time, both will fail to build. Now, the initiator machine downloads all the required distfiles before starting the build on every node.

All of the previous scripts were reused, except the one sending the email which had to be rewritten.

Port of the week: rclone

Written by Solène, on 28 October 2020.
Tags: #portoftheweek

Comments on Mastodon

A new Port of the Week, after 3 years! I never thought it had been so long since the last Port of the Week post, which was about slrn.

This post is about the awesome rclone program, written in Go and available on most popular platforms (including OpenBSD!). I will explain how to configure it from the interactive command, from a file, and what you can do with rclone.

rclone can be seen as rsync on steroids: it supports lots of cloud backends and also supports creating an encrypted data repository over any backend (local files, ftp, sftp, webdav, Dropbox, AWS S3, etc…).

It’s not an automatic synchronization tool or backup software. It can copy files from A to B and synchronize two places (which can be harmful if you don’t pay attention).

Let’s see how to use it with an ssh server on which we will create an encrypted repository to store important data.

Official documentation

Installation

Most of the time, run your package manager to install rclone. It’s a single binary.
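
For example on OpenBSD (the package name may differ on other systems):

# pkg_add rclone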

Interactive configuration

You can skip this LONG section if you just want to learn what rclone can do and how to configure it with a 10 line file.

There is a subcommand providing a question / answer interface to configure your repository: rclone config.

I’ll make a full walkthrough to enable an encrypted repository because I struggled to understand the logic behind rclone when I started using it.

Let’s start. I’ll create an encrypted destination on my local NAS which doesn’t have full disk encryption, so anyone who accesses the system won’t be able to read my data. First, this requires setting up an sftp repository, then an encrypted repository using the previous one as a backend.

Let’s create a new config named home_nas.

$ rclone config
2020/10/27 21:30:48 NOTICE: Config file "/home/solene/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> home_nas

We want the storage type 29, “SSH/SFTP” (I removed all 50+ other storage types for readability).

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[...]
29 / SSH/SFTP Connection
   \ "sftp"
[...]
Storage> 29

My host is 192.168.1.200

** See help for sftp backend at: https://rclone.org/sftp/ **

SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> 192.168.1.200

I will connect with the username solene.

SSH username, leave blank for current username, solene
Enter a string value. Press Enter for the default ("").
user> solene

Standard port 22, which is the default

SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").
port> 

I answer n because I want rclone to use the ssh agent. This could be the ssh password of the remote user, but I highly discourage everyone from using password authentication on SSH!

SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n

Leave this blank unless you want to provide a raw private key.

Raw PEM-encoded private key, If specified, will override key_file parameter.
Enter a string value. Press Enter for the default ("").
key_pem> 

Leave this blank unless you want to provide a PEM-encoded private key file.

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

Enter a string value. Press Enter for the default ("").
key_file> 

Leave this blank unless you need a password to unlock your private key. I use the ssh agent so I don’t need it.

The passphrase to decrypt the PEM-encoded private key file.

Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n

If your ssh agent manages multiple keys, you should enter the correct value here; I only have one key so I leave it empty.

When set forces the usage of the ssh-agent.

When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").
key_use_agent> 

This is a question about crypto; accept the default unless you have to connect to old servers.

Enable the use of insecure ciphers and key exchange methods. 

This enables the use of the following insecure ciphers and key exchange methods:

- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1

Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Use default Cipher list.
   \ "false"
 2 / Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.
   \ "true"
use_insecure_cipher> 

We want to keep the hashcheck feature, so just skip the answer to keep the default value.

Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_hashcheck> 

We are at the end of the configuration; we are offered the chance to change more parameters, but we don’t need to.

Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n

Now we can see the rclone configuration output for my home_nas destination. I agree with the configuration to continue.

Remote config
--------------------
[home_nas]
type = sftp
host = 192.168.1.200
user = solene
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Here is a summary of the configuration, we have only one remote here.

Current remotes:

Name                 Type
====                 ====
home_nas             sftp

In the menu, I will choose to add another remote. Let’s name it home_nas_encrypted

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> home_nas_encrypted

We will choose the special storage type crypt, which works on top of an existing backend.

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
10 / Encrypt/Decrypt a remote
   \ "crypt"
Storage> 10

With this question, we define that the data stored to home_nas_encrypted will be saved in the home_nas remote, in the encrypted_repo directory.

** See help for crypt backend at: https://rclone.org/crypt/ **

Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a string value. Press Enter for the default ("").
remote> home_nas:encrypted_repo

Depending on the level of obfuscation you want, your choice may vary. The simple filename obfuscation is fine for me.

How to encrypt the filenames.
Enter a string value. Press Enter for the default ("standard").
Choose a number from below, or type in your own value
 1 / Encrypt the filenames see the docs for the details.
   \ "standard"
 2 / Very simple filename obfuscation.
   \ "obfuscate"
 3 / Don't encrypt the file names.  Adds a ".bin" extension only.
   \ "off"
filename_encryption> 2

As for directory name obfuscation, I recommend enabling it, otherwise the whole directory tree is left readable!

Option to either encrypt directory names or leave them intact.

NB If filename_encryption is "off" then this option will do nothing.
Enter a boolean value (true or false). Press Enter for the default ("true").
Choose a number from below, or type in your own value
 1 / Encrypt directory names.
   \ "true"
 2 / Don't encrypt directory names, leave them intact.
   \ "false"
directory_name_encryption> 1

Type the password that will be used to encrypt the data.

Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

You can add a salt to the passphrase; I chose not to.

Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> 

No need to change advanced parameters.

Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n

Here is a summary of the configuration of this remote backend. I’m fine with it.

Remote config
--------------------
[home_nas_encrypted]
type = crypt
remote = home_nas:encrypted_repo
directory_name_encryption = true
password = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

We can see we now have two remote backends, one with the crypt type.

Current remotes:

Name                 Type
====                 ====
home_nas             sftp
home_nas_encrypted   crypt

Quit rclone, the configuration is done.

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Configuration file

The previous configuration process only produced this short configuration file, so you may copy/paste from it and adapt it to add more backends if you want, instead of going through the tedious config process.

Here is my file ~/.config/rclone/rclone.conf on my desktop.

[home_nas]
type = sftp
host = 192.168.1.200
user = solene

[home_nas_encrypted]
type = crypt
remote = home_nas:encrypted_repo
directory_name_encryption = true
password = GDS9B1B1LrBa3ltQrSbLf1Vq5C6VbaA1AJVlSZ8

First usage

Now that we have defined our configuration, we need to create the remote directory that will be used as a backend. This is important to avoid errors when using rclone, and it’s a simple step required only once.

$ rclone mkdir home_nas_encrypted:

On the remote server, I can see a /home/solene/encrypted_repo directory. It’s now ready to use!

A few commands

rclone has a LOT of commands available, I will present a few of them.

Copying files to/from backend

Let’s say I want to copy files to the encrypted repository. There is a copy command.

$ rclone copy /home/solene/log/templates home_nas_encrypted:blog_template  

There is no output by default when the program runs fine. You can use the -v flag to have some verbose output (I prefer it).

List files on a remote backend

Now, we want to see if the files were copied correctly, we will use the ls command.

$ rclone ls home_nas_encrypted:
      299 blog_template/article.tpl
      700 blog_template/gopher_head.tpl
     2505 blog_template/layout.tpl
      295 blog_template/more.tpl
      236 blog_template/navigation.tpl
       57 blog_template/one-tag.tpl
       34 blog_template/page.tpl
      189 blog_template/rss-item.tpl
      326 blog_template/rss.tpl

We can also use the ncdu subcommand to mimic the ncdu program, displaying a curses interface to visualize disk usage in a nice browsing tree.

$ rclone ncdu home_nas_encrypted
-- home_nas_encrypted: ------------------
  6.379k [##########] /blog_template

The sync command

Files and directories can also be copied with the sync command, but this must be used with care because it makes the destination match the origin exactly. It’s the equivalent of rsync -a --delete origin/ destination/, so any extra files will be removed! Note that you can use --dry-run to see what would happen.
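
For example, reusing the directory copied earlier, a cautious invocation would be:

$ rclone sync -v --dry-run /home/solene/log/templates home_nas_encrypted:blog_template

Remove --dry-run once you are sure no precious file would be deleted on the destination.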

Filters

When you copy files using the various available methods, instead of using a path you can provide a filter file or a list of paths to transfer. This can be very efficient when you want to recover specific data.

The documentation about filtering is available here
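
As a small sketch, the --include flag gives an idea of the filtering capabilities (see the documentation above for the full syntax):

$ rclone copy --include "*.tpl" /home/solene/log/templates home_nas_encrypted:blog_template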

Parameters

rclone supports a lot of parameters, like limiting upload bandwidth, copying multiple files at once, or enabling an interactive mode in case of file deletion/overwriting.

Mount

On Linux, FreeBSD and MacOS, rclone can use a FUSE filesystem to mount the remote repository on the filesystem, making its use totally transparent.

This is extremely useful, avoiding the tediousness of the get/put paradigm of rclone.

This can even be used to make an encrypted repository on the local filesystem! :)
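
A minimal sketch, assuming an empty ~/nas directory to use as a mountpoint (on Linux, unmount with fusermount -u ~/nas):

$ mkdir -p ~/nas
$ rclone mount home_nas_encrypted: ~/nas --daemon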

Create a webdav/sftp/ftp server

rclone has the capability to act as a server and expose a configured remote backend over various network protocols like webdav, sftp, ftp, s3 (minio)!

The serve documentation is available here

Example running a simple webdav server with a hardcoded login/password:

$ rclone serve webdav --user solene --pass ANicePassword home_nas_encrypted:

OpenVPN as the default gateway on OpenBSD

Written by Solène, on 27 October 2020.
Tags: #openbsd68 #openbsd #openvpn

Comments on Mastodon

What if you plan to use an OpenVPN tunnel to reach your default gateway, which would put the tun interface in the egress group, while using tun0 in your pf.conf, which is loaded before OpenVPN starts?

Here are the few tips I use to solve the problems.

Remove your current default gateway

We don’t want a default gateway on the system. You need to know the remote address of the VPN server.

If you have a /etc/mygate file, remove it.

The /etc/hostname.if file (with if being your interface name, like em0 for example) should look like this:

192.168.1.200
up
!route add -host A.B.C.D 192.168.1.254

  • The first line is the IP on my lan
  • The second line makes the interface up
  • The third line means you want to reach A.B.C.D via 192.168.1.254, A.B.C.D being the remote VPN server

Create the tun0 interface at boot

Create a /etc/hostname.tun0 file with only up as content; that will create tun0 at boot and make it available to pf.conf, preventing pf from failing to load its configuration.

You may think one could use “egress” instead of the interface name, but this is not allowed in queuing.

Don’t let OpenVPN manage the route

Don’t use redirect-gateway def1 bypass-dhcp in the OpenVPN configuration: this would create a route which is not the default route, so the tun0 interface wouldn’t be in the egress group, which is not something we want.

Add these two lines to your configuration file, to execute a script once the tunnel is established, in which we will add the default route.

script-security 2
up /etc/openvpn/script_up.sh

In /etc/openvpn/script_up.sh you simply have to write

#!/bin/sh
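# X.Y.Z.A is the gateway address inside the VPN tunnel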
/sbin/route add -net default X.Y.Z.A

If you have IPv6 connectivity, you have to add this line:

/sbin/route add -inet6 2000::/3 fe80::%tun0

(not sure it’s 100% correct for IPv6 but it works fine for me! If it’s wrong, please tell me how to make it better).

A curated non-violent games list

Written by Solène, on 18 October 2020.
Tags: #gaming

Comments on Mastodon

For a long time I have wanted to share a list of non-violent games I enjoyed, so here it is. Obviously, this list is FAR from complete and exhaustive. It contains games I played and liked. They should all run on Linux and some on OpenBSD.

Aside from this list, most tycoon and puzzle games should be non-violent.

Automation / Building games

This game is like Factorio, you have to automate production lines and increase the output of shapes/colors. Very time consuming.

The project is Open source but you need to buy the game if you don’t want to compile it yourself. Or just use my compiled version working in a web browser.

Play shapez.io in web browser

A transport tycoon game, multiplayer possible! Very complex, the community is active and you can find tons of mods.

The game is Open source and you can certainly install it on any distribution with the package manager.

This game is about building equipment to restore the nature into a wasteland, improve the biodiversity and then remove all your structures.

The game is not open source but is free of charge. The music seems to be under an open licence. Still, you can pay what you want for it to support the developer.

This is a short game about chaining producing buildings into another, all from garbage up to some secret ending :)

The game is not open source but is free of charge.

Sandbox / Adventure game

This game is a clone of Minecraft; it supports a lot of mods (which can make the game very complex, like adding train tracks with their signals, the pinnacle of complexity :D). As far as I know, the game now supports health but there is no fighting involved.

The game is Open source and free of charge.

This game is about exploration in a forest. It has a nice music, gameplay is easy.

The game is not open source but it’s free. Still, you can pay what you want for it to support the developer.

Action / reflex games

This category contains games that require some reflexes or at least need the player to be active to play.

This game is about driving a 2D motocross bike and passing through obstacles; it can be very hard and will challenge you for a long time.

It’s Open source and free of charge.

This is a fun game where you need to drive some big trucks using only a displayed control panel with your mouse, which makes things very hard.

The game is not open source and not free, but the cost isn’t very high (3.99€ at the moment from France).

This game is about a teenager who is on vacation in a place with no cell network; you will have to hike and meet people to get to the end. Very relaxing :)

The game isn’t open source and isn’t free, but costs around 8€ at the moment from France.

This game is about adding trains to tracks while avoiding crashes. I found this game to be more about reflexes than building, simulation or tycoon mechanics. You mostly need to route the trains in real time.

The game isn’t open source and not free but costs around 10€.

This game is a 2D platform game with interesting gameplay mechanics; it is surprisingly full of good ideas and has very nice music :) The characters are very cute and the whole environment looks great.

The game isn’t open source and not free.

Simulation

This game may not be liked by everyone: it consists of driving a truck in Europe, picking up cargo and delivering it somewhere else, taking care not to damage it and driving safely by respecting the law. You can also buy garages and hire people to drive trucks for you to make money. The game is relaxing and also pretty accurate in its environment. I have been driving in many European countries and this game really reflects their road signs, cars, speed limits, countryside etc… Some cities received more work and you can see monuments from the road. The game doesn’t cost much and works on Linux although it’s not open source.

This game is hard and will require learning. The goal is to create rockets to send astronauts into space, or even land on a planet or an asteroid, and come back. Doing a whole trip like this requires some knowledge about the game mechanics and physics. This game is certainly not for everyone if you want to achieve something; I never managed better than sending a rocket into space and letting it crash on the planet after running out of fuel, or drift in space forever… The game works on Linux, requires an average computer and can be obtained at a very fair price like 10€ when it’s on sale (which happens very often). Definitely a must play if you like space.

Puzzle games (Zachtronics games)

What’s a Zachtronics game? It’s a game edited by Zachtronics! Every game from this studio has a common pattern: you solve puzzles with more and more complex systems, and you can compare your results in speed / efficiency / steps to the other players’. They are a mix between automation and puzzles. Those games are really good. There are more than the 3 games I list, but I didn’t enjoy them all, check the full list.

You play an alchemist who is asked to create products for a rich family. You need to set up devices to transform and combine materials into the expected result.

The game isn’t open source and isn’t free. The average cost is 20€.

This game is in 3D: you receive materials on conveyor belts and you have to rotate and weld them to deliver the expected material.

The game isn’t open source and isn’t free. The average cost is 20€.

This game is about writing assembly code. There are calculation units that will add/sub values from registers and pass them to another unit. Even more fun if you print the old-fashioned instruction book!

The game isn’t open source and isn’t free. The average cost is 10€.

Visual Novel

The expression Amrilato

This game is about a Japanese girl who ends up in a parallel world where everything seems similar, but in this Japan people speak Esperanto.

The game isn’t open source and isn’t free. The average cost is 20€.

Not very violent

Way of the Passive Fist

I would like to add this game to this list. It’s a brawler (like Streets of Rage) in which you don’t fight people: you only dodge attacks to exhaust enemies or counter-attack. It’s still a bit violent because it involves violence toward you, and throwing back a knife would still be violent… But still, I think this is a unique game that deserves to be better known. :)

The game isn’t open source and isn’t free, expect around 15€ for it.

Making a home NAS using NixOS

Written by Solène, on 18 October 2020.
Tags: #nixos #linux #nas

Comments on Mastodon

Still playing with NixOS, I wanted to see how difficult it would be to write a NixOS configuration file to turn a computer into a simple NAS with basic features: samba storage, a dlna server and auto suspend/resume.

What is NixOS? As a reminder for some and an introduction for the others, NixOS is a Linux distribution built by the Nix package manager, which makes it very different from any other operating system out there, except Guix which has a similar approach with its own package manager written in Scheme.

NixOS uses a declarative configuration approach along with a lot of other features derived from Nix. What’s big here is that you no longer tweak anything in /etc or install packages: you define the working state of the system in one configuration file. This system is a totally different beast than the other OSes and requires some time to understand how it works. Good news though, everything is documented in the man page configuration.nix, from fstab configuration to user management or how to enable samba!

Here is the /etc/nixos/configuration.nix file on my NAS.

It enables an ssh server, samba, minidlna and vnstat, and sets up a user with my ssh public key. Ready to work.

Using the rtcwake command (Linux specific), it’s possible to put the system into standby mode and schedule an auto resume after some time. This is triggered by a cron job at 01h00.

{ config, pkgs, ... }:
{
  # include stuff related to hardware, auto generated at install
  imports = [ ./hardware-configuration.nix ];
  boot.loader.grub.device = "/dev/sda";
      
  # network configuration
  networking.interfaces.enp3s0.ipv4.addresses = [ {
    address = "192.168.42.150";
    prefixLength = 24;
  } ];
  networking.defaultGateway = "192.168.42.1";
  networking.nameservers = [ "192.168.42.231" ];
      
  # FR locales and layout
  i18n.defaultLocale = "fr_FR.UTF-8";
  console = { font = "Lat2-Terminus16"; keyMap = "fr"; };
  time.timeZone = "Europe/Paris";
      
  # Packages management
  environment.systemPackages = with pkgs; [
    kakoune vnstat borgbackup utillinux
  ];
      
  # firewall disabled (I need to check the ports used first)
  networking.firewall.enable = false;
      
  # services to enable
  services.openssh.enable = true;
  services.vnstat.enable = true;
      
  # auto standby
  services.cron.systemCronJobs = [
      "0 1 * * * root rtcwake -m mem --date +6h"
  ]; 
      
  # samba service
  services.samba.enable = true;
  services.samba.enableNmbd = true;
  services.samba.extraConfig = ''
        workgroup = WORKGROUP
        server string = Samba Server
        server role = standalone server
        log file = /var/log/samba/smbd.%m
        max log size = 50
        dns proxy = no
        map to guest = Bad User
    '';
  services.samba.shares = {
      public = {
          path = "/home/public";
          browseable = "yes";
          "writable" = "yes";
          "guest ok" = "yes";
          "public" = "yes";
          "force user" = "share";
        };
     };
      
  # minidlna service
  services.minidlna.enable = true;
  services.minidlna.announceInterval = 60;
  services.minidlna.friendlyName = "Rorqual";
  services.minidlna.mediaDirs = ["A,/home/public/Musique/" "V,/home/public/Videos/"];
      
  # trick to create a directory with proper ownership
  # note that tmpfiles are not necessarily temporary if you don't
  # set an expire time. Trick given on irc by someone whose name I forgot..
  systemd.tmpfiles.rules = [ "d /home/public 0755 share users" ];
      
  # create my user, with sudo right and my public ssh key
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "wheel" "sudo" ];
    openssh.authorizedKeys.keys = [
          "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15viQXHYRjGqE4LLfvETMkjjgSz0mzMzS personal"
          "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15vAQXBYRjGqE6L1fvETMkjjgSz0mxMzS pro"
    ];
  };
      
  # create a dedicated user for the shares
  # I prefer a dedicated one than "nobody"
  # can't log into it
  users.users.share= {
    isNormalUser = false;
  };
}

NixOS optional features in packages

Written by Solène, on 14 October 2020.
Tags: #nixos #linux

Comments on Mastodon

As a claws-mail user, I like to have calendar support in the mail client to be able to “accept” invitations. In the default NixOS claws-mail package, the vcalendar module isn’t installed with the package. Still, it is possible to add support for the vcalendar module without an ugly hack.

It turns out that, by default, the claws-mail package in Nixpkgs has an optional build option for the vcalendar module: we need to tell Nixpkgs we want this module and claws-mail will be compiled with it.

As stated in the NixOS manual, the optional features can’t be searched yet. So what’s possible is to search for your package in the NixOS packages search, click on the package name to get to the details and click on the link named “Nix expression”, which will open the package definition on GitHub: claws-mail nix expression

As you can see in the claws-mail nix expression code, there are a lot of lines with optional; those are features we can enable. Here is a sample:

[..]
++ optional (!enablePluginArchive) "--disable-archive-plugin"
++ optional (!enablePluginLitehtmlViewer) "--disable-litehtml_viewer-plugin"
++ optional (!enablePluginPdf) "--disable-pdf_viewer-plugin"
++ optional (!enablePluginPython) "--disable-python-plugin"
[..]

In your configuration.nix file, where you define the list of packages you want, you can state that you want the vcalendar plugin enabled, as in the following example:

environment.systemPackages = with pkgs; [
  kakoune git firefox irssi minetest
  (pkgs.claws-mail.override { enablePluginVcalendar = true;})
];

When you rebuild your system to match the configuration definition, claws-mail will be compiled with the extra options you defined.

Now, I have claws-mail with vCalendar support.

Unlock a full disk encryption NixOS with usb memory stick

Written by Solène, on 06 October 2020.
Tags: #nixos #linux

Comments on Mastodon

Using NixOS on a laptop on which the keyboard isn’t detected when I need to type the password to decrypt the disk, I had to find a solution. This problem is hardware related, not Linux or NixOS related.

I highly recommend using full disk encryption on every computer, following a theft threat model. Having your computer stolen is bad, but if the thief has access to all your data, you will certainly be in trouble.

It was time to find out how to use a USB memory stick to unlock the full disk encryption in case I don’t have a USB keyboard at hand to unlock the computer.

There are 4 steps to enable unlocking the luks volume using a device.

  1. Create the key
  2. Add the key on the luks volume
  3. Write the key on the usb device
  4. Configure NixOS

First step, creating the key file. The easiest way is to do the following:

# dd if=/dev/urandom of=/root/key.bin bs=4096 count=1

This will create a 4096-byte key. You can choose the size you want.

Second step is to register that key in the luks volume; you will be prompted for the luks password when doing so.

# cryptsetup luksAddKey /dev/sda1 /root/key.bin

Then, it’s time to write the key to your usb device, I assume it will be /dev/sdb.

# dd if=/root/key.bin of=/dev/sdb bs=4096 count=1

And finally, you will need to configure NixOS to give the information about the key. It’s important to give the correct size of the key. Don’t forget to adapt "crypted" to your luks volume name.

boot.initrd.luks.devices."crypted".keyFileSize = 4096;
boot.initrd.luks.devices."crypted".keyFile = "/dev/sdb";

Rebuild your system with nixos-rebuild switch and voilà!

Going further

I recommend using the fallback to password feature so if you lose or don’t have your memory stick, you can type the password to unlock the disk. Note that the configured key device must be handled carefully: if a /dev/sdb exists but doesn’t hold the key, the system won’t ask for a password and you will need to reboot.

boot.initrd.luks.devices."crypted".fallbackToPassword = true;

It’s also possible to write the key in a partition or at a specific offset into your memory stick. For this, look at the boot.initrd.luks.devices."volume".keyFileOffset entry.
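For example, here is a sketch I have not tested, assuming the key was written 1 MiB into the stick; keyFileOffset is expressed in bytes, and with bs=4096 a seek of 256 blocks lands exactly at byte 1048576:

# dd if=/root/key.bin of=/dev/sdb bs=4096 seek=256 count=1

boot.initrd.luks.devices."crypted".keyFile = "/dev/sdb";
boot.initrd.luks.devices."crypted".keyFileSize = 4096;
boot.initrd.luks.devices."crypted".keyFileOffset = 1048576;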

Playing chess by email

Written by Solène, on 28 September 2020.
Tags: #chess

Comments on Mastodon

It’s possible to play chess by email, thanks to notations like PGN (Portable Game Notation) that describe the state of a game.

By playing on your computer and sending the PGN of the game to your opponent, that person will be able to play their move and send you the new PGN so you can play.

Using xboard

This is quite easy with xboard (which should be available in most bsd/linux/unix distributions), as long as you are aware of a few keybindings.

When you start a game, press Ctrl+E to enter edit mode; this prevents the AI from playing. Then make your move.

From there, you can press Ctrl+C to copy the state of the game. You will have something like this in your clipboard.

[Event "Edited game"]
[Site "solene.local"]
[Date "2020.09.28"]
[Round "-"]
[White "-"]
[Black "-"]
[Result "*"]

1. d3
*

You can send this to your opponent, but the only needed data is 1. d3, which is the PGN notation of the moves. You can throw away the rest.

In a more advanced game, you will end up mailing this kind of data:

1. d3 e6 2. e4 f5 3. exf5 exf5 4. Qe2+ Be7 5. Qxe7+ Qxe7+

When you want to play your turn, copy that line to your clipboard and press Ctrl+V; you should see the moves happening on the board.

Using gnuchess

gnuchess allows playing chess on the command line.

When you start a game, you will have a prompt; type manual so you don’t play against the AI. I also recommend typing coords to display coordinates on the axes of the board.
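A typical session start looks roughly like this (the exact prompt may differ between gnuchess versions):

$ gnuchess
White (1) : manual
White (1) : coords
White (1) : show board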

When you type show board you will have this display:

  white  KQkq

8 r n b q k b n r 
7 p p p p p p p p 
6 . . . . . . . . 
5 . . . . . . . . 
4 . . . . . . . . 
3 . . . . . . . . 
2 P P P P P P P P 
1 R N B Q K B N R 
  a b c d e f g h 

Then, if I type d3 I get this display:

8 r n b q k b n r 
7 p p p p p p p p 
6 . . . . . . . . 
5 . . . . . . . . 
4 . . . . . . . . 
3 . . . P . . . . 
2 P P P . P P P P 
1 R N B Q K B N R 
  a b c d e f g h 

Within the game, you can save it using pgnsave FILE and load a game using pgnload FILE.

You can see the list of the moves using show game.

About pipelining OpenBSD ports contributions

Written by Solène, on 27 September 2020.
Tags: #openbsd #automation

Comments on Mastodon

After modest contributions to the NixOS operating system, which taught me about its contribution process, I found it enjoyable to have automatic reports and feedback about the quality of the submitted work. While on NixOS this requires GitHub, I think this could be applied as well to OpenBSD and its mailing list contribution system.

I made a prototype before starting the real work and I’m actually happy with the result.

This is what I get after feeding the script with a mail containing a patch:

Determining package path         ✓    
Verifying patch isn't committed  ✓    
Applying the patch               ✓    
Fetching distfiles               ✓    
Distfile checksum                ✓    
Applying ports patches           ✓    
Extracting sources               ✓    
Building result                  ✓

It requires a lot of checks to find a patch in the file, because we have patches generated from cvs or git which have slightly different outputs. And then, we need to find from where to apply this patch.

The idea would be to retrieve mails sent to ports@openbsd.org by subscribing, then store metadata about each submission in a database:

  • Sender
  • Date
  • Diff (raw text)
  • Status (already committed, doesn’t apply, applies, compiles)

Then, another program will pick a diff from the database, prepare a VM using a qcow2 disk derived from a base image so it always starts fresh, clean and ready, and do the checks within the VM.
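As a sketch of the disk preparation, assuming a base image named base.qcow2, qemu-img can create a copy-on-write disk backed by it; deleting and recreating the derived disk between two runs gives every check a fresh system:

$ qemu-img create -f qcow2 -b base.qcow2 -F qcow2 checker.qcow2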

Once it is finished, a mail could be sent as a reply to the original mail to give the status of each step until error or last check. The database could be reused to make a web page to track what compiles but is not yet committed. As it’s possible to verify if a patch is committed in the tree, this can automatically prune committed patches over time.

I really think this can improve tracking patches sent to ports@ and ease the contribution process.

DISCLAIMER

  • This would not be an official part of the project, I do it on my own
  • This may be cancelled
  • This may be a bad idea
  • This could be used “as a service” instead of pulling automatically from ports, meaning people could send mails to it to receive an automatic review. Ideally this should be done in portcheck(1) but I’m not sure how to verify a diff applies on the ports tree without enforcing requirements
  • Human work will still be required to check the content and verify the port works correctly!

Docker cheatsheet

Written by Solène, on 24 September 2020.
Tags: #docker

Comments on Mastodon

Simple Docker cheatsheet. This is a short introduction about Docker usage and common questions I have been asking myself about Docker.

The official documentation for building docker images can be found here

Build an image

Building an image is really easy. As a requirement, you need to be in a directory that contains the data you will use for building the image, but most importantly, you need a Dockerfile.

The Dockerfile holds all the instructions to create the container. A simple example would be this description:

FROM busybox
CMD "echo" "hello world"

This will create a docker container using the busybox base image and run echo "hello world" when you run it.

To create the container, use the following command in the same directory the Dockerfile is in:

$ docker build -t your-image-name .

Advanced image building

If you need to compile sources to distribute a working binary, you need to prepare the environment to have the required dependencies to compile and then you need to compile a static binary to ship the container without all the dependencies.

In the following example we will use a Debian environment to build software downloaded with git.

FROM debian as work
WORKDIR /project

RUN apt-get update
RUN apt-get install -y git make gcc
RUN git clone git://bitreich.org/sacc /project
RUN apt-get install -y libncurses5-dev libncurses5
RUN make LDFLAGS="-static -lncurses -ltinfo"

FROM debian

COPY --from=work /project/sacc /usr/local/bin/sacc

CMD "sacc" "gopherproject.org"

I won’t explain every command here, but you may notice that I split the package installation into two commands. This was to help debugging.

The trick here is that the docker build process has a cache feature: every time you use a FROM, COPY, RUN or CMD, docker caches the current state of the build process; if you re-run the process, docker can pick up at the most recent state before the change.

I wasn’t sure how to compile the software statically at first, and having to install git, make and gcc and run git clone EVERY TIME was very time and bandwidth consuming.

In case you run this build and it fails, you can re-run the build and docker will catch up directly at the last working step.

If you change a line, docker will reuse the last state with a FROM/COPY/RUN/CMD command before the changed line. Knowing about this is really important for more efficient cache use, as illustrated below.
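As an illustration with a hypothetical project, ordering the Dockerfile so that rarely-changing steps come first means editing your sources only invalidates the layers from the COPY onwards, while the apt-get layers stay cached:

FROM debian
# dependencies change rarely: keep them in early layers
RUN apt-get update && apt-get install -y make gcc
# sources change often: copy them as late as possible
COPY . /project
RUN make -C /project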

Run an image

We can run the previously built local image with this command:

$ docker run your-image-name
hello world

By default, when you run an image by name and you don’t have a local image matching that name, docker checks whether the image exists on the official docker repository; if so, it is pulled and run.

$ docker run hello-world

This is a sample official container that will display some explanations about docker.

If you want to try a gopher client, I made a docker version of it that you can run with the following command:

$ docker run -t -i rapennesolene/sacc

Why are the -t and -i parameters required? The former tells docker you want a tty because the program will manipulate a terminal, and the latter asks for an interactive session.

Persistent data

By default, all data in the docker container gets wiped out once it stops, which may be really undesirable if you use docker to deploy a service that has a state and requires an installation, configuration files etc…

Docker has two ways to solve it:

  1. map a local directory
  2. map a docker volume name

This is done with the -v parameter of the docker run command.

$ docker run -v data:/var/www/html/ nextcloud

This will map a persistent storage named “data” on the host to the path /var/www/html in the docker instance. When you use data, docker checks if /var/lib/docker/volumes/data exists; if so it reuses it, and if not it creates it.

This is a convenient way to name volumes and let docker manage them.
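You can list and inspect the volumes docker manages this way, for example with the data volume from above:

$ docker volume ls
$ docker volume inspect data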

The other way is to map a local path to a container environment path.

$ docker run -v /home/nextcloud:/var/www/html nextcloud

In this case, the directory /home/nextcloud on the host and /var/www/html in the docker environment will be the same directory.

A few tips about the cd command

Written by Solène, on 04 September 2020.
Tags: #unix

Comments on Mastodon

While everyone familiar with a shell knows about the command cd, there are a few tips you should know.

Moving to your $HOME directory

$ pwd
/tmp
$ cd
$ pwd
/home/solene

Using cd without argument will change your current directory to your $HOME.

Moving into someone else’s $HOME directory

While this should fail most of the time because people shouldn’t allow anyone into their $HOME, there are use cases for it though.

$ cd ~user1
$ pwd
/home/user1
$ cd ~solene
$ pwd
/home/solene

Using ~user as a parameter will move to that user’s $HOME directory; note that cd and cd ~youruser have the same result.

Moving to previous directory

This is a very useful command which allows going back and forth between two directories.

$ pwd
/home/solene
$ cd /tmp
$ pwd
/tmp
$ cd -
/home/solene
$ pwd
/home/solene

When you use cd - the command will move to the previous directory you were in. There are two special variables in your shell: PWD and OLDPWD. When you move somewhere, OLDPWD will hold your location before moving, and PWD the new path. When you use cd - the two variables get exchanged; this means you can only jump between two paths by using cd - multiple times.

Please note that when using cd - your new location is displayed.

Changing directory by modifying current PWD

thfr@ showed me a cd feature I had never heard about, and this is the perfect place to write about it. Note that this works in ksh and zsh but is reported to not work in bash.

One example will explain better than any text.

$ pwd
/tmp/pobj/foobar-1.2.0/work
$ cd 1.2.0 2.4.0
/tmp/pobj/foobar-2.4.0/work

This tells cd to replace the first parameter’s pattern with the second parameter in the current PWD, and then cd into it.

$ pwd
/home/solene
$ cd solene user1
/home/user1

This could be done in a bloated way with the following command:

$ cd $(echo $PWD | sed "s/solene/user1/")

I learned it a few minutes ago, but I already see a lot of use cases where I could use it.

Moving into the current directory after removal

In some specific cases, your shell may be in a directory that was deleted and recreated (this happens often when you are working in compilation directories).

A simple trick is to tell cd to go to the current location.

$ cd .

or

$ cd $PWD

And cd will go into the same path and you can start hacking again in that directory.

Find which package provides a given file in OpenBSD

Written by Solène, on 04 September 2020.
Tags: #openbsd

Comments on Mastodon

There is one very handy package on OpenBSD named pkglocatedb which provides the command pkglocate.

If you need to find a file or binary/program and you don’t know which package contains it, use pkglocate.
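If you don’t have it yet, install it first:

# pkg_add pkglocatedb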

$ pkglocate */bin/exiftool  
p5-Image-ExifTool-12.00:graphics/p5-Image-ExifTool:/usr/local/bin/exiftool

With the result, I know that the package p5-Image-ExifTool provides the command exiftool.

Another example, looking for files containing the pattern “libc++”:

$ pkglocate libc++
base67:/usr/lib/libc++.so.5.0
base67:/usr/lib/libc++abi.so.3.0
comp67:/usr/lib/libc++.a
comp67:/usr/lib/libc++_p.a
comp67:/usr/lib/libc++abi.a
comp67:/usr/lib/libc++abi_p.a
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.app
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.lib
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qmake.conf
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qplatformdefs.h

As you can see, base sets are also in the database used by pkglocate, so you can easily find out if a file comes from a set (that you should have) or from a package.

Find which package installed a file

Klemens Nanni (kn@) told me it’s possible to find which package installed a file present in the filesystem using the pkg_info command from the base system. This can be handy to know which package an installed file comes from, without requiring pkglocatedb.

$ pkg_info -E /usr/local/bin/convert
/usr/local/bin/convert: ImageMagick-6.9.10.86p0
ImageMagick-6.9.10.86p0 image processing tools

This tells me the convert binary was installed by the ImageMagick package.

Download files listed in a http index with wget

Written by Solène, on 16 June 2020.
Tags: #wget #internet

Comments on Mastodon

Sometimes I need to download files through http from a list on an “autoindex” page, and it’s always painful to find the correct command for this.

The easy solution is wget, but you need to use the correct parameters because wget has a lot of mirroring options and you only want specific ones to achieve this goal.

I ended up with the following command:

wget --continue --accept "*.tgz" --no-directories --no-parent --recursive http://ftp.fr.openbsd.org/pub/OpenBSD/6.7/amd64/

This will download every tgz file available at the address given as the last parameter.

The given parameters will filter to only download the tgz files, put the files in the current working directory and, most importantly, not try to escape to the parent directory to start downloading again. The --continue parameter allows interrupting wget and starting again: downloaded files will be skipped and partially downloaded files will be completed.

Do not reuse this command if files changed on the remote server, because the continue feature only works if your local file and the remote file are the same: it simply looks at the local and remote names and asks the remote server to start downloading at the current byte range of your local file. If the remote file changed in the meantime, you will get a mix of the old and new files.

Obviously the ftp protocol would be better suited for this download job, but ftp is less and less available, so I find wget to be a nice workaround for this.

Birthday dates management using calendar

Written by Solène, on 15 June 2020.
Tags: #openbsd #plaintext #automation

Comments on Mastodon

I manage my birthday list in a calendar file, so I don’t forget about them and can use the list in scripts.

The calendar file format is easy but sadly it only works using English month names.

This is an example file with different spacings:

7  August   This is 7 august birthday!
 8 August   This is 8 august birthday!
16 August   This is 16 august birthday!

Now that you have a calendar file, you can run the calendar binary on it and show incoming events in the next n days using the -A flag.

calendar -A 20

Note that the default file is ~/.calendar/calendar so if you use this file you don’t need to use the -f flag in calendar.

Now, I also use it in crontab with xmessage to show a popup once a day with incoming birthdays.

30 13 * * *  calendar -A 7 -f ~/.calendar/birthdays | grep . && calendar -A 7 -f ~/.calendar/birthdays | env DISPLAY=:0 xmessage -file -

You have to set the DISPLAY variable so it appears on the screen.

It’s important to check if calendar will have any output before calling xmessage to prevent having an empty window.

prose - Blogging with email

Written by Solène, on 11 June 2020.
Tags: #blog #email #plaintext

Comments on Mastodon

The software developer prx (his website is available at https://ybad.name/, en/fr) released a new piece of software called prose to publish a blog by sending emails.

I really like this idea; while it doesn’t suit my needs at all, I wanted to write about it.

The code can be downloaded from this address: https://dev.ybad.name/prose/.

I will briefly introduce how it works, but the README file explains it well: prose must be run on the mail server; upon receiving an email, an alias in /etc/mail/aliases pipes it into prose, which produces the html output.

On the security side, prose doesn’t use any external command, and on OpenBSD it uses the unveil and pledge features to reduce its privileges; unveil restricts the process’s file system access to the html output directory.

I would also like to congratulate prx, who demonstrates once again that writing good software isn’t exclusive to IT professionals.

Gaming on OpenBSD

Written by Solène, on 05 June 2020.
Tags: #openbsd #gaming

Comments on Mastodon

While no one would expect this, there are huge efforts from a small team to bring more games to OpenBSD. In fact, some commercial games now work natively, thanks to Mono or Java. There is no wine or Linux emulation layer in OpenBSD.

Here is a small list of the most well-known games that run on OpenBSD:

  • Northguard (RTS)
  • Dead Cells (Side scroller action game)
  • Stardew Valley (Farming / Roguelike)
  • Slay The Spire (Card / Roguelike)
  • Axiom Verge (Side scroller, metroidvania)
  • Crosscode (top view twin stick shooter)
  • Terraria (Side scroller action game with craft)
  • Ion Fury (FPS)
  • Doom 3 (FPS)
  • Minecraft (Sandbox - not working using latest version)
  • Tales Of Maj’Eyal (Roguelike with lot of things in it - open source and free)

I would also like to feature the recently made compatible games from the Zachtronics studio; those are ingenious puzzle games requiring efficiency. There are games involving assembly code, pseudo code, molecules etc…

  • Opus Magnum
  • Exapunks
  • Molek-Syntez

Finally, there are good RPGs running thanks to devoted developers spending their free time working on game engine reimplementations:

  • Elder Scroll III: Morrowind (openmw engine)
  • Baldur’s Gate 1 and 2 (gemrb engine)
  • Planescape: Torment (gemrb engine)

There is a Peertube (open source decentralized Youtube alternative) channel where I started publishing gaming videos recorded on OpenBSD. Now videos from other people are also published there. OpenBSD Gaming channel

The full list of running games is available on the Shopping guide webpage, including information on how they run, on which store you can buy them and whether they are compatible.

Big thanks to thfr@ who works hard to keep the shopping guide up to date and who made most of this possible. Many thanks to all the other people in the OpenBSD Gaming community :)

Note that it seems the latest Terraria release/update doesn’t work on OpenBSD yet.

Beautiful background pictures on OpenBSD

Written by Solène, on 20 May 2020.
Tags: #openbsd

Comments on Mastodon

While the title may appear quite strange, this article is about installing a package to get a new random wallpaper every time you start the X session!

First, you need to install a package named openbsd-backgrounds, which is quite large with a size of 144 MB. This package, made by Marc Espie, contains a lot of pictures shot by some OpenBSD developers.
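The installation is a one-liner:

# pkg_add openbsd-backgrounds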

You can automatically set a picture as a background when xenodm starts and prompts for your username by uncommenting a few lines in the file /etc/X11/xenodm/Xsetup_0:

Uncomment this part

if test -x /usr/local/bin/openbsd-wallpaper
then
/usr/local/bin/openbsd-wallpaper
fi

The command openbsd-wallpaper will display a different random picture on every screen (if you have multiple screens connected) every time you run it.

Communauté OpenBSD française

Written by Solène, on 17 May 2020.
Tags: #openbsd

Comments on Mastodon

This article is exceptionally in French because it’s about a French OpenBSD community; what follows is an English translation.

Hello everyone.

Exceptionally, I am publishing a post in French on my blog because I want to spread the word about the French community obsd4a.

For example, you can find almost the entire OpenBSD FAQ translated at this address.

On the site’s home page you will find links to the forum, the wiki, the blog, the mailing list, and also the information needed to join the IRC channel (#obsd4* on freenode).

https://openbsd.fr.eu.org/

New blog feature: Fediverse comments

Written by Solène, on 16 May 2020.
Tags: #fediverse #automation

Comments on Mastodon

I added a new feature to my blog today: when I post a new blog article, my dedicated Mastodon user https://bsd.network/@solenepercent publishes a Toot so people can discuss the content there.

Every article now contains a link to the toot if you want to discuss an article.

This is not perfect but a good trade-off I think:

  1. the website remains static and light (nothing is included, only one more link per blog post)
  2. people who would like to discuss it can do so in a known place instead of writing reactions on reddit or other places without a chance for me to answer
  3. this is not relying on proprietary services

Of course, if you want to give me feedback, I’m still happy to reply to emails or on IRC.

FreeBSD 12.1 on a laptop

Written by Solène, on 11 May 2020.
Tags: #freebsd #mate #laptop

Comments on Mastodon

Introduction

I’m using FreeBSD again on a laptop for various reasons, so expect to read more about FreeBSD here. This tutorial explains how to get a graphical desktop using FreeBSD 12.1.

I used a Lenovo Thinkpad T480 for this tutorial.

Intel graphics hardware support

If you have a recent Intel integrated graphics card (maybe less than 3 years old), you have to install a package containing the driver:

pkg install drm-kmod

and you also have to tell the system the correct path of the module (because another i915kms.ko file exists):

sysrc kld_list="/boot/modules/i915kms.ko"

Choose your desktop environment

Install Xfce

pkg install xfce

Then in your user ~/.xsession file you must append:

exec ck-launch-session startxfce4

Install MATE

pkg install mate

Then in your user ~/.xsession file you must append:

exec ck-launch-session mate-session

Install KDE5

pkg install kde5

Then in your user ~/.xsession file you must append:

exec ck-launch-session startplasma-x11

Setting up the graphical interface

You have to enable a few services to have a working graphical session:

  • moused to get laptop mouse support
  • dbus for hald
  • hald for hardware detection
  • xdm for display manager where you log-in

You can install them with the command:

pkg install xorg dbus hal xdm

Then you can enable the services at boot using the following commands, order is important:

sysrc moused_enable="yes"
sysrc dbus_enable="yes"
sysrc hald_enable="yes"
sysrc xdm_enable="yes"

Reboot or start the services in the same order:

service moused start
service dbus start
service hald start
service xdm start

Note that xdm will be in qwerty layout.

Power management

The installer should have prompted for the powerd service; if you didn’t activate it at that time, you can still enable it.

Check if it’s running

service powerd status

Enabling

sysrc powerd_enable="yes"

Starting the service

service powerd start

Webcam support

If you have a webcam and want to use it, some configuration is required in order to make it work.

Install the package webcamd; it displays all the instructions written below at install time.

pkg install webcamd

From here, append this line to the file /boot/loader.conf to load webcam support at boot time:

cuse_load="yes"

Add your user to the webcamd group so it will be able to use the device:

pw groupmod webcamd -m YOUR_USER

Enable webcamd at boot:

sysrc webcamd_enable="yes"

Now, you have to log out from your user for the group change to take place. And if you want the webcamd daemon to work now and not wait for the next reboot:

kldload cuse
service webcamd start
service devd restart

You should have a /dev/video0 device now. You can test it easily with the package pwcview.
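A quick test could look like this:

# pkg install pwcview
$ pwcview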

External resources

I found this blog very interesting; I wish I had found it before I struggled with all the configuration, as it explains how to install FreeBSD on the exact same laptop. The author explains how to make a transparent lagg0 interface for switching from ethernet to wifi automatically with a failover pseudo device.

https://genneko.github.io/playing-with-bsd/hardware/freebsd-on-thinkpad-t480/

Enable dark mode on Firefox

Written by Solène, on 04 May 2020.
Tags: #firefox

Comments on Mastodon

Some websites (like this one) now offer two different themes: light and dark.

Dark themes are reported to be better for the eyes and to reduce battery usage on mobile devices, because displaying darker colors requires less light and hence less energy. The gain is optimal on OLED devices but it also works on classic LCD screens.

While on Windows and MacOS there is a global setting for the user interface where you choose whether your system is in light or dark mode, a setting used by lots of applications supporting dark/light themes, on Linux and BSD (and other) operating systems there is no such setting, and your web browser will keep displaying the light theme all the time.

Fortunately, it can be fixed in Firefox as explained in the documentation.

To make it short: in the about:config special Firefox page, create a new key ui.systemUsesDarkTheme with a number value of 1. The Firefox about:config page should turn dark immediately, and Firefox will then try to use dark themes when they are available.
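If you prefer a configuration file over clicking through about:config, the same preference can go into a user.js file at the root of your Firefox profile directory (the profile path varies between installations):

user_pref("ui.systemUsesDarkTheme", 1);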

You should note that, as explained in the Mozilla documentation, if you have the key privacy.resistFingerprinting set to true, the dark mode can’t be used. It seems dark mode and privacy can’t belong together for some reason.

Many thanks to https://tilde.zone/@andinus who pointed this out to me after I overlooked that page and searched for a long time, without result, how to make Firefox display websites using the dark theme.

Aggregate internet links with mlvpn

Written by Solène, on 28 March 2020.
Tags: #openbsd68

Comments on Mastodon

In this article I’ll explain how to aggregate internet access bandwidth using mlvpn software. I struggled a lot to set this up so I wanted to share a how-to.

Pre-requisites

mlvpn is meant to be used with DSL / fiber links, not wireless or 4G links with variable bandwidth or packet loss.

mlvpn needs to run on a server, which will provide the public internet access, and on the client on which you want to aggregate the links. This is like running multiple VPNs to the same remote server, one per link, and aggregating them.

A multi-WAN round-robin / load balancer setup doesn’t allow stacking bandwidth but doesn’t require a remote server; depending on what you want to do, this may be enough and mlvpn may not be required.

mlvpn should be OS agnostic between client and server, but I only tried between two OpenBSD hosts; your setup may differ.

Some network diagram

Here is a simple network: the client has access to 2 ISPs through two ethernet interfaces.

em0 and em1 will have to be on different rdomains (a feature to separate routing tables).

Let’s say the public ip of the server is 1.2.3.4.

                [internet]
                    ↑
                    | (public ip on em0)
             #-------------#
             |             |
             |   Server    |
             |             |
             #-------------#
                |       |
                |       |
                |       |
                |       |
    (internet)  |       | (internet)
    #-------------#   #-------------#
    |             |   |             |
    |   ISP 1     |   |  ISP 2      |
    |             |   |             |  (you certainly don't control those)
    #-------------#   #-------------#
                |       |
                |       |
  (dsl1 via em0)|       | (dsl2 via em1)
             #-------------#
             |             |
             |   Client    |
             |             |
             #-------------#

Network configuration

As said previously, em0 and em1 must be on different rdomains; this can easily be done by adding rdomain 1 and rdomain 2 to the interface configurations.

Example in /etc/hostname.em0

rdomain 1
dhcp
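And the same for the second link in /etc/hostname.em1:

rdomain 2
dhcp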

mlvpn installation

On OpenBSD the installation is as easy as pkg_add mlvpn (it should work starting from 6.7 because it required patching).

mlvpn configuration

Once the network configuration is done on the client, there are 3 steps to do to get aggregation working:

  1. mlvpn configuration on the server
  2. mlvpn configuration on the client
  3. activating NAT on the client

Server configuration

On the server we will use the UDP ports 5080 and 5081.

Connection speeds must be defined in bytes per second to allow mlvpn to correctly balance the traffic over the links; this is really important.

The line bandwidth_upload = 1468006 is the maximum download bandwidth of the client on the specified link, in bytes. If you have a download speed of 1.4 MB/s then you can choose a value of 1.4*1024*1024 => 1468006.

The line bandwidth_download = 102400 is the maximum upload bandwidth of the client on the specified link, in bytes. If you have an upload speed of 100 kB/s then you can choose a value of 100*1024 => 102400.
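A quick way to compute those values on the command line (bc keeps the decimals, round the result down):

$ echo "1.4 * 1024 * 1024" | bc
1468006.4
$ echo "100 * 1024" | bc
102400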

The password line must be a very long random string, it’s a shared secret between the client and the server.

# config you don't need to change
[general]
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
protocol = "tcp"
loglevel = 4
mode = "server"
tuntap = "tun"
interface_name = "tun0"
cleartext_data = 0
ip4 = "10.44.43.2/30"
ip4_gateway = "10.44.43.1"

# things you need to change
password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"

[dsl1]
bindhost = "1.2.3.4"
bindport = 5080
bandwidth_upload = 1468006
bandwidth_download = 102400

[dsl2]
bindhost = "1.2.3.4"
bindport = 5081
bandwidth_upload = 1468006
bandwidth_download = 102400

Client configuration

The password value must match the one on the server, and the values of ip4 and ip4_gateway must be reversed compared to the server configuration (as they are in the following example).

The bindfib lines must match the rdomain values of your interfaces.

# config you don't need to change
[general]
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
loglevel = 4
mode = "client"
tuntap = "tun"
interface_name = "tun0"
ip4 = "10.44.43.1/30"
ip4_gateway = "10.44.43.2"
timeout = 30
cleartext_data = 0

password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"

[dsl1]
remotehost = "1.2.3.4"
remoteport = 5080
bindfib = 1

[dsl2]
remotehost = "1.2.3.4"
remoteport = 5081
bindfib = 2

NAT configuration (server side)

As with every VPN you must enable packet forwarding and create a pf rule for the NAT.

Enable forwarding

Add this line in /etc/sysctl.conf:

net.inet.ip.forwarding=1

You can enable it now with sysctl net.inet.ip.forwarding=1 instead of waiting for a reboot.

In pf.conf you must allow the UDP ports 5080 and 5081 on the public interface and enable NAT; this can be done with the following lines in pf.conf, but you should obviously adapt them to your configuration.

# allow NAT on VPN
pass in on tun0
pass out quick on em0 from 10.44.43.0/30 to any nat-to em0

# allow mlvpn to be reachable
pass in on egress inet proto udp from any to (egress) port 5080:5081

Start mlvpn

On both server and client you can run mlvpn with rcctl:

rcctl enable mlvpn
rcctl start mlvpn

You should see a new tun0 device on both systems and be able to ping each side through tun0.

Now, on the client you have to add a default gateway through the mlvpn tunnel with the command route add -net default 10.44.43.2 (adapt if you use other addresses). I still haven’t found how to automate it properly.

Your client should now use both WAN links and be visible with the remote server’s public IP address.

mlvpn can be used with more links; you only need to add new sections. mlvpn also supports IPv6, but I didn’t take the time to find out how to make it work, so if you are comfortable with IPv6 it may be easy to set up using the variables ip6 and ip6_gateway in mlvpn.conf.

OpenBSD -current - Frequently Asked Questions

Written by Solène, on 27 March 2020.
Tags: #openbsd

Comments on Mastodon

Hello, as there are so many questions about OpenBSD -current on IRC, Mastodon or reddit, I’m writing this FAQ in the hope it will help people.

The official FAQ already contains answers about -current like Following -current and using snapshots and Building the system from sources.

What is OpenBSD -current?

OpenBSD -current is the development version of OpenBSD. Lots of people use it for everyday tasks.

How to install OpenBSD -current?

OpenBSD -current refers to the latest version built from sources obtained with CVS; however, it’s also possible to get a pre-built system (a snapshot), usually built and pushed to the mirrors every 1 or 2 days.

You can install OpenBSD -current by getting an installation media like usual, but from the path /pub/OpenBSD/snapshots/ on the mirror.

How do I upgrade from -release to -current?

There are two ways to do so:

  1. Download the bsd.rd file from the snapshots directory and boot it to upgrade, like for a -release to -release upgrade (see the sketch after this list)
  2. Run the sysupgrade -s command as root; this will basically download all the sets under /home/_sysupgrade and boot on bsd.rd with an autoinstall(8) config.
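For the first method, here is a minimal sketch assuming amd64 and the cdn mirror; adapt the architecture to yours, then reboot and type bsd.rd at the boot> prompt:

# ftp -o /bsd.rd https://cdn.openbsd.org/pub/OpenBSD/snapshots/amd64/bsd.rd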

How do I upgrade my -current snapshot to a newer snapshot?

Exactly the same process as going from -release to -current.

Can I downgrade to a -release if I switch to -current?

No.

What issues can I expect in OpenBSD -current?

There are a few possible issues that one can expect:

Out of sync packages

If a library gets updated in the base system and you want to update packages, they won’t be installable until the packages are rebuilt with that new library; this usually takes 1 to 3 days.

This only creates issues if you want to install a package you don’t already have.

The other way around, you can have an old snapshot whose packages are not installable because the libraries the packages link to are newer than what is available in your system; in this case you have to upgrade the snapshot.

Snapshot sets being updated on the mirror

If you download the sets on the mirror to update your -current version, you may have an issue with the sha256 sums: this is because the mirror is being updated and the sha256 file is the first to be transferred, so the sets you are downloading are not the ones the sha256 file refers to.

Unexpected system breakage

Sometimes, very rarely (maybe 2 or 3 times a year?), some snapshots are borked and will prevent the system from booting or lead to regular crashes. In that case, it’s important to report the issue with the sendbug utility.

You can fix this by using an older snapshot from the archives server, and prevent it from happening by reading the bugs@ mailing list before updating.

Broken package

Sometimes, a package update will break the package itself or some other packages. This is often quickly fixed for popular packages, but for some niche packages you may be the only one using it on -current and the only one who can report it.

If you find breakage in something you use, it may be a good idea to report the problem to the ports@openbsd.org mailing list if nobody did before. By doing so, the issue will be fixed and the next -release users will be able to install a working package.

Is -current stable enough for a server or a workstation?

It’s really up to you. Developers all use -current and are forbidden to break it, so the system should be totally usable for everyday use.

What may be complicated on a server is keeping it updated regularly and facing issues that require troubleshooting (like a major database upgrade that was missing a quirk).

For a workstation I think it’s pretty safe as long as you can deal with packages that can’t be installed until they are in sync.

Advice for working remotely from home

Written by Solène, on 17 March 2020.
Tags: #life

Comments on Mastodon

Hello,

A few days ago, as someone who has been working remotely for 3 years, I published some tips to help new remote workers feel more confident in their new workplace: home.

I’ve been told I should publish it on my blog so the information is easier to share, so here it is.

  • dedicate some space to your work area; if you use a laptop, try to dedicate a table corner to it, so you don’t have to put away your “work station” all the time

  • keep track of the time, remember to drink and stand up / walk every hour; you can set an alarm every hour as a reminder, or use software like http://www.workrave.org/ or https://github.com/hovancik/stretchly which are very useful. If you are alone at home, you may lose track of time, so this is important.

  • don’t forget to keep your phone at hand if you use it to communicate with colleagues. Remember that they may only know your phone number, so it’s their only way to reach you

  • keep some routine for lunch, you should eat correctly and take the time to do so, avoid eating in front of the computer

  • don’t work too much after work hours; do as you would at your workplace: leave when you feel it’s time to and shut down everything related to work. It’s a common trap to want to do more and keep an eye on mails, don’t fall into it.

  • depending on your social skills, work field and colleagues, speak with others (phone, text whatever), it’s important to keep social links.

Here are some other tips from Jason Robinson

  • after work, distance yourself from the work time by taking a short walk outside, cooking, doing laundry, or anything that gets you away from the work area and cuts the flow.

  • take at least one walk outside if possible during the day time to get fresh air.

  • get a desk that can be adjusted for both standing and sitting.

I hope this advice will help you get through the crisis. Take care of yourselves.

A day as an OpenBSD developer

Written by Solène, on 19 February 2020.
Tags: #life #openbsd

Comments on Mastodon

This is a little story that happened a few days ago; it explains well how I usually get involved in ports in OpenBSD.

1 - Lurking into ports/graphics/

At first, I was looking at the various ports in the graphics category, searching for an image editor that would run correctly on my offline laptop. Grafx2 is laggy when using the zoom mode and GIMP won’t run, so I just open ports randomly to read their pkg/DESCR file.

This way, I often find gems I reuse later; sometimes I have less luck and only try 20 ports which are useless to me. It happens that I find issues in ports by looking around randomly like this…

2 - Find the port « comix »

Then, the second or third port I look at is « comix »; here is its DESCR file.

Comix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer. It
reads images in ZIP, RAR or tar archives (also gzip or bzip2 compressed)
as well as plain image files.

That looked awesome: I have a lot of books as PDFs I want to read, but it’s not convenient in a “normal” PDF reader, so maybe comix would help!

3 - Using comix

Once comix was compiled (a mix of python and gtk), I started it and got errors opening PDFs… I started it again from the console, and in the output I got the explanation that PDF files are not usable in comix.

Then I read about CBZ and CBT files: they are archives (zip or tar) containing pictures, definitely not what a PDF is.

4 - mcomix > comix

After a few searches on the Internet, I found that the last comix release is from 2009 and it never supported PDF, so nothing wrong here; but I also found that comix has a fork named mcomix.

mcomix forked from comix a long time ago to fix issues and add support for new features (like PDF support); while its last release is from 2016, it works and still receives commits (the last is from late 2019). I’m going for mcomix!

5 - Installing mcomix from ports

The best way to install a program on OpenBSD is to make a port, so it’s correctly packaged, can be deinstalled, and can be submitted to the ports@ mailing list later.

I copied the comix folder into mcomix, used a brain-dead sed command to replace all occurrences of comix with mcomix, and it mostly worked! I won’t explain the little details, but I got mcomix to work within a few minutes and I was quite happy! Fun fact: the comix port Makefile was mentioning mcomix as a suggested upgrade.

6 - Enjoying a CBR reader

With mcomix installed, I was able to read some PDFs; it was a good experience and I was pretty happy with it. I spent a few hours reading, moments after mcomix was installed.

7 - mcomix works but not all the time

After reading 2 long PDFs, I got issues with the third: some pages were not rendered and not displayed. After digging into this issue a bit, I learned about mcomix internals. Reading a PDF is done by rendering every page of the PDF using the mutool binary from the mupdf software, which is quite CPU intensive, and for some reason in mcomix the command execution fails, while I can run the exact same command a hundred times with no failure. Worse, the issue is not reproducible in mcomix: sometimes some pages fail to render, sometimes not!

8 - Time to debug some python

I really want to read those PDFs, so I took my favorite editor and started debugging some python, adding more debug output (mcomix has a -W parameter to enable debug output, which is very nice) to try to understand why it fails at getting the output of a working command.

Sadly, my python foo is too low and I wasn’t able to pinpoint the issue. I just found that it fails, sometimes, but I wasn’t able to understand why.

9 - mcomix on PowerPC

While mcomix is clunky with PDFs, I wanted to check if it was working on PowerPC. It took some time to get all the dependencies installed on my old computer, but finally I got mcomix displayed on the screen… and dying on PDF loading! The crash seems related to GTK and I don’t want to touch that; nobody will want to patch GTK for that anyway, so I lost hope there.

10 - Looking for alternative

Once I knew about mcomix, I was able to search the Internet for alternatives to it and also for CBR readers. A program named zathura seems well known here and we have it in the OpenBSD ports tree.

The weird thing is that it comes with two different PDF plugins, one named mupdf and the other poppler. I quickly tried on my amd64 machine and zathura was working.

11 - Zathura on PowerPC

As Zathura was working nicely on my main computer, I installed it on the PowerPC, first with the poppler plugin. I was able to view PDFs, but installing this plugin pulled in so many package dependencies it was a bit sad. I deinstalled the poppler PDF plugin and installed the mupdf plugin.

I opened a PDF and… error. I tried again, starting zathura from the terminal, and I got the message that PDF is not a supported format, with a lot of lines about the mupdf.so file not being usable. The mupdf plugin works on amd64 but is not usable on powerpc; this is a bug I need to report. I don’t understand why this issue happens, but it’s there.

12 - Back to square one

It seems that reading PDFs is a mess, so why couldn’t I convert the PDF to CBT files and then use any CBT reader out there, and not have to deal with that PDF madness!!

13 - Use big calibre for the job

I found on the Internet that Calibre is the most used tool to convert a PDF into CBT files (or into something else, but I don’t really care here). I installed calibre, which is not lightweight, started it and wanted to change the default library path; the software hung when it displayed the file dialog. This wouldn’t stop me: I restarted calibre, kept the default path, clicked on « Add a book » and then it hung again on the file dialog. I reported this issue on the ports@ mailing list, but it didn’t solve the issue, and this means calibre is not usable.

14 - Using the command line

After all, CBT files are images in a tar file; it should be easy to reproduce the mcomix process involving mutool to render pictures and make a tar of that.

IT WORKED.

I found two ways to proceed: one is extremely fast but may not put pages in the correct order, the second requires CPU time.

Making CBT files - easiest process

The first way is super easy. It requires mutool (from the mupdf package) and will extract the pictures from the PDF, given it’s not a vector PDF; I’m not sure what would happen with those. The issue is that, in the PDF, the embedded pictures have a name (a number, from the few examples I checked), and it’s not necessarily in the correct order. I guess this depends on how the PDF was made.

$ mutool extract The_PDF_file.pdf
$ tar cvf The_PDF_file.tar *jpg

That’s all you need to have your CBT file. In my PDF there were jpg files, but it may be png in others, I’m not sure.

Making CBT files - safest process (slow)

The other way of making pictures out of the PDF is the one used in mcomix: call mutool to render each page as a PNG file using the width/height/DPI you want. That’s the tricky part: you may not want to produce pictures with a larger resolution than the original pictures (and mutool won’t automatically help you with this) because you won’t get any benefit. The same goes for the DPI. I think this could be done automatically with a proper script checking each PDF page’s resolution and using mutool to render the page at the exact same resolution.

As a rule of thumb, it seems that rendering using the same width as your screen is enough to produce pictures of the correct size. If you use larger values it’s not really an issue, but it will create bigger files and take more rendering time.

$ mutool draw -w 1920 -o page%d.png The_PDF_file.pdf
$ tar cvf The_PDF_file.tar page*.png

You will get PNG files for each page, correctly numbered, with a width of 1920 pixels. Note that instead of tar, you can use zip to create a zip file.

15 - Finally reading books again

After all this LONG process, I was finally able to read my PDF with any CBR reader out there (even on a phone), and once the conversion is done, viewing files uses no cpu, unlike mcomix which renders all the pages when you open a file.

I have to use zathura on PowerPC, even if I like it less due to the continuous pages display (it can’t be turned off), but mcomix definitely works great when not dealing with PDF. I’m still unsure it’s worth committing mcomix to the ports tree if it fails randomly on random pages with PDF.

16 - Being an open source activist is exhausting

All I wanted was to read a PDF book with a warm cup of tea at hand. It ended in learning new things, debugging code, making ports, submitting bugs and writing a story about all of this.

Daily life with the offline laptop

Written by Solène, on 18 February 2020.
Tags: #life #disconnected

Comments on Mastodon

Last year I wrote a huge blog post about an offline laptop attempt. It kinda worked, but I wasn’t really happy with the setup, the needs and the goals.

So, it is back and I use it now, and I am very happy with it. This article explains my experience solving my needs; I would appreciate not receiving advice or judgment here.

State of the need

Internet is infinite, my time is not

Having access to the Internet is a gift, I can access anything or anyone. But this comes with a few drawbacks. I can waste my time on anything, which is not particularly helpful. There is so much content that I only scratch the surface of things, knowing it will still be there when I need it, and jump to something else. The amount of data is impressive, one human can’t absorb that much; we have to deal with it.

I used to spend time on what I had, and now I just spend time on what exists. An example of this statement: instead of reading books I own, I keep looking for which book I may want to read one day, and meanwhile no books get read.

Network socialization requires time

I say “network socialization” to avoid the easy “social network” phrase. I speak with people on IRC (in real time most of the time), I help people on reddit, and I read and write mail most of the time for OpenBSD development.

Don’t get me wrong, I am happy doing this, but I always keep an eye on each of them, trying to help people as soon as they ask a question, and this is really time consuming. I spend a lot of time jumping from one thing to another to keep myself updated on everything, and so I am too distracted to get anything done.

In my first attempt of the offline laptop, I wanted to get my mails on it, but it was too painful to download everything and keep mails in sync. Sending emails would have required network too, it wouldn’t be an offline laptop anymore.

IT as a living and as a hobby

On top of this, I am working in IT so I spend my day doing things over the Internet and after work I spend my time on open source projects. I can not really disconnect from the Internet for both.

How I solved this

First step was to define « What do I like to do? », and I came with this short list:

  • reading
  • listening to music
  • playing video games
  • writing things
  • learning things

One could say I don’t need a computer to read books, but I have lots of ebooks and PDFs about many subjects. The key is to load everything you need onto the computer beforehand, because otherwise it becomes tempting to connect the device to the Internet for a bit of this or that.

I use a very old computer with a PowerPC CPU (1.3 GHz single core) and 512MB of ram. I like that old computer, and a slower computer forbids doing multiple things at the same time, which helps me stay focused.

Reading files

For reading, I found zathura and comix (and its fork mcomix) very useful for huge PDFs; the scrolling customization makes those tools pleasant.

Listening to music

I buy my music as FLAC files and download it, which doesn’t require any internet access except at purchase time, so nothing special there. I use the moc player, which is easy to use, has a lot of features and supports FLAC (on powerpc).

Video games

Emulation is a nice way to play lots of games on OpenBSD; on my old computer it works up to game boy advance / super nes / megadrive, which should allow me to replay lots of games I own.

We also have a lot of nice games in ports, but my computer is too slow to run them or they won’t work on powerpc.

Encyclopedia - Wikipedia

I’ve set up a local wikipedia replica like I explained in a previous article, so anytime I need to find out about something, I can ask my local wikipedia. It’s always available. This is the best I found for a local encyclopedia, and it works well.

Writing things

Since I started the offline computer experience, I started a diary. I never felt the need to do so but I wanted to give it a try. I have to admit summing up what I achieved in the day before going to bed is a satisfying experience and now I continue to update it.

You can use any text editor you want. There is special software with specific features, like rednotebook or lifeograph, which support embedded pictures or on-the-fly markdown rendering. But a text file and your favorite editor also do the job.

I also write some articles of this blog. It’s easy to do so as articles are text files in a git repository. When I finish and I need to publish, I get network and push changes to the connected computer which will do the publishing job.

Technical details

I will go fast on this. My setup is an old Apple iBook G4 with a 1024x768 screen (I love this 4:3 ratio) running OpenBSD.

The system firewall pf is configured to block all incoming connections and to only allow outgoing TCP to port 22, because when I need to copy files I use ssh / sftp. The /home partition is encrypted using a softraid crypto device; full disk encryption is not supported on powerpc.
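As a rough sketch, a pf.conf for this use case could be as small as the following; this is only an illustration of the idea, my real ruleset is a little longer:

# block everything in both directions
set skip on lo
block all
# only allow outgoing ssh / sftp for the file copies
pass out proto tcp to port 22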

The experience is even more enjoyable with a warm cup of tea on hand.

Cycling / bike trips and opensource

Written by Solène, on 06 February 2020.
Tags: #biking

Comments on Mastodon

Introduction

I started cycling seriously a few months ago, and as I love having statistics I needed to gather some. I found a lot of devices on the market, but I preferred using opensource tools and not relying on any vendor.

The best option for me was reusing a 6 year old smartphone whose SIM card bus is broken: the phone loses the sim card when it is shaken a little and requires a reboot to find it again. I am happy I found a way to reuse it.

Tip: turn ON airplane mode on the smartphone while riding; even without a SIM card it will try to get network, which drains the battery and emits useless radio waves. In case of emergency, just disable airplane mode to get access to your local emergency call number. GPS is a passive module and doesn’t require any network.

This smartphone has a GPS receiver, which is enough for recording my position as often as I want. Using a suitable GPS application from the F-droid store and a program for sftp transfers, I can record data and transfer it easily to my computer.

The most common file format for recording GPS positions is the GPX format: a simple XML file containing all positions with their timestamps, sometimes with a bit more information like the speed at that time. But given you have all the positions, software can calculate the speed between each of them anyway.
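To give an idea of the format, here is a tiny hand-written GPX file with two made-up positions; real recordings just contain thousands of trkpt elements:

<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="example">
  <trk>
    <trkseg>
      <trkpt lat="47.2184" lon="-1.5536">
        <ele>12.0</ele>
        <time>2020-02-06T10:00:00Z</time>
      </trkpt>
      <trkpt lat="47.2190" lon="-1.5540">
        <ele>12.5</ele>
        <time>2020-02-06T10:00:05Z</time>
      </trkpt>
    </trkseg>
  </trk>
</gpx>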

Android GPS Software

It seems GPS software for recording GPX tracks is becoming popular; in the last months a lot of new software appeared, which is a good thing. I haven’t tested all of them, but they tend to be easier to use and minimalistic.

OpenStreetMap app - OSMand~

You can install it from F-droid, an alternative store for Android containing only opensource software; it’s the fully free (and opensource) version, compared to the one you can find on the Android store.

This is OpenStreetMap’s official software; it’s full of features and quite heavy. You can download maps for navigation, record tracks, view track statistics, contribute to OSM, get Wikipedia information for an area, and all of this while being OFFLINE. Not only on my bike, I use it all the time while walking or in my car.

Recorded GPX can be found in the default path Android/data/net.osmand.plus/files/tracks/rec/

Trekarta

I found another piece of software named Trekarta, which is a lot lighter than OSMand but only focuses on recording your tracks. I would recommend it if you don’t want any other features, have a really old android compatible phone, or are low on disk space.

Analyzing GPX files / keep track of everything

I found Turtlesport, opensource software written in Java whose last release was years ago but which still works out of the box, given you have a java implementation installed. You can find it at the following link.

/usr/local/bin/jdk-1.8.0/bin/java -jar turtlesport.jar

Turtlesport is a nice tool for viewing tracks; it’s not only for cycling and can be used for various sports. The process is the following:

  • define sports you do (bike, skateboard, hiking etc..)
  • define equipments you use (bike, sport shoes, skis etc..)
  • import GPX files and tell Turtlesport which sport and equipment it’s related to

Then, for each GPX file, you will be able to see it on a map, see elevation and speed of that track, but you can also make statistics per sport or equipment, like “How many km I ride with that bike over last year, per week”.

If you don’t have a GPX file, you can still add a new trip into the database by drawing the path on a map.

In the equipments view, you will see how many kilometers each was used for, with an alert feature if the equipment goes beyond a defined wear limit. I’m not sure about the use of this, maybe you want to know your shoes shouldn’t be used for more than 2000 km?? Maybe it’s possible to use it for maintenance purposes: say your bike has a wear limit of 1000 km, when you reach it you get an alert, do your maintenance and set the new limit to 2000 km.

Viewing GPX files

From OpenBSD 6.7 you can install the gpxsee package to open multiple GPX files; they will be shown on a map, each track with a different colour, with nice charts displaying the elevation or speed over the trip for every track.
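Viewing several tracks is as simple as passing the files on the command line; the paths here are just an example:

$ doas pkg_add gpxsee
$ gpxsee ~/gpx/*.gpx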

Before gpxsee I was using the GIS (Geographical Information System) tool qgis, but it is really heavy and complicated. Still, if you want to work on your recorded data, like doing complex statistics, it’s a powerful tool once you know how to use it.

I like to use it for gamification purposes: I’m trying to ride every road around my home, and viewing all GPX files at the same time allows me to plan the next trip through places I’ve never been.

Miscellaneous

Create an unique GPX file from all records

It is possible to merge GPX files into one giant file using gpsbabel. I was using this before having gpxsee, but I have no idea what you can do with the result, as it creates one big spaghetti track. I chose to keep the command here in case it’s useful for someone one day:

gpsbabel -s -r -t -i GPX $(ls /path/to/files/*gpx | awk '{ printf "-f %s ", $1 }') -o GPX -F - > sum.gpx

Cycling using electronic devices

Of course, if you are a true racing cyclist, GPX files will not be enough for you; you will certainly want devices such as a power meter or a cadence meter and an on-board device to use them. I can’t help much with hardware.

However, you may want to give a try to Golden Cheetah to import all your data from various devices and make complex statistics from it. I tried it and I had no idea about the purpose of 90% of the features.

Have fun

Don’t forget to have fun and do not get obsessed by numbers!

Common LISP awk macro for easy text file operations

Written by Solène, on 04 February 2020.
Tags: #awk #lisp

Comments on Mastodon

I like Common LISP and I also like awk. Dealing with text files in Common LISP is often painful, so I wrote a small awk-like Common LISP macro, which helps a lot when dealing with text files.

Here is the implementation. I used the uiop package for its split-string function; it comes with sbcl. But it's possible to write your own split-string or reuse the infamous split-str function shared on the Internet.

(defmacro awk(file separator &body code)
  "allow running code for each line of a text file,
   giving access to NF and NR variables, and also to
   fields list containing fields, and line containing $0"
    `(progn
       (let ((stream (open ,file :if-does-not-exist nil)))
         (when stream
           (loop for line = (read-line stream nil)
              counting t into NR
              while line do
                (let* ((fields (uiop:split-string line :separator ,separator))
                       (NF (length fields)))
                  ,@code))))))

It's interesting that the "do" in the loop could be replaced with a "collect", allowing reuse of the awk output as a list in another function. A quick example I have in mind is this:

;; equivalent of awk '{ print NF }' file | sort | uniq
;; for counting how many differents fields long line we have
(uniq (sort (awk "file" " " NF)))

Now, here are a few examples of usage of this macro, I've written the original awk command in the comments in comparison:

;; numbering lines of a text file with NR
;; awk '{ print NR": "$0 }' file.txt
;;
(awk "file.txt" " "
     (format t "~a: ~a~%" NR line))

;; display NF-1 field (yes it's -2 in the example because -1 is last field in the list)
;; awk -F ';' '{ print NF-1 }' file.csv
;;
(awk "file.csv" ";"
     (print (nth (- NF 2) fields)))

;; filtering lines (like grep)
;; awk '/unbound/ { print }' /var/log/messages
;;
(awk "/var/log/messages" " "
     (when (search "unbound" line)
       (print line)))

;; printing the 4th field (nth is 0-indexed, hence 3)
;; awk -F ';' '{ print $4 }' data.csv
;;
(awk "data.csv" ";"
     (print (nth 3 fields)))

Using the OpenBSD ports tree with dedicated users

Written by Solène, on 11 January 2020.
Tags: #openbsd68

Comments on Mastodon

If you want to contribute to the OpenBSD ports collection you will want to enable the PORTS_PRIVSEP feature. When this variable is set, the ports system will use dedicated users for some tasks.

Source tarballs will be downloaded by the user _pfetch and all compilation and packaging will be done by the user _pbuild.

Those users are created at system install time, and pf has a default rule preventing the _pbuild user from accessing the network. This stops ports from doing network stuff during builds, and this is what you want.

This adds significant security to the porting process: any malicious code run while a port is being compiled is mostly harmless.

In order to enable this feature, a few changes must be made.

The file /etc/mk.conf must contain:

PORTS_PRIVSEP=yes
SUDO=doas

Then, /etc/doas.conf must allow your user to become _pfetch and _pbuild:

permit keepenv nopass solene as _pbuild
permit keepenv nopass solene as _pfetch
permit keepenv nopass solene as root

If you don’t want to use the last line, there is an explanation in the bsd.port.mk(5) man page.

Finally, within the ports tree, some permissions must be changed.

# chown -R _pfetch:_pfetch /usr/ports/distfiles
# chown -R _pbuild:_pbuild /usr/ports/{packages,plist,pobj,bulk}

If the directories don’t exist yet on your system (this is the case on a fresh ports checkout / untar), you can create them with these commands:

# install -d -o _pfetch -g _pfetch /usr/ports/distfiles
# install -d -o _pbuild -g _pbuild /usr/ports/{packages,plist,pobj,bulk}

Now, when you run a command in the ports tree, privileges should be dropped to the corresponding users.
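You can verify this by watching who owns the processes during a build; for instance with any port (sysutils/ttyplot taken here as an arbitrary example):

$ cd /usr/ports/sysutils/ttyplot
$ make fetch    # the download should run as _pfetch
$ make build    # the compilation should run as _pbuild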

Using rsnapshot for easy backups

Written by Solène, on 10 January 2020.
Tags: #openbsd68

Comments on Mastodon

Introduction

rsnapshot is a handy tool to manage backups using rsync and hard links on the filesystem. rsnapshot will copy folders and files, but it avoids duplication across backups by using hard links for files which have not changed.

This kinda creates snapshots of the folders you want to back up, only using rsync. It’s very efficient and easy to use, and getting files back from backups is really easy as they are stored as plain files under the rsnapshot backup directory.
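You can see the hard links at work by comparing inode numbers of an unchanged file in two snapshots (the snapshot naming is explained below); with hypothetical paths, the same inode number means the data is stored only once:

$ ls -i hourly.0/myfiles/notes.txt hourly.1/myfiles/notes.txt
412345 hourly.0/myfiles/notes.txt
412345 hourly.1/myfiles/notes.txt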

Installation

Installing rsnapshot is very easy, on most systems it will be in your official repository packages.

To install it on OpenBSD: pkg_add rsnapshot (as root)

Configuration

Now you may want to configure it. On OpenBSD you will find a template in /etc/rsnapshot.conf that you can edit for your needs (you can make a backup of it first if you want to start over). As stated in big letters (as big as they can be displayed in a terminal) at the top of the sample configuration file, values must be separated by TABS and not spaces. I’ve made the mistake more than once, don’t forget to use tabs.

I won’t explain all the options, only the most important ones.

The variable snapshot_root is where you want to store the backups. Don’t put that directory inside a directory you will back up (that would end in an infinite loop).

The backup variable tells rsnapshot what you want to back up from your system and into which directory inside snapshot_root.

Here are a few examples:

backup  /home/solene/   myfiles/
backup  /home/shera/Documents   shera_files/
backup  /home/shera/Music   shera_files/
backup  /etc/   etc/
backup  /var/   var/    exclude=logs/*

Be careful with trailing slashes in paths, they work the same as with rsync: /home/solene/ means the target directory will contain the content of /home/solene/, while /home/solene will copy the folder solene itself into the target directory, so you end up with target_directory/solene/the_files_here.

The retain variables are very important, they define how rsnapshot keeps your data. In the example you will see alpha, beta, gamma, but it could be hour, day, week, or foo and bar. It’s only a name that rsnapshot will use to name your backups, and that you will use to tell rsnapshot which kind of backup to do. Now, I must explain how rsnapshot actually works.

How it works

Let’s go for a straightforward configuration. We want a backup every hour for the last 24 hours, a backup every day for the past 7 days, and 3 manual backups that we start by hand.

We will have this in our rsnapshot configuration

retain  hourly  24
retain  daily   7
retain  manual  3

but how does rsnapshot know how to do what? The answer is that it doesn’t.

In root user crontab, you will have to add something like this:

# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly

# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily

and then, when you want to do a manual backup, just start rsnapshot manual

Every time you run rsnapshot for a “kind” of backup, the newest backup is named like hourly.0 in the rsnapshot root directory, and every older backup is shifted by one. The directory getting a number higher than the count in the retain line is deleted.
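With the configuration above, the snapshot_root directory ends up looking like the following, and restoring a file is a plain cp (paths here are hypothetical):

$ ls /backups/
daily.0   daily.1   daily.2   hourly.0   hourly.1   hourly.2   manual.0
$ cp /backups/daily.1/myfiles/notes.txt ~/notes.txt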

New to crontab?

If you never used crontab, I will share two important things to know about it.

Use MAILTO="" if you don’t want to receive a mail with the output of every script started by cron.

Use a PATH containing /usr/local/bin/ because it is not present in the default cron PATH. Instead of setting PATH you can also use full binary paths in the crontab, like /usr/local/bin/rsnapshot daily.

You can edit the current user crontab with the command crontab -e.

Your crontab may then look like:

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin
MAILTO=""
# comments are allowed in crontab
# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly
# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily

Crop a video using ffmpeg

Written by Solène, on 20 December 2019.
Tags: #ffmpeg

Comments on Mastodon

You may need to crop a video, which means reducing the video to a rectangular area of it, trimming away the parts you don’t want.

This is possible with ffmpeg using the video filter crop. To make the example more readable, I replaced values with variables names:

  • WIDTH = width of output video
  • HEIGHT = height of output video
  • START_LEFT = relative position of the area compared to the left, left being 0
  • START_TOP = relative position of the area compared to the top, top being 0

So the actual command looks like:

ffmpeg -i input_video.mp4 -filter:v "crop=$WIDTH:$HEIGHT:$START_LEFT:$START_TOP" output_video.mp4

If you want to crop the video to get a 320x240 video from the top-left position 500,100 the command would be

ffmpeg -i input_video.mp4 -filter:v "crop=320:240:500:100" output_video.mp4
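If your ffmpeg package also ships ffplay, you can preview the crop area before encoding anything; the filter syntax is the same:

ffplay -vf "crop=320:240:500:100" input_video.mp4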

Separate or merge audio and video using ffmpeg

Written by Solène, on 20 December 2019.
Tags: #ffmpeg

Comments on Mastodon

Extract audio and video (separation)

If for some reasons you want to separate the audio and the video from a file you can use those commands:

ffmpeg -i input_file.flv -vn -acodec copy audio.aac

ffmpeg -i input_file.flv -an -vcodec copy video.mp4

Short explanation:

  • -vn means no video, so you discard the video stream
  • -an means no audio, so you discard the audio stream
  • codec copy means the output keeps the original format from the file. If the audio is mp3 then the output file will be mp3, whatever extension you choose.

Instead of using codec copy you can choose a different codec for the extracted file, but copy is a good choice: it performs really fast because you don’t re-encode, and it is lossless.

I use this to rework the audio with audacity.

Merge audio and video into a single file (merge)

After you reworked tracks (audio and/or video) of your file, you can combine them into a single file.

ffmpeg -i input_audio.aac -i input_video.mp4 -acodec copy -vcodec copy -f flv merged_video.flv
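You can quickly check that the merged file contains both streams with ffprobe, which comes with ffmpeg; its output lists every stream of the file:

ffprobe merged_video.flv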

Playing CrossCode within a web browser

Written by Solène, on 09 December 2019.
Tags: #gaming #openbsd68 #openindiana

Comments on Mastodon

Good news for my gamer readers. It’s not really fresh news, but it has never been written down anywhere.

The commercial video game CrossCode is written in HTML5, making it available on every system with chromium or firefox. The limitation is that it may not support gamepads (unless you find a way to make them work).

A demo is downloadable at this address https://radicalfishgames.itch.io/crosscode and should work using the following instructions.

You need to buy the game to be able to play it; it’s not free and not opensource. Once you have bought it, the process is easy:

  1. Download the linux installer from GOG (from steam it may be too)
  2. Extract the data
  3. Patch a file if you want to use firefox
  4. Serve the files through a http server

The first step is to buy the game and get the installer.

Once you get a file named like “crosscode_1_2_0_4_32613.sh”, run unzip on it: it’s a shell script, but also a self-contained archive which can extract itself using the small shell script at the top.

Change directory into data/noarch/game/assets and apply this patch, if you don’t know how to apply a patch or don’t want to, you only need to remove/comment the part you can see in the following patch:

--- node-webkit.html.orig   Mon Dec  9 17:27:17 2019
+++ node-webkit.html    Mon Dec  9 17:27:39 2019
@@ -51,12 +51,12 @@
 <script type="text/javascript">
     // make sure we don't let node-webkit show it's error page
     // TODO for release mode, there should be an option to write to a file or something.
-    window['process'].once('uncaughtException', function() {
+/*    window['process'].once('uncaughtException', function() {
         var win = require('nw.gui').Window.get();
         if(!(win.isDevToolsOpen && win.isDevToolsOpen())) {
             win.showDevTools && win.showDevTools();
         }
-    });
+    });*/

     function doStartCrossCodePlz(){
       if(window.startCrossCode){

Then you need to start an http server in the current path. An easy way to do it is using… php! Because php contains a built-in http server, you can start the server with the following command:

$ php -S 127.0.0.1:8080

Now, you can play the game by opening http://localhost:8080/node-webkit.html
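If you don’t have php around, python ships a built-in static http server which should do the job as well (I did not test this one with the game):

$ python3 -m http.server --bind 127.0.0.1 8080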

I really thank Thomas Frohwein aka thfr@ for finding this out!

Tested on OpenBSD and OpenIndiana, it works fine on an Intel Core 2 Duo T9400 (CPU from 2008).

Host your own wikipedia backup

Written by Solène, on 13 November 2019.
Tags: #openbsd68 #wikipedia #life

Comments on Mastodon

Wikipedia and openzim

If you ever wanted to host your own wikipedia replica, here is the simplest way.

As wikipedia is REALLY huge, you don’t really want to host the php wikimedia software and load the huge database; instead, the project made the openzim format to compress the huge database that wikipedia became, while still allowing fast searches in it.

Sadly, on OpenBSD we have no software able to read zim files, and most of them require the openzim library, which would need extra work to get as an OpenBSD package.

Fortunately, there is a pure python package implementing all you need to serve zim files over http, and it’s easy to install.

This tutorial should work on all other unix-like systems, but package or binary names may change.

Downloading wikipedia

The Kiwix project is responsible for the wikipedia files; they regularly create files from various projects (including stackexchange, gutenberg, wikibooks etc…), but for this tutorial we want wikipedia: https://wiki.kiwix.org/wiki/Content_in_all_languages

You will find a lot of files; the language is part of the filename. Filenames also tell whether they contain everything or only some categories, and whether they include pictures or not.

The full French file weighs 31.4 GB.

Running the server

For the next steps, I recommend setting up a new user dedicated to this.

On OpenBSD, we will require python3 and pip:

$ doas pkg_add py3-pip--

Then we can use pip to fetch and install dependencies for the zimply software. The --user flag is rather important, as it allows any user to download and install python libraries in their own home folder instead of polluting the whole system as root.

$ pip3.7 install --user --upgrade zimply 

I wrote a small script to start the server using the zim file as a parameter, I rarely write python so the script may not be high standard.

File server.py:

from zimply import ZIMServer
import sys
import os.path

if len(sys.argv) == 1:
    print("usage: " + sys.argv[0] + " file")
    exit(1)

if os.path.exists(sys.argv[1]):
    ZIMServer(sys.argv[1])
else:
    print("Can't find file " + sys.argv[1])

And then you can start the server using the command:

$ python3.7 server.py /path/to/wikipedia_fr_all_maxi_2019-08.zim

You will be able to access wikipedia on the url http://localhost:9454/

Note that this is not a “wiki” as you can’t see history and edit/create pages.

This kind of backup is used in places like Cuba or parts of Africa where people don’t have unlimited internet access; the project led by Kiwix allows more people to access knowledge.

Creating new users dedicated to processes

Written by Solène, on 12 November 2019.
Tags: #openbsd #openbsd68

Comments on Mastodon

What this article is about ?

For some time I have wanted to share how I manage my personal laptop and systems. I got into the habit of creating a lot of users for just about everything, for security reasons.

Creating a new user is fast, I can connect as this user using doas or ssh -X if I need an X app, and this prevents some code from stealing data from my main account.

Maybe I went too far this way: I have a dedicated irssi user which is only for running irssi, same with mutt. I also have a user with a silly name that I can use for testing X apps, and I can wipe the data in its home directory (to try fresh firefox profiles in case of a ports update, for example).

How to proceed?

Creating a new user is as easy as this command (as root):

# useradd -m newuser
# echo "permit nopass keepenv solene as newuser" >> /etc/doas.conf

Then, from my main user, I can do:

$ doas -u newuser 'mutt'

and it will run mutt as this user.

This way, I can easily manage lots of services from packages which don’t come with dedicated daemon users.

For this to be effective, it’s important to have a chmod 700 on your main user account, so other users can’t browse your files.

Graphical software with dedicated users

It becomes trickier for graphical software. There are two options:

  • allow another user to use your X session: it will have native performance, but in case of a security issue in the software, your whole X session is accessible (recording keys, screenshots etc…)
  • run the software through ssh -X: this restricts the X access of the software, but the rendering will be a bit sluggish and not suitable for some uses.

Example of using ssh -X compared to ssh -Y:

$ ssh -X foobar@localhost scrot
X Error of failed request:  BadAccess (attempt to access private resource denied)
  Major opcode of failed request:  104 (X_Bell)
  Serial number of failed request:  6
  Current serial number in output stream:  8

$ ssh -Y foobar@localhost scrot
(no output, but it took a screenshot of the whole X area)

Real world example

On a server I have the following new users running:

  • torrents
  • idlerpg
  • searx
  • znc
  • minetest
  • quake server
  • awk cron parsing http

They can all have their own crontabs.

Maybe I use it too much, but it’s fine to me.

How to remove a part of a video using ffmpeg

Written by Solène, on 02 October 2019.
Tags: #ffmpeg

Comments on Mastodon

If you want to remove parts of a video, you have to cut it into pieces and then merge the pieces, so you can avoid parts you don’t want.

The command is not obvious at all (as with most ffmpeg uses); I found the parts in different areas of the Internet.

First split it in parts; we want to keep 00:00:00 to 00:30:00 and 00:35:00 to 00:45:00:

ffmpeg -i source_file.mp4 -ss 00:00:00 -t 00:30:00 -acodec copy -vcodec copy part1.mp4
ffmpeg -i source_file.mp4 -ss 00:35:00 -t 00:10:00 -acodec copy -vcodec copy part2.mp4

The -ss parameter tells ffmpeg where to start the video and -t parameter tells it about the duration.
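Note that putting -ss before -i makes ffmpeg seek in the input instead of decoding everything up to the starting point, which is much faster for cuts far into the video; the second part could for example be extracted with:

ffmpeg -ss 00:35:00 -i source_file.mp4 -t 00:10:00 -acodec copy -vcodec copy part2.mp4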

Then, merge the files into one file:

printf "file %s\n" part1.mp4 part2.mp4 > file_list.txt
ffmpeg -f concat -i file_list.txt -c copy result.mp4

Instead of using printf, you can write the list of files into file_list.txt like this:

file /path/to/test1.mp4
file /path/to/test2.mp4

GPG2 cheatsheet

Written by Solène, on 06 September 2019.
Tags: #security

Comments on Mastodon

Introduction

I don’t use gpg a lot, but it seems to be the only tool out there for encrypting data which “works” and is widely used.

So this is my personal cheatsheet for everyday use of gpg.

In this post, I use the command gpg2, which is the binary of GPG version 2. On your system, the “gpg” command could be gpg2 or gpg1. You can use gpg --version if you want to check the real version behind the gpg binary.

In your ~/.profile file you may need the following line:

export GPG_TTY=$(tty)

Install GPG

The real name of GPG is GnuPG, so depending on your system the package can be either gpg2, gpg, gnupg, gnupg2 etc…

On OpenBSD, you can install it with: pkg_add gnupg--%gnupg2

GPG Principle using private/public keys

  • YOU make a private and a public key (associated with a mail)
  • YOU give the public key to people
  • PEOPLE import your public key into their keyring
  • PEOPLE use your public key from the keyring
  • YOU will need your passphrase every time

I think gpg can do much more, but read the manual for that :)

Initialization

We need to create a public and a private key.

solene$ gpg2 --gen-key
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg2 --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

In this part, you should put your real name and your email address, and validate with “O” if you are okay with the input. You will be asked for a passphrase after.

Real name: Solene
Email address: solene@domain.example
You selected this USER-ID:
    "Solene <solene@domain.example>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 368E580748D5CA75 marked as ultimately trusted
gpg: revocation certificate stored as '/home/solene/.gnupg/openpgp-revocs.d/7914C6A7439EADA52643933B368E580748D5CA75.rev'
public and secret key created and signed.

pub   rsa2048 2019-09-06 [SC] [expires: 2021-09-05]
      7914C6A7439EADA52643933B368E580748D5CA75
uid                    Solene <solene@domain.example>
sub   rsa2048 2019-09-06 [E] [expires: 2021-09-05]

The key will expire in 2 years, but this is okay: it is a good thing. If you stop using the key, it will die silently at its expiration time. If you still use it, you will be able to extend the expiration date, and people will be able to notice you still use that key.

Export the public key

If someone asks your GPG key, this is what they want:

gpg2 --armor --export solene@domain.example > solene.asc

Import a public key

Import the public key:

gpg2 --import solene.asc

Delete a public key

In case someone changes their public key, you will want to delete the old one before importing the new one; replace $FINGERPRINT with the actual fingerprint of the public key.

gpg2 --delete-keys $FINGERPRINT

Encrypt a file for someone

If you want to send the file picture.jpg to remote@mail, use the command:

gpg2 --encrypt --recipient remote@domain.example picture.jpg > picture.jpg.gpg

You can now send picture.jpg.gpg to remote@mail who will be able to read the file with his/her private key.

You can use the --armor parameter to make the output plain text, so you can put it into a mail or a text file.
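For example, to get an armored version of the previous encryption that you can paste into a mail:

gpg2 --encrypt --armor --recipient remote@domain.example picture.jpg > picture.jpg.asc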

Decrypt a file

Easy!

gpg2 --decrypt image.jpg.gpg > image.jpg

Get public key fingerprint

The fingerprint is a short string made out of your public key and can be embedded in a mail (often as a signature) or anywhere.

It allows comparing a public key you received from someone with the fingerprint you may find in mailing list archives, twitter, a html page etc., if the person spread it somewhere. This allows checking the authenticity of the public key you received in multiple ways.

it looks like:

4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909

This is my real key fingerprint, so if I send you my public key, you can use the fingerprint from this page to check it matches the key you received!

You can obtain your fingerprint using the following command:

solene@t480 ~ $ gpg2 --fingerprint
pub   rsa4096 2018-06-08 [SC]
      4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909
uid          [  ultime ] XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sub   rsa4096 2018-06-08 [E]

Add a new mail / identity

If for some reason, you need to add another mail to your GPG key (like personal/work keys) you can create a new identity with the new mail.

Type gpg2 --edit-key solene@domain.example and then in the prompt, type adduid and answer questions.

You can now export the public key with a different identity.

List known keys

If you want to get the list of keys you imported, you can use

gpg2 -k

Testing

If you want to do some tests, I’d recommend making new users on your system, exchanging their keys and trying to encrypt a message from one user to another.

I have a few spare users on my system on which I can ssh locally for various tests, it is always useful.

BitreichCON 2019 talks available

Written by Solène, on 27 August 2019.
Tags: #unix #drist #automation #awk

Comments on Mastodon

Earlier in August 2019 happened BitreichCON 2019. There were awesome talks during the two days, and there are two I would like to share. You can find all the information about this event at the following address, using the Gopher protocol: gopher://bitreich.org/1/con/2019

BrCON talks happen through an audio stream, an ssh session for viewing the current slide, and IRC for questions. I have the markdown files producing the slides (1 title = 1 slide) and the audio recordings.

Simple solutions

This is a talk I gave at this conference. It is about using simple solutions for most problems. Simple solutions come with simple tools, unix tools. I explain with real life examples, like how to retrieve my blog article titles from the website using curl, grep, tr or awk.

Link to the audio

Link to the slides

Experiences with drist

Another talk, from Parazyd, is about my deployment tool Drist, so I feel obligated to share it with you.

In his talk he makes a comparison with slack (debian package, not the online community), explains his workflow with Drist and how it saves his precious time.

Link to the audio

Link to the slides

About the bitreich community

If you want to know more about the bitreich community, check gopher://bitreich.org or IRC #bitreich-en on Freenode servers.

There is also the bitreich website, which is a parody of the worst of what you can see on the web daily.

Stream live video using nginx

Written by Solène, on 26 August 2019.
Tags: #openbsd68 #openbsd #gaming #nginx

Comments on Mastodon

This blog post is about a nginx rtmp module for turning your nginx server into a video streaming server.

The official website of the project is located on github at: https://github.com/arut/nginx-rtmp-module/

I use it to stream video from my computer to my nginx server, then viewers can use mpv rtmp://perso.pw/gaming in order to view the video stream. But the nginx server will also relay to twitch for more scalability (and some people prefer viewing there for some reasons).

The module is already installed with the nginx package since OpenBSD 6.6 (not yet released at this time).

There is no package installing the rtmp module before 6.6. On other operating systems, check for something like “nginx-rtmp” or “rtmp” in an nginx context.

Install nginx on OpenBSD:

pkg_add nginx

Then, add the following to the file /etc/nginx/nginx.conf

load_module modules/ngx_rtmp_module.so;
rtmp {
    server {
        listen 1935;
        buflen 10s;

        application gaming {
            live on;
            allow publish 176.32.212.34;
            allow publish 175.3.194.6;
            deny publish all;
            allow play all;

            record all;
            record_path /htdocs/videos/;
            record_suffix %d-%b-%y_%Hh%M.flv;

        }
    }
}

The previous configuration sample is a simple example allowing 176.32.212.34 and 175.3.194.6 to stream through nginx; it records the videos under /htdocs/videos/ (nginx is chrooted in /var/www).
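For the recording to work, the directory must exist inside the chroot and be writable by the nginx user (www on OpenBSD); something like this should do, adjust to your layout:

# install -d -o www -g www /var/www/htdocs/videos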

You can add the following line in the “application” block to relay the stream to your Twitch broadcasting server, using your API key.

push rtmp://live-ams.twitch.tv/app/YOUR_API_KEY;

I made simple scripts generating thumbnails of the videos and an html index file.

Every 10 minutes, a cron job checks if files need to be generated, makes thumbnails for the videos (it tries at 05:30 into the video and then at 00:03 if that doesn’t work, to handle very short videos) and then creates the html.
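The cron entry itself is nothing fancy; assuming the checking script below is saved under /home/user/dev/videos/check.sh (a made-up path), it would look like:

*/10 * * * * sh /home/user/dev/videos/check.sh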

The script checking for new stuff and starting html generation:

#!/bin/sh

cd /var/www/htdocs/videos

for file in $(find . -mmin +1 -name '*.flv')
do
        echo $file
        PIC=$(echo $file | sed 's/flv$/jpg/')
        if [ ! -f "$PIC" ]
        then
                ffmpeg -ss 00:05:30 -i "$file" -vframes 1 -q:v 2 "$PIC"
                if [ ! -f "$PIC" ]
                then
                        ffmpeg -ss 00:00:03 -i "$file" -vframes 1 -q:v 2 "$PIC"
                        if [ ! -f "$PIC" ]
                        then
                                echo "problem with $file" | mail user@my-tld.com
                        fi
                fi
        fi
done
cd ~/dev/videos/ && sh html.sh

This one makes the html:

#!/bin/sh

cd /var/www/htdocs/videos

PER_ROW=3
COUNT=0
INROW=0

cat << EOF > index.html
<html>
  <body>
<h1>Replays</h1>
<table>
EOF

for file in $(find . -mmin +3 -name '*.flv')
do
        if [ $COUNT -eq 0 ]
        then
                echo "<tr>" >> index.html
                INROW=1
        fi
        COUNT=$(( COUNT + 1 ))
        SIZE=$(ls -lh $file  | awk '{ print $5 }')
        PIC=$(echo $file | sed 's/flv$/jpg/')

        echo $file
        echo "<td><a href=\"$file\"><img src=\"$PIC\" width=320 height=240 /><br />$file ($SIZE)</a></td>" >> index.html
        if [ $COUNT -eq $PER_ROW ]
        then
                echo "</tr>" >> index.html
                COUNT=0
                INROW=0
        fi
done

if [ $INROW -eq 1 ]
then
        echo "</tr>" >> index.html
fi

cat << EOF >> index.html
    </table>
  </body>
</html>
EOF

Minimalistic markdown subset to html converter using awk

Written by Solène, on 26 August 2019.
Tags: #unix #awk

Comments on Mastodon

Hello

As I use different markup languages on my blog, I wanted a simpler markup language not requiring an extra package. To do so, I wrote an awk script handling titles, paragraphs and code blocks the same way markdown does.

16 December 2019 UPDATE: adc sent me a patch adding ordered and unordered lists. The code below contains the addition.

It is very easy to use, like: awk -f mmd file.mmd > output.html

The script is the following:

BEGIN {
    in_code=0
    in_list_unordered=0
    in_list_ordered=0
    in_paragraph=0
}

{
    # escape < > characters as HTML entities
    gsub(/</,"\\&lt;",$0);
    gsub(/>/,"\\&gt;",$0);

    # close code blocks
    if(! match($0,/^    /)) {
        if(in_code) {
            in_code=0
            printf "</code></pre>\n"
        }
    }

    # close unordered list
    if(! match($0,/^- /)) {
        if(in_list_unordered) {
            in_list_unordered=0
            printf "</ul>\n"
        }
    }

    # close ordered list
    if(! match($0,/^[0-9]+\. /)) {
        if(in_list_ordered) {
            in_list_ordered=0
            printf "</ol>\n"
        }
    }

    # display titles
    if(match($0,/^#/)) {
        if(match($0,/^(#+)/)) {
            printf "<h%i>%s</h%i>\n", RLENGTH, substr($0,index($0,$2)), RLENGTH
        }

    # display code blocks
    } else if(match($0,/^    /)) {
        if(in_code==0) {
            in_code=1
            printf "<pre><code>"
            print substr($0,5)
        } else {
            print substr($0,5)
        }

    # display unordered lists
    } else if(match($0,/^- /)) {
        if(in_list_unordered==0) {
            in_list_unordered=1
            printf "<ul>\n"
            printf "<li>%s</li>\n", substr($0,3)
        } else {
            printf "<li>%s</li>\n", substr($0,3)
        }

    # display ordered lists
    } else if(match($0,/^[0-9]+\. /)) {
        n=index($0," ")+1
        if(in_list_ordered==0) {
            in_list_ordered=1
            printf "<ol>\n"
            printf "<li>%s</li>\n", substr($0,n)
        } else {
            printf "<li>%s</li>\n", substr($0,n)
        }

    # close p if current line is empty
    } else {
        if(length($0) == 0 && in_paragraph == 1 && in_code == 0) {
            in_paragraph=0
            printf "</p>"
        } # we are still in a paragraph
        if(length($0) != 0 && in_paragraph == 1) {
            print
        } # open a p tag if previous line is empty
        if(length(previous_line)==0 && in_paragraph==0) {
            in_paragraph=1
            printf "<p>%s\n", $0
        }
    }
    previous_line = $0
}

END {
    if(in_code==1) {
        printf "</code></pre>\n"
    }
    if(in_list_unordered==1) {
        printf "</ul>\n"
    }
    if(in_list_ordered==1) {
        printf "</ol>\n"
    }
    if(in_paragraph==1) {
        printf "</p>\n"
    }
}
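As a quick demonstration of what the script produces, here is a small input file and its output, traced by hand, so consider it indicative:

$ cat example.mmd
# Title

Some paragraph.

    code line
$ awk -f mmd example.mmd
<h1>Title</h1>
<p>Some paragraph.
</p><pre><code>code line
</code></pre>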

Life with an offline laptop

Written by Solène, on 23 August 2019.
Tags: #openbsd #life #disconnected

Comments on Mastodon

Hello. For a long time I have wanted to work on a special project using an offline device.

I started using computers before my parents had an internet access and I was enjoying it. Would it still be the case if I was using a laptop with no internet access?

When I think about an offline laptop, I immediately think I will miss IRC, mails, file synchronization, Mastodon and remote ssh to my servers. But do I really need it _all the time_?

As I started thinking about preparing an old laptop for the experiment, different ideas with their pros and cons came to my mind.

Over the years I have produced digital data, and I cannot deny this. I don't need all of it, but I still want some (some music, my texts, some of my programs). How would I synchronize data from the offline system to my main system (which has replicated backups and such)?

At first I was thinking about using a serial line between the two laptops to synchronize files, but both laptops lack serial ports, and buying gear for that would cost too much for its purpose.

I ended up thinking that using an IP network _is fine_, if I connect only for a specific purpose. This went a bit further because I also need to install packages, and using an usb memory stick from another computer to fetch packages for the offline system is _tedious_ and ineffective (downloading packages with their correct dependencies is a hard task on OpenBSD when you only want the files). I also came across a really specific problem: my offline device is an old Apple PowerPC laptop, which is big-endian, while amd64 is little-endian. While this does not seem like a problem at first, the OpenBSD filesystem is dependent on endianness, and I could not share an usb memory device using FFS because of this; the alternatives are fat, ntfs or ext2, so it is a dead end.

Finally, the super slow wireless network adapter of that offline laptop allows me to connect only when I need a few file transfers. I am using the system firewall pf to limit access to the outside.

In my pf.conf, I only have rules for DNS, NTP servers, my remote server, OpenBSD mirror for packages and my other laptop on the lan. I only enable wifi if I need to push an article to my blog or if I need to pull a bit more music from my laptop.

This is not entirely _offline_ then, because I can get access to the internet at any time, but it helps me keep the device offline. There is no modern web browser on powerpc, and I restricted packages to the minimum.

So far, when using this laptop, there is no other distraction than the stuff I do myself.

At the time I write this post, I only use xterm and tmux, with moc as a music player (the audio system of the iBook G4 is surprisingly good!), writing this text with ed and a 72 character long prompt in order to wrap words correctly by hand (I already talked about that trick!).

As my laptop has a short battery life, roughly two hours, this also helps having "sessions" of a reasonable duration. (Yes, I can still plug the laptop somewhere).

I did not use this laptop a lot so far, I only started the experiment a few days ago, I will write about this sometimes.

I plan to work on my gopher space to add new content only available there :)

OpenBSD ttyplot examples

Written by Solène, on 29 July 2019.
Tags: #openbsd68 #openbsd

Comments on Mastodon

I said I would rewrite the ttyplot examples to make them work on OpenBSD.

Here they are, but a small notice before:

Examples using systat will only work for 10000 seconds; increase the -d parameter if needed, or wrap the command in an infinite loop so it restarts (but don’t loop systat doing one run at a time, it needs at least one full cycle before producing results).

The systat examples won’t work before OpenBSD 6.6, which is not yet released at the time I’m writing this, but it’ll work on a -current after 20 july 2019.

I made a change to systat so it flushes its output at every cycle; it was not possible to parse its output in realtime before.

Enjoy!

Examples list

ping

Replace test.example by the host you want to ping.

ping test.example | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"

cpu usage

vmstat 1 | awk 'NR>2 { print 100-$(NF); fflush(); }' | ttyplot -t "Cpu usage" -s 100

disk io

systat -d 1000 -b iostat 1 | awk '/^sd0/ && NR > 20 { print $2/1024 ; print $3/1024 ; fflush }' | ttyplot -2 -t "Disk read/write in kB/s"

load average 1 minute

{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($8,0,length($8)-1) ; fflush }' | ttyplot -t "load average 1"

load average 5 minutes

{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($9,0,length($9)-1) ; fflush }' | ttyplot -t "load average 5"

load average 15 minutes

{ while :; do uptime ; sleep 1 ; done } | awk '{ print $10 ; fflush }' | ttyplot -t "load average 15"

wifi signal strength

Replace iwm0 by your interface name.

{ while :; do ifconfig iwm0 | tr ' ' '\n' ; sleep 1 ; done } | awk '/%$/ { print ; fflush }' | ttyplot -t "Wifi strength in %" -s 100

cpu temperature

{ while :; do sysctl -n hw.sensors.cpu0.temp0 ; sleep 1 ; done } | awk '{ print $1 ; fflush }' | ttyplot -t "CPU temperature in °C"

pf state searches rate

systat -d 10000 -b pf 1 | awk '/state searches/ { print $4 ; fflush }' | ttyplot -t "PF state searches per second"

pf state insertions rate

systat -d 10000 -b pf 1 | awk '/state inserts/ { print $4 ; fflush }' | ttyplot -t "PF state inserts per second"

network bandwidth

Replace trunk0 by your interface. This is the same command as in my previous article.

netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"

Tip

You can easily use those examples over ssh for gathering data, and leave the plot locally as in the following example:

ssh remote_server "netstat -b -w 1 -I trunk0" | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"

or

ssh remote_server "ping test.example" | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"

Realtime bandwidth terminal graph visualization

Written by Solène, on 19 July 2019.
Tags: #openbsd68 #openbsd

Comments on Mastodon

If for some reason you want to visualize the bandwidth traffic of an interface (in or out) in a terminal with a nice graph, here is a small script doing so with ttyplot, a nice piece of software drawing graphs in a terminal.

The following works on OpenBSD. You can install ttyplot with pkg_add ttyplot as root; the ttyplot package appeared in OpenBSD 6.5.

For Linux, the ttyplot official website contains tons of examples.

Example

Output example while updating my packages:

                                          IN Bandwidth in KB/s
  ↑ 1499.2 KB/s#
  │            #
  │            #
  │            #
  │            ##
  │            ##
  │ 1124.4 KB/s##
  │            ##
  │            ##
  │            ##
  │            ##
  │            ##
  │ 749.6 KB/s ##
  │            ##
  │            ##
  │            ##                                                    #
  │            ##      # #       #                     #             ##
  │            ##  #   ###    # ##      #  #  #        ##            ##         #         # ##
  │ 374.8 KB/s ## ##  ####  # # ## # # ### ## ##      ###  #      ## ###    #   #     #   # ##   #    ##
  │            ## ### ##### ########## #############  ###  # ##  ### ##### #### ##    ## ###### ##    ##
  │            ## ### ##### ########## #############  ###  ####  ### ##### #### ## ## ## ###### ##   ###
  │            ## ### ##### ########## ############## ###  ####  ### ##### #### ## ## ######### ##  ####
  │            ## ### ##### ############################## ######### ##### #### ## ## ############  ####
  │            ## ### #################################################### #### ## #####################
  │            ## ### #################################################### #############################
  └────────────────────────────────────────────────────────────────────────────────────────────────────→
     # last=422.0 min=1.3 max=1499.2 avg=352.8 KB/s                             Fri Jul 19 08:30:25 2019
                                                                           github.com/tenox7/ttyplot 1.4

In the following command, we will use trunk0 with INBOUND traffic as the interface to monitor.

At the end of the article, there is a command for displaying both in and out at the same time, and also instructions for customizing to your need.

Article update: the following command is extremely long and complicated, at the end of the article you can find a shorter and more efficient version, removing most of the awk code.

You can copy/paste this command in your OpenBSD system shell, this will produce a graph of trunk0 inbound traffic.

{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0)  { print ($5-old)/1024 ; fflush  ; old = $5 } if(old==-1) { old=$5 } }'  | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"

The script will do an infinite loop doing netstat -ibn every second and sending that output to awk. You can quit it with Ctrl+C.

Explanations

Netstat output contains the total bytes (in or out) since the system booted, so awk needs to remember the last value and print the difference between two outputs, skipping the first value because it would produce a huge spike (the total network traffic transferred since boot time).

If I decompose the awk script, this is a lot more readable. Awk is very readable if you take care to format it properly as any source code!

#!/bin/sh
{ while :;
  do
      netstat -i -b -n
      sleep 1
  done
} | awk '
    BEGIN {
        old=-1
    }
    /^trunk0/ { 
        if(!index($4,":") && old>=0) {
            print ($5-old)/1024
            fflush
            old = $5
        }
        if(old==-1) {
            old = $5
        }
    }' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"

Customization

  • replace trunk0 by your interface name
  • replace both instances of $5 by $6 for OUT traffic
  • replace /1024 by /1048576 for MB/s values
  • remove /1024 for B/s values
  • replace 1 in sleep 1 by another value if you want to have the value every n seconds

IN/OUT version for both data on the same graph + simpler

Thanks to leot on IRC, netstat can be used in a much more efficient way, removing most of the awk parsing! ttyplot supports displaying two graphs at the same time, the second one drawn in the opposite color.

netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"

Streaming to Twitch using OpenBSD

Written by Solène, on 06 July 2019.
Tags: #openbsd68 #gaming

Comments on Mastodon

Introduction

If you ever wanted to make a twitch stream from your OpenBSD system, this is now possible, thanks to OpenBSD developer thfr@ who made a wrapper named fauxstream using ffmpeg with relevant parameters.

The setup is quite easy, it only requires a few steps and searching on Twitch website two informations, hopefully, to ease the process, I found the links for you.

You will need to make an account on twitch and get your api key (a long string of characters), which should stay secret because it allows anyone who has it to stream on your account.

Preparation steps

  1. Register / connect on twitch
  2. Get your Stream API key at https://www.twitch.tv/YOUR_USERNAME/dashboard/settings (from this page you can also choose if twitch should automatically saves streams as videos for 14 days)
  3. Choose your nearest server from this page
  4. Add in your shell environment a variable TWITCH=rtmp://SERVER_FROM_STEP_3/YOUR_API_KEY
  5. Get fauxstream with cvs -d anoncvs@anoncvs.thfr.info:/cvs checkout -P projects/fauxstream/
  6. chmod u+x fauxstream/fauxstream
  7. Allow recording of the microphone
  8. Allow recording of the output sound

Once you have all the pieces, start a new shell and check the $TWITCH variable is correctly set; it should look like rtmp://live-ams.twitch.tv/app/live_2738723987238_jiozjeoizaeiazheizahezah (this is not a real api key).

Using fauxstream

The fauxstream script comes with a README.md file containing some useful information, you can also check the usage.

View usage:

$ ./fauxstream

Starting a stream

When you start a stream, make sure your API key isn't displayed on the stream! I redirect stderr to /dev/null so the output containing the key is not displayed.

Here are the settings I use to stream:

$ ./fauxstream -m -vmic 5.0 -vmon 0.2 -r 1920x1080 -f 20 -b 4000 $TWITCH 2> /dev/null

If you choose a smaller resolution than your screen, imagine a rectangle of that resolution starting at the top left corner of your screen: the content of this area is what gets streamed.

I recommend the bwm-ng package (I wrote a ports of the week article about it) to watch your real-time bandwidth usage; if you see the bandwidth stuck at a fixed number, it means you reached your bandwidth limit and the stream is certainly not working correctly, so you should lower the resolution, fps or bitrate.

I recommend doing a few test runs before streaming for real, to be sure everything is fine. Note that the flag -a may be required in case of audio/video desynchronization; there is no magic value, so you should guess and try.

Adding webcam

I found an easy trick to display a webcam on top of a video game.

$ mpv --no-config --video-sync=display-vdrop --framedrop=vo --ontop av://v4l2:/dev/video1

The trick is to use mpv to display your webcam video on your screen and use the flag to make it stay on top of any other window (this won't work with the cwm(1) window manager). Then you can resize it and place it where you want. What you see is what gets streamed.

The other mpv flags are there to reduce the lag between the webcam video stream and the display: mpv slowly accumulates a delay, and after 10 minutes your webcam would lag by about 10 seconds, totally out of sync between the action and your face.

Don't forget to use chown to change the ownership of your video device to your user; by default only root has access to video devices. This is reset upon reboot.
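
For example, with the device used above (replace your_user with your login):

# chown your_user /dev/video1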

Viewing a stream

For less overhead, people can watch a stream using mpv; I think this requires the youtube-dl package too.

Example to view me streaming:

$ mpv https://www.twitch.tv/seriphyde

This would also work with a recorded video:

$ mpv https://www.twitch.tv/videos/447271018

High quality / low latency VOIP server with umurmur/Mumble on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd68

Comments on Mastodon

Hello,

I HATE Discord.

Discord users keep talking about their so-called Discord servers, which are not dedicated to them at all. And Discord has very bad audio quality and a lot of voice distortion.

Why not run your very own Mumble server with high voice quality, low latency and respect for your privacy? This is very easy to set up on OpenBSD!

Mumble is an open source VOIP ecosystem: the client, named Mumble, is available on various operating systems including Android; the server part is murmur, but there is also a lightweight server named umurmur. Authentication is done through a certificate generated locally and automatically accepted by the server, and the certificate gets associated with a nickname. Nobody can pick the same nickname as another person without holding the same certificate.

How to install?

# pkg_add umurmur
# rcctl enable umurmurd
# cp /usr/local/share/examples/umurmur/umurmur.conf /etc/umurmur/

We could start it as-is, but you may want to tweak the configuration file first: add a password to your server, set an admin password, create static channels, change ports, etc.

You may want to increase the max_bandwidth value to improve audio quality, or choose the right value to fit your bandwidth. Using umurmur on a DSL line is fine for up to 1 or 2 remote people. The daemon uses very little CPU and very little memory; umurmur is meant to be usable even on a router!
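
As a sketch, the relevant lines in /etc/umurmur/umurmur.conf could look like this (the values are examples; check the option names against the shipped example file):

max_bandwidth = 48000;
password = "somesecret";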

# rcctl start umurmurd

If you have a restrictive firewall (I hope so), you will have to open port 64738 in both TCP and UDP.

How to connect to it?

The client is named Mumble and is packaged under OpenBSD, we need to install it:

# pkg_add mumble

The first time you run it, you will have a configuration wizard that will take only a couple of minutes.

Don't forget to set the sysctl kern.audio.record to 1 to enable audio recording, as OpenBSD disabled audio input by default a few releases ago.
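
This is done as follows; the second line makes the change persistent across reboots:

# sysctl kern.audio.record=1
# echo kern.audio.record=1 >> /etc/sysctl.conf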

You will be able to choose between a push-to-talk mode or voice-level activation, as well as the quality level.

Once the configuration wizard is done, you will have another wizard for generating the certificate. I recommend choosing “Automatically create a certificate”, then validate and it’s done.

You will be prompted for a server: click on "Add new", enter a server name so you can recognize it easily, type its hostname / IP, its port and your nickname, and click OK.

Congratulations, you are now using your own private VOIP server, for real!

Nginx and acme-client on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd68 #openbsd #nginx #automation

Comments on Mastodon

I write this blog post because I spent too much time setting up nginx and SSL on OpenBSD with acme-client, due to nginx being chrooted and not stripping the challenge path easily.

First, you need to set up /etc/acme-client.conf correctly. Here is mine for the domain ports.perso.pw:

authority letsencrypt {
        api url "https://acme-v02.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
}

domain ports.perso.pw {
        domain key "/etc/ssl/private/ports.key"
        domain full chain certificate "/etc/ssl/ports.fullchain.pem"
        sign with letsencrypt
}

This example is for OpenBSD 6.6 (which is -current as I write this) because of the Let's Encrypt API URL. If you are running 6.5 or 6.4, replace v02 by v01 in the api url line.

Then, you have to configure nginx this way; the most important part of the following configuration file is the location block handling the acme-challenge requests. Remember that nginx is chrooted in /var/www, so the path to the acme directory is /acme.

http {
    include       mime.types;
    default_type  application/octet-stream;
    index         index.html index.htm;
    keepalive_timeout  65;
    server_tokens off;

    upstream backendurl {
        server unix:tmp/plackup.sock;
    }

    server {
      listen       80;
      server_name ports.perso.pw;

      access_log logs/access.log;
      error_log  logs/error.log info;

      root /htdocs/;

      location /.well-known/acme-challenge/ {
          rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
          root /acme;
      } 

      location / {
          return 301 https://$server_name$request_uri;
      }
    }

    server {
      listen 443 ssl;
      server_name ports.perso.pw;
      access_log logs/access.log;
      error_log  logs/error.log info;
      root /htdocs/;

      ssl_certificate /etc/ssl/ports.fullchain.pem;
      ssl_certificate_key /etc/ssl/private/ports.key;
      ssl_protocols TLSv1.1 TLSv1.2;
      ssl_prefer_server_ciphers on;
      ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

      [... stuff removed ...]
    }

}
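
Once nginx is running with this configuration, a sketch of actually requesting the certificate (using the domain from the example):

# acme-client -v ports.perso.pw
# rcctl restart nginx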

That's all! I wish I could have found that on the Internet, so I share it here.

OpenBSD as an IPv6 router

Written by Solène, on 13 June 2019.
Tags: #openbsd68 #openbsd #network

Comments on Mastodon

This blog post is an update (as of OpenBSD 6.5) of this very same article published in June 2018. Because rtadvd was replaced by rad, the original text was not accurate anymore.

I subscribed to a VPN service from the French association Grifon (Grifon website [FR]) to get IPv6 access to the world and play with IPv6. I will not talk about the VPN service itself, it would be pointless.

I now have a 48-bit IPv6 prefix, which can theoretically hold 2^80 addresses.

I would like the router connected through the VPN to let the other computers in my network have IPv6 connectivity.

On OpenBSD, this is very easy to do. If you want to provide IPv6 to Windows devices on your network, you will need one more daemon (covered at the end).

In my setup, I have a tun0 device which has the IPv6 access and re0 which is my LAN network.

First, configure IPv6 on your lan:

# ifconfig re0 inet6 autoconf

That's all! You can add a new line "inet6 autoconf" to your /etc/hostname.if file (hostname.re0 in my case) to get it at boot.

Now, we have to allow IPv6 to be routed through the different interfaces of the router.

# sysctl net.inet6.ip6.forwarding=1

This change can be made persistent across reboot by adding net.inet6.ip6.forwarding=1 to the file /etc/sysctl.conf.

Automatic addressing

Now we have to configure the daemon rad to advertise the prefix we are routing, so devices on the network can get an IPv6 address from its advertisements.

The minimal configuration of /etc/rad.conf is the following:

interface re0 {
    prefix 2a00:5414:7311::/48
}

In this configuration file we only define the available prefix, which is the equivalent of a DHCP address range. Other attributes could provide DNS servers to use, for example; see the rad.conf man page and the sketch below.
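
As a sketch, advertising a DNS server would look like this (the address is the documentation placeholder also used later in this article; check the exact grammar in rad.conf(5)):

interface re0 {
    prefix 2a00:5414:7311::/48
    dns {
        nameserver 2001:db8::35
    }
}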

Then enable the service at boot and start it:

# rcctl enable rad
# rcctl start rad

Tweaking resolv.conf

By default, OpenBSD will ask for IPv4 first when resolving a hostname (see resolv.conf(5) for more explanation). So, you will never have IPv6 traffic unless you use software that explicitly requests an IPv6 connection, or the hostname is only defined with an AAAA record.

# echo "family inet6 inet4" >> /etc/resolv.conf.tail

The file resolv.conf.tail is appended at the end of resolv.conf when dhclient modifies the file resolv.conf.

Microsoft Windows

If you have Windows systems on your network, they won’t get addresses from rad. You will need to deploy dhcpv6 daemon.

The configuration file for what we want to achieve here is pretty simple, it consists of telling what range we want to allow on DHCPv6 and a DNS server. Create the file /etc/dhcp6s.conf:

interface re0 {
    address-pool pool1 3600;
};
pool pool1 {
    range 2a00:5414:7311:1111::1000 to 2a00:5414:7311:1111::4000;
};
option domain-name-servers 2001:db8::35;

Note that I added "1111" into the range because it should not be on the same network as the router. You can replace 1111 by whatever you want, even CAFE or 1337 if you want to bring some fun to network engineers.

Now, you have to install and configure the service:

# pkg_add wide-dhcpv6
# touch /etc/dhcp6sctlkey
# chmod 400 /etc/dhcp6sctlkey
# echo SOME_RANDOM_CHARACTERS | openssl enc -base64 > /etc/dhcp6sctlkey
# echo "dhcp6s -c /etc/dhcp6s.conf re0" >> /etc/rc.local

The OpenBSD package wide-dhcpv6 doesn't provide an rc file to start/stop the service, so it must be started from a command line; a way to do it is to put the command in /etc/rc.local, which is run at boot.

The openssl command is needed for dhcpv6 to start, as it requires a base64 string as a secret key in the file /etc/dhcp6sctlkey.

RSS feed for OpenBSD stable packages repository (made with XSLT)

Written by Solène, on 05 June 2019.
Tags: #openbsd #automation

Comments on Mastodon

I am happy to announce there is now an RSS feed to get news about new packages available on my repository https://stable.perso.pw/

The file is available at https://stable.perso.pw/rss.xml.

I take the occasion of this blog post to explain how the file is generated, as I did not find an easy tool for this task, so I ended up doing it myself.

I chose to use XSLT, which is not very common. Briefly, XSLT allows applying some kind of XML template to an XML data file; this allows loops, filtering, etc. It requires only two parts: the template and the data.

Simple RSS template

The following file is a template for my RSS file; we can see a few tags starting with xsl, like xsl:for-each or xsl:value-of.

It's interesting to note that xsl:for-each can use a condition like position() < 10 in order to limit the loop to the first 10 items.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<xsl:template match="/">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
        <channel>
            <description></description>

            <!-- BEGIN CONFIGURATION -->
            <title>OpenBSD unofficial stable packages repository</title>
            <link>https://stable.perso.pw/</link>
            <atom:link href="https://stable.perso.pw/rss.xml" rel="self" type="application/rss+xml" />
            <!-- END CONFIGURATION -->

            <!-- Generating items -->
            <xsl:for-each select="feed/news[position()&lt;10]">
            <item>
                <title>
                    <xsl:value-of select="title"/>
                </title>
                <description>
                    <xsl:value-of select="description"/>
                </description>
                <pubDate>
                    <xsl:value-of select="date"/>
                </pubDate>
            </item>
            </xsl:for-each>

        </channel>
    </rss>
</xsl:template>
</xsl:stylesheet>

Simple data file

Now, we need some data to use with the template. I've added a comment block so I can copy/paste it to easily add a new entry into the RSS. As the date is in a format painful to write by hand, the Makefile running the commands first calls a script that replaces the string DATE with the current date in the correct format.

<feed>
<news>
    <title>www/mozilla-firefox</title>
    <description>Firefox 67.0.1</description>
    <date>Wed, 05 Jun 2019 06:00:00 GMT</date>
</news>

<!-- copy paste for a new item
<news>
    <title></title>
    <description></description>
    <date></date>
</news>
-->
</feed>

Makefile

I love makefiles, so I share it even if this one is really short.

all:
    sh replace_date.sh
    xsltproc template.xml news.xml | xmllint -format - | tee rss.xml
    scp rss.xml perso.pw:/home/stable/

clean:
    rm rss.xml
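
The replace_date.sh script is not shown in the article; a minimal sketch of it could be (assuming RFC 822 dates like in the example entry):

#!/bin/sh
d=$(date -u "+%a, %d %b %Y %H:%M:%S GMT")
sed -i "s/DATE/$d/" news.xml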

When I want to add an entry, I copy/paste the comment block in news.xml, fill it using DATE as the date, run make, and it's uploaded :)

The command xsltproc is available from the package libxslt on OpenBSD.

And then, after writing this, I realise that manually editing the result file rss.xml is as much work as editing the news.xml file and then processing it with XSLT… But I keep this blog post as it can be useful for more complicated cases. :)

Simple way to use ssh tunnels in scripts

Written by Solène, on 15 May 2019.
Tags: #ssh #automation

Comments on Mastodon

While writing a script to back up a remote database, I did not know how to handle a ssh tunnel inside a script correctly/easily. A quick internet search pointed out this link to me: https://gist.github.com/scy/6781836

I'm not a huge fan of the ControlMaster solution, which consists of starting a ssh connection with ControlMaster activated, telling ssh to close it afterwards, and not forgetting to put a timeout on the socket, otherwise it won't close if you interrupt the script.

However, I really enjoyed a neat solution which is valid for most cases:

$ ssh -f -L 5432:localhost:5432 user@host "sleep 5" && pg_dumpall -p 5432 -h localhost > file.sql

This creates a ssh connection sent to the background because of the -f flag, which closes itself once the given command, sleep 5 here, has run. As we immediately chain it to a command using the tunnel, ssh only stops when the tunnel is not used anymore, keeping it alive just for the time required by the pg_dumpall command, no more. If we interrupt the script, I'm not sure whether ssh stops immediately or only after the sleep command finishes, but in both cases ssh will stop correctly. There is no need to use a long sleep value because, as said previously, the tunnel stays up as long as something uses it.

You should note that the ControlMaster way is the only reliable way if you need to use the ssh tunnel for multiples commands inside the script.
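
For reference, a sketch of the ControlMaster way for multiple commands (the socket path is an example):

#!/bin/sh
# open a master connection in the background with a control socket
ssh -f -N -M -S /tmp/tunnel.sock -L 5432:localhost:5432 user@host
# any number of commands can use the tunnel now
pg_dumpall -p 5432 -h localhost > file.sql
# explicitly close the master connection
ssh -S /tmp/tunnel.sock -O exit user@host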

Kermit command line to fetch remote files through ssh

Written by Solène, on 15 May 2019.
Tags: #kermit

Comments on Mastodon

I previously wrote about Kermit for fetching remote files using a kermit script. I found that it’s possible to achieve the same with a single kermit command, without requiring a script file.

Given I want to download files from the path /home/mirror/pub on my remote server, and that I've set up a kermit server on the other side using inetd:

File /etc/inetd.conf:

7878 stream tcp nowait solene /usr/local/bin/kermit-sshsub kermit-sshsub

I can make a ssh tunnel to it, reaching it locally on port 7878 to download my files.
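
The tunnel itself could look like this sketch (user and host are examples):

$ ssh -f -N -L 7878:localhost:7878 user@remote-server

Then the kermit one-liner: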

kermit -I -j localhost:7878 -C "remote cd /home/mirror/pub","reget /recursive .",close,EXIT

Some flags can be added to make it even faster, like -v 31 -e 9042. I insist on kermit because it's super reliable, and there are no security issues when it runs behind a firewall and is accessed through ssh.

Fetching files can be stopped at any time, it supports very poor connections too, it's really reliable. You can also skip files, because sometimes you need a specific file first and you don't want to modify your script to fetch it (this only works if you don't have too many files to get, of course, because you can only skip them one by one).

Simple shared folder with Samba on OpenBSD 6.5

Written by Solène, on 15 May 2019.
Tags: #samba #openbsd

Comments on Mastodon

This article explains how to set up a simple samba server to have a CIFS / Windows shared folder accessible by everyone. This is useful in some cases, but samba configuration is not straightforward when you need it for a one-shot use or for this particular case.

The important point covered here is that no users are needed. The trick comes from the map to guest = Bad User line in the [global] section. This option will automatically map an unknown user, or no provided user, to the guest account.

Here is a simple /etc/samba/smb.conf file to share /home/samba to everyone; except for map to guest and the shared folder, it's the stock file with comments removed.

[global]
   workgroup = WORKGROUP
   server string = Samba Server
   server role = standalone server
   log file = /var/log/samba/smbd.%m
   max log size = 50
   dns proxy = no 
   map to guest = Bad User

[myfolder]
   browseable = yes
   path = /home/samba
   writable = yes
   guest ok = yes
   public = yes

If you want to set up this on OpenBSD, it’s really easy:

# pkg_add samba
# rcctl enable smbd nmbd
# vi /etc/samba/smb.conf (you can use previous config)
# mkdir -p /home/samba
# chown nobody:nobody /home/samba
# rcctl start smbd nmbd

And you are done.
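
To quickly test the share from another machine with the samba package installed, a guest connection sketch (the hostname is an example):

$ smbclient -N //myserver/myfolder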

Neomutt cheatsheet

Written by Solène, on 23 April 2019.
Tags: #neomutt #openbsd

Comments on Mastodon

I switched from a homemade script using mblaze to neomutt (after having used mutt, alpine and mu4e) and it's difficult to remember everything. So, let's make a cheatsheet!

  • Mark as read: Ctrl+R
  • Mark to delete: d
  • Execute deletion: $
  • Tag a mail: t
  • Move a mail: s (for save, which is a copy + delete)
  • Save a mail: c (for copy)
  • Operation on tagged mails: ;[OP] with OP being the key for that operation, like ;d for deleting tagged emails

Operations on attachments

  • Save to file: s
  • Pipe to view as html: | and then w3m -T text/html
  • Pipe to view as picture: | and then feh -

Delete mails based on date

  • use T to enter a date range, format [before]-[after] with before/after being a DD/MM/YYYY format (YYYY is optional)
  • ~d 24/04- to mark mails after 24/04 of this year
  • ~d -24/04 to mark mails before 24/04 of this year
  • ~d 24/04-25/04 to mark mails between 24/04 and 25/04 (inclusive)
  • ;d to tell neomutt we want to delete marked mails
  • $ to make deletion happen

Simple config

Here is a simple config I’ve built to get Neomutt usable for me.

set realname = "Jane Doe"
set from = "jane@doe.com"
set smtp_url = "smtps://login@doe.com:465"
alias me Jane Doe <login@doe.com>
set folder = "imaps://login@doe.com:993"
set imap_user = "login"
set header_cache     = /home/solene/.cache/neomutt/jane/headers
set message_cachedir = /home/solene/.cache/neomutt/jane/bodies
set imap_pass = "xx"
set smtp_pass = "xx"

set mbox_type = Maildir
set ssl_starttls = yes
set ssl_force_tls = yes

set spoolfile = "+INBOX"
set record = "+Sent"
set postponed = "+Drafts"
set trash = "+Trash"

# automatically get list of IMAP mailboxes
set imap_list_subscribed = yes

#sidebar
set sidebar_visible
set sidebar_format = "%B%?F? [%F]?%* %?N?%N/?%S"
set mail_check_stats
bind index,pager \Cp sidebar-prev       # Ctrl-p - Previous Mailbox
bind index,pager \Cn sidebar-next       # Ctrl-n - Next Mailbox
bind index,pager \Ca sidebar-open       # Ctrl-a - Open Highlighted Mailbox

# group mails by threads
set sort=threads

bind index "^" imap-fetch-mail          # ^ - Get new mails

# hide headers except a few useful one
ignore *
unignore from date subject to cc
unignore organization organisation x-mailer: x-newsreader: x-mailing-list:
unignore posted-to:

Create a dedicated user for ssh tunneling only

Written by Solène, on 17 April 2019.
Tags: #openbsd #ssh

Comments on Mastodon

I use ssh tunneling A LOT, for everything. Yesterday, I removed the public access of my IMAP server, it’s now only available through ssh tunneling to access the daemon listening on localhost. I have plenty of daemons listening only on localhost that I can only reach through a ssh tunnel. If you don’t want to bother with ssh and redirect ports you need, you can also make a VPN (using ssh, openvpn, iked, tinc…) between your system and your server. I tend to avoid setting up VPN for the current use case as it requires more work and more maintenance than running ssh server and a ssh client.

The last change, for my IMAP server, introduced an issue: I want my phone to access the IMAP server, but for security reasons I don't want to connect to my main account from my phone. So, I need a dedicated user that is only allowed to forward ports.

This is done very easily on OpenBSD.

The steps are:

  1. generate ssh keys for the new user
  2. add a user with no password
  3. allow the public key for port forwarding only

Obviously, you must allow users (or only this one) to make port forwarding in your sshd_config.
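
A sketch of the sshd_config part restricting it to this user (a hypothetical Match block, adapt it to your policy):

Match User tunnel
        AllowTcpForwarding yes
        X11Forwarding no
        PermitTTY no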

Generating ssh keys

Please generate the keys in a safe place, using ssh-keygen:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:SOMETHINGSOMETHINSOMETHINSOMETHINSOMETHING user@myhost
The key's randomart image is:
+---[RSA 3072]----+
|                 |
| **              |
|  *     **  .    |
|  *     *        |
|  ****  *        |
|     ****        |
|                 |
|                 |
|                 |
+----[SHA256]-----+

This will create your public key in ~/.ssh/id_rsa.pub and the private key in ~/.ssh/id_rsa

Adding a user

On OpenBSD, we will create a user named tunnel, this is done with the following command as root:

# useradd -m tunnel

This user has no password, so password login over ssh is not possible.

Allow the public key to port forward only

We will use the command restriction in the authorized_keys file to allow the previously generated key to do port forwarding only.

Edit /home/tunnel/.ssh/authorized_keys as follows:

command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE

This will print "Tunnel only!" and abort the connection if the user tries to open a shell or run a command.
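
On recent OpenSSH versions, an alternative sketch is the restrict option, which disables everything and then re-enables only port forwarding:

restrict,port-forwarding,command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE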

Connect using ssh

You can connect with ssh(1) as usual but you will require the flag -N to not start a shell on the remote server.

$ ssh -N -L 10000:localhost:993 tunnel@host

If you want the tunnel to stay up in the most automated way possible, you can use autossh from ports, which will do a great job at keeping ssh up.

$ autossh -M 0 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "TCPKeepAlive yes" -N -v -L 9993:localhost:993 tunnel@host

This command starts autossh and restarts the connection when the forwarding doesn't work, which is likely to happen when you lose connectivity: it takes some time for the remote server to effectively disable the forwarding. It performs keep-alive checks so the tunnel stays up and is ensured to be up (this is particularly useful on wireless connections like 4G/LTE).

The other flags are regular ssh parameters: do not start a shell, and make a local forwarding. Don't forget that as a regular user you can't bind to ports below 1024; that's why I redirect the remote port 993 to the local port 9993 in the example.

Making the tunnel on Android

If you want to access your personal services from your Android phone, you can use ConnectBot ssh client. It’s really easy:

  1. upload your private key to the phone
  2. add it in ConnectBot from the main menu
  3. create a new connection with the user and your remote host
  4. choose to use public key authentication and choose the registered key
  5. uncheck “start a shell session” (this is equivalent to -N ssh flag)
  6. from the main menu, long touch the connection and edit the forwarded ports

Enjoy!

Deploying munin-node with drist

Written by Solène, on 17 April 2019.
Tags: #drist #automation #openbsd

Comments on Mastodon

The following guide is a real world example of drist usage. We will create a script to deploy munin-node on OpenBSD systems.

We need to create a script that installs the munin-node package and also configures it using the default suggestions. This is done easily using the script file.

#!/bin/sh

# checking munin not installed
pkg_info | grep munin-node
if [ $? -ne 0 ]; then
    pkg_add munin-node
    munin-node-configure --suggest --shell | sh
    rcctl enable munin_node
fi

rcctl restart munin_node

The script contains some simple logic to prevent trying to install munin-node each time we run it, and also to prevent re-configuring it automatically every time. This is done by checking if the pkg_info output contains munin-node.

We also need to provide a munin-node.conf file to allow our munin server to reach the nodes. For this how-to I'll dump the configuration into the file using cat, but of course you can use your favorite editor to create the file, or copy an original munin-node.conf file and edit it to suit your needs.

mkdir -p files/etc/munin/

cat <<EOF > files/etc/munin/munin-node.conf
log_level 4
log_file /var/log/munin/munin-node.log
pid_file /var/run/munin/munin-node.pid
background 1
setsid 1
user root
group wheel
ignore_file [\#~]$
ignore_file DEADJOE$
ignore_file \.bak$
ignore_file %$
ignore_file \.dpkg-(tmp|new|old|dist)$
ignore_file \.rpm(save|new)$
ignore_file \.pod$
allow ^127\.0\.0\.1$
allow ^192\.168\.1\.100$
allow ^::1$
host *
port 4949
EOF

Now, we only need to use drist on the remote host:

drist root@myserver

The latest version of drist now also supports privilege escalation using doas instead of connecting to root by ssh:

drist -s -e doas user@myserver

Playing Slay the Spire on OpenBSD

Written by Solène, on 01 April 2019.
Tags: #openbsd #gaming

Comments on Mastodon

Thanks to hard work from thfr@, it is now possible to play the commercial game **Slay The Spire** on OpenBSD.

A small introduction to the game: it's a solo deck building game where you need to climb a tower. Each floor may contain enemies, a treasure, a merchant, an elite (harder enemies) or an event.

There are four playable characters, each unlocked after playing with the previous one. The game is really easy to understand: every game (or run) restarts from the beginning with your character, and at every new floor you may earn items and cards to build a deck for this run.

When you die, you can unlock new items per character and unlock cards for the next runs. The goal is to reach the top of the tower. Each character plays really differently, and each allows a few obvious deck builds.

The game requires at least OpenBSD 6.5, but this method using libgdx works since OpenBSD 6.9. For this you will need to:

1. Buy Slay The Spire on GOG or Steam

2. Copy files from a Slay The Spire installation (Windows or Linux) to your OpenBSD system or unzip the linux installer .sh file

3. Install some packages with pkg_add: openal jdk-11 lwjgl libgdx

4. Search for the .jar file (biggest file), then run libgdx-setup to extract data from the jar file and prepare the game.

5. Run the game with libgdx-run

6. Don't forget to eat, hydrate yourself and sleep. This game is time consuming :)

All settings and saves are stored in the game folder, so you may want to back it up if you don't want to lose your progression.

Again, thanks to thfr@ for his huge work on making games work on OpenBSD!

Using haproxy for TLS layer

Written by Solène, on 07 March 2019.
Tags: #openbsd

Comments on Mastodon

This article explains how to use haproxy to add a TLS layer to any TCP protocol, including http or gopher. The following example shows the minimal setup required to make it work; haproxy has a lot of options, and I won't use them here.

The idea is to let haproxy manage the TLS part and let your http server (or any daemon listening on TCP) reply within the wrapped connection.

You need a simple haproxy.cfg, which can look like this:

defaults
        mode    tcp
        timeout client 50s
        timeout server 50s
        timeout connect 50s

frontend haproxy
        bind *:7000 ssl crt /etc/ssl/certificat.pem
        default_backend gopher

backend gopher
        server gopher 127.0.0.1:7070 check

It waits on port 7000, uses the file /etc/ssl/certificat.pem as its certificate, and forwards requests to the backend on 127.0.0.1:7070. That is ALL. If you want to do https, you need to listen on port 443 and forward to your port 80.

The PEM file is made of the private key concatenated with the full chain certificate. If you use a self-signed certificate, you can make it with the following command:

cat secret.key certificate.crt > cert.pem

One can use a folder containing PEM certificate files instead of a single file. This allows haproxy to receive connections for ALL the loaded certificates.
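
A sketch of the folder variant (the directory name is an example):

frontend haproxy
        bind *:7000 ssl crt /etc/ssl/haproxy-certs/
        default_backend gopher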

For more security, I recommend using the chroot feature and a dh parameters file, but that's out of the current topic.

Add a TLS layer to your Gopher server

Written by Solène, on 07 March 2019.
Tags: #gopher #openbsd

Comments on Mastodon

Hi,

In this article I will explain how to set up a gopher server supporting TLS. Gopher TLS support is not "official" as there is currently no RFC defining it. The community recently settled on a way to make it work while keeping compatibility with old servers / clients.

The way to do it is really simple.

Client A tries to connect to Server B with a TLS handshake. If Server B answers the handshake correctly, Client A sends the gopher request and Server B answers it. If Server B doesn't understand the TLS handshake, it will probably output a regular gopher page; this output is thrown away, Client A retries the connection using plaintext gopher, and Server B answers the gopher request.

This is easy to achieve because the gopher protocol doesn't require the server to send anything to the client before the client sends its request.

Adding the TLS layer and the dispatching can be achieved using sslh and relayd. You could use haproxy instead of relayd, but the latter is in the OpenBSD base system, so I will use it. Thanks to parazyd for sharing about sslh for this use case.

sslh is a protocol demultiplexer: it listens on a port and, depending on what it receives, guesses the protocol used by the client and sends the connection to the corresponding backend. Its first purpose was to make ssh available on port 443 while still having the https daemon working on that server.

Here is a schema of the setup

                        +→ relayd for TLS + forwarding
                        ↑                        ↓
                        ↑ tls?                   ↓
client -> sslh TCP 70 → +                        ↓
                        ↓ not tls                ↓
                        ↓                        ↓
                        +→ → → → → → → gopher daemon on localhost

This method allows wrapping any server to make it TLS compatible. The best case would be TLS-compatible servers doing all the work, without requiring sslh and something to add the TLS; but for now, it's a way to show that TLS for gopher is real.

Relayd

The relayd(1) part is easy: you first need an x509 certificate for the TLS part. I will not explain here how to get one, there are already plenty of how-tos, and one can use Let's Encrypt with acme-client(1) to get one on OpenBSD.

We will write our configuration in /etc/relayd.conf

log connection
relay "gopher" {
    listen on 127.0.0.1 port 7000 tls
    forward to 127.0.0.1 port 7070
}

In this example, relayd listens on port 7000 and our gopher daemon listens on port 7070. According to relayd.conf(5), relayd will look for the certificate at the following places: /etc/ssl/private/$LISTEN_ADDRESS:$PORT.key and /etc/ssl/$LISTEN_ADDRESS:$PORT.crt, with the current example you will need the files: /etc/ssl/private/127.0.0.1:7000.key and /etc/ssl/127.0.0.1:7000.crt

relayd can be enabled and started using rcctl:

# rcctl enable relayd
# rcctl start relayd

Gopher daemon

Choose your favorite gopher daemon; I recommend geomyidae, but any other valid daemon will work, just make it listen on the correct address and port combination.

# pkg_add geomyidae
# rcctl enable geomyidae
# rcctl set geomyidae flags -p 7070
# rcctl start geomyidae

SSLH

We will use sslh_fork (sslh_select would be valid too, they have different pros/cons). The --tls parameter tells where to forward a TLS connection, while --ssh will forward to the gopher daemon. This works because the ssh protocol is already configured within sslh and behaves exactly like a gopher daemon: the client doesn't expect the server to send data first.

# pkg_add sslh
# rcctl enable sslh_fork
# rcctl set sslh_fork flags --tls 127.0.0.1:7000 --ssh 127.0.0.1:7070 -p 0.0.0.0:70
# rcctl start sslh_fork

Client

You can easily test if this works by using openssl to connect by hand to port 70:

$ openssl s_client -connect 127.0.0.1:7000

You should see a lot of output, which is the TLS handshake; then you can send a gopher request like "/" and you should get a result. Using telnet on the same address and port should give the same result.

My gopher client clic already supports gopher TLS; it is available at git://bitreich.org/clic and only requires the ecl Common Lisp implementation to compile.

OpenBSD and iSCSI part2: the initiator (client)

Written by Solène, on 21 February 2019.
Tags: #unix #openbsd #iscsi

Comments on Mastodon

This is the second article of the series about iSCSI. In this one, you will learn how to connect to an iSCSI target using iscsid, the OpenBSD base daemon.

The configuration file of iscsid doesn't exist by default; its location is /etc/iscsi.conf. It can easily be written using the following:

target1="100.64.2.3"
myaddress="100.64.2.2"

target "disk1" {
    initiatoraddr $myaddress
    targetaddr $target1
    targetname "iqn.1994-04.org.netbsd.iscsi-target:target0"
}

While most lines are really obvious, the initiatoraddr line is mandatory; many thanks to cwen@ for pointing this out when I was stuck on it.

The targetname value will depend on the iSCSI target server. If you use netbsd-iscsi-target, you only need to care about the last part, aka target0, and replace it with the name of your target (which is target0 for the default one).

Then we can enable the daemon and start it:

# rcctl enable iscsid
# rcctl start iscsid

In your dmesg, you should see a line like:

sd4 at scsibus0 targ 1 lun 0: <NetBSD, NetBSD iSCSI, 0> SCSI3 0/direct fixed t10.NetBSD_0x5c6cf1b69fc3b38a

If you use netbsd-iscsi-target, the whole line should be identical except for the sd4 part, which can change depending on your hardware.

If you don't see it, you may need to reload the iscsid configuration file with iscsictl reload.

Warning: iSCSI is a bit of a pain to debug. If it doesn't work, double check the IPs in /etc/iscsi.conf, and check your PF rules on the initiator and the target. You should at least be able to telnet into the target IP on port 3260.
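
For instance, with the addresses used above:

$ telnet 100.64.2.3 3260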

Once you have found your new sd device, you can format it and mount it like a regular disk device:

# newfs /dev/rsd4c
# mount /dev/sd4c /mnt

iSCSI is far more efficient and faster than NFS, but it has a totally different purpose. I'm using it on my PowerPC machines to build packages on it. This reduces the usage of their old IDE disks while giving better response times and equivalent speed.

OpenBSD and iSCSI part1: the target (server)

Written by Solène, on 21 February 2019.
Tags: #unix #openbsd #iscsi

Comments on Mastodon

This is the first article of a series about iSCSI.

iSCSI is a protocol designed for sharing a block device across the network as if it were a local disk. This doesn't permit using that disk from multiple places at once though, except if you use a specific filesystem like GFS2 or OCFS2 (Linux only). In this article, we will learn how to create an iSCSI target, which is the "server" part of iSCSI; the target is the system holding the disk and making it available to others on the network.

OpenBSD does not have a target server in base, so we will have to use net/netbsd-iscsi-target for this. The setup is really simple.

First, we obviously need to install the package, and we will enable the daemon so it starts automatically at boot, but don't start it yet:

# pkg_add netbsd-iscsi-target
# rcctl enable iscsi_target

The configuration files are in the /etc/iscsi/ folder, which contains the files auths and targets. The two default files are identical. Looking at the source code, it seems that auths is read but has no actual use, so we will just overwrite it every time we modify targets to keep them in sync.

Default /etc/iscsi/targets (with comments stripped):

extent0         /tmp/iscsi-target0      0       100MB
target0         rw      extent0         10.4.0.0/16

The first line defines the file holding our disk in the second field, and the last field defines its size. When iscsi-target is started, it creates the files as required, with the size defined here.

The second line defines permissions: in this case, the extent0 disk can be used read/write by the net 10.4.0.0/16. For this example, I will only change the netmask to suit my network, then I copy targets over auths.

Let’s start the daemon:

# rcctl start iscsi_target
# rcctl check iscsi_target
iscsi_target(ok)

If you want to restrict access using PF, you only have to allow TCP port 3260 from the network that will connect to the target. The corresponding line would look like this:

pass in proto tcp to port 3260

Done!

Drist release with persistent ssh

Written by Solène, on 18 February 2019.
Tags: #unix #automation #drist

Comments on Mastodon

Drist release 1.04 is now available. It adds support for the -p flag, which makes the ssh connection persistent across the whole run using the ssh ControlMaster feature. This fixes one use case where you modify ssh keys in two operations (copy file + script to change permissions), and it makes drist a lot faster for quick tasks.

Drist makes a first ssh connection to get the real hostname of the remote machine, and then opens one ssh connection for each step (copy, copy-hostname, absent, absent-hostname, script, script-hostname); this means that in the use case where you copy one file and reload a service, it was making three connections. Now, with the persistent flag, drist keeps the first connection and reuses it, closing the control socket at the end of the run.

Drist is now 121 lines long.

Download v1.04

SHA512 checksum, split in two to not break the display:

525a7dc1362877021ad2db8025832048d4a469b72e6e534ae4c92cc551b031cd
1fd63c6fa3b74a0fdae86c4311de75dce10601d178fd5f4e213132e07cf77caa

Aspell to check spelling

Written by Solène, on 12 February 2019.
Tags: #unix

Comments on Mastodon

I never used a command line utility to check the spelling in my texts because I did not know how to. After taking five minutes to learn it, I feel guilty about not having used it before, as it is really simple.

First, you want to install the aspell package, which may already be there, pulled in as a dependency. On OpenBSD it's easy:

# pkg_add aspell

I will only explain how to use it on text files. I think it is possible to have some integration with text editors but then, it would be more relevant to check out the editor documentation.

If I want to check the spelling in my file draft.txt it is as simple as:

$ aspell -l en_EN -c draft.txt

The parameter -l en_EN will depend on your locale; mine is fr_FR.UTF-8, so aspell uses it by default if I don't enforce another language. With this command, aspell opens an interactive display in the terminal.

The output looks like this, with the word ful highlighted (which I cannot render in my article).

It's ful of mistakkes!

I dont know how to type corectly!


1) flu                                              6) FL
2) foul                                             7) fl
3) fuel                                             8) UL
4) full                                             9) fol
5) furl                                             0) fur
i) Ignore                                           I) Ignore all
r) Replace                                          R) Replace all
a) Add                                              l) Add Lower
b) Abort                                            x) Exit

?

I am asked how I want to resolve the issue with ful; as I wanted to write full, I type 4 and aspell replaces ful with full. It then automatically jumps to the next error found, mistakkes in my case:

It's full of mistakkes!

I dont know how to type corectly!


1) mistakes                                         6) misstates
2) mistake's                                        7) mistimes
3) mistake                                          8) mistypes
4) mistaken                                         9) stake's
5) stakes                                           0) Mintaka's
i) Ignore                                           I) Ignore all
r) Replace                                          R) Replace all
a) Add                                              l) Add Lower
b) Abort                                            x) Exit

?

and it will continue until there are no errors left, then the file is saved with the changes.
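
If you only want the list of misspelled words without the interactive session, aspell also has a list mode reading from standard input:

$ aspell -l en_EN list < draft.txt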

I will use aspell every day from now on.

Port of the week: sct

Written by Solène, on 07 February 2019.
Tags: #unix #openbsd

Comments on Mastodon

It's been a long time since I wrote a "port of the week".

This week, I am happy to present you sct, a very small utility to set the color temperature of your screen. You can install it on OpenBSD with pkg_add sct, and its usage is really simple: just run sct $temp where $temp is the temperature you want to get on your screen.

The default temperature is 6500; if you lower this value, the screen shifts toward red, meaning it will appear less blue, which may be more comfortable for some people. The right temperature depends on the screen and on your feeling: I have one screen which is correct at 5900, but another old screen turns too red below 6200!

You can add sct 5900 to your .xsession file to start it when you start your X11 session.

There is an alternative to sct named redshift; it is more complicated, as you need to tell it your location as latitude and longitude, and, as a daemon, it will continuously correct your screen temperature depending on the time of day. This is possible because, knowing your location on earth and the time, you can compute the sunrise and dawn times. sct is not a daemon: you run it once, and it does not change the temperature until you call it again.

How to parallelize Drist

Written by Solène, on 06 February 2019.
Tags: #drist #automation #unix

Comments on Mastodon

This article will show you how to make drist faster by using it on multiple servers at the same time, in a correct way.

What is drist?

drist is my deployment tool, presented in a previous article on this blog. It is easily possible to parallelize it (this works for everything though) using a Makefile. I use this to deploy a configuration on my servers at the same time, which is way faster.

A simple BSD Make compatible Makefile looks like this:

SERVERS=tor-relay.local srvmail.tld srvmail2.tld
${SERVERS}:
        drist $*
install: ${SERVERS}
.PHONY: all install ${SERVERS}

This creates a target for each server in my list, each calling drist. Typing make install will iterate over the $SERVERS list sequentially, but it is also possible to use make -j 3 to tell make to use 3 parallel jobs. The output may be mixed though.

You can also use make tor-relay.local if you don’t want make to iterate over all servers. This doesn’t do more than typing drist tor-relay.local in the example, but your Makefile may do other logic before/after.

If you want to type make to deploy everything instead of make install you can add the line all: install in the Makefile.

If you use GNU Make (gmake), the file requires a small change:

The part ${SERVERS}: must be changed to ${SERVERS}: %: (a static pattern rule). I think gmake will print a warning, but I did not get a better result; if you have the solution to remove the warning, please tell me.
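
Putting it together, a sketch of the gmake version with the same server list:

SERVERS=tor-relay.local srvmail.tld srvmail2.tld
${SERVERS}: %:
        drist $*
install: ${SERVERS}
.PHONY: all install ${SERVERS}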

If you are not comfortable with Makefiles, the .PHONY line tells make that the targets are not valid files.

Make is awesome!

Vincent Delft talk at FOSDEM 2019: OpenBSD as a full-featured NAS

Written by Solène, on 05 February 2019.
Tags: #unix #openbsd

Comments on Mastodon

Hi, I rarely post about external links or other people's work, but at FOSDEM 2019 Vincent Delft gave a talk about running OpenBSD as a full featured NAS.

I do use OpenBSD on my NAS; I have wanted to write an article about it for a long time but never did. Thanks to Vincent, I can just share his work, which is very interesting if you plan to build your own NAS.

Videos can be downloaded directly with the following links provided by FOSDEM:

Transfer your files with Kermit

Written by Solène, on 31 January 2019.
Tags: #unix #kermit

Comments on Mastodon

Hi, it's been a long time since I wanted to write this article. The topic is Kermit, a file transfer protocol from the 80's which solved the problems of that era (text and binary files, poor lines, high latency, etc.).

There is a comm/kermit package on OpenBSD, and I am going to show you how to use it. The package contains the program ckermit, which is a kermit client/server.

Kermit is a lot of things: there is the protocol, but also the client/server program. When you type kermit, it opens a kermit shell where you can type commands or write kermit scripts; this also allows scripts to be run with kermit in the shebang.

I personally use kermit over ssh to retrieve files from my remote server, this requires kermit on both machines. My script is the following:

#!/usr/local/bin/kermit +
set host /pty ssh -t -e none -l solene perso.pw kermit
remote cd /home/ftp/
cd /home/solene/Downloads/
reget /recursive /delete .
close
exit

This connects to the remote server and starts kermit there. It changes the current directory on the remote server to /home/ftp and locally goes into /home/solene/Downloads; then it starts retrieving data, resuming previous unfinished transfers (the reget command), and every finished file is deleted from the remote server. Once finished, it closes the ssh connection and exits.

The transfer interface looks like this. It shows how you are connected, which file is currently transferring, its size, the percentage done (0% in the example), time left, speed and some other information.

C-Kermit 9.0.302 OPEN SOURCE:, 20 Aug 2011, solene.perso.local [192.168.43.56]

   Current Directory: /home/downloads/openbsd
        Network Host: ssh -t -e none -l solene perso.pw kermit (UNIX)
        Network Type: TCP/IP
              Parity: none
         RTT/Timeout: 01 / 03
           RECEIVING: src.tar.gz => src.tar.gz => src.tar.gz
           File Type: BINARY
           File Size: 183640885
        Percent Done:
                          ...10...20...30...40...50...60...70...80...90..100
 Estimated Time Left: 00:43:32
  Transfer Rate, CPS: 70098
        Window Slots: 1 of 30
         Packet Type: D
        Packet Count: 214
       Packet Length: 3998
         Error Count: 0
          Last Error:
        Last Message:

X to cancel file, Z to cancel group, <CR> to resend last packet,
E to send Error packet, ^C to quit immediately, ^L to refresh screen.

What's interesting is that you can skip a file by pressing "X": kermit stops downloading it (but keeps the partial file for later resuming) and starts downloading the next file. It can be useful when you transfer a bunch of files and the current one is really big and not needed right now: just press "X" and it is skipped. "Z" or "E" will exit the transfer and close the connection.

Speed can be improved by adding the following lines before the reget command:

set reliable
set window 32
set receive packet-length 9024

This improves performance because nowadays our networks are mostly reliable and fast; Kermit was designed at a time when serial lines were used to transfer data. It's also reported that Kermit is in use in the ISS (International Space Station), though I can't verify whether it's still used there.

I never had any issue while transferring, even when resuming a file many times or using a poor 4G hot-spot with 20 seconds of latency.

I did some tests and I get the same performance as rsync over the Internet; it's a bit slower over LAN though.

I only described one use case; scripts can be written and there are a lot of other commands. You can type "help" in the kermit shell to get some hints, and "?" will display the command list.

It can be used interactively, you can queue files by using “add” to create a send-list, and then proceed to transfer the queue.

Another way to use it is to start the local kermit shell, then type "ssh user@remote-server" which will ssh into a remote box. There you can type "kermit" and use kermit commands; this makes a link between your local kermit and the remote one. You can go back to the local kermit by typing "Ctrl+\" followed by "C", and go back to the remote one by entering the command "C" (connect).

This is a piece of software I found by lurking in the ports tree to discover new software, and I fell in love with it. It's really reliable.

It does a different job compared to rsync: I don't think it can preserve times, permissions, etc., but it can be completely scripted, using parameters, and it's an awesome piece of software!

It should also support HTTP, HTTPS and FTP transfers as a client, but I did not get them to work. On OpenBSD, the HTTPS support is disabled, as it requires some work to switch to LibreSSL.

You can find information on the official website.

Some 2019 news

Written by Solène, on 14 January 2019.
Tags: #blog

Comments on Mastodon

Hi from 2019! Some news about me and this blog.

It’s been more than a month since the last article, which is unusual. I don’t have much time these days and the ideas in the queue are not easy topics, so I don’t publish anything.

I am now on Mastodon at solene@bsd.network, publishing things on the Fediverse. Mostly UNIX propaganda.

This year I plan to work on reed-alert to improve its usage, and maybe write more how-tos or documentation about it too. I am also thinking about writing non-core probes in a separate repository.

Cl-yag, the blog generator I use for this blog, deserves some attention too: I would like to make it possible to create static pages that don't appear in the index/RSS. This doesn't require much code, as I already have a proof of concept, but it requires some changes to integrate properly.

Finally, my deployment tool drist should definitely be fixed to support tcsh and csh as remote shells for script execution. This requires a few easy changes. Some better documentation and how-tos would be nice too.

I also revived a project named faubackup, a backup software which is now hosted on Framagit.

And I revived another project of mine, a package statistics website gathering stats about installed OpenBSD packages. The code is not great, the web UI is not great, the filters are not great, but it works. It needs improvements. I'm thinking about making a package of it for people wishing to participate; it would install the client and add a cron job to update the package list weekly. The web UI is at this address: Pkgstat. That name is not good, but I did not find a better one yet. The code can be downloaded here.

Thank you for reading :)

Fun tip #3: Split a line using ed

Written by Solène, on 04 December 2018.
Tags: #fun-tip #unix #openbsd68

Comments on Mastodon

In this new article I will explain how to programmatically split a line (insert a newline in it) using ed.

We will do so using commands sent to ed on its stdin. The logic is to locate the place where to add the newline and whether a character needs to be replaced.

this is a file
with a too much line in it that should be split
but not this one.

In order to do so, we will format the command list using printf(1), with a small trick to insert the newline. The command list is the following:

/too much line
s/that /that\
/
,n

This searches for the first line matching "too much line" and then replaces "that " with "that" followed by a newline; the trick is to escape the newline with a backslash so the substitution command accepts it. At the end we print the file with line numbers (replace ,n by w to write the changes instead).

The resulting command line is:

$ printf '/too much line\ns/that /that\\\n/\n,n\n' | ed file.txt
81
> with a too much line in it that should be split
> should be split
> 1     this is a file
2       with a too much line in it that
3       should be split
4       but not this one.
> ?

Configuration deployment made easy with drist

Written by Solène, on 29 November 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Hello, in this article I will present you my deployment tool drist (if you speak Russian, I am already aware of what you think). It reached feature complete status today, so now I can write about it.

As a system administrator, I started using salt a few years ago. And honestly, I cannot cope with it anymore: it is slow, it can get very complicated for some tasks like correctly ordering commands, and a configuration file can become a nightmare when you start using conditions in it.

You may already have read and heard a bit about drist as I wrote an article about my presentation of it at bitreichcon 2018.

History

I also tried alternatives like ansible, puppet, Rex, etc. One day, when lurking in the ports tree, I found sysutils/radmind, which caught my interest even though it is really poorly documented. It is a project from 1995 if I remember correctly, but I liked the base idea. Radmind works with files: you create a known working set of files for your system, and you can propagate that whole set to other machines, or see differences between the reference and the current system. Sets could be negative, meaning that the listed files should not be present on the system, and it was also possible to add extra sets for specific hosts. The whole thing is really cumbersome, it requires a lot of work, I found little documentation, etc., so I did not use it, but it led me to write my own deployment tool using ideas from radmind (working with files) and from Rex (using a script for doing changes).

Concept

drist aims at being simple to understand and pluggable with standard tools. There is no special syntax to learn, no daemon to run, no agent, and it relies on base tools like awk, sed, ssh and rsync.

drist is cross platform as it has few requirements, but it is not well suited for deploying on too many different operating systems.

When executed, drist runs six steps in a specific order; you can use only the steps you need.

Shamelessly copied from the man page, explanations after:

  1. If folder files exists, its content is copied to the server using rsync(1).
  2. If folder files-HOSTNAME exists, its content is copied to the server using rsync(1).
  3. If folder absent exists, filenames in it are deleted on the server.
  4. If folder absent-HOSTNAME exists, filenames in it are deleted on the server.
  5. If file script exists, it is copied to the server and executed there.
  6. If file script-HOSTNAME exists, it is copied to the server and executed there.

In the previous list, all the existence checks are done from the current working directory where drist is started. The text HOSTNAME is replaced by the output of uname -n on the remote server, and files are copied starting from the root directory.

drist does not do anything more. In a more literal manner: it copies files to the remote server using a local filesystem tree (folder files), deletes on the remote server all files listed in the local filesystem tree (folder absent), and runs on the remote server a script named script.

Each of these can be customized per host by adding a “-HOSTNAME” suffix to the folder or file name, because experience taught me that some hosts do require specific configuration.

If a folder or a file does not exist, drist will skip it. So it is possible to only copy files, or only execute a script, or delete files and execute a script after.
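
For example, here is what a hypothetical module layout could look like: it deploys /etc/motd everywhere, with a specific version for one host (web1.example.com is a made-up uname -n value):

$ find . -type f
./files/etc/motd
./files-web1.example.com/etc/motd
./script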

Drist usage

The usage is pretty simple. drist has 3 flags which are optional.

  • -n flag will show what would happen (simulation mode)
  • -s flag tells drist to use sudo on the remote host
  • -e flag with a parameter will tell drist to use a specific path for the sudo program

The remote server address (ssh format like user@host) is mandatory.

$ drist my_user@my_remote_host

drist will look at files and folders in the current directory when executed; this allows you to organize your modules as you want using your filesystem and a revision control system.

Simple examples

Here are two examples to illustrate its usage. The examples are easy, for learning purposes.

Deploying ssh keys

I want to easily copy my users' ssh keys to a remote server.

$ mkdir drist_deploy_ssh_keys
$ cd drist_deploy_ssh_keys
$ mkdir -p files/home/my_user1/.ssh
$ mkdir -p files/home/my_user2/.ssh
$ cp -fr /path/to/key1/id_rsa files/home/my_user1/.ssh/
$ cp -fr /path/to/key2/id_rsa files/home/my_user2/.ssh/
$ drist user@remote-host
Copying files from folder "files":
    /home/my_user1/.ssh/id_rsa
    /home/my_user2/.ssh/id_rsa

Deploying authorized_keys file

We can easily create the authorized_key file by using cat.

$ mkdir drist_deploy_ssh_authorized
$ cd drist_deploy_ssh_authorized
$ mkdir -p files/home/user/.ssh/
$ cat /path/to/user/keys/*.pub > files/home/user/.ssh/authorized_keys
$ drist user@remote-host
Copying files from folder "files":
    /home/user/.ssh/authorized_keys

This can be automated using a makefile running the cat command and then running drist.

all:
    cat /path/to/keys/*.pub > files/home/user/.ssh/authorized_keys
    drist user@remote-host

Installing nginx on FreeBSD

This module (aka a folder which contains material for drist) will install nginx on FreeBSD and start it.

$ mkdir deploy_nginx
$ cd deploy_nginx
$ cat >script <<EOF
#!/bin/sh
test -f /usr/local/bin/nginx
if [ $? -ne 0 ]; then
    pkg install -y nginx
fi
sysrc nginx_enable=yes
service nginx restart
EOF
$ drist user@remote-host
Executing file "script":
    Updating FreeBSD repository catalogue...
    FreeBSD repository is up to date.
    All repositories are up to date.
    The following 1 package(s) will be affected (of 0 checked):

    New packages to be INSTALLED:
            nginx: 1.14.1,2

    Number of packages to be installed: 1

    The process will require 1 MiB more space.
    421 KiB to be downloaded.
    [1/1] Fetching nginx-1.14.1,2.txz: 100%  421 KiB 430.7kB/s    00:01
    Checking integrity... done (0 conflicting)
    [1/1] Installing nginx-1.14.1,2...
    ===> Creating groups.
    Using existing group 'www'.
    ===> Creating users
    Using existing user 'www'.
    [1/1] Extracting nginx-1.14.1,2: 100%
    Message from nginx-1.14.1,2:

    ===================================================================
    Recent version of the NGINX introduces dynamic modules support.  In
    FreeBSD ports tree this feature was enabled by default with the DSO
    knob.  Several vendor's and third-party modules have been converted
    to dynamic modules.  Unset the DSO knob builds an NGINX without
    dynamic modules support.

    To load a module at runtime, include the new `load_module'
    directive in the main context, specifying the path to the shared
    object file for the module, enclosed in quotation marks.  When you
    reload the configuration or restart NGINX, the module is loaded in.
    It is possible to specify a path relative to the source directory,
    or a full path, please see
    https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/ and
    http://nginx.org/en/docs/ngx_core_module.html#load_module for
    details.

    Default path for the NGINX dynamic modules is

    /usr/local/libexec/nginx.
    ===================================================================
    nginx_enable:  -> yes
    Performing sanity check on nginx configuration:
    nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
    nginx not running? (check /var/run/nginx.pid).
    Performing sanity check on nginx configuration:
    nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
    Starting nginx.

More complex example

Now I will show more complex examples, with host specific steps. I will not display the output because the previous outputs were sufficient to give a rough idea of what drist does.

Removing someone's ssh access

We will reuse an existing module here: a user should no longer be able to log in to their account on the servers using their ssh key.

$ cd ssh
$ mkdir -p absent/home/user/.ssh/
$ touch absent/home/user/.ssh/authorized_keys
$ drist user@server

Installing php on FreeBSD

The following module will install php and remove the opcache.ini file, and will install php72-pdo_pgsql if it is run on server production.domain.private.

$ mkdir deploy_php && cd deploy_php
$ mkdir -p files/usr/local/etc
$ cp /some/correct/config.ini files/usr/local/etc/php.ini
$ cat > script <<EOF
#!/bin/sh
test -f /usr/local/etc/php-fpm.conf || pkg install -f php-extensions
sysrc php_fpm_enable=yes
service php-fpm restart
test -f /usr/local/etc/php/opcache.ini && rm /usr/local/etc/php/opcache.ini
EOF
$ cat > script-production.domain.private <<EOF
#!/bin/sh
test -f /usr/local/etc/php/pdo_pgsql.ini || pkg install -f php72-pdo_pgsql
service php-fpm restart
EOF

The monitoring machine

This one is unique and I would like to avoid applying its configuration to another server (that happened to me once with salt and it was really, really bad). So I will just do the whole job using the hostname specific cases.

$ mkdir my_unique_machine && cd my_unique_machine
$ mkdir -p files-unique-machine.private/usr/local/etc/{smokeping,munin}
$ cp /good/config files-unique-machine.private/usr/local/etc/smokeping/config
$ cp /correct/conf files-unique-machine.private/usr/local/etc/munin/munin.conf
$ cat > script-unique-machine.private <<EOF
#!/bin/sh
pkg install -y smokeping munin-master munin-node
munin-node-configure --shell --suggest | sh
sysrc munin_node_enable=yes
sysrc smokeping_enable=yes
service munin-node restart
service smokeping restart
EOF
$ drist user@incorrect-host
$ drist user@unique-machine.private
Copying files from folder "files-unique-machine.private":
    /usr/local/etc/smokeping/config
    /usr/local/etc/munin/munin.conf
Executing file "script-unique-machine.private":
    [...]

Nothing happened on the wrong system.

Be creative

Everything can be automated easily. I have a makefile in a lot of my drist modules, so I just need to type “make” to run them correctly. Sometimes it requires concatenating files before running, sometimes I do not want to make a mistake or have to remember which module applies to which server (if it’s specific), so the makefile does the job for me.

One of my drist modules looks at all my SSL certificates from another module, makes a reed-alert configuration file using awk and deploys it on the monitoring server. All I do is type “make” and enjoy my free time.

How to get it and install it

  • Drist can be downloaded at this address.
  • Sources can be cloned using git clone git://bitreich.org/drist

In the sources folder, type “make install” as root; this will copy the drist binary to /usr/bin/drist and its man page to /usr/share/man/man1/drist.1

For copying files, drist requires rsync on both local and remote hosts.

For running the script file, a sh compatible shell is required (csh is not working).

Fun tip #2: Display trailing spaces using ed

Written by Solène, on 29 November 2018.
Tags: #unix #fun-tip #openbsd68

Comments on Mastodon

This second fun-tip article will explain how to display trailing spaces in a text file, using the ed(1) editor. ed has a special command for showing a dollar character at the end of each line, which means that if the line ends with spaces, the dollar character will be separated from the last visible character.

$ echo ",pl" | ed some-file.txt
453
This second fun-tip article will explain how to display trailing$
spaces in a text file, using the$
[ed(1)$](https://man.openbsd.org/ed)
editor.$
ed has a special command for showing a dollar character at the end of$
each line, which mean that if the line has some spaces, the dollar$
character will spaced from the last visible line character.$
$
.Bd -literal -offset indent$
echo ",pl" | ed some-file.txt$

This is the output of the article file while I am writing it. As you can notice, there is no trailing space here.

The first number shown in the ed output is the file size, because ed starts at the end of the file and then waits for commands.

If I use that very same command on a small text file with trailing spaces, the following result is expected:

49
this is full    $
of trailing  $
spaces      !    $

It is also possible to display line numbers using the “n” command instead of the “p” command. This would produce this result for my current article file:

1559
1       .Dd November 29, 2018$
2       .Dt "Show trailing spaces using ed"$
3       This second fun-tip article will explain how to display trailing$
4       spaces in a text file, using the$
5       .Lk https://man.openbsd.org/ed ed(1)$
6       editor.$
7       ed has a special command for showing a dollar character at the end of$
8       each line, which mean that if the line has some spaces, the dollar$
9       character will spaced from the last visible line character.$
10      $
11      .Bd -literal -offset indent$
12      echo ",pl" | ed some-file.txt$
13      453$
14      .Dd November 29, 2018
15      .Dt "Show trailing spaces using ed"
16      This second fun-tip article will explain how to display trailing
17      spaces in a text file, using the
18      .Lk https://man.openbsd.org/ed ed(1)
19      editor.
20      ed has a special command for showing a '\ character at the end of
21      each line, which mean that if the line has some spaces, the '\
22      character will spaced from the last visible line character.
23
24      \&.Bd \-literal \-offset indent
25      \echo ",pl" | ed some-file.txt
26      .Ed$
27      $
28      This is the output of the article file while I am writing it. As you$
29      can notice, there is no trailing space here.$
30      $
31      The first number shown in the ed output is the file size, because ed$
32      starts at the end of the file and then, wait for commands.$
33      $
34      If I use that very same command on a small text files with trailing$
35      spaces, the following result is expected:$
36      $
37      .Bd -literal -offset indent$
38      49$
39      this is full
40      of trailing
41      spaces      !
42      .Ed$
43      $
44      It is also possible to display line numbers using the "n" command$
45      instead of the "p" command.$
46      This would produce this result for my current article file:$
47      .Bd -literal -offset indent$

This shows my article file with each line numbered, plus the position of the last character of each line. This is awesome!

I have to admit though that including my own article as an example is blowing my mind, especially as I am writing it using ed.

Tor part 6: onionshare for sharing files anonymously

Written by Solène, on 21 November 2018.
Tags: #tor #unix #network #openbsd68

Comments on Mastodon

If for some reason you need to share a file anonymously, this can be done through Tor using the port net/onionshare. Onionshare will start a web server displaying a unique page with a list of shared files and a Download Files button leading to a zip file.

While waiting for a download, onionshare will display HTTP logs. By default, onionshare will exit upon successful download of the files but this can be changed with the flag --stay-open.
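
For example, to share two files and keep the service running after the first download (the file names are made up):

$ onionshare --stay-open file1.txt file2.txt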

Its usage is very simple, execute onionshare with the list of files to share, as you can see in the following example:

solene@computer ~ $ onionshare Epictetus-The_Enchiridion.txt
Onionshare 1.3 | https://onionshare.org/
Connecting to the Tor network: 100% - Done
Configuring onion service on port 17616.
Starting ephemeral Tor onion service and awaiting publication
Settings saved to /home/solene/.config/onionshare/onionshare.json
Preparing files to share.
 * Running on http://127.0.0.1:17616/ (Press CTRL+C to quit)
Give this address to the person you're sending the file to:
http://3ngjewzijwb4znjf.onion/hybrid-marbled

Press Ctrl-C to stop server

Now, I need to give the address http://3ngjewzijwb4znjf.onion/hybrid-marbled to the receiver who will need a web browser with Tor to download it.

Tor part 5: onioncat for IPv6 VPN over tor

Written by Solène, on 13 November 2018.
Tags: #tor #unix #network #openbsd68

Comments on Mastodon

This article is about a software named onioncat; it is available as a package on most Unix and Linux systems. This software allows you to create an IPv6 VPN over Tor, with no restrictions on network usage.

First, we need to install onioncat, on OpenBSD:

$ doas pkg_add onioncat

Run a tor hidden service, as explained in one of my previous articles, and get the hostname value. If you run multiple hidden services, pick one hostname.

# cat /var/tor/ssh_hidden_service/hostname
g6adq2w15j1eakzr.onion

Now that we have the hostname, we just need to run ocat.

# ocat g6adq2w15j1eakzr.onion

If everything works as expected, a tun interface will be created, with a fe80:: IPv6 address and a fd87:: address assigned to it.

Your system is now reachable, via Tor, through its IPv6 address starting with fd87:: . It supports every IP protocol. Instead of using the torsocks wrapper and a .onion hostname, you can use the IPv6 address with any software.
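
For example, assuming the remote machine got the made-up address fd87:d87e:eb43::42, a plain ssh connection going through Tor would be:

$ ssh solene@fd87:d87e:eb43::42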

Moving away from Emacs, 130 days after

Written by Solène, on 13 November 2018.
Tags: #emacs

Comments on Mastodon

It has been more than four months since I wrote my article about leaving Emacs. This article will quickly speak about my journey.

First, I successfully left Emacs. Long story short, I like Emacs and think it’s a great piece of software, but I’m not comfortable being dependent on it for everything I do. I chose to replace all my Emacs usage with other software (agenda, note taking, todo-list, IRC client, jabber client, editor etc..).

  • agenda is now replaced by when (port productivity/when), but I plan to replace it by calendar(1) as it’s in base and when doesn’t do much.
  • todo-list: I now use taskwarrior + a kanban board (using kanboard) for team work
  • notes: I wrote a small software named “notes” which is a wrapper for editing files and following edition using git. It’s available at git://bitreich.org/notes
  • IRC: weechat (not better or worse than emacs circe)
  • jabber: profanity
  • editor: vim, ed or emacs, depending on what I do. Emacs is excellent for writing Lisp or Scheme code, while I prefer vim for most editing tasks. I now use ed for small edits.
  • mail: I wrote some kind of a wrapper on top of mblaze. I plan to share it someday.

I’m happy to have moved out from Emacs.

Fun tip #1: Apply a diff with ed

Written by Solène, on 13 November 2018.
Tags: #fun-tip #unix #openbsd68

Comments on Mastodon

I am starting a new kind of article that I chose to name ”fun facts“. These articles will be about one-liners which can have some kind of use, or that I find interesting from a technical point of view. While not useless, these commands may be used in very specific cases.

The first of its kind will explain how to programmatically use diff to modify file1 into file2, using a command line, and without a patch.

First, create a file, with a small content for the example:

$ printf "first line\nsecond line\nthird line\nfourth line with text\n" > file1
$ cp file1{,.orig}
$ printf "very first line\nsecond line\n third line\nfourth line\n" > file1

We will use diff(1) -e flag with the two files.

$ diff -e file1 file1.orig
4c
fourth line
.
1c
very first line
.

The diff(1) output is a batch of ed(1) commands which will transform file1 into file2. This can be embedded into a script as in the following example. We also add the w command at the end to save the file after editing.

#!/bin/sh
ed file1 <<EOF
4c
fourth line
.
1c
very first line
.
w
EOF

This is quite a convenient way to transform a file into another one without pushing the entire file. This can be used in a deployment script. It is more precise and less error prone than a sed command.

In the same way, we can use ed to alter a configuration file by writing instructions without using diff(1). The following script will change the first line containing “Port 22” into Port 2222 in /etc/ssh/sshd_config.

#!/bin/sh
ed /etc/ssh/sshd_config <<EOF
/Port 22
c
Port 2222
.
w
EOF

The sed(1) equivalent would be:

sed -i'' 's/.*Port 22.*/Port 2222/' /etc/ssh/sshd_config

Both programs have their use, pros and cons. The most important is to use the right tool for the right job.

Play Stardew Valley on OpenBSD

Written by Solène, on 09 November 2018.
Tags: #gaming #openbsd68

Comments on Mastodon

It’s possible to play native Stardew Valley on OpenBSD, and it’s not using a weird trick!

First, you need to buy Stardew Valley; it’s not very expensive and is often available at a lower price. I recommend buying it on GOG.

Now, follow the steps:

  1. install packages unzip and fnaify
  2. On GOG, download the linux installer
  3. unzip the installer (use unzip command on the .sh file)
  4. cd into data/noarch/game
  5. fnaify -y
  6. ./StardewValley
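
As a rough transcript of those steps (the installer file name depends on the GOG release, so it is a guess here):

# pkg_add unzip fnaify
$ unzip stardew_valley_*.sh
$ cd data/noarch/game
$ fnaify -y
$ ./StardewValley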

Enjoy!

Safely restrict commands through SSH

Written by Solène, on 08 November 2018.
Tags: #ssh #security #openbsd68 #highlight

Comments on Mastodon

sshd(8) has a very nice feature that is often overlooked. That feature is the ability to allow a ssh user to run a specified command and nothing else, not even a login shell.

This is really easy to use and the magic happens in the file authorized_keys which can be used to restrict commands per public key.

For example, if you want to allow someone to run the “uptime” command on your server, you can create a user account for that person with no password, so password login will be disabled, and add his/her ssh public key in ~/.ssh/authorized_keys of that new user, with the following content.

restrict,command="/usr/bin/uptime" ssh-rsa the_key_content_here

The user will not be able to log in, and running the command ssh remoteserver will return the output of uptime. There is no way to escape this.
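
To illustrate (the output is made up), even passing a command will still run uptime:

$ ssh remoteserver
 3:12PM  up 5 days,  2:04, 0 users, load averages: 0.32, 0.28, 0.27
$ ssh remoteserver /bin/sh
 3:12PM  up 5 days,  2:04, 0 users, load averages: 0.32, 0.28, 0.27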

While running uptime is not really helpful, this can be used for a much more interesting use case, like allowing remote users to use vmctl without giving them a shell account. The vmctl command requires parameters, so the configuration will be slightly different.

restrict,pty,command="/usr/sbin/vmctl $SSH_ORIGINAL_COMMAND" ssh-rsa the_key_content_here

The variable SSH_ORIGINAL_COMMAND contains the value of what is passed as parameters to ssh. The pty keyword also makes an appearance; it will be explained later.

If the user connects over ssh without a parameter, the vmctl usage will be output.

$ ssh remotehost
usage:  vmctl [-v] command [arg ...]
    vmctl console id
    vmctl create "path" [-b base] [-i disk] [-s size]
    vmctl load "path"
    vmctl log [verbose|brief]
    vmctl reload
    vmctl reset [all|vms|switches]
    vmctl show [id]
    vmctl start "name" [-Lc] [-b image] [-r image] [-m size]
            [-n switch] [-i count] [-d disk]* [-t name]
    vmctl status [id]
    vmctl stop [id|-a] [-fw]
    vmctl pause id
    vmctl unpause id
    vmctl send id
    vmctl receive id

If you pass parameters to ssh, it will be passed to vmctl.

$ ssh remotehost show
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
1     -     1    1.0G       -       -       solene test
$ ssh remotehost start test
vmctl: started vm 1 successfully, tty /dev/ttyp9
$ ssh -t remotehost console test
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?

The ssh connections become a call to vmctl and ssh parameters become vmctl parameters.

Note that in the last example, I use “ssh -t”; this forces allocation of a pseudo tty device, which is required for vmctl console to get a fully working console. The restrict keyword does not allow pty allocation, that is why we have to add pty after restrict, to allow it.

Tor part 4: run a relay

Written by Solène, on 08 November 2018.
Tags: #unix #tor

Comments on Mastodon

In this fourth Tor article, I will quickly cover how to run a Tor relay; the Tor project already has a very nice and up-to-date guide for setting up a relay. Those relays are what make Tor usable: with more relays, Tor gets more bandwidth and users become harder to trace, because that means more traffic to analyze.

A relay server can be an exit node, which will relay Tor traffic to the outside. This implies a lot of legal issues; the Tor Project offers to help you if your exit node gets you in trouble.

Remember that being an exit node is optional. Most relays are not exit nodes. They will either relay traffic between relays, or become a guard, which is an entry point to the Tor network. The guard gets the request over the non-Tor network and sends it to the next relay of the user's circuit.

Running a relay requires a lot of CPU (capable of some crypto) and a huge amount of bandwidth: at least 10 Mb/s as a minimal requirement. If you have less, you can still run a bridge with obfs4, but I won’t cover it here.

When running a relay, you will be able to set a daily/weekly/monthly traffic limit, so your relay will stop relaying when it reaches the quota. It’s quite useful if you don’t have unmetered bandwidth; you can also limit the bandwidth allowed to Tor.
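
To give a rough idea, a minimal non-exit relay configuration in torrc could look like this sketch (the values are examples, refer to the official guide):

Nickname myrelaynickname
ORPort 9001
ExitRelay 0
RelayBandwidthRate 2 MBytes
AccountingStart month 1 00:00
AccountingMax 200 GBytes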

To get real-time information about your relay, the software Nyx (net/nyx) is a Tor top-like front end which shows Tor CPU usage, bandwidth, connections and logs in real time.

The awesome Official Tor guide

File versioning with rcs

Written by Solène, on 31 October 2018.
Tags: #openbsd68 #highlight #unix

Comments on Mastodon

In this article I will present the rcs tools and we will use them for versioning files in /etc to track changes between edits. These tools are part of the OpenBSD base install.

Prerequisites

You need to create a RCS folder where your files are, so the file versions will be saved in it. I will use /etc in the examples; you can adapt to your needs.

# cd /etc
# mkdir RCS

The following examples use the command ci -u. The reason why will be explained later.

Tracking a file

We need to add a file to the RCS directory so we can track its revisions. Each time we proceed, we will create a new revision of the file which contains the whole file at that point in time. This will allow us to see changes between revisions, and the date of each revision (and some other information).

I really recommend tracking the files you edit in your system, or even configuration files in your user directory.

In the next example, we will create the first revision of our file with ci, and we will have to write a short message about it, like what that file does. Once we write the message, we need to validate with a single dot on its own line.

# cd /etc
# ci -u fstab
fstab,v  <--  fstab
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> this is the /etc/fstab file
>> .
initial revision: 1.1
done

Editing a file

The editing process has multiple steps, using ci and co:

  1. check out the file and lock it; this makes the file available for writing and prevents using co on it again (due to the lock)
  2. edit the file
  3. commit the new file + checkout

When using ci to store the new revision, we need to write a small message; try to use something clear and short. The log messages can be seen in the file history, and they should help you know which change was made and why. The full process is shown in the following example.

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v  <--  fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
revision 1.4 (unlocked)
done

View changes since last version

Using the previous example, we will use rcsdiff to check the changes since the last revision.

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# rcsdiff -u fstab
--- fstab   2018/10/28 14:28:29 1.1
+++ fstab   2018/10/28 14:30:41
@@ -9,3 +9,4 @@
 52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong

The -u flag produces a unified diff, which I find easier to read. Lines with + show additions, and lines with - show deletions (there are none in this example).

Use of ci -u

The examples were using ci -u because, if you use ci some_file, the file will be saved in the RCS folder but will be missing from its original place. You would then have to use co some_file to get it back (read-only).

# co -l fstab
RCS/fstab,v  -->  fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v  <--  fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
done
# ls fstab
ls: fstab: No such file or directory
# co fstab
RCS/fstab,v  -->  fstab
revision 1.5
done
# ls fstab
fstab

Using ci -u is very convenient because it prevents the user from forgetting to check out the file after committing the changes.

Show existing revisions of a file

# rlog fstab
RCS file: RCS/fstab,v
Working file: fstab
head: 1.2
branch:
locks: strict
access list:
symbolic names:
keyword substitution: kv
total revisions: 2;     selected revisions: 2
description:
new file
----------------------------
revision 1.2
date: 2018/10/28 14:45:34;  author: solene;  state: Exp;  lines: +1 -0;
Adding a disk
----------------------------
revision 1.1
date: 2018/10/28 14:45:18;  author: solene;  state: Exp;
Initial revision
=============================================================================

We have revisions 1.1 and 1.2; if we want to display the file in its 1.1 revision, we can use the following command:

# co -p1.1 fstab
RCS/fstab,v  -->  standard output
revision 1.1
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
done

Note that there is no space between the flag and the revision! This is required.

We can see that the command outputs some extra information about the file and “done” at the end. That extra information is sent to stderr while the actual file content is sent to stdout. That means if we redirect stdout to a file, we will get the file content.

# co -p1.1 fstab > a_file
RCS/fstab,v  -->  standard output
revision 1.1
done
# cat a_file
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2

Show a diff of a file since a revision

We can use rcsdiff with the -r flag to show the changes between the current version and a specific revision.

# rcsdiff -u -r1.1 fstab
--- fstab   2018/10/29 14:45:18 1.1
+++ fstab   2018/10/29 14:45:34
@@ -9,3 +9,4 @@
 52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
 52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong

Configure OpenSMTPD to relay on a network

Written by Solène, on 29 October 2018.
Tags: #openbsd68 #highlight #opensmtpd

Comments on Mastodon

With the new OpenSMTPD syntax change which landed with the OpenBSD 6.4 release, changes are needed to make opensmtpd act as a LAN relay to an smtp server. This case wasn’t covered in my previous article about opensmtpd; I was only writing about relaying from the local machine, not for a network. Mike (a reader of the blog) shared that it would be nice to have an article about it. Here it is! :)

A simple configuration would look like the following:

listen on em0
listen on lo0

table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db

action "local" mbox alias <aliases>
action "relay" relay host smtps://myrelay@remote-smtpd.tld auth <secrets>

match for local action "local"
match from local for any action "relay"
match from src 192.168.1.0/24 for any action "relay"

The daemon will listen on the em0 interface, and mail delivered from the network will be relayed to remote-smtpd.tld.

For a relay using authentication, the login and password must be defined in the file /etc/mail/secrets like this: myrelay login:Pa$$W0rd

smtpd.conf(5) explains creation of /etc/mail/secrets like this:

touch /etc/mail/secrets
chmod 640 /etc/mail/secrets
chown root:_smtpd /etc/mail/secrets

Tor part 3: Tor Browser

Written by Solène, on 24 October 2018.
Tags: #openbsd68 #openbsd #unix #tor

Comments on Mastodon

In this third Tor article, we will discover the web browser Tor Browser.

The Tor Browser is an official Tor project. It is a modified Firefox, including some default settings changes and some extensions. The changes are all related to privacy and anonymity. It has been made to be easy to browse the Internet through Tor without leaving behind any information which could help identify you, because there is much more information than your public IP address that could be used against you.

It requires the tor daemon to be installed and running, as I covered in my first Tor article.

Using it is really straightforward.

How to install tor-browser

$ pkg_add tor-browser

How to start tor-browser

$ tor-browser

It will create a ~/TorBrowser-Data folder at launch. You can remove it whenever you want; it doesn’t contain anything sensitive but is required for the browser to work.

Show OpenSMTPD queue and force sending queued mails

Written by Solène, on 24 October 2018.
Tags: #opensmtpd #highlight #openbsd68 #openbsd

Comments on Mastodon

If you are using opensmtpd on a device not always connected to the internet, you may want to see which mails did not go out, and force them to be delivered NOW when you are finally connected to the Internet.

We can use smtpctl to show the current queue.

$ doas smtpctl show queue
1de69809e7a84423|local|mta|auth|so@tld|dest@tld|dest@tld|1540362112|1540362112|0|2|pending|406|No MX found for domain

The previous command will report nothing if the queue is empty.

In the previous output, we see that there is one mail from me to dest@tld which is pending due to “NO MX found for domain” (which is normal as I had no internet when I sent the mail).

We need to extract the first field, which is 1de69809e7a84423 in the current example.
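
If you want to script this, extracting the first field is a simple cut away (a sketch):

$ doas smtpctl show queue | cut -d '|' -f 1
1de69809e7a84423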

In order to tell opensmtpd to deliver it now, we will use the following command:

$ doas smtpctl schedule 1de69809e7a84423
1 envelope scheduled
$ doas smtpctl show queue

My mail was delivered, it’s not in the queue anymore.

If you wish to deliver all envelopes in the queue, this is as simple as:

$ doas smtpctl schedule all

New cl-yag version

Written by Solène, on 12 October 2018.
Tags: #cl-yag #unix

Comments on Mastodon

My website/gopherhole static generator cl-yag has been updated today and sees its first release!

The new feature added today is that the gopher output now supports an index menu of tags, and a menu for each tag displaying the articles tagged with it. The gopher output was a bit of a second class citizen before this, only listing articles.

New release v1.00 can be downloaded here (sha512 sum 53839dfb52544c3ac0a3ca78d12161fee9bff628036d8e8d3f54c11e479b3a8c5effe17dd3f21cf6ae4249c61bfbc8585b1aa5b928581a6b257b268f66630819). Code can be cloned with git: git://bitreich.org/cl-yag

Tor part 2: hidden service

Written by Solène, on 11 October 2018.
Tags: #openbsd68 #openbsd #unix #tor #security

Comments on Mastodon

In this second Tor article, I will present an interesting Tor feature named hidden services. The principle of a hidden service is to make a network service available from anywhere, with the only prerequisites being that the computer is powered on, Tor is not blocked, and it has network access.

This service will be available through an address which does not disclose anything about the server's internet provider or its IP; instead, a hostname ending in .onion will be provided by tor for connecting. This hidden service will only be accessible through Tor.

There are a few advantages of using hidden services:

  • privacy, hostname doesn’t contain any hint
  • security, secure access to a remote service not using SSL/TLS
  • no need for running some kind of dynamic dns updater

The drawback is that it’s quite slow and it only works for TCP services.

From here, we assume that Tor is installed and working.

Running a hidden service requires modifying the Tor daemon configuration file, located in /etc/tor/torrc on OpenBSD.

Add the following lines in the configuration file to enable a hidden service for SSH:

HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22

The directory /var/tor/ssh_service will be created. The directory /var/tor is owned by user _tor and not readable by other users. The hidden service directory can be named as you want, but it should be owned by user _tor with restricted permissions. The Tor daemon will take care of creating the directory with correct permissions once you reload it.

Now you can reload the tor daemon to make the hidden service available.

$ doas rcctl reload tor

In the /var/tor/ssh_service directory, two files are created. What we want is the content of the file hostname which contains the hostname to reach our hidden service.

$ doas cat /var/tor/ssh_service/hostname
piosdnzecmbijclc.onion

Now, we can use the following command to connect to the hidden service from anywhere.

$ torsocks ssh piosdnzecmbijclc.onion

In the Tor network, this feature doesn’t use an exit node. Hidden services can be used for various services like http, imap, ssh, gopher etc…
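
For example, making a local http server reachable as a hidden service would only be another pair of lines in the configuration file, like this sketch:

HiddenServiceDir /var/tor/http_service
HiddenServicePort 80 127.0.0.1:80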

Using hidden service isn’t illegal nor it makes the computer to relay tor network, as previously, just check if you can use Tor on your network.

Note: it is possible to have a version 3 .onion address which will prevent hostname collisions, but this produces very long hostnames. This can be done as in the following example:

HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
HiddenServiceVersion 3

This will produce a really long hostname like tgoyfyp023zikceql5njds65ryzvwei5xvzyeubu2i6am5r5uzxfscad.onion

If you want to have both the short and long hostnames, you need to specify the hidden service twice, with different folders.

Take care: if you run an ssh service on your public server and use this same ssh daemon for the hidden service, the host keys will be the same, implying that someone could theoretically associate both and know that this public IP runs this hidden service, breaking anonymity.

Tor part 1: how-to use Tor

Written by Solène, on 10 October 2018.
Tags: #openbsd68 #openbsd #unix #tor #security

Comments on Mastodon

Tor is a network service allowing you to hide your traffic. People sniffing your network will not be able to know what server you reach, and people on the remote side (like the administrator of a web service) will not know where you are from. Tor helps keep your anonymity and privacy.

To make it quick, tor makes use of an entry point that you reach directly, then servers acting as relays which are not able to decrypt the data they relay, up to an exit node which will make the real request for you; the network response travels the opposite way.

You can find more details on the Tor project homepage.

Installing tor is really easy on OpenBSD. We need to install it and start its daemon. The daemon will listen by default on localhost on port 9050. On other systems, it should be quite similar: install the tor package and enable the daemon if it is not enabled by default.

# pkg_add tor
# rcctl enable tor
# rcctl start tor

Now, you can use your favorite program: look at the proxy settings and choose “SOCKS” proxy, v5 if possible (it manages the DNS queries), and use the default address 127.0.0.1 with port 9050.
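
For example, with curl, which supports SOCKS natively, it could look like the following command (--socks5-hostname makes the DNS queries go through the proxy too):

$ curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/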

If you need to use tor with a program that doesn’t support setting a SOCKS proxy, it’s still possible to use torsocks to wrap it; that will work with most programs. It is very easy to use.

# pkg_add torsocks
$ torsocks ssh remoteserver

This will make ssh going through tor network.

Using tor won’t make you relaying anything, and is legal in most countries. Tor is like a VPN, some countries has laws about VPN, check for your country laws if you plan to use tor. Also, note that using tor may be forbidden in some networks (companies, schools etc..) because this allows to escape filtering which may be against some kind of “Agreement usage” of the network.

I will cover later the relaying part, which can lead to legal uncertainty.

Note: as torsocks is a bit of a hack, because it uses LD_PRELOAD to wrap network system calls, there is a way to do it more cleanly with ssh (or any program supporting a custom command for initializing the connection) using netcat.

ssh -o ProxyCommand='/usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p' address.onion

This can be simplified by adding the following lines to your ~/.ssh/config file, in order to automatically use the proxy command when you connect to a .onion hostname:

Host *.onion
    ProxyCommand /usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p

This netcat command is tested under OpenBSD; there are different netcat implementations, so the flags may differ or may not even exist.

Create a new OpenBSD partition from unused space

Written by Solène, on 20 September 2018.
Tags: #openbsd68 #openbsd #highlight

Comments on Mastodon

The default OpenBSD partition layout uses a pre-defined template. If you have a disk larger than 356 GB you will have unused space with the default layout (346 GB before 6.4).

It’s possible to create a new partition to use that space if you did not modify the default layout at installation. You only need to start disklabel with the -E flag and type a to add a partition; the default will use all the remaining space for the partition.

# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
> a
partition: [m]
offset: [741349952]
size: [258863586]
FS type: [4.2BSD]
> w
> q
No label changes.

The new partition here is m. We can format it with:

# newfs /dev/rsd0m

Then, you should add it to your /etc/fstab; for that, use the same duid as the other partitions, which would look something like 52fdd1ce48744600

52fdd1ce48744600.m /data ffs rw,nodev,nosuid 1 2
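
If you don't remember the duid, you can read it from the disklabel output:

# disklabel sd0 | grep duid
duid: 52fdd1ce48744600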

It will be automatically mounted at boot; you only need to create the folder /data. Now you can do

# mkdir /data
# mount /data

and /data is usable right now.

You can read disklabel(8) and newfs(8) for more information.

Display the size of installed packages ordered by size

Written by Solène, on 11 September 2018.
Tags: #openbsd68 #openbsd #highlight

Comments on Mastodon

Simple command line to display your installed packages listed by size from smallest to biggest.

$ pkg_info -sa | paste - - - - | sort -n -k 5

Thanks to sthen@ for the command; I was previously using one involving awk which was less readable. paste is often forgotten, it has very specific uses which can’t easily be mimicked with other tools; its purpose is to join multiple lines into one with some specific rules.

You can easily modify the output to convert the size from bytes to megabytes with awk:

$ pkg_info -sa | paste - - - - | sort -n -k 5 | awk '{ $NF=$NF/1024/1024 ; print }'

This divides the last element (using space separator) of each line twice by 1024 and displays the line.

News about the blog

Written by Solène, on 11 September 2018.
Tags: #highlight

Comments on Mastodon

Today I will write about my blog itself. While I started it as my own documentation for some specific things I always forget about (like “How to add a route through a specific interface on FreeBSD”) or to publish my dot files, I enjoyed it and wanted to share about some specific topics.

Then I started the “port of the week” series, but as time goes by, I find fewer of those software and so I don’t always have something to write about. Then, as I run multiple servers, sometimes when I feel that the way I did something is clean and useful, I share it here; it is a reminder for me, and I also write it to be helpful for others.

Doing things right is time consuming, but I always want to deliver a polished write-up. In my opinion, doing things right includes the following:

  • explain why something is needed
  • explain code examples
  • give hints about potential traps
  • where to look for official documentation
  • provide environment information like the operating system version used at the time of writing
  • make the reader think and get inspired instead of providing material ready to be copy/pasted brainlessly

I try to keep as close as possible to those guidelines. I even update my previous articles from time to time to check that they still work on the latest operating system version, so the content stays relevant. And until it is updated, having the system version lets the reader think “oh, it may have changed” (or not, but then it becomes the reader's problem).

Now, I want to share about some OpenBSD specific features, in a way to highlight them. In OpenBSD everything is documented correctly, but as a human, one can’t read and understand every man page to know what is possible. Here come the highlight articles, trying to show features, how to use them and where they are documented.

I hope you, reader, like what I write. I have been writing here for two years now and I still like it.

Manage ”nice” priority of daemons on OpenBSD

Written by Solène, on 11 September 2018.
Tags: #openbsd66 #openbsd #highlight

Comments on Mastodon

Following a discussion on the OpenBSD mailing list misc, today I will write about how to manage the priority (as in nice priority) of your daemons or services.

In man page rc(8), one can read:

Before init(8) starts rc, it sets the process priority, umask, and
resource limits according to the “daemon” login class as described in
login.conf(5).  It then starts rc and attempts to execute the sequence of
commands therein.

Using /etc/login.conf we can manage some limits for services and daemons, using their rc script name.

For example, to run jenkins at the lowest priority (so it doesn’t cause trouble when it builds), the following line will set it to nice 20.

jenkins:priority=20

If you have a file /etc/login.conf.db, you have to update it from /etc/login.conf using the cap_mkdb tool. This creates a hashed database for faster information retrieval when the file is big. By default, that file doesn’t exist and you don’t have to run cap_mkdb. See login.conf(5) for more information.
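
In that case, updating the database is a single command:

# cap_mkdb /etc/login.conf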

Configuration of OpenSMTPD to relay mails to outbound smtp server

Written by Solène, on 06 September 2018.
Tags: #openbsd66 #openbsd #opensmtpd #highlight

Comments on Mastodon

In this article I will show how to configure OpenSMTPD, the default mail server on OpenBSD, to relay mail sent locally to your smtp server. In practice, this allows sending mail through “localhost” via the right relay, and it also makes it possible to queue mail even if your computer isn’t connected to the internet. Once connected, opensmtpd will send the mails.

All you need to understand the configuration and write your own is in the man page smtpd.conf(5). This is only a highlight of what is possible and how to achieve it.

In the OpenBSD 6.4 release, the configuration of opensmtpd changed drastically: now you have to define matching rules and the actions to take when a mail matches, and you have to define those actions.

In the following example, we will see two kinds of relay. The first is through smtp over the Internet; it’s the one you will most likely want to set up. The other one shows how to relay to a remote server which does not allow relaying from outside.

/etc/mail/smtpd.conf

table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets
listen on lo0

action "local" mbox alias <aliases>
action "relay" relay
action "myserver" relay host smtps://myrelay@perso.pw auth <secrets>
action "openbsd"  relay host localhost:2525

match mail-from "@perso.pw"    for any action "myserver"
match mail-from "@openbsd.org" for any action "openbsd"
match for local action "local"
match for any action "relay"

I defined 2 relay actions. The first one, “myserver”, uses the label “myrelay” and auth <secrets> to tell opensmtpd it needs authentication.

The other action is “openbsd”, it will only relay to localhost on port 2525.

To use them, I define 2 matching rules of the very same kind. If the mail I want to send matches the @domain-name, then the relay “myserver” or “openbsd” is chosen.

The “openbsd” relay is only available when I create a SSH tunnel, binding the remote server's port 25 to my local port 2525, with the flags -L 2525:127.0.0.1:25.
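
The tunnel itself can be opened with something like this (adapt the user and host names):

$ ssh -N -L 2525:127.0.0.1:25 user@remote-server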

For a relay using authentication, the login and passwords must be defined in the file /etc/mail/secrets like this: myrelay login:Pa$$W0rd

smtpd.conf(5) explains creation of /etc/mail/secrets like this:

touch /etc/mail/secrets
chmod 640 /etc/mail/secrets
chown root:_smtpd /etc/mail/secrets

Now, restart your smtpd server. Then, if you need to send mails, just use the “mail” command or localhost as the smtp server. Depending on your From address, a different relay will be used.

Deliveries can be checked in /var/log/maillog log file.

See mails in queue

doas smtpctl show queue

Try to deliver now

doas smtpctl schedule all

Automatic switch wifi/ethernet on OpenBSD

Written by Solène, on 30 August 2018.
Tags: #openbsd66 #openbsd #network #highlight

Comments on Mastodon

Today I will cover a specific topic on OpenBSD networking. If you are using a laptop, you may switch from ethernet to wireless network from time to time. There is a simple way to keep the network up instead of having to disconnect / reconnect every time.

It’s possible to aggregate your wireless and ethernet devices into one trunk pseudo device in failover mode, which gives ethernet the priority when connected.

Achieving this is quite simple. If you have the devices em0 and iwm0, create the following files.

/etc/hostname.em0

up

/etc/hostname.iwm0

join "office_network"  wpakey "mypassword"
join "my_home_network" wpakey "9charshere"
join "roaming phone"   wpakey "something"
join "Public Wifi"
up

/etc/hostname.trunk0

trunkproto failover trunkport em0 trunkport iwm0
dhcp

As you can see in the wireless device configuration, we can specify multiple networks to join; this is a new feature available starting from the 6.4 release.

You can enable the new configuration by running sh /etc/netstart as root.
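
You can then check the state of the trunk with ifconfig; the output should look roughly like this (addresses are made up):

$ ifconfig trunk0
trunk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        trunk: trunkproto failover
                trunkport iwm0
                trunkport em0 master,active
        inet 192.168.1.20 netmask 0xffffff00 broadcast 192.168.1.255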

This setup is explained in trunk(4) man page and in the OpenBSD FAQ as well.

Presenting drist at BitreichCON 2018

Written by Solène, on 21 August 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Still about bitreich conference 2018: I presented drist, a utility for server deployment (like salt/puppet/ansible…) that I wrote.

drist makes deployments easy to understand and easy to extend. Basically, it has 3 steps:

  1. copying a local file tree on the remote server (for deploying files)
  2. delete files on the remote server if they are present in a local tree
  3. execute a script on the remote server

Each step is run if the according file/folder exists, and for each step, it’s possible to have a general / per-host setup.

How to fetch drist

git clone git://bitreich.org/drist

It was my very first talk in English, please be indulgent.

Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

Bitreich community is reachable on gopher at gopher://bitreich.org

Presenting Reed-alert at BitreichCON 2018

Written by Solène, on 20 August 2018.
Tags: #unix

Comments on Mastodon

As the author of the reed-alert monitoring tool, I spoke about my software at the bitreich conference 2018.

For the quick intro, reed-alert is a software to get notified when something is wrong on your server; it’s fully customizable and really easy to use.

How to fetch reed-alert

git clone git://bitreich.org/reed-alert

It was my very first talk in English, please be indulgent.

Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

Bitreich community is reachable on gopher at gopher://bitreich.org

Tmux mastery

Written by Solène, on 05 July 2018.
Tags: #unix #shell

Comments on Mastodon

Tips for using Tmux more efficiently

Enter in copy mode

By default Tmux uses the emacs key-bindings; to make a selection you need to enter copy-mode by pressing Ctrl+b and then [, with Ctrl+b being the tmux prefix key. If you changed it, do the replacement while reading.

If you need to quit the copy-mode, type Ctrl+C.

Make a selection

While in copy-mode, move to the start or end position of your selection and then press Ctrl+Space to start selecting. Now, move your cursor to select the text and press Ctrl+w to validate.

Paste a selection

When you want to paste your selection, press Ctrl+b ] (you should not be in copy-mode for this!).

Make a rectangle selection

If you want to make a rectangular selection, press Ctrl+Space to start and immediately press R (capital R), then move your cursor and validate with Ctrl+w.

Output the buffer to X buffer

Make a selection to put the content in the tmux buffer, then type

tmux save-buffer - | xclip

You may want to look at xclip (it’s a package) man page.

Output the buffer to a file

tmux save-buffer file

Load a file into buffer

It’s possible to load the content of a file inside the buffer for pasting it somewhere.

tmux load-buffer file

You can also load into the buffer the output of a command, using a pipe and - as a file like in this example:

echo 'something very interesting' | tmux load-buffer -

Display the battery percentage in the status bar

If you want to display your battery percentage and update it every 40 seconds, you can add two following lines in ~/.tmux.conf:

set -g status-interval 40
set -g status-right "#[fg=colour155]#(apm -l)%% | #[fg=colour45]%d %b %R"

This example works on OpenBSD using the apm command. You can reuse this example to display other information.

Writing an article using mdoc format

Written by Solène, on 03 July 2018.
Tags: #unix

Comments on Mastodon

I never wrote a man page. I had already read the source of a man page, but I barely understood what was happening there. And I like having fun and discovering new things (people have been calling me a hipster these last days ;-) ).

I modified cl-yag (the website generator used for this website) so that an article can be produced from an mdoc file. The output was not very nice as it had too many html items (classes, attributes, tags etc…). The result wasn’t that bad, but it looked like concatenated man pages.

I actually enjoyed playing with the mdoc format (the man page format on OpenBSD; I don’t know if it’s used somewhere else). While it’s pretty verbose, it allows separating the formatting from the paragraphs. As I have been playing with the ed editor these last days, it is easier to have an article written with small pieces of lines rather than a big paragraph including the formatting.

Finally I succeeded at writing a command line producing usable html output to use as a converter in cl-yag. Now, I’ll be able to write my articles in the mdoc format if I want :D (which is fun). The convert command is really ugly, but it actually works, as you can see if you read this.

cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT

The trick here was to use markdown as an intermediate format between mdoc and html. As markdown is very weak compared to html (in possibilities), it will only use simple tags for formatting the html output. The sed command is needed to delete from the mandoc output the man page title at the top and the operating system at the bottom.

By having played with this, writing a man page is less obscure to me and I have a new unusual format to use for writing my articles. Maybe unusual for this use case, but still very powerful!

Trying to move away from emacs

Written by Solène, on 03 July 2018.
Tags: #unix #emacs

Comments on Mastodon

Hello

Today I will write about my current process of trying to get rid of emacs. I use it extensively with org-mode for taking notes and making them into an agenda/todo-list; this helped me a lot to remember tasks to do and what people told me. I also use it for editing of course, any kind of text or source code. This is usually the editor I use for writing the blog articles that you can read here. This one is written using ed. I also read my emails in emacs with mu4e (whose latest version doesn’t work anymore on powerpc due to a C++14 feature used and no compiler available on powerpc to compile it…).

While I like Emacs, I never liked using one big tool for everything. My current quest is to look for a portable and efficient way to replace the different emacs parts. I will not stop using Emacs if the replacements are not good enough to do the job.

So, I identified my Emacs uses:

  • todo-list / agenda / taking notes
  • writing code (perl, C, php, Common LISP)
  • IRC
  • mails
  • writing texts
  • playing chess by mail
  • jabber client

I will try for each topic to identify alternatives and challenge them to Emacs.

Todo-list / Agenda / Notes taking

This is the most important part of my emacs use and it is the one I would really like to get out of Emacs. What I need is: quickly write a task, add a deadline to it, add explanations or a description, be able to add sub-tasks for a task, and be able to display everything correctly (like in order of deadline with days / hours before the deadline).

I am trying to convert my current todo-list to taskwarrior; the learning curve is not easy, but after spending one hour playing with it while reading the man page, I have understood enough to replace org-mode with it. I do not know if it will be as good as org-mode, but only time will tell.

By the way, I found vit, a ncurses front-end for taskwarrior.

Writing code

Actually Emacs is a good editor. It supports syntax coloring, can evaluate regions of code (depending on the language), the editing is nice etc… I discovered jed, an emacs-like editor written in C using libslang; it’s stable and light while providing more features than the mg editor (available in the OpenBSD base installation).

While I am currently playing with ed for some reasons (I will certainly write about it), I am not sure I could use it for writing software from scratch.

IRC

There are lots of different IRC clients around; I just need to pick one.

Mails

I really enjoy using mu4e; I can find my mails easily with it, and the query system is very powerful and interesting. I don’t know what I could use to replace it. I used alpine some time ago, and I tried mutt before mu4e and did not like it. I have heard about some tools to manage a maildir folder using unix commands; maybe I should try one of those. I have not done any research on this topic yet.

Writing text

For writing plain text like my articles, or when using $EDITOR for different tasks, I think that ed will do the job perfectly :-) There is ONE feature I really like in Emacs, but I think it’s really easy to recreate with a script: the function bound to M-q which wraps text to the correct column number!

Update: meanwhile I wrote a little perl script using the Text::Wrap module available in base Perl. It wraps to 70 columns. It could be extended to fill blanks or add a character at the first line of a paragraph.

#!/usr/bin/env perl
use strict;
use warnings;
use Text::Wrap qw(wrap $columns);

# wrap the file given as first argument to 70 columns
open(my $in, '<', $ARGV[0]) or die "cannot open $ARGV[0]: $!";
$columns = 70;
my @file = <$in>;
close($in);
print wrap("", "", @file);

This script does not modify the file itself though.

Some people pointed out that Perl was too much for this task, and suggested Groff or Par to format my files.

Finally, I found a very BARE way to handle this. As I write my text with ed, I added a new alias named “ruled” which spawns ed with a prompt made of 70 # characters, so I get a ruler each time ed displays its prompt!!! :D
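
Such an alias could be defined like this (a sketch: ed’s -p flag sets the prompt string, and printf + tr generate the 70 # characters):

alias ruled='ed -p "$(printf "%70s" "" | tr " " "#")"'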

It looks like this for the last paragraph:

###################################################################### c
been told about Groff or Par to format my files.

Finally, I found a very **BARE** way to handle this. As I write my
text with ed, I added an new alias named "ruled" with spawn ed with a
prompt of 70 characters #, so I have a rule each time ed displays its
prompt!!! :D
.
###################################################################### w

Obviously, this way of proceeding only works when writing the content the first time. If I need to edit a paragraph, I will need a tool to correctly format my document again.

Jabber client

Using jabber inside Emacs is not a very good experience. I switched to profanity (featured some time ago on this blog).

Playing Chess

Well, I stopped playing chess by mail; I have been waiting two years now for my opponent to play his turn. We were exchanging the notation of the whole game in each mail, each of us adding our move, and I was doing the rendering in Emacs. I do not remember exactly why, but I had problems with this (replaying the string).

Easy encrypted backups on OpenBSD with base tools

Written by Solène, on 26 June 2018.
Tags: #unix #openbsd66 #openbsd

Comments on Mastodon

Old article

Hello, it turned out that this article is obsolete. The security it relies on is not safe at all, so the goal of this backup system isn’t achievable, thus it should not be used and I need another backup system.

One of the most important features of dump for me was keeping track of the inode numbers. A solution is to save the list of the inode numbers and their paths in a file before doing a backup. This can be achieved with the following command.

$ doas ncheck -f "\I \P\n" /var

If you need a backup tool, I would recommend the following:

Duplicity

It supports remote backends like ftp/sftp, which is quite convenient as you don’t need any configuration on the other side. It supports compression and incremental backups. I think it has some GUI tools available.
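
Basic usage could look like this (a sketch; "backupserver" and the paths are made up for the example):

$ duplicity /home/solene sftp://user@backupserver/backups/laptop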

Restic

It supports remote backends like cloud storage providers or sftp, and it doesn’t require any special tool on the remote side. It supports deduplication of the files and is able to manage multiple hosts in the same repository: if you back up multiple computers, the deduplication will work across them. This is the only backup software I know allowing this (I do not count backuppc, which I find really unusable).
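
Here is a sketch of the basic workflow, again with a hypothetical "backupserver" host:

$ restic -r sftp:user@backupserver:/backups init
$ restic -r sftp:user@backupserver:/backups backup /home/solene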

Borg

It supports a remote backend over ssh, but only if borg is installed on the other side. It supports compression and deduplication, but it is not possible to save multiple hosts inside the same repository without doing a lot of hacks (which I won’t recommend).
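
And a sketch for borg, still with a made-up "backupserver" host:

$ borg init --encryption=repokey user@backupserver:/backups/laptop
$ borg create user@backupserver:/backups/laptop::monday /home/solene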

Change default application for xdg-open

Written by Solène, on 25 June 2018.
Tags: #unix

Comments on Mastodon

I write this as a note for myself, and if it can help some other people, that’s fine.

To change the program used by xdg-open for opening some kind of file, it’s not that hard.

First, check the type of the file:

$ xdg-mime query filetype file.pdf
application/pdf

Then, choose the right tool for handling this type:

$ xdg-mime default mupdf.desktop application/pdf
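
You can then verify the change by querying the default application for this type:

$ xdg-mime query default application/pdf
mupdf.desktop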

Honestly, having firefox open PDF files with GIMP IS NOT FUN.

Share a tmux session with someone with tmate

Written by Solène, on 01 June 2018.
Tags: #unix

Comments on Mastodon

New port of the week, and it’s about tmate.

If you ever wanted to share a terminal with someone without opening a remote access to your computer, tmate is the right tool for this.

Once started, tmate will create a new tmux instance connected through the tmate public server. By typing tmate show-messages you will get read-only and read-write URLs to share with someone, for access over ssh or a web browser. Don’t forget to type clear to hide the URLs after typing show-messages, otherwise people viewing the session would see the write URL (and that’s not something you want).

If you don’t like relying on a third party, you can set up your own server, but we won’t cover this in this article.

When you want to end the share, you just need to exit the tmux opened by tmate.

If you want to install it on OpenBSD, just type pkg_add tmate and you are done. I think it’s available on most unix systems.

There is not much more to say about it: it’s great, simple, and works out-of-the-box with no configuration needed.

Deploying cron programmatically the unix way

Written by Solène, on 31 May 2018.
Tags: #unix

Comments on Mastodon

Here is a little script to automate your crontab deployment when you don’t want to use a configuration tool like ansible/salt/puppet etc… It lets you ship, inside your project, a file containing the crontab content you need, and it will add or update your crontab with that file.

The script works this way:

$ ./install_cron crontab_solene

with crontab_solene being a valid crontab file, which could look like this:

## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##

The script will then include the file into the current user’s crontab; the TAG markers in the file make it possible to remove the block and replace it later with a new version. The script could easily be modified to take the tag name as a parameter, if you have multiple deployments using the same user on the same machine.

Example:

$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
$ ./install_cron crontab_solene
$ crontab -l 
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##

If I add the line 0 20 * * * ~/bin/faubackup.sh to crontab_solene, I can now reinstall the crontab file.

$ crontab -l 
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##
$ ./install_cron crontab_solene
$ crontab -l 
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * *  ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
0 20 * * * ~/bin/faubackup.sh
## END_TAG ##

Here is the script:

#!/bin/sh

if [ -z "$1" ]; then
    echo "Usage: $0 user_crontab_file"
    exit 1
fi

# make sure the file contains both markers before touching the crontab
VALIDATION=0
grep "^## TAG ##$" "$1" >/dev/null
VALIDATION=$?
grep "^## END_TAG ##$" "$1" >/dev/null
VALIDATION=$(( VALIDATION + $? ))

if [ "$VALIDATION" -ne 0 ]
then
    echo "file ./${1} needs \"## TAG ##\" and \"## END_TAG ##\" to be used"
    exit 2
fi

# remove the previously deployed block from the current crontab,
# append the new file and reload the result
crontab -l | \
    awk '{ if($0=="## TAG ##") { hide=1 };  if(hide==0) { print } ; if($0=="## END_TAG ##") { hide=0 }; }' | \
    cat - "${1}" | \
    crontab -

Mount a folder on another folder

Written by Solène, on 22 May 2018.
Tags: #openbsd66 #openbsd

Comments on Mastodon

This article will explain quickly how to bind a folder to access it from another path. It can be useful to give access to a specific folder from a chroot without moving or duplicating the data into the chroot.

Real world example: “I want to be able to access my 100GB folder /home/my_data/ from my httpd web server chrooted in /var/www/”.

The trick on OpenBSD is to use NFS on localhost. It’s pretty simple.

# rcctl enable portmap nfsd mountd
# echo "/home/my_data -network=127.0.0.1 -mask=255.255.255.255" > /etc/exports
# rcctl start portmap nfsd mountd

The order is really important. You can check that the folder is available through NFS with the following command:

$ showmount -e
Exports list on localhost:
/home/my_data               127.0.0.1

If you don’t have any line after “Exports list on localhost:”, you should kill mountd with pkill -9 mountd and start mountd again. I experienced this twice when starting all the daemons from the same command, but I’m not able to reproduce it. By the way, mountd only supports reload.

If you modify /etc/exports, you only need to reload mountd using rcctl reload mountd.

Once you have checked that everything is alright, you can mount the exported folder on another folder with the command:

# mount localhost:/home/my_data /var/www/htdocs/my_data

You can add the -ro option to the export line in the /etc/exports file if you want the folder to be read-only where you mount it.
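
Following exports(5), the export line would then look like this:

/home/my_data -ro -network=127.0.0.1 -mask=255.255.255.255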

Note: On FreeBSD/DragonflyBSD, you can use mount_nullfs /from /to, there is no need to set up a local NFS server. And on Linux you can use mount --bind /from /to and some other ways that I won’t cover here.

Faster SSH with multiplexing

Written by Solène, on 22 May 2018.
Tags: #unix #ssh

Comments on Mastodon

I discovered today an OpenSSH feature which doesn’t seem to be widely known. The feature is called multiplexing and consists of reusing an opened ssh connection to a server when you want to open another one. This leads to faster connection establishment and fewer processes running.

To reuse an opened connection, we need to use the ControlMaster option, which requires ControlPath to be set. We will also set ControlPersist for convenience.

  • ControlMaster defines whether we create a multiplexer, use an existing one, or do nothing about multiplexing
  • ControlPath defines where to store the socket used to reuse an opened connection; this should be a path only available to your user.
  • ControlPersist defines how long to wait before closing an ssh connection multiplexer after all the connections using it are closed. By default it’s “no”, and once you drop all connections the multiplexer stops.

I chose to use the following parameters in my ~/.ssh/config file:

Host *
ControlMaster auto
ControlPath ~/.ssh/sessions/%h%p%r.sock
ControlPersist 60

This requires the ~/.ssh/sessions/ folder to be restricted to my user only. You can create it with the following command:

install -d -m 700 ~/.ssh/sessions

(you can also do mkdir ~/.ssh/sessions && chmod 700 ~/.ssh/sessions but this requires two commands)

The ControlPath variable will create sockets named “${hostname}${port}${user}.sock”, so each one will be unique per remote server.

Finally, I chose to set ControlPersist to 60 seconds, so if I log out from a remote server, I still have 60 seconds to reconnect to it instantly.

Don’t forget that if for some reason the ssh channel handling the multiplexing dies, all the ssh connections using it will die with it.
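
ssh provides the -O flag (see ssh(1)) to interact with the multiplexer, which is handy to check its state or to close it manually:

$ ssh -O check remoteserver
$ ssh -O exit remoteserver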

Benefits with ProxyJump

Another very useful ssh feature is ProxyJump, which gives access to ssh hosts that are not directly reachable from your current place, like servers with no public ssh access. For my job, I have a lot of servers not facing the internet, and I can still connect to them using one of my public facing servers, which relays my ssh connection to the destination. With the ControlMaster feature, the ssh relay server doesn’t have to handle a lot of connections anymore, but only one.

In my ~/.ssh/config file:

Host *.private.lan
ProxyJump public-server.com

Those two lines allow me to connect to every server with a .private.lan domain (known by my local DNS server) by typing ssh some-machine.private.lan. This establishes a connection to public-server.com, which then connects to the next server.

Sending mail with mu4e

Written by Solène, on 22 May 2018.
Tags: #unix #emacs

Comments on Mastodon

In my article about mu4e I said that I would write about sending mails with it. This will be the topic covered in this article.

There are a lot of ways to send mails, with a lot of different use cases. I will only cover a few of them; the documentation of mu4e and emacs are both very good, so I will only give hints about some interesting setups.

I would like to thank Raphael, who made me curious about different ways of sending mails from mu4e and who pointed out some mu4e features I wasn’t aware of.

Send mails through your local server

The easiest way is to send mails through your local mail server (which should be OpenSMTPD by default if you are running OpenBSD). This only requires the following line in your ~/.emacs file to work:

(setq message-send-mail-function 'sendmail-send-it)

Basically, the mail will only be accepted by the recipient if your local mail server is well configured, which is not the case for most servers. This requires a correctly configured reverse DNS entry (assuming a static IP address), an SPF record in your DNS and DKIM signing for outgoing mail. This is the minimum to be accepted by other SMTP servers. Usually people send mails from their personal computer and not from the mail server.

Configure OpenSMTPD to relay to another smtp server

We can bypass this problem by configuring our local SMTP server to relay our mails sent locally to another SMTP server using credentials for authentication.

This is pretty easy to set up, using the following /etc/mail/smtpd.conf configuration; just replace remoteserver with your server.

table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets

listen on lo0

accept for local alias <aliases> deliver to mbox
accept for any relay via secure+auth://label@remoteserver:465 auth <secrets>

You will have to create the file /etc/mail/secrets and add your credentials for authentication on the SMTP server.

From smtpd.conf(5) man page, as root:

# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "label username:password" > /etc/mail/secrets

Then, all mail sent from your computer will be relayed through your mail server. With ’sendmail-send-it, emacs will deliver the mail to your local server, which will relay it to the outgoing SMTP server.

SMTP through SSH

One setup I like and use is to relay the mails directly to the outgoing SMTP server; this requires no authentication except SSH access to the remote server.

It requires the following emacs configuration in ~/.emacs:

(setq
  message-send-mail-function 'smtpmail-send-it
  smtpmail-smtp-server "localhost"
  smtpmail-smtp-service 2525)

The configuration tells emacs to connect to the SMTP server on localhost port 2525 to send the mails. Of course, no mail daemon runs on this port on the local machine; the following ssh command is required for sending mails to work.

$ ssh -N -L 127.0.0.1:2525:127.0.0.1:25 remoteserver

This binds port 25 of the remote server’s 127.0.0.1 to port 2525 of your computer’s 127.0.0.1.

Your mail server should accept deliveries from local users of course.

SMTP authentication from emacs

It’s also possible to send mails using regular SMTP authentication directly from emacs. It is boring to set up: it requires putting credentials into a file named ~/.authinfo, which can be encrypted using GPG, but that then requires a wrapper to load it. It also requires setting up the SMTP authentication correctly. There are plenty of examples of this on the Internet, I don’t want to cover it.

Queuing mails to send them later

Mu4e supports a very nice feature: mail queueing from the smtpmail emacs client. Enabling it requires two easy steps:

In ~/.emacs:

(setq
  smtpmail-queue-mail t
  smtpmail-queue-dir "~/Mail/queue/cur")

In your shell:

$ mu mkdir ~/Mail/queue
$ touch ~/Mail/queue/.noindex

Then mu4e will be aware of the queueing; on the home screen of mu4e, you will be able to switch between queuing and direct sending by pressing m, and flush the queue by pressing f.

Note: there is a bug (not sure it’s really a bug): when sending a mail to the queue, if your mail contains special characters, you will be asked to send it raw or to add a header containing the encoding.

Autoscrolling text for lazy reading

Written by Solène, on 17 May 2018.
Tags: #unix

Comments on Mastodon

Today I found a software named Lazyread which can display a file with autoscrolling at a chosen speed. I had to read its source code to make it work: the documentation isn’t very helpful, it doesn’t read ebooks (as in epub or mobi format) and doesn’t support stdin… This software requires some C code plus a shell wrapper to work; that’s complicated for something that only scrolls.

So, after thinking a few minutes, the autoscroll can be reproduced easily with a very simple awk command. Of course, it will not have interactive keys like lazyread to increase/decrease the speed, or some other options, but the most important part is there: autoscrolling.

If you want to read a file at a rate of 1 line per 700 milliseconds, just type the following command:

$ awk '{system("sleep 0.7");print}' file

If you want to read an HTML file (a documentation file on disk or from the web), you can use lynx or w3m to convert the HTML on the fly to readable text and pass it to awk on stdin.

$ w3m -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ lynx -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ w3m -dump https://dataswamp.org/~solene/ | awk '{system("sleep 0.7");print}'

Maybe you want to read a man page?

$ man awk | awk '{system("sleep 0.7");print}'

If you want to pause the reading, you can use the true unix way: Ctrl+Z sends a signal which stops the command and leaves it paused in background. You can resume the reading by typing fg.

One could easily write a little script parsing parameters to set the speed, or to handle files or URLs with the correct command.
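
A minimal sketch of such a script could look like this (the name "autoscroll" and the usage are made up for the example):

#!/bin/sh
# usage: autoscroll speed file-or-url
SPEED="$1"
TARGET="$2"
case "$TARGET" in
    # convert HTML files and web pages to plain text first
    http://*|https://*|*.html)
        w3m -dump "$TARGET" | awk -v s="$SPEED" '{system("sleep " s);print}' ;;
    *)
        awk -v s="$SPEED" '{system("sleep " s);print}' "$TARGET" ;;
esac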

Notes: if for some reason you try to use lazyread, fix the shebang in the file lesspipe.sh, and you will need to call the lazyread binary with the environment variable LESSOPEN="|./lesspipe.sh %s" (adjust the path of the script if needed). Without this variable, you will get the very helpful error “file not found”.

Port of the week: Sent

Written by Solène, on 15 May 2018.
Tags: #unix

Comments on Mastodon

As the new port of the week, we will discover Sent. While one could think it is mail related, it is not. Sent is a nice software to make presentations from a simple text file. It has been developed by Suckless, a hacker community enjoying writing good software while keeping the source code small and sane; they also made software like st, dwm, slock, surf…

Sent is about simplicity. I will reuse a part of the example file which is also the documentation of the tool.

usage:
$ sent FILE1 [FILE2 …]

▸ one slide per paragraph
▸ lines starting with # are ignored
▸ image slide: paragraph containing @FILENAME
▸ empty slide: just use a \ as a paragraph

@nyan.png
this text will not be displayed, since the @ at the start of the first line
makes this paragraph an image slide.

The previous text, saved into a file and used with sent, will open a fullscreen window containing three “slides”. Each slide resizes the text to maximize the display usage, which means the font size changes on each slide.

It is really easy to use. To display the next slide, you have the choice between pressing space, right arrow, return, or clicking any button. Pressing left arrow goes back.

If you want to install it on OpenBSD: pkg_add sent, the package comes from the port misc/sent.

Be careful: Sent does not produce any file, you will need the text file (and sent) for the presentation!

Suckless sent website

Use ramdisk on /tmp on OpenBSD

Written by Solène, on 08 May 2018.
Tags: #openbsd66 #openbsd

Comments on Mastodon

If you have enough memory on your system and can afford to use a few hundred megabytes to store temporary files, you may want to mount a mfs filesystem on /tmp. That will save wear on your SSD, and if you use an old hard drive or a memory stick, that will reduce your disk load and improve performance. You may also want to mount a ramdisk on other mount points like ~/.cache/ or a database for some reason, but I will just explain how to achieve this for /tmp, which is a very common use case.

First, you may have heard about tmpfs, but it was disabled in OpenBSD years ago because it wasn’t stable enough and nobody fixed it. Instead, OpenBSD has a special filesystem named mfs, which is a FFS filesystem on a reserved memory space. When you mount a mfs filesystem, the size of the partition is reserved and can’t be used for anything else (tmpfs, like its Linux namesake, doesn’t reserve the memory).

Add the following line in /etc/fstab (following fstab(5)):

swap /tmp mfs rw,nodev,nosuid,-s=300m 0 0

The permissions of the mountpoint /tmp should be fixed before mounting it, meaning that the /tmp folder on the / partition should be changed to 1777:

# umount /tmp
# chmod 1777 /tmp
# mount /tmp

This is required because mount_mfs inherits permissions from the mountpoint.

Mounting remote samba share through SSH tunnel

Written by Solène, on 04 May 2018.
Tags: #unix

Comments on Mastodon

If for some reason you need to access a Samba share outside of the network, it is possible to access it through ssh and mount the share on your local computer.

Using the ssh command as root is required because you will bind local port 139, which as a port below 1024 is reserved for root:

# ssh -L 139:127.0.0.1:139 user@remote-server -N

Then you can mount the share as usual but using localhost instead of remote-server.

Example of a mount element for usmb

<mount id="public" credentials="me">
   <server>127.0.0.1</server>
   <!--server>192.168.12.4</server-->
   <share>public</share>
   <mountpoint>/mnt/share</mountpoint>
   <options>allow_other,uid=1000</options>
</mount>

As a reminder, <!--tag>foobar</tag--> is a XML comment.

Extract files from winmail.dat

Written by Solène, on 02 May 2018.
Tags: #unix #email

Comments on Mastodon

If you ever receive a mail with an attachment named “winmail.dat”, you may be disappointed. It is a special format used by Microsoft Exchange: it contains the files attached to the mail and needs some software to extract them.

Hopefully, there is a little and efficient utility named “tnef” to extract the files.

Install it: pkg_add tnef

List files: tnef -t winmail.dat

Extract files: tnef winmail.dat

That’s all !

Port of the week: ledger

Written by Solène, on 02 May 2018.
Tags: #unix

Comments on Mastodon

In this post I will do a short presentation of the port productivity/ledger, a very powerful command line accounting software using plain text as its back-end. Writing about it is not an easy task, so I will use a real life workflow of mine as material, even if my use of it is special.

As I said before, Ledger is very powerful. It can help you manage your bank accounts, bills, rents, shares and other things. It uses a double entry system, which means that each time you add an operation (withdrawal, paycheck, …), the entry will also have to contain the state of the account after the operation. This is checked by ledger, which recalculates every operation made since the account was initialized with a starting amount. Ledger can also track the categories where you spend money, or statistics about your payment methods (check, credit card, bank transfer, cash…).

As I am not a native English speaker and I don’t work in banking, I am not very familiar with accounting vocabulary in English, which makes it very hard for me to understand all the ledger keywords. But I found a special use case, accounting things instead of money, which is really practical.

My special use case is that I work from home for a company in a remote location. From time to time, I take the train to the office; the full trip is

[home]   → [underground A] → [train] → [underground B] → [office]
[office] → [underground B] → [train] → [underground A] → [home]

It means I need to buy tickets for both the underground A and underground B systems, and I want to track the tickets I use for going to work. I buy tickets 10 by 10, but sometimes I use one for personal purposes or give one to someone. So I need to keep track of my tickets to know when I can send a bill to my work to get refunded.

Practical example: I buy 10 tickets of A, and I use 2 tickets on day 1. On day 2, I give 1 ticket to someone and use 2 tickets for personal purposes. It means I still have 5 tickets in my bag but, from my work office point of view, I should still have 8 tickets. This is what I am tracking with ledger.

2018/02/01 * tickets stock Initialization + go to work
    Tickets:inv                                   10 City_A
    Tickets:inv                                   10 City_B
    Tickets:inv                                   -2 City_A
    Tickets:inv                                   -2 City_B
    Tickets

2018/02/08 * Work
    Tickets:inv                                    -2 City_A
    Tickets:inv                                    -2 City_B
    Tickets

2018/02/15 * Work + Datacenter access through underground
    Tickets:inv                                    -4 City_B
    Tickets:inv                                    -2 City_A
    Tickets

At this point, running ledger -f tickets.dat balance Tickets shows my remaining tickets:

4 City_A
2 City_B  Tickets:inv

I will add another entry, which requires me to buy tickets:

2018/02/22 * Work + Datacenter access through underground
    Tickets:inv                                    -4 City_B
    Tickets:inv                                    -2 City_A
    Tickets:inv                                    10 City_B
    Tickets

Now, running ledger -f tickets.dat balance Tickets shows my remaining tickets:

2 City_A
8 City_B  Tickets:inv

I hope that the example was clear enough and interesting. There is a big tutorial document available on the ledger homepage; I recommend reading it before using ledger, it contains real world examples with accounting. Homepage link

Port of the week: dnstop

Written by Solène, on 18 April 2018.
Tags: #unix

Comments on Mastodon

Dnstop is an interactive console application to watch, in realtime, the DNS queries going through a network interface. It currently only supports UDP DNS requests; the man page says that TCP isn’t supported.

It has a lot of parameters and keybindings for interactive use.

To install it on OpenBSD: doas pkg_add dnstop

We will start dnstop on the wifi interface using a depth of 4 for the domain names: as root, type dnstop -l 4 iwm0 and then press ‘3’ to display up to 3 sublevels. The -l 4 parameter means we want to track domains up to a depth of 4: if a request for the domain my.very.little.fqdn.com happens, it will be truncated to very.little.fqdn.com. If you press ‘2’ in the interactive display, that name will instead be counted in the fqdn.com line.

Example of output:

Queries: 0 new, 6 total                           Tue Apr 17 07:17:25 2018

Query Name          Count      %   cum%
--------------- --------- ------ ------
perso.pw                3   50.0   50.0
foo.bar                 1   16.7   66.7
hello.mydns.com         1   16.7   83.3
mydns.com.lan           1   16.7  100.0

If you want to use it, read the man page first; it has a lot of parameters and can filter using specific expressions.

How to read an epub book in a terminal

Written by Solène, on 17 April 2018.
Tags: #unix

Comments on Mastodon

If you ever had to read an ebook in the epub format, you may have found yourself stumbling on the Calibre software. Personally, I don’t enjoy reading a book in Calibre at all. Choice is important, and it seems that Calibre is the only choice for this task.

But, as the epub format is very simple, it’s possible to easily read it with any web browser even w3m or lynx.

An epub file is a zip archive containing mostly xhtml, css and image files, so with a few commands you can easily find xhtml files that can be opened with a web browser. The xhtml files have links to the CSS and images contained in the other unzipped folders.

In the following commands, I prefer to copy the file into a new directory, because unzipping it will create folders in your current working directory.

$ mkdir /tmp/myebook/
$ cd /tmp/myebook
$ cp ~/book.epub .
$ unzip book.epub
$ cd OPS/xhtml
$ ls *xhtml

I tried with different epub files; in most cases you should find a lot of files named chapters-XX.xhtml with XX being 01, 02, 03 and so forth. Just open the files in the correct order with a web browser, aka an “html viewer”.
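
If the book follows this naming scheme, the shell can even walk through the chapters for you (quitting w3m opens the next chapter):

$ for f in chapters-*.xhtml; do w3m "$f"; done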

Port of the week: tig

Written by Solène, on 10 April 2018.
Tags: #unix #git

Comments on Mastodon

Today we will discover the software named tig whose name stands for Text-mode Interface for Git.

To install it on OpenBSD: pkg_add tig

Tig is a light and easy to use terminal application to browse a git repository interactively. To use it, just ‘cd’ into a git repository on your filesystem and type tig. You will get the list of all the commits, with their author and date. By pressing the “Enter” key on a commit, you will get the diff. Tig also displays branching and merging in a graphical way.

Tig has some parameters; one I like a lot is blame, which is used like this: tig blame afile. Tig will show the file content and display, for each line, the date of the last commit, its author and the short identifier of the commit. With this function, it gets really easy to find out who modified a line or when it was modified.

Tig has a lot of other possibilities, you can discover them in its man pages.

Unofficial OpenBSD FAQ

Written by Solène, on 16 March 2018.
Tags: #openbsd66 #openbsd

Comments on Mastodon

Frequently asked questions (with answers) on #openbsd IRC channel

Please read the official OpenBSD FAQ

I am writing this to answer questions asked too many times. If some answers get good enough, maybe we could try to merge it in the OpenBSD FAQ if the topic isn’t covered. If the topic is covered, then a link to the official FAQ should be used.

If you want to participate, you can fetch the page using gopher protocol and send me a diff:

$ printf '/~solene/article-openbsd-faq.txt\r\n' | nc dataswamp.org 70 > faq.md

OpenBSD features / not features

Here is a list for newcomers telling what OpenBSD is and what it is not

See OpenBSD Innovations

  • Packet Filter : super awesome firewall

  • Sane defaults : you install, it works, no tweak

  • Stability : upgrades go smooth and are easy

  • pledge and unveil : security features to reduce privileges of software, lots of ports are patched

  • W^X security

  • Microphone muted by default, unlockable by root only

  • Video devices owned by root by default, not usable by users until permission change

  • Has only FFS file system which is slow and has no “feature”

  • No wine for windows compatibility

  • No linux compatibility

  • No bluetooth support

  • No usb3 full speed performance

  • No VM guest additions

  • Only in-house VMM for being a VM host, only supports OpenBSD and some Linux

  • Poor fuse support (it crashes quite often)

  • No nvidia support (nvidia’s fault)

  • No container / docker / jails

Does OpenBSD have a Code Of Conduct?

No and there is no known plan of having one.

This is a topic upsetting OpenBSD people, just don’t ask about it and send patches.

What is the OpenBSD release process?

OpenBSD FAQ official information

The last two releases are called “-release” and are officially supported (patches for security issues are provided).

-stable version is the latest release with the base system patches applied, the -stable ports tree has some patches backported from -current, mainly to fix security issues. Official packages for -stable are built and are picked up automatically by pkg_add(1).

What is -current?

It’s the development version with the latest packages and latest code. You shouldn’t use it just to get the latest package versions.

How do I install -current ?

OpenBSD FAQ about current

  • download the latest snapshot install .iso or .fs file from your favorite mirror under /snapshots/ directory
  • boot from it

How do I upgrade to -current

OpenBSD FAQ about current

You can use the script sysupgrade -s; note that the -s flag is only useful if you are not running -current right now, and harmless otherwise.

Monitor your systems with reed-alert

Written by Solène, on 17 January 2018.
Tags: #unix #lisp

Comments on Mastodon

This article will present my software reed-alert, which checks user-defined states and sends user-defined notifications. I made it really easy to use but still configurable and extensible.

Description

reed-alert is not a monitoring tool producing graphs or storing values. It does a job sysadmins are looking for, because there is no alternative product (the alternatives are meant for very large infrastructures, like Zabbix, so they are not comparable).

From its configuration file, reed-alert will check various states and, if one fails, trigger a command to send a notification (totally user-defined).

Fetch it

This is an open-source and free software released under the MIT license; you can install it with the following commands:

# git clone git://bitreich.org/reed-alert
# cd reed-alert
# make
# doas make install

This will install a script reed-alert in /usr/local/bin/ with the default Makefile variables. It will try to use ecl and then sbcl if ecl is not installed.

A README file is available as documentation to describe how to use it, but we will see here how to get started quickly.

You will find a few files there. reed-alert is a Common LISP software, and it has been chosen, for (I hope) good reasons, that the configuration file is plain Common LISP.

There is a configuration file looking like a real world example named config.lisp.sample, and another configuration file I use for testing named example.lisp containing a lot of cases.

Let’s start

In order to use reed-alert we only need to create a new configuration file and then add a cron job.

Configuration

We are going to see how to configure reed-alert. You can find more explanations or details in the README file.

Alerts

We have to configure two kinds of parameters. First we need to set up a way to receive alerts; the easiest way to do so is by sending a mail with the “mail” command. Alerts are declared with the alert function, taking as parameters the alert name and the command to be executed. Some variables, looking like %date% or %params%, are replaced with values from the probe; you can find the list of probes in the README file.

In Common LISP, a function is called by opening a parenthesis, writing the function name, and then giving its parameters until the parenthesis is closed.

Example:

(alert mail "echo 'problem on %hostname%' | mail me@example.com")

One should take care about nesting quotes here.

reed-alert will fork a shell to start the command, so pipes and redirections work. You can be creative when writing alerts that:

  • use a SMS service
  • write a script to post on a forum
  • publishing a file on a server
  • send text to IRC with ii client

Checks

Now that we have some alerts, we will configure some checks in order to make reed-alert useful. It uses probes, which are pre-defined checks with parameters; a probe could be “has this file not been updated in the last N minutes?” or “is the disk space usage of partition X more than Y?”

I chose to name the check function “=>”; it isn’t a real name, but it evokes an arrow, something going forward. The two previous examples, using our mail notifier defined earlier, would look like:

(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage   :limit 90)

It’s also possible to use shell commands and check the return code using the command probe, allowing the user to define useful checks.

(=> mail command :command "echo '/is-this-gopher-server-up?' | nc -w 3 dataswamp.org 70"
                 :desc "dataswamp.org gopher server")

We use echo + netcat to check that a connection to a socket works. The :desc keyword gives a nicer name in the output instead of just “COMMAND”.

Garniture

We wrote the minimum required to configure reed-alert; your my-config.lisp file should now look like this:

(alert mail "echo 'problem on %hostname%' | mail me@example.com")
(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage   :limit 90)

Now, you can start it every 5 minutes from a crontab with this:

*/5 * * * * ( reed-alert /path/to/my-config.lisp )

The time between each run is up to you, depending on what you monitor.

Important

By default, when a check returns a failure, reed-alert will only trigger the associated notifier once it reaches the 3rd failure. Then, it will notify again when the service is back (the variable %state% is replaced by start or end, so you know whether the problem starts or stops).

This is to prevent reed-alert from sending a notification each time it checks; for most users there is absolutely no need for that.

The number of failures before triggering can be modified by using the keyword :try, as in the following example:

(=> mail disk-usage :limit 90 :try 1)

In this case, you will get notified at the first failure.

The failure counters of the checks are stored in files (one per check) in the “states/” directory of the reed-alert working directory.

New cl-yag version

Written by Solène, on 16 December 2017.
Tags: #unix #cl-yag

Comments on Mastodon

Introduction

cl-yag is a static website generator, used to publish a website and/or a gopher hole from a list of articles. As the developer of cl-yag, I'm happy to announce that a new version has been released.

New features

The new version, numbered 0.6, brings a lot of new features:

  • support for a different markup language per article
  • configurable date format
  • configurable gopher output format
  • ships with the default theme "clyma", minimalist but responsive (the one used on this website)
  • easier to use
  • full user documentation

The code is available at git://bitreich.org/cl-yag, the program requires sbcl or ecl to work.

Per article markup language

The feature I'm the most proud of is allowing a different markup language per article. While on my blog I chose to use markdown, it's sometimes not adapted for more elaborate articles, like the one about LISP containing code, which was written in org-mode and then converted to markdown manually to fit cl-yag. Now, the user can declare a named "converter", which is a command line with pattern replacement, to produce the html file. We can imagine a lot of things with this, even producing a gallery with a find + awk command. Now I can use markdown by default and specify when I want to use org-mode or something else.

This is the way to declare a converter, taking org-mode as an example, which is not very simple because emacs is not script friendly:

(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

And here is an easy way to produce a gallery with awk from a .txt file containing a list of image paths.

(converter :name :gallery :extension ".txt"
	   :command (concatenate 'string
				 "awk 'BEGIN { print \"<div class=\\\"gallery\\\">\"} "
				 "{ print \"<img src=\\\"static/images/\"$1\"\\\" />\" } "
				 " END { print  \"</div>\"}' data/%IN | tee %OUT"))

The concatenate function is only used to improve the presentation, splitting the command across multiple lines to make it easier to read. It's possible to write the whole command on one line.

The patterns %IN and %OUT are replaced by the input file name and the output file name when the command is executed.

For an easier example, the default markdown converter looks like this, calling the multimarkdown command:

(converter :name :markdown :extension ".md"
	   :command "multimarkdown -t html -o %OUT data/%IN")

It's really easy (I hope!) to add the new converters you need with this feature.

Date format configurable

One problem I had with cl-yag is that it's plain vanilla Common LISP without libraries, so it's easier to fetch and use, but it lacks some elaborate libraries, like one to parse and format dates. Before this release, I was writing plain text like "14 December 2017" in the date field of a blog post. It was easy to use, but not really usable in the pubDate attribute of the RSS feed, and if I wanted to change the display of the date for some reason, I would have had to rewrite everything.

Now, the date is simply in the format "YYYYMMDD", like "20171231" for the 31st of December 2017. And in the configuration variable, there is a :date-format keyword to define the date display. This variable is a string allowing pattern replacement of the following variables:

  • %DayNumber : day of the month as a number, from 1 to 31
  • %DayName : day of the week, from Monday to Sunday; names are written in english in the source code and can be translated
  • %MonthNumber : month as a number, from 1 to 12
  • %MonthName : month name, from January to December; names are written in english in the source code and can be translated
  • %Year : year

Currently, at the time of writing, I use the value "%DayNumber %MonthName %Year"

A :gopher-format keyword exists in the configuration file to configure the date format in the gopher export. It can be different from the html one.

More Gopher configuration

There are cases where the gopher server uses an unusual syntax compared to most servers. I wanted to make it configurable, so the user can easily use cl-yag without having to mess with the code. I provide the default for geomyidae, and another syntax is available in comments. There is also a configurable value to indicate where to store the gopher page menu; it's not always gophermap, it could be index.gph or whatever you need.

Easier to use

A comparison of code will make it easier to understand. There was a little change in the way blog posts are declared:

From

(defparameter *articles*
  (list
   (list :id "third-article"  :title "My third article" :tag "me" :date "20171205")
   (list :id "second-article" :title "Another article"  :tag "me" :date "20171204")
   (list :id "first-article"  :title "My first article" :tag "me" :date "20171201")
   ))

to

(post :id "third-article"  :title "My third article" :tag "me" :date "20171205")
(post :id "second-article" :title "Another article"  :tag "me" :date "20171204")
(post :id "first-article"  :title "My first article" :tag "me" :date "20171201")

Each post is independently declared, and I plan to add a "page" function to create static pages, but this is going to be for the next version!

Future work

I am very happy to hack on cl-yag, I want to continue improving it but I should really think about each feature I want to add. I want to keep it really simple even if it limits the features.

I want to allow the creation of static pages like "About me", "Legal" or "websites I liked" that integrate well in the template. The user may not want all the static page links to go to the same place in the template, or to use the same template. I'm thinking about this.

Also, I think the gopher generation could be improved, but I still have no idea how.

Other themes may come in the default configuration, allowing the user to have a choice between themes. But as for now, I don't plan to bring a theme using javascript.

How to merge changes with git when you are a noob

Written by Solène, on 13 December 2017.
Tags: #git

Comments on Mastodon

I’m a real noob with git and I always screw everything up when someone clones one of my repos, contributes and asks me to merge the changes.

Now I found an easy way to merge commits from another repository. Here is a simple way to handle this: we will get changes from project1_modified and merge them into our own repository, which lives in the my_project1 directory. This is not the fastest or maybe not the optimal way, but I found it to work reliably.

$ cd /path/to/projects
$ git clone git://remote/project1_modified
$ cd my_project1
$ git checkout master
$ git remote add modified ../project1_modified/
$ git remote update
$ git checkout -b new_code
$ git merge modified/master
$ git checkout master
$ git merge new_code
$ git branch -d new_code

This process makes you download the repository of the people who contributed to the code; then you add it as a remote source in your project and create a new branch where you will do the merge. If something is wrong, you will be able to manage conflicts easily. Once you have tried the code and you are fine with it, you merge this branch into master. When you are done, you can delete the branch.

If you later need to get new commits from the other repo, it becomes easier.

$ cd /path/to/projects
$ cd project1_modified
$ git pull
$ cd ../my_project1
$ git fetch modified
$ git merge modified/master

And you are done !

How to type using only one hand: keyboard mirroring

Written by Solène, on 12 December 2017.
Tags: #unix

Comments on Mastodon

Hello

Today is a bit special because I’m writing with a mirror keyboard layout. I use only half of my keyboard to type all the characters. To make things harder, the layout is qwerty while I usually use azerty (I’m used to qwerty but it doesn’t help).

Here, “caps lock” is a modifier key that must be pressed to obtain the characters of the other side. Mirrored, one will find ‘p’ instead of ‘q’ or ‘h’ instead of ‘g’ while pressing caps lock.

It’s even possible to type backspace to delete characters or to get a newline. Not all the punctuation is available through this, only ‘.<|¦>’",’.

While I type this I am getting a bit faster and it becomes easier and easier. It’s definitely worth it if you can’t use two hands.

This has been made possible by Randall Munroe. To enable it, just download the file here and type

xkbcomp mirrorlayout.kbd $DISPLAY

Backspace is typed with tilde and return with space, using the modifier of course.

I’ve spent approximately 15 minutes writing this, but the time spent hasn’t been linear, it’s much more fluent now !

Mirrorboard: A one-handed keyboard layout for the lazy by Randall Munroe

Showing some Common Lisp features

Written by Solène, on 05 December 2017.
Tags: #lisp

Comments on Mastodon

Introduction: comparing LISP to Perl and Python

We will refer to Common LISP as CL in the following article.

I wrote it to share what I like about CL. I’m using Perl to compare against CL features, using real world cases for the average programmer. If you are a CL or perl expert, you may say that some examples could be rewritten with very specific syntax to make them smaller or faster, but the point here is to show usual and readable examples for usual programmers.

This article is aimed at people with an interest in programming; some basic programming knowledge is needed to understand the following. If you know how to read C, Php, Python or Perl, it should be enough. The examples have been chosen to be easy.

I thank my friend killruana for his contribution as he wrote the python code.

Variables

Scope: global

Common Lisp code

(defparameter *variable* "value")

Defining a variable with defparameter at the top-level (= outside of a function) makes it global. It is common to surround the names of global variables with the \* character in CL code. This is only for the programmer's readability; the use of \* has no effect.

Perl code

my $variable = "value";

Python code

variable = "value";

Scope: local

This is where it begins to be interesting in CL. Declaring a local variable with let creates a new scope, delimited by parentheses, outside of which the variable isn’t known. This prevents doing bad things with variables that are not set or already freed. let can define multiple variables at once, or even, using let\*, variables depending on previously declared ones.

Common Lisp code

(let ((value (http-request)))
  (when value
    (let* ((page-title (get-title value))
           (title-size (length page-title)))
      (when page-title
        (let ((first-char (subseq page-title 0 1)))
          (format t "First char of page title is ~a~%" first-char))))))

Perl code

{
    local $value = http_request;
    if($value) {
        local $page_title = get_title $value;
        local $title_size = get_size $page_title;
        if($page_title) {
            local $first_char = substr $page_title, 0, 1;
            printf "First char of page title is %s\n", $first_char;
        }
    }
}

The scope of a local value is limited to the enclosing curly brackets, whether those of an if/while/for/foreach block or plain brackets.

Python code

if True:
    hello = 'World'
print(hello) # displays World

There is no way to define a local variable in python; the scope of a variable is limited to the parent function.

Printing and format text

CL has a VERY powerful function to print and format text, aptly named format. It can even manage plurals of words (in english only)!

Common Lisp code

(let ((words (list "hello" "Dave" "How are you" "today ?")))
  (format t "~{~a ~}~%" words))

format can loop over lists using ~{ as start and ~} as end.

Perl code

my @words = @{["hello", "Dave", "How are you", "today ?"]};
foreach my $element (@words) {
    printf "%s ", $element;
}
print "\n";

Python code

# Printing and format text
# Loop version
words = ["hello", "Dave", "How are you", "today ?"]
for word in words:
    print(word, end=' ')
print()

# list expansion version
words = ["hello", "Dave", "How are you", "today ?"]
print(*words)

Functions

function parameters: rest

Sometimes we need to pass an unknown number of arguments to a function. CL supports this with the &rest keyword in the function declaration, while perl supports it using the @_ variable.

Common Lisp code

(defun my-function(parameter1 parameter2 &rest rest)
  (format t "My first and second parameters are ~a and ~a.~%Others parameters are~%~{    - ~a~%~}~%"
          parameter1 parameter2 rest))

(my-function "hello" "world" 1 2 3)

Perl code

sub my_function {
    my $parameter1 = shift;
    my $parameter2 = shift;
    my @rest = @_;

    printf "My first and second parameters are %s and %s.\nOthers parameters are\n",
        $parameter1, $parameter2;

    foreach my $element (@rest) {
        printf "    - %s\n", $element;
    }
}

my_function "hello", "world", 0, 1, 2, 3;

Python code

def my_function(parameter1, parameter2, *rest):
    print("My first and second parameters are {} and {}".format(parameter1, parameter2))
    print("Others parameters are")
    for parameter in rest:
        print(" - {}".format(parameter))

my_function("hello", "world", 0, 1, 2, 3)

The trick in python to handle rest arguments is the wildcard character in the function definition.

function parameters: named parameters

CL supports named parameters, using a keyword to specify each name. This is not directly possible in perl; using a hash as parameter can do the job instead.

CL allows choosing a default value when a parameter isn’t set. It’s harder to do in perl: we must check whether the key is already set in the hash and give it a value in the function.

Common Lisp code

(defun my-function(&key (key1 "default") (key2 0))
  (format t "Key1 is ~a and key2 (~a) has a default of 0.~%"
          key1 key2))

(my-function :key1 "nice" :key2 ".Y.")

There is no way to pass named parameters to a perl function. The best way is to pass a hash variable, check the keys needed and assign a default value to those that are undefined.

Perl code

sub my_function {
    my $hash = shift;

    if(! exists $hash->{key1}) {
        $hash->{key1} = "default";
    }

    if(! exists $hash->{key2}) {
        $hash->{key2} = 0;
    }

    printf "My key1 is %s and key2 (%s) default to 0.\n",
        $hash->{key1}, $hash->{key2};
}

my_function { key1 => "nice", key2 => ".Y." };

Python code

def my_function(key1="default", key2=0):
    print("My key1 is {} and key2 ({}) default to 0.".format(key1, key2))

my_function(key1="nice", key2=".Y.")

Loop

CL has only one loop operator, named loop, which could be seen as an entire language in itself. Perl has do while, while, for and foreach.

loop: for

Common Lisp code

(loop for i from 1 to 100
   do
     (format t "Hello ~a~%" i))

Perl code

for(my $i=1; $i <= 100; $i++) {
    printf "Hello %i\n", $i;
}

Python code

for i in range(1, 101):
   print("Hello {}".format(i))

loop: foreach

Common Lisp code

(let ((elements '(a b c d e f)))
  (loop for element in elements
     counting element into count
     do
       (format t "Element number ~s : ~s~%"
               count element)))

Perl code

# verbose and readable version
my @elements = @{['a', 'b', 'c', 'd', 'e', 'f']};
my $count = 0;
foreach my $element (@elements) {
    $count++;
    printf "Element number %i : %s\n", $count, $element;
}

# compact version
for(my $i=0; $i<$#elements+1;$i++) {
    printf "Element number %i : %s\n", $i+1, $elements[$i];
}

Python code

# Loop foreach
elements = ['a', 'b', 'c', 'd', 'e', 'f']
count = 0
for element in elements:
    count += 1
    print("Element number {} : {}".format(count, element))

# Pythonic version
elements = ['a', 'b', 'c', 'd', 'e', 'f']
for index, element in enumerate(elements):
    print("Element number {} : {}".format(index, element))

LISP only tricks

Store/restore data on disk

The simplest way to store data in LISP is to write a data structure into a file using the print function. The output of print can be evaluated later with read.

Common Lisp code

(defun restore-data(file)
  (when (probe-file file)
    (with-open-file (x file :direction :input)
      (read x))))

(defun save-data(file data)
  (with-open-file (x file
                     :direction :output
                     :if-does-not-exist :create
                     :if-exists :supersede)
    (print data x)))

;; using the functions
(save-data "books.lisp" *books*)
(defparameter *books* (restore-data "books.lisp"))

This permits skipping the use of a data storage format like XML or JSON. Common LISP can read Common LISP, and that is all it needs. It can store objects like arrays, lists or structures using a plain text format. It can’t dump hash tables directly.

Creating a new syntax with a simple macro

Sometimes we have cases where we need to repeat code and there is no way to reduce it, because it’s too specific or because of the language itself. Here is an example where we can use a simple macro to reduce the code written in a succession of conditions doing the same check.

We will start from this

Common Lisp code

(when value
  (when (string= line-type "3")
    (progn
      (print-with-color "error" 'red line-number)
      (log-to-file "error")))
  (when (string= line-type "4")
    (print-with-color text))
  (when (string= line-type "5")
    (print-with-color "nothing")))

to this, using a macro

Common Lisp code

(defmacro check(identifier &body code)
  `(progn
     (when (string= line-type ,identifier)
       ,@code)))

(when value
  (check "3"
         (print-with-color "error" 'red line-number)
         (log-to-file "error"))
  (check "4"
         (print-with-color text))
  (check "5"
         (print-with-color "nothing")))

The code is much more readable and the macro is easy to understand. One could argue that in another language a switch/case could work here; I chose a simple example to illustrate the use of a macro, but they can achieve much more.

Create powerful wrappers with macros

I use macros when I need to repeat code that affects variables. A lot of CL modules offer a construct like with-something: a wrapper macro that does some logic, like opening a database, checking it’s opened and closing it at the end, while executing your code inside.

Here I will write a tiny http request wrapper, allowing me to write http requests very easily, my code being able to use variables from the macro.

Common Lisp code

(defmacro with-http(url &body code)
  `(progn
     (multiple-value-bind (content status head)
         (drakma:http-request ,url :connection-timeout 3)
       (when content
         ,@code))))

(with-http "https://dataswamp.org/"
  (format t "We fetched headers ~a with status ~a. Content size is ~d bytes.~%"
          status head (length content)))

In Perl, the same thing could be written like this

Perl code

sub get_http {
    my $url = shift;
    my %http = magic_http_get $url;
    if($http{content}) {
        return %http;
    } else {
        return undef;
    }
}

{
    local %data = get_http "https://dataswamp.org/";
    if(%data) {
        printf "We fetched headers %s with status %d. Content size is %d bytes.\n",
            $data{headers}, $data{status}, length($data{content});
    }
}

The curly brackets are important there: I want to emphasize that the %data variable is only available inside them. Lisp code is written as a succession of local scopes, and this is something I really like.

Python code

import requests
with requests.get("https://dataswamp.org/") as fd:
    print("We fetched headers %s with status %d. Content size is %s bytes." \
                % (list(fd.headers.keys()), fd.status_code, len(fd.content)))

Allow wide resolution on intel graphics laptop

Written by Solène, on 22 November 2017.
Tags: #hardware

Comments on Mastodon

I just received a wide screen with a 2560x1080 resolution but xrandr wasn’t allowing me to use it. The intel graphics specifications say I should be able to go up to 4096xsomething, so it’s a software problem.

Generate the information you need with gtf

$ gtf 2560 1080 59.9

Take only the numbers after the resolution name in quotes: from the output Modeline "2560x1080_59.90" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync, keep only 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync

Now add the new resolution and make it available to your output (mine is HDMI2):

$ xrandr --newmode "2560x1080" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
$ xrandr --addmode HDMI2 2560x1080

You can now use this mode with arandr using the GUI, or with xrandr by typing xrandr --output HDMI2 --mode 2560x1080

You will need to set the new mode each time the system starts. I added the two xrandr lines to my ~/.xsession file, which starts stumpwm.
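
For reference, here is a sketch of what the end of that ~/.xsession can look like (adapt the output name and the window manager command to your setup):

xrandr --newmode "2560x1080" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
xrandr --addmode HDMI2 2560x1080
xrandr --output HDMI2 --mode 2560x1080
exec stumpwm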

Low bandwidth: Fetch OpenBSD sources

Written by Solène, on 09 November 2017.
Tags: #openbsd66 #openbsd

Comments on Mastodon

When you fetch OpenBSD src or ports from CVS and you want to save bandwidth during the process, there is a little trick that changes everything: compression

Just add -z9 to the parameters of your cvs command line and the remote server will send you compressed files, saving up to 10 times the bandwidth, or speeding up the transfer as much, or both. (I have different users on my network and I limit my incoming bandwidth so other people can have some too, so it is important to reduce the data transferred when possible.)

The command line should look like:

$ cvs -z9 -qd anoncvs@anoncvs.fr.openbsd.org:/cvs checkout -P src

Don’t abuse this, this consumes CPU on the mirror.

Gentoo port of the week: slrn

Written by Solène, on 08 November 2017.
Tags: #portoftheweek

Comments on Mastodon

Introduction

Hello,

Today I will speak about slrn, an NNTP client. I’m using it to fetch mailing lists I’m following (without necessarily subscribing to them) and read them offline. I’ll speak about using NNTP to read news-groups; in a more general way NNTP is used to access Usenet, but I’m not sure I know exactly what Usenet is, so we will stick here to connecting to the mailing-list archives offered by gmane.org (which offers access to mailing-lists and newsgroups through NNTP).

Long story short, recently I moved and now I have a very poor DSL connection. Plus, I’m often moving by train with nearly no 4G/LTE support during the trip. I’m going to write about getting things done offline and about reducing bandwidth usage. This is a really interesting topic in our hyper-connected world.

So, back to slrn: I want to be able to fetch lots of news and read them later. Every NNTP client I tried was fetching the articles list (in NNTP, an article = a mail, a forum = a mailing list) and then downloading each article only when you want to read it. Some can cache the result when you fetch an article, so if you want to read it later it is already fetched. While slrn doesn’t support caching at all, it comes with the utility slrnpull which will create a local copy of the forums you want, and slrn can be configured to fetch data from there. slrnpull needs to be configured to tell it what to fetch, what to keep etc… and a cron job will start it from time to time to fetch the new articles.

Configuration

The following configuration is made to be simple to use; it runs with your regular user. This is for Gentoo; another system might provide a dedicated user and everything pre-configured.

Create the folder for slrnpull and change the owner:

$ sudo mkdir /var/spool/slrnpull
$ sudo chown user /var/spool/slrnpull

The slrnpull configuration file must be placed in the folder it will use, so edit /var/spool/slrnpull/slrnpull.conf as you want; my configuration file follows.

default 200 45 0
# indicates a default value of 200 articles to be retrieved from the server
# and that such an article will expire after 45 days.

gmane.network.gopher.general
gmane.os.freebsd.questions
gmane.os.freebsd.devel.ports
gmane.os.openbsd.misc
gmane.os.openbsd.ports
gmane.os.openbsd.bugs

The client slrn needs to be configured to find the information from slrnpull.

File ~/.slrnrc:

set hostname "your.hostname.domain"
set spool_inn_root "/var/spool/slrnpull"
set spool_root "/var/spool/slrnpull/news"
set spool_nov_root "/var/spool/slrnpull/news"
set read_active 1
set use_slrnpull 1
set post_object "slrnpull"
set server_object "spool"

Add this to your crontab to fetch news once per hour (at HH:00 minutes):

0 * * * * NNTPSERVER=news.gmane.org slrnpull -d /var/spool/slrnpull/

Now, just type slrn and enjoy.

Cheat Sheet

Quick cheat sheet for using slrn; there is help available with “?” but it is not very easy to understand at first.

  • h : hide/display the article view
  • space : scroll to next page in the article, go to next at the end
  • enter : scroll one line
  • tab : scroll to the end of quotes
  • c : mark all as read

Tips

  • when a forum is empty, it is not shown by default

I found that a program named slrnconf exists, providing a GUI to configure slrn; I didn’t try it.

Going further

It seems NNTP clients support a score file that can mark interesting articles using user defined rules.

The NNTP protocol allows submitting articles (replies or new threads) but I have no idea how it works. Someone told me to forget about this and use mail to the mailing-lists when possible.

The leafnode daemon can be used instead of slrnpull in a more generic way. It is an NNTP server that one would use locally as a proxy to other NNTP servers. It will mirror the forums you want and serve them back through NNTP, allowing you to use any NNTP client (slrnpull enforces the use of slrn). leafnode seems old; a v2 is still in development but looks rather inactive. It is also complicated: I wanted something KISS (Keep It Simple Stupid) and it is not.

Other clients you may want to try

Console NNTP clients

  • gnus (in emacs)
  • wanderlust (in emacs too)
  • alpine

GUI clients

  • pan (may be able to download, but I failed using it)
  • seamonkey (the whole mozilla suite supports nntp)

Zooming with emacs, tmux or stumpwm

Written by Solène, on 25 October 2017.
Tags: #emacs #window-manager #tmux

Comments on Mastodon

Hey ! You use stumpwm, emacs or tmux and your screen (not the GNU screen) is split into lots of parts ? There is a solution to improve that. ZOOMING !

Each of them works with a screen divided into panes/windows (the meaning of these words changes between programs); sometimes you want to have the one where you work in fullscreen. An option exists in each of them to get a window temporarily in fullscreen.

Emacs: (not native)

This is not native in emacs, you will need to install zoom-window from your favorite repository.

Add these lines in your ~/.emacs:

(require 'zoom-window)
(global-set-key (kbd "C-x C-z") 'zoom-window-zoom)

Type C-x C-z to zoom/unzoom your current window

Tmux

Toggle zoom (in or out)

C-b z

Stumpwm

Add this to your ~/.stumpwmrc

(define-key *root-map* (kbd "z")            "fullscreen")

Using “prefix z” the current window will toggle fullscreen.

Gentoo port of the week: Nethogs

Written by Solène, on 17 October 2017.
Tags: #portoftheweek

Comments on Mastodon

Today I will present you a nice port (from Gentoo this time, not from FreeBSD) and this port is even Linux only.

nethogs is a console program which shows the bandwidth usage of each running application using the network. This can be particularly helpful to find which application is sending traffic and at which rate.

It can be installed with emerge as simply as emerge -av net-analyzer/nethogs.

It is very simple to use: just type nethogs in a terminal (as root). There are some parameters and it’s a bit interactive; I recommend reading the manual if you need details about them.

I am currently running Gentoo on my main workstation, which makes me discover new things, so maybe I will write more regularly about Gentoo ports.

How to limit bandwidth usage of emerge in Gentoo

Written by Solène, on 16 October 2017.
Tags: #linux

Comments on Mastodon

If for some reason you need to reduce the download speed of emerge when downloading sources you can use a tweak in portage’s make.conf as explained in the handbook.

To keep wget and just add the bandwidth limit, add this to /etc/portage/make.conf:

FETCHCOMMAND="${FETCHCOMMAND} --limit-rate=200k"

Of course, adjust your rate to your need.

Display manually installed packages on FreeBSD 11

Written by Solène, on 16 August 2017.
Tags: #freebsd11

Comments on Mastodon

If you want to show the packages installed manually (and not installed as a dependency of another package), you have to use “pkg query” and keep the packages whose %a value (automatically installed == 1) isn’t 1. The second string formats the output to display the package name:

$ pkg query -e "%a != 1" "%n"
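
Conversely, to list only the packages that were pulled in as dependencies, the same evaluation syntax can be reused (a sketch built from the operator shown above):

$ pkg query -e "%a != 0" "%n"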

Using firefox on Guix distribution

Written by Solène, on 16 August 2017.
Tags: #linux #guix

Comments on Mastodon

Update 2020: This method almost certainly doesn’t work anymore, but I don’t have a Guix installation to verify.

I’m new to Guix. It’s a wonderful system but it’s so different from any usual linux distribution that it’s hard to achieve some basic tasks. As Guix is 100% free/libre software, Firefox has been removed and replaced by icecat. This is nearly the same software, but some “features” have been removed (like webRTC) for some reasons (security, freedom). I don’t blame the Guix team for that, I understand the choice.

But my problem is that I need Firefox. I finally managed to get it working from the official binary downloaded from the mozilla website.

You need to install some packages to get the libraries, which will become available under your profile directory. Then, tell firefox to load libraries from there and it will start.

guix package -i glibc glib gcc gtk+ libxcomposite dbus-glib libxt
LD_LIBRARY_PATH=~/.guix-profile/lib/ ~/.guix-profile/lib/ld-linux-x86-64.so.2 ~/firefox_directory/firefox

Also, it seems that running icecat and firefox simultaneously works; they store data in ~/.mozilla/icecat and ~/.mozilla/firefox so they are separated.

Using emacs to manage mails with mu4e

Written by Solène, on 15 June 2017.
Tags: #emacs #email

Comments on Mastodon

In this article we will see how to fetch, read and manage your emails from Emacs using mu4e. The process is the following: the mbsync command (mbsync is the command name, the software name is isync) creates a mirror of an imap account in Maildir format on your filesystem. mu from mu4e creates a database from the Maildir directory using the xapian library (a full text search database), then mu4e (mu for emacs) is the GUI which queries the xapian database to manipulate your mails.

Mu4e works with dynamic bookmarks, so you can have predefined filters instead of classic folders. You can also do a query and reduce the results with successive queries.
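
For example, a bookmark can be declared in your ~/.emacs; this is a sketch and the exact format depends on your mu4e version:

(add-to-list 'mu4e-bookmarks
             '("flag:unread AND NOT flag:trashed" "Unread messages" ?u))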

You may have heard about using notmuch with emacs to manage mails; mu4e and notmuch don’t do the same job. While notmuch is a nice tool to find messages from queries and create filters, it operates as a read-only tool and can’t do anything with your mail. mu4e lets you write, move, delete and flag mails etc… AND still allows complex queries.

I wrote this article to allow people to try mu4e quickly; you may want to read both the isync and mu4e manuals to build a configuration suiting your needs.

Installation

On OpenBSD you need to install 2 packages:

# pkg_add mu isync

isync configuration

We need to configure isync to connect to the IMAP server:

Edit the file ~/.mbsyncrc; there is a trick to avoid having the password in clear text in the configuration file, see the isync configuration manual for that:

IMAPAccount my_imap
Host my_host_domain.info
User imap_user
Pass my_pass_in_clear_text
SSLType IMAPS

IMAPStore my_imap-remote
Account my_imap

MailDirStore my_imap-local
Path ~/Maildir/my_imap/
Inbox ~/Maildir/my_imap/Inbox
SubFolders Legacy

Channel my_imap
Master :my_imap-remote:
Slave :my_imap-local:
Patterns *
Create Slave
Expunge Both

mu4e / emacs configuration

We need to configure mu4e to tell it where to find the mail folder. Add this to your ~/.emacs file.

(require 'mu4e)
(setq mu4e-maildir "~/Maildir/my_imap/"
      mu4e-sent-folder "/Sent Messages/"
      mu4e-trash-folder "/Trash"
      mu4e-drafts-folder "/Drafts")

First start

A few commands are needed in order to make everything work. We need to create the base folder because the mbsync command won’t do it for some reason, and we need mu to index the mails the first time.

mbsync can take a moment because it will download ALL your mails.

$ mkdir -p ~/Maildir/my_imap
$ mbsync -aC
$ mu init --maildir=~/Maildir/my_imap
$ mu index

How to use mu4e

Start emacs, run M-x mu4e RET and enjoy; the documentation of mu4e is well done. Press “U” at the mu4e screen to synchronize with the imap server.

A query for mu4e looks like this:

list:misc.openbsd.org flag:unread avahi

This query will search mails having the list header “misc.openbsd.org”, which are unread and which contain the “avahi” pattern.

date:20140101..20150215 urgent

This one will look for mails within the date range of 1st January 2014 to 15th February 2015 containing the word “urgent”.

Additional notes

The current setup doesn’t handle sending mails; I’ll write another article about this. It requires configuring SMTP authentication and an identity for mu4e.

Also, you may need to tweak the mbsync or mu4e configuration; some settings must be changed depending on the imap server, and this is particularly important for deleted mails.

Fold functions in emacs

Written by Solène, on 16 May 2017.
Tags: #emacs

Comments on Mastodon

You want to fold (hide) code between brackets like an if statement, a function, a loop etc.? Use the HideShow minor mode which is part of emacs. All you need is to enable hs-minor-mode. Now you can fold/unfold by cycling with C-c @ C-c.
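
If you want it enabled automatically in programming buffers, a one-liner for your ~/.emacs does it (a sketch, assuming your modes derive from prog-mode):

(add-hook 'prog-mode-hook 'hs-minor-mode)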

HideShow on EmacsWiki

How to change Firefox locale to ... esperanto?

Written by Solène, on 14 May 2017.
Tags: #firefox

Comments on Mastodon

Hello !

Today I felt the need to change the language of my Firefox browser to esperanto, but it was not straightforward…

First, you need to install your language pack, depending on whether you use the official Mozilla Firefox or Icecat, the rebranded firefox with non-free stuff removed

Then, open about:config in firefox; we will need to change 2 keys. Firefox needs to know that we don’t want to use our user’s locale as the Firefox language, and which language we want to set.

  • set intl.locale.matchOS to false
  • set general.useragent.locale to the language code you want (eo for esperanto)
  • restart firefox/icecat

you’re done ! Bonan tagon

Markup languages comparison

Written by Solène, on 13 April 2017.
Tags: #unix

Comments on Mastodon

For fun, here are a few examples of the same output in different markup languages. The list isn’t exhaustive of course.

This is org-mode:

* This is a title level 1

+ first item
+ second item
+ third item with a [[http://dataswamp.org][link]]

** title level 2

Blah blah blah blah blah
blah blah blah *bold* here

#+BEGIN_SRC lisp
(let ((hello (init-string)))
   (format t "~A~%" (+ 1 hello))
   (print hello))
#+END_SRC

This is markdown :

# this is title level 1

+ first item
+ second item
+ third item with a [Link](http://dataswamp.org)

## Title level 2

Blah blah blah blah blah
blah blah blah **bold** here

    (let ((hello (init-string)))
       (format t "~A~%" (+ 1 hello))
       (print hello))

or

```
(let ((hello (init-string)))
   (format t "~A~%" (+ 1 hello))
   (print hello))
```

This is HTML :

<h1>This is title level 1</h1>
<ul>
  <li>first item</li>
  <li>second item</li>
  <li>third item with a <a href="http://dataswamp.org">link</a></li>
</ul>

<h2>Title level 2</h2>

<p>Blah blah blah blah blah
  blah blah blah <strong>bold</strong> here

<pre><code>(let ((hello (init-string)))
   (format t "~A~%" (+ 1 hello))
   (print hello))</code></pre>

This is LaTeX :

\begin{document}

\section{This is title level 1}

\begin{itemize}
\item First item
\item Second item
\item Third item
\end{itemize}

\subsection{Title level 2}

Blah blah blah blah blah
blah blah blah \textbf{bold} here

\begin{verbatim}
(let ((hello (init-string)))
    (format t "~A~%" (+ 1 hello))
    (print hello))
\end{verbatim}

\end{document}

OpenBSD 6.1 released

Written by Solène, on 11 April 2017.
Tags: #openbsd #unix

Comments on Mastodon

Today OpenBSD 6.1 has been released. I won’t copy & paste the change list but, in a few words, it gets better.

Link to the official announcement

I already upgraded a few servers, with both methods: one with a bsd.rd upgrade, which requires physical access to the server, and the other with the method well explained in the upgrade guide, which requires untarring the files and moving some files around. I recommend using bsd.rd if possible.

Connect to pfsense box console by usb

Written by Solène, on 10 April 2017.
Tags: #unix #network #openbsd66 #openbsd

Comments on Mastodon

Hello,

I have a pfsense appliance (Netgate 2440) with a usb console port; while it used to be a serial port, devices now seem to have a usb one. If you plug a usb wire from an openbsd box to it, you will see this in your dmesg:

uslcom0 at uhub0 port 5 configuration 1 interface 0 "Silicon Labs CP2104 USB to UART Bridge Controller" rev 2.00/1.00 addr 7
ucom0 at uslcom0 portno 0

To connect to it from OpenBSD, use the following command:

# cu -l /dev/cuaU0 -s 115200

And you’re done

List of useful tools

Written by Solène, on 22 March 2017.
Tags: #unix

Comments on Mastodon

Here is a list of software that I find useful; I will update this list every time I find a new tool. This is not an exhaustive list, these are only software I enjoy using:

Backup Tool

  • duplicity
  • borg
  • restore/dump

File synchronization tool

  • unison
  • rsync
  • lsyncd

File sharing tool / “Cloud”

  • boar
  • nextcloud / owncloud
  • seafile
  • pydio
  • syncthing (works as peer-to-peer without a master)
  • sparkleshare (uses a git repository so I would recommend storing only text files)

Editors

  • emacs
  • vim
  • jed

Web browsers using keyboard

  • qutebrowser
  • firefox with vimperator extension

Todo list / Personal Agenda…

  • org-mode (within emacs)
  • ledger (accounting)

Mail client

  • mu4e (inside emacs, requires the use of offlineimap or mbsync to fetch mails)

Network

  • curl
  • bwm-ng (to see bandwidth usage in real time)
  • mtr (traceroute with a gui that updates every n seconds)

Files integrity

  • bitrot
  • par2cmdline
  • aide

Image viewer

  • sxiv
  • feh

Stuff

  • entr (run a command when a file changes)
  • rdesktop (RDP client to connect to Windows VM)
  • xclip (read/set your X clipboard from a script)
  • autossh (to create tunnels that stays up)
  • mosh (connects to your ssh server with local input and better resilience)
  • ncdu (watch file system usage interactively in cmdline)
  • mupdf (PDF viewer)
  • pdftk (PDF manipulation tool)
  • x2x (share your mouse/keyboard between multiple computers through ssh)
  • profanity (XMPP cmdline client)
  • prosody (XMPP server)
  • pgmodeler (PostgreSQL database visualization tool)

How to check your data integrity?

Written by Solène, on 17 March 2017.
Tags: #unix #security

Comments on Mastodon

Today, the topic is data degradation, bit rot, bitrotting, damaged files or whatever you call it. It’s when your data get corrupted over time, due to a disk fault or some unknown reason.

What is data degradation ?

I shamelessly paste one line from wikipedia: “Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. The phenomenon is also known as data decay or data rot.”.

Data degradation on Wikipedia

So, how do we know we encounter a bit rot ?

bit rot = (checksum changed) && NOT (modification time changed)

While updating a file could be mistaken for bit rot, there is a difference:

update = (checksum changed) && (modification time changed)
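
To make these two rules concrete, here is a minimal Common Lisp sketch of the detection logic, using the ironclad library for the checksum (the function names are mine):

Common Lisp code

(defun file-state (path)
  "Return the current checksum and modification time of PATH."
  (values (ironclad:byte-array-to-hex-string
           (ironclad:digest-file :sha256 path))
          (file-write-date path)))

(defun bitrot-p (path old-checksum old-mtime)
  "True when the content changed but the modification time did not."
  (multiple-value-bind (checksum mtime) (file-state path)
    (and (string/= checksum old-checksum)
         (= mtime old-mtime))))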

How to check if we encounter bitrot ?

There is no way you can prevent bitrot. But there are some ways to detect it, so you can restore a corrupted file from a backup, or repair it with the right tool (you can’t repair a file with a hammer, except if it’s some kind of HammerFS ! :D )

In the following I will describe software I found to check (or even repair) bitrot. If you know other tools which are not in this list, I would be happy to hear about them, please mail me.

In the following examples, I will use this method to generate bitrot on a file:

% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% generate_checksum_database_with_tool
% echo "a" >> my_data/some_file_that_will_be_corrupted
% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% start_tool_for_checking

We generate the checksum database, then we alter a file by adding an “a” at the end, and we restore the modification and access time of the file. Then, we start the tool to check for data corruption.

The first touch is only for convenience: we could get the modification time with the stat command and pass the same value to touch after modifying the file.

bitrot

This is a python script and it’s very easy to use. It will scan a directory and create a database with the checksum of the files and their modification date.

Initialization usage:

% cd /home/my_data/
% bitrot
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 189 new, 0 updated, 0 renamed, 0 missing.
Updating bitrot.sha512... done.
% echo $?
0

Verify usage (case OK):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
% echo $?
0

Exit status is 0, so our data are not damaged.

Verify usage (case Error):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
error: SHA1 mismatch for ./sometextfile.txt: expected 17b4d7bf382057dc3344ea230a595064b579396f, got db4a8d7e27bb9ad02982c0686cab327b146ba80d. Last good hash checked on 2017-03-16 21:04:39.
Finished. 199.41 MiB of data read. 1 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
error: There were 1 errors found.
% echo $?
1

This is what you get when something is wrong. As the exit status of bitrot isn’t 0 in that case, it’s easy to write a script running it every day/week/month.

Github page

bitrot is available in OpenBSD ports in sysutils/bitrot since 6.1 release.

par2cmdline

This tool works with PAR2 archives (see below for more information about what PAR is) and, from them, it is able to check your data integrity AND repair it.

While it has some pros, like being able to repair data, the con is that it’s not very easy to use. I would use it for checking the integrity of long-term archives that won’t change. The main drawback comes from the PAR specifications: the archives are created from a file list, so if you have a directory with your files and you add new files, you will need to recompute ALL the PAR archives because the file list changed, or create new PAR archives only for the new files, which makes the verify process more complicated. It doesn’t seem suitable to create new archives for every bunch of files added to the directory.

PAR2 lets you choose the percentage of a file you will be able to repair; by default it will create the archives to be able to repair up to 5% of each file. That means you don’t need a whole backup of the files (although not having one would be a bad idea) and only approximately an extra 5% of your data to store.

Create usage:

% cd /home/
% par2 create -a integrity_archive -R my_data
Skipping 0 byte file: /home/my_data/empty_file

Block size: 3812
Source file count: 17
Source block count: 2000
Redundancy: 5%
Recovery block count: 100
Recovery file count: 7

Opening: my_data/[....]
[text cut here]
Opening: my_data/[....]

Computing Reed Solomon matrix.
Constructing: done.
Wrote 381200 bytes to disk
Writing recovery packets
Writing verification packets
Done

% echo $?
0

% ls -1
integrity_archive.par2
integrity_archive.vol000+01.par2
integrity_archive.vol001+02.par2
integrity_archive.vol003+04.par2
integrity_archive.vol007+08.par2
integrity_archive.vol015+16.par2
integrity_archive.vol031+32.par2
integrity_archive.vol063+37.par2
my_data

Verify usage (OK):

% par2 verify integrity_archive.par2 
Loading "integrity_archive.par2".
Loaded 36 new packets
Loading "integrity_archive.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.


All files are correct, repair is not required.
% echo $?
0

Verify usage (with error):

par2 verify integrity_archive.par.par2                                                 
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:


Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:


Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.

% echo $?
1

Repair usage:

% par2 repair integrity_archive.par.par2      
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:


Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.


Wrote 361069 bytes to disk

Verifying repaired files:

Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - found.

Repair complete.

% echo $?
0

par2cmdline is only one implementation doing the job; other tools working with PAR archives exist. They should all be able to work with the same PAR files.

Parchive on Wikipedia

Github page

par2cmdline is available in OpenBSD ports in archivers/par2cmdline.

If you find a way to add new files to existing archives, please mail me.

mtree

One can write a little script using mtree (in the base system on OpenBSD and FreeBSD) which will create a file with the checksum of every file in the specified directories. If the mtree output differs since last time, we can send a mail with the difference. This is done in the base install of OpenBSD for /etc and some other files, to warn you if they changed.

While it’s suited for directories like /etc, in my opinion this is not the best tool for doing integrity checks.
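
As a sketch of the idea (untested, so check the mtree flags and keywords on your system before relying on it):

# create the reference specification once
mtree -c -K sha256digest -p /home/my_data > /var/db/my_data.mtree

# later, compare the directory against it and mail the output if it differs
if ! mtree -p /home/my_data -f /var/db/my_data.mtree > /tmp/mtree.out; then
    mail -s "integrity changes in my_data" root < /tmp/mtree.out
fi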

ZFS

I would like to talk about ZFS and data integrity because this is where ZFS is very good. If you are using ZFS, you may not need any other software to take care of your data. When you write a file, ZFS also stores its checksum as metadata. By default, the “checksum” option is activated on datasets, but you may want to disable it for better performance.

There is a command to ask ZFS to check the integrity of the files. Warning: scrub is very I/O intensive and can take from hours to days or even weeks to complete, depending on your CPU, disks and the amount of data to scrub:

# zpool scrub zpool

The scrub command will recompute the checksum of every file in the ZFS pool; if something is wrong, it will try to repair it if possible. A repair is possible in the following cases:

If you have multiple disks like raid-Z or raid-1 (mirror), ZFS will look on the different disks to see if a non-corrupted version of the file exists; if it finds one, it will restore it on the disk(s) where it’s corrupted.

If you have set the ZFS option “copies” to 2 or 3 (1 = default), that means the file is written 2 or 3 times on the disk. Each file of the dataset will be allocated 2 or 3 times on the disk, so take care if you want to use it on a dataset containing heavy files ! If ZFS finds that a version of a file is corrupted, it will check the other copies of it and try to restore the corrupted file if possible.

You can see the percentage of the filesystem already scrubbed with

zpool status zpool

and the scrub can be stopped with

zpool scrub -s zpool

AIDE

Its name is an acronym for “Advanced Intrusion Detection Environment”; it’s a complicated piece of software which can be used to check for bitrot. I would not recommend using it if you only need bitrot detection.

Here is a few hints if you want to use it for checking your file integrity:

/etc/aide.conf

/home/my_data/ R
# Rule definition
All=m+s+i+sha256
summarize_changes=yes

The config file will create a database of all files in /home/my_data/ (R means recursive). The “All” line lists the checks done on each file; for bitrot checking, we want to check the modification time, size, checksum and inode of the files. The summarize_changes option permits having a list of changes if something is wrong.

This is the most basic config file you can have. Then you will have to run aide to create the database, and run it again later to create a new database and compare the two. It doesn’t update its database itself; you will have to move the old database away and tell it where to find the older database.
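
In practice the loop looks like this (a sketch; the database file names must match the database and database_out entries of your aide.conf):

$ aide --init                 # create a fresh database at the database_out path
$ mv aide.db.new aide.db      # promote it as the reference database
$ aide --check                # compare the file system against the reference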

My use case

I have different kinds of data. On one side, I have static data like pictures, clips or music, things that won’t change over time; on the other side I have my mails, documents and folders where the content changes regularly (creation, deletion, modification). I am able to afford a backup of 100% of my data with a few days of backup history, so I’m not interested in file repairing.

I want to be warned quickly if a file gets corrupted, so I can still get it from my backup history, as I don’t keep every version of my files for too long. I chose to go with the python tool bitrot: it’s very easy to use and it doesn’t become a mess with my folders getting updated often.

I would go with par2cmdline if I were not able to back up all my data. Having 5% or 10% of redundancy for my files should be enough to restore them in case of corruption without taking too much space.

Port of the week: rss2email

Written by Solène, on 24 January 2017.
Tags: #portoftheweek #unix #email

Comments on Mastodon

This is the kind of Port of the week I like: a software I just discovered and fell in love with. The tool r2e, which is the port mail/rss2email on OpenBSD, is a small python utility that solves a problem: how to deal with RSS feeds?

Until last week, I was using a “web app” named selfoss which aggregated my RSS feeds and displayed them on a web page; I was able to filter by read/unread/marked and also by source. It is a good tool that does the job well, but I wanted something that doesn’t rely on a web browser. Here comes r2e !

This simple software will send you a mail for each new entry in your RSS feeds. It’s really easy to configure and set up. Just look at how I configured mine:

$ r2e new my-address+rss@my-domain.com
$ r2e add "http://undeadly.org/cgi?action=rss"
$ r2e add "https://dataswamp.org/~solene/rss.xml"
$ r2e add "https://www.dragonflydigest.com/feed"
$ r2e add "http://phoronix.com/rss.php"

Add this in your crontab to check new RSS items every 10 minutes:

*/10 * * * * /usr/local/bin/r2e run

Add a rule for my-address+rss to store mails in a separate folder, and you’re done !

NOTE: you can use r2e run --no-send for the first time; it will create the database and won’t send you mails for the current items in the feeds.

Dovecot: folder appears empty

Written by Solène, on 23 January 2017.
Tags: #email

Comments on Mastodon

Today I encountered an issue unknown to me with my imap server dovecot. In the roundcube mail web client, my Inbox folder appeared empty after reading a mail. My Android mail client K9-Mail was displaying “IOException:readStringUnti….” when trying to synchronize this folder.

I solved it easily by connecting to my server with SSH, cd-ing into the maildir directory and, in the Inbox folder, renaming dovecot.index.log to dovecot.index.log.bak (you can remove it if that fixes the problem).

And now, mails are back. This is the very first time I have had a problem of this kind with dovecot…

New cl-yag version

Written by Solène, on 21 January 2017.
Tags: #lisp #cl-yag

Comments on Mastodon

Today I updated my tool cl-yag, which implies a slight change on my website. Now, at the top of this blog, you can see a link “Index of articles”. This page only displays article titles, without any text from the articles.

Cl-yag is a tool to generate a static website like this one. It’s written in Common LISP. As a reminder, it’s also capable of producing both html and gopher output now.

If you don’t know what Gopher is, you will learn a lot reading the following links: Wikipedia : Gopher (Protocol) and Why is gopher still relevant

Let's encrypt on OpenBSD in 5 minutes

Written by Solène, on 20 January 2017.
Tags: #security #openbsd66 #openbsd

Comments on Mastodon

Let’s encrypt is a free service which provides free SSL certificates. It is fully automated and there are a few tools to generate your certificates with it. In the following lines, I will just explain how to get a certificate in a few minutes. You can find more information on the Let’s Encrypt website.

To make it simple, the tool we will use will generate some keys on the computer and send a request to the Let’s Encrypt service, which uses an http challenge (there are also dns and another kind of challenge) to see if you really own the domain for which you want the certificate. If the challenge process is ok, you get the certificate.

Please, if you don’t understand the following commands, don’t type it.

While the following is right for OpenBSD, it may change slightly for others systems. Acme-client is part of the base system, you can read the man page acme-client(1).

Prepare your http server

For each domain of the certificate you ask, you will be challenged on port 80. A file must be available at a path under “/.well-known/acme-challenge/”.

You must have this in your httpd config file. If you use another web server, you need to adapt.

server "mydomain.com" {
    root "/empty"
    listen on * port 80
    location "/.well-known/acme-challenge/*" {
        root { "/acme/" , request strip 2 }
    }
}

The request strip 2 part is IMPORTANT. (I’ve lost 45 minutes figuring out why root “/acme/” wasn’t working.)

Prepare the folders

As stated in the acme-client man page, and if you don’t need to change the paths, you can run the following commands with root privileges:

# mkdir /var/www/acme
# mkdir -p /etc/ssl/acme/private /etc/acme
# chmod 0700 /etc/ssl/acme/private /etc/acme

Request the certificates

As root, type the following to generate the certificates. The verbose flag is interesting: you will see if the challenge step works. If it doesn’t, you should manually try to fetch a file at the same path Let’s Encrypt tried, and run the command again once you succeed.

$ acme-client -vNn mydomain.com www.mydomain.com mail.mydomain.com

Use the certificates

Now, you can use your SSL certificates for your mail server, imap server, ftp server, http server…. There is a little drawback: if you generate one certificate for a lot of domains, they are all written in the certificate. This implies that if someone visits one page and looks at the certificate, this person will know every domain you have under SSL. I think it’s possible to ask for every certificate independently, but you will have to play with acme-client flags and write some kind of script to automate this.

The certificate file is located at /etc/ssl/acme/fullchain.pem and contains the full certification chain (as its name says). The private key is located at /etc/ssl/acme/private/privkey.pem.

Restart the service with the certificate.

Renew certificates

Certificates are valid for 3 months. To renew, just type:

acme-client mydomain.com www.mydomain.com mail.mydomain.com

Restart your ssl services

EASY !

How to use ssh tramp on Emacs Windows?

Written by Solène, on 18 January 2017.
Tags: #emacs #windows

Comments on Mastodon

If you are using emacs under Microsoft Windows and you want to edit remote files through SSH, it’s possible to do it without using Cygwin. Tramp can use the tool “plink” from putty tools to do ssh.

What you need is to get “plink.exe” from the following page and put it into your $PATH, or choose the installer which will install all the putty tools.

Putty official website

Then, edit your emacs file to add the following lines, telling it that you want to use plink when using tramp:

(require 'tramp)
(set-default 'tramp-default-method "plink")

Now, you can edit your remote files, but you will need to type your password. I think that in order to get password-less access with ssh keys, you would need to use the putty key agent.

Convert mailbox to maildir with dovecot

Written by Solène, on 17 January 2017.
Tags: #unix #email

Comments on Mastodon

I have been using the mbox format for a few years on my personal mail server. For those who don’t know what mbox is, it consists of only one file per folder you have in your mail client, each file containing all the mails of the corresponding folder. It’s extremely inefficient when you back up the mail directory because it must copy everything each time. Also, it reduces the system cache possibilities of the server: if you have folders with lots of mails with attachments, they may not be cached.

Instead, I switched to maildir, a format where every mail is a regular file on the file system. This takes a lot of inodes but at least it’s easier to back up or to deal with for analysis.

Here is how to switch from mbox to maildir with a dovecot tool.

# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox

That’s all ! In this case, my mbox folder was ~/mail/ and my INBOX file was ~/mail/inbox. It took me some time to find where my INBOX really was; at first I tried a few things that didn’t work, and tried a perl conversion tool named mb2md.pl which managed to extract some stuff, but a lot of mails were broken. So I went back to getting dsync working.

If you want to migrate, the whole process looks like:

# service smtpd stop

modify dovecot/conf.d/10-mail.conf, replace the first line
mail_location = mbox:~/mail:INBOX=/var/mail/%u   # BEFORE
mail_location = maildir:~/maildir                # AFTER

# service dovecot restart
# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox
# service smtpd start

Port of the week: entr

Written by Solène, on 07 January 2017.
Tags: #unix

Comments on Mastodon

entr is a command line tool that lets you run an arbitrary command on file change. This is useful when you are working on something that requires some processing each time you modify it.

Recently, I have used it to edit a man page. At first, I had to run mandoc each time I modified the file to check the rendering. This was the first time I edited a man page, so I had to modify it a lot to get what I wanted. I remembered about entr and this is how you use it:

$ ls stagit.1 | entr mandoc /_

This simple command will run “mandoc stagit.1” each time stagit.1 is modified. The file names must be given to entr on stdin, and the character sequence /_ is then replaced by the file name (like {} in find).
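
It also works with many files at once. For example, to rebuild a project whenever a C file changes (a classic pattern shown in the entr documentation, assuming a Makefile project):

$ find . -name "*.c" | entr make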

The man page of entr is very well documented if you need more examples.

Emacs 25: save cursor position

Written by Solène, on 08 December 2016.
Tags: #emacs

Comments on Mastodon

Since I upgraded to Emacs 25, it no longer saved my last cursor position in edited files. This is a feature I really like because I often fire and close emacs rather than keeping it open.

Before (< emacs 25)

(setq save-place-file "~/.emacs.d/saveplace") 
(setq-default save-place t) 
(require 'saveplace)

Emacs 25

(save-place-mode t)
(setq save-place-file "~/.emacs.d/saveplace") 
(setq-default save-place t)

That’s all :)

Port of the week: dnscrypt-proxy

Written by Solène, on 19 October 2016.
Tags: #unix #security #portoftheweek

Comments on Mastodon

2020 Update

Now, unwind on OpenBSD and unbound support DNS over TLS or DNS over HTTPS; dnscrypt lost a bit of relevance, but it’s still usable and a good alternative.

Dnscrypt

Today I will talk about net/dnscrypt-proxy. It lets you encrypt your DNS traffic between your resolver and the remote DNS recursive server. More and more countries and internet providers use DNS to block some websites, and now they tend to do “man in the middle” on DNS answers, so you can’t just use any remote DNS server you find on the internet. While a remote dnscrypt DNS server can still be affected by such a “man in the middle” hijack, there is very little chance DNS traffic is altered in datacenters / dedicated server hosting.

The article also deals with unbound as a dns cache, because dnscrypt is a bit slow and asking for the same domain multiple times in a few minutes is a waste of cpu/network/time for everyone. So I recommend setting up a DNS cache on your side (which also permits using it on a LAN).

At the time I write this article, there is a very good explanation about how to install it in the file named dnscrypt-proxy-1.9.5p3 in the folder /usr/local/share/doc/pkg-readmes/. The following article is made from this file. (Article updated at the time of OpenBSD 6.3)

While I write for OpenBSD, this can be easily adapted to anything Unix-like.

Install dnscrypt

# pkg_add dnscrypt-proxy

Resolv.conf

Modify your resolv.conf file to this

/etc/resolv.conf :

nameserver 127.0.0.1
lookup file bind
options edns0

When using dhcp client

If you use dhcp to get an address, you can use the following line to force having 127.0.0.1 as nameserver by modifying the dhclient config file. Beware: if you use it, when upgrading the system from bsd.rd, you will get 127.0.0.1 as your DNS server but no service running.

/etc/dhclient.conf :

supersede domain-name-servers 127.0.0.1;

Unbound

Now, we need to modify the unbound config to tell it to ask DNS at 127.0.0.1 port 40. Please adapt your config, I will just add what is mandatory. The unbound configuration file isn’t in /etc because unbound is chrooted:

/var/unbound/etc/unbound.conf:

server:
    # this line is MANDATORY
    do-not-query-localhost: no

forward-zone:
    name: "."
    forward-addr: 127.0.0.1@40
    # address dnscrypt listen on

If you want to allow others to resolve through your unbound daemon, please see the parameters interface and access-control. You will need to tell unbound to bind on external interfaces and allow requests on them.

Dnscrypt-proxy

Now we need to configure dnscrypt. Pick a server from the list in /usr/local/share/dnscrypt-proxy/dnscrypt-resolvers.csv; the name is the first column.

As root, type the following (or use doas/sudo); in the example we choose dnscrypt.eu-nl as DNS provider:

# rcctl enable dnscrypt_proxy
# rcctl set dnscrypt_proxy flags -E -m1 -R dnscrypt.eu-nl -a 127.0.0.1:40
# rcctl start dnscrypt_proxy

Conclusion

You should be able to resolve addresses through dnscrypt now. You can use tcpdump on your external interface to check udp port 53: you should not see traffic there anymore.

If you want to use dig hostname -p 40 @127.0.0.1 to make DNS requests to dnscrypt without unbound, you will need net/isc-bind which provides /usr/local/bin/dig. OpenBSD’s base dig can’t use a port different from 53.

How to publish a git repository on http

Written by Solène, on 07 October 2016.
Tags: #unix #git

Comments on Mastodon

Here is a how-to for making a git repository available for cloning through a simple http server. This method only allows people to fetch the repository, not to push. I wanted to set this up to publish my code; I don’t plan to have any commits on it from other people at this time, so it’s enough.

In a folder publicly available from your http server, clone your repository in bare mode, as explained in the git book (https://git-scm.com/book/tr/v2/Git-on-the-Server-The-Protocols):

$ cd /var/www/htdocs/some-path/
$ git clone --bare /path/to/git_project gitproject.git
$ cd gitproject.git
$ git update-server-info
$ mv hooks/post-update.sample hooks/post-update
$ chmod o+x hooks/post-update

Then you will be able to clone the repository with

$ git clone https://your-hostname/some-path/gitproject.git

I’ve lost time because I did not execute git update-server-info, so the clone wasn’t possible.

Port of the week: rlwrap

Written by Solène, on 04 October 2016.
Tags: #unix #shell #portoftheweek

Comments on Mastodon

Today I will present misc/rlwrap, a utility for command-line software which doesn’t provide a nice readline input. By using rlwrap, you will be able to use telnet, a language REPL or any command-line tool where you input text, with a history of what you type and the ability to use emacs bindings like C-a C-e M-Ret etc… I use it often with telnet or sbcl.

Usage :

$ rlwrap telnet host port

Common LISP: How to open an SSL / TLS stream

Written by Solène, on 26 September 2016.
Tags: #lisp #network

Comments on Mastodon

Here is a tiny piece of code to get a connection to an SSL/TLS server. I am writing an IRC client and an IRC bot, and it’s better to connect through a secure channel.

This requires usocket and cl+ssl:

(usocket:with-client-socket (socket stream *server* *port*)
  (let ((ssl-stream (cl+ssl:make-ssl-client-stream stream
                               :external-format '(:iso-8859-1 :eol-style :lf)
                               :unwrap-stream-p t
                               :hostname *server*)))
    (format ssl-stream "hello there !~%")
    (force-output ssl-stream)))

Android phone and Unix

Written by Solène, on 06 September 2016.
Tags: #android #emacs

Comments on Mastodon

If you have an android Phone, here are two things you may like:

Org-mode <=> Android

First is the MobileOrg app, to synchronize your calendar/tasks between your computer org-mode files and your phone. I have been using org-mode for a few months; I think I do pretty basic things with it, like having a todo list with a deadline for each item. Having it in my phone calendar is a good enhancement. I can also add todo items from my phone to show on my computer.

The phone and your computer get synced by publishing a special format of org files for the mobile on a remote server. MobileOrg supports ssh, webdav, dropbox or the sdcard. I’m using ssh because I own a server and I can reliably have my things connected together there on a dedicated account. Emacs will then use tramp to publish/retrieve the files.

Official MobileOrg website

MobileOrg on Google Play

Read/Write sms from a remote place

The second useful thing I like with my android phone is being able to write and send sms (+ some other things, but I was most interested by SMS) from my computer. A few services already exist, but they work with “cloud” logic and I don’t want my phone to be connected to one more service. The MAXS app provides what I need: the ability to read/write the sms of my phone from the computer, without a web browser and relying on my own services. MAXS connects the phone to an XMPP account and you set a whitelist of XMPP addresses able to send commands, that’s all. Here are a few examples of use:

To write a SMS I just need to speak to the jabber account of my phone and write

sms send firstname lastname  hello how are you ?

Be careful, there are 2 spaces after the lastname ! I think it’s like this so MAXS can easily make the difference between the name and the message.

I can also reply quickly to the last contacted person

reply to Yes I'm answering from my computer

To read the last n sms

sms read n

It’s still not perfect because sometimes it loses connectivity and you can’t speak with it anymore, but according to the project author it’s not a problem seen on every phone. I have not had the time yet to report the problem precisely (I need to play with the Android Debug Bridge for that). If you want to install MAXS, you will need a few apps from the store to get it working. First, you will need MAXS main and MAXS transport (a plugin to use XMPP) and then plugins for the different commands you want, so, maybe, smsread and smswrite. Check their website for more information.

As presented earlier on my website, I use profanity as my XMPP client. It’s a light and easy to configure/use console client.

Official MAXS Website

MAXS on Google Play

How to kill processes by their name

Written by Solène, on 25 August 2016.
Tags: #unix

Comments on Mastodon

If you want to kill a process by its name instead of its PID number, which is easier when you have to kill processes of the same binary, here are the commands depending on your operating system:

FreeBSD / Linux

$ killall pid_name

OpenBSD

$ pkill pid_name

Solaris

Be careful with Solaris killall. With no argument, the command will send a signal to every active process, which is not something you want.

$ killall pid_name

Automatically mute your Firefox tab

Written by Solène, on 17 August 2016.
Tags: #firefox

Comments on Mastodon

At work, the sound of my laptop is not muted because I need sound from time to time. But browsing the internet with Firefox can sometimes trigger undesired sound, which is very annoying in the office. The extension Mute Tab auto-mutes new tabs in Firefox so they won’t play sound. The auto-mute must be activated in the plugin options, it’s un-checked by default.

You can find it here, no restart required: Firefox Mute Tab addon

I also use FlashStopper, which blocks flash and HTML5 videos by default, so you can click on them to activate them: no autoplay.

Firefox FlashStopper addon

Port of the week: pwgen

Written by Solène, on 12 August 2016.
Tags: #security #portoftheweek

Comments on Mastodon

I will talk about security/pwgen for the current port of the week. It’s a very light executable to generate passwords. But it’s not just a dumb password generator: it has options to choose what kind of password you want.

Here is a list of options with their flags; you will find a lot more in the nice man page of pwgen:

  • -A : don’t use capital letters
  • -B : don’t use characters which could be misread (O/0, I/l/1 …)
  • -v : don’t use vowels
  • etc…

You can also use a seed to generate your “random” passwords (which aren’t very random in this case); you may need this to be able to reproduce passwords you lost for an ftp/http access, for example.

Example of pwgen output generating 5 passwords of 10 characters, using the -1 parameter so it only displays one password per line; otherwise it displays a grid of passwords (columns and multiple lines).

$ pwgen -1 10 5
fohchah9oP
haNgeik0ee
meiceeW8ae
OReejoi5oo
ohdae2Eisu

Website now compatible gopher !

Written by Solène, on 11 August 2016.
Tags: #gopher #network #lisp

Comments on Mastodon

My website is now available through the Gopher protocol ! I really like this protocol. If you don’t know it, I encourage you to read this page : Why is Gopher still relevant?.

This has been made possible by modifying the tool generating the website pages to make it generate gopher compatible pages. This was a bit of work but I am now proud to have it working.

I have also made a “big” change in the generator: it now relies on a “markdown-to-html” tool, which saddens me a bit. Before that, I was using ham-mode in emacs, which converted html to markdown on the fly so I could edit in markdown, and exported back to html on save. This had pros and cons: nothing more than a lisp interpreter was needed on the system generating the files, but I was sometimes struggling with ham-mode because the conversion was destructive. Several edits in a row of the same file were breaking code blocks, because they weren’t exported the same way each time, until the block wasn’t a code block anymore. There are some articles that I update sometimes to keep them up-to-date or to fix an error, and it was boring to fix the code every time. Having the original markdown text was mandatory for the gopher export, and it is now easier to edit with any tool.

There is a link to my gopher site on the right of this page. You will need a gopher client to connect to it. There is a working android client, and Firefox can use an extension to become compatible (gopher support was native before it was dropped). You can find a list of clients on Wikipedia.

Gopher is nice, don’t let it die.

Port of the week: feh

Written by Solène, on 08 August 2016.
Tags: #portoftheweek

Comments on Mastodon

Today I will talk about graphics/feh, a tool to view pictures which can also be used to set an image as background.

I use this command line, invoked by stumpwm when my session starts, so I get a nice background with cubes :)

$ feh --bg-scale /home/solene/Downloads/cubes.jpg

feh has a lot of options and is really easy to use; I still prefer sxiv for viewing but I use feh for my background.

Port of the week: Puddletag

Written by Solène, on 20 July 2016.
Tags: #portoftheweek

Comments on Mastodon

If you ever need to modify the tags of your music library (made of MP3s), I would recommend audio/puddletag. This tool lets you see all your music metadata like a spreadsheet, and you just modify the cells to change the artist name, title etc… You can also select multiple cells and type a value once to apply it to all of them. There is also a tool to extract data from the filename with a regex. This tool is very easy and pleasant to use.

There is an option in the configuration panel that is good to be aware of: by default, when you change the tag of a file, its modification time isn’t changed, so if you use some kind of backup relying on the modification time, the file won’t be synchronized. In the configuration panel, you will find a checkbox to make puddletag bump the modification timestamp when you change a tag on a song.

Port of the week: Profanity

Written by Solène, on 12 July 2016.
Tags: #portoftheweek #network

Comments on Mastodon

Profanity is a command-line ncurses-based XMPP (Jabber) client. It's easy to use and its interface seems inspired by irssi. It's available on OpenBSD as a package named "profanity".

The documentation on its website is really clear. It supports all main XMPP features, including OMEMO / OTR / GPG for end-to-end encryption.

To log in, just type /connect myusername@mydomain and after the password prompt, you will be connected. Easy.
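
A minimal session could look like this (the addresses are placeholders; /connect, /msg and /quit are standard Profanity commands):

/connect myusername@mydomain
/msg friend@example.org hello!
/quit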

Profanity official website

Stop being tracked by Google search with Firefox

Written by Solène, on 04 July 2016.
Tags: #security #web

Comments on Mastodon

When you use Google search and you click on a link, you are redirected to a Google server that saves your navigation choice from their search engine into their database.

  1. This is bad for your privacy
  2. This slows down using the search engine, because there is a redirection (that you don’t see) when you want to visit a link

There is a Firefox extension that fixes the links in the search engine results, so when you click, you just go to the website without saying “hello Google, I clicked there”: Google Search Link Fix

You can also use another search engine if you don’t like Google. I keep it because I get the best results when searching for technical topics. I tried Yahoo, Bing, Exalead, Qwant and DuckDuckGo, each one for a few days, and Google has the best results so far.

Port of the week: OpenSCAD

Written by Solène, on 04 July 2016.
Tags: #portoftheweek

Comments on Mastodon

OpenSCAD is software for creating 3D objects with a programming language, with the possibility to preview your creation.

I am personally interested in 3D; I have been playing with 3ds Max and Blender to create 3D objects, but I never felt really comfortable with them. I discovered pov-ray a few years ago, which is used to create rendered pictures instead of objects. Pov-ray uses its own “programming language” to describe the scene and make the render. Now I have a 3D printer and I would like to create things to print, but I don’t like the GUI approach of Blender, and pov-ray doesn’t create objects, so… OpenSCAD! This is the pov-ray of objects!

Here is a simple example that creates an empty box (the difference of 2 cubes) and a screw propeller:

width = 3;
height = 3;
depth = 6;
thickness = 0.2;

// hollow box: subtract a slightly smaller cube, shifted upwards
// so it pokes through the top, leaving the box open
difference() {
    cube( [width, depth, height], true);

    translate( [0, 0, thickness] )
        cube( [width-thickness, depth-thickness, height], true);
}

// screw propeller: extrude a square with a 400 degree twist
translate( [width, 0, 0] )
    linear_extrude(twist = 400, height = height*2)
        square(2, true);

The following picture is made from the code above:

OpenSCAD render of the code above

There are scad-mode and scad-preview for Emacs to edit OpenSCAD files: scad-mode provides syntax highlighting and scad-preview renders the OpenSCAD scene inside an Emacs pane. Personally, I keep OpenSCAD open in a corner of the screen with the option set to render on file change, and I edit with Emacs. Of course you can use any editor, or the embedded editor (based on Scintilla), which is pretty usable.
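
If you want to produce a printable file without opening the GUI, OpenSCAD can also render from the command line with its -o option; the file names here are just placeholders:

$ openscad -o box.stl box.scad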

OpenSCAD website

OpenSCAD gallery

Port of the week: arandr

Written by Solène, on 27 June 2016.
Tags: #portoftheweek

Comments on Mastodon

Today the Port of the week is x11/arandr, a very simple tool to set up your screen display when using multiple monitors. It’s very handy when you want to make something complicated or don’t want to use xrandr on the command line. There is not much to say because it’s very easy to use!

It can generate a script reproducing your current configuration, saved under the ~/.screenlayout/ directory. This is quite useful to configure your screens from your ~/.xsession file in case a monitor is connected.

xrandr | grep "HDMI-2 connected" && .screenlayout/dual-monitor.sh

If HDMI-2 has a screen connected when I log in to my session, I will have my dual-monitor setup!
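
For reference, such a generated script is only a wrapper around xrandr; a typical one looks roughly like this (output names and modes depend on your hardware):

#!/bin/sh
xrandr --output eDP-1 --mode 1920x1080 --pos 0x0 \
       --output HDMI-2 --mode 1920x1080 --right-of eDP-1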

Port of the week: x2x

Written by Solène, on 23 June 2016.
Tags: #portoftheweek

Comments on Mastodon

Port of the week is now presenting x2x, which stands for X-to-X connection. This is a really tiny tool, a single executable, that lets you move your mouse and use your keyboard on another X server than yours. It’s like the tool synergy, but easier to use and open source (I think synergy isn’t open source anymore).

If you want to use the computer on your left, just use the following command (x2x must be installed on it and ssh must be available):

$ ssh -CX the_host_address "x2x -west -to :0.0"

Then move your cursor to the left edge of your screen, and you will see that you can move the cursor and type with the keyboard on your other computer! I am using it to manage a wall of screens made of first-generation Raspberry Pi. I used to connect to them with VNC but it was very, very slow.
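
The direction can be adapted: for a machine on your right, use -east instead (the host name below is a placeholder):

$ ssh -CX other_host "x2x -east -to :0.0"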