Introduction
Today I will share with you a simple way I found to transmit text from my computer to my phone. I often need to do this: to type a password, enter an URL, copy/paste a message, and so on.
Using QR codes
The best way I know to get text from a computer to a smartphone is scanning a QR code using the camera. By using the command qrencode (I already wrote about this one), xclip and feh (a picture viewer), it is possible to generate a QR code on the fly on the screen.
It is as simple as running the following command, from a menu or a key binding:
xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z -
With this command, xclip gives the clipboard content to qrencode, which writes a PNG file to stdout, and feh displays it in a 600 by 600 window; no temporary file is involved here.
Once the picture is displayed on the screen, you can use a scanner program on your phone to gather the content. I found "QR & Barcode Scanner", available on F-Droid, to be really light, fast and usable, with its history.
QR & Barcode Scanner on F-Droid
Composing a quite long text on your computer and sharing it to the phone can be done by sending the text to xclip and then generating the QR code.
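For example, reusing the command above (the text here is an arbitrary example):
$ echo "a quite long text to type on the phone" | xclip
$ xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z -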
Going further
When it comes to sharing data between my phone and my computer, I love "primitive ftpd", a SFTP/FTP server for Android; it works out of the box and allows secure transfers over Wi-Fi (use SFTP please!).
primitive ftpd on F-Droid
For simple transfers, I use "Share to Computer", which shares a file or a group of files as a zip on a temporary HTTP server; it is then easy to connect to it to save the files.
Share to Computer on F-Droid
For sending SMS from my computer through my phone, I use the program KDE Connect (it has to be installed on both the phone and the computer). I have wanted to write about it for a long time, but it's not easy to explain how to get it working nor how to use it. It allows me to receive phone notifications on my computer and also to send SMS. I have simple aliases in my shell like "mom-sms hello are you ?" to ease my use of SMS. When possible, don't use SMS, it's not secure. The program does a lot more than sending SMS, like using the smartphone as a remote touchpad, as one example.
KDE Connect on F-Droid
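To give an idea, my aliases look something like the following sketch; the device name and the phone number are placeholders, and you should check kdeconnect-cli --help because flags may vary between versions:
mom-sms() {
    kdeconnect-cli -n "my-phone" --destination "0612345678" --send-sms "$*"
}
With this shell function, "mom-sms hello are you ?" sends the message through the phone.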
Hi, today's article will be a bit different from what you are used to. I am writing about my experience as an open source author and "project manager". I recently created a project that, while being extremely small, has seen some people getting involved at various levels. I didn't know what it was like to be in this position.
Having to deal with multiple people contributing to a project I started for myself, on one architecture, with a limited set of features, is surprisingly hard. I am not saying it's a burden and that no one should ever do it, but I think I wasn't really prepared to handle this.
I did my best to integrate people's wishes while keeping the helm of the project in the right direction, but I had to ask myself many questions.
Many questions
Should I care about what other people need? I could say no to everything proposed if I see no benefit for my use case. I chose to accept some changes that I wouldn't use because they made sense in some contexts. But I have to be really careful not to accept everything if I want to keep the program sane.
Should I care about other platforms I don't use? Someone proposed adding some code to support Linux targets, which I don't use, meaning more code I can't test. For the sake of compatibility, and to avoid extra work for packagers, I made a very simple solution to handle that, but if someone wanted to port my program to Windows, or to a platform that would require many, many changes, I don't know how I would react.
Then there is the situation of code changing too much. My program changed A LOT since my initial commits, and now a git blame mostly shows no lines from me. This doesn't mean I didn't review the changes made by contributors, but I am not as comfortable now as I was initially with my own code. That doesn't mean the new code is wrong, but it doesn't hold my logic in it. I think it's the biggest deal in this situation: I, as the project manager, must say what can go in, what can't, and when. It's fine to receive contributions, but they shouldn't add complexity or weird algorithms.
Accepting changes
I am not an expert programmer, I don't often write code, and when I do, it's for my own benefit. Opening our work to others implies making it accessible to outsiders, accepting changes and explaining choices.
Many times I reviewed submitted code and replied that it wasn't fine: while it compiled and applied correctly, it wasn't the right way to do it, so please rework it to make it better or discard it, but it won't get into the repository as is. It's not always easy; people sometimes submit code I don't understand, and I still have to review it thoroughly because I can't accept everything sent.
In some way, once people get involved in my projects, the projects get denatured because they receive thoughts from others: their ideas, their logic, their needs. It's wonderful and scary at the same time. When I publish code, I never expect it to be useful to someone, and even less that I could receive new features by email from strangers.
Being prepared for this is important when you start a project and make it open source. I could refuse everything, but then I would cut myself off from a potential community around my own code, which would be a shame.
Responsibility
This part is not related to my projects (or at least not in this situation), but it is a debate I often think about when reading dramas in open source: is an open source author responsible toward the users?
One way to reply is that if you publish your content online and accept contributions, this means you care about users (who then contribute back), but where do you draw the limit of what is acceptable? If someone writes an awesome program for themselves and gathers a community around it, and then chooses to make breaking changes or remove important features, what then? The users are free to fork, the author is free to do whatever they want.
There is no clear responsibility binding contributors and end users. I hope that most of the time contributors think about the end users, but with different philosophies in play, we can sometimes end up in a dilemma between the two groups.
Epilogue
I am very happy to publish open source code and to have contributors; coordinating people, goals and features is not something I expected :)
Please be cautious with this writing, I have only faced this situation with a couple of contributors; I can't imagine how complicated it can become at a bigger scale!
Introduction
I will present the program ssss (for Shamir's Secret Sharing Scheme), a cryptography program to split a secret into n parts, requiring at least t parts to recover it (with t <= n).
Shamir Secret Sharing (method is mathematically proven to be secure)
Use case
The project website lists a few use cases for real life and I like them, but I will share another one.
ssss project website
I used to run a community, but there was no one in charge apart from me, which made me a single point of failure. I decided to make an encrypted backup available to a few somewhat-trusted community members, and I gave each of them a secret. There were four members, and the backup password could only be recovered if all four agreed to share their secrets. For privacy reasons, I didn't want any of these people to be able to lurk into the backup alone; at least, if something had happened to me, they could recover the database, but only if the four of them agreed on it.
How to use
ssss-split is easy to use, but you can only share text with it. So you can use a very long passphrase to encrypt files and split this passphrase into many secrets that you distribute.
You can install it on OpenBSD using pkg_add ssss.
In the following examples, I will create a simple passphrase and then use the generated secrets to get the original passphrase back.
$ ssss-split -t 3 -n 3
Generating shares using a (3,3) scheme with dynamic security level.
Enter the secret, at most 128 ASCII characters: [hidden input where I typed "this is a very very long password"]
Using a 264 bit security level.
1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353
When you want to recover the secret, run ssss-combine and tell it how many secrets you have; they can be provided in any order.
$ ssss-combine -t 3
Enter 3 shares separated by newlines:
Share [1/3]: 2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
Share [2/3]: 3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353
Share [3/3]: 1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
Resulting secret: this is a very very long password
Tips
If you want to easily store a secret or share it with a non-IT person (or put it in a vault), you can create a QR code and then print the picture. QR codes have redundancy, so if the paper is damaged you can still recover it; it's quite big on paper, so if it fades you may not lose data, and it also checks integrity.
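For example, using qrencode (presented in a previous article); share2.png is an arbitrary output file name:
$ echo "2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8" | qrencode -o share2.png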
Conclusion
ssss is a wonderful program to share a secret among a few people or to put a few secrets here and there for a recovery situation. The program can receive the passphrase on its standard input, allowing it to be scripted.
Interesting fact: if you run ssss-split multiple times on the same text, you always get different secrets, so given a secret, no brute force can be used to find which input produced it.
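As a sketch, the first example of this article could be scripted like this; if I read the manual correctly, the -q flag suppresses the prompts so only the secrets are printed:
$ echo "this is a very very long password" | ssss-split -t 3 -n 3 -q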
Introduction
Today I will present the userland program "split" that is used to split a single file into smaller files.
OpenBSD split(1) manual page
Use case
Split will create new, smaller files from a single file. The original file can be recovered by running cat on all the small files (in the correct order) to recreate it.
There are several use cases for this:
- store a single file (like a backup) on multiple media (floppies, 700MB CDs, DVDs, etc.)
- parallelize a file process, for example: split a huge log file into small parts to run analysis on each part
- distribute a file across a few people (I have no idea about the use case but I like the idea)
Usage
Its usage is very simple: run split on a file or feed its standard input, and it will create files of 1000 lines each by default. -b can be used to give a size in kB or MB for the new files, or use -l to change the default of 1000 lines. Split can also create a new file each time a line matches a regex given with -p (see the sketch after the example below).
Here is a simple example splitting a file into 1300kB parts and then reassembling the file from the parts, using sha256 to compare the checksums of the original and reconstructed files.
solene@kongroo ~/V/pmenu> split -b 1300k pmenu.mp4
solene@kongroo ~/V/pmenu> ls
pmenu.mp4 xab xad xaf xah xaj xal xan
xaa xac xae xag xai xak xam
solene@kongroo ~/V/pmenu> cat x* > concat.mp4
solene@kongroo ~/V/pmenu> sha256 pmenu.mp4 concat.mp4
SHA256 (pmenu.mp4) = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
SHA256 (concat.mp4) = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
solene@kongroo ~/V/pmenu> ls -l x*
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xaa
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xab
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xac
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xad
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xae
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xaf
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xag
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xah
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xai
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xaj
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xak
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xal
-rw-r--r-- 1 solene wheel 1331200 Mar 21 16:50 xam
-rw-r--r-- 1 solene wheel 810887 Mar 21 16:50 xan
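And here is a sketch of the -p flag mentioned earlier; the file name and pattern are made up, and every line matching the regex starts a new part:
$ split -p '^=== ' notes.txt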
Conclusion
If you ever need to split files into small parts, think about the command split.
For more advanced splitting requirements, the program csplit can be used; I won't cover it here, but I recommend reading its manual page.
csplit manual page
Introduction
Today I will introduce you to Diffoscope, a command line tool to compare two directories. I find it very useful when looking for changes between two extracted tarballs; I use it to compare two versions of a program to see what changed.
Diffoscope project website
How to install
On OpenBSD you can use "pkg_add diffoscope"; on other systems you may have a package for it, and it can also be installed via pip.
Usage
It is really easy to use: give the two directories you want to compare as parameters, and diffoscope will then show the uid, gid, permissions and modification/creation/access time changes between the two directories.
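Assuming two directories t/ and a/, each containing a file foo, the invocation is simply:
$ diffoscope t/ a/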
The output on a simple example looks like the following:
--- t/
+++ a/
│ --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello
│ ├── stat {}
│ │ @@ -1 +1 @@
│ │ -1043 492483 -rw-r--r-- 1 solene wheel 1973218 6 "Mar 20 18:31:08 2021" "Mar 20 18:31:14 2021" "Mar 20 18:31:14 2021" 16384 4 0 t/foo
│ │ +1043 77762 -rw-r--r-- 1 solene wheel 314338 10 "Mar 20 18:31:08 2021" "Mar 20 18:31:18 2021" "Mar 20 18:31:18 2021" 16384 4 0 a/foo
Diffoscope has many flags; if you want to only compare the directories' content, you have to use "--exclude-directory-metadata yes".
Using the same example as previously with --exclude-directory-metadata yes, it looks like:
--- t/
+++ a/
│ --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello
Introduction
This Port of the week will introduce you to a pie menu for X11, available on OpenBSD since 6.9 (not released yet). A pie menu is a circle with items spread inside it, where an item can open another circle with other items in it. I find it very effective because I am more comfortable with information organized spatially (my memory is based on spatialization). I think pmenu was designed for a tablet input device using a pen to trigger it.
Pmenu github page
Installation
On OpenBSD, a pkg_add pmenu is enough; on other systems you should be able to compile it out of the box with a C compiler and the X headers.
Configuration
This part is a bit tricky because the configuration is not obvious. Pmenu reads its configuration from standard input, and its output (the selected entry) must then be piped to a shell.
My configuration file looks like this:
#!/bin/sh
cat <<ENDOFFILE | pmenu | sh &
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/utilities-terminal.png sakura
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/applets-screenshooter.png screen_up.sh
Apps
	IMG:/usr/local/share/icons/hicolor/48x48/apps/gimp.png gimp
	IMG:/home/solene/dev/pmenu/claws-mail.png claws-mail
	IMG:/usr/local/share/pixmaps/firefox.png firefox
	IMG:/usr/local/share/icons/hicolor/256x256/apps/keepassxc.png keepassxc
	IMG:/usr/local/share/icons/hicolor/48x48/apps/chrome.png chrome
	IMG:/usr/local/share/icons/hicolor/128x128/apps/rclone-browser.png rclone-browser
Games
	IMG:/home/jeux/slay_the_spire/sts.png cd /home/jeux/slay_the_spire/ && libgdx-run
	IMG:/home/jeux/Delver/unjar/a/Delver-Logo.png cd /home/jeux/Delver/unjar/ && /usr/local/jdk-1.8.0/bin/java -Dsun.java2d.dpiaware=true com.interrupt.dungeoneer.DesktopStarter
	IMG:/home/jeux/Dead_Cells/deadcells.png cd /home/jeux/Dead_Cells/ && hl hlboot.dat
	IMG:/home/jeux/brutal_doom/Doom-The-Ultimate-1-icon.png cd /home/jeux/doom2/ && gzdoom /home/jeux/brutal_doom/bd21RC4.pk3
Volume
	0% sndioctl output.level=0
	10% sndioctl output.level=0.1
	20% sndioctl output.level=0.2
	30% sndioctl output.level=0.3
	40% sndioctl output.level=0.4
ENDOFFILE
The configuration supports levels, like "Apps" or "Games" in this example, which allow a second level of shortcuts. A text label can be used, as in Volume, but you can also use images, as in the other categories. Every blank appearing in the configuration is a tab, including the leading indentation that defines the levels.
The pmenu itself can be customized by using X attributes, you can learn more about this on the official project page.
Video
I made a short video to show how it looks with the configuration shown here.
Note that pmenu can be browsed entirely with the keyboard, using tab / enter / escape to switch to the next item / validate / exit.
Video demonstrating pmenu in action
Introduction
Today I will explain how to very easily set up the SpamAssassin anti-spam tool and make it work with the OpenSMTPD mail server (OpenBSD's default mail server). I assume you are already familiar with mail servers.
Installation
We will need to install two packages: opensmtpd-filter-spamassassin and p5-Mail-SpamAssassin. The first one is a "filter" for OpenSMTPD ("filter" has a special meaning in the smtpd context); it will run spamassassin on incoming emails. The latter is the spamassassin daemon itself.
Filter
As explained in the pkg-readme file from the filter package, /usr/local/share/doc/pkg-readmes/opensmtpd-filter-spamassassin, a few changes must be made to the smtpd.conf file: mostly a new line to define the filter, and adding "filter "spamassassin"" to lines starting with "listen".
Website of the filter author who made other filters
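As a sketch, the changes in /etc/mail/smtpd.conf look like the following; double check the pkg-readme for the exact name and path of the filter binary:
filter "spamassassin" proc-exec "filter-spamassassin"
listen on all filter "spamassassin"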
SpamAssassin
SpamAssassin works perfectly fine out of the box; "rcctl enable spamassassin" and "rcctl start spamassassin" are enough to make it work.
Official SpamAssassin project website
Usage
It should really work out of the box, but you can teach SpamAssassin which mails are good (called "ham") and which are spam by running the commands "sa-learn --ham" or "sa-learn --spam" on directories containing that kind of mail; this makes spamassassin more efficient at filtering by content. Be careful: these commands should be run as the same user as the SpamAssassin daemon.
In /var/log/maillog, spamassassin will log information about scoring: above a score of 5.0 (the default threshold), a mail is rejected. For legitimate mails, headers are added by spamassassin.
Learning
I use a crontab to run sa-learn once a day on my "Archives" directory holding all my good mails and on my "Junk" directory which holds spam.
0 2 * * * find /home/solene/maildir/.Junk/cur/ -mtime -1 -type f -exec sa-learn --spam {} +
5 2 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec sa-learn --ham {} +
Extra configuration
SpamAssassin is quite slow but can be sped up by using redis (an in-memory key/value database) for storing the tokens that help analyze the content of emails. With redis, you no longer have to care about which user runs sa-learn.
You can install and run redis using "pkg_add redis", "rcctl enable redis" and "rcctl start redis"; make sure the TCP port 6379 is blocked from outside. You can add authentication to your redis server if you feel it's necessary. I only have one user on my email server and it's me.
You then have to add some content to /etc/mail/spamassassin/local.cf; you may want to adapt it to your redis configuration if you changed something.
bayes_store_module Mail::SpamAssassin::BayesStore::Redis
bayes_sql_dsn server=127.0.0.1:6379;database=4
bayes_token_ttl 300d
bayes_seen_ttl 8d
bayes_auto_expire 1
Configure a Bayes backend (like redis or SQL)
Conclusion
Restart spamassassin after this change and enjoy. SpamAssassin has many options; I only shared the simplest way to set it up with opensmtpd.
Introduction
On many Linux systems, there is a special program, run by the shell (configured by default), that will tell you which package provides a command you tried to run that is not available in $PATH. Let's do the same for OpenBSD!
Prerequisites
We will need to install the package pkglocate to find binaries.
# pkg_add pkglocate
We will also need an executable file /usr/local/bin/command-not-found with this content:
#!/bin/sh

CMD="$1"

# list the packages providing the command in a bin/ or sbin/ directory,
# keep only the package names and remove duplicates
RESULT=$(pkglocate */bin/${CMD} */sbin/${CMD} | cut -d ':' -f 1 | sort -u)

if [ -n "$RESULT" ]
then
    echo "The following package(s) contain program ${CMD}"
    for result in $RESULT
    do
        echo " - $result"
    done
else
    echo "pkglocate didn't find a package providing program ${CMD}"
fi
Configuration
Now, we need to configure the shell to run this command when it detects an error corresponding to an unknown command. This is possible with bash, zsh or fish at least.
Bash configuration
Let's go with bash; add this to your bash configuration file:
command_not_found_handle()
{
    /usr/local/bin/command-not-found "$1"
}
Fish configuration
function fish_command_not_found
    /usr/local/bin/command-not-found $argv[1]
end
ZSH configuration
function command_not_found_handler()
{
    /usr/local/bin/command-not-found "$1"
}
Trying it
Now that your shell is configured, if you run a command that isn't available in your PATH, you will either get a list of packages providing the command, or a message saying the command can't be found in any package (unlucky).
This is a successful output that found the program we were trying to run.
$ pup
The following package(s) contain program pup
- pup-0.4.0p0
This is a result showing that no package was found providing a program named "steam".
$ steam
pkglocate didn't find a package providing program steam
Introduction
This article features the 12 best games (in my opinion), in terms of quality and fun, available in OpenBSD packages. The list only contains open source games that you can install out of the box. This means that game engines requiring proprietary (or paid) game assets are not part of this list.
Tales of Maj'Eyal
Tome4 is a rogue-like game with many classes, many races and lots of areas to explore. There are fun pieces of lore to find and read if it's your thing, and you have to play it many times to unlock everything. Note that while the game is open source, there are paid extensions requiring an online account on the official website; this is not mandatory to play or finish the game.
# pkg_add tome4
$ tome4
Tales of Maj'Eyal official website

OpenTTD
This famous game is a free reimplementation of the Transport Tycoon game. Build roads and rails, make huge train networks with signals, transport materials from extraction sites to industries and then deliver goods to cities to make them grow. There is a huge community and many mods, and the game can be played in multiplayer. Also available on Android.
# pkg_add openttd
$ openttd
OpenTTD official website
[Peertube video] OpenTTD

The Battle for Wesnoth
Wesnoth is a turn based strategy game based on hexagons. There are many races with their own units. The game features a full set of campaigns for playing solo, but also includes multiplayer. Also available on Android.
# pkg_add wesnoth
$ wesnoth
The Battle for Wesnoth official website

Endless Sky
This game is about space exploration: you are the captain of a ship and you can take missions, enhance your ship, trade goods over the galaxy or fight enemies. There is a learning curve to enjoy it, because it's quite hard to understand at first.
# pkg_add endless-sky
$ endless-sky
Endless Sky official website

OpenRA
Open Red Alert, the 100% free reimplementation of the engine AND assets of Red Alert, Command and Conquer and Dune. You can play all these games from OpenRA, including in multiplayer. Note that there are no campaigns: you can play skirmish alone with bots or in multiplayer. Campaigns (and cinematics) can be played using the original games' files (from the OpenRA launcher); as the games were published as freeware a few years ago, one can find them for free and legally.
# pkg_add openra
$ openra
wait for instructions to download the assets of the game you want to play
OpenRA official website
[Peertube video] Red Alert

Cataclysm: Dark Days Ahead
Cataclysm DDA is a game in which you awake in a zombie apocalypse and have to survive. The game is extremely complete and allows many actions/combinations, like driving vehicles or disassembling electronics to build your own devices, and many things I haven't tried yet. The game is turn based and 2D viewed from the top. I highly recommend reading the manual and how-to, because the game is hard. You can also create your character when you start a game, which will totally change the experience given your character's attributes and knowledge.
# pkg_add cataclysm-dda
$ cataclysm-dda
Cataclysm: Dark Days Ahead official website

Taisei
Taisei is a bullet hell game in the Touhou universe. Very well done, extremely fun, with multiple characters to play, each with an alternative mechanic.
# pkg_add taisei
$ taisei
Taisei official website
[Peertube video] Taisei

The Legend of Zelda: Return of the Hylian SE
There is a game engine named Solarus dedicated to writing Zelda-like games, and Zelda RotH is a game based on it. Nothing special to say: it's a 2D Zelda game, very well done, with a new adventure.
# pkg_add zelda_roth_se
$ zelda_roth_se
Zelda RotH official website

Shapez.io
This game is about building industries out of shapes and colors, in order to deliver what you are asked to produce in the most efficient manner. It is addictive, and easy to understand thanks to the tutorial when you start the game.
# pkg_add shapezio
$ /usr/local/bin/electron /usr/local/share/shapez.io/index.html
Shapez.io official website

OpenArena
OpenArena is a Quake 3 reimplementation, including assets. It's like Quake 3 but it's not Quake 3 :)
# pkg_add openarena
$ openarena
OpenArena official website

Xonotic
This is a fast paced arena FPS game with beautiful graphics, many weapons with two fire modes each, and many game modes. It reminds me a lot of Unreal Tournament 2003.
# pkg_add xonotic
$ xonotic
Xonotic official website

Hyperrogue
This game is a rogue-like (every run is different from the last) in which you move from hexagon to hexagon to get points; each biome has its own characteristics, like a sand biome in which you have to gather spice and escape sand worms :-) The game is easy to play, turn based, and has unusual graphics because of the non-euclidean nature of its world. I recommend reading the game manual, because the first time I played it I really disliked it, having missed most of the game mechanics... Also available on Android!
Hyperrogue official website

And many others
Here is a list of games I didn't include but that are also worth playing: 0ad, Xmoto, Freedoom, The Dark Mod, Freedink, crack-attack, witchblast, flare, vegastrike and many others.
List of games available on OpenBSD
Introduction
This article features the very useful OpenBSD-specific program "checkrestart". Its purpose is to display the programs, and their corresponding PIDs, whose binaries no longer exist on disk.
Why would a binary be absent? The obvious case is that the program was removed, but what checkrestart is really good at is package upgrades: when you upgrade a package with running binaries, the old binary is deleted and the new binary installed. In that case, you will have to stop all the running instances and restart them to use the new binary. Hence the name "checkrestart".
Installation
Installing it is as simple as running pkg_add checkrestart.
Usage
This is simple too: when you run checkrestart, you get a list of PID numbers with the binary names.
For example, on my system, checkrestart tells me which programs got updated and should be restarted to run the new binary.
69575 lagrange
16033 lagrange
9664 lagrange
77211 dhcpleased
6134 dhcpleased
21860 dhcpleased
Real world usage
If you run OpenBSD -stable, you will want to use checkrestart after running pkg_add -u. After a package update, most often for daemons, you will have to restart the related services.
On my server, in my daily script updating packages and running syspatch, I use it to automatically restart some services.
checkrestart | grep php && rcctl restart php-fpm
checkrestart | grep postgres && rcctl restart postgresql
checkrestart | grep nginx && rcctl restart nginx
Other Operating Systems
I've been told that checkrestart is also available on FreeBSD as a package! The output may differ but the use is the same.
On Linux, a similar tool exists under the name "needrestart", at least on Debian and Gentoo.
Introduction
I would like to introduce you to a very nice game I discovered a few months ago: Shapez.io, a "factory" game, a genre popularized by the famous Factorio. In this game you will have to extract shapes and colors, rework the shapes, mix colors and combine the whole thing to produce the requested pieces.
The game
The gameplay is very cool: the early game is an introduction to the game mechanics; you can extract shapes, cut them, rotate pieces, merge conveyor belts into one, paint shapes, etc... and build logic circuits!
In this kind of game, you have to learn how to make efficient factories, and mostly "tile-able" installations. A tile-able setup means that if you copy a setup and paste it next to itself, the result is bigger and still functional, meaning you can extend it to infinity (except that the input conveyors will starve at some point).
It can be quite addictive to improve your setups over and over. This game is non-violent and doesn't require any reflexes, but you need to think. You can't lose; it's somewhere between a puzzle and a management game.

Where to get it
On OpenBSD, since version 6.9 (not released yet when I publish this), you can install the package shapezio and find a launcher in your desktop environment's Game menu.
I also compiled a web version that you can play in your web browser (I discourage using Firefox due to performance...) without installing it; it's legal because the game is open source :)
Play shapez.io in the web browser
The game is also sold on Steam, pre-compiled and ready to run, if you prefer it, it's also a nice way to support the developer.
shapez.io on Steam
More content
Official website
Youtube video of "Real civil engineer" explaining the game
Introduction
In this tutorial I will explain how to use Nginx as a TCP or UDP relay, as an alternative to Haproxy or Relayd. This means nginx will be able to accept requests on a port (TCP/UDP) and relay them to another backend without knowing anything about the content. It also permits negotiating a TLS session with the client and relaying to a non-TLS backend. In this example I will explain how to configure Nginx to accept TLS requests and pass them to my Gemini server Vger; the Gemini protocol has TLS as a requirement.
I will explain how to install and configure Nginx and how to parse logs to obtain useful information. I will use an OpenBSD system for the examples.
It is important to understand that in this context Nginx is not doing anything related to HTTP.
Installation
On OpenBSD we need the nginx-stream package; if you are unsure about which package is required on your system, search for the one that provides the file ngx_stream_module.so. To enable Nginx at boot, you can use rcctl enable nginx.
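With the pkglocate package presented earlier, finding the package could look like this:
$ pkglocate ngx_stream_module.so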
Nginx stream module core documentation
Nginx stream module log documentation
Configuration
The default configuration file for nginx is /etc/nginx/nginx.conf; we want it to listen on port 1965 and relay to 127.0.0.1:11965.
worker_processes 1;
load_module modules/ngx_stream_module.so;

events {
    worker_connections 5;
}

stream {
    log_format basic '$remote_addr $upstream_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time';

    access_log logs/nginx-access.log basic;

    upstream backend {
        hash $remote_addr consistent;
        server 127.0.0.1:11965;
    }
    server {
        listen 1965 ssl;
        ssl_certificate /etc/ssl/perso.pw:1965.crt;
        ssl_certificate_key /etc/ssl/private/perso.pw:1965.key;
        proxy_pass backend;
    }
}
In the previous configuration file, the upstream block defines the backend destination; multiple servers could be defined there, with weights and timeouts, but there is only one in this example.
The server block tells on which port Nginx should listen and whether it has to handle TLS (which is named ssl for historical reasons); the usual TLS configuration can be used here. Then, for each request, we tell to which backend Nginx has to relay the connection.
The configuration file defines a custom log format that is useful for TLS connections: it includes the remote host, backend destination, connection status, bytes transferred and session duration.
Log parsing
Using awk to calculate time performance
I wrote a quite long shell command parsing the log defined earlier that displays the number of requests and the median/min/max session times.
$ awk '{ print $NF }' /var/www/logs/nginx-access.log | sort -n | awk '{ data[NR] = $1 } END { print "Total: "NR" Median:"data[int(NR/2)]" Min:"data[2]" Max:"data[NR] }'
Total: 566 Median:0.212 Min:0.000 Max:600.487
Find bad clients using awk
Sometimes the logs show clients that obtain a status 500, meaning the TLS connection wasn't established correctly. It may be a scanner that doesn't even try a TLS connection; if you want statistics about those, to see whether it would be worth blocking them when they make too many attempts, awk makes it easy to get the list.
awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log
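To count the attempts per client, the same output can be aggregated; this prints the most frequent offenders first:
awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log | sort | uniq -c | sort -rn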
Using goaccess for real time log visualization
It is also possible to use the program Goaccess to view the logs in real time with a lot of information; it is really an awesome program.
goaccess --date-format="%d/%b/%Y" \
--time-format="%H:%M:%S" \
--log-format="%h %r [%d:%t %^] TCP %s %^ %b %L" /var/www/logs/nginx-access.log
Goaccess official website
Conclusion
I was using relayd before trying Nginx with the stream module; while relayd worked fine, it doesn't provide any of the logs Nginx offers. I am really happy with this use of Nginx, because it is a very versatile program that has shown itself over time to be more than an HTTP server. For a minimal setup, though, I would still recommend a lighter daemon such as relayd.
Introduction
In this Port of the Week I will introduce you to the IRC client catgirl. While there are already many IRC clients available (and good ones), there was a niche that wasn't filled yet, between minimalism (ii, ircII) and full featured clients (irssi, weechat) in the terminal world. Here comes catgirl, a simple IRC client with enough features to be comfortable for heavy IRC users.
Catgirl has the following features: tab completion, split scrolling, URL detection, nick coloring and ignore filters. On the other hand, it doesn't support non-TLS networks, CTCP, multiple networks or dynamic configuration. If you want to use catgirl with multiple networks, you have to run one instance per network.
Catgirl will be available as a package in OpenBSD starting with version 6.9.
OpenBSD security bonus: catgirl makes very good use of unveil to reduce file system access to the minimum required (configuration+logs+certs), reducing the severity of an exploit. It also has a restricted mode, enabled with the -R parameter, that reduces features like notifications or URL handling and tightens the pledge list (the allowed system calls).
Catgirl official website

Configuration
A simple configuration file to connect to the irc.tilde.chat server would look like the following; it must be stored under ~/.config/catgirl/tilde
nick = solene_nickname
real = Solene
host = irc.tilde.chat
join = #foobar-channel
You can then run catgirl with this configuration by passing the config file name as a parameter.
$ catgirl tilde
Usage and tips
I recommend reading the catgirl man page; everything is well explained there. I will cover the most basic needs here.
Catgirl man page
Catgirl only displays one window at a time and it is not possible to split the display; however, if you scroll up, the upper part of the screen displays the history while the bottom keeps showing the live text stream. It is a neat way to browse the history without cutting yourself off from what's going on in the channel.
Channels can be browsed from the keyboard using Ctrl+N or Ctrl+P like in Irssi, or by typing /window NUMBER, with NUMBER being the buffer number. Alt+NUMBER can also be used to switch directly to buffer NUMBER.
You can search in a buffer by typing a word in your input and pressing Ctrl+R to search backward or Ctrl+S to search forward (given you are in the history, of course).
Finally, my favorite feature, which is missing in minimal clients, is Alt+A: it jumps to the next buffer that has something to read (and yes, catgirl keeps a line with information about how many unread messages there are in each channel). Even better, when you press Alt+A while there is nothing left to read, you jump back to the channel you manually selected last; this allows you to quickly read what you missed and return to the channel you spend all your time on.
Conclusion
I really love this IRC client; it easily replaced Irssi, which I had used for years, because most of the key bindings are the same, and I am also very happy to use a client that is a lot safer (on OpenBSD). It can be used with tmux for persistence, and also to connect to multiple servers while keeping them manageable.
Introduction
This article gives a short description of EVERY service available as part of an OpenBSD default installation (= no package installed).
From all this list, the following services are started by default: cron, pflogd, sndiod, openssh, ntpd, syslogd and smtpd. Among them, the network-related daemons smtpd (localhost only), openssh and ntpd (as a client) are running.
Service list
I extracted the list of base install services by looking at /etc/rc.conf.
$ grep _flags /etc/rc.conf | cut -d '_' -f 1
amd
This daemon is used to automatically mount a remote NFS server when someone wants to access it; it can provide a replacement in case the file system is not reachable. More information is available with "info amd".
amd man page
apmd
This is the daemon responsible for frequency scaling. It is important to run it on workstations and especially on laptops; it can also trigger automatic suspend or hibernate in case of a low battery.
apmd man page
apm man page
bgpd
This is a BGP daemon, used by network routers to exchange routes with other routers. This is mainly what makes the Internet work: every hosting company announces their IP ranges and how to reach them, and in return they also receive the paths to connect to all other addresses.
OpenBGPD website
bootparamd
This daemon is used for diskless setups on a network; it provides information to clients, such as which NFS mount point to use for swap or root devices.
Information about a diskless setup
cron
This daemon reads each user's crontab and the system crontabs to run scheduled commands. User crontabs are modified using the crontab command.
Cron man page
Crontab command
Crontab format
dhcpd
This is a DHCP server used to automatically provide IPv4 addresses on a network for systems using a DHCP client.
dhcrelay
This is a DHCP request relay, used on a network interface to relay the requests to another interface.
dvmrpd
This daemon is a multicast routing daemon, for cases where you need multicast to span outside of your local LAN. It is mostly replaced by PIM nowadays.
eigrpd
This daemon implements EIGRP, an internal gateway routing protocol; it is like OSPF but compatible with Cisco.
ftpd
This is an FTP server providing many features. While FTP is becoming abandoned and obsolete (certainly because it doesn't play well with NAT), it can be used to provide read/write anonymous access to a directory (and many other things).
ftpd man page
ftpproxy
This is an FTP proxy daemon that one is supposed to run on a NAT system; it will automatically add PF rules to connect an incoming request to the server behind the NAT. This is part of the FTP madness.
ftpproxy6
Same as above but for IPv6. Using IPv6 behind a NAT makes no sense.
hostapd
This is the daemon that turns OpenBSD into a WiFi access point.
hostapd man page
hostapd configuration file man page
hotplugd
hotplugd is an amazing daemon that triggers actions when devices are connected or disconnected. It can be scripted to automatically run a backup when some conditions are met, like a USB disk matching a known name being inserted, or to automatically mount a drive.
hotplugd man page
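hotplugd runs the /etc/hotplug/attach script (and detach for removals) with the device class and device name as parameters. Here is a minimal sketch; the disk name and the backup script are hypothetical:
#!/bin/sh
# /etc/hotplug/attach: $1 = device class, $2 = device name
case "$2" in
sd1)
    # this is my known backup disk: mount it and run a backup
    mount /dev/sd1i /mnt/backup && /usr/local/bin/backup.sh
    ;;
esac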
httpd
httpd is an HTTP(s) daemon which supports a few features like FastCGI, rewrite rules and SNI. While it doesn't have all the features of a web server like nginx, it is able to host some PHP programs such as Nextcloud, Roundcube mail or MediaWiki.
httpd man page
httpd configuration file man page
identd
Identd is a daemon for the Identification Protocol, which returns the login name of a user who initiated a connection; this can be used on IRC to authenticate which user started an IRC connection.
ifstated
This daemon monitors the state of network interfaces and can take actions upon changes. This can be used to trigger changes in case an interface loses connectivity. I used it to trigger a route change to a 4G device when a ping over the uplink interface failed.
ifstated man page
ifstated configuration file man page
iked
This daemon is used to provide IKEv2 authentication for IPSec tunnel establishment.
OpenBSD FAQ about VPN
inetd
This daemon is often forgotten but is very useful. Inetd can listen on TCP or UDP ports and will run a command upon connection on the related port; incoming data is passed as standard input of the program, and the program's standard output is returned to the client. This is an easy way to turn a program into a network service. It is not widely used because it doesn't scale well: the whole process of running a new program upon every connection can push a system to its limits.
inetd man page
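As a hypothetical example, the following line in /etc/inetd.conf would run the imaginary script /usr/local/bin/hello for every TCP connection on the daytime port, with the network connection as its stdin/stdout:
daytime stream tcp nowait nobody /usr/local/bin/hello hello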
isakmpd
This daemon is used to provide IKEv1 authentication for IPSec tunnel establishment.
iscsid
This daemon is an iSCSI initiator which will connect to an iSCSI target (let's call it a network block device) and expose it locally as a /dev/vscsi device. OpenBSD doesn't provide an iSCSI target daemon in its base system, but there is one in ports.
ldapd
This is a light LDAP server, offering version 3 of the protocol.
ldap client man page
ldapd daemon man page
ldapd daemon configuration file man page
ldattach
This daemon attaches a line discipline to a serial line, for devices connected over a serial port such as GPS devices.
ldomd
This daemon is specific to the sparc64 platform and provides services for the logical domains (ldom) feature.
lockd
This daemon is used as part of an NFS environment to support file locking.
ldpd
This daemon is used by MPLS routers to get labels.
lpd
This daemon is used to manage print access to a line printer.
mountd
This daemon is used by remote NFS clients to learn what the system is currently offering. The showmount command can be used to see what mountd is currently exposing.
mountd man page
showmount man page
mopd
This daemon is used to distribute MOP images, which seems related to the Alpha and VAX architectures.
mrouted
Similar to dvmrpd.
nfsd
This server answers the NFS requests from NFS clients. Statistics about NFS (client or server) can be obtained from the nfsstat command.
nfsd man page
nfsstat man page
npppd
This daemon is used to establish connections using PPP, but also to create tunnels with L2TP, PPTP and PPPoE. PPP is used by some modems to connect to the Internet.
nsd
This daemon is an authoritative DNS nameserver, which means it holds all the information about a domain name and its subdomains. It receives queries from recursive servers such as unbound / unwind etc... If you own a domain name and you want to manage it from your system, this is what you want.
nsd man page
nsd configuration file man page
ntpd
This daemon is an NTP service that keeps the system clock at the correct time; it can use NTP servers or sensors (like GPS) as time sources, and also supports using remote servers to challenge the time sources. It can act as a daemon to provide time to other NTP clients.
ntpd man page
ospfd
It is a daemon for the OSPF routing protocol (Open Shortest Path First).
ospf6d
Same as above for IPv6.
pflogd
This daemon receives packets from PF matching rules with a "log" keyword and stores the data in a logfile that can be reused with tcpdump later. Every packet in the logfile contains information about which rule triggered it, so it is very practical for analysis.
pflogd man page
tcpdump
portmap
This daemon is used as part of an NFS environment.
rad
This daemon is used on IPv6 routers to advertise routes so that clients can automatically pick them up.
radiusd
This daemon is used to offer RADIUS protocol authentication.
rarpd
This daemon is used for diskless setups, in which it helps associate an Ethernet (MAC) address with an IP address and hostname.
Information about a diskless setup
rbootd
Per the man page: « rbootd services boot requests from Hewlett-Packard workstations over LAN ».
relayd
This daemon is used to accept incoming connections and distribute them to backends. It supports many protocols and can act transparently; its purpose is to have a front end that dispatches connections to a list of backends while also checking backend health. It has many uses, and can also be used in addition to httpd, to add HTTP headers to a request or to apply conditions on HTTP request headers to choose a backend.
relayd man page
relayd control tool man page
relayd configuration file man page
ripd
This is a routing daemon using RIP, an old but still widely supported protocol.
route6d
Same as above but for IPv6.
sasyncd
This daemon is used to keep IPSec gateways synchronized in case a fallback is required. This can be used with carp devices.
sensorsd
This daemon gathers monitoring information from the hardware, like temperatures or disk status. If a check exceeds a threshold, a command can be run.
sensorsd man page
sensorsd configuration file man page
slaacd
This daemon automatically picks up the IPv6 automatic configuration (SLAAC) on the network.
slowcgi
This daemon is used to expose a CGI program as a FastCGI service, allowing the httpd HTTP server to run CGI. This is an equivalent of inetd, but for FastCGI.
slowcgi man page
smtpd
This daemon is the SMTP server used to deliver mails locally or to remote email servers.
smtpd man page
smtpd configuration file man page
smtpd control command man page
sndiod
This is the daemon handling sound from various sources. It also supports sending the local sound to a remote sndiod server.
sndiod man page
sndiod control command man page
mixerctl man page to control an audio device
OpenBSD FAQ about multimedia devices
snmpd
This daemon is an SNMP server exposing some system metrics to SNMP clients.
snmpd man page
snmpd configuration file man page
spamd
This daemon acts as a fake SMTP server that will delay, block or pass emails depending on some rules. It can be used to add IPs to a block list if they try to send an email to a specific address (like a honeypot), to pass emails from servers within an accept list, or to delay connections from unknown servers (grey listing), making them reconnect a few times before passing the email to the real SMTP server. This used to be a quite effective way to prevent spam, but it becomes less relevant as senders use whole ranges of IPs to send emails: if you want to receive an email from a big email server, you will greylist server X.Y.Z.1, but then X.Y.Z.2 will retry and so on, so none will ever pass the grey list.
spamlogd
This daemon is dedicated to updating the spamd whitelist.
sshd
This is the well known SSH server, allowing secure connections to a shell from remote clients. It has many features that would gain from being better known, such as restricting commands per public key in the ~/.ssh/authorized_keys file, or SFTP-only chrooted accesses.
sshd man page
sshd configuration file man page
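For example, a key can be restricted to a single command from the authorized_keys file; the script path here is a placeholder and the key is truncated:
command="/usr/local/bin/backup.sh",restrict ssh-ed25519 AAAA... user@host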
statd
This daemon is used in an NFS environment together with lockd, in order to check if remote hosts are still alive.
switchd
This daemon is used to control a switch pseudo device.
switch pseudo device man page
syslogd
This is the logging server that receives messages from local programs and stores them in the corresponding logfiles. It can be configured to pipe some messages to a command (programs like sshlockout use this method to learn about IPs that must be blocked), and it can also listen on the network to aggregate logs from other machines. The program newsyslog is used to rotate files (move a file, compress it, allow a new file to be created and remove archives that are too old). Scripts can use the logger command to send text to syslog.
syslogd man page
syslogd configuration file man page
newsyslog man page
logger man page
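For instance, a script can send a message to syslog like this, with an arbitrary tag:
$ logger -t my-script "backup done"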
tftpd
This daemon is a TFTP server, used to provide kernels over the network for diskless machines or to push files to appliances.
Information about a diskless setup
tftpproxy
This daemon manipulates the PF firewall to relay TFTP requests to a TFTP server.
unbound
This daemon is a recursive DNS server; this is the kind of server listed in /etc/resolv.conf, whose responsibility is to translate a fully qualified domain name into the IP address behind it, asking one server at a time. For example, to resolve www.dataswamp.org, it asks the .org authoritative server for the authoritative server of dataswamp.org (within the .org top level domain), then asks dataswamp.org's DNS server for the address of www.dataswamp.org. It also keeps queries in cache and validates queries and replies; it is a good idea to have such a server on a LAN with many clients, to share the query cache.
unbound man page
unbound configuration file man page
unwind
This daemon is a local recursive DNS server that will do its best to give valid replies; it is designed for nomad users that may encounter hostile environments, like captive portals or DHCP-provided DNS servers preventing DNSSEC from working, etc. Unwind regularly polls a few DNS sources (recursive resolution from root servers, or DHCP-provided, stub or DNS over TLS servers from the configuration file) and chooses the fastest. It also acts as a local cache, and it can't listen on the network to be used by other clients. It supports a list of blocked domains as input, too.
unwind man page
unwind configuration file man page
unwind control command man page
vmd
This is the daemon that allows running virtual machines with vmm. As of OpenBSD 6.9, it is capable of running OpenBSD and Linux guests, without graphical interface and with only one core.
vmd man page
vmd configuration file man page
vmd control command man page
vmm driver man page
OpenBSD FAQ about virtualization
watchdogd
This daemon is used to trigger watchdog timer devices if any.
wsmoused
This daemon is used to provide mouse support to the console.
xenodm
This daemon is used to start the X server and allow users to authenticate themselves and log in to their session.
xenodm man page
ypbind
This daemon is used with a Yellow Pages (YP) server to keep and maintain a binding information file.
ypldap
This daemon offers a YP service using an LDAP backend.
ypserv
This daemon is a YP server.
Introduction
In this text I will explain what makes OpenBSD secure by default when you install it. Do not take this as a security analysis; it is more of a guide to help you understand what OpenBSD does to provide a secure environment. The purpose of this text is not to compare OpenBSD to other OSes, but to say what you can honestly expect from OpenBSD.
There is no security without a threat model; I always consider the following cases: a computer stolen at home by a thief, remote attacks trying to exploit running services, and exploitation of the user's network clients.
Security matters
Here is a list of features that I consider important for an operating system's security. While not every item in the following list is strictly a security feature, they help in having a strict system that prevents software from misbehaving and leading to unknown lands.
In my opinion, security is not only about preventing remote attackers from penetrating the system, but also about preventing programs or users from making the system unusable.
Pledge / unveil on userland
Pledge and unveil are often referred to together although they can be used independently. Pledge is a system call to restrict the permissions of a program at some point in its source code; permissions can't be regained once pledge has been called. Unveil is a system call that hides the whole file system from the process except the paths that are unveiled; it is possible to choose which permissions are allowed on each path.
Both are very effective and powerful surgical security tools, but they require modifications to the source code of the software, and adding them requires a deep understanding of what the software is doing. It is not always possible to forbid system calls to software that needs to do almost anything; software designed with privilege separation is a better candidate for a proper pledge addition, because each part has its own job.
Some software in packages has received pledge and/or unveil support, Chromium and Firefox being the most well known.
OpenBSD presentation about Unveil (BSDCan2019)
OpenBSD presentation of Pledge and Unveil (BSDCan2018)
Privilege separation
Most of the base system services in OpenBSD run using a privilege separation pattern, where each part of a daemon is restricted to the minimum required. A monolithic daemon would have to read/write files, accept network connections and send messages to the log, which in case of a security breach offers a huge attack surface. By separating a daemon into multiple parts, a finer-grained control of each worker is possible, and using the pledge and unveil system calls, it's possible to set limits and highly reduce damage in case a worker is hacked.
Clock synchronization
The ntpd daemon is started by default to keep the clock synchronized with time servers. A reference TLS server is used to challenge the time servers. Keeping a computer's clock synchronized is very important. This is not really a security feature, but you can't be serious about using a computer on a network without its time synchronized.
X display not as root
If you use X, it drops privileges to the _x11 user; it runs as an unprivileged user instead of root, so in case of a security issue, this prevents an attacker exploiting an X11 bug from accessing more than it should.
Resources limits
Default resource limits prevent a program from using too much memory, too many open files or too many processes. While this can prevent some huge programs from running with the default settings, it also helps finding file descriptor leaks, and prevents a fork bomb or a simple daemon from stealing all the memory and leading to a crash.
Genuine full disk encryption
When you install OpenBSD using the full disk encryption setup, everything is locked by the passphrase at the bootloader step; you can't access the kernel or anything of the system without the passphrase.
W^X
Most programs on OpenBSD aren't allowed to map memory with the Write AND Execute bits at the same time (W^X means Write XOR Exec); this prevents an interpreter from having its memory modified and executed. Some packages aren't compliant with this and must be linked with a specific library to bypass the restriction, AND must be run from a partition with the "wxallowed" mount option.
OpenBSD presentation « Kernel W^X Improvements In OpenBSD »
Only one reliable randomness source
When your system requires a random number (and it does very often), OpenBSD provides only one API to get random numbers, and they are really random and can't be exhausted. A good random number generator (RNG) is important for many cryptographic requirements.
OpenBSD presentation about arc4random
Accurate documentation
OpenBSD comes with full documentation in its man pages. One should be able to fully configure the system using only the man pages. Man pages sometimes come with CAVEATS or BUGS sections; it's important to pay attention to those. It is better to read the documentation and understand what has to be done in order to configure a system, instead of following an outdated and anonymous text available on the Internet.
OpenBSD man pages online
EuroBSDcon 2018 about « Better documentation »
IPSec and Wireguard out of the box
If you need to set up a VPN, you can use the IPSec or WireGuard protocols with only the base system, no package required.
Memory safeties
OpenBSD has many safeties in regard to memory allocation and will very aggressively prevent use-after-free or other unsafe memory usage; this is often a source of crashes for some software from packages, because OpenBSD is very strict about memory use. This helps find memory misuse, and misbehaving software gets killed.
Dedicated root account
When you install the system, a root account is created and its password is asked, then you create a user that will be a member of the "wheel" group, allowing it to switch to root with root's password. doas (the OpenBSD base system equivalent of sudo) isn't configured by default. With the default installation, the root password is required to do any root action. I think a dedicated root account that can be logged into without the use of doas/sudo is better than a misconfigured doas/sudo allowing everything if you only know the user password.
Small network attack surface
The only services that could be enabled at installation time and listen on the network are OpenSSH (asked at install time, default = yes), dhclient (if you choose DHCP) and slaacd (if you use IPv6 in automatic configuration).
Encrypted swap
By default the OpenBSD swap is encrypted, meaning that if program memory is swapped out to disk, nobody can recover it later.
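This behavior is controlled by a sysctl you can check yourself:
$ sysctl vm.swapencrypt.enable
vm.swapencrypt.enable=1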
SMT disabled
Due to the many security vulnerabilities related to SMT (like hyperthreading), the default installation disables the logical cores to prevent any data leak.
Meltdown: one of the first security issue related to speculative execution in the CPU
Microphone and Webcam disabled
With the default installation, both the microphone and the webcam won't actually record anything except blank video/sound until you set a sysctl for this.
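The relevant sysctls are kern.audio.record and kern.video.record; recording can be enabled at runtime as root when you actually need the devices:
# sysctl kern.audio.record=1
# sysctl kern.video.record=1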
Maintainability, release often, update often
The OpenBSD team publishes a new release every six months and only the last two releases receive security updates. This allows upgrading often and without pain: the upgrade process is a small step twice a year that helps keep the whole system up to date. This avoids the fear of a huge upgrade that never gets done, and I consider it a huge security bonus. Most OpenBSD systems around are running the latest versions.
Signify chain of trust
Installer, archives and packages are signed using signify public/private keys. OpenBSD installations come with the keys for the current release and release n+1 to check the packages' authenticity. A key is used only six months, and new keys are received with each new release, building a chain of trust. Signify keys are very small and published on many media so you can double check when you need to bootstrap this chain of trust.
Signify at BSDCan 2015
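Verifying a file against its signed checksum list is a single command; for example, for a release file set (the release number in the key name is illustrative):
$ signify -C -p /etc/signify/openbsd-68-base.pub -x SHA256.sig bsd.rd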
Packages
While most of the previous items were about the base system or the kernel, the packages also have a few tricks to offer.
Chroot by default when available
Most daemons available that offer a chroot feature will have it enabled by default. In some cases, like the Nginx web server, the software is patched by the OpenBSD team to enable chroot, which is not an official feature.
Dedicated users for services
Most packages providing a server also create a new dedicated user for this exact service, allowing more privilege separation in case of a security issue in one service.
Installing a service doesn't enable it
When you install a service, it doesn't get enabled by default. You will have to configure the system to enable it at boot. There is a single /etc/rc.conf.local file that can be consulted to see what is enabled at boot, and it can be manipulated using the rcctl command. Forcing the user to enable services makes the system administrator fully aware of what is running on the system, which is a good point for security.
rcctl man page
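For example, after installing a database server you would enable and start it explicitly (the service name is illustrative):
# rcctl enable postgresql
# rcctl start postgresql
# rcctl ls on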
Conclusion
Most of the previous "security features" should be considered good practices rather than features. Many of them could easily be implemented in most systems: limiting user resources, reducing daemon privileges, memory usage strictness, providing good documentation, starting only the required services and giving the user a clean default installation.
There are also many other features that have been added which I don't fully understand, and that I prefer to let the reader discover.
« Mitigations and other real security features » by Theo De Raadt
OpenBSD innovations
OpenBSD events, often including slides or videos
Introduction
Firejail is a program that can prepare sandboxes to run other programs. This is an efficient way to keep a piece of software isolated from the rest of the system without needing to change its source code; it works for networked, graphical or daemon programs.
You may want to sandbox the programs you run in order to protect your system from any issue that could happen within them (security breach, code mistake, unknown errors). Steam once had a "rm -fr /" issue; a sandbox would have saved at least part of the user directory. Web browsers are major tools nowadays, and yet they have access to the whole system while having many security issues discovered and exploited in the wild; running one in a sandbox can reduce the data an attacker could exfiltrate from the computer. Of course, sandboxing comes with a usability tradeoff: if you only allow access to the ~/Downloads/ directory, you need to put files in that directory if you want to upload them, and you can only download files into that directory and move them later to where you really want to keep them.
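As a sketch of that tradeoff, the following hypothetical invocation only exposes the download directory to the browser (the bundled profiles usually take care of this, check your version's man page):
$ firejail --whitelist=~/Downloads firefox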
Installation
On most Linux systems you will find a Firejail package you can install. If your distribution doesn't provide one, the install-from-sources process seems quite easy, and as the project is written in C with few dependencies, the build should be straightforward.
There is no service to enable and no kernel parameters to add. AppArmor or SELinux kernel features can be integrated into Firejail profiles if you want to.
Usage
Start a program
The simplest usage is to run a command by adding firejail before the command name.
$ firejail firefox
Use a symlink
Firejail has a neat feature that allows starting software by name without calling Firejail explicitly: if you create a symbolic link in your $PATH using a program's name but targeting Firejail, when you call that name Firejail will automatically know what you want to start. The following example will run firefox when you call the symbolic link.
$ export PATH=~/bin/:$PATH
$ ln -s /usr/bin/firejail ~/bin/firefox
$ firefox
Listing sandboxes
The firejail --list command will tell you about all running sandboxes and their parameters. The first column is the identifier used by other Firejail features.
$ firejail --list
6108:solene::/usr/bin/firejail /usr/bin/firefox
Limit bandwidth per program
Firejail also has a neat feature that allows limiting the bandwidth available to a single sandbox environment. Reusing the previous list output, I will reduce the firefox bandwidth; the numbers are in kB/s.
$ firejail --bandwidth=6108 set wlan0 1000 40
You can find more information about this feature in the "TRAFFIC SHAPING" section of the Firejail man page.
Restrict network access
If for some reason you want to start a program with absolutely no network access, you can deny it any network.
$ firejail --net=none libreoffice
Conclusion
Firejail is a neat way to start software in sandboxes without requiring any particular setup. It may be more limited and maybe less reliable than OpenBSD programs that received unveil() support, but it's a nice tradeoff between safety and the work required in the source code (literally none). It is a very interesting project that proves to work easily on any Linux system, with simple C source code and few dependencies. I am not really familiar with the Linux kernel and its features, but Firejail seems to use seccomp-bpf and namespaces; I guess they are complicated to use but powerful, and Firejail comes here as a wrapper to automate all of this.
Firejail has proven to be USABLE and RELIABLE for me, while my attempts at sandboxing Firefox with AppArmor were tedious and not optimal. I really recommend it.
More resources
Official project website with releases and security information
Firejail sources and documentation
Community profiles 1
Community profiles 2
This is a February 2021 update of a text originally published in April 2017.
Introduction
I will explain how to limit bandwidth on OpenBSD using the queuing capability of its firewall PF (Packet Filter). It is a very powerful feature, but it may be hard to understand at first. What is very important to understand is that it's technically not possible to limit the download bandwidth of the whole system: once data reaches your network interface, it is already there and has already been received by your router. What is possible is to limit the upload rate, which indirectly caps the download rate.
OpenBSD pf.conf man page about queuing
Prerequisites
My home internet access allows me to download at 1600 kB/s and upload at 95 kB/s. An easy way to limit bandwidth is to compute a percentage of your upload rate and apply the same ratio to your download speed (this may not be very precise and may require tweaks).
PF syntax requires bandwidth to be defined in kilobits (kb) and not kilobytes (kB); multiplying by 8 converts from kB to kb. For example, my 95 kB/s upload is 760 kb/s.
Configuration
Edit the file /etc/pf.conf as root and add the following before any pass/match/drop rules; in this example my main interface is em0.
# we define a main queue (requirement)
queue main on em0 bandwidth 1G
# set a queue for everything
queue normal parent main bandwidth 200K max 200K default
Then reload with `pfctl -f /etc/pf.conf` as root. You can monitor the queues at work with `systat queue`.
QUEUE BW/FL SCH PKTS BYTES DROP_P DROP_B QLEN
main on em0 1000M fifo 0 0 0 0 0
normal 1000M fifo 535424 36032467 0 0 60
More control (per user / protocol)
This is only a global queuing rule that will apply to everything on the system. It can be greatly extended for specific needs. For example, I use the program "oasis", a daemon for a peer-to-peer social network; sometimes it has upload bursts because someone is syncing against my computer, so I use the following rule to limit the upload bandwidth of this user.
# within the queue rules
queue oasis parent main bandwidth 150K max 150K
# in your match rules
match on egress proto tcp from any to any user oasis set queue oasis
Instead of a user, the rule could match a "to" address; I used to have such rules when I wanted to limit my upload bandwidth while uploading videos through the Peertube web interface.
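A sketch of such an address-based rule, with an illustrative destination network:
# within the queue rules
queue uploads parent main bandwidth 300K max 300K
# in your match rules
match out on egress proto tcp from any to 203.0.113.0/24 set queue uploads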
In these times of remote work / home office, you may have limited bandwidth shared with other people and devices. Not all software provides a way to limit bandwidth usage (package managers, Youtube video players etc...).
Fortunately, Linux has a very nice program, very easy to use, to limit your bandwidth in one command. This program is « Wondershaper » and it uses the Linux QoS framework usually manipulated with "tc", but it makes setting limits VERY easy.
What are QoS, TC and Filters on Linux
On most distributions, wondershaper will be available as a package under its own name. I found a few distributions that didn't provide it (NixOS at least), and some provide different wondershaper versions.
To know whether you have the newer version, "wondershaper --help" may provide information about the "-d" and "-u" flags; the older version doesn't have them.
Wondershaper requires the download and upload bandwidths to be set in kb/s (kilobits per second, not kilobytes). I personally only know my bandwidth in kB/s, which is 1/8 of its kb/s equivalent. My home connection is 1600 kB/s max in download and 95 kB/s max in upload, so I can use wondershaper to limit to 1000 / 50 so it won't affect my other devices on the network too much.
# my network device is enp3s0
# new wondershaper
sudo wondershaper -a enp3s0 -d $(( 1000 * 8 )) -u $(( 50 * 8 ))
# old wondershaper
sudo wondershaper enp3s0 $(( 1000 * 8 )) $(( 50 * 8 ))
I use a multiplication to convert from kB/s to kb/s and still keep the command understandable to me. Once a limit is set, wondershaper can be used to clear the limit and get the full bandwidth available again.
# new wondershaper
sudo wondershaper -c -a enp3s0
# old wondershaper
sudo wondershaper clear enp3s0
There are so many programs that don't allow limiting download/upload speeds; wondershaper's effectiveness and ease of use are a blessing.
Introduction
In this text I will explain how to filter TCP connections by operating system using OpenBSD Packet filter.
OpenBSD pf.conf man page about OS Fingerprinting
Explanations
Every operating system has its own way of constructing SYN packets; this is called fingerprinting because it permits identifying which OS sent which packet. To be clear, it's not a perfect filter and can easily be bypassed if you want to.
Because some packets are required to identify the operating system, only TCP connections can be filtered by OS. The OS list and SYN values can be found in the file /etc/pf.os.
How to setup
The keyword "os $value" must be used within the "from $address" keyword. I use it to restrict the ssh connection to my server only to OpenBSD systems (in addition to key authentication).
# only allow OpenBSD hosts to connect
pass in on egress inet proto tcp from any os OpenBSD to (egress) port 22
# allow connections from $home IP whatever the OS is
pass in on egress inet proto tcp from $home to (egress) port 22
This can be a very good way to stop unwanted traffic spamming the logs, but it should be used with caution because you may accidentally block legitimate traffic.
This quick article will explain how to install pkgsrc packages on an OpenBSD installation. This is something regularly asked on the #openbsd freenode IRC channel. I am not convinced of the relevance of pkgsrc on OpenBSD, but why not :)
I will cover an unprivileged installation that doesn't require root. I will use packages from the 2020Q4 release; I may not update this text regularly, so you will have to adapt to your current year.
$ cd ~/
$ ftp https://cdn.NetBSD.org/pub/pkgsrc/pkgsrc-2020Q4/pkgsrc.tar.gz
$ tar -xzf pkgsrc.tar.gz
$ cd pkgsrc/bootstrap
$ ./bootstrap --unprivileged
From now on you must add the path ~/pkg/bin to your $PATH environment variable. The pkgsrc tree is in ~/pkgsrc/ and all the relevant files for it to work are in ~/pkg/.
You can install programs by finding the directory of the software you want under ~/pkgsrc/ and running "bmake install" there, for example in ~/pkgsrc/chat/irssi/ to install the irssi IRC client, as shown below.
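Concretely, that looks like this:
$ cd ~/pkgsrc/chat/irssi
$ bmake install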
I'm not sure X11 software compiles well: I got issues compiling dbus as a dependency of x11/xterm and ended up with compilation errors, maybe clashing with Xenocara from the base system... I don't really want to investigate this further though.
Introduction
In this article I will explain how to add a bit more security to your OpenBSD system by adding a requirement for users logging into the system, locally or by ssh. I will explain how to set up two factor authentication (2FA) using TOTP on OpenBSD.
What is TOTP (Time-based One time Password)
When do you want or need this? It adds a burden in terms of usability: in addition to your password, you will require a device pre-configured to generate the one time passwords, and if you don't have it you won't be able to login (that's the whole point). Let's say you activated 2FA for ssh connections on an important server; if your private ssh key gets stolen (and it has no passphrase, ouch!), the attacker will still not be able to connect to the SSH server without access to your TOTP generator.
TOTP software
Here is a quick list of TOTP software
- command line: oathtool from package oath-toolkit
- GUI and multiplatform: KeepassXC
- Android: FreeOTP+, andOTP, OneTimePass etc.. (watched on F-droid)
Setup
A package is required to provide the various programs needed. The package comes with a README file available at /usr/local/share/doc/pkg-readmes/login_oath with many explanations about how to use it. I will take a lot of information from there for the local login setup.
# pkg_add login_oath
You will have to add a new login class, depending on the kind of authentication you want. You can either allow password OR TOTP, or require password AND TOTP (in the form of TOTP_CODE/password typed as the password). From the README file, add what you want to use:
# totp OR password
totp:\
:auth=-totp,passwd:\
:tc=default:
# totp AND password
totppw:\
:auth=-totp-and-pwd:\
:tc=default:
If you have a /etc/login.conf.db file, you have to run cap_mkdb on /etc/login.conf to update it. Most people don't need this; it only helps a bit with performance when you have very many rules in /etc/login.conf.
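If needed, the rebuild is a single command:
# cap_mkdb /etc/login.conf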
Local login
Local login means logging in on a TTY, in your X session or anything requiring your system password. You can make users use TOTP by adding them to the appropriate login class with this command.
# usermod -L totp some_user
In the user's home directory, you have to generate a key and give it the correct permissions.
$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 ~/.totp-key
The .totp-key file contains the secret that will be used by the TOTP generator, but most generators will only accept it encoded in base32. You can use the following python3 command to convert the secret into base32.
python3 -c "import base64; print(base64.b32encode(bytes.fromhex('YOUR SECRET HERE')).decode('utf-8'))"
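From there you can also generate codes on the command line with oathtool from the oath-toolkit package mentioned earlier (the secret and the output are illustrative):
$ oathtool --totp -b "ONSWG4TFOQ======"
928318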
SSH login
It is possible to require your users to use TOTP or a public key + TOTP. When you refer to "password" in ssh, this is the same password as for login: the plain password for a regular user, the TOTP code for users in the totp class, and TOTP/password for users in totppw.
This allows fine-grained tuning of login options. The password requirement in SSH can be enabled per user or globally by modifying the file /etc/ssh/sshd_config.
sshd_config man page about AuthenticationMethods
# enable for everyone
AuthenticationMethods publickey,password
# for one user
Match User solene
AuthenticationMethods publickey,password
Let's say you enabled the totppw class for your user and use "publickey,password" in AuthenticationMethods in ssh. You will then need your ssh private key AND your password AND your TOTP generator.
Even without doing any TOTP, this SSH setting lets you require users to present their key and their system password in order to login. TOTP only adds more strength to the connection requirements, but also more complexity for people who may not be comfortable with such security levels.
Conclusion
In this text we have seen how to enable 2FA for your local login and for login over ssh. Be careful not to lock yourself out of your system by losing the 2FA generator.
Hello, in this article I would like to share my thoughts about the NixOS Linux distribution. I've been using it daily for more than six months as my main workstation at work and on some computers at home too. I also made modest contributions to the git repository.
NixOS official website
Introduction
NixOS is a Linux distribution built around the Nix tool. I'll try to explain quickly what Nix is, but if you want more accurate explanations I recommend visiting the project website. Nix is the package manager of the system; Nix could be used on any Linux distribution on top of the distribution's package manager. NixOS is built from top to bottom with Nix.
This makes NixOS an entirely different system than what one can expect from a regular Linux/Unix (with the exception of Guix, which shares the same idea with a different implementation). The NixOS system configuration is stateless: most of the system is read-only and most of the paths you know don't exist. The directory /bin only contains "sh", which is a symlink.
The whole system configuration: fstab, packages, users, services, crontab, firewall... is configured from a global configuration file that defines the state of the system.
Here is an extract of my configuration file enabling a graphical interface with Mate as the desktop and a French keyboard layout.
services.xserver.enable = true;
services.xserver.layout = "fr";
services.xserver.libinput.enable = true;
services.xserver.displayManager.lightdm.enable = true;
services.xserver.desktopManager.mate.enable = true;
I could add the following lines to the configuration to enable auto login into my graphical session.
services.xserver.displayManager.autoLogin.enable = true;
services.xserver.displayManager.autoLogin.user = "solene";
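Changes to the configuration are applied by rebuilding the system with the standard command:
# nixos-rebuild switch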
Pros
There are a lot of pros. The system is really easy to set up; installing (whether reinstalling or replicating an installation) is very easy, you only need the configuration.nix file from the other/previous system. Everything is very fast to set up; it's often only a few lines to add to the configuration.
Every time the system is rebuilt from the configuration file, a new grub entry is made, so at boot you can choose which environment to boot into. This makes upgrades or experiments safe and very easy to roll back.
Documentation! The NixOS documentation is very nice and is part of the code. There is a special man page "configuration.nix" in the system that contains all the variables you can define, what values to expect, what the default is and what it does. You can literally search for "steam", "mediawiki" or "luks" to get information to configure your system.
All the documentation
Builds are reproducible. I don't consider it a huge advantage but it's nice to have. This allows challenging a package mirror by building packages locally and verifying it provides exactly the same packages.
It has a lot of packages. I think the NixOS team is pretty happy to share their statistics because, if I got it right, Nixpkgs is the biggest and most up-to-date repository alive.
Search for a package
Cons
When you download a precompiled Linux program that isn't statically built, it's a huge pain to make it work on NixOS. The binary will expect some paths to exist in their usual places, but they won't exist on NixOS. There are some tricks to get such binaries working, but it's not always easy. If the program you want isn't in the packages, it may not be easy to use it; Flatpak can help to get some programs if they are not packaged though.
Running binaries
It takes disk space: some libraries can exist multiple times with small compilation differences, and a program can exist in different versions at the same time because previous builds are still available for boot in grub. If you forget to clean them, they take a lot of disk space.
The whole system (especially for graphical environments) may not feel as polished as more mainstream distributions that put a lot of effort into branding and customization. NixOS will just install everything, and you will have a quite raw environment that you will have to configure. It's not a real con, but in comparison to other desktop oriented distributions, NixOS may not look as good out of the box.
Conclusion
NixOS is an awesome piece of software. It works very well and I never had any reliability issue with it. Some services like xrdp are usually quite complex to set up, but here it worked out of the box for me.
I see it as a huge Lego© box with which you can automate the building of the super system you want, given you have the schematics of its parts. If you need a block you don't have in your recipe list, you will have a hard time.
I really put it in its own category: in comparison to Linux/BSD distributions and Windows, there is the NixOS / Guix category of stateless systems for which the configuration is their code.
I would like to share about Vger's internals in regard to how security was thought out to protect Vger users and host systems.
Vger code repository
Thinking about security first
I claim security is Vger's main feature; I wrote Vger to have a secure gemini server that I can trust. Why so? It's written in C and I'm a beginner developer in this language, so this looks like a scam.
I chose to follow the best practices I'm aware of from the very first line. My goal is to be sure Vger can't be used to exfiltrate data from the host on which it runs or be made to run arbitrary commands. While I may have missed corner cases in which it could crash, I think a crash is the worst that can happen with Vger.
Smallest code possible
Vger doesn't have to manage connections or TLS; a lot of code was already removed by this design choice. There are better tools made exactly for this purpose, so it's time to reuse other people's good work.
Inetd and user
Vger is run by the inetd daemon, which allows choosing the user running vger. Using a dedicated user is always a good idea to limit harm in case of an issue, but it's really not sufficient to prevent vger from behaving badly.
Another kind of security benefit is that vger's runtime isn't looping like a daemon awaiting new connections. Vger accepts a request, reads a file if it exists, gives its result and terminates. This is less error prone because no variable can be reused or tricked after a loop that could leave the code in an inconsistent or vulnerable state.
Chroot
A critical vger feature is the ability to chroot into a directory, meaning the directory is then seen as the root of the file system (/var/gemini would be seen as /), preventing vger from escaping it. In addition to the chroot feature, vger can drop to an unprivileged user.
/*
 * use chroot() if a user is specified; this requires the program to
 * be run as root so chroot() can be called, then privileges are dropped
 */
if (strlen(user) > 0) {
	/* is root? */
	if (getuid() != 0) {
		syslog(LOG_DAEMON, "chroot requires program to be run as root");
		errx(1, "chroot requires root user");
	}
	/* search user uid from name */
	if ((pw = getpwnam(user)) == NULL) {
		syslog(LOG_DAEMON, "the user %s can't be found on the system", user);
		err(1, "finding user");
	}
	/* chroot worked? */
	if (chroot(path) != 0) {
		syslog(LOG_DAEMON, "the chroot_dir %s can't be used for chroot", path);
		err(1, "chroot");
	}
	chrooted = 1;
	if (chdir("/") == -1) {
		syslog(LOG_DAEMON, "failed to chdir(\"/\")");
		err(1, "chdir");
	}
	/* drop privileges */
	if (setgroups(1, &pw->pw_gid) ||
	    setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) ||
	    setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid)) {
		syslog(LOG_DAEMON, "dropping privileges to user %s (uid=%i) failed",
		    user, pw->pw_uid);
		err(1, "Can't drop privileges");
	}
}
No use of third party libs
Vger only requires standard C includes; this avoids placing trust in dozens of developers through fragile or barely tested code.
OpenBSD specific code
In addition to all the previous security practices, OpenBSD offers a few functions to greatly restrict what Vger can do.
The first function is pledge, which restricts the system calls that can happen within the code itself. The current syscalls allowed in vger are in the categories "rpath" and "stdio": basically standard input/output and reading files/directories only. This means that after pledge() is called, if any syscall outside those two categories is used, vger will be killed and a pledge error will be reported in the logs.
The second function is unveil, which restricts access to the filesystem to only what you list, with given permissions. Currently, vger only allows read-only file access in the base directory used to serve files.
Here is an extract of the OpenBSD specific code. With unveil available everywhere, chroot wouldn't be required.
#ifdef __OpenBSD__
/*
 * prevent access to files other than the ones in path
 */
if (chrooted) {
	eunveil("/", "r");
} else {
	eunveil(path, "r");
}
/*
 * prevent system calls beyond parsing the query, reading files
 * and writing to stdio
 */
if (pledge("stdio rpath", NULL) == -1) {
	syslog(LOG_DAEMON, "pledge call failed");
	err(1, "pledge");
}
#endif
The least code before dropping privileges
I did my best to use the least code possible before reducing Vger's capabilities. Only the code managing the parameters runs before activating chroot and/or unveil/pledge.
int
main(int argc, char **argv)
{
	char request[GEMINI_REQUEST_MAX] = {'\0'};
	char hostname[GEMINI_REQUEST_MAX] = {'\0'};
	char uri[PATH_MAX] = {'\0'};
	char user[_SC_LOGIN_NAME_MAX] = "";
	int virtualhost = 0;
	int option = 0;
	char *pos = NULL;

	while ((option = getopt(argc, argv, ":d:l:m:u:vi")) != -1) {
		switch (option) {
		case 'd':
			estrlcpy(chroot_dir, optarg, sizeof(chroot_dir));
			break;
		case 'l':
			estrlcpy(lang, "lang=", sizeof(lang));
			estrlcat(lang, optarg, sizeof(lang));
			break;
		case 'm':
			estrlcpy(default_mime, optarg, sizeof(default_mime));
			break;
		case 'u':
			estrlcpy(user, optarg, sizeof(user));
			break;
		case 'v':
			virtualhost = 1;
			break;
		case 'i':
			doautoidx = 1;
			break;
		}
	}

	/*
	 * chroot if a user is supplied, run pledge/unveil if on OpenBSD
	 */
	drop_privileges(user, chroot_dir);
The Unix way
Unix is made of small components that can work together as small bricks to build something more complex. Vger is based on this idea: it delegates the listening daemon handling incoming requests to another piece of software (say relayd or haproxy). Once you delegate TLS, what's left from the gemini specs is accepting a request and returning some content, which is well suited for a program taking the request on its standard input and giving the result on standard output. Inetd is the key here to make such a program compatible with a daemon like relayd or haproxy: when a connection is made to the TLS listening daemon, it is relayed to a local port that triggers inetd, which runs the command and passes the network content to the binary on its stdin.
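For illustration only, a hypothetical /etc/inetd.conf entry binding vger to a local port could look like this (the port, user and path are examples, not taken from the official documentation):
127.0.0.1:11965 stream tcp nowait _vger /usr/local/bin/vger vger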
Fine grained CGI
CGI support was added to allow Vger to serve dynamic content instead of only static files. It has fine-grained control: you can allow a single file to be executed as a CGI, or a whole directory of files. When serving a CGI, vger forks, a pipe is opened between the two processes, and the child process uses execlp to run the CGI and transmit its output to vger.
Using tests
From the beginning, I wrote a set of tests to be sure that once a kind of request or a use case works, I can easily check that I won't break it. This isn't about security but about reliability. When I push a new version to the git repository, I am absolutely confident it will work for the users. It was also an invaluable help while writing Vger.
As vger is a simple binary that accepts data on stdin and outputs data on stdout, it is simple to write tests like this. The following example runs vger with a request; as the content is local and within the git repository, the output is predictable and known.
printf "gemini://host.name/autoidx/\r\n" | vger -d var/gemini/
From here, it's possible to build an automatic test by comparing the checksum of the output to the checksum of the known correct output. Of course, when you add a new use case, this requires generating the checksum manually to use it as a comparison later.
OUT=$(printf "gemini://host.name/autoidx/\r\n" | ../vger -d var/gemini/ -i | md5)
if ! [ "$OUT" = "770a987b8f5cf7169e6bc3c6563e1570" ]
then
	echo "error"
	exit 1
fi
At this time, vger has 19 use cases in its test suite.
By using the program `entr` and a Makefile to manage the build process, it was very easy to trigger the testing process while working on the source code, allowing me to run the test suite just by saving my current changes. Any time a .c file is modified, entr triggers a make test command that is displayed in a dedicated terminal.
ls *.c | entr make test
Realtime integration tests? :)
Conclusion
By using best practices, reducing the amount of code and using only system libraries, I am quite confident about Vger's security. The only real issue could be too many connections leading to a quite high load due to inetd spawning new processes, resulting in a denial of service. This could be mitigated by throttling simultaneous connections in the TLS daemon.
If you want to contribute, please do, and if you find a security issue please contact me, I'll be glad to examine the issue.
Lately I wanted to change the way I use my free time. I define my free time as: not working, not sleeping, not eating. So I estimate it at six hours on a work day and fourteen hours on a day off.
With the year 2020 being quite unusual, I was staying at home most of the time without seeing the time pass. At the end of the year, I started mixing up the duration of weeks and months, which disturbed me a lot.
For a few weeks now, I have been changing the way I spend my free time. I thought it would be nice to have a few separate activities in the same day to help me realize how time passes.
Activity list
Here is the way I chose to distribute my free time. It's not a strict approach, I measure nothing. But I try to keep a simple ratio of 3/6, 2/6 and 1/6.
Recreation: 3/6
I spend a lot of time on recreation. A few activities I've put into recreation:
- video games
- movies
- reading novels
- sports
Creativity: 2/6
These activities require creativity, work and knowledge:
- writing code
- reading technical books
- playing music
- creating content (texts, video, audio etc..)
Chores: 1/6
Yes, obviously this has to be done in free time... And it's always better to do a bit every day than to accumulate it until you are forced to proceed.
Conclusion
I only started a few weeks ago but I really enjoy it. As I said previously, it's not something I strictly apply; it's more a general way to spend my time, instead of writing code for six hours in a row from after work until going to sleep. I really feel my life is better balanced now, and I feel some accomplishment for the few activities done every day.
Questions / Answers
Some people asked me if I was planning how I spend my time in advance.
The answer is no. I don't plan anything, but when I tend to lose focus on what I'm doing (and this happens often), I think about this time repartition method, realize it may be time to jump to another activity, and pick something from another category. Now that I think about it, very often I was doing something merely because I was bored and lacked ideas of activities to occupy myself; with this list I no longer have this issue.
I don't often give my own opinion on this blog but I really feel it is important here.
The matter is ecology, fair distribution of wealth and civilization. I feel I need to share a bit about my lifestyle, in the hope it will have a positive impact on some of my readers. I really think one person can make a change. I changed myself, only by spending a few moments with a member of my family a few years ago. That person never tried to convince me of anything; they only lived by their own standards without ever offending me. It was simple things, nothing that would make that person a pariah in our society. But I got curious about the reasons and figured it out myself way later; now I understand why.
My philosophy is simple: in a modern civilization where everything goes fast, where everyone cares about what others think of them and communication is constant, step back.
Here are the various statements I am following. This is something I defined for myself; they are not absolute rules.
- Be yourself and be prepared to assume who you are. If you don't have the latest gadget, you are not a has-been; if you don't live in a giant house, you didn't fail your career; if you don't have a top notch shiny car, nobody should ever care.
- Reuse what you have. It's not because a piece of clothing has a little scratch that you can't wear it. It's not because an electronic device is old that you should replace it.
- Opensource is a great way to revive old computers
- Reduce your food waste to 0 and eat less meat, because feeding the animals we eat requires a huge food production, more than what we finally get in the meat.
- Travel less; there is a lot more to see around where I live than on the other side of the planet. Certainly don't go on vacation far away from home only to enjoy a beach under the sun. This also means no car when it can be avoided, and if I use a car, why not carpool?
- Avoid gadgets (electronic devices that bring nothing useful) at all costs. Buy good gear (kitchen tools, workshop tools, furniture etc...) that can be repaired. If possible buy second hand. For non-essential gear, second hand is mandatory.
- In winter, heat to 19°C at most and wear warm clothes while at home.
- In summer, no A/C, but use exterior insulation and vines along the house to help cool it down. And fans + water, while wearing light clothes, to keep cool.
While some people are looking for more and more, I seek less. There is not enough for everyone on the planet, so it's important to make sacrifices.
Of course, this is how I am and I don't expect anyone to apply this, that would be insane :)
Be safe and enjoy this new year! <3
Lowtech Magazine, articles about doing things using simple technology
In this post I will share my feelings about what I like in OpenBSD.
Privacy
There is no telemetry in OpenBSD, so I don't have to worry about my privacy being respected. As a reminder, telemetry is a mechanism that consists of sending back information about the user in order to analyze how the product is used.
Moreover, the system chose to disable the microphone entirely by default: unless the root account intervenes, the microphone records silence (which avoids having to block it through usage permissions). Coming in 6.9, the camera follows the same path and will be disabled by default. To me this is a strong signal about the necessity of protecting the user.
Secure web browsers
With the addition of security features (pledge and especially unveil) to the Firefox and Chromium sources, I am more serene about using them daily. Nowadays, using a web browser is almost unavoidable, yet browsers have become extremely complex and poorly mastered. With client-side code execution via JavaScript getting more and more capabilities, performance and requirements, adding a bit of security to the equation was necessary. Although these additions are sometimes a bit disturbing to use, I am really happy to benefit from them.
With these protections added (by default), the browsers mentioned above cannot browse directories beyond what is necessary for them to work properly, plus the ~/Downloads/ and /tmp/ directories. Thus, places like ~/Documents or ~/.gnupg are totally inaccessible, which greatly limits the risk of data exfiltration by the browser.
One could roughly recreate the same feature on Linux using AppArmor, but the integration is extremely complicated (whereas it's the default on OpenBSD) and a bit less effective; it is easier to act at the right time from within the code than by wrapping the whole program in a set of rules.
PF firewall
With PF, it is very simple to read the configuration file and understand the rules in place on a server or a desktop computer. The centralization of the rules in one file and the macro system allow writing simple and readable rules.
I use the bandwidth management feature a lot to limit the rate of some applications that don't offer this setting. This is very important to me, as I am not the only user of the network and my connection is rather slow.
On Linux, it is possible to use the programs trickle or wondershaper to set up bandwidth limits; on the other hand, iptables is a nightmare to use as a firewall!
It's stable
Apart from usage on uncommon hardware, OpenBSD is very stable and reliable. I can easily reach two weeks of uptime on my desktop computer with several suspends per day. My OpenBSD servers have been running 24/7 without problems for years.
I rarely go past two weeks since I have to update the system from time to time to continue development on OpenBSD :)
Little maintenance
Keeping an OpenBSD system up to date is very simple. I run the syspatch and pkg_add -u commands every day to keep my servers updated. An upgrade is required every six months to move to the new release, but apart from some specific instructions that may sometimes apply, an upgrade looks like this:
# sysupgrade
[..wait a bit..]
# pkg_add -u
# reboot
Quality documentation
Installing OpenBSD with full disk encryption is very easy (I will have to write a post about the importance of encrypting disks and phones).
The official documentation explaining the installation of a router with NAT is a perfect step by step guide; it is a reference whenever a router needs to be installed.
Every binary in the base system (this doesn't count packages) has documentation, as do their configuration files.
The website, the official FAQ and the man pages are the only resources needed to get by. They represent a big chunk, and it is not always easy to find your way around, but everything is there.
If I had to manage for a while without internet, I would much rather be on an OpenBSD system. The man page documentation is usually enough to get by.
Imagine setting up a router doing traffic shaping on OpenBSD or Linux without the help of documents external to the system. Personally, I would choose OpenBSD 100% of the time for that :)
Ease of contribution
I really love the way OpenBSD handles contributions. I fetch the sources on my system and make my changes, generate a diff file (the difference between before/after) and send it to the mailing list. All of this can be done from a console with tools I already know (git/cvs) and email.
Sometimes, new contributors may think the people replying are really unfriendly. **This is not true.** If you send a diff and you receive criticism, it already means someone is giving you their time to explain what can be improved. I can understand this may seem harsh to some people, but it is not like that at all.
This year, I made some modest contributions to the OpenIndiana and NixOS projects; it was an opportunity to discover how these projects handle contributions. Both use GitHub and their way of doing things is very interesting, but understanding it requires a lot of work because it is relatively complicated.
OpenIndiana official website
NixOS official website
The contribution method requires a GitHub account, forking the project, cloning the fork locally, creating a branch, making the changes locally, pushing the fork to your GitHub account and using the GitHub web interface to make a "pull request". That's the short version. On NixOS, my first attempt at a pull request ended up as a request containing six months of commits on top of my little change. With good documentation and some practice, this is entirely surmountable. This way of working has some advantages, such as contributor tracking, continuous integration and easier code review, but it is as off-putting as can be for newcomers.
Top quality packages
My opinion is surely biased here (much more than for the previous items) but I sincerely think OpenBSD packages are of very high quality. Most of them work out of the box with correct default settings.
Packages that require particular instructions come with a readme file explaining what is needed, for example creating certain directories with specific permissions or how to upgrade from a previous version.
Even if, due to a lack of contributors and time (in addition to some programs using too many Linuxisms to be easy to port), not everything is available, most major free software programs are present and work very well.
I take the opportunity of this post to criticize a trend within the open source world.
- programs distributed with flatpak / docker / snap work very well on Linux but are hostile to other systems. They often use Linux-specific features and their build methods are Linux-oriented. This greatly complicates porting these applications to other systems.
- programs using NodeJS: they sometimes require hundreds or even thousands of libs, and some are quite shaky. It is really complicated to get these programs working on OpenBSD. Some libs even go as far as embedding rust code or downloading a static binary from a remote server, without a way to build it if needed or a check for whether this binary is available in $PATH. You find incredible aberrations there.
- programs requiring git to compile: the build system in the OpenBSD ports does its best to stay clean. The user dedicated to building packages has no internet access at all (blocked by the firewall with a default rule) and cannot run a git command to fetch code. There is no reason for the compilation of a program to require downloading code in the middle of the build step!
Obviously I understand that these three points above exist because they make developers' lives easier, but if you write a program and publish it, it would be very nice to think about non-Linux systems. Don't hesitate to ask on social media whether someone wants to test your code on a system other than Linux. We love "BSD friendly" developers who accept our patches to improve OpenBSD support.
What I would like to see improve
There are some things I would like to see OpenBSD improve on. This list is personal and doesn't reflect the opinion of the OpenBSD project members.
- Better ARM support
- Wifi speed
- Better performance (though it improves a bit with every release)
- FFS improvements (after crashes I sometimes find files in lost+found)
- A faster pkg_add -u
- Hardware video decoding support
- Better FUSE support with the ability to mount CIFS/samba shares
- More contributors
I am aware of all the work required here, and I am certainly not the one who will do anything about it. I would like things to improve, without complaining about the current situation :)
Unfortunately, everyone knows that OpenBSD evolves through hard work, not by sending a wishlist to the developers :)
When you consider what a small team (around 150 developers involved in recent releases) manages to do compared to other major systems, I think we are pretty efficient!
I often get questions about how I write my articles, which format I use and how I publish on three different media. This article is the opportunity to explain the whole process.
So, I use my own static generator cl-yag, which generates indexes for the whole article list but also for every tag, in html, gophermap format and gemini gemtext. After the generation of indexes, for html every article is converted into html by running a "converter" command. For gopher and gemini, the original text is picked up, some metadata is added at the top of the file and that's all.
Publishing in all three formats is complicated, and sacrifices must be made if I want to avoid extra work (like writing a version for each). For gopher, I chose to distribute articles as simple text files; the content can be markdown, org-mode, mandoc or other formats, you can't know. For gemini, they are distributed in the gemtext format, and for http it is html.
Recently, I decided to switch to the gemtext format instead of markdown as the main format for writing new texts. It has a bit fewer features than markdown, but markdown has so many implementations that the result can differ greatly from one renderer to another.
When I run the generator, all the indexes are regenerated, and each destination file's modification time is compared to the original file's. If the destination file (the published gopher/html/gemini file) is newer than the original, there is no need to rewrite it, which saves a lot of time. After generation, the Makefile running the program runs rsync to various servers to publish the new directories. One server has gopher and html, another server only gemini, and another has only gopher as a backup.
I added a Mastodon announcement feature calling a local script to publish links to new publications on Mastodon. This wasn't merged into the cl-yag git repository because it's very custom code depending on local programs. I think a blog generator is as personal as the blog itself. I decided to publish its code at first, but I am not sure it makes much sense because nobody may have the same mindset as mine to appropriate this tool; at least it's available if someone wants to use it.
My blog software supports mixing input formats, so I am not tied to a specific format for its whole life.
Here are the various commands used to convert a file from its original format to html. One can see that converting from org-mode to html on the command line isn't an easy task. As my blog software is written in Common LISP, the configuration file is also a valid Common LISP file, so I can write some code in it if required.
(converter :name :gemini :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown :extension ".md" :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md" :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc :extension ".man"
:command "cat data/%IN | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode :extension ".org"
:command (concatenate 'string
"emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
"(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
"(princ (buffer-string)))' --kill | tee %OUT"))
When I define a new article in the main file holding the metadata, I can specify the converter to use if it's not the default one configured.
;; using default converter
(post :title "Minimalistic markdown subset to html converter using awk"
:id "minimal-markdown" :tag "unix awk" :date "20190826")
;; using mmd converter, a simple markdown to html converter written in awk
(post :title "Life with an offline laptop"
:id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)
Some statistics about the various formats used in my blog.
- markdown :: 183
- gemini :: 12
- mandoc :: 4
- mmd :: 2
- org-mode :: 1
Today's Port of the Week is about Lagrange, a Gemini browser.
Lagrange official website
Information about the Gemini protocol
Curated list of Gemini clients
Lagrange is the finest browser I ever used and it's still brand new. I imported it into OpenBSD, so it will be available starting from the OpenBSD 6.9 release.
Lagrange is fantastic in the way it helps the user with the content browsed.
- Already visited links display the last visit date
- Subscribing to pages without RSS is possible for pages respecting a specific format (most of gemini space does)
- Easy management of client certificates, used for authentication
- In-page image loading, video watching and sound playing
- Gopher support
- A table of contents generated from headings
- Keyboard navigation
- Very light (dependencies, memory footprint, cpu usage)
- Smooth scrolling
- Dark and light modes
- Much more
If you are interested in Gemini, I highly recommend this piece of software as a browser.
In case you would like to host your own Gemini content without running your own infrastructure, some community servers offer hosting through secure sftp transfers.
Si3t.ch community Gemini hosting
Un bon café !
Once you get into Gemini space, I recommend the following resources:
CAPCOM feed aggregator, a great place to meet new authors
GUS: a search engine
I added a new feature to the Vger gemini server.
Vger git repository
The protocol supports status codes, including redirections, but Vger had no way to know if a user wanted to redirect a page to another. A redirection literally means "You asked for this content but it is now at that place, load it from there".
To keep with Vger's Unix way, a redirection is done using a symbolic link.
The following command would redirect requests from gemini://perso.pw/blog/index.gmi to gemini://perso.pw/capsule/index.gmi:
ln -s "gemini://perso.pw/capsule/index.gmi" blog/index.gmi
Unfortunately, this doesn't support globbing; in other words, it is not possible to redirect everything from `/blog/` to `/capsule/` without creating a symlink for every previous resource pointing to its new location. A loop can at least automate that, as in the sketch below.
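If a whole directory moved, a small loop can create all the symlinks at once; a sketch, assuming the files were moved from blog/ to capsule/ inside the served directory:
cd /var/gemini
for f in capsule/*.gmi; do
    ln -sf "gemini://perso.pw/capsule/$(basename "$f")" "blog/$(basename "$f")"
done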
In this article I will explain how to deploy your own cryptpad instance with OpenBSD.
Cryptpad official website
Cryptpad is a web office suite featuring easy real time collaboration on documents. Cryptpad is written in JavaScript and the daemon acts as a web server.
Pre-requisites
You need to install the packages git, node, automake and autoconf to be able to fetch the sources and run the program.
# pkg_add node git autoconf--%2.69 automake--%1.16
Another piece of web front-end software will be required to allow TLS connections and secure network access to the Cryptpad instance. This can be relayd, haproxy, nginx or lighttpd. I'll cover the setup using httpd and relayd. Note that the Cryptpad developers provide support only to Nginx users.
Installation
I really recommend using dedicated users for daemons. We will create a new user with the command:
# useradd -m _cryptpad
Then we will continue the software installation as the `_cryptpad` user.
# su -l _cryptpad
We will mainly follow the official instructions with some exceptions to adapt to OpenBSD:
Official installation guide
$ git clone https://github.com/xwiki-labs/cryptpad
$ cd cryptpad
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install bower
$ node_modules/.bin/bower install
$ cp config/config.example.js config/config.js
Configuration
There are a few variables important to customize:
- "httpUnsafeOrigin" should be set to the public address on which cryptpad will be available. This will certainly be a HTTPS link with an hostname. I will use https://cryptpad.kongroo.eu
- "httpSafeOrigin" should be set to a public address which is different than the previous one. Cryptpad requires two different addresses to work. I will use https://api.cryptpad.kongroo.eu
- "adminEmail" must be set to a valid email used by the admin (certainly you)
Make a rc file to start the service
We need to automatically start the service properly with the system.
Create the file /etc/rc.d/cryptpad with the following content:
#!/bin/ksh
daemon="/usr/local/bin/node"
daemon_flags="server"
daemon_user="_cryptpad"
location="/home/_cryptpad/cryptpad"
. /etc/rc.d/rc.subr
rc_start() {
	${rcexec} "cd ${location}; ${daemon} ${daemon_flags}"
}
rc_bg=YES
rc_cmd $1
Enable the service and start it with rcctl
# rcctl enable cryptpad
# rcctl start cryptpad
Operating
Make an admin account
Register yourself on your Cryptpad instance, then visit the *Settings* page of your profile and copy your public signing key.
Edit the Cryptpad file config.js: search for the pattern "adminKeys", uncomment it by removing the surrounding "/* */", delete the example key and paste your key as follows:
adminKeys: [
"[solene@cryptpad.kongroo.eu/YzfbEYwZq6Xhl7ET6AHD01w3QqOE7STYgGglgSTgWfk=]",
],
Restart Cryptpad: the user is now admin and has access to a new administration panel in the web application.
Backups
In the cryptpad directory, you need to backup `data` and `datastore` directories.
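A minimal sketch of such a backup, run as the _cryptpad user (the destination path is an example):
$ cd /home/_cryptpad/cryptpad
$ tar czf /home/_cryptpad/cryptpad-backup-$(date +%Y%m%d).tgz data datastore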
Extra configuration
In this section I will explain how to generate your TLS certificate with acme-client and how to configure httpd and relayd to publish Cryptpad. I consider it separate from the main article because if you already have nginx and a setup to generate certificates, you don't need it. If you start from scratch, it's the easiest way to get the job done.
Acme client man page
Httpd man page
Relayd man page
From here, I assume you use OpenBSD and have blank configuration files.
I'll use the domain **kongroo.eu** as an example.
httpd
We will use httpd in a very simple way. It will only listen on port 80 for all domains, to allow acme-client to work and to automatically redirect http requests to https.
# cp /etc/examples/httpd.conf /etc/httpd.conf
# rcctl enable httpd
# rcctl start httpd
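For reference, the relevant part of the example file looks roughly like this (a sketch; check /etc/examples/httpd.conf on your system):
server "default" {
	listen on * port 80
	location "/.well-known/acme-challenge/*" {
		root "/acme"
		request strip 2
	}
	location "*" {
		block return 302 "https://$HTTP_HOST$REQUEST_URI"
	}
}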
acme-client
We will use the example file as a default:
# cp /etc/examples/acme-client.conf /etc/acme-client.conf
Edit `/etc/acme-client.conf` and change the last domain block: replace `example.com` and `secure.example.com` with your domains, like `cryptpad.kongroo.eu` and `api.cryptpad.kongroo.eu` as alternative names.
For convenience, you will want to replace the path for the full chain certificate to have `hostname.crt` instead of `hostname.fullchain.pem` to match relayd expectations.
Here is what this block looks like in my setup:
domain kongroo.eu {
alternative names { api.cryptpad.kongroo.eu cryptpad.kongroo.eu }
domain key "/etc/ssl/private/kongroo.eu.key"
domain full chain certificate "/etc/ssl/kongroo.eu.crt"
sign with buypass
}
Note that with the default acme-client.conf file, you can use *letsencrypt* or *buypass* as a certificate authority.
acme-client.conf man page
You should be able to create your certificates now.
# acme-client kongroo.eu
Done!
You will want the certificate to be renewed automatically and relayd restarted upon certificate change. As stated in the acme-client.conf man page, add this to your root crontab using `crontab -e`:
~ * * * * acme-client kongroo.eu && rcctl reload relayd
relayd
This configuration is quite easy, replace `kongroo.eu` with your domain.
Create a /etc/relayd.conf file with the following content:
relayd.conf man page
tcp protocol "https" {
tls keypair kongroo.eu
}
relay "https" {
listen on egress port 443 tls
protocol https
forward to 127.0.0.1 port 3000
}
Enable and start relayd using rcctl:
# rcctl enable relayd
# rcctl start relayd
Conclusion
You should be able to reach your Cryptpad instance using the public URL now. Congratulations!
This is a simple kakoune cheat sheet to help me (and readers) remember some very useful features.
To see kakoune in action:
Video showing various features, made with asciinema.
Official kakoune website (it has a video)
Commands (in command mode)
Select from START to END position.
Use `Z` to mark the start and `alt+z i` to select until the current position.
Add a vertical cursor (useful to mimic rectangle operation)
Type `C` to add a new cursor below your current cursor.
Clear all cursors
Type `space` to remove all cursors except one.
Pasting text verbatim (without completion/indentation)
You have to use "disable hook" command before inserting text. This is done with `\i` with `\` disabling hooks.
Split selection into cursors
When you make a selection, you can use `s` and type a pattern, this will create a new cursor at the start of every pattern match.
This is useful to make replacements for words or characters.
A pattern can be a word, a letter, or even `^` to tell the beginning of each line.
How-to
In kakoune there are often multiple ways to do an operation.
Select multiple lines
Multiple cursors
Go to the first line, press `J` to create cursors below, and press `X` to select the whole line of every cursor.
Using start / end markers
Press `Z` on the first line, `alt+z i` on the last line, then press `X` to select the whole lines.
Using selections
Press `X` until you reach the last line.
Replace characters or words
Make a selection and type `|`; you are then prompted for a shell command, such as `sed`.
Sed works, but you can also select the lines, split the selection with the `s` command to place a new cursor before each word, and type the replacement directly.
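For example, with the lines selected, type `|` and then the following to replace every occurrence:
sed 's/old/new/g'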
Format lines
For my blog I format paragraphs so lines are not longer than 80 characters. This can be done by selecting lines and run `fmt` using a pipe command. You can use other software if fmt doesn't please you.
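For example, select the paragraph, type `|`, then the following (on OpenBSD, fmt takes the goal width as a positional argument):
fmt 80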
Introduction
In this article I will explain how to install and configure Vger, a gemini server.
What is the gemini protocol
A short introduction to Gemini: it's a very recent protocol that is deliberately simplistic and limited. Key features are: pages written in a markdown-like format, mandatory TLS, no headers, UTF-8 encoding only.
Vger program
Vger source code
I wrote Vger to discover the protocol and the Gemini space. I had a lot of fun with it; it was the opportunity for me to rediscover the C language with a better approach. The sources include a full test suite, which was invaluable during the development process.
Vger was really built with security in mind from the first lines of code, now it offers the following features:
- chroot and privilege dropping, and on OpenBSD it uses unveil/pledge all the time
- virtualhost support
- language selection
- MIME detection
- handcrafted man page, OpenBSD quality!
The name Vger is a reference to the first Star Trek movie (1979).
Star Trek: The Motion Picture
Install Vger
Compile vger.c using clang or gcc
$ make
# install -o root -g bin -m 755 vger /usr/local/bin/vger
Vger receives requests on stdin and returns the result on stdout. It doesn't take the given hostname into account, but a request MUST start with `gemini://`.
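Thanks to this design, you can give it a try from the shell without inetd; a quick smoke test, assuming /var/gemini/ is populated:
$ printf 'gemini://perso.pw/index.gmi\r\n' | vger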
vger official homepage
Setup on OpenBSD
Create the directory /var/gemini/; files will be served from there.
Create the `_gemini` user:
useradd -s /sbin/nologin _gemini
Configure vger in /etc/inetd.conf
11965 stream tcp nowait _gemini /usr/local/bin/vger vger
Inetd will run `vger` with the _gemini user. You need to take care that /var/gemini/ is readable by this user.
inetd is a wonderful daemon that listens on ports and runs commands upon connections. This means that when someone connects on port 11965, inetd runs vger as _gemini and passes the network data to its standard input; vger sends the result to its standard output, captured by inetd, which transmits it back to the TCP client.
Tell relayd to forward connections in relayd.conf
log connection
relay "gemini" {
listen on 163.172.223.238 port 1965 tls
forward to 127.0.0.1 port 11965
}
Make links to the certificates and key files according to relayd.conf documentation. You can use acme / certbot / dehydrate or any "Let's Encrypt" client to get certificates. You can also generate your own certificates but it's beyond the scope of this article.
# ln -s /etc/ssl/acme/cert.pem /etc/ssl/163.172.223.238\:1965.crt
# ln -s /etc/ssl/acme/private/privkey.pem /etc/ssl/private/163.172.223.238\:1965.key
Enable inetd and relayd at boot and start them
# rcctl enable relayd inetd
# rcctl start relayd inetd
From here, what's left is populating /var/gemini/ with the files you want to publish. The `index.md` file is special because it is served by default when no file is requested.
In this article I will explain how to install an LSP plugin for kakoune, adding language-specific features such as autocompletion, syntax error reporting, easier navigation to definitions and more.
The principle is to use the "Language Server Protocol" (LSP) to communicate between the editor and a daemon specific to a programming language. The same can be done with emacs, vim and neovim using the corresponding plugins.
Language Server Protocol on Wikipedia
For python, _pyls_ would be used while for C or C++ it would be _clangd_.
The how-to will use OpenBSD as a base. The package names may certainly vary for other systems.
Pre-requisites
We need _kak-lsp_ which requires rust and cargo. We will need git too to fetch the sources, and obviously kakoune.
# pkg_add kakoune rust git
Building
Official building steps documentation
I recommend using a dedicated build user when building programs from source: without a real audit you can't know what exactly happens during the build process. A mistake could do nasty things with your data.
$ git clone https://github.com/kak-lsp/kak-lsp
$ cd kak-lsp
$ cargo install --locked --force --path .
Configuration
There are a few steps: kak-lsp has its own configuration file, but the default one is good enough, and kakoune must be configured to run the kak-lsp program when needed.
Take care with the second command if you built as another user: you will have to fix the path.
$ mkdir -p ~/.config/kak-lsp
$ cp kak-lsp.toml ~/.config/kak-lsp/
This configuration file tells which program must be used depending on the programming language.
[language.python]
filetypes = ["python"]
roots = ["requirements.txt", "setup.py", ".git", ".hg"]
command = "pyls"
offset_encoding = "utf-8"
Taking the configuration block for python, we can see the command used is _pyls_.
For kakoune, a simple configuration is needed in ~/.config/kak/kakrc:
eval %sh{/usr/local/bin/kak-lsp --kakoune -s $kak_session}
hook global WinSetOption filetype=(rust|python|go|javascript|typescript|c|cpp) %{
lsp-enable-window
}
Note that I used the full path of kak-lsp binary in the configuration file, this is due to a rust issue on OpenBSD.
Link to Rust issue on github
Trying with python
To support python programs you need to install python-language-server, which is available in pip; there is no package for it on OpenBSD. If you install the program with pip, take care to have the binary in your $PATH (either by extending $PATH with ~/.local/bin/ or by copying the binary to /usr/local/bin/ or whatever suits you).
The pip command would be the following (your pip binary name may change):
$ pip3.8 install --user 'python-language-server[all]'
Then, opening a python source file should activate the analyzer automatically. If you make a mistake, you should see `!` or `*` in the leftmost column.
Trying with C
To support C programs, clangd binary is required. On OpenBSD it is provided by the clang-tools-extra package. If clangd is in your $PATH then you should have working support.
Using kak-lsp
Now that it is installed and working, you may want to read the documentation.
kak-lsp usage
I didn't look deep into it for now; autocompletion triggers automatically but may be slow in some situations.
Default keybindings for "gr" and "gd" are made respectively for "jump to reference" and "jump to definition".
Typing "diag" in the command prompt runs "lsp-diagnostics" which will open a new buffer explaining where errors are warnings are located in your source file. This is very useful to fix errors before compiling or running the program.
Debugging
The official documentation explains well how to check what is wrong with the setup. It consists of starting kak-lsp in a terminal and kakoune separately and checking the kak-lsp output. This helped me a lot.
Official troubleshooting guide
Sery is back on the fourth floor of the underworld. What mysteries are to be discovered? What enemies will be slain so we can make our path?
Everything is awesome
Sery is on the fourth floor; she found stairs to go deeper, but she also heard coins flipping. Maybe a merchant is around? That would be the right opportunity to buy weapons, armor and food.
--------------
|............|
#.@...........+
#|............|
#|..>...$.....|
#--------------
###
#
##
#
#
#
#
-- -----#
< #
| |
| |
--------
After walking to a new room south-east, she found a large room with a hobbit statue h and a potion on the floor. The potion was not identified, so using it would be very risky.
The large room was a dead end. Back in the previous room, Sery was now surrounded by enemies: a gas spore e, a green mold F and a giant bug :! She also felt hungry at the time, but she had to fight. Eggs and pancakes would be for another time.
--------------
|.F..........|
#.:.....@..e..-#
#|............|#
#|..>...d.....|#
#--------------#
### #
While fleeing to the ascending stairs to search for a merchant on this floor, a gecko blocked the way. Sery had to fight with her fists and fortunately the gecko didn't put up much resistance. But a few steps later, a goblin was also in the path. Sery's dog's location was unknown; it was certainly fighting in the previous room. With only 2 HP left, Sery decided to drink a potion to recover and go back to the room, hoping the dog could help her.
It worked! The dog was just behind and charged the goblin, which died instantly. The dog was starving and ate the freshly killed goblin. Sery was hungry too but preferred eating some pancake that wasn't fresh; it had a better taste than the remaining goblin meat tin can she had in her purse.
--------------
| |
#.............-#
#| |#
--------------- #| > |#
.........o....| #--------------#
|.............| ### #
|.......$....@d## # #
--------------- ### ## #
# # #
# # `##################
# # #--------- --
# # #| h|
#-- -----# #| |
# < # # |
| | | |
| | | |
-------- ------------
On the first steps in the room, she found a graffiti on the ground:
Atta?king a? ec| vhere the?c is rone i? usually a ?a?al mistakc!
The message didn’t make any sense. The room had a goblin statue and some gold on the ground, it’s all Sery had to know. The room was calm and nothing happened when crossing it. Sery seemed to be blessed!
-----
|....##
|@..| ###
----- #
Nearby she found a very small room with no other way out than the entrance. This looked very suspicious, and she decided to spend some time looking for a clue about a secret door. She was right! A few minutes after she started to search, she found a hidden door! The door was not locked, which was surprising. Who knows what was waiting on the other side?
After walking a bit in a small and dark corridor, a new room appeared, with a large box along a wall and a grave in the corner on the opposite side of the room.
-----
| ## --------------
#- | ### | |
#----- # # -#
## # #| |#
## #--------------- #| > |#
## # o | #--------------#
---------# | | ### #
|.......|# | ## # #
|........# --------------- ### ## #
|.......| # # #
|(@...... # # `##################
|......|| # # #--------- --
--------- # # #| h|
#-- -----# #| |
# < # # |
| | | |
| | | |
-------- ------------
The large box was locked! Without a lock pick she wasn't able to open it. After all she went through in the dungeon, anger gave her the strength to break the box padlock after a few kicks.
The box contained the following objects:
- a pyramidal amulet
- a food ration
- a black gem
- two green gems
She still had some room in her bag; it wasn't too heavy for now, so she decided to take everything from the box.
Kicking the box had consumed energy, so she decided to rest a little and eat something. The food ration from the box looked very tasty, but it could be poisoned or toxic, so she avoided it and ate goblin meat from a tin can. It wasn't good, but it did the job.
She looked at the grave; it was old and only had engraved words on it, which appeared to be:
Yes Dear, just a few more minutes…
A corridor in the room led to a dead end. There was nothing. Even after searching for a long time, Sery didn't find any way through, so she decided to go back and descend to the next floor.
On the way back, she had to fight monsters: a newt, a sewer rat, a gas spore! After the fights, hunger struck again! It was time for a good meal: goblin meat and a food ration. It hit the spot and Sery felt a lot better.
Fifth floor
On the fifth floor, a potion ! was lying on the ground. There was some light, it wasn't completely dark; without a lamp or a torch darkness would have been a real problem.
---------
|.......+
|.......|
|@......|
|..d.!..|
|........
------- -
In a corridor leading to a room in the south, she had to kill a coyote on the way. The room had a teleportation trap and an apple %, food!
Going east, she walked through a long corridor until a dead end. After searching for some time she found a way to squeeze her body through a hole and get to the other side. A boulder was in the tunnel but she was able to push it; fortunately the boulder rolled fine.
---------
| +
| |
|< |
| |
|
------- -
#
#
##
#
##
#
# # # ##
--- ------# # # @
| #################################`
| ^ |
----------
Sery found a new room with two potions and a gnome. It was hard for Sery to know if the gnome was hostile.
-.--|--
+..!G.|
# |...!.|
########d@....|
# |.....|
####` -------
The dog got triggered by the gnome's presence and ran to fight it. The gnome was definitely hostile. Sery quickly ended up in hand-to-hand combat with the gnome.
The camera's flash! She thought it should work; after all, the camera still had forty-seven pictures to take, or enemies to blind.
It worked, the poor creature got blinded, the dog was biting its back. After a few hits, the gnome died, leaving a bow on the ground.
Continuing her way, Sery found the room with the descending stairs. A homunculus i and a sewer rat r were waiting. She knew the rat was an easy target, but the other enemy was unknown. It didn't appear friendly and she doubted she could kill it without risking her life.
---------
| + -------------
| | |...........|
|< | -....>!.....|
| | |...........|
| ....i....r..|
------- - -- -------@--
# ##
# ###
## ###
# - --)--
## + |
# # | ) |
# # # ######## |
--- ------# # # # | |
| #################################` -------
| ^ |
----------
Sery decided to go back to the long corridor which had cross ways.
---------
| + -------------
| | | |
|< | - >! |
| | | |
| |
------- - -- ------- --
# ##
# ###
## ###
# -.--|--
## #########i@....|
# ####### |..)..|
# # # # ########......|
--- ------# # # # |)....|
| #################################` -------
| ^ |
----------
The homunculus was fast! It found Sery back where they had met. Sery was in trouble. The homunculus seemed hard to escape, and while fleeing through a corridor, a dwarf zombie Z blocked the way.
She tried to fight it but lost 9 HP in 2 hits; the beast was very powerful. It was time to drink the random potions she had gathered along the journey. They were unidentified, but there was no choice, except praying maybe.
Praying! Sery wasn't a believer but praying was the best she could do. Her prayer was deep and pure; she only wanted some hope for her future and her quest.
The Lady heard her prayer, and Sery got surrounded by a shimmering light. The dwarf zombie attacked Sery but got pushed back by some energy field. Sery felt a lot better: her health was fully recovered and even increased.
#########-.....|
####### |..)..|
# #Z@#####......|
# # |)....|
#########` -------
Sery got a second chance, and she certainly wanted to make good use of it. At this time, the only thought in her mind was: RUN AWAY.
She did run, very fast, to the stairs leading deeper. No enemies troubled her retreat.
Sixth floor
No time to look around the room she arrived in: Sery got attacked by a brown mold, which in turn was killed by her dog.
------
|....|
|....|
|.d@.|
|....|
|....|
|....|
--.---
The room's only way out was to the south. Finding a merchant was becoming urgent. Her food supplies were depleting. She had a lot of money, but that is not helpful in the middle of the underground among the monsters.
In the south room there was a lichen F, but it seemed peaceful, or maybe it was guarding the stairs descending to the seventh floor, who knows? The room had no other entrance than the one by which Sery came, but after examining the walls, she found a door.
------
| |
| |
| < |
| |
| |
| |
-- ---
####
#
#
##
----- - -----
| | |....|
|.F...-#####@....|
|> | |....|
------- .!...
-----
Nothing unusual on this floor. Continuing her progress through the tunnels, she ended up in a dark room; she wasn't able to see further than a meter away.
------
| | -------------
| | | .d|
| < | #- .@|
| | #---- -.-
| | #
| | ##
-- --- #
#### #
# #
# #
## #
----- - ------#
| | | |#
| -##### |#
|> | | |#
------- | #
------
One more step and she came face to face with a homunculus. Fortunately the dog was just behind and not busy fighting other aggressive animals. The dog killed it fast. But then another homunculus came, which also got killed by the dog.
In the end, those homunculi are pretty weak.
Room after room, with only emptiness as a friend, Sery walked for a long time. And then he appeared! The merchant!
------
| | ------------- ------
| | | | |????|
| < | #- | |????|
| | #---- - - |???+|
| | # ## |??+?|
| | ## # |+??+|
-- --- # # |.@.
#### # ---- -# -@-
# # | -# #
# # | | | -- ------ ###
## # | -######| | #
----- - ------# | | #| | #
| | | |# | < ## #### ` | #
| -##### |# ------ ###### # | ###### - ----
|> | | |# ####### | _ | # | |
------- | # | | ## |
------ --------- ------
He was a bookseller, selling scrolls… Sery was so disappointed by this that she felt helpless for a moment.
In this article I will explain how to download and run the FuguITA OpenBSD live-cd. It is not an official OpenBSD project (it is not endorsed by the OpenBSD project), but it has been available for a long time and is carefully updated at every release and erratum published.
FuguITA official homepage
I do like this project and I am running their European mirror; downloading it from Europe used to take really long.
Please note that if you have issues with FuguITA, you must report it to the FuguITA team and not report it to the OpenBSD project.
Preparing
Download the img or iso file on a mirror.
Mirror list from official project page
The file is gzipped; run gunzip on the img file FuguIta-6.8-amd64-202010251.img.gz (the name may change over time because images get updated to include new errata).
Then, copy the file to your usb memory stick. This can be dangerous if you don't write the file to the correct disk!
To avoid mistakes, I plug in the memory stick only when I need it, then I check the last lines of the dmesg command output, which look like:
sd1 at scsibus2 targ 1 lun 0: <Corsair, Voyager 3.0, 1.00> removable serial.1b1c1a03800000000060
sd1: 15280MB, 512 bytes/sector, 31293440 sectors
This tells me my memory stick is the sd1 device.
Now I can copy the image to the memory stick:
# dd if=FuguIta-6.8-amd64-202010251.img of=/dev/rsd1c bs=10M
Note that I use /dev/rsd1c for the sd1 device. I've added an r to use the raw mode (in opposition to buffered mode) so it gets faster, and the c stands for the whole disk (there is a historical explanation).
Starting the system
Boot on your usb memory stick. You will be prompted for a kernel; you can wait or type enter, the default is to use the multiprocessor kernel and there is no reason to use something else.
You will see a prompt "scanning partitions: sd0i sd1a sd1d sd1i" and be asked which one is the FuguIta operating device; the proposed default should be the correct one.
FROM HERE, YOUR KEYBOARD IS IN QWERTY.
Just type enter.
The second question will be the memory disk allowed size (using TMPFS), just press enter for "automatic".
Then, a boot mode will be shown: the best is mode 0 for a livecd experience.
Official documentation in regards to FuguITA specifics options
The keyboard type will be asked; just type the layout you want. Then answer the questions:
- root password
- hostname (you can just press enter)
- IP to use (v4, v6, both [default])
When prompted for your network interfaces, WIFI may not work because the livecd doesn't ship any firmware.
Finally, you will be prompted with C for console or X for xenodm. THERE IS NO USER except root, so if you start X you can only log in as root, which I STRONGLY discourage.
You can log in on the console as root, use the two commands "useradd -m username" and "passwd username" to create a user with a password, and then start xenodm.
The livecd can restore data from a local hard drive, this is explained in the start guide of the FuguITA project.
Conclusion
Having FuguITA around is very handy. You can use it to check your hardware compatibility with OpenBSD without installing it. Packages can be installed so it's perfect to check how OpenBSD performs for you and if you really want to install it on your computer.
You can also use it as a usb live system to transport OpenBSD anywhere (the hardware must be compatible) by using the persistent mode, encryption being a feature! This may be very useful for people traveling a lot who don't necessarily want to travel with an OpenBSD laptop.
As I said in the introduction, the team is doing a very good job at producing FuguITA releases shortly after the OpenBSD release, and they continuously update every release with new erratas.
In this article I will share my opinion about things I like in OpenBSD; this may include a short rant about recent open source practices not helping non-Linux support.
Privacy
There is no telemetry on OpenBSD. It's good for privacy; there is nothing to turn off because nothing reports information in the first place.
The default system settings prevent the microphone from recording sound, and the webcam can't be accessed without user consent because the device is owned by root by default.
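To illustrate what consent means here, recording and webcam access have to be explicitly granted by root; a sketch, where the device name and user name are examples:
# sysctl kern.audio.record=1
# chown solene /dev/video0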
Secure firefox / chromium
While the security features added to the market-dominating web browsers (pledge and mainly unveil) can be cumbersome sometimes, they are really a game changer compared to using those browsers on other operating systems.
With those security features enabled (by default), the web browsers are only able to access files in a few user-defined directories, like ~/Downloads or /tmp/ by default, plus some other directories required for the browsers to work.
This means your ~/.ssh or ~/Documents and everything else can't be read by an exploit in a web browser or a malicious extension.
It's possible to replicate this on Linux using AppArmor, but it's absolutely not out of the box and requires a lot of tweaks from the user to get a usable Firefox. I did try; it worked, but it requires a very good understanding of Firefox's needs and of the AppArmor profile syntax.
PF firewall
With this firewall, I can quickly check the rules of my desktop or server and understand what they are doing.
I also make heavy use of the bandwidth management feature to throttle programs that don't provide any rate limiting themselves. This is very important to me.
Linux users could use software such as trickle or wondershaper for this.
It's stable
Apart from the use of some funky hardware, OpenBSD has proven very stable and reliable for me. I can easily reach two weeks of uptime on my desktop with a few suspends/resumes every day. My servers have been running 24/7 without incident for years.
I rarely go further than two weeks on my workstation because I use the development version -current and I need to upgrade once in a while.
Low maintenance
Keeping my OpenBSD up-to-date is very easy. I run syspatch and pkg_add -u twice a day to keep the system up to date. A release every six months requires a bit of work.
Basically, upgrading every six months looks like this, apart from some specific instructions explained in the upgrade guide (a database server major upgrade for example):
# sysupgrade
[..wait..]
# pkg_add -u
# reboot
Documentation is accurate
Setting up an OpenBSD system with full disk encryption is easy.
Documentation to create a router with NAT is explained step by step.
Every binary or configuration file has its own up-to-date man page.
The FAQ, the website and the man pages should contain everything one needs. This represents a lot of information, it may not be easy to find what you need, but it's there.
If I had to be without internet for some time, I would prefer an OpenBSD system. The embedded documentation (man pages) should help me achieve what I want.
Consider configuring a router with traffic shaping on OpenBSD and another one with Linux, both without Internet access. I'd 100% prefer reading the PF man page.
Contributing is easy
This has been a hot topic recently. I really enjoy the way OpenBSD manages contributions. I download the sources on my system anywhere I want, modify them, generate a diff and send it to the mailing list. All of this can be done from a console with tools I already use (git/cvs) and email.
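To make it concrete, a typical ports contribution could look like this; a sketch, where the port path and file names are hypothetical:
$ cd /usr/ports/misc/someport
$ cvs diff -uNp > ~/someport.diff
$ mail -s "misc/someport: update to 1.2" ports@openbsd.org < ~/someport.diff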
There could be an entry barrier for new contributors: you may feel people replying are not kind to you. **This is not true.** If you send a diff and receive criticism (reviews) of your code, it means some people spent time teaching you how to improve your work. I do understand some people may find it rude, but it's not.
This year I modestly contributed to the OpenIndiana and NixOS projects; this was the opportunity to compare how contributions are handled. Both projects use github. The workflow is interesting, but understanding and mastering it is extremely complicated.
OpenIndiana official website
NixOS official website
One has to make a github account, fork the project, create a branch, make the changes, commit locally, push to the fork, and use the github interface to open a merge request. This is only the short story. On NixOS, my first attempt ended in a pull request involving 6 months of old commits. With good documentation and training this can be overcome, and I think this method has some advantages, like easy continuous integration of the commits and easy code review, but it's a real entry barrier for new people.
High quality packages
My opinion may be biased on this (even more than for the previous items), but I really think the quality of OpenBSD packages is very high. Most packages should work out of the box with sane defaults.
Packages requiring specific instructions have a README file installed with them explaining how to setup the service or the quirks that could happen.
Even if we lack some packages due to lack of contributors and time (in addition to some packages relying too much on Linux to be easy to port), major packages are up to date and working very well.
I will take the opportunity of this article to publish a complaint about some general trends in open source.
- programs distributed only as flatpak / docker / snap are really Linux friendly, but this is hostile to non-Linux systems. They often make use of Linux-only features, and the build systems are made for Linux distribution methods.
- nodeJS programs: they are made out of hundreds or even thousands of libraries and are often fragile even on Linux. It is a real pain to get them working on OpenBSD. Some node libraries embed rust programs, some will download a static binary and use it with no fallback, or will even try to compile source code instead of using that library/binary from the system when installed.
- programs using git to build: our build process does its best to be clean, the dedicated build user **HAS NO NETWORK ACCESS** and won't run those git commands. There is no reason a build system has to run git to download sources in the middle of the build.
I do understand that the three items above exist because it is easy for developers. But if you write software and publish it, it would be very kind of you to think about how it works on non-Linux systems. Don't hesitate to ask on social media if someone is willing to build your software on a different platform than yours if you want to improve support. We do love BSD-friendly developers who won't reject OpenBSD-specific patches.
What I would like to see improved
This is my own opinion and doesn't represent the opinions of the OpenBSD team members. There are some things I wish OpenBSD could improve:
- Better ARM support
- Wifi speed
- Better performance (gently improving every release)
- FFS improvements in regards to reliability (I often get files in lost+found)
- Faster pkg_add -u
- hardware video decoding/encoding support
- better FUSE support and mount cifs/smb support
- scaling up the contributions (more contributors and reviewers for ports@)
I am aware of all the work required here, and I'm certainly not the person who will improve those. These are not complaints, just wishes.
Unfortunately, everyone knows OpenBSD features come from hard work and not from wishes submitted to the developers :)
When you consider how small the team is in comparison to the other major OSes, I really think a good and efficient job is done here.
Since my previous article about a continuous integration service to track OpenBSD ports contributions, I made a simple proof of concept that allowed me to track what works and what doesn't.
The continuous integration goal
A first step for the CI service would be to create a database of diffs sent to ports. This would allow people to track what has been sent but not yet committed, and what the state of each contribution is (builds/doesn't build, applies/doesn't apply). I would proceed following this logic:
- a mail arrives and is sent to the pipeline
- it's possible to find a pkgpath out of the file
- the diff applies
- distfiles can be fetched
- portcheck is happy
Step 1 is easy: it could be mails dumped into a directory that gets scanned every X minutes.
Step 2 is already done in my POC using a shell script. It's quite hard and required tuning. Submitted diffs are made with diff(1), cvs diff or git diff. The important part is to retrieve the pkgpath, like "lang/php/7.4". This allows checking that the port exists.
Step 3 is important, I found three cases so far when applying a diff:
- it works, we can then register in the database it can be used to build
- it doesn't work, human investigation required
- the diff is already applied and patch thinks you want to reverse it. It's already committed!
Being able to check if a diff is applied is really useful. When building the contributions database, a daily check of the patches known to apply can be done. If a reverse patch is detected, this means it has been committed, and the entry can be deleted from the database. This would be rather useful to keep the database clean automatically over time.
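The gist of this detection with patch(1) could look like the following sketch, assuming diffs are made from the top of the ports tree:
cd /usr/ports
if patch -C -p0 < /tmp/contribution.diff; then
        echo "diff applies, ready to build"
elif patch -C -R -p0 < /tmp/contribution.diff; then
        echo "reverse applies: already committed"
else
        echo "needs human attention"
fi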
Step 4 is an inexpensive extra check to be sure the distfiles can be downloaded over the internet.
Step 5 is also an inexpensive check: running portcheck can report easy-to-fix mistakes.
All the steps only require a ports tree. Only step 4 could be tricked by someone malicious, using a patch to make the system download huge files or files with legal concerns, but such a message would also appear on the mailing list, so the risk is quite limited.
To go further in the automation, building the port is required, but it must be done in a clean virtual machine. We could then record in the database whether the diff produced a package correctly and, if not, provide the compilation log.
Automatic VM creation
Automatically creating an OpenBSD-current virtual machine was tricky but I've been able to sort this out using vmm, rsync and upobsd.
The script downloads the latest sets using rsync; that directory is served locally over HTTP. I use upobsd to create an automatic installation bsd.rd including my autoinstall file. Then it gets tricky :)
vmm must be started with its storage disk AND the bsd.rd; as it's an auto install, it would reboot after the install finishes and then install again and again.
I found that using the parameter "-B disk" makes the VM shut down after installation, for some reason. I can then wait for the VM to stop and start it again without bsd.rd.
My vmm VM creation sequence:
upobsd -i autoinstall-vmm-openbsd -m http://localhost:8080/pub/OpenBSD/
vmctl stop -f -w integration
vmctl start -B disk -m 1G -L -i 1 -d main.qcow2 -b autobuild_vm/bsd.rd integration
vmctl wait integration
vmctl start -m 1G -L -i 1 -d main.qcow2 integration
The whole process is long though. A derived qcow2 image could be used after creation to try each port faster, until we want to update the VM again.
Multiple VMs could be used at once for parallel testing, making good use of the host resources.
What's done so far
I'm currently able to deposit emails as files in a directory and run a script that extracts the pkgpath, tries to apply the patch, downloads distfiles, runs portcheck and runs the build on the host using PORTS_PRIVSEP. If the port compiles fine, the email file is deleted and a proper diff is made from the port and moved into a staging directory where I'll review the diffs known to work.
The script stops on blocking errors and writes a short text report for each port. I intended to send these as replies to the mailing list at first, but maintaining a parallel website for people working on ports seems a better idea.
First episode of maybe a series!
Let's play NetHack and write a story along the way. I find nethack to be a wonderful game despite its quite simple graphics. In this game, you can do more actions than in most modern games. I can dip a towel in a fountain to make it wet, and wear it on my head. Maybe it would protect me from heat? Who knows.
As this leaves a lot of room for imagination, in every serious nethack game I play I create a story in my head and try to imagine the various situations. So maybe I could write them down?
Welcome to the underworld Gehennom, you will read the story of Sery the human female neutral tourist and her dog. She has to find the Amulet of Yendor and come back to the surface, for some reasons.
@ is Sery and d is her dog.
Arrival - first floor
{ is a fountain, # a sink, - an open door and + a closed door.
In her inventory, she has 875 gold (tourists are rich!), 24 darts to throw at enemies, 2 fortune cookies, some various food (goblin meat in a tin can, eggs, a carrot, an apple, pancakes…), 4 scrolls of magic mapping, 2 healing potions, an expensive camera and an uncursed credit card.
---+---------
|......{....-
|@.........#|
|d..........|
-------------
She went to the closed door but it resisted; after kicking it three times, the door opened! After walking around in tunnels, she only found empty rooms leading to other tunnels.
# are corridors (when they are not sinks in a room).
--------
# .. |
#| .. |
#| .. |
#---|----
# ##
###########
## #
# #
# #
----------|---### ##d@##
| # # ###
| | #---.---------
| -#######|..... { -
| | |<.... #|
| | |..... |
-------------- -------------
At the end of a corridor, Sery was stuck, but after searching around for secret passages, she found a hidden passage to the first room. Back to square one.
--------
# |
#| |
#| |
#---|----
# ##
###########
## # # #
# # #######
# # # #
----------|---### ############ #d
| # # ### @
| | #--- ---------#
| -#######| {....-#
| | |< ......#|
| | | ........|
-------------- -------------
After she heard some noise in a corridor, she stumbled on a boulder, but it was impossible to move it to clear the corridor.
A new room was found, with a large box ( in it. What could be in this box?
------
|....|
##d.@..+
###|....|
## |....|
##`|.(..|
# |....|
# ------
While walking toward the box, her dog suddenly disappeared, falling into a trap door! Sery shortened her exploration of the first level, only opening the box before going to look after her dog.
The large box was locked; without weapons or tools to unlock it, Sery kicked it a dozen times until it opened. What a disappointment when she saw it was empty!
Second floor
----------
|......@.|
.........|
|........|
|....>...|
|.....$..|
----------
Sery jumped into the trap to descend to the level below; her dog wasn't in the room though. There were five gold pieces to loot and stairs to descend to the third level. She needed to find her dog before continuing the exploration to the third level.
In the adjacent corridor, the dog was found sound and safe!
After continuing the exploration, a room was found with enemies!
F lichen, o goblin and a : newt! That was a lot of enemies for a simple tourist. She wanted to pull them into a corridor and let her dog take care of them. This was a good Spartan strategy after all!
----------
| |
# |
#| |
#| > |
#| |
#----------
#
#
-------- #
|....... #
.......F| -------#
|:....o.@d#####......|#
|.......| | #
|....... | |
|...... | |
------- -------
Unfortunately, when a lichen is in contact with you, you can't escape. It took a while for Sery to kill the lichen and retreat into the corridor; she received a few hits from the lichen and the goblin (HP 6/10). She heard some noises while staying in the corridor; after coming back into the room, the dog had finished killing the newt and the goblin seemed to have run away.
--------
|.....o.
........|
|.....d.@
|.......|
|.......
|......
-------
The dog then attacked the goblin and killed it rather quickly. It was really fortunate that Sery was in the company of her dog.
After walking a bit to continue the exploration, Sery stumbled on a sewer rat; she got hit rather hard and didn't have much HP left! While retreating to the last room, looking for the dog who had stayed back eating the goblin corpse, the dog came back to her, bringing an iron skull cap certainly found on the dead goblin. In one bite, the dog killed the rat.
After some rest to recover a few HP, Sery went back to exploring. The exploration was quiet and easy: rooms with unlocked doors, and she found the stairs going up. Nothing of interest was to be found, so it was time to go to the third level. A newt and a lichen were encountered in the corridors but opposed little resistance to the dog.
--------- ----------
| | |........|
| | ---------- #.........|
| | | | #|.d..@...|
| | | | #|F...>...|
| | | | #|........|
- -|--- -# ###- | #----------
### #### ## | | #
# `##`### --- ------ #
### ### ## --------- #
##### # ##### | | #
---------|-## ###### ## | -------#
| |# -- ---|----- # | -###### |#
| |# | | ### | | | #
| |# | | # | | | |
| -# | #### | | | |
| < | ------------ --------- -------
-----------
Third floor
The room where Sery arrived on the third level had an enemy, a huge bug x, and some money in a corner near a door.
--------------
|...@........|
|....d.......|
....x.......$|
|............+
--------------
The door required two kicks to be opened.
In the next room, Sery saw a bug before entering, so she immediately swapped places with her dog in the corridor to let her defender do their job.
< are the stairs leading up.
--------------
| < |
| |
|
| ##
-------------- #
## --+-
##d@.x..|
.$|
.
-
As usual, the dog took care of the enemies. A new room was found with multiple exits, and some openings in previous rooms weren't explored yet either. There was a lot of exploration to be done in this area.
--------
|......+
|......|
+>.{...|
-------------- |......|
| < | |....@.|
| | -----.-- ...
| ######
| ## #####
-------------- # #
## ---|-
#### |
| |
|
-----
While exploring, Sery had to fight a giant rat; she didn't know where her dog was, so she had to fight for real this time.
--------
---- | +
.... | |
.. ######################-> { |
r #-------------- | |
#@##### #| < | | |
# # ###| | ----- --
## ### | ######
# ## | ## #####
# ## -------------- # #
# # ## ---|-
## ##### #### |
#- ------ #### | |
+ | # |
| > ### -----
| |###
| |
--------
Thinking about her inventory, she panicked and used her camera. The flash blinded the giant rat and it ran away! Unfortunately, another giant rat came from the left corridor. She tried to use her camera again but it didn't work as expected, as the giant rat was still standing in the corridor. The blinding effect didn't seem very effective, because a few seconds later the first giant rat was back again!
----
....
..
r
r@#####
# #
##
She had no choice but to run away, or at least fight them one at a time in a corridor. She went backward, suffered a giant rat bite, and found her dog on the way, coming to the rescue. While she let her dog fight, a third rat came from behind. This one she really had to fight; no escape was possible with the dog fighting two rats in the corridor on the other side.
Camera flash, it worked! Time to throw darts: one dart was enough to kill the rat, though she missed it a few times. The rat never missed a bite; Sery was in poor health at this moment.
The dog killed the two rats and she was safe, for now.
While walking around to find her way, she got surprised by a giant zombie Z who hit her hard. She had only 1 health point left. Death was close. What could she do? Try the camera flash, drink a potion, flee until her dog could come and bite the zombie?
She decided to drink the healing potion and then withstand enough hits from the zombie to blind it while the dog behind it killed the undead. It was a good idea: at the very moment she drank the healing potion, the zombie hit her for one health point. She would have been dead if she hadn't drunk that potion. Then the dog killed the monster and our duo leveled up!
It was time to finish exploring and get deeper into the underworld. A ring = was on the ground in the last room. It was a silver ring.
--------
-------------- | +
#. | | |
#| | ######################-> { |
#-- ----------- #-------------- | |
######### #| < | | |
# # ###| | ----- --
# ## ### | ######
-----------# # ## | ## #####
|.......=@.# # ## -------------- # #
|.........| # # ## ---|-
|......... ## ##### #### |
|....`....| #- ------ #### | |
|.. .....| + | # |
--- ------ | > ### -----
| |###
| |
--------
It would be foolish to wear the ring without identifying it first; it could be a cursed ring you can't remove, one that makes you blind or provokes other unwanted effects.
Fourth floor
Arriving at the fourth floor, Sery found a green gem. Feeling this floor would be quite complicated, she decided to read one of her mapping scrolls.
-------
-- | --- --- ---
| -- | ------ --- ---- -- ---- -- -- --
| -|-- | | | --- -- --- -- | ---- |
| --| | | ---- -- | | > |
| || ---------- -- | --------------- -- | --- |
| | || ------- | -- | --- -- -- --- --
| |--| ------- --- | ---- --- -- |
| | | -- --- | | |---- -- --
| -- | -- ------- ---- -- - -- --- -- | | --
-- --| | | -- | |-- --- --- --- |
| |-- | --- | -- -| --- -------- |
| | | --------- ---- | -- -- --| --- |
| -- | |.....--.@-- -- | ------ |-- -- -- | |
---| | ----.......| ------ | |- | ---|- -- | |
-- | --......-| -- | -- | --- | -- -- --
--- | --........| -- | | | | --- -----
-- -- |.........| | -- --- -- | ----
| -- |......--.| | -- |--- --- |
-- | --.|.------ ---- ------ ----
---- ----- ---
After the whole map got revealed in her mind, she came face to face with a dwarf h wielding a dagger. He really didn't seem friendly, but he didn't attack her yet.
The whole area was very dark, without a torch or a light source, exploring this level would be very tedious.
After exploring the room, looking for interesting loot on the ground, the dwarf attacked her. The stabbing was very painful. Sery retreated back to the upper stairs; she wanted to reach the level below through the other stairs on this level. In the room, she found her dog, which had stayed behind, fighting a gecko and a giant rat.
She started to feel hungry; fortunately she had gone into the underworld with a lot of food. She decided to eat a fortune cookie. Cracking it open, she found a paper saying: They say that you should never introduce a rope golem to a succubus. This didn't make much sense to her though.
While walking toward the other stairs, Sery found a graffiti on the ground: ??urist? we?r shirts loud enougn to wake t?e ?e?d.. As with the fortune cookie, this didn't make much sense.
On her way, she fought various enemies: a red mold, a newt, rats, and she found a banana. Descending the stairs, she was surprised to see they didn't lead to the fourth floor with the dwarves; it was a parallel fourth floor. Could it be possible?? There were a newt and money in the room, and it wasn't dark.
-- -----
.....@..
|....d.|
|...:.$|
--------
She was angry.
The dog jumped on the newt and killed it. The duo got enough experience to reach level four. The dog, being a little dog, grew up into a dog.
After a short rest to eat and recover health, Sery went back into the corridors to find her way onward and continue her quest.
--------------
|............|
#.@...........+
#|............|
#|..>...$.....|
#--------------
###
#
##
#
#
#
#
-- -----#
< #
| |
| |
--------
In the room she found stairs going down to the level below. Would it be a good idea to descend now, or should she explore the area first? She had a lot of money; finding a merchant to buy armor and weapons would be a good idea.
To be continued
That's all for today! Please tell me if you enjoyed it!
This article is about making your own mail server using the Slackware linux distribution, sendmail and cyrus-imap. This choice is because I really love Slackware and I also enjoy non-mainstream stacks. While everyone would recommend postfix/dovecot, I prefer sendmail/cyrus-imap. Please note this article contains ironic statements, which I will try to write with some emphasis.
While some people use fossil fuel cars, some people use Slackware.
If you are used to clean, reproducible and automated deployments, this how-to is the total opposite. This is the /Slackware/ way.
Slackware
Slackware is one of the oldest linux distributions out there (maybe the oldest, along with Debian) and it's still usable. The last release (14.2) is 4 years old but there are still security updates. I chose to use the development branch slackware-current for this article.
I discovered an alternative to Windows in the early 2000s with a friend showing me a « Linux » magazine, featuring Slackware installation CDs and the instructions to install it. It was my very first contact with Linux and open source ever. I used Slackware multiple times over the years, and it was always a great system for me on my main laptop.
The Slackware specifics could be summed up as "not changing much" and "quite limited". Slackware never changes much between releases; from 2010 to 2020, it's pretty much the same system when you use it. I say it's rather limited package-wise: the default Slackware installation requires something like 15 GB on your disk because it bundles KDE and all the kde apps, a bunch of editors (emacs, vim, vs, elvis), lots of compilers/interpreters (gcc, llvm, ada, scheme, python, ruby etc.). While it provides a LOT of things out of the box, that's really all Slackware can offer: if something isn't in the packages, you need to install it yourself.
Full Disk Encryption or nothing
I recommend to EVERYONE the practice of having full disk encryption (phone, laptop, workstation, servers). If your system gets stolen, you will only lose hardware.
Without encryption, the thief can access all your data forever.
Slackware provides a file README_CRYPT.txt explaining how to install on an encrypted partition. Don't forget to tell the bootloader LILO about the initrd, and keep in mind the initrd must be recreated after every kernel upgrade.
Use ntpd
It’s important to have a correct time on your server.
# chmod +x /etc/rc.d/rc.ntpd
# /etc/rc.d/rc.ntpd start
Disable ssh password authentication
In /etc/ssh/sshd_config there are two changes to make: turn UsePAM yes into UsePAM no and add PasswordAuthentication no.
Changes can be applied by restarting ssh with /etc/rc.d/rc.sshd restart.
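The two relevant lines should end up like this:
UsePAM no
PasswordAuthentication no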
Before enabling this, don't forget to deploy your public key to a user who is able to become root.
Get a SSL certificate
We need an SSL certificate for the infrastructure, so we will install certbot. Unfortunately, certbot-auto doesn't work on Slackware because the system is unsupported, so we will use pip and call certbot in standalone mode so that we don't need a web server.
# pip3 install certbot
# certbot certonly --standalone -d mydomain.foobar -m usernam@example
My domain being kongroo.eu, the files are generated under /etc/letsencrypt/live/kongroo.eu/.
Four DNS entries have to be added for a working email server:
- SPF to tell the world which addresses have the right to send your emails
- MX to tell the world which addresses will receive the emails and in which order
- DKIM (a public key) to allow recipients to check that your emails really come from your servers (they are signed using a private key)
- DMARC to tell recipients what to do with mails not respecting SPF
SPF
Simple: add an entry with v=spf1 mx if you want to allow your MX servers to send emails. Basically, for simple setups, the same server receives and sends emails.
@ 1800 IN SPF "v=spf1 mx"
MX
My server with the address kongroo.eu will receive the emails.
@ 10800 IN MX 50 kongroo.eu.
DKIM
This part is a bit more complicated. We have to generate a pair of public and private keys and run a daemon that signs outgoing emails with the private key, so recipients can verify the email signatures using the public key available in the DNS. We will use opendkim; I found this very good article explaining how to use opendkim with sendmail.
Opendkim isn't part of the slackware base packages; fortunately it is available in slackbuilds. You can check my previous article explaining how to set up slackbuilds.
# groupadd -g 305 opendkim
# useradd -r -u 305 -g opendkim -d /var/run/opendkim/ -s /sbin/nologin \
-c "OpenDKIM Milter" opendkim
# sboinstall opendkim
We want to enable opendkim at boot; as it's not a service from the
base system, we need to "register" it in rc.local and make both
scripts executable.
Add the following to /etc/rc.d/rc.local:
if [ -x /etc/rc.d/rc.opendkim ]; then
/etc/rc.d/rc.opendkim start
fi
Make the scripts executable so they will be run at boot:
# chmod +x /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.opendkim
Create the key pair:
# mkdir /etc/opendkim
# cd /etc/opendkim
# opendkim-genkey -t -s default -d kongroo.eu
Get the content of default.txt; we will use it as the content for a
TXT entry in the DNS. Select only the content between parentheses,
without the double quotes: your DNS tool (like on Gandi) may take
everything without warning, which would produce an invalid DKIM
signature. Been there, done that.
The file should look like:
default._domainkey IN TXT ( "v=DKIM1; k=rsa; t=y; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB" )
But the content I used for my entry at Gandi is:
v=DKIM1; k=rsa; t=y; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB
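Once the record is published, you can verify that the private key and the DNS entry match; the opendkim-testkey tool ships with opendkim:
$ opendkim-testkey -d kongroo.eu -s default -vvv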
Now we need to configure opendkim to use our keys. Edit
/etc/opendkim.conf to change the following lines already there:
Domain kongroo.eu
KeyFile /etc/opendkim/default.private
ReportAddress postmaster@kongroo.eu
DMARC
We have to add a DMARC record; this may help being accepted by big
corporate mail servers.
_dmarc.kongroo.eu. IN TXT "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@kongroo.eu;"
This will tell the recipient that we don't give specific instructions
on what to do with suspicious mails from our domain, and tell
postmaster@kongroo.eu about the reports. Expect a daily mail from
every mail server reached during the day to arrive at that address.
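You can check that the record is visible once the DNS has propagated, for example with dig:
$ dig +short TXT _dmarc.kongroo.eu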
Install Sendmail
Unfortunately the Slackware team dropped sendmail in favor of postfix
in the default install. This may be a good thing, but I want
sendmail. Good news: sendmail is still in the extra directory.
I wanted to use citadel but it was really complicated, so I went
with sendmail.
Installation
Download the two sendmail txz packages from a mirror, in the "extra"
directory:
https://mirrors.slackware.com/slackware/slackware64-current/extra/sendmail/
Run /sbin/installpkg
on both packages.
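For example (the exact file names depend on the version currently in extra):
# /sbin/installpkg sendmail-*.txz sendmail-cf-*.txz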
Configuration
We will disable postfix.
# sh /etc/rc.d/rc.postfix stop
# chmod -x /etc/rc.d/rc.postfix
Enable sendmail and saslauthd
# chmod +x /etc/rc.d/rc.sendmail
# chmod +x /etc/rc.d/rc.saslauthd
All the configuration will be done in /usr/share/sendmail/cf/cf; we
will use a default template from the package. As explained in the cf
files, we need to use a template and rebuild from this directory
containing all the macros.
# cp sendmail-slackware-tls-sasl.mc /usr/share/sendmail/cf/cf/config.mc
Every time we want to rebuild the configuration file, we need to apply
the m4 macros to have the real configuration file.
# sh Build config.mc
# cp config.cf /etc/mail/sendmail.cf
My config.mc file looks like this (I stripped the comments):
include(`../m4/cf.m4')
VERSIONID(`TLS supporting setup for Slackware Linux')dnl
OSTYPE(`linux')dnl
define(`confCACERT_PATH', `/etc/letsencrypt/live/kongroo.eu/')
define(`confCACERT', `/etc/letsencrypt/live/kongroo.eu/cert.pem')
define(`confSERVER_CERT', `/etc/letsencrypt/live/kongroo.eu/fullchain.pem')
define(`confSERVER_KEY', `/etc/letsencrypt/live/kongroo.eu/privkey.pem')
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
define(`confTO_IDENT', `0')dnl
FEATURE(`use_cw_file')dnl
FEATURE(`use_ct_file')dnl
FEATURE(`mailertable',`hash -o /etc/mail/mailertable.db')dnl
FEATURE(`virtusertable',`hash -o /etc/mail/virtusertable.db')dnl
FEATURE(`access_db', `hash -T<TMPF> /etc/mail/access')dnl
FEATURE(`blocklist_recipients')dnl
FEATURE(`local_procmail',`',`procmail -t -Y -a $h -d $u')dnl
FEATURE(`always_add_domain')dnl
FEATURE(`redirect')dnl
FEATURE(`no_default_msa')dnl
EXPOSED_USER(`root')dnl
LOCAL_DOMAIN(`localhost.localdomain')dnl
INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@localhost')
MAILER(local)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
define(`confAUTH_OPTIONS', `A p y')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
TRUST_AUTH_MECH(`LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtps, Name=MSA-SSL, M=Esa')dnl
LOCAL_CONFIG
O CipherList=ALL:!ADH:!NULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:-LOW:+SSLv3:+TLSv1:-SSLv2:+EXP:+eNULL
Create the file /etc/sasl2/Sendmail.conf
with this content:
pwcheck_method:saslauthd
This will tell sendmail to use saslauthd for PLAIN and LOGIN
connections. Any SMTP client will have to use either PLAIN or LOGIN.
If you start sendmail and saslauthd, you should be able to send
e-mails with authentication.
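A quick hedged way to check that the TLS listener answers (smtps is port 465, matching the DAEMON_OPTIONS in the configuration above):
$ openssl s_client -connect kongroo.eu:465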
We need to edit /etc/mail/local-host-names
to tell sendmail for
which domain it should accept local deliveries.
Simply add your email domain:
kongroo.eu
The mail logs are located under /var/log/maillog; every mail sent
and correctly signed with DKIM should produce a line like this:
[time] [host] sm-mta[2520]: 0AECKet1002520: Milter (opendkim) insert (1): header: DKIM-Signature: [whole signature]
This was explained in a subsection of the sendmail configuration. If
you skipped that step because you don't want to set up DKIM, you
missed information required for the next steps.
Install cyrus-imap
Slackware ships with dovecot in the default installation, but
cyrus-imapd is available in slackbuilds.
The bad news is that the slackbuild is outdated, so here is a simple
patch to apply in /usr/sbo/repo/network/cyrus-imapd. This patch also
fixes a compilation issue.
diff --git a/network/cyrus-imapd/cyrus-imapd.SlackBuild b/network/cyrus-imapd/cyrus-imapd.SlackBuild
index 48e2c54e55..251ca5f207 100644
--- a/network/cyrus-imapd/cyrus-imapd.SlackBuild
+++ b/network/cyrus-imapd/cyrus-imapd.SlackBuild
@@ -23,7 +23,7 @@
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
PRGNAM=cyrus-imapd
-VERSION=${VERSION:-2.5.11}
+VERSION=${VERSION:-2.5.16}
BUILD=${BUILD:-1}
TAG=${TAG:-_SBo}
@@ -107,6 +107,8 @@ CXXFLAGS="$SLKCFLAGS" \
$DATABASE \
--build=$ARCH-slackware-linux
+sed -i'' 's/gettid/_gettid/g' lib/cyrusdb_berkeley.c
+
make PERL_MM_OPT='INSTALLDIRS=vendor'
make install DESTDIR=$PKG
diff --git a/network/cyrus-imapd/cyrus-imapd.info b/network/cyrus-imapd/cyrus-imapd.info
index 99b2c68075..6ae26365dc 100644
--- a/network/cyrus-imapd/cyrus-imapd.info
+++ b/network/cyrus-imapd/cyrus-imapd.info
@@ -1,8 +1,8 @@
PRGNAM="cyrus-imapd"
VERSION="2.5.11"
HOMEPAGE="https://www.cyrusimap.org/"
-DOWNLOAD="ftp://ftp.cyrusimap.org/cyrus-imapd/cyrus-imapd-2.5.11.tar.gz"
-MD5SUM="674083444c36a786d9431b6612969224"
+DOWNLOAD="https://github.com/cyrusimap/cyrus-imapd/releases/download/cyrus-imapd-2.5.16/cyrus-imapd-2.5.16.tar.gz"
+MD5SUM="d5667e91d8e094ef24560a148e39c462"
DOWNLOAD_x86_64=""
MD5SUM_x86_64=""
REQUIRES=""
You can apply it by carefully copying the content into a file and
applying it with the patch command.
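For example, assuming you saved the diff as /root/cyrus-imapd.diff (the paths in the diff are relative to the repository root, hence -p1):
# cd /usr/sbo/repo
# patch -p1 < /root/cyrus-imapd.diff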
We can now proceed with cyrus-imapd compilation and installation.
# env DATABASE=sqlite sboinstall cyrus-imapd
As explained in the README file shown during installation, we need to
run a few commands.
# mkdir -m 750 -p /var/imap /var/spool/imap /var/sieve
# chown cyrus:cyrus /var/imap /var/spool/imap /var/sieve
# su - cyrus
# /usr/doc/cyrus-imapd-2.5.16/tools/mkimap
# logout
Add the following to /etc/rc.d/rc.local
to enable cyrus-imapd at
boot:
if [ -x /etc/rc.d/rc.cyrus-imapd ]; then
/etc/rc.d/rc.cyrus-imapd start
fi
And make the rc script executable:
# chmod +x /etc/rc.d/rc.cyrus-imapd
The official cyrus
documentation is very well done and was very helpful while writing this.
The configuration file is /etc/imapd.conf
:
configdirectory: /var/imap
partition-default: /var/spool/imap
sievedir: /var/sieve
admins: cyrus
sasl_pwcheck_method: saslauthd
allowplaintext: yes
tls_server_cert: /etc/letsencrypt/cyrus/fullchain.pem
tls_server_key: /etc/letsencrypt/cyrus/privkey.pem
tls_client_ca_dir: /etc/ssl/certs
There is another file in use, /etc/cyrusd.conf, but we don't need to
make changes in it.
We will have to copy the certificates into a separate place and allow
the cyrus user to read them. This will have to be done every time the
certificates are renewed, so let's add the certbot command to the
script so we can use it as a cron job.
#!/bin/sh
DOMAIN=kongroo.eu
LIVEDIR=/etc/letsencrypt/live/$DOMAIN/
DESTDIR=/etc/letsencrypt/cyrus/
certbot certonly --standalone -d $DOMAIN -m usernam@example
mkdir -p $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/fullchain.pem $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/privkey.pem $DESTDIR
/etc/rc.d/rc.sendmail restart
/etc/rc.d/rc.cyrus-imapd restart
Add a crontab entry to run this script once a day, using crontab -e
to change root's crontab.
MAILTO=""
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
0 5 * * * sh /root/renew_certs.sh
Starting the mail server
We prepared the mail server to be working on reboot, but the services
aren’t started yet.
# /etc/rc.d/rc.saslauthd start
# /etc/rc.d/rc.sendmail start
# /etc/rc.d/rc.cyrus-imapd start
# /etc/rc.d/rc.opendkim start
Adding a new user
Add a new user to your system.
# useradd $username
# passwd $username
For some reason the user mailboxes must be initialized. The same
password must be typed twice (or passed as a parameter using -w
$password).
# USER=foobar
# DOMAIN=kongroo.eu
# echo "cm INBOX" | rlwrap cyradm -u $USER $DOMAIN
Password:
IMAP Password:
Voila! The user should be able to connect using IMAP and receive
emails.
Check your email setup
You can use the web service Mail tester by sending it an email. You
could copy/paste a real email to avoid getting a bad mark due to spam
recognition (which happens if you send a mail with only a few
words). The bad spam score isn't relevant anyway, as long as it's due
to the content of your email.
Conclusion
I had real fun writing this article, digging hard into Slackware and
playing with unusual programs like sendmail and cyrus-imapd. I hope
you will enjoy it as much as I enjoyed writing it!
If you find mistakes or bad configuration settings, please contact me;
I will be happy to discuss the changes and fix this how-to.
Nota Bene: slackbuilds aren't meant to be used on the current version,
but on the latest release. There is a repository carrying the -current
changes at https://github.com/Ponce/slackbuilds/.
In today's article I will explain how to use the Slackbuilds
repository on a Slackware current system.
You can read the documentation of slackbuilds for more information.
We will first install the sbotools package, which makes the use of
slackbuilds a lot easier: like a proper ports tree. As it's preferable
to let the tools create the repository, we will install them without
downloading the whole slackbuild repository.
Download the slackbuild
from this page,
extract it and cd into the new directory.
$ tar xzvf sbotools.tar.gz
$ cd sbotools
$ . ./sbotools.info
$ wget $DOWNLOAD
$ md5sum $(basename $DOWNLOAD)
$ echo $MD5SUM
The two md5 strings should match.
Now, run the build as root
$ sudo sh sbotools.SlackBuild
[lot of text]
Slackware package /tmp/sbotools-2.7-noarch-1_SBo.tgz created.
Now you can install the created package using
$ sudo /sbin/installpkg /tmp/sbotools-2.7-noarch-1_SBo.tgz
We now have a few programs to use the slackbuilds repository; they all
have their own man page:
- sbocheck
- sboclean
- sboconfig
- sbofind
- sboinstall
- sboremove
- sbosnap
- sboupgrade
Creating the repository
As root, run the following command:
# sbosnap fetch
Pulling SlackBuilds tree...
Cloning into '/usr/sbo/repo'...
remote: Enumerating objects: 59, done.
remote: Counting objects: 100% (59/59), done.
remote: Compressing objects: 100% (59/59), done.
remote: Total 485454 (delta 31), reused 14 (delta 0), pack-reused 485395
Receiving objects: 100% (485454/485454), 134.37 MiB | 1.20 MiB/s, done.
Resolving deltas: 100% (337079/337079), done.
Updating files: 100% (39863/39863), done.
The slackbuilds tree is now installed under /usr/sbo/repo. This
location can be configured beforehand using sboconfig -s /home/solene,
which would create /home/solene/repo instead.
Searching a port
One can use the command sbofind
to look for a port:
# sbofind nethack
SBo: nethack 3.6.6
Path: /usr/sbo/repo/games/nethack
SBo: unnethack 5.2.0
Path: /usr/sbo/repo/games/unnethack
Install a port
We will install the previously searched port: nethack
# sboinstall nethack
Nethack is a single-player dungeon exploration game. The emphasis is
on discovering the detail of the dungeon. Each game presents a
different landscape - the random number generator provides an
essentially unlimited number of variations of the dungeon and its
denizens to be discovered by the player in one of a number of
characters: you can pick your race, your role, and your gender.
User accounts that play this need to be members of the "games" group.
Proceed with nethack? [y] y
nethack added to install queue.
Install queue: nethack
Are you sure you wish to continue? [y] y
[... compilation ... ]
+==============================================================================
| Installing new package /tmp/nethack-3.6.6-x86_64-1_SBo.tgz
+==============================================================================
Verifying package nethack-3.6.6-x86_64-1_SBo.tgz.
Installing package nethack-3.6.6-x86_64-1_SBo.tgz:
PACKAGE DESCRIPTION:
# nethack (roguelike game)
#
# Nethack is a single-player dungeon exploration game. The emphasis is
# on discovering the detail of the dungeon. Each game presents a
# different landscape - the random number generator provides an
# essentially unlimited number of variations of the dungeon and its
# denizens to be discovered by the player in one of a number of
# characters: you can pick your race, your role, and your gender.
#
# http://nethack.org
#
Package nethack-3.6.6-x86_64-1_SBo.tgz installed.
Cleaning for nethack-3.6.6...
Done, nethack is installed! sboinstall manages dependencies: if
required, it will ask you about every other slackbuild needed, adding
them to the install queue before compilation starts.
Example: getting flatpak
Flatpak is a software distribution system for Linux distributions,
mainly providing desktop software that could be complicated to
package, like LibreOffice, GIMP, Microsoft Teams etc… Using Slackware,
this can be a good source of software.
To use flatpak and the official flathub repository, we need to
install flatpak first. It’s now as easy as:
# sboinstall flatpak
And answer yes to the questions (you will be asked to agree for every
dependency required, and there are a few of them); if you don't want
to answer, you can use the -r flag to automatically accept.
We need to add the official flathub repository using the
following command:
# flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
And now you can browse flatpak programs on flathub.
For example, if you want to install VLC:
# flatpak install flathub org.videolan.VLC
You will be prompted about all the dependencies required to get VLC
installed; those dependencies are system parts that will be shared
across all the flatpak software in order to use disk space
efficiently. For VLC, some KDE components will be required and also
Xorg GL/VAAPI/openh264 environments; flatpak manages all this and you
don't have to worry about it.
The file /usr/sbo/repo/desktop/flatpak/README explains the quirks of
flatpak on Slackware, like the pulseaudio instructions or the polkit
policy on Slackware not allowing your user to use the global flatpak
install command.
I found that the following ~/.xinitrc enables dbus and pulseaudio for
me, so flatpak programs work.
start-pulseaudio-x11
eval $(pax11publish -i)
dbus-run-session fvwm2
Third article of the offline laptop series.
Sometimes, network access is required
Having a totally disconnected system isn't really practical for a few
reasons. Sometimes, I really need to connect the offline laptop to the
network. I produce some content on the computer, so I need to do
backups. The easiest way for me to have reliable backups is to host
them on a remote server holding the data, which requires a network
connection for the duration of the backup. Of course, backups could be
done on external disks or USB memory sticks (I don't need to back up
much), but I never liked this backup solution; don't get me wrong, I
don't say it's ineffective, but it doesn't suit my needs.
Besides the backup, I may need to sync files like my music files. I
may have bought new music that I want to get on the offline laptop, so
network access is required.
I also require internet access to install new packages or upgrade the
system. This isn't a regular need, but I occasionally require a new
program I forgot to install. This could be solved by downloading the
whole package repository, but that would require too much disk space
for packages I would never use, and would also waste a lot of network
transfer.
Finally, when I work on my blog, I need to publish the files; I use
rsync to sync the destination directory from my local computer, and
this requires access to the Internet through ssh.
A nice place at the right time
The moments I enjoy using this computer the most are when I take the
laptop to a table with nothing around me. I can then focus on what
I am doing. I find comfortable setups to be a source of distraction,
so a stool and a table are very nice in my opinion.
In addition to having a clean place to use it, I like to dedicate some
time to the use of this computer. I can write texts or some code in a
given time frame.
On a computer with 24/7 power and internet access, I always feel
everything is within reach, so I tend to slack with it.
Having a rather limited battery life changes the way I experience the
computer. It has a finite time: I have N minutes until the
computer has to be charged or shut down. This produces for me the same
effect as when I start watching a movie: sometimes I pick a
movie that fits the time I can spend on it.
Knowing I have some time until the computer stops, I know I must keep
focused because time is passing.
A simple article for posterity or future-me. I will share here my
tweaks to make the iBook G4 laptop (Apple keyboard) suitable for
OpenBSD; this should work for Linux too, as long as you run X.
Command should be alt+gr
I really need the AltGr key, which is not present on this keyboard. I
solved this by adding this line to my ~/.xsession:
xmodmap -e "keycode 115 = ISO_Level3_Shift"
i3 and mod4
As the touchpad is incredibly bad by today's standards (and it only
has 1 button and no scrolling feature!), I am using a window manager
that can be entirely keyboard driven. While I'm not familiar with
tiling window managers, i3 was easy to understand and light
enough. Long time readers may remember I am familiar with stumpwm, but
it's not really a dynamic tiling window manager; I can only tolerate
i3 using the tabs mode.
But an issue arises: there is no "super" key on the keyboard, and
using "alt" would collide with way too many programs. One solution is
to use "caps lock" as a "super" key.
I added this in my ~/.xsession
file:
xmodmap ~/.Xmodmap
with ~/.Xmodmap
having the following instructions:
clear Lock
keycode 66 = Hyper_L
add mod4 = Hyper_L
clear Lock
This will disable the "toggling" effect of caps lock, and will turn it
into a "Super" key that will be referred to as mod4 by i3.
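For reference, here is a minimal sketch of the i3 side in ~/.config/i3/config (the xterm binding is only an illustration):
# use the remapped caps lock (mod4) as the i3 modifier
set $mod Mod4
# example binding using it
bindsym $mod+Return exec xterm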
Today's post is about Brutaldon, a Mastodon/Pleroma interface in old
fashioned HTML, like in the web 1.0 era. I will explain how it works
and how to install it. Tested and approved on a 16-year-old PowerPC
laptop, using Mastodon with the w3m or dillo web browsers!
Introduction
Brutaldon is a mastodon client running as a web server. This means you
have to connect to a running brutaldon server; you can use a public
one like Brutaldon.online and then you will have two ways to connect
to your account:
- using OAuth, which will redirect through a dedicated API page of
your mastodon instance and will give back a token once you have logged
in properly; this is totally safe to use, but requires javascript
to be enabled to work, due to the login page on the instance
- the "old login" method, in which you have to provide your
instance address, your account login and password. This is not
really safe because the brutaldon instance will know your
credentials, but you can use any web browser with it. There aren't
many security issues if you use a local brutaldon instance
How to install it
The installation is quite easy; I wish it could be this easy more
often. You need a python3 interpreter and pipenv. If you don't have
pipenv, you need pip to install pipenv. On OpenBSD this would
translate as:
$ pip3.8 install --user pipenv
Note that on some systems, pip3.8 could be pip3, or pip. Due to the
coexistence of python2 and python3 for some time, until we can get rid
of python2, most python related commands have a suffix to tell which
python version they use.
If you install pipenv with pip, the path will be ~/.local/bin/pipenv.
Now, very easy to proceed! Clone the code, run pipenv to get the
dependencies, create a sqlite database and run the server.
$ git clone http://git.carcosa.net/jmcbray/brutaldon.git
$ cd brutaldon
$ pipenv install
$ pipenv run python ./manage.py migrate
$ pipenv run python ./manage.py runserver
And voilà! Your brutaldon instance is available on
http://localhost:8000; you only need to open it in your web browser
and log in to your instance.
As explained in the INSTALL.md file of the project, this method isn't
suitable for a public deployment. The code is a Django webapp and
could be used with wsgi and a proper web server, but this setup is
beyond the scope of this article.
In this article I will tell you about the Scuttlebutt social network,
what makes it special, and how to join it using OpenBSD. From here on,
I'll refer to Scuttlebutt as SSB.
Introduction to the protocol
You can find all the related documentation on
the official website.
I will simplify the protocol to present it.
SSB is decentralized: there is no central server with clients around
it (think of the Twitter model), nor a constellation of servers
federating with each other (the Fediverse: mastodon, plemora,
peertube…). SSB uses a peer to peer model, meaning nodes exchange
data with other nodes. A device with an account is a node; someone
using SSB acts as a node.
The protocol requires people to be mutual followers for the private
messaging system to work (messages are encrypted end-to-end).
This peer to peer paradigm has specific implications:
- Internet is not required for SSB to work. You could use it with
other people on a local network. For example, you could visit a
friend's place and exchange your SSB data over their network.
- Nodes own the data: when you join, it can take very long to
download the content of nodes close to you (relative to the people
you follow) because the SSB client will download the data, and then
serve everything locally. This means you can use SSB while being
offline, but also that, in the case seen previously at your friend's
place, you can exchange data from mutual friends. Example: if A
visits B, B receives A's updates. When you visit B, you will receive
B's updates but also A's updates if you follow B on the network.
- Data are immutable: when you publish something on the network,
it will be spread across nodes and you can't modify that data.
It is important to think twice before publishing.
- Moderation: there is no moderation as there is no authority in
control, but people can block nodes they don't want to get data
from, and this blocking is published, so other people can easily
see who gets blocked and block them too. It seems to work; I don't
have an opinion about this.
- You discover parts of the network by following people, giving
you access to the people they follow. This makes the discovery of
the network quite organic and should create some communities by
itself. Birds of a feather flock together!
- It's complicated to share an account across multiple devices
because you need to share all your data between the devices, so most
people use one account per device.
SSB clients
There are different clients; the top ones I found were Patchwork,
Oasis and Manyverse.
There are also a lot of applications using the protocol; you can find
a list on this link.
One particularly interesting project is git-ssb, which hosts git
repositories on the network.
Most of the code related to SSB is written in NodeJS.
In my opinion, Patchwork is the most user-friendly client, but Oasis
is very nice too. Patchwork has more features, like being able to
publish pictures within your messages, which is not currently possible
with Oasis.
Manyverse works fine but is rather limited in terms of features.
The developer community working on these projects seems rather small
and would be happy to receive some help.
How to install Oasis on OpenBSD
I've been able to get the Oasis client to run on OpenBSD. The NodeJS
ecosystem is quite hostile to anything non-Linux, but following the
path of qbit (who solved a few lib issues years ago), this piece of
software works.
$ doas pkg_add libvips git node autoconf--%2.69 automake--%1.16 libtool
$ git clone https://github.com/fraction/oasis
$ cd oasis
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install --only=prod
There is currently ONE issue that requires a hack to start Oasis:
the lo0 interface must not have any IPv6 address.
You can use the following command as root to remove the IPv6
addresses.
# ifconfig lo0 -inet6
I reported this bug as I’ve not been able to fix it myself.
How to use Oasis on OpenBSD
When you want to use Oasis, you have to run:
$ node /path/to/oasis_sources
You can add --help to see the usage output, with flags like --offline
if you don't want oasis to do any networking.
When you start oasis, you can then open http://localhost:3000 to
access the network. Beware that this address is available to anyone
having access to your system.
You have to use an invitation from someone to connect to a node
and start following people to increase your range in this small
world.
You can use a public server, acting as a 24/7 node to connect
people together, from this list:
https://github.com/ssbc/ssb-server/wiki/Pub-Servers.
How to backup your account
You absolutely need to back up your ~/.ssb/ directory if you don't
want to lose your account. There is no central server able to help
you recover your account in case of data loss.
If you want to use another client on another computer, you have
to copy this directory to the new place.
I don't think the whole directory is required, but I have not
been able to find more precise information.
In this long blog post, I will write about the technical details
of the OpenBSD stable packages building infrastructure. I set up
the infrastructure with the help of Theo de Raadt, who provided me
the hardware in summer 2019; since then, OpenBSD users can upgrade
their packages using pkg_add -u for critical updates that have
been backported by the contributors. Many thanks to them; without
their work there would be no packages to build. Thanks also to pea@
who is my backup for operating this infrastructure in case something
happens to me.
In total, around 110 lines of shell are used.
Original design
In the original design, the process was the following. It was done
separately on each machine (amd64, arm64, i386, sparc64).
Updating ports
The first step is to update the ports tree using cvs up from a cron
job and capture its output. If there is any output, the process
continues to the next steps; the output itself is then discarded.
With CVS being per-directory and not using a database like git or
svn, it is not possible to "poll" for an update except by checking
every directory for a new version of the files. This check is done
three times a day.
Make a list of ports to compile
This step is the most complicated of the process and accounts for a
third of the total lines of code.
The script uses cvs rdiff between the cvs release and stable
branches to show what changed since release, and its output is
passed through a few grep and awk scripts to only retrieve the
"pkgpaths" (the pkgpath of curl is net/curl) of the packages
that were updated since the last release.
From this raw output of cvs rdiff:
File ports/net/dhcpcd/Makefile changed from revision 1.80 to 1.80.2.1
File ports/net/dhcpcd/distinfo changed from revision 1.48 to 1.48.2.1
File ports/net/dnsdist/Makefile changed from revision 1.19 to 1.19.2.1
File ports/net/dnsdist/distinfo changed from revision 1.7 to 1.7.2.1
File ports/net/icinga/core2/Makefile changed from revision 1.104 to 1.104.2.1
File ports/net/icinga/core2/distinfo changed from revision 1.40 to 1.40.2.1
File ports/net/synapse/Makefile changed from revision 1.13 to 1.13.2.1
File ports/net/synapse/distinfo changed from revision 1.11 to 1.11.2.1
File ports/net/synapse/pkg/PLIST changed from revision 1.10 to 1.10.2.1
The script will produce:
net/dhcpcd
net/dnsdist
net/icinga/core2
net/synapse
From here, for each pkgpath we have sorted out, the sqlports database
is queried to get the full list of pkgpaths for each package; this
includes all the flavors, subpackages and multipackages.
This is important because an update to the editors/vim pkgpath will
trigger this long list of packages:
editors/vim,-lang
editors/vim,-main
editors/vim,gtk2
editors/vim,gtk2,-lang
[...40 results hidden for readability...]
editors/vim,no_x11,ruby
editors/vim,no_x11,ruby,-lang
editors/vim,no_x11,ruby,-main
Once we have gathered all the pkgpaths to build and stored them in a
file, the next step can start.
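As a hedged illustration of that kind of query (assuming the default sqlports database location and its ports view; the infrastructure's exact SQL isn't shown here):
$ sqlite3 /usr/local/share/sqlports \
    "SELECT fullpkgpath FROM ports WHERE fullpkgpath LIKE 'editors/vim%'"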
Preparing the environment
As the compilation is done on the real system (using PORTS_PRIVSEP
though) and not in a chroot, we need to remove all installed packages
except the minimum required for the build infrastructure, which is
rsync and sqlports.
dpb(1) can't be used because it didn't give good results for
building the delta of the packages between release and stable.
The various temporary directories used by the ports infrastructure
are cleaned to be sure the build starts in a clean environment.
Compiling and creating the packages
This step is really simple. The ports infrastructure is used
to build the package list we produced at step 2.
env SUBDIRLIST=package_list BULK=yes make package
In the script there is some code to manage the logs of the previous
batch, but there is nothing more.
Every new run of the process passes over all the packages which
received a commit, but the ports infrastructure is smart enough to
avoid rebuilding ports which already have a package with the correct
version.
Transfer the package to the signing team
Once the packages are built, we need to pass only the newly built
packages to the person who will manually sign them before publishing
and having the mirrors sync.
From the package list, the package file names are generated and
used by rsync to copy only the packages produced.
env SUBDIRLIST=package_list show=PKGNAMES make | grep -v "^=" | \
grep ^. | tr ' ' '\n' | sed 's,$,\.tgz,' | sort -u
The system keeps all the -release packages in
${PACKAGE_REPOSITORY}/${MACHINE_ARCH}/all/ (like
/usr/ports/packages/amd64/all) to avoid rebuilding all the
dependencies required for building a package update; thus we can't
simply copy all the packages from the directory where they are moved
after compilation.
Send a notification
The last step is to send an email with the output of rsync, telling
which machine built which packages, so the people signing the packages
know that some packages are available.
As this process is done on each machine, and the machines don't
necessarily build the same packages (no firefox on sparc64) nor build
at the same speed (arm64 is slower), mails from the four machines
could arrive at very different times, which led to a small design
change.
The whole process is automatic, from building to delivering the
packages for signature. The signing step requires a human though,
but this is the price for security and privilege separation.
Current design
In the original design, all the servers were running their separate
cron job, updating their own cvs ports tree and doing a very long
cvs diff. The result was working but not very practical for the
people signing who were receiving mails from each machine for each
batch.
The new design only changed one thing: one machine was chosen to
run the cron job, produce the package list and copy that list to the
other machines, which then update their ports tree and run the build.
Once all machines have finished building, the initiator machine
gathers the outputs and sends a single mail with a summary from each
machine. This makes it easier to compare the output of each
architecture, and once you receive the email it means every machine
has finished its job and the signing can be done.
Having the summary of all the building machines resulted in another
improvement: in the logic of the script, it is possible to send an
email telling that absolutely no package has been built while the
process was triggered, which means something went wrong. From there, I
need to check the logs to understand why the last commit didn't
produce a package. This can be caused by failures like a distinfo file
update forgotten in the commit.
Also, this permitted fixing one issue: as the distfiles are shared
through a common NFS mount point, if multiple machines try to fetch
a distfile at the same time, they will all fail to build. Now, the
initiator machine downloads all the required distfiles before
starting the build on every node.
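A hedged sketch of what this prefetch can look like, reusing the same list with the ports infrastructure's fetch target:
# env SUBDIRLIST=package_list make fetch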
All of the previous scripts were reused, except the one
sending the email which had to be rewritten.
A new Port of the Week after 3 years! I never thought it had been so
long since the last blog post about slrn.
This post is about the awesome rclone program, written in Go and
available on most popular platforms (including OpenBSD!). I will
explain how to configure it from the interactive command or from a
file, and what you can do with rclone.
rclone can be seen as rsync on steroids: it supports lots of cloud
backends and also supports creating an encrypted data repository
over any backend (local files, ftp, sftp, webdav, Dropbox, AWS S3,
etc…).
It's not an automatic synchronization tool or a backup
software. It can copy files from A to B and synchronize two places
(which can be harmful if you don't pay attention).
Let's see how to use it with an ssh server, on which we will
create an encrypted repository to store important data.
Official documentation
Installation
Most of the time, just run your package manager to install rclone.
It's a single binary.
Interactive configuration
You can skip this LONG section if you just want to learn what rclone
can do and how to configure it in a 10 line file.
There is a parameter providing a question / answer interface to
configure your repository: rclone config.
I'll make a full walkthrough to enable an encrypted repository,
because I struggled to understand the logic behind rclone when I
started using it.
Let's start. I'll create an encrypted destination on my local NAS,
which doesn't have full disk encryption, so anyone who accesses the
system won't be able to read my data. First, this requires setting up
an sftp repository, and then an encrypted repository using the
previous one as a backend.
Let's create a new config named home_nas.
$ rclone config
2020/10/27 21:30:48 NOTICE: Config file "/home/solene/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> home_nas
We want storage type 29, "SSH/SFTP" (I removed the 50+ other
storage types for readability).
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[...]
29 / SSH/SFTP Connection
\ "sftp"
[...]
Storage> 29
My host is 192.168.1.200
** See help for sftp backend at: https://rclone.org/sftp/ **
SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Connect to example.com
\ "example.com"
host> 192.168.1.200
I will connect with the username solene.
SSH username, leave blank for current username, solene
Enter a string value. Press Enter for the default ("").
user> solene
Standard port 22, which is the default
SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").
port>
I answer n because I want rclone to use the ssh agent; this could
be the ssh password of the remote user, but I highly discourage
everyone from using password authentication on SSH!
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Leave this empty, except if you want to provide an inline private key.
Raw PEM-encoded private key, If specified, will override key_file parameter.
Enter a string value. Press Enter for the default ("").
key_pem>
Leave this empty, except if you want to provide a path to a PEM-encoded private key file.
Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a string value. Press Enter for the default ("").
key_file>
Leave this empty, except if you need a password to unlock your
private key. I use the ssh agent, so I don't need it.
The passphrase to decrypt the PEM-encoded private key file.
Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
If your ssh agent manages multiple keys, you should enter the
correct value here; I only have one key so I leave it empty.
When set forces the usage of the ssh-agent.
When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").
key_use_agent>
This is a question about crypto; accept the default, except if you
have to connect to old servers.
Enable the use of insecure ciphers and key exchange methods.
This enables the use of the following insecure ciphers and key exchange methods:
- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1
Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Use default Cipher list.
\ "false"
2 / Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.
\ "true"
use_insecure_cipher>
We want to keep the hashcheck feature, so just skip the answer to
keep the default value.
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_hashcheck>
We are at the end of the configuration; we are offered the chance to
change more parameters, but we don't need to.
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Now we can see the rclone configuration output with regards to my
home_nas destination. I agree with the configuration to continue.
Remote config
--------------------
[home_nas]
type = sftp
host = 192.168.1.200
user = solene
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Here is a summary of the configuration, we have only one remote
here.
Current remotes:
Name Type
==== ====
home_nas sftp
In the menu, I will choose to add another remote. Let's name it
home_nas_encrypted.
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> home_nas_encrypted
We will choose the special storage type crypt, which works on top of
an existing backend.
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
10 / Encrypt/Decrypt a remote
\ "crypt"
Storage> 10
For this question, we define that the data stored in
home_nas_encrypted will be saved on the home_nas remote, in the
encrypted_repo directory.
** See help for crypt backend at: https://rclone.org/crypt/ **
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a string value. Press Enter for the default ("").
remote> home_nas:encrypted_repo
Depending on the level of obfuscation you want, your choice may vary.
The simple filename obfuscation is fine for me.
How to encrypt the filenames.
Enter a string value. Press Enter for the default ("standard").
Choose a number from below, or type in your own value
1 / Encrypt the filenames see the docs for the details.
\ "standard"
2 / Very simple filename obfuscation.
\ "obfuscate"
3 / Don't encrypt the file names. Adds a ".bin" extension only.
\ "off"
filename_encryption> 2
As for directory name obfuscation, I recommend enabling it,
otherwise the whole directory tree remains readable!
Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Enter a boolean value (true or false). Press Enter for the default ("true").
Choose a number from below, or type in your own value
1 / Encrypt directory names.
\ "true"
2 / Don't encrypt directory names, leave them intact.
\ "false"
directory_name_encryption> 1
Type the password that will be used to encrypt the data.
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
You can add a salt to the passphrase; I chose not to.
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n>
No need to change advanced parameters.
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Here is a summary of the configuration of this remote backend.
I’m fine with it.
Remote config
--------------------
[home_nas_encrypted]
type = crypt
remote = home_nas:encrypted_repo
directory_name_encryption = true
password = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
We now have two remote backends, one with the crypt type.
Current remotes:
Name Type
==== ====
home_nas sftp
home_nas_encrypted crypt
Quit rclone, the configuration is done.
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
Configuration file
The previous configuration process only produced this short
configuration file, so you may copy/paste from it and adapt it to add
more backends if you want, instead of going through the tedious config
process.
Here is the ~/.config/rclone/rclone.conf file on my desktop.
[home_nas]
type = sftp
host = 192.168.1.200
user = solene
[home_nas_encrypted]
type = crypt
remote = home_nas:encrypted_repo
directory_name_encryption = true
password = GDS9B1B1LrBa3ltQrSbLf1Vq5C6VbaA1AJVlSZ8
First usage
Now that we have defined our configuration, we need to create the
remote directory that will be used as a backend. This is important to
avoid errors when using rclone; it is a simple step required only
once.
$ rclone mkdir home_nas_encrypted:
On the remote server, I can see a /home/solene/encrypted_repo
directory. It's now ready to use!
A few commands
rclone has a LOT of commands available; I will present a few
of them.
Copying files to/from backend
Let's say I want to copy files to the encrypted repository. There
is a copy command for this.
$ rclone copy /home/solene/log/templates home_nas_encrypted:blog_template
There is no output by default when the program runs fine. You can
use the -v flag to get some verbose output (I prefer it).
List files on a remote backend
Now, we want to see if the files were copied correctly; we will use
the ls command.
$ rclone ls home_nas_encrypted:
299 blog_template/article.tpl
700 blog_template/gopher_head.tpl
2505 blog_template/layout.tpl
295 blog_template/more.tpl
236 blog_template/navigation.tpl
57 blog_template/one-tag.tpl
34 blog_template/page.tpl
189 blog_template/rss-item.tpl
326 blog_template/rss.tpl
We can also use the ncdu command, which mimics the ncdu program by
displaying a curses interface to visualize disk usage in a nice
browsing tree.
$ rclone ncdu home_nas_encrypted
-- home_nas_encrypted: ------------------
6.379k [##########] /blog_template
The sync command
Files and directories can also be copied with the sync command,
but this must be used with care because it makes the destination
match the origin exactly. It's the equivalent of
rsync -a --delete origin/ destination/, so any extra files will
be removed! Note that you can use --dry-run to see what would
happen.
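For example, a dry run first (the paths here are only illustrative):
$ rclone sync --dry-run /home/solene/Documents home_nas_encrypted:Documents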
Filters
When you copy files using the various available methods, instead of
providing a path, you can provide a filter file or a list of paths to
transfer. This can be very efficient when you want to recover
specific data.
The documentation about filtering is available here.
Parameters
rclone supports a lot of parameters, for example to limit the upload
bandwidth, copy multiple files at once, or enable an interactive mode
in case of file deletion/overwriting.
Mount
On Linux, FreeBSD and MacOS, rclone can use a FUSE filesystem
to mount the remote repository on the filesystem, making its use
totally transparent.
This is extremely useful, avoiding the tediousness of the get/put
paradigm of rclone.
This can even be used to make an encrypted repository on the local
filesystem! :)
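A minimal sketch (the mount point is an arbitrary choice; the command stays in the foreground, and on Linux you can unmount with fusermount -u):
$ mkdir -p ~/mnt/nas
$ rclone mount home_nas_encrypted: ~/mnt/nas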
Create a webdav/sftp/ftp server
rclone has the capability of acting as a server and exposing a
configured remote backend over various network protocols like webdav,
sftp, ftp or s3 (minio)!
The serve documentation is available here.
Example running a simple webdav server with a hardcoded login/password:
$ rclone serve webdav --user solene --pass ANicePassword home_nas_encrypted:
If you plan to use an OpenVPN tunnel as your default gateway, two
issues arise: the tun interface only joins the egress group once the
tunnel is up, and pf.conf, which may refer to tun0, is loaded before
OpenVPN starts.
Here are the few tips I use to solve these problems.
Remove your current default gateway
We don’t want a default gateway on the system. You need to know
the remote address of the VPN server.
If you have a /etc/mygate
file, remove it.
The /etc/hostname.if file (with "if" being your interface name,
em0 for example) should look like this:
192.168.1.200
up
!route add -host A.B.C.D 192.168.1.254
- First line is the IP on my lan
- Second line makes the interface up
- Third line means you want to reach A.B.C.D via 192.168.1.254,
with A.B.C.D being the IP of the remote VPN server
Create the tun0 interface at boot
Create a /etc/hostname.tun0 file with only up as its content; that
will create tun0 at boot, so it already exists when pf.conf is loaded
and pf won't fail to load rules referring to it.
You may think one could use “egress” instead of the interface name,
but this is not allowed in queuing.
Don’t let OpenVPN manage the route
Don't use redirect-gateway def1 bypass-dhcp in the OpenVPN
configuration: this would create a route which is not default, and
so the tun0 interface wouldn't be in the egress group, which is not
what we want.
Add these two lines to your configuration file to execute a script
once the tunnel is established, in which we will set the default
route.
script-security 2
up /etc/openvpn/script_up.sh
In /etc/openvpn/script_up.sh you simply have to write:
#!/bin/sh
/sbin/route add -net default X.Y.Z.A
If you have IPv6 connectivity, you have to add this line:
/sbin/route add -inet6 2000::/3 fe80::%tun0
(not sure it's 100% correct for IPv6 but it works fine for me! If
it's wrong, please tell me how to do it better).
For a long time I have wanted to share a list of non-violent games I enjoyed, so here it is. Obviously, this list is FAR from complete or exhaustive. It contains games I played and liked. They should all run on Linux, and some on OpenBSD.
Aside from this list, most tycoon and puzzle games should be non-violent.
Automation / Building games
This game is like Factorio: you have to automate production lines
and increase the output of shapes/colors. Very time consuming.
The project is open source, but you need to buy the game if you
don't want to compile it yourself. Or just use my compiled version
working in a web browser.
Play shapez.io in web browser
A transport tycoon game, multiplayer possible! Very complex,
the community is active and you can find tons of mods.
The game is Open source and you can certainly install it
on any distribution with the package manager.
This game is about building equipment to restore nature in a
wasteland, improve the biodiversity and then remove all your
structures.
The game is not open source but is free of charge. The music
seems to be under an open licence.
Still, you can pay what you want for it to support the developer.
This is a short game about chaining producing buildings into one
another, all the way from garbage up to some secret ending :)
The game is not open source but is free of charge.
Sandbox / Adventure game
This game is a clone of Minecraft; it supports a lot of mods (which
can make the game very complex, like adding train tracks with their
signals, the pinnacle of complexity :D). As far as I know, the game
now supports health, but there are no fights involved.
The game is Open source and free of charge.
This game is about exploration in a forest. It has nice
music, and the gameplay is easy.
The game is not open source but it’s free.
Still, you can pay what you want for it to support the developer.
Action / reflex games
This category contains games that require some reflexes, or at
least need the player to be active.
This game is about driving a 2D motocross bike and passing through
obstacles; it can be very hard and will challenge you for a long
time.
It's open source and free of charge.
This is a fun game where you need to drive some big trucks using
only a displayed control panel with your mouse, which makes things
very hard.
The game is not open source and not free, but the cost isn’t very
high (3.99€ at the moment from France).
This game is about a teenager who is on vacation in a place with no
cell network; you will have to hike and meet people to reach the
end. Very relaxing :)
The game isn’t open source and isn’t free, but costs around 8€ at
the moment from France.
This game is about adding trains to tracks and preventing them from
crashing. I found this game to be more about reflexes than building,
simulation or tycoon mechanics. You mostly need to route the trains
in real time.
The game isn’t open source and not free but costs around 10€.
This game is a 2D platformer with interesting gameplay mechanics;
it is surprisingly full of good ideas and has very nice music :) The
characters are very cute and the whole environment looks great.
The game isn’t open source and not free.
Simulation
This game may not be liked by everyone: it consists of driving a
truck in Europe, picking up cargo and delivering it somewhere else,
taking care not to damage it and driving safely by respecting the
law. You can also buy garages and hire people to drive trucks for
you to make money. The game is relaxing and also pretty accurate
in its environment. I have been driving in many European countries
and this game really reflects each country's signs, cars, speed
limits, countryside etc… Some cities received more work and you can
see monuments from the road. The game doesn't cost much and works on
Linux, although it's not open source.
This game is hard and will require learning. The goal is to create
rockets to send astronauts into space, or even land on a planet or
an asteroid, and come back. Doing a whole trip like this requires
some knowledge about the game mechanics and physics. This game is
certainly not for everyone if you want to achieve something; I never
did better than sending a rocket into space and letting it crash
on the planet after running out of fuel, or drift in space forever…
The game works on Linux, requires an average computer and can be
obtained at a very fair price, like 10€ when it's on sale (which
happens very often). Definitely a must play if you like space.
Puzzle games (Zachtronics games)
What's a Zachtronics game? It's a game edited by Zachtronics! Every
game from this studio follows a common pattern: you solve puzzles
with more and more complex systems, and you can compare your results
in speed / efficiency / steps to other players'. They are a mix
between automation and puzzles. These games are really good. There
are more than the 3 games I list here, but I didn't enjoy them all;
check the full list.
You play an alchemist who is asked to create products for a rich
family. You need to set up devices to transform and combine
materials into the expected result.
The game isn’t open source and isn’t free. The average cost is 20€.
This game is in 3D: you receive materials on conveyor belts and you
have to rotate and weld them to deliver the expected material.
The game isn’t open source and isn’t free. The average cost is 20€.
This game is about writing code in assembly. There are calculation
units that add/sub values from registers and pass them to other
units. Even more fun if you print the old fashioned instruction book!
The game isn’t open source and isn’t free. The average cost is 10€.
Visual Novel
The expression Amrilato
This game is about a Japanese girl who ends up in a parallel world
where everything seems similar, but in this Japan, people speak
Esperanto.
Not very violent
Way of the Passive Fist
I would like to add this game to the list. It's a brawler (like
Streets of Rage) in which you don't fight people; you only dodge
attacks to exhaust enemies or counter-attack. It's still a bit
violent because it involves violence toward you, and throwing back
a knife is still violent… But still, I think this is a unique game
that deserves to be better known. :)
The game isn't open source and isn't free; expect around 15€ for
it.
Still playing with NixOS, I wanted to see how difficult it would be
to write a NixOS configuration file to turn a computer into a simple
NAS with basic features: samba storage, a dlna server and auto
suspend/resume.
What is NixOS? As a reminder for some and an introduction for
others, NixOS is a Linux distribution built by the Nix package
manager, which makes it very different from any other operating
system out there, except Guix, which has a similar approach with its
own package manager written in Scheme.
NixOS uses a declarative configuration approach along with lots of
other features derived from Nix. What's big here is that you no
longer tweak anything in /etc or install packages: you define the
working state of the system in one configuration file. This system
is a totally different beast than the other OSes and requires some
time to understand how it works. Good news though, everything
is documented in the man page configuration.nix, from fstab
configuration to user management or how to enable samba!
Here is the /etc/nixos/configuration.nix file on my NAS.
It enables an ssh server, samba, minidlna and vnstat, and sets up a
user with my ssh public key. Ready to work.
Using the rtcwake command (Linux specific), it's possible to put
the system into standby mode and schedule an automatic resume after
some time. This is triggered by a cron job at 01h00.
{ config, pkgs, ... }:
{
# include stuff related to hardware, auto generated at install
imports = [ ./hardware-configuration.nix ];
boot.loader.grub.device = "/dev/sda";
# network configuration
networking.interfaces.enp3s0.ipv4.addresses = [ {
address = "192.168.42.150";
prefixLength = 24;
} ];
networking.defaultGateway = "192.168.42.1";
networking.nameservers = [ "192.168.42.231" ];
# FR locales and layout
i18n.defaultLocale = "fr_FR.UTF-8";
console = { font = "Lat2-Terminus16"; keyMap = "fr"; };
time.timeZone = "Europe/Paris";
# Packages management
environment.systemPackages = with pkgs; [
kakoune vnstat borgbackup utillinux
];
# firewall disabled (I need to check the ports used first)
networking.firewall.enable = false;
# services to enable
services.openssh.enable = true;
services.vnstat.enable = true;
# auto standby
services.cron.systemCronJobs = [
"0 1 * * * root rtcwake -m mem --date +6h"
];
# samba service
services.samba.enable = true;
services.samba.enableNmbd = true;
services.samba.extraConfig = ''
workgroup = WORKGROUP
server string = Samba Server
server role = standalone server
log file = /var/log/samba/smbd.%m
max log size = 50
dns proxy = no
map to guest = Bad User
'';
services.samba.shares = {
public = {
path = "/home/public";
browseable = "yes";
"writable" = "yes";
"guest ok" = "yes";
"public" = "yes";
"force user" = "share";
};
};
# minidlna service
services.minidlna.enable = true;
services.minidlna.announceInterval = 60;
services.minidlna.friendlyName = "Rorqual";
services.minidlna.mediaDirs = ["A,/home/public/Musique/" "V,/home/public/Videos/"];
# trick to create a directory with proper ownership
# note that tmpfiles are not necessarily temporary if you don't
# set an expire time. Trick given on irc by someone whose name I forgot..
systemd.tmpfiles.rules = [ "d /home/public 0755 share users" ];
# create my user, with sudo right and my public ssh key
users.users.solene = {
isNormalUser = true;
extraGroups = [ "wheel" "sudo" ];
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15viQXHYRjGqE4LLfvETMkjjgSz0mzMzS personal"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15vAQXBYRjGqE6L1fvETMkjjgSz0mxMzS pro"
];
};
# create a dedicated user for the shares
# I prefer a dedicated one than "nobody"
# can't log into it
users.users.share = {
isNormalUser = false;
};
}
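After editing /etc/nixos/configuration.nix, the configuration is
applied with:
# nixos-rebuild switch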
As a claws-mail user, I like to have calendar support in the mail
client to be able to “accept” invitations. In the default NixOS
claws-mail package, the vcalendar module isn’t installed with the
package. Still, it is possible to add support for the vcalendar
module without an ugly hack.
It turns out that, by default, the claws-mail package in Nixpkgs has
an optional build option for the vcalendar module; we only need to
tell Nixpkgs we want this module and claws-mail will be recompiled
with it.
As stated in the NixOS manual, the optional features can’t be
searched yet. So what’s possible is to search for your package in
the NixOS packages search, click on the package name to get to the
details and click on the link named “Nix expression”, which opens
the package definition on GitHub: the claws-mail nix expression.
As you can see in the claws-mail nix expression code, there are lots
of lines with optional; those are features we can enable. Here is a
sample:
[..]
++ optional (!enablePluginArchive) "--disable-archive-plugin"
++ optional (!enablePluginLitehtmlViewer) "--disable-litehtml_viewer-plugin"
++ optional (!enablePluginPdf) "--disable-pdf_viewer-plugin"
++ optional (!enablePluginPython) "--disable-python-plugin"
[..]
In your configuration.nix file, where you define the package list
you want, you can say you want the vcalendar plugin enabled; this is
done as in the following example:
environment.systemPackages = with pkgs; [
kakoune git firefox irssi minetest
(pkgs.claws-mail.override { enablePluginVcalendar = true;})
];
When you rebuild your system to match the configuration definition,
claws-mail will be compiled with the extra options you defined.
Now, I have claws-mail with vCalendar support.
Using NixOS on a laptop on which the keyboard isn’t detected when
I need to type the password to decrypt the disk, I had to find a
solution. This problem is hardware related, not Linux or NixOS
related.
I highly recommend using full disk encryption on every computer,
following a theft threat model. Having your computer stolen is bad,
but if the thief has access to all your data, you will certainly
be in trouble.
It was time to find out how to use a USB memory stick to unlock the
full disk encryption in case I don’t have a USB keyboard at hand to
unlock the computer.
There are 4 steps to enable unlocking the luks volume using a device.
- Create the key
- Add the key on the luks volume
- Write the key on the usb device
- Configure NixOS
First step, creating the key file. The easiest way is to do the following:
# dd if=/dev/urandom of=/root/key.bin bs=4096 count=1
This will create a 4096-byte key. You can choose the size you want.
Second step is to register that key in the luks volume, you will
be prompted for luks password when doing so.
# cryptsetup luksAddKey /dev/sda1 /root/key.bin
Then, it’s time to write the key to your USB device; I assume it
will be /dev/sdb.
# dd if=/root/key.bin of=/dev/sdb bs=4096 count=1
And finally, you will need to configure NixOS to give it the
information about the key. It’s important to give the correct size
of the key. Don’t forget to adapt "crypted" to your luks volume name.
boot.initrd.luks.devices."crypted".keyFileSize = 4096;
boot.initrd.luks.devices."crypted".keyFile = "/dev/sdb";
Rebuild your system with nixos-rebuild switch and voilà!
Going further
I recommend using the fallback to password feature so that if you
lose or don’t have your memory stick, you can type the password to
unlock the disk. Note that you should not plug anything appearing
as /dev/sdb, because if the device exists and no key is on it, the
system won’t ask for a password, and you will need to reboot.
boot.initrd.luks.devices."crypted".fallbackToPassword = true;
It’s also possible to write the key in a partition or at a specific
offset on your memory stick. For this, look at the
boot.initrd.luks.devices."volume".keyFileOffset entry.
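For example, assuming the key was written 1 MiB into the memory
stick (a hypothetical offset), the configuration would look like
this:
boot.initrd.luks.devices."crypted".keyFileOffset = 1048576;
boot.initrd.luks.devices."crypted".keyFileSize = 4096;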
It’s possible to play chess using email. This is possible because
there are notations like PGN (Portable Game Notation) that describe
the state of a game.
By playing on your computer and sending the PGN of the game to
your opponent, that person will be able to play their move and
send you the new PGN so you can play.
Using xboard
This is quite easy with xboard (which should be available in most
bsd/linux/unix distributions), as long as you are aware of the few
keybindings.
When you start a game, press Ctrl+E to enter edit mode; this
prevents the AI from playing. Then make your move.
From there, you can press Ctrl+C to copy the state of the game.
You will have something like this in your clipboard.
[Event "Edited game"]
[Site "solene.local"]
[Date "2020.09.28"]
[Round "-"]
[White "-"]
[Black "-"]
[Result "*"]
1. d3
*
You can send this to your opponent, but the only needed data is
1. d3, which is the PGN notation of the moves. You can throw away
the rest.
In a more advanced game, you will end up mailing this kind of data:
1. d3 e6 2. e4 f5 3. exf5 exf5 4. Qe2+ Be7 5. Qxe7+ Qxe7+
When you want to play your turn, copy that line to the clipboard and
press Ctrl+V; you should see the moves happening on the board.
Using gnuchess
gnuchess allows playing chess on the command line.
When you want to start a game, you will have a prompt; type manual
to not play against the AI. I recommend using coords to display
coordinates on the axes of the board.
When you type show board you will get this display:
white KQkq
8 r n b q k b n r
7 p p p p p p p p
6 . . . . . . . .
5 . . . . . . . .
4 . . . . . . . .
3 . . . . . . . .
2 P P P P P P P P
1 R N B Q K B N R
a b c d e f g h
Then, if I type d3, I get this display:
8 r n b q k b n r
7 p p p p p p p p
6 . . . . . . . .
5 . . . . . . . .
4 . . . . . . . .
3 . . . P . . . .
2 P P P . P P P P
1 R N B Q K B N R
a b c d e f g h
From the game, you can save it using pgnsave FILE and load a game
using pgnload FILE.
You can see the list of the moves using show game.
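To sum up, the commands typed in a gnuchess session for a mail game
could look like this (a sketch, with prompts omitted):
$ gnuchess
manual
coords
d3
pgnsave mygame.pgn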
After modest contributions to the NixOS operating system, which
taught me about its contribution process, I found it enjoyable to
have automatic reports and feedback about the quality of submitted
work. While on NixOS this requires GitHub, I think this could be
applied as well to OpenBSD and its mailing list contribution system.
I made a prototype before starting the real work and I’m actually
happy with the result.
This is what I get after feeding the script with a mail containing
a patch:
Determining package path ✓
Verifying patch isn't committed ✓
Applying the patch ✓
Fetching distfiles ✓
Distfile checksum ✓
Applying ports patches ✓
Extracting sources ✓
Building result ✓
It requires a lot of checks to find a patch in the mail, because
we have patches generated from cvs or git which have a slightly
different output. And then, we need to find from where to apply
this patch.
The idea would be to retrieve mails sent to ports@openbsd.org by
subscribing, then store metadata about each submission into a
database:
- Sender
- Date
- Diff (raw text)
- Status (already committed, doesn't apply, applies, compiles)
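As a sketch of what the storage could be, assuming SQLite is used
(this is hypothetical, nothing is decided):
$ sqlite3 submissions.db "CREATE TABLE submissions(sender TEXT, date TEXT, diff TEXT, status TEXT);"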
Then, another program will pick a diff from the database, prepare a
VM using a qcow2 disk derived from a base image, so it always starts
fresh, clean and ready, and run the checks within the VM.
Once it is finished, a mail could be sent as a reply to the original
mail to give the status of each step until error or last check. The
database could be reused to make a web page to track what compiles
but is not yet committed. As it’s possible to verify if a patch is
committed in the tree, this can automatically prune committed patches
over time.
I really think this can improve tracking patches sent to ports@ and
ease the contribution process.
DISCLAIMER
- This would not be an official part of the project, I do it on my own
- This may be cancelled
- This may be a bad idea
- This could be used “as a service” instead of pulling automatically
from ports, meaning people could send mails to it to receive an
automatic review. Ideally this should be done in portcheck(1) but
I’m not sure how to verify a diff applies on the ports tree without
enforcing requirements
- Human work will still be required to check the content and verify
the port works correctly!
Simple Docker cheatsheet. This is a short introduction to Docker
usage, answering common questions I have been asking myself about
Docker.
The official documentation for building docker images can be found
here
Build an image
Building an image is really easy. As a requirement, you need to be
in a directory that can contain data you will use for building the
image, but most importantly, you need a Dockerfile.
The Dockerfile holds all the instructions to create the container.
A simple example would be this description:
FROM busybox
CMD "echo" "hello world"
This will create a docker container using the busybox base image
and run echo "hello world" when you run it.
To create the container, use the following command in the directory
where the Dockerfile is:
$ docker build -t your-image-name .
Advanced image building
If you need to compile sources to distribute a working binary,
you need to prepare the environment to have the required
dependencies to compile and then you need to compile a static
binary to ship the container without all the dependencies.
In the following example we will use a debian environment to build
a software project downloaded with git.
FROM debian as work
WORKDIR /project
RUN apt-get update
RUN apt-get install -y git make gcc
RUN git clone git://bitreich.org/sacc /project
RUN apt-get install -y libncurses5-dev libncurses5
RUN make LDFLAGS="-static -lncurses -ltinfo"
FROM debian
COPY --from=work /project/sacc /usr/local/bin/sacc
CMD "sacc" "gopherproject.org"
I won’t explain every command here, but you may notice that I split
the package installation into two commands. This was to help
debugging.
The trick here is that the docker build process has a cache feature.
Every time you use a FROM, COPY, RUN or CMD instruction, docker will
cache the current state of the build process; if you re-run the
process, docker will be able to pick up the most recent state up to
the change.
I wasn’t sure how to compile the software statically at first, and
having to install git, make and gcc and run git clone EVERY TIME was
very time and bandwidth consuming.
In case you run this build and it fails, you can re-run the build
and docker will resume directly at the last working step.
If you change a line, docker will reuse the last state with a
FROM/COPY/RUN/CMD command before the changed line. Knowing about
this is really important for more efficient cache use.
Run an image
With the previously locally built image we can run it with the command:
$ docker run your-image-name
hello world
By default, when you use an image name to run, if you don’t have a
local image that matches the name, docker will check on the official
docker repository if an image exists; if so, it will be pulled and
run.
$ docker run hello-world
This is a sample official container that will display some
explanations about docker.
If you want to try a gopher client, I made a docker version of it
that you can run with the following command:
$ docker run -t -i rapennesolene/sacc
Why are the -t and -i parameters required? The former tells docker
you want a tty because the program will manipulate a terminal, and
the latter asks for an interactive session.
Persistent data
By default, all the data of a docker container gets wiped out once
it stops, which may be really undesirable if you use docker to
deploy a service that has a state and requires an installation,
configuration files etc…
Docker has two ways to solve this:
1) map a local directory
2) map a docker volume name
This is done with the -v parameter of the docker run command.
$ docker run -v data:/var/www/html/ nextcloud
This will map a persistent storage named “data” on the host to the
path /var/www/html in the docker instance. By using data, docker
will check if /var/lib/docker/volumes/data exists; if so it will
reuse it, and if not it will create it.
This is a convenient way to name volumes and let docker manage them.
The other way is to map a local path to a container path.
$ docker run -v /home/nextcloud:/var/www/html nextcloud
In this case, the directory /home/nextcloud on the host and
/var/www/html in the docker environment will be the same directory.
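Named volumes can be listed and examined with the docker volume
subcommand:
$ docker volume ls
$ docker volume inspect data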
While everyone familiar with a shell knows about the command cd,
there are a few tips you should know.
Moving to your $HOME directory
$ pwd
/tmp
$ cd
$ pwd
/home/solene
Using cd without an argument will change your current directory to
your $HOME.
Moving into someone's $HOME directory
While this should fail most of the time because people shouldn’t
allow anyone to visit their $HOME, there are use cases where it can
be useful.
$ cd ~user1
$ pwd
/home/user1
$ cd ~solene
$ pwd
/home/solene
Using ~user as a parameter will move to that user's $HOME directory;
note that cd and cd ~youruser have the same result.
Moving to the previous directory
This is a very useful command which allows going back and forth
between two directories.
$ pwd
/home/solene
$ cd /tmp
$ pwd
/tmp
$ cd -
/home/solene
$ pwd
/home/solene
When you use cd -, the command will move to the previous directory
in which you were. There are two special variables in your shell:
PWD and OLDPWD. When you move somewhere, OLDPWD will hold your
location before moving and PWD holds the new path. When you use
cd -, the two variables get exchanged; this means you can only jump
between two paths using cd - multiple times.
Please note that when using cd -, your new location is displayed.
Changing directory by modifying current PWD
thfr@ showed me a cd feature I had never heard about, and this is
the perfect place to write about it. Note that this works in ksh and
zsh but is reported to not work in bash.
One example will explain better than any text.
$ pwd
/tmp/pobj/foobar-1.2.0/work
$ cd 1.2.0 2.4.0
/tmp/pobj/foobar-2.4.0/work
This tells cd to replace the first parameter pattern with the second
parameter in the current PWD and then cd into it.
$ pwd
/home/solene
$ cd solene user1
/home/user1
This could be done in a bloated way with the following command:
$ cd $(echo $PWD | sed "s/solene/user1/")
I learned it a few minutes ago but I see lots of use cases where I
could use it.
Moving into the current directory after its removal
In some specific cases, like having your shell in a directory that
existed but was deleted and created again (this happens often when
you work in compilation directories), a simple trick is to tell cd
to go to the current location.
$ cd .
or
$ cd $PWD
And cd will go into the same path, and you can start hacking again
in that directory.
There is one very handy package on OpenBSD named pkglocatedb which
provides the command pkglocate.
If you need to find a file or binary/program and you don’t know
which package contains it, use pkglocate.
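If the package isn’t installed yet, as root:
# pkg_add pkglocatedb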
$ pkglocate */bin/exiftool
p5-Image-ExifTool-12.00:graphics/p5-Image-ExifTool:/usr/local/bin/exiftool
With the result, I know that the package p5-Image-ExifTool provides
the command exiftool.
Another example, looking for files containing the pattern “libc++”:
$ pkglocate libc++
base67:/usr/lib/libc++.so.5.0
base67:/usr/lib/libc++abi.so.3.0
comp67:/usr/lib/libc++.a
comp67:/usr/lib/libc++_p.a
comp67:/usr/lib/libc++abi.a
comp67:/usr/lib/libc++abi_p.a
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.app
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.lib
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qmake.conf
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qplatformdefs.h
As you can see, base sets are also in the database used by pkglocate,
so you can easily find if a file is from a set (that you should
have) or if the file comes from a package.
Find which package installed a file
Klemens Nanni (kn@) told me it’s possible to find which package
installed a file present in the filesystem using the pkg_info
command, which comes from the base system. This can be handy to know
which package an installed file comes from, without requiring
pkglocatedb.
$ pkg_info -E /usr/local/bin/convert
/usr/local/bin/convert: ImageMagick-6.9.10.86p0
ImageMagick-6.9.10.86p0 image processing tools
This tells me the convert binary was installed by the ImageMagick
package.
Sometimes I need to download files through http from a list on an
“autoindex” page and it’s always painful to find the correct command
for this.
The easy solution is wget, but you need to use the correct
parameters because wget has a lot of mirroring options and you only
want specific ones to achieve this goal.
I ended up with the following command:
wget --continue --accept "*.tgz" --no-directories --no-parent --recursive http://ftp.fr.openbsd.org/pub/OpenBSD/6.7/amd64/
This will download every tgz file available at the address given as
the last parameter.
The parameters given will filter to only download the tgz files, put
the files in the current working directory and, most importantly,
not escape to the parent directory to start downloading again. The
--continue parameter allows interrupting wget and starting again;
downloaded files will be skipped and partially downloaded files will
be completed.
Do not reuse this command if files changed on the remote server,
because the continue feature only works if your local file and the
remote file are the same. It simply looks at the local and remote
names and asks the remote server to start downloading at the current
byte range of your local file. If the remote file changed meanwhile,
you will get a mix of the old and new files.
Obviously the ftp protocol would be better suited for this download
job, but ftp is less and less available, so I find wget to be a nice
workaround.
I manage my birthday list in a calendar file so I don’t forget about
them, and so I can use it in scripts.
The calendar file format is easy, but sadly it only works using
English month names.
This is an example file with different spacings:
7 August This is 7 august birthday!
8 August This is 8 august birthday!
16 August This is 16 august birthday!
Now that you have a calendar file, you can use the calendar binary
on it and show incoming events in the next n days using the -A flag:
calendar -A 20
Note that the default file is ~/.calendar/calendar, so if you use
this file you don’t need the -f flag with calendar.
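For example, to show the birthdays happening in the next 20 days
from a dedicated file:
calendar -f ~/.calendar/birthdays -A 20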
Now, I also use it in a crontab with xmessage to show a popup once
a day with incoming birthdays.
30 13 * * * calendar -A 7 -f ~/.calendar/birthday | grep . && calendar -A 7 -f ~/.calendar/birthdays | env DISPLAY=:0 xmessage -file -
You have to set the DISPLAY variable so it appears on the screen.
It’s important to check if calendar has any output before calling
xmessage, to prevent opening an empty window.
The software developer prx, whose website is available at
https://ybad.name/ (en/fr), released a new software called prose to
publish a blog by sending emails.
I really like this idea; while it doesn’t suit my needs at all, I
wanted to write about it.
The code can be downloaded from this address https://dev.ybad.name/prose/ .
I will briefly introduce how it works, but the README file explains
it well: prose must be started on the mail server; upon email
reception via /etc/mail/aliases, the email is piped into prose,
which produces the html output.
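As a sketch, the aliases(5) entry could look like this (hypothetical
alias name and install path, check the README for the exact
invocation):
blog: "|/usr/local/bin/prose"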
On the security side, prose doesn’t use any external command, and on
OpenBSD it uses the unveil and pledge features to reduce its
privileges; unveil restricts the process file system access to the
html output directory.
I would also like to congratulate prx, who demonstrates again that
writing good software isn’t exclusive to IT professionals.
While no one would expect this, there are huge efforts from a small
team to bring more games to OpenBSD. In fact, some commercial games
now work natively, thanks to Mono or Java. There is no wine or linux
emulation layer in OpenBSD.
Here is a small list of most well known games that run on OpenBSD:
- Northguard (RTS)
- Dead Cells (Side scroller action game)
- Stardew Valley (Farming / Roguelike)
- Slay The Spire (Card / Roguelike)
- Axiom Verge (Side scroller, metroidvania)
- Crosscode (top view twin stick shooter)
- Terraria (Side scroller action game with craft)
- Ion Fury (FPS)
- Doom 3 (FPS)
- Minecraft (Sandbox - not working using latest version)
- Tales Of Maj’Eyal (Roguelike with lot of things in it - open source and free)
I would also like to feature the recently compatible games from the
Zachtronics developer; those are ingenious puzzle games requiring
efficiency. There are games involving assembly code, pseudo code,
molecules etc…
- Opus Magnum
- Exapunks
- Molek-Syntez
Finally, there are good RPGs running thanks to devoted developers
spending their free time working on game engine reimplementations:
- Elder Scroll III: Morrowind (openmw engine)
- Baldur’s Gate 1 and 2 (gemrb engine)
- Planescape: Torment (gemrb engine)
There is a Peertube (an open source decentralized Youtube
alternative) channel where I started publishing gaming videos
recorded on OpenBSD. Now there are also videos from other people
published there: the OpenBSD Gaming channel.
The full list of running games is available in the Shopping guide
webpage, including information on how they run, on which store you
can buy them and whether they are compatible.
Big thanks to thfr@ who works hard to keep the shopping guide up to date and
who made most of this possible. Many thanks to all the other people in the
OpenBSD Gaming community :)
Note that it seems the latest Terraria release/update doesn’t work on OpenBSD yet.
While the title may appear quite strange, this article is about
installing a package to get a new random wallpaper every time you
start the X session!
First, you need to install a package named openbsd-backgrounds,
which is quite large with a size of 144 MB. This package, made by
Marc Espie, contains lots of pictures shot by some OpenBSD
developers.
You can automatically set a picture as background when xenodm starts
and prompts for your username by uncommenting a few lines in the
file /etc/X11/xenodm/Xsetup_0:
Uncomment this part
if test -x /usr/local/bin/openbsd-wallpaper
then
/usr/local/bin/openbsd-wallpaper
fi
The openbsd-wallpaper command will display a different random
picture on every screen (if you have multiple screens connected)
every time you run it.
This article is a bit exceptional: it’s about a French OpenBSD
community.
Hello everyone.
I am publishing this post to spread the word about the French
community obsd4a.
For example, you can find almost the entire OpenBSD FAQ translated
at this address.
On the site's home page you will find links to the forum, the wiki,
the blog, the mailing list and also the information to join the irc
channel (#obsd4* on freenode):
https://openbsd.fr.eu.org/
I added a new feature to my blog today: when I post a new article,
my dedicated Mastodon user https://bsd.network/@solenepercent
publishes a Toot so people can discuss the content there.
Every article now contains a link to the corresponding toot if you
want to discuss an article.
This is not perfect but a good trade-off I think:
- the website remains static and light (nothing is included, only one
more link per blog post)
- people who would like to discuss it can proceed in a known place
instead of writing reactions on reddit or other places without a
chance for me to answer
- this is not relying on proprietary services
Of course, if you want to give me feedback, I’m still happy to reply to emails
or on IRC.
Introduction
I’m using FreeBSD again on a laptop for various reasons, so expect
to read more about FreeBSD here. This tutorial explains how to get a
graphical desktop using FreeBSD 12.1.
I used a Lenovo Thinkpad T480 for this tutorial.
Intel graphics hardware support
If you have a recent Intel integrated graphics card (maybe less than
3 years old), you have to install a package containing the driver:
pkg install drm-kmod
and you also have to tell the system the correct path of the module
(because another i915kms.ko file exists):
sysrc kld_list="/boot/modules/i915kms.ko"
Choose your desktop environment
Install Xfce
pkg install xfce
Then in your user ~/.xsession file you must append:
exec ck-launch-session startxfce4
Install MATE
pkg install mate
Then in your user ~/.xsession file you must append:
exec ck-launch-session mate-session
Install KDE5
pkg install kde5
Then in your user ~/.xsession file you must append:
exec ck-launch-session startplasma-x11
Setting up the graphical interface
You have to enable a few services to have a working graphical session:
- moused to get laptop mouse support
- dbus for hald
- hald for hardware detection
- xdm for the display manager where you log in
You can install them with the command:
pkg install xorg dbus hal xdm
Then you can enable the services at boot using the following commands, order is
important:
sysrc moused_enable="yes"
sysrc dbus_enable="yes"
sysrc hald_enable="yes"
sysrc xdm_enable="yes"
Reboot or start the services in the same order:
service moused start
service dbus start
service hald start
service xdm start
Note that xdm will be in qwerty layout.
Power management
The installer should have prompted for the powerd service; if you
didn’t activate it at that time, you can still enable it.
Check if it’s running:
service powerd status
Enabling
sysrc powerd_enable="yes"
Starting the service
service powerd start
Webcam support
If you have a webcam and want to use it, some configuration is
required to make it work.
Install the webcamd package; it will display all the instructions
written below at install time.
pkg install webcamd
From here, append this line to the file /boot/loader.conf to load
webcam support at boot time:
cuse_load="yes"
Add your user to the webcamd group so it will be able to use the device:
pw groupmod webcamd -m YOUR_USER
Enable webcamd at boot:
sysrc webcamd_enable="yes"
Now, you have to log out from your user for the group change to take
effect. And if you want the webcamd daemon to work now and not wait
for the next reboot:
kldload cuse
service webcamd start
service devd restart
You should have a /dev/video0 device now. You can test it easily
with the pwcview package.
External resources
I found this blog very interesting; I wish I had found it before I
struggled with all the configuration, as it explains how to install
FreeBSD on the exact same laptop. The author explains how to make a
transparent lagg0 interface for switching from ethernet to wifi
automatically with a failover pseudo device.
https://genneko.github.io/playing-with-bsd/hardware/freebsd-on-thinkpad-t480/
Some websites (like this one) now offer two different themes: light
and dark. Dark themes are said to be better for the eyes and to
reduce battery usage on mobile devices, because displaying darker
colors requires less light and hence less energy. The gain is
optimal on OLED devices but it also works on classic LCD screens.
While on Windows and MacOS there is a global user interface setting
where you choose if your system is in light or dark mode, a setting
used by lots of applications supporting dark/light themes, on Linux,
the BSDs and other operating systems there is no such setting and
your web browser will keep displaying the light theme all the time.
Hopefully, it can be fixed in Firefox as explained in the
documentation.
To make it short, in the about:config special Firefox page, one can
create a new key ui.systemUsesDarkTheme with a number value of 1;
the Firefox about:config page should turn dark immediately, and then
Firefox will try to use dark themes when they are available.
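If you prefer a file based configuration, the same key can be set in
a user.js file in your Firefox profile directory:
user_pref("ui.systemUsesDarkTheme", 1);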
You should note that, as explained in the mozilla documentation, if
you have the key privacy.resistFingerprinting set to true, the dark
mode can’t be used. It seems dark mode and privacy can’t belong
together for some reason.
Many thanks to https://tilde.zone/@andinus who pointed this out to
me after I overlooked that page and searched a long time with no
result for how to make Firefox display websites using the dark theme.
In this article I’ll explain how to aggregate internet access
bandwidth using the mlvpn software. I struggled a lot to set this
up, so I wanted to share a how-to.
Pre-requisites
mlvpn is meant to be used with DSL / fiber links, not wireless or 4G
links with variable bandwidth or packet loss.
mlvpn must run on a server which will be the public internet access,
and on the client on which you want to aggregate the links; this is
like doing multiple VPNs to the same remote server, one VPN per
link, and aggregating them.
Multi-wan roundrobin / load balancing doesn’t allow stacking
bandwidth but doesn’t require a remote server; depending on what you
want to do, this may be enough and mlvpn may not be required.
mlvpn should be OS agnostic between client / server, but I only
tried between two OpenBSD hosts; your setup may differ.
Some network diagram
Here is a simple network: the client has access to 2 ISPs through
two ethernet interfaces.
em0 and em1 will have to be on different rdomains (a feature to
separate routing tables).
Let’s say the public ip of the server is 1.2.3.4.
[internet]
↑
| (public ip on em0)
#-------------#
| |
| Server |
| |
#-------------#
| |
| |
| |
| |
(internet) | | (internet)
#-------------# #-------------#
| | | |
| ISP 1 | | ISP 2 |
| | | | (you certainly don't control those)
#-------------# #-------------#
| |
| |
(dsl1 via em0)| | (dsl2 via em1)
#-------------#
| |
| Client |
| |
#-------------#
Network configuration
As said previously, em0 and em1 must be on different rdomains; this
can easily be done by adding rdomain 1 and rdomain 2 to the
respective interface configurations.
Example in /etc/hostname.em0:
rdomain 1
dhcp
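And the same for the second interface in /etc/hostname.em1:
rdomain 2
dhcp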
mlvpn installation
On OpenBSD the installation is as easy as pkg_add mlvpn (it should
work starting from 6.7 because it required patching).
mlvpn configuration
Once the network configuration is done on the client, there are 3 steps to do
to get aggregation working:
- mlvpn configuration on the server
- mlvpn configuration on the client
- activating NAT on the client
Server configuration
On the server we will use the UDP ports 5080 and 5081.
Connection speeds must be defined in bytes to allow mlvpn to
correctly balance the traffic over the links; this is really
important.
The line bandwidth_upload = 1468006 is the maximum download
bandwidth of the client on the specified link, in bytes. If you have
a download speed of 1.4 MB/s then you can choose a value of
1.4*1024*1024 => 1468006.
The line bandwidth_download = 102400 is the maximum upload bandwidth
of the client on the specified link, in bytes. If you have an upload
speed of 100 kB/s then you can choose a value of 100*1024 => 102400.
The password line must be a very long random string, it’s a shared secret
between the client and the server.
# config you don't need to change
[general]
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
protocol = "tcp"
loglevel = 4
mode = "server"
tuntap = "tun"
interface_name = "tun0"
cleartext_data = 0
ip4 = "10.44.43.2/30"
ip4_gateway = "10.44.43.1"
# things you need to change
password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"
[dsl1]
bindhost = "1.2.3.4"
bindport = 5080
bandwidth_upload = 1468006
bandwidth_download = 102400
[dsl2]
bindhost = "1.2.3.4"
bindport = 5081
bandwidth_upload = 1468006
bandwidth_download = 102400
Client configuration
The password value must match the one on the server; the values of
ip4 and ip4_gateway must be reversed compared to the server
configuration (as they are in the following example).
The bindfib lines must correspond to the rdomain values of your
interfaces.
# config you don't need to change
[general]
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
loglevel = 4
mode = "client"
tuntap = "tun"
interface_name = "tun0"
ip4 = "10.44.43.1/30"
ip4_gateway = "10.44.43.2"
timeout = 30
cleartext_data = 0
password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"
[dsl1]
remotehost = "1.2.3.4"
remoteport = 5080
bindfib = 1
[dsl2]
remotehost = "1.2.3.4"
remoteport = 5081
bindfib = 2
NAT configuration (server side)
As with every VPN you must enable packet forwarding and create a pf rule for
the NAT.
Enable forwarding
Add this line in /etc/sysctl.conf:
net.inet.ip.forwarding=1
You can enable it now with sysctl net.inet.ip.forwarding=1 instead
of waiting for a reboot.
In pf.conf you must allow the UDP ports 5080 and 5081 on the public
interface and enable NAT; this can be done with the following lines
in pf.conf, but you should obviously adapt them to your
configuration.
# allow NAT on VPN
pass in on tun0
pass out quick on em0 from 10.44.43.0/30 to any nat-to em0
# allow mlvpn to be reachable
pass in on egress inet proto udp from any to (egress) port 5080:5081
Start mlvpn
On both server and client you can run mlvpn with rcctl:
rcctl enable mlvpn
rcctl start mlvpn
You should see a new tun0 device on both systems and be able to ping
them through tun0.
Now, on the client you have to add a default gateway through the
mlvpn tunnel with the command route add -net default 10.44.43.2
(adapt if you use other addresses). I still didn’t find how to
automate it properly.
Your client should now use both WAN links and be visible with the
remote server's public IP address.
mlvpn can be used for more links; you only need to add new sections.
mlvpn also supports IPv6 but I didn’t take the time to find out how
to make it work, so if you are comfortable with ipv6 it may be easy
to set up with the variables ip6 and ip6_gateway in mlvpn.conf.
Hello, as there are so many questions about OpenBSD -current on IRC,
Mastodon or reddit, I’m writing this FAQ in the hope it will help
people.
The official FAQ already contains answers about -current, like
Following -current and using snapshots, and Building the system from
sources.
What is OpenBSD -current?
OpenBSD -current is the development version of OpenBSD. Lots of
people use it for everyday tasks.
How to install OpenBSD -current?
OpenBSD -current refers to the latest version built from sources
obtained with CVS; however, it’s also possible to get a pre-built
system (a snapshot), usually built and pushed to the mirrors every 1
or 2 days.
You can install OpenBSD -current by getting an installation media
like usual, but from the path /pub/OpenBSD/snapshots/ on the mirror.
How do I upgrade from -release to -current?
There are two ways to do so:
- Download the bsd.rd file from the snapshots directory and boot it
to upgrade, like for a -release to -release upgrade
- Run the sysupgrade -s command as root; this will basically download
all the sets under /home/_sysupgrade and boot on bsd.rd with an
autoinstall(8) config.
How do I upgrade my -current snapshot to a newer snapshot?
Exactly the same process as going from -release to -current.
Can I downgrade to a -release if I switch to -current?
No.
What issues can I expect in OpenBSD -current?
There are a few possible issues one can expect:
Out of sync packages
If a library gets updated in the base system and you want to update
packages, they won’t be installable until packages are rebuilt with
that new library; this usually takes 1 to 3 days.
This only creates issues when you want to install a package you
don’t have.
The other way around, you can have an old snapshot whose packages
are not installable because the libraries the packages link to are
newer than what is available on your system; in this case you have
to upgrade the snapshot.
Snapshot sets are getting updated on the mirror
If you download the sets from the mirror to update your -current
version, you may have an issue with the sha256 sum; this is because
the mirror is being updated and the sha256 file is the first to be
transferred, so the sets you are downloading are not the ones the
sha256 refers to.
Unexpected system breakage
Sometimes, very rarely (maybe 2 or 3 times a year?), some snapshots
are borked and will prevent the system from booting or lead to
regular crashes. In that case, it’s important to report the issue
with the sendbug utility.
You can fix this by using an older snapshot from the archives
server, and prevent this from happening by reading the bugs@ mailing
list before updating.
Broken package
Sometimes, a package update will break it or break some other
packages; this is often quickly fixed for popular packages, but for
some niche packages you may be the only one using it on -current and
the only one who can report about it.
If you find breakage in something you use, it may be a good idea to
report the problem on the ports@openbsd.org mailing list if nobody
did before. By doing so, the issue will be fixed and the next
-release users will be able to install a working package.
Is -current stable enough for a server or a workstation?
It’s really up to you. Developers all use -current and are forbidden
to break it, so the system should totally be usable for everyday
use.
What may be complicated on a server is keeping it updated regularly
and facing issues requiring troubleshooting (like a major database
upgrade which was missing a quirk).
For a workstation I think it’s pretty safe, as long as you can deal
with packages that can’t be installed until they are in sync.
Hello,
A few days ago, as someone who has been working remotely for 3
years, I published some tips to help new remote workers feel more
confident in their new workplace: home.
I’ve been told I should publish it on my blog so it’s easier to
share the information, so here it is.
- dedicate some space to your work area; if you use a laptop, try to
dedicate a table corner to it, so you don’t have to remove your
“work station” all the time
- keep track of the time: remember to drink and stand up / walk every
hour. You can set an alarm every hour as a reminder, or use software
like http://www.workrave.org/ or https://github.com/hovancik/stretchly
which are very useful. If you are alone at home, you may lose track
of time, so this is important.
- don’t forget to keep your phone at hand if you use it for
communication with colleagues. They may only know your phone number,
so it may be their only way to reach you
- keep some routine for lunch; you should eat correctly and take the
time to do so, and avoid eating in front of the computer
- don’t work too much after work hours; do like at your workplace:
leave work when you feel it’s time to and shut down everything
related to work. It’s a common trap to want to do more and keep an
eye on mails; don’t fall into it.
- depending on your social skills, work field and colleagues, speak
with others (phone, text, whatever); it’s important to keep social
links.
Here are some other tips from Jason Robinson:
- after work, distance yourself from the work time by taking a short
walk outside, cooking, doing laundry, or anything that gets you away
from the work area and cuts the flow.
- take at least one walk outside if possible during the daytime to
get fresh air.
- get a desk that can be adjusted for both standing and sitting.
I hope this advice will help you get through the crisis; take care
of yourselves.
This is a little story that happened a few days ago; it explains
well how I usually get involved in OpenBSD ports.
1 - Lurking into ports/graphics/
At first, I was looking at the various ports in the graphics
category, searching for an image editor that would run correctly on
my offline laptop.
Grafx2 is laggy when using the zoom mode and GIMP won’t run, so I
just open ports randomly to read their pkg/DESCR file.
This way, I often find gems I reuse later; sometimes I have less
luck and I only try 20 ports which are useless to me. It happens
that I find issues in ports while looking randomly like this…
2 - Find the port « comix »
Then, the second or third port I look at is « comix », here is the DESCR file.
Comix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer. It
reads images in ZIP, RAR or tar archives (also gzip or bzip2 compressed)
as well as plain image files.
That looked awesome: I have lots of books as PDFs I want to read,
but it’s not convenient in a “normal” PDF reader, so maybe comix
would help!
3 - Using comix
Once comix was compiled (a mix of python and gtk), I started it and
got errors opening PDFs… I started it again from the console, and
the output explained that PDF files are not usable in comix.
Then I read about the CBZ and CBT files: they are archives (zip or
tar) containing pictures, definitely not what a PDF is.
4 - mcomix > comix
After a few searches on the Internet, I found that the last comix
release is from 2009 and it never supported PDF, so nothing is wrong
here; but I also found that comix has a fork named mcomix.
mcomix forked from comix a long time ago to fix issues and add
support for new features (like PDF support); while its last release
is from 2016, it works and still receives commits (the last is from
late 2019). I’m going to use mcomix!
5 - Installing mcomix from ports
The best way to install a program on OpenBSD is to make a port, so
it’s correctly packaged, can be deinstalled, and can be submitted to
the ports@ mailing list later.
I copied the comix folder to mcomix, used a brain-dead sed command
to replace all occurrences of comix with mcomix, and it mostly
worked! I won’t explain all the little details, but I got mcomix to
work within a few minutes and I was quite happy!
Fun fact: the comix port Makefile was mentioning mcomix as a
suggestion for upgrade.
6 - Enjoying a CBR reader
With mcomix installed, I was able to read some PDFs; it was a good
experience and I was pretty happy with it. I spent a few hours
reading right after mcomix was installed.
7 - mcomix works but not all the time
After reading 2 long PDFs, I got issues with the third: some pages
were not rendered and not displayed. After digging into this issue a
bit, I found out about mcomix internals. Reading a PDF is done by
rendering every page of the PDF using the mutool binary from the
mupdf software; this is quite CPU intensive, and for some reason in
mcomix the command execution fails, while I can run the exact same
command a hundred times with no failure. Worse, the issue is not
reproducible in mcomix: sometimes some pages fail to render,
sometimes not!
8 - Time to debug some python
I really wanted to read those PDFs, so I took my favorite editor and
started debugging some python, adding more debug output (mcomix has
a -W parameter to enable debug output, which is very nice) to try to
understand why it fails at getting the output of a working command.
Sadly, my python foo is too low and I wasn’t able to pinpoint the
issue. I just found that it fails, sometimes, but I wasn’t able to
understand why.
9 - mcomix on PowerPC
While mcomix is clunky with PDFs, I wanted to check if it was
working on PowerPC. It took some time to get all the dependencies
installed on my old computer, but finally I got mcomix displayed on
the screen… and dying on PDF loading! The crash seems related to GTK
and I don’t want to touch that; nobody will want to patch GTK for
that anyway, so I’ve lost hope there.
10 - Looking for alternative
Once I knew about mcomix, I was able to search the Internet for
alternatives to it, and also for CBR readers. A program named
zathura seems well known here and we have it in the OpenBSD ports
tree.
The weird thing is that it comes with two different PDF plugins, one
named mupdf and the other one poppler. I tried quickly on my amd64
machine and zathura was working.
11 - Zathura on PowerPC
As Zathura was working nicely on my main computer, I installed it on
the PowerPC, first with the poppler plugin; I was able to view PDFs,
but installing this plugin pulled in so many package dependencies it
was a bit sad. I deinstalled the poppler PDF plugin and installed
the mupdf plugin.
I opened a PDF and… error. I tried again starting zathura from the
terminal, and I got the message that PDF is not a supported format,
with a lot of lines related to the mupdf.so file not being usable.
The mupdf plugin works on amd64 but is not usable on powerpc; this
is a bug I need to report. I don’t understand why this issue happens
but it’s here.
12 - Back to square one
It seems that reading PDFs is a mess, so why couldn’t I convert the
PDF to CBT files and then use any CBT reader out there, and not have
to deal with that PDF madness!!
13 - Use big calibre for the job
I found on the Internet that Calibre is the most used tool to
convert a PDF into CBT files (or into something else, but I don’t
really care here). I installed calibre, which is not lightweight,
started it and wanted to change the default library path; the
software hung when it displayed the file dialog. This won’t stop me:
I restarted calibre and kept the default path, clicked on « Add a
book » and then it hung again on the file dialog. I reported this
issue on the ports@ mailing list, but it didn’t solve the issue and
this means calibre is not usable.
14 - Using the command line
After all, CBT files are images in a tar file; it should be easy to
reproduce the mcomix process involving mutool to render pictures and
make a tar of them.
IT WORKED.
I found two ways to proceed: one is extremely fast but may not put
the pages in the correct order; the second requires CPU time.
Making CBT files - easiest process
The first way is super easy; it requires mutool (from the mupdf
package) and it will extract the pictures from the PDF, given it’s
not a vector PDF (I'm not sure what would happen with those). The
issue is that in the PDF, the embedded pictures have a name (a
number, from the few examples I saw), and they are not necessarily
in the correct order. I guess this depends on how the PDF was made.
$ mutool extract The_PDF_file.pdf
$ tar cvf The_PDF_file.tar *jpg
That’s all you need to have your CBT file. In my PDF there were jpg
files, but they may be png in others, I’m not sure.
Making CBT files - safest process (slow)
The other way of making pictures out of the PDF is the one used in
mcomix: call mutool to render each page as a PNG file using the
width/height/DPI you want. That’s the tricky part: you may not want
to produce pictures with a larger resolution than the original
pictures (and mutool won’t automatically help you there) because you
won’t get any benefit. The same goes for the DPI. I think this could
be done automatically with a script checking each PDF page
resolution and using mutool to render the page with the exact same
resolution.
As a rule of thumb, it seems that rendering using the same width as
your screen is enough to produce pictures of the correct size. If
you use larger values it’s not really an issue, but it will create
bigger files and take more time to render.
$ mutool draw -w 1920 -o page%d.png The_PDF_file.pdf
$ tar cvf The_PDF_file.tar page*.png
You will get a PNG file for each page, correctly numbered, with a
width of 1920 pixels. Note that instead of tar, you can use zip to
create a zip file.
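For example, assuming the zip package is installed, creating a
zip-based CBZ file instead of a tar would look like:
$ zip The_PDF_file.cbz page*.png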
15 - Finally reading books again
After all this LONG process, I was finally able to read my PDFs with
any CBR reader out there (even on a phone), and once the conversion
is done, viewing the files uses no cpu, unlike mcomix which renders
all the pages when you open a file.
I have to use zathura on PowerPC, even if I like it less due to the
continuous pages display (it can’t be turned off), but mcomix
definitely works great when not dealing with PDFs. I’m still unsure
it’s worth committing mcomix to the ports tree if it fails randomly
on random pages with PDFs.
16 - Being an open source activist is exhausting
All I wanted was to read a PDF book with a warm cup of tea at hand.
It ended in learning new things, debugging code, making ports,
submitting bug reports and writing a story about all of this.
Last year I wrote a huge blog post about an offline laptop attempt.
It kind of worked but I wasn’t really happy with the setup, needs
and goals. So, it is back, I use it now, and I am very happy with it.
This article explains my experience solving my needs; I would
appreciate not receiving advice or judgments here.
State of the need
Internet is infinite, my time is not
Having access to the Internet is a gift: I can access anything or
anyone. But this comes with a few drawbacks. I can waste my time on
anything, which is not particularly helpful. There is so much
content that I only scratch the surface of things, knowing it will
still be there when I need it, and jump to something else. The
amount of data is impressive; one human can’t absorb that much, we
have to deal with it.
I used to spend time on what I had, and now I just spend time on
what exists. An example of this statement is that instead of reading
books I own, I’m looking for which book I may want to read once;
meanwhile no book gets read.
Network socialization requires time
When I say “network socialization” it is to avoid the easy “social
network” phrase. I speak with people on IRC (in real time most of
the time), I help people on reddit, and I read and write mail most
of the time for OpenBSD development.
Don’t get me wrong, I am happy doing this, but I always keep an eye
on each, trying to help people as soon as they ask a question, and
this is really time consuming for me. I spend a lot of time jumping
from one thing to another to keep myself updated on everything, and
so I am too distracted to do anything.
In my first attempt at the offline laptop, I wanted to get my mails
on it, but it was too painful to download everything and keep mails
in sync. Sending emails would have required network access too; it
wouldn’t be an offline laptop anymore.
IT as a living and as a hobby
On top of this, I am working in IT, so I spend my day doing things
over the Internet and after work I spend my time on open source
projects. I can not really disconnect from the Internet for either.
How I solved this
The first step was to define « What do I like to do? », and I came
up with this short list:
- reading
- listening to music
- playing video games
- writing things
- learning things
One could say I don’t need a computer to read books, but I have lots
of ebooks and PDFs about lots of subjects. The key is to load
everything you need onto the computer, because otherwise it can be
tempting to connect the device to the Internet because you need a
bit of this or that.
I use a very old computer with a PowerPC CPU (1.3 GHz single core)
and 512MB of ram. I like that old computer, and a slower computer
forbids doing multiple things at the same time and helps me stay
focused.
Reading files
For reading, I found zathura and comix (and its fork mcomix) very
useful for reading huge PDFs; the scrolling customization makes
those tools convenient.
Listening to music
I buy my music as FLAC files and download it; this doesn’t require
any internet access except at purchase time, so nothing special
there. I use the moc player, which is easy to use, has a lot of
features and supports FLAC (on powerpc).
Video games
Emulation is a nice way to play lots of games on OpenBSD; on my old
computer it’s up to game boy advance / super nes / megadrive, which
should allow me to replay lots of games I own.
We also have a lot of nice games in ports, but my computer is too
slow to run them, or they won’t work on powerpc.
Encyclopedia - Wikipedia
I’ve set up a local wikipedia replica like I explained in a previous
article, so anytime I need to find out about something, I can ask my
local wikipedia. It’s always available. This is the best I found for
a local encyclopedia; it works well.
Writing things
Since I started the offline computer experience, I started a diary. I never
felt the need to do so but I wanted to give it a try. I have to admit summing up
what I achieved in the day before going to bed is a satisfying experience and
now I continue to update it.
You can use any text editor you want; there is special software with
specific features, like rednotebook or lifeograph, which support
embedded pictures or on-the-fly markdown rendering. But a text file
and your favorite editor also do the job.
I also write some articles for this blog. It’s easy to do so, as
articles are text files in a git repository. When I finish and need
to publish, I get network access and push the changes to the
connected computer, which does the publishing job.
Technical details
I will go fast on this. My setup is an old Apple iBook G4 with a
1024x768 screen (I love this 4:3 ratio) running OpenBSD.
The system firewall pf is configured to block any incoming
connections, only allowing TCP to port 22 on the network, because
when I need to copy files, I use ssh / sftp. The /home partition is
encrypted using the softraid crypto device; full disk encryption is
not supported on powerpc.
The experience is even more enjoyable with a warm cup of tea on hand.
Introduction
I started biking seriously a few months ago, and as I love having
statistics I needed to gather some. I found a lot of devices on the
market, but I preferred using opensource tools and not relying on
any vendor.
The best option for me was reusing a 6 year old smartphone on which
the SIM card bus is broken: that phone loses the sim card when it is
shaken a little and requires a reboot to find it again. I am happy I
found a way to reuse it.
Tip: turn ON airplane mode on the smartphone while riding; even
without a SIM card it will try to get network, which drains the
battery and emits useless radio waves. In case of emergency, just
disable airplane mode to get access to your local emergency call
number. GPS is a passive module and doesn’t require any network.
This smartphone has a GPS receiver; it’s enough for recording my
position as often as I want. Using the right GPS software from the
F-droid store and a program for sftp transfers, I can record data
and transfer it easily to my computer.
The most common file format for recording GPS positions is the GPX
format: it’s a simple XML file containing all positions with their
timestamps, sometimes with a bit more information like the speed at
that time; but given you have all the positions, software can
calculate the speed between each position.
Android GPS Software
It seems GPS software for recording GPX tracks is becoming popular,
and in the last months lots of new software appeared, which is a
good thing. I didn’t test all of them though, but they tend to be
easier to use and minimalistic.
OpenStreetMap app - OSMand~
You can install it from F-droid, an alternate store for
Android with only open source software; it's a fully free (and
open source) version compared to the one you can find on the Android store.
This is the official OpenStreetMap software; it's full of features and quite
heavy. You can download maps for navigation, record tracks, view track
statistics, contribute to OSM, get Wikipedia information for an area, all
of this while being OFFLINE. Not only on my bike, I use it all the
time while walking or in my car.
Recorded GPX can be found in the default path
Android/data/net.osmand.plus/files/tracks/rec/
Trekarta
I found another program named Trekarta which is a lot lighter than
OSMand, but it only focuses on recording your tracks. I would recommend it if
you don't want any other feature, have a really old Android-compatible phone,
or are low on disk space.
Analyzing GPX files / keep track of everything
I found Turtlesport, an open source Java program whose last release was
years ago but which still works out of the box, given you have a Java
implementation installed. You can find it at the following
link.
/usr/local/bin/jdk-1.8.0/bin/java -jar turtlesport.jar
Turtlesport is a nice tool for viewing tracks; it's not only for cycling
and can be used for various sports. The process is the following:
- define sports you do (bike, skateboard, hiking etc..)
- define equipments you use (bike, sport shoes, skis etc..)
- import GPX files and tell Turtlesport which sport and equipment it’s related to
Then, for each GPX file, you will be able to see it on a map, see elevation and
speed of that track, but you can also make statistics per sport or equipment,
like “How many km I ride with that bike over last year, per week”.
If you don’t have a GPX file, you can still add a new trip into the database by
drawing the path on a map.
In the equipments view, you will see how many kilometers each was used for,
with an alert feature if the equipment goes beyond a defined wear limit. I'm
not sure about the use of this, maybe you want to know your shoes shouldn't be
used for more than 2000 km? Maybe it's possible to use it for maintenance
purposes: say your bike has a wear limit of 1000 km, when you reach it you get
an alert, do your maintenance and set the new limit to 2000 km.
Viewing GPX files
From OpenBSD 6.7 you can install the package gpxsee to open multiple GPX
files: they will be shown on a map, each track with a different colour, with
nice charts displaying the elevation or speed over the travel for every track.
Before gpxsee I was using the GIS (Geographical Information System) tool
qgis, but it is really heavy and complicated. Still, if you want to work on
your recorded data, like doing complex statistics, it's a powerful tool if you
know how to use it.
I like to use it for gamification: I'm trying to ride over every road around
my home, and viewing all GPX files at the same time allows me to plan the next
trip through roads where I never went.
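For example, assuming all the recorded files are stored in ~/gpx (the path is
an assumption):
$ gpxsee ~/gpx/*.gpx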
Miscellaneous
Create an unique GPX file from all records
It is possible to merge GPX files into one giant file using gpsbabel. I was
using this before having gpxsee, but I have no idea what you can do with
that, it creates one big spaghetti track. I chose to keep the command here,
in case it's useful for someone one day:
gpsbabel -s -r -t -i GPX $(ls /path/to/files/*gpx | awk '{ printf "-f %s ", $1 }') -o GPX -F - > sum.gpx
Cycling using electronic devices
Of course, if you are a true racing cyclist and GPX files are not enough for
you, you will certainly want devices such as a power meter or a cadence meter
and an on-board device to use them. I can't help much about hardware.
However, you may want to give a try to Golden
Cheetah to import all your data from various
devices and make complex statistics from it. I tried it and I had no idea
about the purpose of 90% of the features.
Have fun
Don't forget to have fun and do not get obsessed by numbers!
I like Common LISP and I also like awk. Dealing with text files in Common LISP
is often painful. So I wrote a small awk-like Common Lisp macro, which helps a
lot when dealing with text files.
Here is the implementation. I used the uiop package for its split-string
function, it comes with sbcl. But it's possible to write your own split-string
or reuse the infamous split-str function shared on the Internet.
(defmacro awk(file separator &body code)
  "allow running code for each line of a text file,
   giving access to NF and NR variables, and also to
   fields list containing fields, and line containing $0"
  `(progn
     (let ((stream (open ,file :if-does-not-exist nil)))
       (when stream
         (loop for line = (read-line stream nil)
               counting t into NR
               while line do
               (let* ((fields (uiop:split-string line :separator ,separator))
                      (NF (length fields)))
                 ,@code))))))
It's interesting to note that the "do" in the loop could be replaced with a
"collect", allowing reuse of the awk output as a list in another function. A
quick example I have in mind is this:
;; equivalent of awk '{ print NF }' file | sort | uniq
;; for counting how many differents fields long line we have
(uniq (sort (awk "file" " " NF)))
Now, here are a few examples of usage of this macro, I've written the original
awk command in the comments in comparison:
;; numbering lines of a text file with NR
;; awk '{ print NR": "$0 }' file.txt
;;
(awk "file.txt" " "
(format t "~a: ~a~%" NR line))
;; display NF-1 field (yes it's -2 in the example because -1 is last field in the list)
;; awk -F ';' '{ print NF-1 }' file.csv
;;
(awk "file.csv" ";"
(print (nth (- NF 2) fields)))
;; filtering lines (like grep)
;; awk '/unbound/ { print }' /var/log/messages
;;
(awk "/var/log/messages" " "
(when (search "unbound" line)
(print line)))
;; printing the 4th field
;; awk -F ';' '{ print $4 }' data.csv
;; (nth is zero-indexed, so awk's $4 is (nth 3 fields))
(awk "data.csv" ";"
  (print (nth 3 fields)))
If you want to contribute to the OpenBSD ports collection you will want to
enable the PORTS_PRIVSEP feature. When this variable is set, the ports system
will use dedicated users for its tasks.
Source tarballs will be downloaded by the user
_pfetch and all compilation and packaging
will be done by the user _pbuild.
Those users are created at system install time, and pf has a default rule to
prevent the _pbuild user from doing network access. This will prevent ports
from doing network stuff, and this is what you want.
This adds a lot of security to the porting process: any malicious code
run while a port is being compiled will be harmless.
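For reference, the stock /etc/pf.conf ships a rule along these lines to cut
_pbuild off the network (the exact wording may differ between releases):
block return out log proto {tcp udp} user _pbuild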
In order to enable this feature, a few changes must be made.
The file /etc/mk.conf must contain:
PORTS_PRIVSEP=yes
SUDO=doas
Then, /etc/doas.conf must allow your user to become _pfetch and _pbuild:
permit keepenv nopass solene as _pbuild
permit keepenv nopass solene as _pfetch
permit keepenv nopass solene as root
If you don’t want to use the last line, there is an explanation in the
bsd.port.mk(5) man page.
Finally, within the ports tree, some permissions must be changed.
# chown -R _pfetch:_pfetch /usr/ports/distfiles
# chown -R _pbuild:_pbuild /usr/ports/{packages,plist,pobj,bulk}
If the directories don't exist yet on your system (this is the case on a fresh
ports checkout / untar), you can create them with the commands:
# install -d -o _pfetch -g _pfetch /usr/ports/distfiles
# install -d -o _pbuild -g _pbuild /usr/ports/{packages,plist,pobj,bulk}
Now, when you run a command in the ports tree, privileges should be dropped to
the corresponding users.
Introduction
rsnapshot is a handy tool to manage backups using rsync and hard links on the
filesystem. rsnapshot will copy folders and files, but it avoids duplication
across backups by using hard links for files which have not changed.
This kind of creates snapshots of the folders you want to back up, only using
rsync; it's very efficient and easy to use, and getting files back from
backups is really easy as they are stored as plain files under the rsnapshot
backup directory.
Installation
Installing rsnapshot is very easy, on most systems it will be in your official
package repository.
To install it on OpenBSD: pkg_add rsnapshot
(as root)
Configuration
Now you may want to configure it; on OpenBSD you will find a template in
/etc/rsnapshot.conf
that you can edit for your needs (you can make a backup
of it first if you want to start over). As stated in big letters (as big as
they can be displayed in a terminal) at the top of the sample configuration
file, things must be separated by TABS and not spaces. I've
made the mistake more than once, don't forget to use tabs.
I won't explain all the options, only the most important ones.
The variable snapshot_root
is where you want to store the backups. Don't put
that directory inside a directory you will back up (that would end in an
infinite loop).
The variable backup
is for telling rsnapshot what you want to back up from
your system, and to which directory inside snapshot_root.
Here are a few examples:
backup /home/solene/ myfiles/
backup /home/shera/Documents shera_files/
backup /home/shera/Music shera_files/
backup /etc/ etc/
backup /var/ var/ exclude=logs/*
Be careful with trailing slashes in paths, they work the same as with rsync:
/home/solene/
means the target directory will contain the content
of /home/solene/,
while /home/solene
will copy the folder solene itself into the
target directory, so you end up with target_directory/solene/the_files_here.
The retain variables
are very important, they define how rsnapshot keeps
your data. In the example you will see alpha, beta, gamma, but it could be
hour, day, week, or foo and bar. It's only a name that rsnapshot will use to
name your backups, and also the name you will use to tell rsnapshot which kind
of backup to do. Now, I must explain how rsnapshot actually works.
How it works
Let's go for a straightforward configuration. We want a backup every hour for
the last 24 hours, a backup every day for the past 7 days, and 3 manual
backups that we start manually.
We will have this in our rsnapshot configuration
retain hourly 24
retain daily 7
retain manual 3
but how does rsnapshot know how to do what? The answer is that it doesn’t.
In root user crontab, you will have to add something like this:
# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly
# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily
and then, when you want to do a manual backup, just start rsnapshot manual
Every time you run rsnapshot for a "kind" of backup, the last version will be
named in the rsnapshot root directory like hourly.0, and every backup will be
shifted by one. The directory getting a number higher than the number on the
retain
line will be deleted.
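To illustrate with the configuration above (the snapshot_root path here is an
assumption), the backup directory would end up looking like this, hourly.0
being the most recent snapshot:
$ ls /var/rsnapshot/
daily.0/   daily.1/   ...  daily.6/
hourly.0/  hourly.1/  ...  hourly.23/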
New to crontab?
If you have never used crontab, I will share two important things to know
about it.
Use MAILTO="" if you don't want to receive every output generated by the
scripts started by cron.
Use a PATH containing /usr/local/bin/, because it is not present in the
default cron PATH. Instead of setting PATH you can also use full binary paths
in the crontab, like /usr/local/bin/rsnapshot daily.
You can edit the current user crontab with the command crontab -e.
Your crontab may then look like:
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin
MAILTO=""
# comments are allowed in crontab
# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly
# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily
You may sometimes need to crop a video, that is, reduce the visible area to a
rectangle of it, trimming away the parts you don't want.
This is possible with ffmpeg using the video filter crop.
To make the example more readable, I replaced values with variable names:
- WIDTH = width of output video
- HEIGHT = height of output video
- START_LEFT = relative position of the area compared to the left, left being 0
- START_TOP = relative position of the area compared to the top, top being 0
So the actual command looks like
ffmpeg -i input_video.mp4 -filter:v "crop=$WIDTH:$HEIGHT:$START_LEFT:$START_TOP" output_video.mp4
If you want to crop the video to get a 320x240 video from the top-left
position 500,100 the command would be
ffmpeg -i input_video.mp4 -filter:v "crop=320:240:500:100" output_video.mp4
Extract audio and video (separation)
If for some reasons you want to separate the audio and the video from a file
you can use those commands:
ffmpeg -i input_file.flv -vn -acodec copy audio.aac
ffmpeg -i input_file.flv -an -vcodec copy video.mp4
Short explanation:
- -vn means -video null, so you discard the video stream
- -an means -audio null, so you discard the audio stream
- codec copy means the output keeps the original format from the file; if the
audio is mp3 then the output file will be mp3, whatever extension you choose
Instead of using codec copy you can choose a different codec for the extracted
file, but copy is a good choice: it performs really fast because you don't
re-encode, and it is lossless.
I use this to rework the audio with audacity.
Merge audio and video into a single file (merge)
After you reworked tracks (audio and/or video) of your file, you can combine
them into a single file.
ffmpeg -i input_audio.aac -i input_video.mp4 -acodec copy -vcodec copy -f flv merged_video.flv
Good news for my gamer readers. It's not really fresh news but it has never
been written anywhere.
The commercial video game Crosscode is
written in HTML5, making it available on every system having chromium or
firefox. The limitation is that it may not support gamepads (unless you find
a way to make them work).
A demo is downloadable at this address
https://radicalfishgames.itch.io/crosscode and should work using the following
instructions.
You need to buy the game to be able to play it, it's not free and not
open source. Once you have bought it, the process is easy:
- Download the linux installer from GOG (it may also work with the steam version)
- Extract the data
- Patch a file if you want to use firefox
- Serve the files through a http server
The first step is to buy the game and get the installer.
Once you get a file named like "crosscode_1_2_0_4_32613.sh", run unzip
on it: it's a shell script, but actually a self-contained archive that can
extract itself using the small shell script at the top.
Change directory into data/noarch/game/assets
and apply this patch; if you
don't know how to apply a patch or don't want to, you only need to
remove/comment the part you can see in the following patch:
--- node-webkit.html.orig Mon Dec 9 17:27:17 2019
+++ node-webkit.html Mon Dec 9 17:27:39 2019
@@ -51,12 +51,12 @@
<script type="text/javascript">
// make sure we don't let node-webkit show it's error page
// TODO for release mode, there should be an option to write to a file or something.
- window['process'].once('uncaughtException', function() {
+/* window['process'].once('uncaughtException', function() {
var win = require('nw.gui').Window.get();
if(!(win.isDevToolsOpen && win.isDevToolsOpen())) {
win.showDevTools && win.showDevTools();
}
- });
+ });*/
function doStartCrossCodePlz(){
if(window.startCrossCode){
Then you need to start a http server in the current path, an easy way to do it
is using… php! Because php contains a http server, you can start the server
with the following command:
$ php -S 127.0.0.1:8080
Now, you can play the game by opening http://localhost:8080/node-webkit.html
I really thank Thomas Frohwein aka thfr@ for finding this out!
Tested on OpenBSD and OpenIndiana, it works fine on an Intel Core 2 Duo T9400
(CPU from 2008).
Wikipedia and openzim
If you ever wanted to host your own wikipedia replica, here is the simplest
way.
As wikipedia is REALLY huge, you don't really want to host the PHP MediaWiki
software and load its huge database; instead, the project made the openzim
format to compress the huge database that wikipedia became, while still
allowing fast searches in it.
Sadly, on OpenBSD we have no software reading zim files, and most programs
require the openzim library, which requires extra work to get packaged on
OpenBSD.
Fortunately, there is a pure python package implementing all you need to
serve zim files over http, and it's easy to install.
This tutorial should work on all other unix-like systems, but package or
binary names may change.
Downloading wikipedia
The project Kiwix is responsible for the wikipedia files: they regularly
create files from various projects (including stackexchange, gutenberg,
wikibooks etc…), but for this tutorial we want wikipedia:
https://wiki.kiwix.org/wiki/Content_in_all_languages
You will find a lot of files; the language is part of the filename. Filenames
also tell whether the archive contains everything or only some categories, and
whether pictures are included.
The full French file weighs 31.4 GB.
Running the server
For the next steps, I recommend setting up a new user dedicated to this.
On OpenBSD, we will require python3 and pip:
$ doas pkg_add py3-pip--
Then we can use pip to fetch and install dependencies for the zimply software.
The flag --user
is rather important: it allows any user to download and
install python libraries in their home folder instead of polluting the whole
system as root.
$ pip3.7 install --user --upgrade zimply
I wrote a small script to start the server using the zim file as a parameter;
I rarely write python so the script may not be high standard.
File server.py:
from zimply import ZIMServer
import sys
import os.path
if len(sys.argv) == 1:
    print("usage: " + sys.argv[0] + " file")
    exit(1)

if os.path.exists(sys.argv[1]):
    ZIMServer(sys.argv[1])
else:
    print("Can't find file " + sys.argv[1])
And then you can start the server using the command:
$ python3.7 server.py /path/to/wikipedia_fr_all_maxi_2019-08.zim
You will be able to access wikipedia on the url http://localhost:9454/
Note that this is not a "wiki" as you can't see history or edit/create pages.
This kind of archive is used in places like Cuba or parts of Africa where
people don't have unlimited internet access; the project led by Kiwix allows
more people to access knowledge.
What is this article about?
For some time I have wanted to share how I manage my personal laptop and
systems. I got into the habit of creating a lot of users for just about
everything, for security reasons.
Creating a new user is fast, I can connect as this user using doas
or ssh -X if I need an X app, and this prevents some code from
stealing data from my main account.
Maybe I went too far this way: I have a dedicated irssi user which
is only for running irssi, same with mutt. I also have a user with
a stupid name that I use for testing X apps, so I can wipe
the data in its home directory (to try fresh firefox profiles in
case of a ports update, for example).
How to proceed?
Creating a new user is as easy as this command (as root):
# useradd -m newuser
# echo "permit nopass keepenv solene as newuser" >> /etc/doas.conf
Then, from my main user, I can do:
$ doas -u newuser 'mutt'
and it will run mutt as this user.
This way, I can easily manage lots of services from packages which
don't come with dedicated daemon users.
For this to be effective, it's important to have a chmod 700 on
your main user account's home directory, so other users can't browse your
files.
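For example, assuming your home directory is /home/solene:
$ chmod 700 /home/solene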
Graphical software with dedicated users
It becomes more tricky for graphical software. There are two options there:
- allow another user to use your X session: native performance, but in case of
a security issue in the software, your whole X session is accessible
(recording keys, screenshots etc…)
- run the software through ssh -X, which restricts X access for the software,
but the rendering will be a bit sluggish and not suitable for some uses.
Example of using ssh -X compared to ssh -Y:
$ ssh -X foobar@localhost scrot
X Error of failed request: BadAccess (attempt to access private resource denied)
Major opcode of failed request: 104 (X_Bell)
Serial number of failed request: 6
Current serial number in output stream: 8
$ ssh -Y foobar@localhost scrot
(nothing output but it made a screenshot of the whole X area)
Real world example
On a server I have the following new users running:
- torrents
- idlerpg
- searx
- znc
- minetest
- quake server
- awk cron parsing http
Each of these users can have its own crontab.
Maybe I use this scheme too much, but it works fine for me.
If you want to remove parts of a video, you have to cut it into pieces and
then merge the pieces, so you can leave out the parts you don't want.
The command is not obvious at all (as with most ffmpeg uses); I found the
parts in different areas of the Internet.
Split in parts, we want to keep from 00:00:00 to 00:30:00 and 00:35:00 to 00:45:00
ffmpeg -i source_file.mp4 -ss 00:00:00 -t 00:30:00 -acodec copy -vcodec copy part1.mp4
ffmpeg -i source_file.mp4 -ss 00:35:00 -t 00:10:00 -acodec copy -vcodec copy part2.mp4
The -ss parameter tells ffmpeg where to start in the video, and the -t
parameter gives the duration to keep.
Then, merge the files into one file:
printf "file %s\n" part1.mp4 part2.mp4 > file_list.txt
ffmpeg -f concat -i file_list.txt -c copy result.mp4
Instead of using printf, you can write the list of files into file_list.txt
like this:
file /path/to/test1.mp4
file /path/to/test2.mp4
Introduction
I don't use gpg a lot, but it seems to be the only tool out there for
encrypting data which "works" and is widely used.
So this is my personal cheatsheet for everyday use of gpg.
In this post, I use the command gpg2,
which is the binary of GPG version 2.
On your system, the "gpg" command could be gpg2 or gpg1.
You can use gpg --version
if you want to check the real version behind the gpg
binary.
In your ~/.profile file you may need the following line:
export GPG_TTY=$(tty)
Install GPG
The real name of GPG is GnuPG, so depending on your system the package can be
either gpg2, gpg, gnupg, gnugp2 etc…
On OpenBSD, you can install it with: pkg_add gnupg--%gnupg2
GPG Principle using private/public keys
- YOU make a private and a public key (associated with a mail)
- YOU give the public key to people
- PEOPLE import your public key into their keyring
- PEOPLE use your public key from the keyring
- YOU will need your password every time
I think gpg can do much more, but read the manual for that :)
Initialization
We need to create a public and a private key.
solene$ gpg2 --gen-key
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Note: Use "gpg2 --full-generate-key" for a full featured key generation dialog.
GnuPG needs to construct a user ID to identify your key.
In this part, you should put your real name and your email address, and
validate with "O" if you are okay with the input. You will be asked for a
passphrase afterwards.
Real name: Solene
Email address: solene@domain.example
You selected this USER-ID:
"Solene <solene@domain.example>"
Change (N)ame, (E)mail, or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 368E580748D5CA75 marked as ultimately trusted
gpg: revocation certificate stored as '/home/solene/.gnupg/openpgp-revocs.d/7914C6A7439EADA52643933B368E580748D5CA75.rev'
public and secret key created and signed.
pub rsa2048 2019-09-06 [SC] [expires: 2021-09-05]
7914C6A7439EADA52643933B368E580748D5CA75
uid Solene <solene@domain.example>
sub rsa2048 2019-09-06 [E] [expires: 2021-09-05]
The key will expire after 2 years, but this is okay, it's even a good thing:
if you stop using the key, it will die silently at its expiration time.
If you still use it, you will be able to extend the expiration date, and
people will be able to notice you still use that key.
Export the public key
If someone asks for your GPG key, this is what they want:
gpg2 --armor --export solene@domain.example > solene.asc
Import a public key
Import the public key:
gpg2 --import solene.asc
Delete a public key
In case someone changes their public key, you will want to delete the old one
before importing the new one; replace $FINGERPRINT by the actual fingerprint
of the public key.
gpg2 --delete-keys $FINGERPRINT
Encrypt a file for someone
If you want to send the file picture.jpg to remote@mail, then use the command:
gpg2 --encrypt --recipient remote@domain.example picture.jpg > picture.jpg.gpg
You can now send picture.jpg.gpg to remote@mail, who will be able to read the
file with their private key.
You can use the --armor parameter to make the output plaintext, so you can put
it into a mail or a text file.
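For example, a hypothetical armored variant of the previous command:
gpg2 --encrypt --armor --recipient remote@domain.example picture.jpg > picture.jpg.asc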
Decrypt a file
Easy!
gpg2 --decrypt image.jpg.gpg > image.jpg
Get public key fingerprint
The fingerprint is a short string made out of your public key and can be
embedded in a mail (often as a signature) or anywhere.
It allows comparing a public key you received from someone with the
fingerprint that you may find in mailing list archives, twitter, a html page
etc., if the person spread it somewhere. This allows checking the authenticity
of the public key you received from multiple sources.
It looks like:
4398 3BAD 3EDC B35C 9B8F 2442 8CD4 2DFD 57F0 A909
This is my real key fingerprint, so if I send you my public key, you can use
the fingerprint from this page to check it matches the key you received!
You can obtain your fingerprint using the following command:
solene@t480 ~ $ gpg2 --fingerprint
pub rsa4096 2018-06-08 [SC]
4398 3BAD 3EDC B35C 9B8F 2442 8CD4 2DFD 57F0 A909
uid [ ultime ] XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sub rsa4096 2018-06-08 [E]
Add a new mail / identity
If for some reason you need to add another mail address to your GPG key (like
personal/work keys), you can create a new identity with the new mail address.
Type gpg2 --edit-key solene@domain.example
and then, in the prompt, type adduid
and answer the questions.
You can now export the public key with a different identity.
List known keys
If you want to get the list of keys you imported, you can use
gpg2 -k
Testing
If you want to do some tests, I'd recommend making new users on your system,
exchanging their keys and trying to encrypt a message from one user to another.
I have a few spare users on my system that I can ssh into locally for various
tests, it is always useful.
Earlier in August 2019 happened the BitreichCON 2019.
There were awesome talks during two days, and there are two I would
like to share. You can find all the information about this event at the
following address, using the Gopher protocol: gopher://bitreich.org/1/con/2019
BrCON talks happen through an audio stream, a ssh session for
viewing the current slide, and IRC for questions. I have the markdown
files producing the slides (1 title = 1 slide) and the audio recordings.
Simple solutions
This is a talk I made for this conference. It was about using simple
solutions for most problems. Simple solutions come with simple tools,
unix tools. I explain with real life examples, like how to retrieve my
blog articles' titles from the website using curl, grep, tr or awk.
Link to the
audio
Link to the
slides
Experiences with drist
Another talk, from Parazyd, about my deployment tool Drist, so I feel
obligated to share it with you.
In his talk he makes a comparison with slack (the debian package, not the
online community), explains his workflow with Drist and how it saves his
precious time.
Link to the
audio
Link to the
slides
If you want to know more about the bitreich community, check
gopher://bitreich.org or IRC #bitreich-en on Freenode
servers.
There is also the bitreich website, which
is a parody of the worst of what you can see on the web daily.
This blog post is about a nginx rtmp module for turning your nginx
server into a video streaming server.
The official website of the project is located on github at:
https://github.com/arut/nginx-rtmp-module/
I use it to stream video from my computer to my nginx server, then
viewers can use mpv rtmp://perso.pw/gaming
in order to view the
video stream. The nginx server will also relay to twitch for
more scalability (and some people prefer viewing there for various
reasons).
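For publishing, any RTMP-capable encoder works; as a rough video-only sketch
(the x11grab input, resolution and rates are assumptions to adapt to your
setup, not my exact command):
ffmpeg -f x11grab -r 20 -video_size 1280x720 -i :0.0 \
    -c:v libx264 -preset veryfast -b:v 3000k \
    -f flv rtmp://perso.pw/gaming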
The module comes installed with the nginx package since OpenBSD 6.6
(not released yet at the time of writing); there is no package to install the
rtmp module before 6.6.
On other operating systems, check for something like "nginx-rtmp" or
"rtmp" in an nginx context.
Install nginx on OpenBSD:
pkg_add nginx
Then, add the following to the file /etc/nginx/nginx.conf
load_module modules/ngx_rtmp_module.so;
rtmp {
    server {
        listen 1935;
        buflen 10s;

        application gaming {
            live on;

            allow publish 176.32.212.34;
            allow publish 175.3.194.6;
            deny publish all;
            allow play all;

            record all;
            record_path /htdocs/videos/;
            record_suffix %d-%b-%y_%Hh%M.flv;
        }
    }
}
The previous configuration sample is a simple example allowing
176.32.212.34 and 175.3.194.6 to stream through nginx; it will
record the videos under /htdocs/videos/ (nginx is chrooted in
/var/www).
You can add the following line in the “application” block to relay the
stream to your Twitch broadcasting server, using your API key.
push rtmp://live-ams.twitch.tv/app/YOUR_API_KEY;
I made simple scripts generating thumbnails of the videos and
an html index file.
Every 10 minutes, a cron job checks if files have to be generated,
makes thumbnails for the videos (trying at 05:30 into the video, then
at 00:03 if that fails, to handle very short videos) and then
creates the html.
The script checking for new stuff and starting html generation:
#!/bin/sh
cd /var/www/htdocs/videos

# for each video older than 1 minute, make a thumbnail if it is missing
for file in $(find . -mmin +1 -name '*.flv')
do
    echo $file
    PIC=$(echo $file | sed 's/flv$/jpg/')
    if [ ! -f "$PIC" ]
    then
        # try to grab a frame at 05:30
        ffmpeg -ss 00:05:30 -i "$file" -vframes 1 -q:v 2 "$PIC"
        if [ ! -f "$PIC" ]
        then
            # video shorter than 05:30, try at 00:03
            ffmpeg -ss 00:00:03 -i "$file" -vframes 1 -q:v 2 "$PIC"
            if [ ! -f "$PIC" ]
            then
                echo "problem with $file" | mail user@my-tld.com
            fi
        fi
    fi
done
cd ~/dev/videos/ && sh html.sh
This one makes the html:
#!/bin/sh
cd /var/www/htdocs/videos

PER_ROW=3
COUNT=0
INROW=0   # tracks whether a table row is currently open

cat << EOF > index.html
<html>
<body>
<h1>Replays</h1>
<table>
EOF

# one table cell per video, PER_ROW cells per row
for file in $(find . -mmin +3 -name '*.flv')
do
    if [ $COUNT -eq 0 ]
    then
        echo "<tr>" >> index.html
        INROW=1
    fi
    COUNT=$(( COUNT + 1 ))
    SIZE=$(ls -lh $file | awk '{ print $5 }')
    PIC=$(echo $file | sed 's/flv$/jpg/')
    echo $file
    echo "<td><a href=\"$file\"><img src=\"$PIC\" width=320 height=240 /><br />$file ($SIZE)</a></td>" >> index.html
    if [ $COUNT -eq $PER_ROW ]
    then
        echo "</tr>" >> index.html
        COUNT=0
        INROW=0
    fi
done

# close the last row if it was left open
if [ $INROW -eq 1 ]
then
    echo "</tr>" >> index.html
fi

cat << EOF >> index.html
</table>
</body>
</html>
EOF
Hello
As I use different markup languages on my blog, I would like a simpler
markup language that doesn't require an extra package. To do so, I wrote an
awk script handling titles, paragraphs and code blocks the same way markdown
does.
16 December 2019 UPDATE: adc sent me a patch to add ordered and unordered
lists. The code below contains the addition.
It is very easy to use, like: awk -f mmd file.mmd > output.html
The script is the following:
BEGIN {
    in_code=0
    in_list_unordered=0
    in_list_ordered=0
    in_paragraph=0
}
{
    # escape < > characters
    gsub(/</,"\\&lt;",$0);
    gsub(/>/,"\\&gt;",$0);

    # close code blocks
    if(! match($0,/^    /)) {
        if(in_code) {
            in_code=0
            printf "</code></pre>\n"
        }
    }

    # close unordered list
    if(! match($0,/^- /)) {
        if(in_list_unordered) {
            in_list_unordered=0
            printf "</ul>\n"
        }
    }

    # close ordered list
    if(! match($0,/^[0-9]+\. /)) {
        if(in_list_ordered) {
            in_list_ordered=0
            printf "</ol>\n"
        }
    }

    # display titles
    if(match($0,/^#/)) {
        if(match($0,/^(#+)/)) {
            printf "<h%i>%s</h%i>\n", RLENGTH, substr($0,index($0,$2)), RLENGTH
        }

    # display code blocks (lines starting with four spaces)
    } else if(match($0,/^    /)) {
        if(in_code==0) {
            in_code=1
            printf "<pre><code>"
            print substr($0,5)
        } else {
            print substr($0,5)
        }

    # display unordered lists
    } else if(match($0,/^- /)) {
        if(in_list_unordered==0) {
            in_list_unordered=1
            printf "<ul>\n"
            printf "<li>%s</li>\n", substr($0,3)
        } else {
            printf "<li>%s</li>\n", substr($0,3)
        }

    # display ordered lists
    } else if(match($0,/^[0-9]+\. /)) {
        n=index($0," ")+1
        if(in_list_ordered==0) {
            in_list_ordered=1
            printf "<ol>\n"
            printf "<li>%s</li>\n", substr($0,n)
        } else {
            printf "<li>%s</li>\n", substr($0,n)
        }

    # close p if current line is empty
    } else {
        if(length($0) == 0 && in_paragraph == 1 && in_code == 0) {
            in_paragraph=0
            printf "</p>"
        } # we are still in a paragraph
        if(length($0) != 0 && in_paragraph == 1) {
            print
        } # open a p tag if previous line is empty
        if(length(previous_line)==0 && in_paragraph==0) {
            in_paragraph=1
            printf "<p>%s\n", $0
        }
    }
    previous_line = $0
}
END {
    if(in_code==1) {
        printf "</code></pre>\n"
    }
    if(in_list_unordered==1) {
        printf "</ul>\n"
    }
    if(in_list_ordered==1) {
        printf "</ol>\n"
    }
    if(in_paragraph==1) {
        printf "</p>\n"
    }
}
Hello, for a long time I have wanted to work on a special project: using an
offline device and working on it.
I started using computers before my parents had an internet access and
I was enjoying it. Would it still be the case if I were using a laptop
with no internet access?
When I think about an offline laptop, I immediately think I will miss
IRC, mails, file synchronization, Mastodon and remote ssh to my servers.
But do I really need it _all the time_?
As I started thinking about preparing an old laptop for the experiment,
different ideas with their pros and cons came to my mind.
Over the years, I produced digital data and I cannot deny this. I
don't need all of it, but I still want some (some music, my texts,
some of my programs). How would I synchronize data from the offline
system to my main system (which has replicated backups and such)?
At first I was thinking about using a serial line between the two
laptops to synchronize files, but both laptops lack serial ports and
buying gear for that would cost too much for its purpose.
I ended up thinking that using an IP network _is fine_, if I connect for a
specific purpose. This extended a bit further, because I also need to
install packages, and using a USB memory stick from another computer
to fetch packages for the offline system is _tedious_
and ineffective (downloading packages with their correct dependencies is a
hard task on OpenBSD when you only want the files). I also
came across a really specific problem: my offline device is an old
Apple PowerPC laptop, which is big-endian, while amd64 is little-endian.
While this does not seem to be a problem, the OpenBSD FFS filesystem is
dependent on endianness, so I could not share a USB memory device
between the two using FFS; the alternatives are fat, ntfs or ext2, so it is a
dead end.
Finally, using the super slow wireless network adapter of that
offline laptop allows me to connect only when I need to, for a few file
transfers. I am using the system firewall pf to limit access to the outside.
In my pf.conf, I only have rules for DNS, NTP servers, my remote server,
OpenBSD mirror for packages and my other laptop on the lan. I only
enable wifi if I need to push an article to my blog or if I need to
pull a bit more music from my laptop.
This is not entirely _offline_ then, because I can get access to the
internet at any time, but it helps me keep the device offline.
There is no modern web browser on powerpc, I restricted packages to
the minimum.
So far, when using this laptop, there is no other distraction than the
stuff I do myself.
At the time I write this post, I only use xterm and tmux, with moc as a
music player (the audio system of the iBook G4 is surprisingly good!),
writing this text with ed and a 72-character-long prompt in order to wrap
words correctly by hand (I already talked about that trick!).
As my laptop has a short battery life, roughly two hours, this also
helps having "sessions" of a reasonable duration. (Yes, I can still
plug the laptop somewhere).
I have not used this laptop a lot so far, as I only started the experiment
a few days ago; I will write about it from time to time.
I plan to work on my gopher space to add new content only available
there :)
Hi,
I'm happy to announce the OpenBSD project now provides -stable binary
packages. This means that if you run the latest release (syspatch applied or
not), pkg_add -u will update packages to get security fixes.
Remember to restart services that may have been updated, to be sure to run new
binaries.
Link to official announcement
I said I would rewrite ttyplot examples to
make them work on OpenBSD.
Here they are, with a small notice first:
Examples using systat will only work for 10000 seconds; increase that
-d parameter for longer runs, or wrap the command in an infinite loop so it
restarts (but don't loop systat for one run at a time, it needs at least one
cycle before producing results).
The systat examples won't work before OpenBSD 6.6, which is not yet
released at the time I'm writing this, but they will work on -current after
20 july 2019. I made a change to systat so it flushes its output at every
cycle; it was not possible to parse its output in realtime before.
Enjoy!
Examples list
ping
Replace test.example by the host you want to ping.
ping test.example | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"
cpu usage
vmstat 1 | awk 'NR>2 { print 100-$(NF); fflush(); }' | ttyplot -t "Cpu usage" -s 100
disk io
systat -d 1000 -b iostat 1 | awk '/^sd0/ && NR > 20 { print $2/1024 ; print $3/1024 ; fflush }' | ttyplot -2 -t "Disk read/write in kB/s"
load average 1 minute
{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($8,0,length($8)-1) ; fflush }' | ttyplot -t "load average 1"
load average 5 minutes
{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($9,0,length($9)-1) ; fflush }' | ttyplot -t "load average 5"
load average 15 minutes
{ while :; do uptime ; sleep 1 ; done } | awk '{ print $10 ; fflush }' | ttyplot -t "load average 15"
wifi signal strengh
Replace iwm0 by your interface name.
{ while :; do ifconfig iwm0 | tr ' ' '\n' ; sleep 1 ; done } | awk '/%$/ { print ; fflush }' | ttyplot -t "Wifi strength in %" -s 100
cpu temperature
{ while :; do sysctl -n hw.sensors.cpu0.temp0 ; sleep 1 ; done } | awk '{ print $1 ; fflush }' | ttyplot -t "CPU temperature in °C"
pf state searches rate
systat -d 10000 -b pf 1 | awk '/state searches/ { print $4 ; fflush }' | ttyplot -t "PF state searches per second"
pf state insertions rate
systat -d 10000 -b pf 1 | awk '/state inserts/ { print $4 ; fflush }' | ttyplot -t "PF state searches per second"
network bandwidth
Replace trunk0 by your interface.
This is the same command as in my previous article.
netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"
Tip
You can easily use those examples over ssh for gathering data, and leave the
plot locally as in the following example:
ssh remote_server "netstat -b -w 1 -I trunk0" | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"
or
ssh remote_server "ping test.example" | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"
If for some reason you want to visualize your bandwidth traffic on an
interface (in or out) in a terminal with a nice graph, here is a small script
to do so, involving ttyplot, a nice piece of software drawing graphs in a
terminal. The following works on OpenBSD.
You can install ttyplot with pkg_add ttyplot
as root; the ttyplot package has been available
since OpenBSD 6.5.
For Linux, the ttyplot official website
contains tons of examples.
Example
Output example while updating my packages:
IN Bandwidth in KB/s
↑ 1499.2 KB/s#
│ #
│ #
│ #
│ ##
│ ##
│ 1124.4 KB/s##
│ ##
│ ##
│ ##
│ ##
│ ##
│ 749.6 KB/s ##
│ ##
│ ##
│ ## #
│ ## # # # # ##
│ ## # ### # ## # # # ## ## # # ##
│ 374.8 KB/s ## ## #### # # ## # # ### ## ## ### # ## ### # # # # ## # ##
│ ## ### ##### ########## ############# ### # ## ### ##### #### ## ## ###### ## ##
│ ## ### ##### ########## ############# ### #### ### ##### #### ## ## ## ###### ## ###
│ ## ### ##### ########## ############## ### #### ### ##### #### ## ## ######### ## ####
│ ## ### ##### ############################## ######### ##### #### ## ## ############ ####
│ ## ### #################################################### #### ## #####################
│ ## ### #################################################### #############################
└────────────────────────────────────────────────────────────────────────────────────────────────────→
# last=422.0 min=1.3 max=1499.2 avg=352.8 KB/s Fri Jul 19 08:30:25 2019
github.com/tenox7/ttyplot 1.4
In the following command, we will use trunk0 with INBOUND traffic as the
interface to monitor.
At the end of the article, there is a command for displaying both in and out
at the same time, and also instructions for customizing it to your needs.
Article update: the following command is extremely long and complicated; at
the end of the article you will find a shorter and more efficient version,
removing most of the awk code.
You can copy/paste this command in your OpenBSD system shell, this will produce
a graph of trunk0 inbound traffic.
{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0) { print ($5-old)/1024 ; fflush ; old = $5 } if(old==-1) { old=$5 } }' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"
The script runs an infinite loop doing netstat -ibn
every second and
sends that output to awk.
You can quit it with Ctrl+C.
Explanations
Netstat output contains the total bytes (in or out) since the system has
started, so awk needs to remember the last value and display the difference
between two outputs, skipping the first value because it would make a huge
spike (the total network traffic transferred since boot time).
If I decompose the awk script, it is a lot more readable.
Awk is very readable if you take care to format it properly, as with any
source code!
#!/bin/sh
{ while :;
  do
    netstat -i -b -n
    sleep 1
  done
} | awk '
BEGIN {
    old=-1
}
/^trunk0/ {
    if(!index($4,":") && old>=0) {
        print ($5-old)/1024
        fflush
        old = $5
    }
    if(old==-1) {
        old = $5
    }
}' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"
Customization
- replace trunk0 by your interface name
- replace both instances of $5 by $6 for OUT traffic
- replace /1024 by /1048576 for MB/s values
- remove /1024 for B/s values
- replace 1 in sleep 1 by another value if you want to have the value every
n seconds
IN/OUT version for both data on the same graph + simpler
Thanks to leot on IRC, netstat can be used in a much more efficient way, removing all the awk parsing!
ttyplot supports having two graphs at the same time, one being in opposite color.
netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"
Introduction
If you ever wanted to make a twitch stream from your OpenBSD system, this is
now possible, thanks to OpenBSD developer thfr@ who made a wrapper named
fauxstream using ffmpeg with relevant parameters.
The setup is quite easy, it only requires a few steps and finding two pieces
of information on the Twitch website; to ease the process, I found the links
for you.
You will need to make an account on twitch and get your api key (a long
string of characters), which should stay secret because it allows anyone
having it to stream on your account.
Preparation steps
- Register / connect on twitch
- Get your Stream API key at
https://www.twitch.tv/YOUR_USERNAME/dashboard/settings (from this page you
can also choose if twitch should automatically saves streams as videos for
14 days)
- Choose your nearest server from this page
- Add in your shell environment a variable TWITCH=rtmp://SERVER_FROM_STEP_3/YOUR_API_KEY
- Get fauxstream with
cvs -d anoncvs@anoncvs.thfr.info:/cvs checkout -P projects/fauxstream/
chmod u+x fauxstream/fauxstream
- Allow recording of the microphone
- Allow recording of the output sound
Once you have all the pieces, start a new shell and check that the $TWITCH
variable is correctly set; it should look like
rtmp://live-ams.twitch.tv/app/live_2738723987238_jiozjeoizaeiazheizahezah
(this is not a real api key).
Using fauxstream
The fauxstream script comes with a README.md file containing some useful
information; you can also check the usage:
View usage:
$ ./fauxstream
Starting a stream
When you start a stream, take care your API key isn’t displayed on the
stream! I redirect stderr to /dev/null so all the output containing the
key is not displayed.
Here is the settings I use to stream:
$ ./fauxstream -m -vmic 5.0 -vmon 0.2 -r 1920x1080 -f 20 -b 4000 $TWITCH 2> /dev/null
If you choose a smaller resolution than your screen, imagine a rectangle of
that resolution starting at the top left corner of your screen: the content
of this rectangle is what will be streamed.
I recommend the bwm-ng package (I wrote a ports of the week article about it)
to view your realtime bandwidth usage. If you see the bandwidth reach a fixed
number, this means you have reached your bandwidth limit and the stream is
certainly not working correctly; you should lower the resolution, fps or
bitrate. I recommend doing a few tries before the stream that matters, to be
sure it's ok.
Note that the flag -a
may be required in case of audio/video
desynchronization; there is no magic value, so you should guess and try.
Adding webcam
I found an easy trick to add webcam on top of a video game.
$ mpv --no-config --video-sync=display-vdrop --framedrop=vo --ontop av://v4l2:/dev/video1
The trick is to use mpv to display your webcam video on your screen and use
the flag to make it stay on top of any other window (this won't work with the
cwm(1) window manager). Then you can resize it and place it where you want.
What you see is what gets streamed.
The other mpv flags are there to reduce the lag between the webcam stream and
the display: mpv slowly accumulates a delay, and after 10 minutes your webcam
would lag by about 10 seconds, totally out of sync between the action
and your face.
Don’t forget to use chown to change the ownership of your video device to your
user, by default only root has access to video devices. This is reset upon
reboot.
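For example, assuming your webcam is /dev/video1 and your user is solene
(adapt both):
# chown solene /dev/video1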
Viewing a stream
For less overhead, people can watch a stream using the mpv
software; I think this
also requires the youtube-dl
package.
Example to view me streaming:
$ mpv https://www.twitch.tv/seriphyde
This would also work with a recorded video:
$ mpv https://www.twitch.tv/videos/447271018
Hello,
I HATE Discord.
Discord users keep talking about their so-called discord server, which is
not dedicated to them at all. And Discord has very bad audio quality and a
lot of voice distortion.
Why not run your very own mumble server, with high voice quality, low
latency and respect for your privacy? This is very easy to set up on OpenBSD!
Mumble is an open source VoIP solution. The client is named Mumble (available
on various operating systems, including Android), the server part is murmur,
but there is also a lightweight server named umurmur. Authentication is done
through a certificate generated locally and automatically accepted by the
server, and the certificate gets associated with a nickname. Nobody can pick
the same nickname as another person unless they present the same certificate.
How to install?
# pkg_add umurmur
# rcctl enable umurmurd
# cp /usr/local/share/examples/umurmur/umurmur.conf /etc/umurmur/
We can start it as is, but you may want to tweak the configuration file to
add a password to your server, set an admin password, create static channels,
change ports, etc.
You may want to increase the max_bandwidth
value to improve audio quality,
or choose the right value to fit your bandwidth. Using umurmur on a DSL line
is fine for up to 1 or 2 remote people. The daemon uses very little CPU and
very little memory; umurmur is meant to be used on a router!
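As an illustration, these are the kind of lines to tweak in
/etc/umurmur/umurmur.conf (the values here are assumptions; check the sample
file for the exact key names):
max_bandwidth = 48000;    # trade-off between audio quality and bandwidth
password = "hackme";      # password required to join the server
bindport = 64738;         # default port, TCP and UDP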
# rcctl start umurmurd
If you have a restrictive firewall (I hope so), you will have to open the ports
TCP and UDP 64738.
How to connect to it?
The client is named Mumble and is packaged under OpenBSD, we need to install it:
# pkg_add mumble
The first time you run it, you will get a configuration wizard that takes
only a couple of minutes.
Don't forget to set the sysctl kern.audio.record to 1 to enable audio
recording, as OpenBSD disabled audio input by default a few releases ago.
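A quick sketch of the commands (as root):
# sysctl kern.audio.record=1
# echo kern.audio.record=1 >> /etc/sysctl.conf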
You will be able to choose a push-to-talk mode or voice level to activate and
quality level.
Once the configuration wizard is done, you will have another wizard for
generating the certificate. I recommend choosing “Automatically create a
certificate”, then validate and it’s done.
You will be prompted for a server: click on "Add new", enter the server name
so you can recognize it easily, type its hostname / IP, its port and your
nickname, and click OK.
Congratulations, you are now using your own private VOIP server, for real!
I write this blog post because I spent too much time setting up nginx and
SSL on OpenBSD with acme-client, due to nginx being chrooted and the
acme-challenge path not being stripped easily.
First, you need to set up /etc/acme-client.conf correctly. Here is
mine for the domain ports.perso.pw:
authority letsencrypt {
    api url "https://acme-v02.api.letsencrypt.org/directory"
    account key "/etc/acme/letsencrypt-privkey.pem"
}

domain ports.perso.pw {
    domain key "/etc/ssl/private/ports.key"
    domain full chain certificate "/etc/ssl/ports.fullchain.pem"
    sign with letsencrypt
}
This example is for OpenBSD 6.6 (which is current as I write this)
because of the Let's Encrypt API URL. If you are running 6.5 or 6.4,
replace v02 by v01 in the api url.
Then, you have to configure nginx this way; the most important part in
the following configuration file is the location block handling the
acme-challenge requests. Remember that nginx is chrooted in /var/www, so
the path to the acme directory is acme.
http {
    include mime.types;
    default_type application/octet-stream;
    index index.html index.htm;
    keepalive_timeout 65;
    server_tokens off;

    upstream backendurl {
        server unix:tmp/plackup.sock;
    }

    server {
        listen 80;
        server_name ports.perso.pw;

        access_log logs/access.log;
        error_log logs/error.log info;

        root /htdocs/;

        location /.well-known/acme-challenge/ {
            rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
            root /acme;
        }

        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name ports.perso.pw;

        access_log logs/access.log;
        error_log logs/error.log info;

        root /htdocs/;

        ssl_certificate /etc/ssl/ports.fullchain.pem;
        ssl_certificate_key /etc/ssl/private/ports.key;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        [... stuff removed ...]
    }
}
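With both files in place, the certificate can be requested, and later
renewed, with acme-client, then nginx reloaded to pick up the new
certificate (as root):
# acme-client -v ports.perso.pw
# rcctl reload nginx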
That's all! I wish I could have found that on the Internet, so I share
it here.
This blog post is an update (OpenBSD 6.5 at the time of writing) of this very
same article I published in June 2018. Due to rtadvd being replaced by rad,
the old text was not accurate anymore.
I subscribed to a VPN service from the french association Grifon (Grifon
website[FR]) to get IPv6 access to the world and play
with IPv6. I will not talk about the VPN service, it would be pointless.
I now have an IPv6 prefix of 48 bits, which can theoretically hold 2^80
addresses. I would like my computer connected through the VPN to let other
computers in my network have IPv6 connectivity.
On OpenBSD, this is very easy to do. If you want to provide IPv6 to Windows
devices on your network, you will need one more step (explained at the end).
In my setup, I have a tun0 device which has the IPv6 access and re0 which is my
LAN network.
First, configure IPv6 on your lan:
# ifconfig re0 inet6 autoconf
that's all. You can add a new "inet6 autoconf" line to your file
/etc/hostname.if
to get it at boot.
Now, we have to allow IPv6 to be routed through the different
interfaces of the router.
# sysctl net.inet6.ip6.forwarding=1
This change can be made persistent across reboot by adding
net.inet6.ip6.forwarding=1
to the file /etc/sysctl.conf
.
Automatic addressing
Now we have to configure the daemon rad to advertise the we are routing,
devices on the network should be able to get an IPv6 address from its
advertisement.
The minimal configuration of /etc/rad.conf is the following:
interface re0 {
    prefix 2a00:5414:7311::/48
}
In this configuration file we only define the available prefix, which is
equivalent to a dhcp address range. Other attributes could provide DNS
servers to use, for example; see the rad.conf man page.
Then enable the service at boot and start it:
# rcctl enable rad
# rcctl start rad
Tweaking resolv.conf
By default OpenBSD will ask for IPv4 when resolving a hostname (see
resolv.conf(5) for more explanations). So, you will never have IPv6
traffic until you use software which explicitly requests an IPv6
connection, or the hostname is only defined with an AAAA field.
# echo "family inet6 inet4" >> /etc/resolv.conf.tail
The file resolv.conf.tail is appended at the end of resolv.conf
when dhclient modifies the file resolv.conf.
Microsoft Windows
If you have Windows systems on your network, they won't get addresses
from rad. You will need to deploy a dhcpv6 daemon.
The configuration file for what we want to achieve here is pretty
simple: it consists of telling which range we want to allow on DHCPv6,
and a DNS server. Create the file /etc/dhcp6s.conf:
interface re0 {
    address-pool pool1 3600;
};

pool pool1 {
    range 2a00:5414:7311:1111::1000 to 2a00:5414:7311:1111::4000;
};

option domain-name-servers 2001:db8::35;
Note that I added "1111" into the range because it should not be on the
same network as the router. You can replace 1111 by what you want, even CAFE
or 1337 if you want to bring some fun to network engineers.
Now, you have to install and configure the service:
# pkg_add wide-dhcpv6
# touch /etc/dhcp6sctlkey
# chmod 400 /etc/dhcp6sctlkey
# echo SOME_RANDOM_CHARACTERS | openssl enc -base64 > /etc/dhcp6sctlkey
# echo "dhcp6s -c /etc/dhcp6s.conf re0" >> /etc/rc.local
The OpenBSD package wide-dhcpv6 doesn't provide an rc file to
start/stop the service, so it must be started from a command line; a
way to do it is to put the command in /etc/rc.local,
which is run at
boot.
The openssl command is needed for dhcpv6 to start, as it requires a
base64 string as a secret key in the file /etc/dhcp6sctlkey.
I am happy to announce there is now an RSS feed to get news when new
packages are available on my repository
https://stable.perso.pw/
The file is available at https://stable.perso.pw/rss.xml.
I take the opportunity of this blog post to explain how the file is generated,
as I did not find an easy tool for this task, so I ended up doing it myself.
I chose to use XSLT, which is not quite common. Briefly, XSLT allows
applying some kind of XML template to an XML data file; this allows loops,
filtering etc… It requires only two parts: the template and the data.
Simple RSS template
The following file is a template for my RSS file; we can see a few tags
starting with xsl,
like xsl:for-each
or xsl:value-of.
It's interesting to note that xsl:for-each
can use a condition like
position() < 10
in order to limit the loop to the first 10 items.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
      <channel>
        <description></description>

        <!-- BEGIN CONFIGURATION -->
        <title>OpenBSD unofficial stable packages repository</title>
        <link>https://stable.perso.pw/</link>
        <atom:link href="https://stable.perso.pw/rss.xml" rel="self" type="application/rss+xml" />
        <!-- END CONFIGURATION -->

        <!-- Generating items -->
        <xsl:for-each select="feed/news[position()&lt;10]">
          <item>
            <title>
              <xsl:value-of select="title"/>
            </title>
            <description>
              <xsl:value-of select="description"/>
            </description>
            <pubDate>
              <xsl:value-of select="date"/>
            </pubDate>
          </item>
        </xsl:for-each>
      </channel>
    </rss>
  </xsl:template>
</xsl:stylesheet>
Simple data file
Now, we need some data to use with the template.
I’ve added a comment block so I can copy / paste it to add a new entry into the
RSS easily. As the date is in a painful format to write for a human, I added to
my Makefile starting the commands a call to a script replacing the string DATE
by the current date with the correct format.
<feed>
  <news>
    <title>www/mozilla-firefox</title>
    <description>Firefox 67.0.1</description>
    <date>Wed, 05 Jun 2019 06:00:00 GMT</date>
  </news>

  <!-- copy paste for a new item
  <news>
    <title></title>
    <description></description>
    <date></date>
  </news>
  -->
</feed>
Makefile
I love makefiles, so I share it even if this one is really short.
all:
	sh replace_date.sh
	xsltproc template.xml news.xml | xmllint -format - | tee rss.xml
	scp rss.xml perso.pw:/home/stable/

clean:
	rm rss.xml
When I want to add an entry, I copy/paste the comment block in news.xml, add
DATE, run make and it’s uploaded :)
The command xsltproc is available from the package libxslt on OpenBSD.
And then, after writing this, I realise that manually editing the result file
rss.xml is as much work as editing the news.xml file and then processing it
with XSLT… But I keep that blog post as this can be useful for more
complicated cases. :)
While writing a script to backup a remote database, I did not know how to
handle a ssh tunnel inside a script correctly/easily. A quick internet search
pointed out this link to me:
https://gist.github.com/scy/6781836
I’m not a huge fan of the ControlMaster solution, which consists of starting a
ssh connection with ControlMaster activated, telling ssh to close it at the
end, and not forgetting to put a timeout on the socket, otherwise it won’t
close if you interrupt the script.
But I really enjoyed a neat solution which is valid for most of the cases:
$ ssh -f -L 5432:localhost:5432 user@host "sleep 5" && pg_dumpall -p 5432 -h localhost > file.sql
This will create a ssh connection and send it to the background because of the
-f flag, but it will close itself after the given command is run, sleep 5 in
this case. As we chain it immediately to a command using the tunnel, ssh will
only stop when the tunnel is not used anymore, keeping it alive only for the
time required by the pg_dumpall command, not more. If we interrupt the script,
I’m not sure whether ssh stops immediately or only after the sleep command
completes, but in both cases ssh will stop correctly. There is no need to use
a long sleep value because, as I said previously, the tunnel will stay up
until nothing uses it.
You should note that the ControlMaster way is the only reliable way if you need
to use the ssh tunnel for multiples commands inside the script.
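For that multi-command case, a minimal sketch of the ControlMaster approach could look like this; the socket path and the commands are only examples:
#!/bin/sh
# open a background master connection holding the tunnel,
# identified by a control socket
ssh -f -N -M -S /tmp/backup.sock -L 5432:localhost:5432 user@host
# run as many commands as needed through the tunnel
pg_dumpall -p 5432 -h localhost > file.sql
psql -p 5432 -h localhost -c "SELECT 1" postgres
# explicitly close the master connection, tearing down the tunnel
ssh -S /tmp/backup.sock -O exit user@host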
I previously wrote about Kermit for fetching remote files using a kermit
script. I found that it’s possible to achieve the same with a single kermit
command, without requiring a script file.
Given I want to download files from my remote server from the path
/home/mirror/pub, and that I’ve set up a kermit server on the other side
using inetd:
File /etc/inetd.conf:
7878 stream tcp nowait solene /usr/local/bin/kermit-sshsub kermit-sshsub
I can make a ssh tunnel to it to reach it locally on port 7878 to download my files.
kermit -I -j localhost:7878 -C "remote cd /home/mirror/pub","reget /recursive .",close,EXIT
Some flags can be added to make it even faster, like -v 31 -e 9042. I insist
on kermit because it’s super reliable and there are no security issues when it
runs behind a firewall and is accessed through ssh.
Fetching files can be stopped at any time, it supports very poor connections
too, it’s really reliable. You can also skip files, because sometimes you need
some file first and you don’t want to modify your script to fetch a specific
file (this only works if you don’t have too many files to get, of course,
because you can skip them only one by one).
This article explains how to set up a simple samba server to have a CIFS /
Windows shared folder accessible by everyone. This is useful in some cases,
but samba configuration is not straightforward when you need it for a one
shot or for this particular case.
The important point covered here is that no user is needed. The trick comes
from the map to guest = Bad User configuration line in the [global] section.
This option will automatically map an unknown user, or no provided user, to
the guest account.
Here is a simple /etc/samba/smb.conf file to share /home/samba with
everyone; except for map to guest and the shared folder, it’s the stock file
with comments removed.
[global]
workgroup = WORKGROUP
server string = Samba Server
server role = standalone server
log file = /var/log/samba/smbd.%m
max log size = 50
dns proxy = no
map to guest = Bad User
[myfolder]
browseable = yes
path = /home/samba
writable = yes
guest ok = yes
public = yes
If you want to set up this on OpenBSD, it’s really easy:
# pkg_add samba
# rcctl enable smbd nmbd
# vi /etc/samba/smb.conf (you can use previous config)
# mkdir -p /home/samba
# chown nobody:nobody /home/samba
# rcctl start smbd nmbd
And you are done.
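To quickly check that guest access works from another machine, the smbclient tool (shipped with the samba package) can be used; a sketch, with “server” standing for your server name or IP:
$ smbclient -N -L //server
$ smbclient -N //server/myfolder
The -N flag skips the password prompt, which matches the guest setup above.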
I switched from a homemade script using mblaze to neomutt (after having used mutt, alpine and mu4e) and it’s difficult to remember everything. So, let’s do a cheatsheet!
- Mark as read: Ctrl+R
- Mark to delete: d
- Execute deletion: $
- Tag a mail: t
- Operation on tagged mails: ;[OP] with OP being the key for that operation.
- Move a mail: s (for save, which is a copy + delete)
- Save a mail: c (for copy)
Delete mails based on date
- use T to enter a date range, in the format [before]-[after] with before/after being in DD/MM/YYYY format (YYYY is optional)
- ~d 24/04- to mark mails after 24/04 of this year
- ~d -24/04 to mark mails before 24/04 of this year
- ~d 24/04-25/04 to mark mails between 24/04 and 25/04 (inclusive)
- ;d to tell neomutt we want to delete marked mails
- $ to make deletion happen
I use ssh tunneling A LOT, for everything. Yesterday, I removed the
public access of my IMAP server, it’s now only available through ssh
tunneling to access the daemon listening on localhost. I have plenty
of daemons listening only on localhost that I can only reach through a
ssh tunnel. If you don’t want to bother with ssh and redirecting the ports
you need, you can also make a VPN (using ssh, openvpn, iked, tinc…)
between your system and your server. I tend to avoid setting up a VPN for
this use case as it requires more work and more maintenance than
running an ssh server and an ssh client.
The last change, for my IMAP server, added an issue. I want my phone
to access the IMAP server but I don’t want to connect to my main
account from my phone for security reasons. So, I need a dedicated
user that will only be allowed to forward ports.
This is done very easily on OpenBSD.
The steps are:
1. generate ssh keys for the new user
2. add an user with no password
3. allow public key for port forwarding
Obviously, you must allow users (or only this one) to make port forwarding in
your sshd_config.
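As a sketch, the sshd_config part could be a Match block like the following; the exact policy is up to you and these directives are only an example:
# in /etc/ssh/sshd_config
Match User tunnel
    AllowTcpForwarding yes
    X11Forwarding no
    PermitTTY no
Reload sshd after the change so it takes effect.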
Generating ssh keys
Please generate the keys in a safe place, using ssh-keygen:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:SOMETHINGSOMETHINSOMETHINSOMETHINSOMETHING user@myhost
The key's randomart image is:
+---[RSA 3072]----+
| |
| ** |
| * ** . |
| * * |
| **** * |
| **** |
| |
| |
| |
+----[SHA256]-----+
This will create your public key in ~/.ssh/id_rsa.pub and the private key in
~/.ssh/id_rsa
Adding a user
On OpenBSD, we will create a user named tunnel, this is done with the
following command as root:
# useradd -m tunnel
This user has no password and can’t log in over ssh.
Allow the public key to port forward only
We will use the command restriction in the authorized_keys file to
allow the previously generated key to only forward.
Edit /home/tunnel/.ssh/authorized_keys as follows:
command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE
This will print “Tunnel only!” and abort the connection if the user connects
with a shell or a command.
Connect using ssh
You can connect with ssh(1) as usual but you
will require the flag -N to not start a shell on the remote server.
$ ssh -N -L 10000:localhost:993 tunnel@host
If you want the tunnel to stay up in the most automated way possible, you can
use autossh from ports, which will do a great job at keeping ssh up.
$ autossh -M 0 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "TCPKeepAlive yes" -N -v -L 9993:localhost:993 tunnel@host
This command will start autossh and restart it if forwarding doesn’t work,
which is likely to happen when you lose connectivity; it takes some time for
the remote server to effectively disable the forwarding. It will make
keep-alive checks so the tunnel stays up and is ensured to be up (this is
particularly useful on wireless connections like 4G/LTE).
The other flags are plain ssh parameters: to not start a shell, and to make
a local forwarding. Don’t forget that as a regular user, you can’t bind
ports lower than 1024, that’s why I redirect the remote port 993 to the local
port 9993 in the example.
Making the tunnel on Android
If you want to access your personal services from your Android phone, you can
use ConnectBot ssh client. It’s really easy:
- upload your private key to the phone
- add it in ConnectBot from the main menu
- create a new connection with the user and your remote host
- choose to use public key authentication and choose the registered key
- uncheck “start a shell session” (this is equivalent to -N ssh flag)
- from the main menu, long touch the connection and edit the forwarded ports
Enjoy!
The following guide is a real world example of drist usage. We will
create a script to deploy munin-node on OpenBSD systems.
We need to create a script that will install the munin-node package and
also configure it using the default proposal. This is done easily
using the script file.
#!/bin/sh
# checking munin not installed
pkg_info | grep munin-node
if [ $? -ne 0 ]; then
pkg_add munin-node
munin-node-configure --suggest --shell | sh
rcctl enable munin_node
fi
rcctl restart munin_node
The script contains some simple logic to prevent trying to install
munin-node each time we run it, and also to prevent re-configuring it
automatically every time. This is done by checking if the pkg_info output
contains munin-node.
We also need to provide a munin-node.conf file to allow our munin
server to reach the nodes. For this how-to, I’ll dump the
configuration in the commands using cat, but of course, you can use
your favorite editor to create the file, or copy an original
munin-node.conf file and edit it to suit your needs.
mkdir -p files/etc/munin/
cat <<EOF > files/etc/munin/munin-node.conf
log_level 4
log_file /var/log/munin/munin-node.log
pid_file /var/run/munin/munin-node.pid
background 1
setsid 1
user root
group wheel
ignore_file [\#~]$
ignore_file DEADJOE$
ignore_file \.bak$
ignore_file %$
ignore_file \.dpkg-(tmp|new|old|dist)$
ignore_file \.rpm(save|new)$
ignore_file \.pod$
allow ^127\.0\.0\.1$
allow ^192\.168\.1\.100$
allow ^::1$
host *
port 4949
EOF
Now, we only need to use drist on the remote host:
drist root@myserver
The latest version of drist now also supports privilege escalation using
doas instead of connecting as root over ssh:
drist -s -e doas user@myserver
Thanks to hard work from thfr@, it is now possible to play the commercial game **Slay The Spire** on OpenBSD.
Small introduction to the game: it's a solo deck building game where you need to climb a tower. Each floor may contain enemies, a treasure, a merchant, an elite (harder enemies) or an event.
There are four playable characters, each unlocked after playing with the previous one. The game is really easy to understand; every game (or run) restarts from the beginning with your character, and at every new floor you may earn items and cards to build a deck for this run.
When you die, you can unlock some new items per character and unlock cards for next runs. The goal is to reach the top of the tower. Each character is really different to play and each allows a few obvious deck builds.
The game requires OpenBSD 6.5 at minimum, but this method using libgdx works since OpenBSD 6.9. For this you will need:
1. Buy Slay The Spire on GOG or Steam
2. Copy files from a Slay The Spire installation (Windows or Linux) to your OpenBSD system or unzip the linux installer .sh file
3. Install some packages with pkg_add: openal jdk-11 lwjgl libgdx
4. Search for the .jar file (biggest file), then run libgdx-setup to extract data from the jar file and prepare the game.
5. Run the game with libgdx-run
6. Don't forget to eat, hydrate yourself and sleep. This game is time consuming :)
All settings and saves are stored in the game folder, so you may want to back it up if you don't want to lose your progression.
Again, thanks to thfr@ for his huge work on making games working on OpenBSD!
This article explains how to use haproxy to add a TLS layer to any TCP
protocol. This includes http or gopher. The following example shows
the minimal setup required to make it work; haproxy has a lot
of options and I won’t use them.
The idea is to let haproxy manage the TLS part and let your http server
(or any daemon listening on TCP) reply within the wrapped connection.
You need a simple haproxy.cfg which can look like this:
defaults
mode tcp
timeout client 50s
timeout server 50s
timeout connect 50s
frontend haproxy
bind *:7000 ssl crt /etc/ssl/certificat.pem
default_backend gopher
backend gopher
server gopher 127.0.0.1:7070 check
The idea is that haproxy waits on port 7000 and will use the file
/etc/ssl/certificat.pem as a certificate, forwarding requests to the
backend on 127.0.0.1:7070. That is ALL. If you want to do https, you need
to listen on port 443 and forward to your port 80.
The PEM file is made from the privkey concatenated with the fullchain
certificate. If you use a self signed certificate, you can make it with the
following command:
cat secret.key certificate.crt > cert.pem
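If you don’t have a key and certificate yet, a self-signed pair can be generated with openssl first; a sketch, with example.com standing in for your hostname:
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=example.com" -keyout secret.key -out certificate.crt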
One can use a folder with PEM certificate files inside, instead of a single
file. This allows haproxy to receive connections for ALL the loaded
certificates.
For more security, I recommend using the chroot feature and a dh file but it’s
out of the current topic.
Hi,
In this article I will explain how to setup a gopher server supporting
TLS. Gopher TLS support is not “official” as there is currently no RFC
to define it. The community recently chose a way to make it work while
keeping compatibility with old servers / clients.
The way to do it is really simple.
Client A tries to connect to Server B, attempting a TLS handshake. If
Server B answers correctly to the TLS handshake, then Client A sends the
gopher request and Server B answers the gopher request. If Server B
doesn’t understand the TLS handshake, it will probably output a regular
gopher page, which is then thrown away, and Client A retries the
connection using plaintext gopher, so Server B answers the gopher request.
This is easy to achieve because gopher protocol doesn’t require the
server to send anything to the client before the client sends its
request.
The way to add the TLS layer and the dispatching can be achieved using
sslh and relayd. You could use haproxy instead of relayd, but
the latter is in OpenBSD base system so I will use it. Thanks parazyd
for sharing about sslh for this use case.
sslh is a protocol demultiplexer: it listens on a port and, depending on
what it receives, it will try to guess the protocol used by the client
and send the connection to the matching backend. Its first purpose was
to make ssh available on port 443 while still having the https daemon
working on that server.
Here is a schema of the setup
+→ relayd for TLS + forwarding
↑ ↓
↑ tls? ↓
client -> sslh TCP 70 → + ↓
↓ not tls ↓
↓ ↓
+→ → → → → → → gopher daemon on localhost
This method allows wrapping any server to make it TLS compatible. The
best case would be to have TLS compatible servers which do all the
work without requiring sslh and something to add the TLS, but it’s
currently a way to show that TLS for gopher is real.
Relayd
The relayd(1) part is easy: you first need an x509 certificate for the
TLS part. I will not explain here how to get one, there are already
plenty of how-tos, and one can use Let’s Encrypt with acme-client(1) to
get one on OpenBSD.
We will write our configuration in /etc/relayd.conf
log connection
relay "gopher" {
listen on 127.0.0.1 port 7000 tls
forward to 127.0.0.1 port 7070
}
In this example, relayd listens on port 7000 and our gopher daemon
listens on port 7070. According to relayd.conf(5), relayd will look
for the certificate at the following places:
/etc/ssl/private/$LISTEN_ADDRESS:$PORT.key and
/etc/ssl/$LISTEN_ADDRESS:$PORT.crt. With the current example you
will need the files /etc/ssl/private/127.0.0.1:7000.key and
/etc/ssl/127.0.0.1:7000.crt
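For example, if the certificate was obtained with acme-client, putting it in place could look like this; the source paths are hypothetical and depend on your acme-client.conf:
# cp /etc/ssl/private/example.com.key /etc/ssl/private/127.0.0.1:7000.key
# cp /etc/ssl/example.com.crt /etc/ssl/127.0.0.1:7000.crt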
relayd can be enabled and started using rcctl:
# rcctl enable relayd
# rcctl start relayd
Gopher daemon
Choose your favorite gopher daemon, I recommend geomyidae but any
other valid daemon will work; just make it listen on the correct
address and port combination.
# pkg_add geomyidae
# rcctl enable geomyidae
# rcctl set geomyidae flags -p 7070
# rcctl start geomyidae
SSLH
We will use sslh_fork (but sslh_select would be valid too, they have
different pros/cons). The --tls parameter tells where to forward a
TLS connection while --ssh will forward to the gopher daemon. This
works because the ssh protocol is already configured within sslh and
acts exactly like a gopher daemon: the client doesn’t expect the
server to be the first to send data.
# pkg_add sslh
# rcctl enable sslh_fork
# rcctl set sslh_fork flags --tls 127.0.0.1:7000 --ssh 127.0.0.1:7070 -p 0.0.0.0:70
# rcctl start sslh_fork
Client
You can easily test if this works using openssl to connect by hand to the port 70
$ openssl s_client -connect 127.0.0.1:70
You should see a lot of output, which is the TLS handshake, then you
can send a gopher request like “/” and you should get a result. Using
telnet on the same address and port should give the same result.
My gopher client clic already supports gopher TLS and is available
at git://bitreich.org/clic; it only requires the ecl Common Lisp
implementation to compile.
This is the second article of the series about iSCSI. In this one, you will
learn how to connect to an iSCSI target using the OpenBSD base daemon iscsid.
The configuration file of iscsid doesn’t exist by default; its location is
/etc/iscsi.conf. It can be easily written using the following:
target1="100.64.2.3"
myaddress="100.64.2.2"
target "disk1" {
initiatoraddr $myaddress
targetaddr $target1
targetname "iqn.1994-04.org.netbsd.iscsi-target:target0"
}
While most lines are really obvious, it is mandatory to have the line
initiatoraddr, many thanks to cwen@ for pointing this out when I was stuck on
it.
The targetname value will depend on the iSCSI target server. If you use
netbsd-iscsi-target, then you only need to care about the last part, aka
target0, and replace it with the name of your target (which is target0 for
the default one).
Then we can enable the daemon and start it:
# rcctl enable iscsid
# rcctl start iscsid
In your dmesg, you should see a line like:
sd4 at scsibus0 targ 1 lun 0: <NetBSD, NetBSD iSCSI, 0> SCSI3 0/direct fixed t10.NetBSD_0x5c6cf1b69fc3b38a
If you use netbsd-iscsi-target, the whole line should be identical except for
the sd4 part, which can change depending on your hardware.
If you don’t see it, you may need to reload the iscsid configuration file with
iscsictl reload.
Warning: iSCSI is a bit of a pain to debug; if it doesn’t work, double check
the IPs in /etc/iscsi.conf, and check your PF rules on the initiator and the
target. You should at least be able to telnet into the target IP on port 3260.
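A quick reachability check can be done with netcat, reusing the target IP from the example configuration:
$ nc -vz 100.64.2.3 3260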
Once you found your new sd device, you can format it and mount it as a regular
disk device:
# newfs /dev/rsd4c
# mount /dev/sd4c /mnt
iSCSI is far more efficient and faster than NFS, but it has a totally
different purpose. I’m using it on my powerpc machines to build packages on
it. This reduces their old IDE disks usage while giving better response time
and equivalent speed.
This is the first article of a series about iSCSI.
iSCSI is a protocol designed for sharing a block device across the
network as if it was a local disk. This doesn’t permit using that
disk from multiple places at once though, except if you use a
specific filesystem like GFS2 or OCFS2 (Linux only). In this article,
we will learn how to create an iSCSI target, which is the “server”
part of iSCSI: the target is the system holding the disk and making
it available to others on the network.
OpenBSD does not have a target server in base, we will have to use
net/netbsd-iscsi-target for this. The setup is really simple.
First, we obviously need to install the package, and we will activate the
daemon so it starts automatically at boot, but don’t start it yet:
# pkg_add netbsd-iscsi-target
# rcctl enable iscsi_target
The configuration files are in the /etc/iscsi/ folder; it contains the files
auths and targets. The default configuration files are the same. By
looking at the source code, it seems that auths is loaded but has no
actual use. We will just overwrite it every time we modify
targets to keep them in sync.
Default /etc/iscsi/targets (with comments stripped):
extent0 /tmp/iscsi-target0 0 100MB
target0 rw extent0 10.4.0.0/16
The first line defines the file holding our disk in the second field, and the
last field defines the size of it. When iscsi-target will be started, it will
create files as required with the size defined here.
The second line defines permissions, in that case, the extent0 disk can be used
read/write by the net 10.4.0.0/16. For this example, I will only change the
netmask to suit my network, then I copy targets over auths.
Let’s start the daemon:
# rcctl start iscsi_target
# rcctl check iscsi_target
iscsi_target(ok)
If you want to restrict ports using PF, you only have to allow the TCP port
3260 from the network that will connect to the target. The corresponding line
would look like this:
pass in proto tcp to port 3260
Done!
Drist release 1.04 is now available. This adds support for the flag -p to
make the ssh connection persistent across the script, using the ssh
ControlMaster feature. This fixes one use case where you modify ssh keys in
two operations (copy file + script to change permissions) and makes drist a
lot faster for quick tasks.
Drist makes a first ssh connection to get the real hostname of the remote
machine, and then will ssh for each step (copy, copy-hostname, absent,
absent-hostname, script, script-hostname); this means that in the use case
where you copy one file and reload a service, it was doing 3 connections. Now
with the persistent flag, drist will keep the first connection and reuse it,
closing the control socket at the end of the script.
Drist is now 121 lines long.
Download v1.04
SHA512 checksum, split in two lines here to not break the display:
525a7dc1362877021ad2db8025832048d4a469b72e6e534ae4c92cc551b031cd
1fd63c6fa3b74a0fdae86c4311de75dce10601d178fd5f4e213132e07cf77caa
I never used a command line utility to check the spelling in my texts because
I did not know how to do it. After taking five minutes to learn how, I feel
guilty about not having used it before as it is really simple.
First, you want to install the aspell package, which may already be there,
pulled in as a dependency. On OpenBSD it’s easy:
# pkg_add aspell
I will only explain how to use it on text files. I think it is possible to have
some integration with text editors but then, it would be more relevant to check
out the editor documentation.
If I want to check the spelling in my file draft.txt it is as simple as:
$ aspell -l en_EN -c draft.txt
The parameter -l en_EN will depend on your locale; I have fr_FR.UTF–8 so
aspell uses it by default if I don’t enforce another language. With this
command, aspell will open an interactive display in the terminal.
The output looks like this, with the word ful highlighted, which I can not
render in my article.
It's ful of mistakkes!
I dont know how to type corectly!
1) flu 6) FL
2) foul 7) fl
3) fuel 8) UL
4) full 9) fol
5) furl 0) fur
i) Ignore I) Ignore all
r) Replace R) Replace all
a) Add l) Add Lower
b) Abort x) Exit
?
I am asked how I want to resolve the issue with ful, as I wanted to write
full, I will type 4 and aspell will replace the word ful with full.
This will automatically jump to the next error found, mistakkes in my case:
It's full of mistakkes!
I dont know how to type corectly!
1) mistakes 6) misstates
2) mistake's 7) mistimes
3) mistake 8) mistypes
4) mistaken 9) stake's
5) stakes 0) Mintaka's
i) Ignore I) Ignore all
r) Replace R) Replace all
a) Add l) Add Lower
b) Abort x) Exit
?
and it will continue until there are no errors left, then the file is saved
with the changes.
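If you only want the list of misspelled words without the interactive session, the list command of aspell reads standard input; a sketch reusing the same language flag:
$ aspell -l en_EN list < draft.txt | sort -u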
I will use aspell every day from now on.
It’s been a long time since I last wrote a “port of the week”.
This week, I am happy to present you sct, a very small utility software to
set the color temperature of your screen. You can install it on OpenBSD with
pkg_add sct, and its usage is really simple: just run sct $temp where $temp
is the temperature you want to get on your screen.
The default temperature is 6500; if you lower this value, the screen will
shift toward red, meaning your screen will appear less blue and this may be
more comfortable for some people. The temperature you want to use depends on
the screen and on your feeling: I have one screen which is correct at 5900
but another old screen which turns too red below 6200!
You can add sct 5900 to your .xsession file to run it when you start your
X11 session.
There is an alternative to sct named redshift; it is more complicated as you
need to tell it your location with latitude and longitude and, as a daemon,
it will continuously adjust your screen temperature depending on the time.
This is possible because when you know your location on earth and the time,
you can compute the sunrise and dawn times. sct is not a daemon: you run it
once and it does not change the temperature until you call it again.
This article will show you how to make drist faster by using it on multiple
servers at the same time, in a correct way.
What is drist?
drist is my deployment tool, presented in a previous article on this blog.
It is easily possible to parallelize drist (this works for everything though)
using a Makefile. I use this to deploy a configuration on my servers at the
same time, which is way faster.
A simple BSD Make compatible Makefile looks like this:
SERVERS=tor-relay.local srvmail.tld srvmail2.tld
${SERVERS}:
	drist $*

install: ${SERVERS}

.PHONY: all install ${SERVERS}
This creates a target for each server in my list which will call drist.
Typing make install will iterate over the $SERVERS list, but it is also
possible to use make -j 3 to tell make to use 3 threads. The output may be
mixed though.
You can also use make tor-relay.local if you don’t want make to iterate over
all servers. This doesn’t do more than typing drist tor-relay.local in the
example, but your Makefile may do other logic before/after.
If you want to type make to deploy everything instead of make install, you
can add the line all: install in the Makefile.
If you use GNU Make (gmake), the file requires a small change: the part
${SERVERS}: must be changed to ${SERVERS}: %:. I think gmake will print a
warning, but I did not manage to get a better result. If you have a solution
to remove the warning, please tell me.
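For reference, the complete gmake compatible file would then look like this (same content as above, only the rule header changes to a static pattern rule):
SERVERS=tor-relay.local srvmail.tld srvmail2.tld
${SERVERS}: %:
	drist $*
install: ${SERVERS}
.PHONY: all install ${SERVERS}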
If you are not comfortable with Makefiles, the .PHONY line tells make that
the targets are not valid files.
Make is awesome!
Hi, I rarely post about external links or other people’s work, but at FOSDEM
2019 Vincent Delft had a
talk about running OpenBSD as a full featured NAS.
I do use OpenBSD on my NAS, and I have wanted to write an article about it for
a long time but never did. Thanks to Vincent, I can just share his work, which
is very very interesting if you plan to make your own NAS.
Videos can be downloaded directly with the following links provided by FOSDEM:
Hi, I have wanted to write this article for a long time. The topic is Kermit,
a file transfer protocol from the 80’s which solved problems of that
era (text and binary files, poor lines, high latency etc..).
There is a comm/kermit package on OpenBSD and I am going to show you how to use
it. The package contains the program ckermit, which is a client/server for kermit.
Kermit is a lot of things: there is the protocol, but there is also the
client/server; when you type kermit, it opens a kermit shell where you
can type commands or write kermit scripts. This allows scripts to be run using
kermit in the shebang.
I personally use kermit over ssh to retrieve files from my remote server, this
requires kermit on both machines. My script is the following:
#!/usr/local/bin/kermit +
set host /pty ssh -t -e none -l solene perso.pw kermit
remote cd /home/ftp/
cd /home/solene/Downloads/
reget /recursive /delete .
close
exit
This connects to the remote server and starts kermit. It changes the current
directory on the remote server to /home/ftp and locally it goes into
/home/solene/Downloads; then it starts retrieving data, continuing the
previous transfer if unfinished (reget command), and every finished file is
deleted on the remote server. Once finished, it closes the ssh connection
and exits.
The transfer interface looks like this. It shows how you are connected, which
file is currently transferring, its size, the percent done (0% in the example),
time left, speed and some other information.
C-Kermit 9.0.302 OPEN SOURCE:, 20 Aug 2011, solene.perso.local [192.168.43.56]
Current Directory: /home/downloads/openbsd
Network Host: ssh -t -e none -l solene perso.pw kermit (UNIX)
Network Type: TCP/IP
Parity: none
RTT/Timeout: 01 / 03
RECEIVING: src.tar.gz => src.tar.gz => src.tar.gz
File Type: BINARY
File Size: 183640885
Percent Done:
...10...20...30...40...50...60...70...80...90..100
Estimated Time Left: 00:43:32
Transfer Rate, CPS: 70098
Window Slots: 1 of 30
Packet Type: D
Packet Count: 214
Packet Length: 3998
Error Count: 0
Last Error:
Last Message:
X to cancel file, Z to cancel group, <CR> to resend last packet,
E to send Error packet, ^C to quit immediately, ^L to refresh screen.
What’s interesting is that you can skip a file by pressing “X”: kermit will
stop downloading it (but keep the file for later resuming) and start
downloading the next file. It can be useful sometimes when you transfer a
bunch of files and one is really big and you don’t want it now and don’t want
to type the command by hand; just press “X” and it skips it. Z or E will exit
the transfer and close the connection.
Speed can be improved by adding the following lines before the reget command:
set reliable
set window 32
set receive packet-length 9024
This improves performance because nowadays our networks are mostly reliable and
fast. Kermit was designed at a time when serial line was used to transfer data.
It’s also reported that Kermit is in use in the ISS (International Space
Station), I can’t verify if it’s still in use there.
I never had any issue while transferring, even by getting a file by resuming it
so many times or using a poor 4G hot-spot with 20s of latency.
I did some tests and I get the same performance as rsync over the Internet;
it’s a bit slower over LAN though.
I only described one use case. Scripts can be made, and there are a lot of
other commands. You can type “help” in the kermit shell to get some hints for
more help, and “?” will display the command list.
It can be used interactively: you can queue files by using “add” to create a
send-list, and then proceed to transfer the queue.
Another way to use it is to start the local kermit shell, then type “ssh
user@remote-server” which will ssh into a remote box. Then you can type
“kermit” and type kermit commands; this makes a link between your local kermit
and the remote one. You can go back to the local kermit by typing “Ctrl+\”,
and go back to the remote by entering the command “C”.
This is a piece of software I found by lurking in the ports tree to
discover new software, and I fell in love with it. It’s really reliable.
It does a different job compared to rsync; I don’t think it can preserve time,
permissions etc… but it can be scripted completely, using parameters, and
it’s an awesome piece of software!
It should support HTTP, HTTPS and ftp transfers too, as a client, but I did not
get it to work. On OpenBSD, the HTTPS support is disabled as it requires some
work to switch to LibreSSL.
You can find information on the official
website.
Hi from 2019! Some news about me and this blog.
It’s been more than a month since the last article, which is unusual. I
don’t have much time these days and the ideas in the queue are not easy
topics, so I don’t publish anything.
I am now on Mastodon at solene@bsd.network, publishing things on the
Fediverse. Mostly UNIX propaganda.
This year I plan to work on reed-alert to improve its usage, maybe write
more how-to or documentation about it too. I also think about writing
non-core probes in a separate repository.
Cl-yag, the blog generator that I use for this blog, deserves some
attention too. I would like to make it possible to create static pages
not listed in the index/RSS; this doesn’t require much code as I already
have a proof of concept, but it requires some changes to integrate it
better.
Finally, my deployment tool drist should definitely be fixed to support
tcsh and csh on remote shells for script execution. This requires a few
easy changes. Some better documentation and how-to would be nice too.
I also revived a project named faubackup, it’s a backup software which
is now hosted on Framagit.
And I revived another project of mine, a package statistics website
collecting stats about installed OpenBSD packages. The code is not great,
the web UI is not great, the filters are not great, but it works. It needs
improvements. I’m thinking about making a package of it for people wishing
to participate, that would install the client and add a cron job to update
the package list weekly. The Web UI is at this address
Pkgstat; that name is not good but
I did not find a better one yet. The code can be downloaded
here.
Thank you for reading :)
In this new article I will explain how to programmatically split
a line (insert a newline) using ed.
We will use commands sent to ed on its stdin to do so. The logic is to
locate the part where to add the newline and whether a character needs to be
replaced.
this is a file
with a too much line in it that should be split
but not this one.
In order to do so, we will format the command list using printf(1),
with a small trick to insert the newline. The command list is the
following:
/too much line
s/that /that\

,n
This searches for the first line matching “too much line” and then replaces
“that ” with “that\n”: the trick is to escape the newline with a backslash
(followed by an empty line terminating the replacement) so the substitution
command can accept the newline, and at the end we print the file with line
numbers (replace ,n with w to write the file instead).
The resulting command line is:
$ printf '/too much line\ns/that /that\\\n\n,n\n' | ed file.txt
81
with a too much line in it that should be split
should be split
1	this is a file
2	with a too much line in it that
3	should be split
4	but not this one.
?
Hello, in this article I will present you my deployment tool drist (if you
speak Russian, I am already aware of what you think). It reached a feature
complete status today and now I can write about it.
As a system administrator, I started using salt a few years ago. And
honestly, I can not cope with it anymore. It is slow, it can get very
complicated for some tasks like correctly ordering commands, and a
configuration file can become a nightmare when you start using conditions
in it.
about my presentation of it at bitreichcon 2018.
History
I also tried alternatives like ansible, puppet, Rex etc… One day, when
lurking in the ports tree, I found sysutils/radmind which got a lot of
interest from me even if it is really poorly documented. It is a project from
1995 if I remember correctly, but I liked the base idea. Radmind works with
files: you create a known working set of files for your system, and you can
propagate that whole set to other machines, or see differences between the
reference and the current system. Sets could be negative, meaning that the
listed files should not be present on the system, but it was also possible to
add extra sets for specific hosts. The whole thing is really really cumbersome,
it requires a lot of work, I found little documentation etc… so I did not
use it, but that led me to write my own deployment tool using ideas from
radmind (working with files) and from Rex (using a script for doing
changes).
Concept
drist aims at being simple to understand and pluggable with standard tools.
There is no special syntax to learn, no daemon to run, no agent, and it relies
on base tools like awk, sed, ssh and rsync.
drist is cross platform as it has few requirements, but it is not well
suited for deploying on too many different operating systems.
When executed, drist will execute six steps in a specific order; you can
use only the steps you need.
Shamelessly copied from the man page, explanations after:
- If folder files exists, its content is copied to the server using rsync(1).
- If folder files-HOSTNAME exists, its content is copied to server using rsync(1).
- If folder absent exists, filenames in it are deleted on server.
- If folder absent-HOSTNAME exists, filenames in it are deleted on server.
- If file script exists, it is copied to server and executed there.
- If file script-HOSTNAME exists, it is copied to server and executed there.
In the previous list, all the existence checks are done from the current
working directory where drist is started. The text HOSTNAME is replaced by
the output of uname -n on the remote server, and files are copied starting
from the root directory.
drist does not do anything more. In a more literal manner, it copies files to
the remote server, using a local filesystem tree (folder files). It will
delete on the remote server all files listed in the local filesystem tree
(folder absent), and it will run on the remote server a script named
script.
Each of these can be customized per-host by adding a “-HOSTNAME” suffix to the
folder or file name, because experience taught me that some hosts do require
specific configuration.
If a folder or a file does not exist, drist will skip it. So it is possible
to only copy files, or only execute a script, or delete files and execute a
script after.
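As an illustration, a hypothetical drist module could have the following layout (all paths here are invented for the example):
$ find . -type f
./files/etc/ntpd.conf
./files-myhost/etc/doas.conf
./absent/etc/obsolete.conf
./script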
Drist usage
The usage is pretty simple. drist has 3 flags which are optional.
- -n flag will show what happens (simulation mode)
- -s flag tells drist to use sudo on the remote host
- -e flag with a parameter will tell drist to use a specific path for the sudo
program
The remote server address (ssh format like user@host) is mandatory.
$ drist my_user@my_remote_host
drist will look at files and folders in the current directory when executed;
this allows organizing things as you want, using your filesystem and a
revision control system.
Simple examples
Here are two examples to illustrate its usage. The examples are easy, for
learning purposes.
Deploying ssh keys
I want to easily copy my users ssh keys to a remote server.
$ mkdir drist_deploy_ssh_keys
$ cd drist_deploy_ssh_keys
$ mkdir -p files/home/my_user1/.ssh
$ mkdir -p files/home/my_user2/.ssh
$ cp -fr /path/to/key1/id_rsa files/home/my_user1/.ssh/
$ cp -fr /path/to/key2/id_rsa files/home/my_user2/.ssh/
$ drist user@remote-host
Copying files from folder "files":
/home/my_user1/.ssh/id_rsa
/home/my_user2/.ssh/id_rsa
Deploying authorized_keys file
We can easily create the authorized_keys file by using cat.
$ mkdir drist_deploy_ssh_authorized
$ cd drist_deploy_ssh_authorized
$ mkdir -p files/home/user/.ssh/
$ cat /path/to/user/keys/*.pub > files/home/user/.ssh/authorized_keys
$ drist user@remote-host
Copying files from folder "files":
/home/user/.ssh/authorized_keys
This can be automated using a makefile running the cat command and then running
drist.
all:
	cat /path/to/keys/*.pub > files/home/user/.ssh/authorized_keys
	drist user@remote-host
Installing nginx on FreeBSD
This module (aka a folder which contains material for drist) will install
nginx on FreeBSD and start it.
$ mkdir deploy_nginx
$ cd deploy_nginx
$ cat >script <<'EOF'
#!/bin/sh
test -f /usr/local/bin/nginx
if [ $? -ne 0 ]; then
pkg install -y nginx
fi
sysrc nginx_enable=yes
service nginx restart
EOF
$ drist user@remote-host
Executing file "script":
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
nginx: 1.14.1,2
Number of packages to be installed: 1
The process will require 1 MiB more space.
421 KiB to be downloaded.
[1/1] Fetching nginx-1.14.1,2.txz: 100% 421 KiB 430.7kB/s 00:01
Checking integrity... done (0 conflicting)
[1/1] Installing nginx-1.14.1,2...
===> Creating groups.
Using existing group 'www'.
===> Creating users
Using existing user 'www'.
[1/1] Extracting nginx-1.14.1,2: 100%
Message from nginx-1.14.1,2:
===================================================================
Recent version of the NGINX introduces dynamic modules support. In
FreeBSD ports tree this feature was enabled by default with the DSO
knob. Several vendor's and third-party modules have been converted
to dynamic modules. Unset the DSO knob builds an NGINX without
dynamic modules support.
To load a module at runtime, include the new `load_module'
directive in the main context, specifying the path to the shared
object file for the module, enclosed in quotation marks. When you
reload the configuration or restart NGINX, the module is loaded in.
It is possible to specify a path relative to the source directory,
or a full path, please see
https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/ and
http://nginx.org/en/docs/ngx_core_module.html#load_module for
details.
Default path for the NGINX dynamic modules is
/usr/local/libexec/nginx.
===================================================================
nginx_enable: -> yes
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
nginx not running? (check /var/run/nginx.pid).
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.
More complex example
Now I will show more complexes examples, with host specific steps. I will not
display the output because the previous output were sufficient enough to give a
rough idea of what drist does.
Removing someone’s ssh access
We will reuse an existing module here: a user should no longer be able to
log in to their account on the servers using their ssh key.
$ cd ssh
$ mkdir -p absent/home/user/.ssh/
$ touch absent/home/user/.ssh/authorized_keys
$ drist user@server
Installing php on FreeBSD
The following module will install php and remove the opcache.ini file, and will
install php72-pdo_pgsql if it is run on server production.domain.private.
$ mkdir deploy_php && cd deploy_php
$ mkdir -p files/usr/local/etc
$ cp /some/correct/config.ini files/usr/local/etc/php.ini
$ cat > script <<EOF
#!/bin/sh
test -f /usr/local/etc/php-fpm.conf || pkg install -f php-extensions
sysrc php_fpm_enable=yes
service php-fpm restart
test -f /usr/local/etc/php/opcache.ini && rm /usr/local/etc/php/opcache.ini
EOF
$ cat > script-production.domain.private <<EOF
#!/bin/sh
test -f /usr/local/etc/php/pdo_pgsql.ini || pkg install -f php72-pdo_pgsql
service php-fpm restart
EOF
The monitoring machine
This one is unique and I would like to avoid applying its configuration
against another server (that happened to me once with salt and it was really,
really bad). So I will just do the whole job using the hostname specific cases.
$ mkdir my_unique_machine && cd my_unique_machine
$ mkdir -p files-unique-machine.private/usr/local/etc/{smokeping,munin}
$ cp /good/config files-unique-machine.private/usr/local/etc/smokeping/config
$ cp /correct/conf files-unique-machine.private/usr/local/etc/munin/munin.conf
$ cat > script-unique-machine.private <<EOF
#!/bin/sh
pkg install -y smokeping munin-master munin-node
munin-node-configure --suggest --shell | sh
sysrc munin_node_enable=yes
sysrc smokeping_enable=yes
service munin-node restart
service smokeping restart
EOF
$ drist user@incorrect-host
$ drist user@unique-machine.private
Copying files from folder "files-unique-machine.private":
/usr/local/etc/smokeping/config
/usr/local/etc/munin/munin.conf
Executing file "script-unique-machine.private":
[...]
Nothing happened on the wrong system.
Be creative
Everything can be automated easily. I have some makefile in a lot of my drist
modules, because I just need to type “make” to run it correctly. Sometimes it
requires concatenating files before being run, sometimes I do not want to make
mistake or having to remember on which module apply on which server (if it’s
specific), so the makefile does the job for me.
One of my drist modules looks at all my SSL certificates from another
module and makes a reed-alert configuration file using awk, deploying it on
the monitoring server. All I do is type “make” and enjoy my free time.
How to get it and install it
- Drist can be downloaded at this address.
- Sources can be cloned using
git clone git://bitreich.org/drist
In the sources folder, type “make install” as root; that will copy the drist
binary to /usr/bin/drist and its man page to /usr/share/man/man1/drist.1
For copying files, drist requires rsync on both local and remote hosts.
For running the script file, a sh compatible shell is required on the remote host (csh does not work).
This second fun-tip article will explain how to display trailing
spaces in a text file, using the ed(1) editor.
ed has a special command for showing a dollar character at the end of
each line, which means that if the line ends with spaces, the dollar
character will be separated from the last visible character of the line.
$ echo ",pl" | ed some-file.txt
453
This second fun-tip article will explain how to display trailing$
spaces in a text file, using the$
.Lk https://man.openbsd.org/ed ed(1)$
editor.$
ed has a special command for showing a dollar character at the end of$
each line, which mean that if the line has some spaces, the dollar$
character will spaced from the last visible line character.$
$
.Bd -literal -offset indent$
echo ",pl" | ed some-file.txt$
This is the output of the article file while I am writing it. As you
can notice, there is no trailing space here.
The first number shown in the ed output is the file size, because ed
starts at the end of the file and then waits for commands.
If I use that very same command on a small text file with trailing
spaces, the following result is expected:
49
this is full $
of trailing $
spaces ! $
It is also possible to display line numbers using the “n” command
instead of the “p” command.
This would produce this result for my current article file:
1559
1 .Dd November 29, 2018$
2 .Dt "Show trailing spaces using ed"$
3 This second fun-tip article will explain how to display trailing$
4 spaces in a text file, using the$
5 .Lk https://man.openbsd.org/ed ed(1)$
6 editor.$
7 ed has a special command for showing a dollar character at the end of$
8 each line, which mean that if the line has some spaces, the dollar$
9 character will spaced from the last visible line character.$
10 $
11 .Bd -literal -offset indent$
12 echo ",pl" | ed some-file.txt$
13 453$
14 .Dd November 29, 2018
15 .Dt "Show trailing spaces using ed"
16 This second fun-tip article will explain how to display trailing
17 spaces in a text file, using the
18 .Lk https://man.openbsd.org/ed ed(1)
19 editor.
20 ed has a special command for showing a '\ character at the end of
21 each line, which mean that if the line has some spaces, the '\
22 character will spaced from the last visible line character.
23
24 \&.Bd \-literal \-offset indent
25 \echo ",pl" | ed some-file.txt
26 .Ed$
27 $
28 This is the output of the article file while I am writing it. As you$
29 can notice, there is no trailing space here.$
30 $
31 The first number shown in the ed output is the file size, because ed$
32 starts at the end of the file and then, wait for commands.$
33 $
34 If I use that very same command on a small text files with trailing$
35 spaces, the following result is expected:$
36 $
37 .Bd -literal -offset indent$
38 49$
39 this is full
40 of trailing
41 spaces !
42 .Ed$
43 $
44 It is also possible to display line numbers using the "n" command$
45 instead of the "p" command.$
46 This would produce this result for my current article file:$
47 .Bd -literal -offset indent$
This shows my article file with each line numbered plus the position
of the last character of each line, this is awesome!
I have to admit though that including my own article as example is
blowing up my mind, especially as I am writing it using ed.
If for some reason you need to share a file anonymously, this can be done
through Tor using the port net/onionshare. Onionshare will start a web server
displaying a unique page with a list of shared files and a Download Files
button leading to a zip file.
While waiting for a download, onionshare will display HTTP logs. By default,
onionshare will exit upon successful download of the files, but this can be
changed with the flag --stay-open.
Its usage is very simple, execute onionshare with the list of files to
share, as you can see in the following example:
solene@computer ~ $ onionshare Epictetus-The_Enchiridion.txt
Onionshare 1.3 | https://onionshare.org/
Connecting to the Tor network: 100% - Done
Configuring onion service on port 17616.
Starting ephemeral Tor onion service and awaiting publication
Settings saved to /home/solene/.config/onionshare/onionshare.json
Preparing files to share.
* Running on http://127.0.0.1:17616/ (Press CTRL+C to quit)
Give this address to the person you're sending the file to:
http://3ngjewzijwb4znjf.onion/hybrid-marbled
Press Ctrl-C to stop server
Now, I need to give the address http://3ngjewzijwb4znjf.onion/hybrid-marbled
to the receiver who will need a web browser with Tor to download it.
This article is about a software named onioncat; it is available as a
package on most Unix and Linux systems. This software allows creating an IPv6
VPN over Tor, with no restrictions on network usage.
First, we need to install onioncat; on OpenBSD:
$ doas pkg_add onioncat
Run a tor hidden service, as explained in one of my previous articles, and get
the hostname value. If you run multiple hidden services, pick one hostname.
# cat /var/tor/ssh_hidden_service/hostname
g6adq2w15j1eakzr.onion
Now that we have the hostname, we just need to run ocat.
# ocat g6adq2w15j1eakzr.onion
If everything works as expected, a tun interface will be created, with a
fe80:: IPv6 address and a fd87:: address assigned to it.
Your system is now reachable, via Tor, through its IPv6 address starting with
fd87::. It supports every IP protocol. Instead of using the torsocks wrapper
and .onion hostnames, you can use the IPv6 address with any software.
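To find out which fd87:: address corresponds to an onion hostname, ocat should be able to do the conversion itself; a sketch, assuming the -i flag of the packaged version:
$ ocat -i g6adq2w15j1eakzr.onion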
It has been more than four months since I wrote my article about leaving Emacs.
This article will quickly speak about my journey.
First, I successfully left Emacs. Long story short, I like Emacs and think
it’s a great piece of software, but I’m not comfortable being dependent on it
for everything I do. I chose to replace all my Emacs usage with other software
(agenda, note taking, todo-list, IRC client, jabber client, editor etc..).
- agenda is now handled by when (port productivity/when), but I plan to
replace it with calendar(1) as it’s in base and when doesn’t do much.
- todo-list: I now use taskwarrior + a kanban board (using kanboard) for team
work
- notes: I wrote a small software named “notes” which is a wrapper for editing
files and tracking the edits using git. It’s available at
git://bitreich.org/notes
- IRC: weechat (not better or worse than emacs circe)
- jabber: profanity
- editor: vim, ed or emacs, depending on what I do. Emacs is excellent for
writing Lisp or Scheme code, while I prefer vim for most editing tasks. I now
use ed for small edits.
- mail: I wrote some kind of wrapper on top of mblaze. I plan to share it
someday.
I’m happy to have moved out from Emacs.
I am starting a new kind of article that I chose to name “fun facts”.
These articles will be about one-liners which can have some kind of use, or
that I find interesting from a technical point of view. While not useless,
these commands may be used in very specific cases.
The first of its kind will explain how to programmatically use diff to
transform file1 into file2, using a command line, and without a patch.
$ printf "first line\nsecond line\nthird line\nfourth line with text\n" > file1
$ cp file1{,.orig}
$ printf "very first line\nsecond line\n third line\nfourth line\n" > file1
We will use the diff(1) -e flag with the two files.
$ diff -e file1 file1.orig
4c
fourth line
.
1c
very first line
.
The diff(1) output is a batch of ed(1) commands
which will transform file1 into file2. This can be embedded into a script as
in the following example. We also add the w command at the end to save the
file after editing.
#!/bin/sh
ed file1 <<EOF
4c
fourth line
.
1c
very first line
.
w
EOF
This is a quite convenient way to transform a file into another file, without
pushing the entire file. This can be used in a deployment script. This is more
precise and less error prone than a sed command.
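The two steps can also be glued together in a single pipeline; a small sketch using the files from above:
$ (diff -e file1 file1.orig; echo w) | ed file1
This regenerates the ed commands on the fly and applies them, appending the w command to save the result.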
In the same way, we can use ed to alter a configuration file by writing
instructions without using diff(1). The following script will change the
first line containing “Port 22” into Port 2222 in /etc/ssh/sshd_config.
#!/bin/sh
ed /etc/ssh/sshd_config <<EOF
/Port 22
c
Port 2222
.
w
EOF
The sed(1) equivalent would be:
sed -i'' 's/.*Port 22.*/Port 2222/' /etc/ssh/sshd_config
Both programs have their use, pros and cons. The most important is to use the
right tool for the right job.
It’s possible to play a native Stardew Valley on OpenBSD, and it’s not using a
weird trick!
First, you need to buy Stardew Valley; it’s not very expensive and is often
available at a lower price. I recommend buying it on
GOG.
Now, follow the steps:
- install packages unzip and fnaify
- On GOG, download the linux installer
- unzip the installer (use unzip command on the .sh file)
- cd into data/noarch/game
- fnaify -y
- ./StardewValley
Enjoy!
sshd(8) has a very nice feature that is often
overlooked. That feature is the ability to allow a ssh user to run a specified
command and nothing else, not even a login shell.
This is really easy to use and the magic happens in the file
authorized_keys which can be used to restrict commands per public key.
For example, if you want to allow someone to run the “uptime” command on your
server, you can create a user account for that person, with no password so the
password login will be disabled, and add his/her ssh public key in
~/.ssh/authorized_keys of that new user, with the following content.
restrict,command="/usr/bin/uptime" ssh-rsa the_key_content_here
The user will not be able to log in, and running the command ssh remoteserver
will return the output of uptime. There is no way to escape this.
While running uptime is not really helpful, this can be used for a much more
interesting use case, like allowing remote users to use vmctl without
giving them a shell account. The vmctl command requires parameters, so the
configuration will be slightly different.
restrict,pty,command="/usr/sbin/vmctl $SSH_ORIGINAL_COMMAND" ssh-rsa the_key_content_here
The variable SSH_ORIGINAL_COMMAND contains the value of what is passed as
parameter to ssh. The pty keyword also makes an appearance; that will be
explained later.
If the user connects over ssh without a parameter, the vmctl usage will be
output.
$ ssh remotehost
usage: vmctl [-v] command [arg ...]
vmctl console id
vmctl create "path" [-b base] [-i disk] [-s size]
vmctl load "path"
vmctl log [verbose|brief]
vmctl reload
vmctl reset [all|vms|switches]
vmctl show [id]
vmctl start "name" [-Lc] [-b image] [-r image] [-m size]
[-n switch] [-i count] [-d disk]* [-t name]
vmctl status [id]
vmctl stop [id|-a] [-fw]
vmctl pause id
vmctl unpause id
vmctl send id
vmctl receive id
If you pass parameters to ssh, it will be passed to vmctl.
$ ssh remotehost show
ID PID VCPUS MAXMEM CURMEM TTY OWNER NAME
1 - 1 1.0G - - solene test
$ ssh remotehost start test
vmctl: started vm 1 successfully, tty /dev/ttyp9
$ ssh -t remotehost console test
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?
The ssh connections become a call to vmctl and ssh parameters become vmctl
parameters.
Note that in the last example, I use “ssh -t”; this forces the allocation of
a pseudo tty device, which is required for vmctl console to get a fully
working console. The restrict keyword does not allow pty allocation, that
is why we have to add pty after restrict, to allow it.
In this fourth Tor article, I will quickly cover how to run a Tor relay, the
Tor project already have a very nice and up-to-date Guide for setting a relay.
Those relays are what make Tor usable, with more relay, Tor gets more bandwidth
and it makes you harder to trace, because that would mean more traffic to
analyze.
A relay server can be an exit node, which relays Tor traffic to the
outside. This can imply a lot of legal issues; the Tor project foundation
offers to help you if your exit node gets you into trouble.
Remember that being an exit node is optional. Most relays are not exit
nodes. They either relay traffic between relays, or become a guard,
which is an entry point to the Tor network. The guard gets the request over
the non-Tor network and sends it to the next relay of the user's circuit.
Running a relay requires a capable CPU (it does a lot of crypto) and a huge
amount of bandwidth: at least 10 Mb/s as a minimal requirement. If you have
less, you can still run a bridge with obfs4, but I won't cover it here.
When running a relay, you will be able to set a daily/weekly/monthly traffic
limit, so your relay will stop relaying when it reaches the quota. It's quite
useful if you don't have unmetered bandwidth; you can also limit the bandwidth
allocated to Tor, as in the sketch below.
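As a minimal sketch, a non-exit relay with a monthly traffic quota and a
bandwidth cap could be declared in /etc/tor/torrc with something like the
following; the nickname, contact address and numbers are placeholders to
adapt:
ORPort 9001
Nickname myrelaynickname
ContactInfo admin@example.com
ExitRelay 0
AccountingStart month 1 00:00
AccountingMax 200 GBytes
RelayBandwidthRate 1 MBytes
RelayBandwidthBurst 2 MBytes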
To get real-time information about your relay, the software Nyx (net/nyx) is a
top-like Tor front end which shows Tor CPU usage, bandwidth, connections and
logs in real time.
The awesome Official Tor guide
In this article I will present the rcs
tools, and we will use them to version files in /etc and track changes
between edits. These tools are part of the OpenBSD base install.
Prerequisites
You need to create an RCS
folder in the directory holding the files you want to track, so the file
revisions will be saved in it. I will use /etc in the examples; you
can adapt to your needs.
# cd /etc
# mkdir RCS
The following examples use the command ci -u; the reason why will be
explained later.
Tracking a file
We need to add a file to the RCS directory so we can track its
revisions. Each time we proceed, we will create a new revision
of the file, which contains the whole file at that point in time. This
will allow us to see changes between revisions, and the date of each
revision (and some other information).
I really recommend tracking the files you edit on your system, or even
the configuration files in your home directory.
In the next example, we will create the first revision of our file with
ci, and we will have to write a message about
it, like what the file is for. Once we write the message, we need to
validate with a single dot on its own line.
# cd /etc
# ci -u fstab
fstab,v <-- fstab
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> this is the /etc/fstab file
>> .
initial revision: 1.1
done
Editing a file
The editing process has multiple steps, using ci and co:
- check out the file and lock it; this makes the file available
for writing and prevents another co on it (due to the lock)
- edit the file
- commit the new revision and check the file out again
When using ci to store the new revision, we need to write a small
message; try to write something clear and short. The log messages can be
seen in the file history, which should help you know which change
was made and why. The full process is shown in the following
example.
# co -l fstab
RCS/fstab,v --> fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v <-- fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
revision 1.4 (unlocked)
done
View changes since the last version
Continuing the previous example, we will use rcsdiff
to check the changes since the last revision.
# co -l fstab
RCS/fstab,v --> fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# rcsdiff -u fstab
--- fstab 2018/10/28 14:28:29 1.1
+++ fstab 2018/10/28 14:30:41
@@ -9,3 +9,4 @@
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong
The -u flag produces a unified diff, which I find easier to
read. Lines with + show additions, and lines with - show
deletions (there are none in this example).
Use of ci -u
The examples use ci -u because, if you run plain ci some_file,
the file gets saved in the RCS folder but disappears from its
place. You would then have to run co some_file to get it back (in
read-only).
# co -l fstab
RCS/fstab,v --> fstab
revision 1.1 (locked)
done
# echo "something wrong" >> fstab
# ci -u fstab
RCS/fstab,v <-- fstab
new revision: 1.4; previous revision: 1.3
enter log message, terminated with a single '.' or end of file:
>> I added a mistake on purpose!
>> .
done
# ls fstab
ls: fstab: No such file or directory
# co fstab
RCS/fstab,v --> fstab
revision 1.5
done
# ls fstab
fstab
Using ci -u is very convenient because it prevents forgetting
to check the file out again after committing the changes.
Show existing revisions of a file
# rlog fstab
RCS file: RCS/fstab,v
Working file: fstab
head: 1.2
branch:
locks: strict
access list:
symbolic names:
keyword substitution: kv
total revisions: 2; selected revisions: 2
description:
new file
----------------------------
revision 1.2
date: 2018/10/28 14:45:34; author: solene; state: Exp; lines: +1 -0;
Adding a disk
----------------------------
revision 1.1
date: 2018/10/28 14:45:18; author: solene; state: Exp;
Initial revision
=============================================================================
We have revisions 1.1 and 1.2, if we want to display the file in its
1.1 revision, we can use the following command:
# co -p1.1 fstab
RCS/fstab,v --> standard output
revision 1.1
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
done
Note that there is no space between the flag and the revision! This
is required.
We can see that the command outputs some extra information about
the file, and “done” at the end of the file. This extra
information is sent to stderr, while the actual file content is sent
to stdout. That means if we redirect stdout to a file, we will get the
file content only.
# co -p1.1 fstab > a_file
RCS/fstab,v --> standard output
revision 1.1
done
# cat a_file
52fdd1ce48744600.b none swap sw
52fdd1ce48744600.a / ffs rw 1 1
52fdd1ce48744600.l /home ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.d /tmp ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.f /usr ffs rw,nodev 1 2
52fdd1ce48744600.g /usr/X11R6 ffs rw,nodev 1 2
52fdd1ce48744600.h /usr/local ffs rw,wxallowed,nodev 1 2
52fdd1ce48744600.k /usr/obj ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
Show a diff of a file since a revision
We can use rcsdiff with the -r flag to show the
changes between the current state and a specific revision.
# rcsdiff -u -r1.1 fstab
--- fstab 2018/10/29 14:45:18 1.1
+++ fstab 2018/10/29 14:45:34
@@ -9,3 +9,4 @@
52fdd1ce48744600.j /usr/src ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.e /var ffs rw,nodev,nosuid 1 2
52fdd1ce48744600.m /data ffs rw,dev,wxallowed,nosuid 1 2
+something wrong
With the new OpenSMTPD syntax which landed with the OpenBSD 6.4
release, changes are needed to make OpenSMTPD act as a LAN relay
to an SMTP server. This case wasn't covered in my previous article
about OpenSMTPD, where I only wrote about relaying from the local
machine, not for a network. Mike (a reader of the blog) shared that it
would be nice to have an article about it. Here it is! :)
A simple configuration would look like the following:
listen on em0
listen on lo0
table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db
action "local" mbox alias <aliases>
action "relay" relay host smtps://myrelay@remote-smtpd.tld auth <secrets>
match for local action "local"
match from local for any action "relay"
match from src 192.168.1.0/24 for any action "relay"
The daemon will listen on em0 interface, and mail delivered from the
network will be relayed to remote-smtpd.tld.
For a relay using authentication, the login and password must be
defined in the file /etc/mail/secrets like this:
myrelay login:Pa$$W0rd
smtpd.conf(5) explains creation
of /etc/mail/secrets like this:
touch /etc/mail/secrets
chmod 640 /etc/mail/secrets
chown root:_smtpd /etc/mail/secrets
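Since the configuration above refers to the binary table
db:/etc/mail/secrets.db, the plain text file has to be converted into its
database form with makemap after each change, with something like:
# makemap /etc/mail/secrets
This is only needed for db: tables; file: tables are read as-is.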
In this third Tor article, we will discover the web browser Tor
Browser.
The Tor Browser is an official Tor project. It is a modified
Firefox, including some default settings changes and some extensions.
The changed defaults are all related to privacy and anonymity. It has
been made to be easy to browse the Internet through Tor without
leaving behind any information which could help identify you, because
there is much more information than your public IP address which
could be used against you.
It requires the tor daemon to be installed and running, as I covered in my
first Tor article.
Using it is really straightforward.
How to install tor-browser
$ pkg_add tor-browser
How to start tor-browser
$ tor-browser
It will create a ~/TorBrowser-Data folder at launch. You can remove it
whenever you want; it doesn't contain anything sensitive, but it is required
for Tor Browser to work.
If you are using OpenSMTPD on a device not
always connected to the internet, you may want to see which mail was not
delivered, and force it to be delivered NOW when you are finally connected
to the Internet.
We can use smtpctl to show the current queue.
$ doas smtpctl show queue
1de69809e7a84423|local|mta|auth|so@tld|dest@tld|dest@tld|1540362112|1540362112|0|2|pending|406|No MX found for domain
The previous command will report nothing if the queue is empty.
In the previous output, we see that there is one mail from me to
dest@tld which is pending due to “No MX found for domain” (which is
normal as I had no internet access when I sent the mail).
We need to extract the first field, which is 1de69809e7a84423 in the
current example.
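As the output is |-separated, the identifier can also be extracted with
cut(1):
$ doas smtpctl show queue | cut -d'|' -f1
1de69809e7a84423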
In order to tell opensmtpd to deliver it now, we will use the
following command:
$ doas smtpctl schedule 1de69809e7a84423
1 envelope scheduled
$ doas smtpctl show queue
My mail was delivered, it’s not in the queue anymore.
If you wish to deliver all envelopes in the queue, this is as simple as:
$ doas smtpctl schedule all
My website/gopherhole static generator cl-yag has been updated today
and sees its first release!
The new feature added today is that the gopher output now supports an
index menu of tags, and a menu for each tag listing the articles
tagged with it. The gopher output was a bit of a second-class
citizen before this, only listing articles.
New release v1.00 can be downloaded here (sha512 sum
53839dfb52544c3ac0a3ca78d12161fee9bff628036d8e8d3f54c11e479b3a8c5effe17dd3f21cf6ae4249c61bfbc8585b1aa5b928581a6b257b268f66630819).
Code can be cloned with git: git://bitreich.org/cl-yag
In this second Tor article, I will present an interesting Tor feature
named hidden services. The principle of a hidden service is to
make a network service available from anywhere, the only
prerequisites being that the computer is powered on, has network
access, and tor is not blocked.
This service will be available through an address which discloses
nothing about the server's internet provider or its IP; instead, a
hostname ending in .onion will be provided by tor for
connecting. This hidden service will only be accessible through Tor.
There are a few advantages of using hidden services:
- privacy, hostname doesn’t contain any hint
- security, secure access to a remote service not using SSL/TLS
- no need for running some kind of dynamic dns updater
The drawback is that it's quite slow and it only works for TCP
services.
From here, we assume that Tor is installed and working.
Running a hidden service requires modifying the Tor daemon
configuration file, located at /etc/tor/torrc on OpenBSD.
Add the following lines in the configuration file to enable a hidden
service for SSH:
HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
The directory /var/tor/ssh_service will be created. The
directory /var/tor is owned by the user _tor and not readable by
other users. The hidden service directory can be named as you want,
but it should be owned by user _tor with restricted
permissions. The tor daemon will take care of creating the directory with
correct permissions once you reload it.
Now you can reload the tor daemon to make the hidden service
available.
$ doas rcctl reload tor
In the /var/tor/ssh_service directory, two files are created. What
we want is the content of the file hostname which contains the
hostname to reach our hidden service.
$ doas cat /var/tor/ssh_service/hostname
piosdnzecmbijclc.onion
Now, we can use the following command to connect to the hidden service
from anywhere.
$ torsocks ssh piosdnzecmbijclc.onion
Within the Tor network, this feature doesn't use an exit node. Hidden
services can be used for various protocols like http, imap, ssh, gopher etc…
Using a hidden service isn't illegal, nor does it make your computer
relay Tor traffic; as previously, just check whether you can use Tor on your
network.
Note: it is possible to have a version 3 .onion address, which will
prevent hostname collisions, but this produces very long
hostnames. This can be done as in the following example:
HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
HiddenServiceVersion 3
This will produce a really long hostname like
tgoyfyp023zikceql5njds65ryzvwei5xvzyeubu2i6am5r5uzxfscad.onion
If you want to have both the short and the long hostname, you need to
declare the hidden service twice, with different folders.
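A minimal sketch of such a double declaration, assuming a second directory
named ssh_service_v3 (the name is arbitrary):
HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
HiddenServiceDir /var/tor/ssh_service_v3
HiddenServicePort 22 127.0.0.1:22
HiddenServiceVersion 3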
Take care: if you run an ssh service on your public IP and use this
same ssh daemon for the hidden service, the host keys will be the same,
implying that someone could theoretically associate both and know that
this public IP runs this hidden service, breaking anonymity.
Tor is a network service that hides your traffic. People
sniffing your network will not be able to know which server you reach,
and people on the remote side (like the administrator of a web
service) will not know where you are from. Tor helps keep your
anonymity and privacy.
To make it quick, tor makes use of an entry point that you reach
directly, then servers acting as relays unable to decrypt the data they
relay, up to an exit node which makes the real request for
you; the network response travels the opposite way.
You can find more details on the
Tor project homepage.
Installing tor is really easy on OpenBSD. We need to install it
and start its daemon. The daemon will listen by default on localhost
on port 9050. On other systems it should be quite similar: install the
tor package and enable the daemon if it is not enabled by default.
# pkg_add tor
# rcctl enable tor
# rcctl start tor
Now, you can use your favorite program: look at its proxy settings, choose
a “SOCKS” proxy, v5 if possible (it handles the DNS queries), and use the
default address 127.0.0.1 with port 9050.
If you need to use tor with a program that doesn't support setting a
SOCKS proxy, it's still possible to wrap it with torsocks, which
works with most programs. It is very easy to use.
# pkg_add torsocks
$ torsocks ssh remoteserver
This will make ssh go through the Tor network.
Using tor won't make you relay anything, and it is legal in most
countries. Tor is like a VPN: some countries have laws about VPNs, so check
your country's laws if you plan to use tor. Also, note that using
tor may be forbidden on some networks (companies, schools etc.)
because it allows escaping filtering, which may be against some kind
of “usage agreement” for the network.
I will cover the relaying part later; it can lead to legal
uncertainty.
Note: torsocks is a bit of a hack, because it uses LD_PRELOAD to
wrap network system calls; there is a cleaner way to do it with
ssh (or any program supporting a custom command to initialize the
connection) using netcat.
ssh -o ProxyCommand='/usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p' address.onion
This can be simplified by adding the following lines to your
~/.ssh/config file, in order to automatically use the proxy
command when you connect to a .onion hostname:
Host *.onion
ProxyCommand /usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p
This netcat command is tested under OpenBSD; there are different
netcat implementations, so the flags may differ or may not even
exist.
The default OpenBSD partition layout uses a pre-defined template. If
you have a disk bigger than 356 GB, you will have unused space with the
default layout (346 GB before 6.4).
It's possible to create a new partition to use that space if you did
not modify the default layout at installation. You only need to start
disklabel with the flag -E and type a to add a partition; the
defaults will use all the remaining space for the partition.
# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
> a
partition: [m]
offset: [741349952]
size: [258863586]
FS type: [4.2BSD]
> w
> q
No label changes.
The new partition here is m. We can format it with:
# newfs /dev/rsd0m
Then, you should add it to your /etc/fstab; for that, use the same
DUID as the other partitions with the new partition letter. It would look
something like this:
52fdd1ce48744600.m /data ffs rw,nodev,nosuid 1 2
It will be mounted automatically at boot; you only need to create the
folder /data. Now you can run
# mkdir /data
# mount /data
and /data is usable right now.
You can read disklabel(8) and
newfs(8) for more information.
Simple command line to display your installed packages listed by size
from smallest to biggest.
$ pkg_info -sa | paste - - - - | sort -n -k 5
Thanks to sthen@ for the command; I was previously using one involving
awk which was less readable. paste is often forgotten; it has very
specific uses which can't easily be mimicked with other tools. Its
purpose is to join multiple lines into one following some specific rules.
You can easily modify the output to convert the size from bytes to
megabytes with awk:
$ pkg_info -sa | paste - - - - | sort -n -k 5 | awk '{ $NF=$NF/1024/1024 ; print }'
This divides the last element (using space separator) of each line
twice by 1024 and displays the line.
Today I will write about my blog itself. While I started it as my own
documentation for some specific things I always forget about (like
“How to add a route through a specific interface on FreeBSD”) or to
publish my dot files, I enjoyed it and wanted to share about some
specific topics.
Then I started the “port of the week” series, but as time goes by, I find
fewer such programs and so I don't always have something to write
about. Also, as I run multiple servers, sometimes when I feel that
the way I did something is clean and useful, I share it here; it is
a reminder for me, and I also write it to be helpful for others.
Doing things right is time consuming, but I always want to deliver
polished writing. In my opinion, doing things right includes the
following:
- explain why something is needed
- explain code examples
- give hints about potential traps
- say where to look for official documentation
- provide environment information, like the operating system version
used at writing time
- make the reader think and get inspired, instead of providing
material ready to be copy/pasted brainlessly
I try to stay as close as possible to those guidelines. I even
update my previous articles from time to time to check everything still
works on the latest operating system version, so the content stays
relevant. And until it's updated, having the system version lets
the reader think “oh, it may have changed” (or not, but it
becomes the reader's problem).
Now, I want to write about some OpenBSD-specific features, in a way
that highlights them. In OpenBSD everything is documented
correctly, but as a human, one can't read and understand every man
page to know what is possible. Hence the highlighting articles,
trying to show features, how to use them and where they are documented.
I hope you, reader, like what I write. I have been writing here for two
years and I still like it.
Following a discussion on the OpenBSD mailing list misc, today I
will write about how to manage the priority (as in nice priority) of
your daemons or services.
In the rc(8) man page, one can read:
Before init(8) starts rc, it sets the process priority, umask, and
resource limits according to the “daemon” login class as described in
login.conf(5). It then starts rc and attempts to execute the sequence of
commands therein.
Using /etc/login.conf we can manage some limits for services and
daemons, using their rc script name.
For example, to run jenkins at the lowest priority (so it doesn't
cause trouble when it builds), the following line will set it to nice 20.
If you have a file /etc/login.conf.db, you have to update it from
/etc/login.conf using the tool cap_mkdb. This creates a
hashed database for faster information retrieval when the file is
big. By default, that database file doesn't exist and you don't have to
run cap_mkdb. See login.conf(5) for
more information.
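If you do have the database file, refreshing it after a change is a single
command:
# cap_mkdb /etc/login.conf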
In this article I will show how to configure OpenSMTPD, the default mail
server on OpenBSD, to relay mail sent locally to your smtp server. In
practice, this lets you send mail through “localhost” via the right relay,
which also makes it possible to compose mail even when your computer isn't
connected to the internet. Once connected, OpenSMTPD will send the mails.
All you need to understand the configuration and write your own is in the
smtpd.conf(5) man page. This is only a
highlight of what is possible and how to achieve it.
In the OpenBSD 6.4 release, the configuration of OpenSMTPD changed
drastically: now you define actions, and matching rules telling which action
to apply when a mail matches them.
In the following example, we will see two kinds of relay. The first goes
through smtp over the Internet; it's the one you will most likely want to
set up. The other one relays to a remote server which doesn't allow relaying
from outside.
/etc/mail/smtpd.conf
table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets
listen on lo0
action "local" mbox alias <aliases>
action "relay" relay
action "myserver" relay host smtps://myrelay@perso.pw auth <secrets>
action "openbsd" relay host localhost:2525
match mail-from "@perso.pw" for any action "myserver"
match mail-from "@openbsd.org" for any action "openbsd"
match for local action "local"
match for any action "relay"
I defined two actions. The first one is “myserver”: it uses the label
“myrelay” in the relay url, and auth <secrets> tells OpenSMTPD it needs
authentication.
The other action is “openbsd”; it will only relay to localhost on port 2525.
To use them, I define two matching rules of the very same kind. If the mail
that I want to send matches the @domain-name, then the “myserver” or
“openbsd” relay is chosen.
The “openbsd” relay is only available when I create a SSH tunnel binding the
local port 25 of the remote server to my port 2525, with the flag
-L 2525:127.0.0.1:25.
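The tunnel command would look like the following sketch, with
remote-openbsd-server being a placeholder for the real host:
$ ssh -N -L 2525:127.0.0.1:25 remote-openbsd-server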
For a relay using authentication, the login and password must be defined in
the file /etc/mail/secrets like this:
myrelay login:Pa$$W0rd
smtpd.conf(5) explains creation
of /etc/mail/secrets like this:
touch /etc/mail/secrets
chmod 640 /etc/mail/secrets
chown root:_smtpd /etc/mail/secrets
Now, restart smtpd. Then, when you need to send mails, just use the “mail”
command or localhost as smtp server. Depending on your From address, a
different relay will be used.
Deliveries can be checked in the /var/log/maillog log file.
See mails in queue
doas smtpctl show queue
Try to deliver now
doas smtpctl schedule all
I wrote a script generating an RSS file from the content of the page
https://www.openbsd.org/faq/current.html
This allows being notified when a big change is made in -current.
The file is available at: https://perso.pw/openbsd-current.xml
Today I will cover a specific topic on OpenBSD networking. If you are using a
laptop, you may switch from ethernet to wireless networking from time to
time. There is a simple way to keep the network up instead of having to
disconnect / reconnect every time.
It's possible to aggregate your wireless and ethernet devices into one trunk
pseudo device in failover mode, which gives ethernet the priority when
connected.
To achieve this, it's quite simple. If you have the devices em0 and iwm0,
create the following files.
/etc/hostname.em0
up
/etc/hostname.iwm0
join "office_network" wpakey "mypassword"
join "my_home_network" wpakey "9charshere"
join "roaming phone" wpakey "something"
join "Public Wifi"
up
/etc/hostname.trunk0
trunkproto failover trunkport em0 trunkport iwm0
dhcp
As you can see in the wireless device configuration, we can specify multiple
networks to join; this is a new feature available from the 6.4 release.
You can enable the new configuration by running sh /etc/netstart
as root.
This setup is explained in the trunk(4)
man page and in the
OpenBSD FAQ as well.
Still about the bitreich conference 2018: I presented drist,
a utility for server deployment (like salt/puppet/ansible…) that I
wrote.
drist makes deployments easy to understand and easy to
extend. Basically, it has 3 steps:
- copying a local file tree on the remote server (for deploying files)
- delete files on the remote server if they are present in a local tree
- execute a script on the remote server
Each step is run only if the corresponding file/folder exists, and for each
step, it's possible to have a general / per-host setup.
How to fetch drist
git clone git://bitreich.org/drist
It was my very first talk in english, please be indulgent.
Plain text slides (tgz)
MP3 of the talk
MP3 of questions/answers
Bitreich community is reachable on gopher at gopher://bitreich.org
As the author of the reed-alert monitoring tool, I spoke
about my software at the bitreich conference 2018.
As a quick intro, reed-alert is a program that notifies you when
something is wrong on your server; it's fully customizable and really
easy to use.
How to fetch reed-alert
git clone git://bitreich.org/reed-alert
It was my very first talk in english, please be indulgent.
Plain text slides (tgz)
MP3 of the talk
MP3 of questions/answers
Bitreich community is reachable on gopher at gopher://bitreich.org
If you need to generate a QR code picture using a command line tool, I would
recommend libqrencode.
qrencode -o file.png 'some text'
It’s also possible to display the QR code inside the terminal with
the following command.
qrencode -t ANSI256 'some text'
Official qrencode website
Tips for using Tmux more efficiently
Enter in copy mode
By default Tmux uses the emacs key bindings. To make a selection you
need to enter copy-mode by pressing Ctrl+b and then [, with Ctrl+b
being the tmux prefix key; if you changed it, do the replacement
while reading.
If you need to quit the copy-mode, type Ctrl+C.
Make a selection
While in copy-mode, move to the start or end position of your
selection, then press Ctrl+Space to start selecting. Now, move
your cursor to select the text and press Ctrl+w to validate.
Paste a selection
When you want to paste your selection, press Ctrl+b ] (you should not
be in copy-mode for this!).
Make a rectangle selection
If you want to make a rectangular selection, press Ctrl+Space to
start and immediately press R (capital R), then move your cursor
and validate with Ctrl+w.
Output the buffer to X buffer
Make a selection to put the content into the tmux buffer, then type
tmux save-buffer - | xclip
You may want to look at the xclip man page (it's a package).
Output the buffer to a file
tmux save-buffer file
Load a file into buffer
It’s possible to load the content of a file inside the buffer for
pasting it somewhere.
tmux load-buffer file
You can also load into the buffer the output of a command, using a
pipe and - as a file like in this example:
echo 'something very interesting' | tmux load-buffer -
Display the battery percentage in the status bar
If you want to display your battery percentage and update it every
40 seconds, you can add the two following lines to ~/.tmux.conf:
set -g status-interval 40
set -g status-right "#[fg=colour155]#(apm -l)%% | #[fg=colour45]%d %b %R"
This example works on OpenBSD using the apm command. You can reuse
this example to display other information.
I never wrote a man page. I already had to look at the source of a
man page, but I could barely understand what was happening there. As I like
having fun and discovering new things (people have been calling me a hipster
lately ;-) ), I modified cl-yag (the website generator used for this
website) to produce its pages from mdoc files. The output was not very
pretty, as it had too many html items (classes, attributes, tags etc…); the
result wasn't that bad, but it looked like concatenated man pages.
I actually enjoyed playing with the mdoc format (the man page format on
OpenBSD; I don't know if it's used elsewhere). While it's pretty
verbose, it allows separating the formatting from the paragraphs. As
I have been playing with the ed editor these days, it is easier to write an
article in small pieces of lines rather than as a big paragraph including
the formatting.
Finally I succeeded in writing a command line which produces a usable
html output to use as a converter in cl-yag. Now, I'll be able to
write my articles in the mdoc format if I want :D (which is fun). The
conversion command is really ugly, but it actually works, as you can see
if you are reading this.
cat data/%IN | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT
The trick here was to use markdown as an intermediate format between mdoc
and html. As markdown is very weak compared to html (in terms of
possibilities), it will only use simple tags to format the html
output. The sed command is needed to remove, from the mandoc output, the
man page title at the top and the operating system line at the
bottom.
By having played with this, writing a man page is less obscure to me
and I have a new unusual format to use for writing my articles. Maybe
unusual for this use case, but still very powerful!
Hello
Today I will write about my current process of trying to get rid of
emacs. I use it extensively with org-mode for taking notes and making
them into an agenda/todo-list; this has helped me a lot to remember tasks
to do and what people tell me. I also use it for editing of
course, any kind of text or source code. It is usually the editor I
use for writing the blog articles that you can read here; this one is
written using ed. I also read my emails in emacs with mu4e (whose
latest version doesn't work anymore on powerpc, due to a C++14 feature
being used and no compiler available on powerpc to compile it…).
While I like Emacs, I never liked to use one big tool for everything.
My current quest is to look for a portable and efficient way to
replace the different Emacs parts. I will not stop using Emacs if the
replacements are not good enough to do the job.
So, I identified my Emacs uses:
- todo-list / agenda / taking notes
- writing code (perl, C, php, Common LISP)
- IRC
- mails
- writing texts
- playing chess by mail
- jabber client
For each topic, I will try to identify alternatives and challenge them
against Emacs.
Todo-list / Agenda / Notes taking
This is the most important part of my emacs use, and it is the one I
would really like to get out of Emacs. What I need is: quickly write
a task, add a deadline to it, add explanations or a
description to it, be able to add sub-tasks for a task, and be able to
display everything correctly (like ordered by deadline, with the days /
hours left before the deadline).
I am trying to convert my current todo-list to taskwarrior; the
learning curve is not easy, but after spending one hour playing with it
while reading the man page, I understood enough to replace
org-mode with it. I do not know if it will be as good as org-mode, but
only time will tell.
By the way, I found vit, a ncurses front-end for taskwarrior.
Writing code
Actually, Emacs is a good editor. It supports syntax coloring, can
evaluate regions of code (depending on the language), the editing is
nice etc… I discovered jed, an emacs-like editor written
in C + libslang; it's stable and light while providing more features
than the mg editor (available in the OpenBSD base installation).
While I am currently playing with ed for various reasons (I will
certainly write about it), I am not sure I could use it for
writing software from scratch.
IRC
There are lots of different IRC clients around, I just need to pick
one.
Mails
I really enjoy using mu4e: I can find my mails easily with it, and the
query system is very powerful and interesting. I don't know what I
could use to replace it. I used alpine some time ago, and
I tried mutt before mu4e and did not like it. I have heard about
some tools to manage a maildir folder using unix commands, maybe I
should try those. I have not done any research on this topic at the
moment.
Writing text
For writing plain text like my articles, or for using $EDITOR for
different tasks, I think that ed will do the job perfectly :-) There
is ONE feature I really like in Emacs, but I think it's really easy to
recreate with a script: the function bound to M-q which wraps text to
the correct column width!
Update: meanwhile, I wrote a little perl script using the Text::Wrap
module available in core Perl. It wraps to 70 columns. It could be
extended to fill blanks or add a character for the first line of a
paragraph.
#!/usr/bin/env perl
use strict; use warnings;
use Text::Wrap qw(wrap $columns);

# wrap the file given as first argument to 70 columns
open(my $in, '<', $ARGV[0]) or die "can't open $ARGV[0]: $!";
$columns = 70;
my @file = <$in>;
close($in);
print wrap("", "", @file);
This script does not modify the file itself though.
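Assuming the script is saved as wrap.pl (the name is hypothetical), usage
would look like:
$ perl wrap.pl article.txt > article-wrapped.txt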
Some people pointed out to me that Perl was too much for this task. I have
been told about Groff or Par to format my files.
Finally, I found a very BARE way to handle this. As I write my
text with ed, I added a new alias named “ruled” which spawns ed with a
prompt of 70 “#” characters, so I have a ruler each time ed displays its
prompt!!! :D
It looks like this for the last paragraph:
###################################################################### c
been told about Groff or Par to format my files.
Finally, I found a very **BARE** way to handle this. As I write my
text with ed, I added an new alias named "ruled" with spawn ed with a
prompt of 70 characters #, so I have a rule each time ed displays its
prompt!!! :D
.
###################################################################### w
Obviously, this way of proceeding only works when writing the content
in the first place. If I need to edit a paragraph, I will need a tool to
reformat my document correctly.
Jabber client
Using jabber inside Emacs is not a very good experience. I switched
to profanity (featured some time ago on this blog).
Playing Chess
Well, I stopped playing chess by mail; I have been waiting for my
correspondent to play his turn for two years now. We were exchanging
the notation of the whole game in each mail, adding our move each
time. I was doing the rendering in Emacs, and I do not remember
exactly why, but I had problems with this (replaying the moves).
Old article
Hello, it turned out that this article is obsolete. The security used in it
is not safe at all, so the goal of this backup system isn't achievable; thus
it should not be used, and I need another backup system.
One of the most important features of dump for me was keeping track of the
inode numbers. A solution is to save the list of inode numbers and their
paths in a file before doing a backup. This can be achieved with the
following command.
$ doas ncheck -f "\I \P\n" /var
If you need a backup tool, I would recommend the following:
Duplicity
It supports remote backends like ftp/sftp, which is quite convenient as you
don't need any particular setup on the other side. It supports compression
and incremental backups. I think it has some GUI tools available.
Restic
It supports remote backends like cloud storage providers or sftp, and it
doesn't require any special tool on the remote side. It supports
deduplication of the files and is able to manage multiple hosts in the same
repository; this means that if you back up multiple computers, the
deduplication will work across them. This is the only backup software I know
allowing this (I do not count backuppc, which I find really unusable).
Borg
It supports a remote ssh backend, but only if borg is installed on the other
side. It supports compression and deduplication, but it is not possible to
save multiple hosts inside the same repository without doing a lot of hacks
(which I won't recommend).
I write this as a note for myself, and if it can help some other people,
that's fine.
Changing the program used by xdg-open to open some kinds of
files is not that hard.
First, check the type of the file:
$ xdg-mime query filetype file.pdf
application/pdf
Then, choose the right tool for handling this type:
$ xdg-mime default mupdf.desktop application/pdf
Honestly, having firefox open PDF files with GIMP IS NOT FUN.
New port of the week, and it’s about tmate.
If you ever wanted to share a terminal with someone without opening a
remote access to your computer, tmate is the right tool for this.
Once started, tmate will create a new tmux instance connected through
the tmate public server. By typing tmate show-messages you will get
urls for read-only or read-write access to share with someone, over ssh or
a web browser. Don't forget to type clear to hide the urls after reading
show-messages, otherwise viewers will have access to the write
url (and that's not something you want).
If you don’t like the need of a third party, you can setup your own
server, but we won’t cover this in this article.
When you want to end the share, you just need to exit the tmux opened
by tmate.
If you want to install it on OpenBSD, just type pkg_add tmate and
you are done. I think it's available on most unix systems.
There is not much more to say about it: it's great, simple, and works
out of the box with no configuration needed.
Here is a little script to somewhat automate your crontab
deployment when you don't want to use a configuration tool like
ansible/salt/puppet etc… It lets you package a file in your project
containing the crontab content you need, and it will add/update your
crontab with that file.
The script works this way:
$ ./install_cron crontab_solene
with the crontab_solene file being a valid crontab, which
could look like this:
## TAG ##
MAILTO=""
*/5 * * * * ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##
Then it will include the file into my current user's crontab; the
TAG markers in the file are here to be able to remove the block and replace
it later with a new version. The script could easily be modified to
support the tag name as a parameter, if you have multiple deployments
using the same user on the same machine.
Example:
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
$ ./install_cron crontab_solene
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * * ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##
If I add the line 0 20 * * * ~/bin/faubackup.sh to crontab_solene,
I can now reinstall the crontab file.
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * * ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
## END_TAG ##
$ ./install_cron crontab_solene
$ crontab -l
0 * * * * pgrep iridium | xargs renice -n +20
## TAG ##
MAILTO=""
*/5 * * * * ( cd ~/dev/reed-alert && ecl --load check.lisp )
*/10 * * * * /usr/local/bin/r2e run
1 * * * * vacuumdb -azf -U postgres
0 20 * * * ~/bin/faubackup.sh
## END_TAG ##
Here is the script:
#!/bin/sh
if [ -z "$1" ]; then
echo "Usage: $0 user_crontab_file"
exit 1
fi
# check that the file contains both the opening and the closing tag
VALIDATION=0
grep "^## TAG ##$" "$1" >/dev/null
VALIDATION=$?
grep "^## END_TAG ##$" "$1" >/dev/null
VALIDATION=$(( VALIDATION + $? ))
if [ "$VALIDATION" -ne 0 ]
then
echo "file ./${1} needs \"## TAG ##\" and \"## END_TAG ##\" to be used"
exit 2
fi
# dump the current crontab, drop the previously installed block between
# the tags, append the new file and load the result as the new crontab
crontab -l | \
awk '{ if($0=="## TAG ##") { hide=1 }; if(hide==0) { print } ; if($0=="## END_TAG ##") { hide=0 }; }' | \
cat - "${1}" | \
crontab -
This article will explain quickly how to bind a folder to access it
from another path. It can be useful to give access to a specific
folder from a chroot without moving or duplicating the data into the
chroot.
Real world example: “I want to be able to access my 100GB folder
/home/my_data/ from my httpd web server chrooted in /var/www/”.
The trick on OpenBSD is to use NFS on localhost. It’s pretty simple.
# rcctl enable portmap nfsd mountd
# echo "/home/my_data -network=127.0.0.1 -mask=255.255.255.255" > /etc/exports
# rcctl start portmap nfsd mountd
The order in which the daemons are started is really important. You can
check that the folder is available through NFS with the following command:
$ showmount -e
Exports list on localhost:
/home/my_data 127.0.0.1
If you don’t have any line after “Exports list on localhost:”, you
should kill mountd with pkill -9 mountd
and start mountd again. I
experienced it twice when starting all the daemons from the same
commands but I’m not able to reproduce it. By the way, mountd only
supports reload.
If you modify /etc/exports, you only need to reload mountd using
rcctl reload mountd
.
Once you have checked that everything is alright, you can mount the
exported folder onto another folder with the command:
# mount localhost:/home/my_data /var/www/htdocs/my_data
You can add the -ro parameter to the export line in /etc/exports
if you want it to be read-only where you mount it.
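With that option, the export line from above would become:
/home/my_data -ro -network=127.0.0.1 -mask=255.255.255.255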
Note: on FreeBSD/DragonflyBSD, you can use mount_nullfs /from /to,
there is no need to set up a local NFS server. And on Linux you can use
mount --bind /from /to, among other ways that I won't cover here.
I discovered today an OpenSSH feature which doesn't seem to be widely
known. The feature is called multiplexing and consists of reusing
an opened ssh connection to a server when you want to open another
one. This leads to faster connection establishment and fewer processes
running.
To reuse an opened connection, we need to use the ControlMaster
option, which requires ControlPath to be set. We will also set
ControlPersist for convenience.
- ControlMaster defines whether we create, use, or ignore a
multiplexed connection
- ControlPath defines where to store the socket used to reuse an opened
connection; this should be a path only accessible by your user.
- ControlPersist defines how long to wait before closing an
ssh connection multiplexer after all connections using it are
closed. By default it's “no”, and once you drop all connections the
multiplexer stops.
I chose to use the following parameters in my ~/.ssh/config file:
Host *
ControlMaster auto
ControlPath ~/.ssh/sessions/%h%p%r.sock
ControlPersist 60
This requires the ~/.ssh/sessions/ folder to be restricted to my user
only. You can create it with the following command:
install -d -m 700 ~/.ssh/sessions
(you can also do mkdir ~/.ssh/sessions && chmod 700 ~/.ssh/sessions,
but this requires two commands)
The ControlPath option will create sockets named
“${hostname}${port}${user}.sock”, so each one will be unique per remote
server.
Finally, I set ControlPersist to 60 seconds, so if I
log out from a remote server, I still have 60 seconds to reconnect to
it instantly.
Don’t forget that if for some reason the ssh channel handling the
multiplexing dies, all the ssh connections using it will die with it.
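You can ask ssh about the state of a multiplexer for a host, and stop it,
using the -O flag:
$ ssh -O check remoteserver
$ ssh -O exit remoteserver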
Benefits with ProxyJump
Another very useful ssh feature is ProxyJump: it gives access to
ssh hosts which are not directly reachable from your
current place, like servers with no public ssh daemon. For
my job, I have a lot of servers not facing the internet, and I can
still connect to them using one of my public-facing servers, which
relays my ssh connection to the destination. Using the
ControlMaster feature, the ssh relay server doesn't have to handle
lots of connections anymore, but only one.
In my ~/.ssh/config file:
Host *.private.lan
ProxyJump public-server.com
Those two lines allow me to connect to every server under the .private.lan
domain (which is known by my local DNS server) by typing
ssh some-machine.private.lan. This establishes a connection to
public-server.com, which then connects to the destination server.
In my article about mu4e I said that I would write about sending mails
with it. This will be the topic covered in this article.
There are a lot of ways to send mails, with a lot of different use
cases. I will only cover a few of them; the documentation of mu4e and
emacs are both very good, so I will only give hints about some
interesting setups.
I would like to thank Raphael, who made me curious about different ways of
sending mails from mu4e and who pointed out some mu4e features I
wasn't aware of.
Send mails through your local server
The easiest way is to send mails through your local mail server (which
should be OpenSMTPD by default if you are running OpenBSD). This only
requires the following line in your ~/.emacs file to work:
(setq message-send-mail-function 'sendmail-send-it)
Basically, the mail will only be relayed to the recipient if your local
mail server is well configured, which is not the case for most machines.
This requires a correctly configured reverse DNS (assuming a static
IP address), an SPF record in your DNS and DKIM signing for outgoing
mail. This is the minimum to be accepted by other SMTP
servers. Usually people send mails from their personal computer and
not from the mail server.
We can bypass this problem by configuring our local SMTP server to
relay our mails sent locally to another SMTP server using credentials
for authentication.
This is pretty easy to set up using the following
/etc/mail/smtpd.conf configuration; just replace remoteserver with
your server.
table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets
listen on lo0
accept for local alias <aliases> deliver to mbox
accept for any relay via secure+auth://label@remoteserver:465 auth <secrets>
You will have to create the file /etc/mail/secrets and add your
credentials for authentication on the SMTP server.
From smtpd.conf(5) man page, as root:
# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "label username:password" > /etc/mail/secrets
Then, all mail sent from your computer will be relayed through your
mail server. With 'sendmail-send-it, emacs will deliver the mail to
your local server, which will relay it to the outgoing SMTP server.
SMTP through SSH
One setup I like and use is relaying the mails directly to the
outgoing SMTP server; this requires no authentication other than SSH
access to the remote server.
It requires the following emacs configuration in ~/.emacs:
(setq
message-send-mail-function 'smtpmail-send-it
smtpmail-smtp-server "localhost"
smtpmail-smtp-service 2525)
The configuration tells emacs to connect to the SMTP server on
localhost port 2525 to send the mails. Of course, no mail daemon runs
on this port on the local machine; the following ssh
command is needed to be able to send mails.
$ ssh -N -L 127.0.0.1:2525:127.0.0.1:25 remoteserver
This binds port 25 of the remote server's 127.0.0.1 to port 2525 on
your own 127.0.0.1.
Your mail server should accept deliveries from local users, of course.
SMTP authentication from emacs
It's also possible to send mails using regular smtp
authentication directly from emacs. It is tedious to set up: it requires
putting credentials into a file named ~/.authinfo, which it's possible
to encrypt using GPG, but that then requires a wrapper to load it. It
also requires setting up the SMTP authentication correctly. There are
plenty of examples for this on the Internet; I don't want to cover it.
Queuing mails for sending later
Mu4e supports a very nice feature: mail queueing from the smtpmail
emacs client. Enabling it requires two easy steps:
In ~/.emacs:
(setq
smtpmail-queue-mail t
smtpmail-queue-dir "~/Mail/queue/cur")
In your shell:
$ mu mkdir ~/Mail/queue
$ touch ~/Mail/queue/.noindex
Then, mu4e will be aware of the queueing. In the mu4e home screen,
you will be able to switch between queuing and direct sending by pressing
m, and to flush the queue by pressing f.
Note: there is a bug (not sure it’s really a bug). When sending a mail
into the queue, if your mail contains special characters, you will be
asked to send it raw or to add a header containing the encoding.
Today I found a piece of software named
Lazyread, which can read and
display a file with autoscroll at a chosen speed. I had to read its source
code to make it work: the documentation isn't very helpful, it doesn't
read ebooks (as in epub or mobi formats) and doesn't support
stdin… This software requires some C code plus a shell wrapper to
work; that's complicated for merely scrolling.
So, after thinking a few minutes, I realized the autoscroll can be
reproduced easily with a very simple awk command. Of course, it will not
have interactive keys like lazyread to increase/decrease the speed or some
other options, but the most important part is there: autoscrolling.
If you want to read a file at a rate of 1 line per 700 milliseconds,
just type the following command:
$ awk '{system("sleep 0.7");print}' file
If you want to read an html file (a documentation file on disk or
from the web), you can use lynx or w3m to convert the html file on the
fly to readable text and pipe it to awk's stdin.
$ w3m -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ lynx -dump doc/slsh/slshfun-2.html | awk '{system("sleep 0.7");print}'
$ w3m -dump https://dataswamp.org/~solene/ | awk '{system("sleep 0.7");print}'
Maybe you want to read a man page?
$ man awk | awk '{system("sleep 0.7");print}'
If you want to pause the reading, you can use the true unix way:
Ctrl+Z to send a signal which stops the command and leaves it paused
in the background. You can resume the reading by typing fg.
One could easily write a little script parsing parameters to set
the speed, or to handle files or urls with the correct command.
Notes: if for some reason you try to use lazyread, fix the shebang
in the file lesspipe.sh, and you will need to call the lazyread binary with
the environment variable LESSOPEN="|./lesspipe.sh %s" (adapt the path of
the script if needed). Without this variable, you will get the very
helpful error “file not found”.
As the new port of the week, we will discover Sent. While one could
think it is mail related, it is not. Sent is a nice piece of software for
making presentations from a simple text file. It has been developed by
Suckless, a hacker community enjoying writing good software while
keeping the source code small and sane; they also made software like st,
dwm, slock, surf…
Sent is about simplicity. I will reuse a part of the example
file which is also the documentation of the tool.
usage:
$ sent FILE1 [FILE2 …]
▸ one slide per paragraph
▸ lines starting with # are ignored
▸ image slide: paragraph containing @FILENAME
▸ empty slide: just use a \ as a paragraph
@nyan.png
this text will not be displayed, since the @ at the start of the first line
makes this paragraph an image slide.
The previous text, saved into a file and used with sent, will open
a fullscreen window containing three “slides”. Each slide will resize
the text to maximize the display usage; this means the font size may
change from slide to slide.
It is really easy to use. To display the next slide, you have the choice
between pressing space, right arrow, return, or clicking any mouse
button. Pressing the left arrow will go back.
If you want to install it on OpenBSD: pkg_add sent; the package
comes from the port misc/sent.
Be careful: sent does not produce any file, so you will need it at
presentation time!
Suckless sent website
If you have enough memory on your system and you can afford to
use a few hundred megabytes to store temporary files, you may want to
mount an mfs filesystem on /tmp. That will spare your SSD drive,
and if you use an old hard drive or a memory stick, it will reduce
your disk load and improve performance. You may also want to mount a
ramdisk on other mount points like ~/.cache/ or a database folder for some
reason, but I will just explain how to achieve this for /tmp, which is a
very common use case.
First, you may have heard about tmpfs, but it was disabled in
OpenBSD years ago because it wasn't stable enough and nobody fixed
it. Instead, OpenBSD has a special filesystem named mfs, which is an FFS
filesystem on a reserved memory space. When you mount an mfs
filesystem, the size of the partition is reserved and can't be used
for anything else (tmpfs, like its Linux namesake, doesn't reserve the
memory).
Add the following line in /etc/fstab (following fstab(5)):
swap /tmp mfs rw,nodev,nosuid,-s=300m 0 0
The permissions of the mountpoint /tmp should be fixed before
mounting it, meaning that the /tmp folder on the / partition
should be changed to mode 1777:
# umount /tmp
# chmod 1777 /tmp
# mount /tmp
This is required because mount_mfs inherits permissions from the
mountpoint.
If for some reason you need to access a Samba share outside of the
network, it is possible to access it through ssh and mount the share
on your local computer.
Using the ssh command as root is required because you will bind the local
port 139, which is reserved for root:
# ssh -L 139:127.0.0.1:139 user@remote-server -N
Then you can mount the share as usual, but using localhost instead of
remote-server.
Example of a mount element for usmb
<mount id="public" credentials="me">
<server>127.0.0.1</server>
<!--server>192.168.12.4</server-->
<share>public</share>
<mountpoint>/mnt/share</mountpoint>
<options>allow_other,uid=1000</options>
</mount>
As a reminder, <!--tag>foobar</tag--> is an XML comment.
If you ever receive a mail with an attachment named “winmail.dat”, you
may be disappointed. It is a special format used by Microsoft
Exchange: it contains the files attached to the mail, and some
software is needed to extract them.
Fortunately, there is a small and efficient utility named “tnef” to
extract the files.
Install it: pkg_add tnef
List files: tnef -t winmail.dat
Extract files: tnef winmail.dat
That’s all !
In this post I will do a short presentation of the port
productivity/ledger, a very powerful command line accounting
program using plain text as its back-end. Writing about it is not an easy
task, so I will use a real-life workflow of mine as material, even if
my use case is special.
As I said before, Ledger is very powerful. It can help you manage
your bank accounts, bills, rents, shares and other things. It uses a
double-entry system, which means each time you add an operation
(withdrawal, paycheck, …), this entry will also have to contain the
state of the account after the operation. This will be checked
by ledger by recalculating every operation made since it was
initialized with a custom amount as a start. Ledger can also track the
categories you spend money in, or statistics about your payment
methods (check, credit card, bank transfer, money…).
As I am not a native English speaker and I don't work in banking
or a related field, I am not very familiar with accounting vocabulary in
English; that makes it very hard for me to understand all the ledger
keywords. But I found a special use case, accounting things rather than
money, which is really practical.
My special use case is that I work from home for a company located
far away. From time to time, I take the train to the
office; the full trip is
[home] → [underground A] → [train] → [underground B] → [office]
[office] → [underground B] → [train] → [underground A] → [home]
It means I need to buy tickets for both the underground A and underground
B systems, and I want to track the tickets I use for going to work. I buy
the tickets 10 at a time, but sometimes I use one for personal travel, or
I give a ticket to someone. So I need to keep track of my
tickets to know when I can send a bill to my employer to get refunded.
Practical example: I buy 10 tickets for A and use 2 tickets on
day 1. On day 2, I give 1 ticket to someone and use 2 tickets during the
day for personal travel. It means I still have 5 tickets in my bag, but
from my employer's point of view, I should still have 8 tickets. This
is what I am tracking with ledger.
2018/02/01 * tickets stock Initialization + go to work
Tickets:inv 10 City_A
Tickets:inv 10 City_B
Tickets:inv -2 City_A
Tickets:inv -2 City_B
Tickets
2018/02/08 * Work
Tickets:inv -2 City_A
Tickets:inv -2 City_B
Tickets
2018/02/15 * Work + Datacenter access through underground
Tickets:inv -4 City_B
Tickets:inv -2 City_A
Tickets
At this point, running ledger -f tickets.dat balance Tickets shows my
remaining tickets:
4 City_A
2 City_B Tickets:inv
I will add another entry, which requires me to buy tickets:
2018/02/22 * Work + Datacenter access through underground
    Tickets:inv  -4 City_B
    Tickets:inv  -2 City_A
    Tickets:inv  10 City_B
    Tickets
Now, running ledger -f tickets.dat balance Tickets shows my remaining tickets:
    2 City_A
    8 City_B  Tickets:inv
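To see the history of operations rather than the totals, ledger also has a register command; with the file above this would be:
$ ledger -f tickets.dat register Tickets
It prints each posting with a running total, which is handy to see when tickets were bought or used.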
I hope the example was clear enough and interesting. There is a big tutorial document available on the ledger homepage; I recommend reading it before using ledger, it contains real world examples with real accounting. Homepage link
Dnstop is an interactive console application to watch in real time the DNS queries going through a network interface. It currently only supports UDP DNS requests; the man page says that TCP isn't supported. It has a lot of parameters and keybindings for interactive use.
To install it on OpenBSD: doas pkg_add dnstop
We will start dnstop on the wifi interface using a depth of 4 for the domain names: as root, type dnstop -l 4 iwm0 and then press '3' to display up to 3 sublevels. The -l 4 parameter means we want to collect domains with a depth of up to 4: if a request for the domain my.very.little.fqdn.com. happens, it will be truncated as very.little.fqdn.com. If you press '2' in the interactive display, that name will be counted in the fqdn.com line.
Example of output:
Queries: 0 new, 6 total Tue Apr 17 07:17:25 2018
Query Name Count % cum%
--------------- --------- ------ ------
perso.pw 3 50.0 50.0
foo.bar 1 16.7 66.7
hello.mydns.com 1 16.7 83.3
mydns.com.lan 1 16.7 100.0
If you want to use it, read the man page first; it has a lot of parameters and can filter using specific expressions.
If you ever had to read an ebook in the epub format, you may have found yourself stumbling on the Calibre software. Personally, I don't enjoy reading a book in Calibre at all. Choice is important, yet it seems that Calibre is the only choice for this task.
But, as the epub format is very simple, it's possible to read it easily with any web browser, even w3m or lynx.
With a few commands, you can find the xhtml files that can be opened with a web browser: an epub file is a zip containing mostly xhtml, css and image files. The xhtml files have links to the CSS and images contained in the other unzipped folders.
In the following commands, I prefer to copy the file into a new directory because unzipping it will create folders in your current working directory.
$ mkdir /tmp/myebook/
$ cd /tmp/myebook
$ cp ~/book.epub .
$ unzip book.epub
$ cd OPS/xhtml
$ ls *xhtml
I tried with different epub files; in most cases you should find a lot of files named chapters-XX.xhtml with XX being 01, 02, 03 and so forth. Just open the files in the correct order with a web browser, aka an “html viewer”.
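If you prefer a single text file instead of opening each chapter, a loop like this should work, assuming the chapters-XX.xhtml naming seen above (w3m -dump renders a page to plain text on stdout):
$ for f in chapters-*.xhtml; do w3m -dump "$f"; done > book.txt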
Today we will discover the software named tig whose name stands
for Text-mode Interface for Git.
To install it on OpenBSD: pkg_add tig
Tig is a light and easy to use terminal application to browse a git repository in an interactive manner. To use it, just 'cd' into a git repository on your filesystem and type tig. You will get the list of all the commits, with their author and date. By pressing the "Enter" key on a commit, you will get the diff. Tig also displays branching and merging in a graphical way.
Tig has some parameters; one I like a lot is blame, which is used like this: tig blame afile. Tig will show the file content and display, for each line, the date of the last commit, its author and the short identifier of the commit. With this function, it gets really easy to find who modified a line or when it was modified.
Tig has a lot of other possibilities; you can discover them in its man page.
Frequently asked questions (with answers) on #openbsd IRC channel
Please read the official OpenBSD FAQ
I am writing this to answer questions asked too many times.
If some answers get good enough, maybe we could try to merge them into the OpenBSD FAQ if the topic isn't covered.
If the topic is covered, then a link to the official FAQ should be used.
If you want to participate, you can fetch the page using gopher protocol and
send me a diff:
$ printf '/~solene/article-openbsd-faq.txt\r\n' | nc dataswamp.org 70 > faq.md
OpenBSD features / not features
Here is a list for newcomers telling what OpenBSD is and what it is not.
See OpenBSD Innovations
Packet Filter : super awesome firewall
Sane defaults : you install, it works, no tweak
Stability : upgrades go smoothly and are easy
pledge and unveil : security features to reduce privileges of software, lots of ports are patched
W^X security
Microphone muted by default, unlockable by root only
Video devices owned by root by default, not usable by users until permission change
Has only FFS file system which is slow and has no “feature”
No wine for windows compatibility
No linux compatibility
No bluetooth support
No usb3 full speed performance
No VM guest additions
Only in-house VMM for being a VM host, only supports OpenBSD and some Linux
Poor fuse support (it crashes quite often)
No nvidia support (nvidia’s fault)
No container / docker / jails
Does OpenBSD have a Code Of Conduct?
No and there is no known plan of having one.
This is a topic upsetting OpenBSD people, just don’t ask about it and send
patches.
What is the OpenBSD release process?
OpenBSD FAQ official information
The last two releases are called “-release” and are officially supported
(patches for security issues are provided).
-stable version is the latest release with the base system patches applied,
the -stable ports tree has some patches backported from -current, mainly to fix
security issues. Official packages for -stable are built and are picked up
automatically by pkg_add(1).
What is -current?
It's the development version, with the latest packages and latest code. You shouldn't use it just to get the latest package versions.
How do I install -current ?
OpenBSD FAQ about current
- download the latest snapshot install .iso or .fs file from your
favorite mirror under /snapshots/ directory
- boot from it
How do I upgrade to -current ?
OpenBSD FAQ about current
You can use the script sysupgrade -s; note that the flag is only useful if you are not running -current right now, but harmless otherwise.
This article will present my software reed-alert; it checks user-defined states and sends user-defined notifications. I made it really easy to use but still configurable and extensible.
Description
reed-alert is not a monitoring tool producing graphs or storing values. It does a job sysadmins are looking for because there is no alternative product (the alternatives come from very large infrastructures, like Zabbix, so they are not comparable).
From its configuration file, reed-alert will check various states and then, if one fails, will trigger a command to send a notification (totally user-defined).
Fetch it
This is open-source and free software released under the MIT license; you can install it with the following commands:
# git clone git://bitreich.org/reed-alert
# cd reed-alert
# make
# doas make install
This will install a script reed-alert in /usr/local/bin/ with the default Makefile variables. It will try to use ecl, and then sbcl if ecl is not installed.
A README file is available as documentation to describe how to use
it, but we will see here how to get started quickly.
You will find a few files there. reed-alert is a Common LISP software, and it has been chosen, for (I hope) good reasons, that the configuration file is plain Common LISP.
There is a configuration file looking like a real world example, named config.lisp.sample, and another configuration file I use for testing, named example.lisp, containing a lot of cases.
Let’s start
In order to use reed-alert we only need to create a new
configuration file and then add a cron job.
Configuration
We are going to see how to configure reed-alert. You can find more
explanations or details in the README file.
Alerts
We have to configure two kinds of parameters. First, we need to set up a way to receive alerts; the easiest way to do so is by sending a mail with the "mail" command. Alerts are declared with the function alert, taking as parameters the alert name and the command to execute. Some variables in the command are replaced with values from the probe; you can find the list of these variables in the README file, they look like %date% or %params%.
In Common LISP, a function is called by opening a parenthesis, writing the function name, and then giving its parameters until the parenthesis is closed.
Example:
(alert mail "echo 'problem on %hostname%' | mail me@example.com")
One should take care about nesting quotes here.
reed-alert will fork a shell to start the command, so pipes and redirections work. You can be creative when writing alerts that, for example:
- use an SMS service
- run a script to post on a forum
- publish a file on a server
- send text to IRC with the ii client
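As an example of the last item, a minimal sketch of an IRC alert through ii could be the following; the FIFO path and channel are hypothetical and depend on where ii runs:
(alert irc "echo '%hostname%: %desc% failed' > ~/irc/myserver/#alerts/in")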
Checks
Now that we have some alerts, we will configure some checks to make reed-alert useful. It uses probes, which are pre-defined checks with parameters; a probe could be "has this file not been updated for N minutes ?" or "is the disk space usage of partition X more than Y ?".
I chose to name the check function "=>"; it isn't a real word, but it evokes an arrow, something going forward. The two probes described above, using our mail notifier, would look like:
(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage :limit 90)
It’s also possible to use shell commands and check the return code
using the command probe, allowing the user to define useful
checks.
(=> mail command :command "echo '/is-this-gopher-server-up?' | nc -w 3 dataswamp.org 70"
:desc "dataswamp.org gopher server")
We use echo + netcat to check if a connection to a socket works. The
:desc keyword will give a nicer name in the output instead of just
“COMMAND”.
Putting it together
We wrote the minimum required to configure reed-alert; your my-config.lisp file should now look like this:
(alert mail "echo 'problem on %hostname%' | mail me@example.com")
(=> mail file-updated :path "/program/file.generated" :limit "10")
(=> mail disk-usage :limit 90)
Now, you can start it every 5 minutes from a crontab with this:
*/5 * * * * ( reed-alert /path/to/my-config.lisp )
The time between each run is up to you, depending on what you monitor.
Important
By default, when a check returns a failure, reed-alert will only trigger the associated notifier once it reaches the 3rd failure. It will then notify again when the service is back (the variable %state% is replaced by start or end so you know whether the problem starts or stops).
This prevents reed-alert from sending a notification each time it checks; most users have absolutely no need for that.
The number of failures before triggering can be modified by using the
keyword “:try” as in the following example:
(=> mail disk-usage :limit 90 :try 1)
In this case, you will get notified on the first failure.
The number of failures for each check is stored in files (one per check) in the "states/" directory of the reed-alert working directory.
Introduction
cl-yag is a static website generator: a piece of software used to publish a website and/or a gopher hole from a list of articles. As the developer of cl-yag, I'm happy to announce that a new version has been released.
New features
The new version, numbered 0.6, brings a lot of new features:
- support for a different markup language per article
- configurable date format
- configurable gopher output format
- ships with the default theme "clyma", minimalist but responsive (the one used on this website)
- easier to use
- full user documentation
The code is available at git://bitreich.org/cl-yag, the program
requires sbcl or ecl to work.
Per article markup language
The feature I'm most proud of is allowing the use of a different markup language per article. While on my blog I chose to use markdown, it's sometimes not adapted for more elaborate articles, like the one about LISP containing code, which was written in org-mode and then converted to markdown manually to fit cl-yag. Now, the user can declare a named "converter", which is a command line with pattern replacement, to produce the html file. We can imagine a lot of things with this, even producing a gallery with a find + awk command. Now, I can use markdown by default and specify when I want to use org-mode or something else.
This is the way to declare a converter, taking org-mode as an example, which is not very simple because emacs is not script friendly:
(converter :name :org-mode :extension ".org"
           :command (concatenate 'string
                      "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
                      "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
                      "(princ (buffer-string)))' --kill | tee %OUT"))
And here is an easy way to produce a gallery with awk from a .txt file containing a list of image paths:
(converter :name :gallery :extension ".txt"
           :command (concatenate 'string
                      "awk 'BEGIN { print \"<div class=\\\"gallery\\\">\"} "
                      "{ print \"<img src=\\\"static/images/\"$1\"\\\" />\" } "
                      "END { print \"</div>\"}' data/%IN | tee %OUT"))
The concatenate function is only used to improve the presentation, to split the command over multiple lines and make it easier to read. It's possible to write the whole command on a single line.
The patterns %IN and %OUT are replaced by the input file name and the output file name when the command is executed.
For an easier example, the default markdown converter looks like this, calling the multimarkdown command:
(converter :name :markdown :extension ".md"
           :command "multimarkdown -t html -o %OUT data/%IN")
It's really easy (I hope !) to add new converters you need with this
feature.
Date format configurable
One problem I had with cl-yag is that it's plain vanilla Common LISP without libraries, so it's easier to fetch and use, but it lacks elaborate libraries, like one to parse and format dates. Before this release, I was writing plain text like "14 December 2017" in the date field of a blog post. It was easy to use, but not really usable in the RSS feed in the pubDate attribute, and if I wanted to change the display of the date for some reason, I would have to rewrite everything.
Now, the date is simply in the format "YYYYMMDD", like "20171231" for the 31st of December 2017. And in the configuration, there is a :date-format keyword to define the date display. This variable is a string allowing pattern replacement of the following variables:
- %DayNumber: day of the month as a number, from 1 to 31
- %DayName: day of the week, from Monday to Sunday; the names are written in English in the source code and can be translated
- %MonthNumber: month as a number, from 1 to 12
- %MonthName: month name, from January to December; the names are written in English in the source code and can be translated
- %Year: year
Currently, at the time of writing, I use the value "%DayNumber %MonthName %Year".
A :gopher-format keyword exists in the configuration file to configure the date format of the gopher export. It can be different from the html one.
More Gopher configuration
There are cases where the gopher server uses an unusual syntax compared to most servers. I wanted to make it configurable, so the user can easily use cl-yag without having to mess with the code. I provide the default for geomyidae, and another syntax is available in comments. There is also a configurable value to indicate where to store the gopher page menu; it's not always gophermap, it could be index.gph or whatever you need.
Easier to use
A comparison of code will make it easier to understand. There was a little change in the way blog posts are declared:
From
(defparameter *articles*
  (list
   (list :id "third-article" :title "My third article" :tag "me" :date "20171205")
   (list :id "second-article" :title "Another article" :tag "me" :date "20171204")
   (list :id "first-article" :title "My first article" :tag "me" :date "20171201")))
to
(post :id "third-article" :title "My third article" :tag "me" :date "20171205")
(post :id "second-article" :title "Another article" :tag "me" :date "20171204")
(post :id "first-article" :title "My first article" :tag "me" :date "20171201")
Each post is now declared independently, and I plan to add a "page" function to create static pages, but this is going to be for the next version !
Future work
I am very happy to hack on cl-yag, I want to continue improving it but
I should really think about each feature I want to add. I want to keep
it really simple even if it limits the features.
I want to allow the creation of static pages like "About me", "Legal" or "websites I liked" that integrate well with the template. The user may not want all the static page links to go to the same place in the template, or to use the same template. I'm thinking about this.
Also, I think the gopher generation could be improved, but I still
have no idea how.
Other themes may come in the default configuration, allowing the user to choose between themes. But for now, I don't plan to bring in a theme using javascript.
I'm still very much a git noob and I always screw everything up when someone clones one of my repos, contributes and asks me to merge the changes. Now I found an easy way to merge commits from another repository. Here is a simple way to handle this. We will get changes from project1_modified and merge them into our project1 repository. This is not the fastest or the optimal way, but I found it to work reliably.
$ cd /path/to/projects
$ git clone git://remote/project1_modified
$ cd my_project1
$ git checkout master
$ git remote add modified ../project1_modified/
$ git remote update
$ git checkout -b new_code
$ git merge modified/master
$ git checkout master
$ git merge new_code
$ git branch -d new_code
With this process, you download the repository of the person who contributed to the code, then you add it as a remote source in your project and create a new branch where you will do the merge; if something is wrong, you can manage the conflicts easily there. Once you have tried the code and you are fine with it, you merge this branch into master and then, when you are done, you can delete the branch.
If you later need to get new commits from the other repo, it becomes easier:
$ cd /path/to/projects
$ cd project1_modified
$ git pull
$ cd ../my_project1
$ git pull modified
$ git merge modified/master
And you are done !
Hello
Today is a bit special because I'm writing with a mirror keyboard layout. I use only half of my keyboard to type all the characters. To make things harder, the layout is qwerty while I usually use azerty (I'm used to qwerty but it doesn't help).
Here, “caps lock” is a modifier key that must be pressed to obtain
characters of the other side. As a mirror, one will find ‘p’ instead
of ‘q’ or ‘h’ instead of ‘g’ while pressing caps lock.
It's even possible to type backspace to delete characters or to achieve a newline. Not all the punctuation is available through this, only '.<|¦>'",'.
While I type this, I get a bit faster and it becomes easier and easier. It's definitely worth it if you can't use both hands.
This has been made possible by Randall Munroe. To enable it, just download the file here and type:
xkbcomp mirrorlayout.kbd $DISPLAY
Backspace is obtained with tilde and return with space, using the modifier of course.
I’ve spent approximately 15 minutes writing this, but the time spent
hasn’t been linear, it’s much more fluent now !
Mirrorboard: A one-handed keyboard layout for the lazy by Randall Munroe
Introduction: comparing LISP to Perl and Python
We will refer to Common LISP as CL in the following article.
I wrote it to share what I like about CL. I'm using Perl to compare CL features. I am using real world cases for the average programmer. If you are a CL or perl expert, you may say that some examples could be rewritten with very specific syntax to make them smaller or faster, but the point here is to show usual and readable examples for usual programmers.
This article is aimed at people with an interest in programming; some basic programming knowledge is needed to understand the following. If you know how to read C, Php, Python or Perl, it should be enough. The examples have been chosen to be easy.
I thank my friend killruana for his contribution as he wrote the
python code.
Variables
Scope: global
Common Lisp code
(defparameter *variable* "value")
Defining a variable with defparameter at top-level (= outside of a function) will make it global. It is common to surround the name of global variables with * characters in CL code. This is only for the programmer's readability; the use of * has no effect.
Perl code
my $variable = "value";
Python code
variable = "value";
Scope: local
This is where it begins to get interesting in CL. Declaring a local variable with let creates a new scope, delimited by parentheses, outside of which the variable isn't known. This prevents doing bad things with variables not set or already freed. let can define multiple variables at once, or even variables depending on previously declared variables using let*.
Common Lisp code
(let ((value (http-request)))
  (when value
    (let* ((page-title (get-title value))
           (title-size (length page-title)))
      (when page-title
        (let ((first-char (subseq page-title 0 1)))
          (format t "First char of page title is ~a~%" first-char))))))
Perl code
{
    local $value = http_request;
    if($value) {
        local $page_title = get_title $value;
        local $title_size = get_size $page_title;
        if($page_title) {
            local $first_char = substr $page_title, 0, 1;
            printf "First char of page title is %s\n", $first_char;
        }
    }
}
The scope of a local value is limited to the parent curly brackets, those of an if/while/for/foreach block or plain brackets.
Python code
if True:
    hello = 'World'
print(hello) # displays World
There is no way to define a block-local variable in python; the scope of a variable is limited to the enclosing function.
Printing and format text
CL has a VERY powerful function to print and format text, it’s even
named format. It can even manage plurals of words (in english only) !
Common Lisp code
(let ((words (list "hello" "Dave" "How are you" "today ?")))
  (format t "~{~a ~}~%" words))
format can loop over lists using ~{ as start and ~} as end.
Perl code
my @words = @{["hello", "Dave", "How are you", "today ?"]};
foreach my $element (@words) {
    printf "%s ", $element;
}
print "\n";
Python code
# Printing and format text
# Loop version
words = ["hello", "Dave", "How are you", "today ?"]
for word in words:
    print(word, end=' ')
print()

# list expansion version
words = ["hello", "Dave", "How are you", "today ?"]
print(*words)
Functions
function parameters: rest
Sometimes we need to pass an unknown number of arguments to a function. CL supports this with the &rest keyword in the function declaration, while perl supports it using the @_ sigil.
Common Lisp code
(defun my-function (parameter1 parameter2 &rest rest)
  (format t "My first and second parameters are ~a and ~a.~%Others parameters are~%~{ - ~a~%~}~%"
          parameter1 parameter2 rest))
(my-function "hello" "world" 1 2 3)
Perl code
sub my_function {
    my $parameter1 = shift;
    my $parameter2 = shift;
    my @rest = @_;
    printf "My first and second parameters are %s and %s.\nOthers parameters are\n",
        $parameter1, $parameter2;
    foreach my $element (@rest) {
        printf " - %s\n", $element;
    }
}
my_function "hello", "world", 0, 1, 2, 3;
Python code
def my_function(parameter1, parameter2, *rest):
    print("My first and second parameters are {} and {}".format(parameter1, parameter2))
    print("Others parameters are")
    for parameter in rest:
        print(" - {}".format(parameter))

my_function("hello", "world", 0, 1, 2, 3)
The trick in python to handle rest arguments is the wildcard character in the function definition.
function parameters: named parameters
CL supports named parameters using a keyword to specify the name, while it's not possible at all in perl; using a hash as parameter can do the job there.
CL allows choosing a default value if a parameter isn't set. It's harder to do in perl: we must check if the key is already set in the hash and give it a value inside the function.
Common Lisp code
(defun my-function(&key (key1 "default") (key2 0))
(format t "Key1 is ~a and key2 (~a) has a default of 0.~%"
key1 key2))
(my-function :key1 "nice" :key2 ".Y.")
There is no way to pass named parameters to a perl function. The best way is to pass a hash variable, check the keys needed and assign a default value when they are undefined.
Perl code
sub my_function {
    my $hash = shift;
    if(! exists $hash->{key1}) {
        $hash->{key1} = "default";
    }
    if(! exists $hash->{key2}) {
        $hash->{key2} = 0;
    }
    printf "My key1 is %s and key2 (%s) default to 0.\n",
        $hash->{key1}, $hash->{key2};
}
my_function { key1 => "nice", key2 => ".Y." };
Python code
def my_function(key1="default", key2=0):
    print("My key1 is {} and key2 ({}) default to 0.".format(key1, key2))

my_function(key1="nice", key2=".Y.")
Loop
CL has only one loop operator, named loop, which could be seen as an
entire language itself. Perl has do while, while, for and foreach.
loop: for
Common Lisp code
(loop for i from 1 to 100
      do
      (format t "Hello ~a~%" i))
Perl code
for(my $i=1; $i <= 100; $i++) {
    printf "Hello %i\n", $i;
}
Python code
for i in range(1, 101):
    print("Hello {}".format(i))
loop: foreach
Common Lisp code
(let ((elements '(a b c d e f)))
  (loop for element in elements
        counting element into count
        do
        (format t "Element number ~s : ~s~%"
                count element)))
Perl code
# verbose and readable version
my @elements = @{['a', 'b', 'c', 'd', 'e', 'f']};
my $count = 0;
foreach my $element (@elements) {
    $count++;
    printf "Element number %i : %s\n", $count, $element;
}

# compact version
for(my $i=0; $i<$#elements+1; $i++) {
    printf "Element number %i : %s\n", $i+1, $elements[$i];
}
Python code
# Loop foreach
elements = ['a', 'b', 'c', 'd', 'e', 'f']
count = 0
for element in elements:
    count += 1
    print("Element number {} : {}".format(count, element))

# Pythonic version
elements = ['a', 'b', 'c', 'd', 'e', 'f']
for index, element in enumerate(elements, 1):
    print("Element number {} : {}".format(index, element))
LISP only tricks
Store/restore data on disk
The simplest way to store data in LISP is to write a data structure into a file using the print function. The output of print can be read back later with read.
Common Lisp code
(defun restore-data (file)
  (when (probe-file file)
    (with-open-file (x file :direction :input)
      (read x))))

(defun save-data (file data)
  (with-open-file (x file
                     :direction :output
                     :if-does-not-exist :create
                     :if-exists :supersede)
    (print data x)))

;; using the functions
(save-data "books.lisp" *books*)
(defparameter *books* (restore-data "books.lisp"))
This permits skipping the use of a data storage format like XML or JSON: Common LISP can read Common LISP, and this is all it needs. It can store objects like arrays, lists or structures using a plain text format. It can't dump hash tables directly.
Creating a new syntax with a simple macro
Sometimes we have cases where we need to repeat code and there is no way to reduce it, either because it's too specific or because of the language itself. Here is an example where we can use a simple macro to reduce the written code in a succession of conditions doing the same check.
We will start from this
Common Lisp code
(when value
  (when (string= line-type "3")
    (progn
      (print-with-color "error" 'red line-number)
      (log-to-file "error")))
  (when (string= line-type "4")
    (print-with-color text))
  (when (string= line-type "5")
    (print-with-color "nothing")))
to this, using a macro
Common Lisp code
(defmacro check (identifier &body code)
  `(progn
     (when (string= line-type ,identifier)
       ,@code)))

(when value
  (check "3"
    (print-with-color "error" 'red line-number)
    (log-to-file "error"))
  (check "4"
    (print-with-color text))
  (check "5"
    (print-with-color "nothing")))
The code is much more readable and the macro is easy to understand. One could argue that in another language a switch/case could work here; I chose a simple example to illustrate the use of a macro, but they can achieve much more.
Create powerful wrappers with macros
I'm using macros when I need to repeat code that affects variables. A lot of CL modules offer a construct like with-something: a wrapper macro that does some logic, like opening a database, checking it's open, closing it at the end, and executing your code inside.
Here I will write a tiny http request wrapper, allowing me to write http requests very easily, with my code being able to use variables from the macro.
Common Lisp code
(defmacro with-http (url &body code)
  `(progn
     (multiple-value-bind (content status head)
         (drakma:http-request ,url :connection-timeout 3)
       (when content
         ,@code))))

(with-http "https://dataswamp.org/"
  (format t "We fetched headers ~a with status ~a. Content size is ~d bytes.~%"
          head status (length content)))
In Perl, the following would be written like this
Perl code
sub get_http {
    my $url = shift;
    my %http = magic_http_get $url;
    if($http{content}) {
        return %http;
    } else {
        return undef;
    }
}

{
    local %data = get_http "https://dataswamp.org/";
    if(%data) {
        printf "We fetched headers %s with status %d. Content size is %d bytes.\n",
            $data{headers}, $data{status}, length($data{content});
    }
}
The curly brackets are important there: I want to emphasize that the local %data variable is only available inside the curly brackets. Lisp is written as a succession of local scopes, and this is something I really like.
Python code
import requests

with requests.get("https://dataswamp.org/") as fd:
    print("We fetched headers %s with status %d. Content size is %s bytes." \
          % (list(fd.headers.keys()), fd.status_code, len(fd.content)))
I just received a widescreen display with a 2560x1080 resolution, but xrandr wasn't allowing me to use it. The intel graphics specifications say that I should be able to go up to 4096xsomething, so it's a software problem.
Generate the information you need with gtf:
$ gtf 2560 1080 59.9
Take only the numbers after the quoted resolution name, so in
Modeline "2560x1080_59.90" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
keep only 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
Now add the new resolution and make it available to your output (mine
is HDMI2):
$ xrandr --newmode "2560x1080" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
$ xrandr --addmode HDMI2 2560x1080
You can now use this mode with arandr using the GUI, or with xrandr by typing xrandr --output HDMI2 --mode 2560x1080
You will need to set the new mode each time the system starts. I added the lines to my ~/.xsession file, which starts stumpwm.
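For reference, a sketch of the relevant part of such a ~/.xsession, assuming the HDMI2 output and the stumpwm setup from above:
xrandr --newmode "2560x1080" 230.37 2560 2728 3000 3440 1080 1081 1084 1118 -HSync +Vsync
xrandr --addmode HDMI2 2560x1080
xrandr --output HDMI2 --mode 2560x1080
exec stumpwm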
When you fetch OpenBSD src or ports from CVS and you want to save bandwidth during the process, there is a little trick that changes everything: compression.
Just add -z9 to the parameters of your cvs command line and the remote server will send you compressed files, saving up to 10 times the bandwidth, or speeding up the transfer 10 times, or both (I have different users on my network and I'm limiting my incoming bandwidth so other people can have bandwidth too, so it is important to reduce the data transferred when possible).
The command line should look like:
$ cvs -z9 -qd anoncvs@anoncvs.fr.openbsd.org:/cvs checkout -P src
Don’t abuse this, this consumes CPU on the mirror.
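If you don't want to type -z9 every time, cvs reads default options from ~/.cvsrc; putting this single line in that file applies compression to every cvs invocation:
cvs -z9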
Introduction
Hello,
Today I will speak about slrn, an nntp client. I'm using it to fetch mailing lists I'm following (without necessarily subscribing to them) and to read them offline. I'll speak about using nntp to read newsgroups; I'm not sure, but in a more general way nntp is used to access usenet. I'm not sure I know what usenet is, so we will stick here to connecting to the mailing-list archives offered by gmane.org (which offers access to mailing-lists and newsgroups through nntp).
Long story short, I recently moved and now I have a very poor DSL connection. Plus, I'm often travelling by train with nearly no 4G/LTE support during the trip. I'm going to write about getting things done offline and about reducing bandwidth usage. This is a really interesting topic in our hyper-connected world.
So, back to slrn: I want to be able to fetch a lot of news and read them later. Every nntp client I tried was fetching the article list (in nntp, an article = a mail, a forum = a mailing list) and then downloading each article when we want to read it. Some can cache the result when you fetch an article, so if you want to read it later it is already fetched. While slrn doesn't support caching at all, it comes with the utility slrnpull, which will create a local copy of the forums you want, and slrn can be configured to fetch data from there. slrnpull needs to be configured to tell it what to fetch, what to keep etc… and a cron job will start it from time to time to fetch the new articles.
Configuration
The following configuration is made to be simple to use; it runs with your regular user. This is for gentoo; another system might provide a dedicated user and everything pre-configured.
Create the folder for slrnpull and change the owner:
$ sudo mkdir /var/spool/slrnpull
$ sudo chown user /var/spool/slrnpull
The slrnpull configuration file must be placed in the folder it will use. So edit /var/spool/slrnpull/slrnpull.conf as you want; my configuration file follows.
default 200 45 0
# indicates a default value of 200 articles to be retrieved from the server and
# that such an article will expire after 45 days.
gmane.network.gopher.general
gmane.os.freebsd.questions
gmane.os.freebsd.devel.ports
gmane.os.openbsd.misc
gmane.os.openbsd.ports
gmane.os.openbsd.bugs
The client slrn needs to be configured to find the information from slrnpull.
File ~/.slrnrc:
set hostname "your.hostname.domain"
set spool_inn_root "/var/spool/slrnpull"
set spool_root "/var/spool/slrnpull/news"
set spool_nov_root "/var/spool/slrnpull/news"
set read_active 1
set use_slrnpull 1
set post_object "slrnpull"
set server_object "spool"
Add this to your crontab to fetch news once per hour (at HH:00 minutes):
0 * * * * NNTPSERVER=news.gmane.org slrnpull -d /var/spool/slrnpull/
Now, just type slrn and enjoy.
Cheat Sheet
Quick cheat sheet for using slrn: there is help available with “?”, but it is not very easy to understand at first.
- h : hide/display the article view
- space : scroll to next page in the article, go to next at the end
- enter : scroll one line
- tab : scroll to the end of quotes
- c : mark all as read
Tips
- when a forum is empty, it is not shown by default
I found that a software named slrnconf, providing a GUI to configure slrn, exists; I didn't try it.
Going further
It seems nntp clients support a score file that can mark interesting articles using user-defined rules.
The nntp protocol allows submitting articles (replies or new threads), but I have no idea how it works. Someone told me to forget about this and to use mail to the mailing-lists when possible.
The leafnode daemon can be used instead of slrnpull in a more generic way. It is an nntp server that one would use locally as a proxy to other nntp servers. It will mirror the forums you want and serve them back through nntp, allowing you to use any nntp client (slrnpull enforces the use of slrn). leafnode seems old; a v2 is still in development but appears rather inactive. It is also complicated: I wanted something KISS (Keep It Simple Stupid) and it is not.
Other clients you may want to try
nntp console clients
- gnus (in emacs)
- wanderlust (in emacs too)
- alpine
GUI client
- pan (may be able to download, but I failed using it)
- seamonkey (the whole mozilla suite supports nntp)
This article contains links to tools related to gopher.
Gopher server
Pages generator
- http://git.r-36.net/zs/
- https://git.codemadness.org/sfeed/
Hey ! You use stumpwm, emacs or tmux and your screen (not GNU screen) is split into lots of parts ? There is a solution to improve that. ZOOMING !
Each of them works with a screen divided into panes/windows (the meaning of these words changes between the programs); sometimes you want to have the one where you work in fullscreen. An option exists in each of them to get fullscreen temporarily on a window.
Emacs: (not native)
This is not native in emacs; you will need to install zoom-window from your favorite repository.
Add these lines to your ~/.emacs:
(require 'zoom-window)
(global-set-key (kbd "C-x C-z") 'zoom-window-zoom)
Type C-x C-z to zoom/unzoom your current frame
Tmux
Toggle zoom (in or out)
C-b z
Stumpwm
Add this to your ~/.stumpwmrc
(define-key *root-map* (kbd "z") "fullscreen")
Using “prefix z” the current window will toggle fullscreen.
Today I will present you a nice port (from Gentoo this time, not from FreeBSD), and this port is even linux-only.
nethogs is a console program which shows the bandwidth usage of each running application consuming network. This can be particularly helpful to find which application is sending traffic and at which rate.
It can be installed with emerge, as simply as emerge -av net-analyzer/nethogs.
It is very simple to use: just type nethogs in a terminal (as root). There are some parameters and it's a bit interactive, but I recommend reading the manual if you need some details about them.
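If you have several interfaces, you can also pass the device to watch on the command line; for example, to restrict it to eth0:
# nethogs eth0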
I am currently running Gentoo on my main workstation, which makes me discover new things, so maybe I will write more regularly about gentoo ports.
If for some reason you need to reduce the download speed of emerge
when downloading sources you can use a tweak in portage’s make.conf as
explained
in the handbook.
To keep wget and just add the bandwidth limit, add this to
/etc/portage/make.conf:
FETCHCOMMAND="${FETCHCOMMAND} --limit-rate=200k"
Of course, adjust your rate to your need.
If you want to list the packages installed manually (and not installed as a dependency of another package), you have to use “pkg query” and check whether %a (automatically installed == 1) isn't 1. The second string formats the output to display the package name:
$ pkg query -e "%a != 1" "%n"
Update 2020: This method most likely doesn't work anymore, but I don't have a Guix installation to try it.
I'm new to Guix. It's a wonderful system, but it's so different from any usual linux distribution that it's hard to achieve some basic tasks. As Guix is 100% free/libre software, Firefox has been removed and replaced by icecat. This is nearly the same software, but some "features" have been removed (like webRTC) for various reasons (security, freedom). I don't blame the Guix team for that, I understand the choice.
But my problem is that I need Firefox. I finally managed to get it working using the official binary downloaded from the mozilla website.
You need to install some packages to get the libraries, which will become available under your profile directory. Then, tell firefox to load the libraries from there and it will start.
guix package -i glibc glib gcc gtk+ libxcomposite dbus-glib libxt
LD_LIBRARY_PATH=~/.guix-profile/lib/ ~/.guix-profile/lib/ld-linux-x86-64.so.2 ~/firefox_directory/firefox
Also, it seems that running icecat and firefox simultaneously works; they store their data in ~/.mozilla/icecat and ~/.mozilla/firefox respectively, so they are separated.
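To avoid typing that long command every time, one could wrap it in a small script; a minimal sketch, assuming firefox was extracted into ~/firefox_directory:
#!/bin/sh
export LD_LIBRARY_PATH=~/.guix-profile/lib/
exec ~/.guix-profile/lib/ld-linux-x86-64.so.2 ~/firefox_directory/firefox "$@"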
In this article we will see how to fetch, read and manage your emails from Emacs using mu4e. The process is the following: the mbsync command (mbsync is the command name; the software name is isync) creates a mirror of an imap account in Maildir format on your filesystem. mu from mu4e creates a database from the Maildir directory using the xapian library (a full text search database); then mu4e (mu for emacs) is the GUI which queries the xapian database to manipulate your mails.
Mu4e supports dynamic bookmarks, so you can have predefined filters instead of classic folders. You can also run a query and narrow the results with successive queries.
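As a sketch of what a bookmark could look like (the query and shortcut key are only examples, and the exact format of mu4e-bookmarks depends on your mu4e version; in older versions an entry is a plain (query description shortcut) list):
(add-to-list 'mu4e-bookmarks
             '("flag:unread AND list:misc.openbsd.org" "Unread OpenBSD misc" ?o))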
You may have heard about using notmuch with emacs to manage mails; mu4e and notmuch don't do the same job. While notmuch is a nice tool to find messages from queries and create filters, it operates as a read-only tool and can't do anything with your mail. mu4e lets you write, move, delete and flag mails etc… AND still allows making complex queries.
I wrote this article to allow people to try mu4e quickly, you may want
to read both isync and mu4e manual to have a better configuration
suiting your needs.
Installation
On OpenBSD you need to install 2 packages:
# pkg_add mu isync
isync configuration
We need to configure isync to connect to the IMAP server. Edit the file ~/.mbsyncrc; there is a trick to avoid having the password in clear text in the configuration file, see the isync configuration manual for this:
IMAPAccount my_imap
Host my_host_domain.info
User imap_user
Pass my_pass_in_clear_text
SSLType IMAPS

IMAPStore my_imap-remote
Account my_imap

MaildirStore my_imap-local
Path ~/Maildir/my_imap/
Inbox ~/Maildir/my_imap/Inbox
SubFolders Legacy

Channel my_imap
Master :my_imap-remote:
Slave :my_imap-local:
Patterns *
Create Slave
Expunge Both
mu4e / emacs configuration
We need to configure mu4e to tell it where to find the mail folder. Add this to your ~/.emacs file:
(require 'mu4e)
(setq mu4e-maildir "~/Maildir/my_imap/"
mu4e-sent-folder "/Sent Messages/"
mu4e-trash-folder "/Trash"
mu4e-drafts-folder "/Drafts")
First start
A few commands are needed in order to make everything work. We need to create the base folder, as the mbsync command won't do it for some reason, and we need mu to index the mails the first time.
mbsync can take a while because it will download ALL your mails.
$ mkdir -p ~/Maildir/my_imap
$ mbsync -aC
$ mu init --maildir=~/Maildir/my_imap
$ mu index
How to use mu4e
Start emacs, run M-x mu4e RET and enjoy; the documentation of mu4e is well done. Press "U" on the mu4e screen to synchronize with the imap server.
A query for mu4e looks like this:
list:misc.openbsd.org flag:unread avahi
This query will search mails having the list header “misc.openbsd.org”, which are unread and which contain the pattern “avahi”.
date:20140101..20150215 urgent
This one will look for mails within the date range of 1st January 2014 to 15th February 2015 containing the word “urgent”.
Additional notes
The current setup doesn't handle sending mails; I'll write another article about this, as it requires configuring smtp authentication and an identity for mu4e.
Also, you may need to tweak the mbsync or mu4e configuration; some settings must be changed depending on the imap server, and this is particularly important for deleted mails.
You want to fold (hide) code between brackets, like an if statement, a function, a loop etc. ? Use the HideShow minor-mode, which is part of emacs. All you need is to enable hs-minor-mode. Then you can fold/unfold by cycling with C-c @ C-c.
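If you want it enabled automatically, a line like this in your ~/.emacs should do, assuming you want it in every programming mode:
(add-hook 'prog-mode-hook #'hs-minor-mode)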
HideShow on EmacsWiki
Hello !
Today I felt the need to change the language of my Firefox browser to Esperanto, but I wasn't able to do it right away; it is not straightforward…
First, you need to install your language pack, depending on whether you use the official Mozilla Firefox or Icecat, the rebranded firefox with the non-free stuff removed.
Then, open about:config in firefox; we will need to change 2 keys. Firefox needs to know that we don't want to use our user's locale as the Firefox language, and which language we want to set:
- set intl.locale.matchOS to false
- set general.useragent.locale to the language code you want (eo for esperanto)
- restart firefox/icecat
you’re done ! Bonan tagon
For fun, here are a few examples of the same output in different markup languages. The list isn't exhaustive of course.
This is org-mode:
* This is a title level 1
+ first item
+ second item
+ third item with a [[http://dataswamp.org][link]]
** title level 2
Blah blah blah blah blah
blah blah blah *bold* here
#+BEGIN_SRC lisp
(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))
#+END_SRC
This is markdown :
# this is title level 1
+ first item
+ second item
+ third item with a [Link](http://dataswamp.org)
## Title level 2
Blah blah blah blah blah
blah blah blah **bold** here
    (let ((hello (init-string)))
      (format t "~A~%" (+ 1 hello))
      (print hello))
or
```
(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))
```
This is HTML :
<h1>This is title level 1</h1>
<ul>
<li>first item</li>
<li>second item</li>
<li>third item with a <a href="http://dataswamp.org">link</a></li>
</ul>
<h2>Title level 2</h2>
<p>Blah blah blah blah blah
blah blah blah <strong>bold</strong> here
<code><pre>(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))</pre></code>
This is LaTeX :
\begin{document}
\section{This is title level 1}
\begin{itemize}
\item First item
\item Second item
\item Third item
\end{itemize}
\subsection{Title level 2}
Blah blah blah blah blah
blah blah blah \textbf{bold} here
\begin{verbatim}
(let ((hello (init-string)))
  (format t "~A~%" (+ 1 hello))
  (print hello))
\end{verbatim}
\end{document}
Today OpenBSD 6.1 has been released; I won't copy & paste the change list but, in a few words, it gets better.
Link to the official announce
I already upgraded a few servers, with both methods. One with a bsd.rd upgrade, but that requires physical access to the server; the other method, well explained in the upgrade guide, requires untarring the files and moving some files around. I recommend using bsd.rd if possible.
Hello,
I have a pfsense appliance (Netgate 2440) with a usb console port. While it used to be a serial port, devices now seem to have a usb one. If you plug a usb wire from an openbsd box into it, you will see this in your dmesg:
uslcom0 at uhub0 port 5 configuration 1 interface 0 "Silicon Labs CP2104 USB to UART Bridge Controller" rev 2.00/1.00 addr 7
ucom0 at uslcom0 portno 0
To connect to it from OpenBSD, use the following command:
# cu -l /dev/cuaU0 -s 115200
And you’re done
Here is a list of software that I find useful; I will update this list every time I find a new tool. This is not an exhaustive list, these are only software I enjoy using:
- duplicity
- borg
- restore/dump
- boar
- nextcloud / owncloud
- seafile
- pydio
- syncthing (works as peer-to-peer without a master)
- sparkleshare (uses a git repository so I would recommend storing only text files)
Editors
Web browsers using keyboard
- qutebrowser
- firefox with vimperator extension
Todo list / Personal Agenda…
- org-mode (within emacs)
- ledger (accounting)
Mail client
- mu4e (inside emacs, requires the use of offlineimap or mbsync to fetch mails)
Network
- curl
- bwm-ng (to see bandwidth usage in real time)
- mtr (traceroute with a gui that updates every n seconds)
Files integrity
Image viewer
Stuff
- entr (run command when a file change)
- rdesktop (RDP client to connect to Windows VM)
- xclip (read/set your X clipboard from a script)
- autossh (to create tunnels that stays up)
- mosh (connects to your ssh server with local input and better resilience)
- ncdu (watch file system usage interactively in cmdline)
- mupdf (PDF viewer)
- pdftk (PDF manipulation tool)
- x2x (share your mouse/keyboard between multiple computers through ssh)
- profanity (XMPP cmdline client)
- prosody (XMPP server)
- pgmodeler (PostgreSQL database visualization tool)
Today, the topic is data degradation, bit rot, bitrotting, damaged files or whatever you call it. It's when your data get corrupted over time, due to a disk fault or some unknown reason.
What is data degradation ?
I shamelessly paste one line from wikipedia: “Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. The phenomenon is also known as data decay or data rot.”.
Data degradation on Wikipedia
So, how do we know we encountered bit rot ?
bit rot = (checksum changed) && NOT (modification time changed)
While updating a file could be mistaken for bit rot, there is a difference:
update = (checksum changed) && (modification time changed)
How to check if we encounter bitrot ?
There is no way to prevent bitrot. But there are some ways to detect it, so you can restore a corrupted file from a backup, or repair it with the right tool (you can't repair a file with a hammer, except if it's some kind of HammerFS ! :D )
In the following I will describe software I found to check for (or even repair) bitrot. If you know other tools which are not in this list, I would be happy to hear about them, please mail me.
In the following examples, I will use this method to generate bitrot
on a file:
% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% generate_checksum_database_with_tool
% echo "a" >> my_data/some_file_that_will_be_corrupted
% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% start_tool_for_checking
We generate the checksum database, then we alter a file by adding an “a” at the end of the file, and we restore the modification and access times of the file. Then, we start the tool to check for data corruption.
The first touch is only for convenience; we could get the modification time with the stat command and pass the same value to touch after modifying the file.
bitrot
This is a python script and it's very easy to use. It will scan a directory and create a database with the checksums of the files and their modification dates.
Initialization usage:
% cd /home/my_data/
% bitrot
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 189 new, 0 updated, 0 renamed, 0 missing.
Updating bitrot.sha512... done.
% echo $?
0
Verify usage (case OK):
% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
% echo $?
0
Exit status is 0, so our data are not damaged.
Verify usage (case Error):
% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
error: SHA1 mismatch for ./sometextfile.txt: expected 17b4d7bf382057dc3344ea230a595064b579396f, got db4a8d7e27bb9ad02982c0686cab327b146ba80d. Last good hash checked on 2017-03-16 21:04:39.
Finished. 199.41 MiB of data read. 1 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
error: There were 1 errors found.
% echo $?
1
This is what you get when something is wrong. As the exit status of bitrot isn't 0 when it fails, it's easy to write a script running it every day/week/month.
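A minimal sketch of such a script, using only the exit status shown above; the path and mail address are placeholders:
#!/bin/sh
cd /home/my_data/ || exit 1
if ! bitrot ; then
    echo "bitrot detected in /home/my_data" | mail -s "bitrot alert" me@example.com
fi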
Github page
bitrot is available in OpenBSD ports in sysutils/bitrot since 6.1 release.
par2cmdline
This tool works with PAR2 archives (see below for more information about what PAR is) and, from them, it will be able to check your data integrity AND repair it.
While it has some pros, like being able to repair data, the con is that it's not very easy to use. I would use this one for checking the integrity of long term archives that won't change. The main drawback comes from the PAR specifications: the archives are created from a filelist, so if you have a directory with your files and you add new files, you will need to recompute ALL the PAR archives because the filelist changed, or create new PAR archives only for the new files, but that will make the verify process more complicated. It doesn't seem suitable to create new archives for every bunch of files added to the directory.
PAR2 lets you choose the percentage of a file you will be able to repair; by default it will create the archives to be able to repair up to 5% of each file. That means you don't need a whole backup of the files (though having no backup at all would be a bad idea), only approximately an extra 5% of your data to store.
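The redundancy level is chosen at creation time; if I read the par2 flags correctly, -r sets the percentage, so asking for 10% instead of the default 5% would look like:
% par2 create -r10 -a integrity_archive -R my_data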
Create usage:
% cd /home/
% par2 create -a integrity_archive -R my_data
Skipping 0 byte file: /home/my_data/empty_file
Block size: 3812
Source file count: 17
Source block count: 2000
Redundancy: 5%
Recovery block count: 100
Recovery file count: 7
Opening: my_data/[....]
[text cut here]
Opening: my_data/[....]
Computing Reed Solomon matrix.
Constructing: done.
Wrote 381200 bytes to disk
Writing recovery packets
Writing verification packets
Done
% echo $?
0
% ls -1
integrity_archive.par2
integrity_archive.vol000+01.par2
integrity_archive.vol001+02.par2
integrity_archive.vol003+04.par2
integrity_archive.vol007+08.par2
integrity_archive.vol015+16.par2
integrity_archive.vol031+32.par2
integrity_archive.vol063+37.par2
my_data
Verify usage (OK):
% par2 verify integrity_archive.par2
Loading "integrity_archive.par2".
Loaded 36 new packets
Loading "integrity_archive.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par2".
No new packets found
There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.
Verifying source files:
Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
All files are correct, repair is not required.
% echo $?
0
Verify usage (with error):
% par2 verify integrity_archive.par.par2
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found
There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.
Verifying source files:
Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.
Scanning extra files:
Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.
% echo $?
1
Repair usage:
% par2 repair integrity_archive.par.par2
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found
There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.
Verifying source files:
Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.
Scanning extra files:
Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.
Wrote 361069 bytes to disk
Verifying repaired files:
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - found.
Repair complete.
% echo $?
0
par2cmdline is only one implementation doing the job; other tools working with PAR archives exist. They should all be able to work with the same PAR files.
Parchive on Wikipedia
Github page
par2cmdline is available in OpenBSD ports in archivers/par2cmdline.
If you find a way to add new files to existing archives, please mail
me.
mtree
One can write a little script using mtree (found in the base system
of OpenBSD and FreeBSD) which will create a file with the checksum of
every file in the specified directories. If the mtree output differs
from the last run, we can send a mail with the differences. This is a
process done in the base install of OpenBSD for /etc and some other
files, to warn you if they changed.
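A minimal sketch of such a script, assuming your mtree supports the
sha256digest keyword (use sha1digest otherwise) and that the paths
used here fit your setup:
#!/bin/sh
# compare the current state of /home/my_data with the last recorded state
DIR=/home/my_data
SPEC=$HOME/.mtree.spec
# grep strips the header comments containing the generation date
mtree -c -K sha256digest -p "$DIR" | grep -v '^#' > "$SPEC.new"
if [ -f "$SPEC" ] && ! cmp -s "$SPEC" "$SPEC.new"; then
    diff "$SPEC" "$SPEC.new" | mail -s "mtree: changes detected" root
fi
mv "$SPEC.new" "$SPEC"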
While it’s suited for directories like /etc, in my opinion this is
not the best tool for doing integrity checks.
ZFS
I would like to talk about ZFS and data integrity because this is
where ZFS is very good. If you are using ZFS, you may not need any
other software to take care about your data. When you write a file,
ZFS will also store its checksum as metadata. By default, the option
“checksum” is activated on dataset, but you may want to disable it for
better performance.
There is a command to ask ZFS to check the integrity of the
files. Warning: scrub is very I/O intensive and can take from hours
to days or even weeks to complete, depending on your CPU, disks and
the amount of data to scrub:
# zpool scrub zpool
The scrub command will recompute the checksum of every file on the ZFS
pool; if something is wrong, it will try to repair it if possible. A
repair is possible in the following cases:
If you have multiple disks like raid-Z or raid-1 (mirror), ZFS will
look on the different disks for a non-corrupted version of the file;
if it finds one, it will restore it on the disk(s) where it’s
corrupted.
If you have set the ZFS option “copies” to 2 or 3 (1 = default), that
means that the file is written 2 or 3 times on the disk. Each file of
the dataset will be allocated 2 or 3 times on the disk, so take care
if you want to use it on a dataset containing heavy files! If ZFS
finds that a version of a file is corrupted, it will check the other
copies of it and try to restore the corrupted file if possible.
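For example, enabling two copies on a dataset looks like this (the
dataset name is only an example); note that it only applies to data
written after the setting is changed:
# zfs set copies=2 zpool/home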
You can see the percentage of the pool already scrubbed with
zpool status zpool
and the scrub can be stopped with
zpool scrub -s zpool
AIDE
Its name is an acronym for “Advanced Intrusion Detection Environment”,
it’s a complicated piece of software which can be used to check for
bitrot. I would not recommend using it if you only need bitrot
detection.
Here is a few hints if you want to use it for checking your file integrity:
/etc/aide.conf
/home/my_data/ R
# Rule definition
All=m+s+i+sha256
summarize_changes=yes
The config file will create a database of all files in /home/my_data/
(R for recursive). The “All” line lists the checks done on each
file. For bitrot checking, we want to check the modification time,
size, checksum and inode of the files. The summarize_changes option
permits having a list of changes if something is wrong.
This is the most basic config file you can have. Then you will have
to run aide once to create the database, and run it again later to
create a new database and compare the two. It doesn’t update its
database itself; you will have to move the old database aside and
tell aide where to find it.
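In practice the cycle looks like this (the database file names depend
on the database and database_out settings of your aide.conf, these
are common defaults):
# aide --init
# mv aide.db.new aide.db
# aide --check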
My use case
I have different kinds of data. On one side, I have static data like
pictures, clips or music, things that won’t change over time; on the
other side I have my mails, documents and folders where the content
changes regularly (creation, deletion, modification). I am able to
afford a backup of 100% of my data with a few days of backup history,
so I am not interested in file repairing.
I want to be warned quickly if a file gets corrupted, so I can still
get it from my backup history, as I don’t keep every version of my
files for too long. I chose to go with the python tool bitrot;
it’s very easy to use and it doesn’t become a mess with my folders
getting updated often.
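A minimal sketch of how I use it (it stores its state in a .bitrot.db
file at the root of the checked directory); each run updates the
database and reports files whose content changed while their
modification time did not:
$ cd /home/my_data
$ bitrot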
I would go with par2cmdline if I were not able to back up all my
data. Having 5% or 10% of redundancy for my files should be enough to
restore them in case of corruption without taking too much space.
This is the kind of Port of the week I like. This is a piece of
software I just discovered and fell in love with. The tool r2e, which
is the port mail/rss2email on OpenBSD, is a small python utility that
solves a problem: how to deal with RSS feeds?
Until last week, I was using a “web app” named selfoss which
aggregated my RSS feeds and displayed them on a web page, where I was
able to filter by read/unread/marked and also filter by source. It is
a good tool that does the job well, but I wanted something that
doesn’t rely on a web browser. Here comes r2e!
This simple software will send you a mail for each new entry in your
RSS feeds. It’s really easy to configure and set-up. Just look at how
I configured mine:
$ r2e new my-address+rss@my-domain.com
$ r2e add "http://undeadly.org/cgi?action=rss"
$ r2e add "https://dataswamp.org/~solene/rss.xml"
$ r2e add "https://www.dragonflydigest.com/feed"
$ r2e add "http://phoronix.com/rss.php"
Add this in your crontab to check new RSS items every 10 minutes:
*/10 * * * * /usr/local/bin/r2e run
Add a rule for my-address+rss to store mails in a separate folder, and
you’re done !
NOTE: you can use r2e run --no-send for the first time, it will
create the database and won’t send you mails for current items in
feeds.
Today I encountered an issue previously unknown to me with my IMAP
server Dovecot. In the roundcube mail web client, my Inbox folder
appeared empty after reading a mail. My Android mail client K9-Mail
was displaying “IOException:readStringUnti….” when trying to
synchronize this folder.
I solved it easily by connecting to my server with SSH, cd-ing into
the maildir directory and, in the Inbox folder, renaming
dovecot.index.log to dovecot.index.log.bak (you can remove it
if that fixes the problem).
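Translated into commands, the fix looks like this (the maildir path
is an example; it depends on your mail_location setting):
$ ssh my-server
$ cd ~/maildir/
$ mv dovecot.index.log dovecot.index.log.bak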
And now, mails are back. This is the very first time I have a problem
of this kind with dovecot…
Today I just updated my tool cl-yag, which
implies a slight change on my website. Now, at the top of this blog,
you can see a link “Index of articles”. This page only displays
article titles, without any text from the articles.
Cl-yag is a tool to generate static websites like this one. It’s
written in Common LISP. As a reminder, it’s also capable of producing
both html and gopher output now.
If you don’t know what Gopher is, you will learn a lot reading the
following links
Wikipedia : Gopher (Protocol)
and
Why is gopher still relevant
Let’s Encrypt is a fully automated service which provides free SSL
certificates, and there are a few tools to generate your certificates
with it. In the following lines, I will just explain how to get a
certificate in a few minutes. You can find more information on the
Let’s Encrypt website.
To make it simple, the tool we will use will generate some keys on the
computer and send a request to the Let’s Encrypt service, which will
use an http challenge (there are also dns and one other kind of
challenge) to see if you really own the domain for which you want the
certificate. If the challenge process is ok, you get the certificate.
Please, if you don’t understand the following commands, don’t type
them.
While the following is right for OpenBSD, it may change slightly for
other systems. Acme-client is part of the base system; you can read
the man page acme-client(1).
Prepare your http server
For each certificate you request, you will be challenged for each
domain on port 80. A file must be available in a path under
“/.well-known/acme-challenge/”.
You must have this in your httpd config file. If you use another
web server, you need to adapt.
server "mydomain.com" {
root "/empty"
listen on * port 80
location "/.well-known/acme-challenge/*" {
root { "/acme/" , request strip 2 }
}
}
The request strip 2 part is IMPORTANT. (I’ve lost 45 minutes figuring
out why root “/acme/” alone wasn’t working.)
Prepare the folders
As stated in the acme-client man page, if you don’t need to change
the paths, you can run the following commands with root privileges:
# mkdir /var/www/acme
# mkdir -p /etc/ssl/acme/private /etc/acme
# chmod 0700 /etc/ssl/acme/private /etc/acme
Request the certificates
As root, in the acme-client sources folder, type the following to
generate the certificates. The verbose flag is interesting, you
will see if the challenge step works. If it doesn’t work, you should
try to fetch manually a file at the same path Let’s Encrypt tried,
and run the command again once you succeed.
$ acme-client -vNn mydomain.com www.mydomain.com mail.mydomain.com
Use the certificates
Now, you can use your SSL certificates for your mail server, imap
server, ftp server, http server… There is a little drawback: if you
generate one certificate for a lot of domains, they are all written in
the certificate. This implies that if someone visits one page and
looks at the certificate, this person will know every domain you have
under SSL. I think it’s possible to request every certificate
independently, but you will have to play with acme-client flags and
write some kind of script to automate this.
The certificate file is located at /etc/ssl/acme/fullchain.pem and
contains the full certification chain (as its name implies). The
private key is located at /etc/ssl/acme/private/privkey.pem.
Restart the service with the certificate.
Renew certificates
Certificates are valid for 3 months. Just type
./acme-client mydomain.com www.mydomain.com mail.mydomain.com
Restart your ssl services
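To automate this, a crontab entry along these lines should work; with
recent acme-client the exit status is 0 only when the certificate
actually changed, so the reload is skipped otherwise (check
acme-client(1) on your system and adapt the services to reload):
0 3 * * 1 acme-client mydomain.com www.mydomain.com mail.mydomain.com && rcctl reload httpd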
EASY !
If you are using emacs under Microsoft Windows and you want to edit
remote files through SSH, it’s possible to do it without using Cygwin.
Tramp can use the tool “plink” from putty tools to do ssh.
What you need is to get “plink.exe” from the following page and get it
into your $PATH, or choose the installer which will install all putty
tools.
Putty official website
Then, edit your emacs file to add the following lines to tell it that
you want to use plink when using tramp
(require 'tramp)
(set-default 'tramp-default-method "plink")
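Opening a remote file then looks like this (user and host names are
made up for the example):
C-x C-f /plink:myuser@myhost:/home/myuser/notes.txt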
Now, you can edit your remote files, but you will need to type your
password. I think that in order to get password-less login with ssh
keys, you would need to use the putty key agent.
I have been using the mbox format for a few years on my personal mail
server. For those who don’t know what mbox is, it consists of only one
file per folder you have on your mail client, each file containing all
the mails of the corresponding folder. It’s extremely inefficient when
you back up the mail directory because everything must be copied each
time. Also, it reduces the caching possibilities of the server,
because if you have folders with lots of mails with attachments, they
may not fit in the cache.
Instead, I switched to maildir, which is a format where every mail is
a regular file on the file system. This takes a lot of inodes but at
least it’s easier to back up or to deal with for analysis.
Here is how to switch from mbox to maildir with a dovecot tool.
# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox
That’s all! In this case, my mbox folder was ~/mail/ and my INBOX
file was ~/mail/inbox. It took me some time to find where my
INBOX really was; at first I tried a few things that didn’t work,
then tried a perl conversion tool named mb2md.pl which was able to
extract some stuff, but a lot of mails were broken. So I went back to
getting dsync working.
If you want to migrate, the whole process looks like:
# service smtpd stop
modify dovecot/conf.d/10-mail.conf, replace the first line
mail_location = mbox:~/mail:INBOX=/var/mail/%u # BEFORE
mail_location = maildir:~/maildir # AFTER
# service dovecot restart
# dsync -u solene mirror mbox:~/mail/:INBOX=~/mail/inbox
# service smtpd start
entr is a command line tool that lets you run an arbitrary command on
file change. This is useful when you are doing something that requires
some processing when you modify it.
Recently, I have used it to edit a man page. At first, I had to run
mandoc each time I modified the file to check the rendering. This was
the first time I edited a man page, so I had to modify it a lot to
get what I wanted. I remembered about entr and this is how you use it:
$ ls stagit.1 | entr mandoc /_
This simple command will run “mandoc stagit.1” each time stagit.1 is
modified. The file names must be given to entr on stdin, and the
character sequence /_ is replaced by the file name (like {} in find).
The man page of entr is very well documented if you need more
examples.
Since I upgraded to Emacs 25, it no longer saved my last cursor
position in edited files. This is a feature I really like because I
often fire and close emacs rather than keeping it open.
Before (< emacs 25)
(setq save-place-file "~/.emacs.d/saveplace")
(setq-default save-place t)
(require 'saveplace)
Emacs 25
(save-place-mode t)
(setq save-place-file "~/.emacs.d/saveplace")
(setq-default save-place t)
That’s all :)
2020 Update
Now that unwind on OpenBSD and unbound support DNS over TLS or DNS
over HTTPS, dnscrypt has lost a bit of relevance, but it’s still
usable and a good alternative.
Dnscrypt
Today I will talk about net/dnscrypt-proxy. It lets you encrypt your
DNS traffic between your resolver and the remote DNS recursive
server. More and more countries and internet providers use DNS to
block some websites, and now they tend to do “man in the middle” with
DNS answers, so you can’t just use a remote DNS server you find on
the internet. While a remote dnscrypt DNS server can still be affected
by such “man in the middle” hijacking, there is very little chance
DNS traffic is altered in datacenters / dedicated server hosting.
The article also deals with unbound as a DNS cache, because dnscrypt
is a bit slow and asking for the same domain multiple times in a few
minutes is a waste of cpu/network/time for everyone. So I recommend
setting up a DNS cache on your side (which also permits using it on a
LAN).
At the time I write this article, there is a very good explanation
about how to install it in the file named dnscrypt-proxy-1.9.5p3 in
the folder /usr/local/share/doc/pkg-readmes/. The following article is
made from this file. (Article updated at the time of OpenBSD 6.3)
While I write for OpenBSD, this can easily be adapted to anything
else Unix-like.
Install dnscrypt
# pkg_add dnscrypt-proxy
Resolv.conf
Modify your resolv.conf file to this
/etc/resolv.conf :
nameserver 127.0.0.1
lookup file bind
options edns0
When using a dhcp client
If you use dhcp to get an address, you can use the following line to
force having 127.0.0.1 as nameserver by modifying the dhclient config
file. Beware: if you use it, when upgrading the system from bsd.rd,
you will get 127.0.0.1 as your DNS server but no service running.
/etc/dhclient.conf :
supersede domain-name-servers 127.0.0.1;
Unbound
Now, we need to modify the unbound config to tell it to forward DNS
queries to 127.0.0.1 port 40. Please adapt your config; I will just
add what is mandatory. The unbound configuration file isn’t in /etc
because unbound is chrooted.
/var/unbound/etc/unbound.conf:
server:
    # this line is MANDATORY
    do-not-query-localhost: no

forward-zone:
    name: "."
    # address dnscrypt listens on
    forward-addr: 127.0.0.1@40
If you want to allow others to resolve through your unbound daemon,
please see the parameters interface and access-control. You will need
to tell unbound to bind on external interfaces and allow requests on
them, as in the sketch below.
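For example, the following lines in the server: section would open
the resolver to a LAN (the addresses are examples to adapt):
    interface: 0.0.0.0
    access-control: 192.168.1.0/24 allow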
Dnscrypt-proxy
Now we need to configure dnscrypt. Pick a server in the list
/usr/local/share/dnscrypt-proxy/dnscrypt-resolvers.csv; the name is
the first column.
As root, type the following (or use doas/sudo). In this example we
choose dnscrypt.eu-nl as the DNS provider:
# rcctl enable dnscrypt_proxy
# rcctl set dnscrypt_proxy flags -E -m1 -R dnscrypt.eu-nl -a 127.0.0.1:40
# rcctl start dnscrypt_proxy
Conclusion
You should be able to resolve addresses through dnscrypt now. You can
use tcpdump on your external interface to check for traffic on udp
port 53: you should not see any.
If you want to use dig hostname -p 40 @127.0.0.1 to make DNS requests
to dnscrypt without unbound, you will need net/isc-bind, which
provides /usr/local/bin/dig. OpenBSD’s base dig can’t use a port
other than 53.
Here is a how-to for making a git repository available for cloning
through a simple http server. This method only allows people to fetch
the repository, not to push. I wanted to set this up to share my
code; I don’t plan to have any commits on it from other people at
this time, so it’s enough.
In a folder publicly available from your http server, clone your
repository in bare mode. As explained in the git book
(https://git-scm.com/book/tr/v2/Git-on-the-Server-The-Protocols):
$ cd /var/www/htdocs/some-path/
$ git clone --bare /path/to/git_project gitproject.git
$ cd gitproject.git
$ git update-server-info
$ mv hooks/post-update.sample hooks/post-update
$ chmod o+x hooks/post-update
Then you will be able to clone the repository with
$ git clone https://your-hostname/some-path/gitproject.git
I lost time because I did not execute git update-server-info, so
cloning wasn’t possible.
Today I will present misc/rlwrap, a utility for wrapping command-line
software that doesn’t provide a nice readline input. By using rlwrap,
you will be able to use telnet, a language REPL or any command-line
tool where you input text, with a history of what you type and the
ability to use emacs bindings like C-a, C-e, M-Ret etc… I use it
often with telnet or sbcl.
Usage:
$ rlwrap telnet host port
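It works the same way for a Common Lisp REPL:
$ rlwrap sbcl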
Here is a tiny piece of code to get a connection to an SSL/TLS
server. I am writing an IRC client and an IRC bot, and it’s better
to connect through a secure channel.
This requires usocket and cl+ssl:
(usocket:with-client-socket (socket stream *server* *port*)
  (let ((ssl-stream (cl+ssl:make-ssl-client-stream
                     stream
                     :external-format '(:iso-8859-1 :eol-style :lf)
                     :unwrap-stream-p t
                     :hostname *server*)))
    (format ssl-stream "hello there !~%")
    (force-output ssl-stream)))
When I started the Port of the week articles I was planning to write
an article every week, but now I don’t have many ports left to speak
about. Today is about x11/stumpwm! I wrote about this window manager
earlier. It’s now available in OpenBSD since the 6.1 release.
If you want to write a script reading stdin and putting it into a
variable, there is a very easy way to proceed:
#!/bin/sh
var=`cat`
echo $var
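For example, assuming the script above is saved as readstdin.sh and
made executable:
$ echo "hello" | ./readstdin.sh
hello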
That’s all
If you have an Android phone, here are two things you may like:
Org-mode <=> Android
First is the MobileOrg app to synchronize your calendar/tasks
between your computer’s org-mode files and your phone. I have been
using org-mode for a few months; I think I do pretty basic things
with it, like having a todo list with a deadline for each item. Having
it in my phone calendar is a good enhancement. I can also add todo
items from my phone to show them on my computer.
The phone and your computer get synced by publishing a special format
of org files for the mobile on a remote server. MobileOrg supports
ssh, webdav, dropbox or sdcard. I’m using ssh because I own a server
and I can reliably have my things connected together there on a
dedicated account. Emacs will then use tramp to publish/retrieve the
files, roughly as in the sketch below.
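For reference, the Emacs side can look like the following sketch (the
server name and paths are made up; the exact variables are documented
in the org-mode manual):
(setq org-mobile-directory "/ssh:myuser@myserver:org-mobile/")
(setq org-mobile-inbox-for-pull "~/Org/from-mobile.org")
Then M-x org-mobile-push publishes the files and M-x org-mobile-pull
retrieves what was added on the phone.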
Official MobileOrg website
MobileOrg on Google Play
Read/Write sms from a remote place
The second useful thing I like with my Android phone is being able to
write and send sms (+ some other things, but I was most interested by
SMS) from my computer. A few services already exist, but they work
with a “cloud” logic and I don’t want my phone to be connected to one
more service. The MAXS app provides what I need: the ability to
read/write the sms of my phone from the computer, without a web
browser and relying on my own services. MAXS connects the phone to an
XMPP account and you set a whitelist of XMPP addresses able to send
commands, that’s all. Here are a few examples of use:
To write a SMS I just need to speak to the jabber account of my phone
and write
sms send firstname lastname  hello how are you ?
Be careful, there are 2 spaces after the lastname! I think it’s like
this so MAXS can easily tell the difference between the name and the
message.
I can also reply quickly to the last contacted person
reply to Yes I'm answering from my computer
To read the last n sms
sms read n
It’s still not perfect because sometimes it loses connectivity and
you can’t speak with it anymore, but according to the project author
it’s not a problem seen on every phone. I did not have the time yet
to report the problem precisely (I need to play with the Android
Debug Bridge for that). If you want to install MAXS, you will need a
few apps from the store to get it working. First, you will need MAXS
main and MAXS transport (a plugin to use XMPP) and then plugins for
the different commands you want, so, maybe, smsread and smswrite.
Check their website for more information.
As presented earlier on my website, I use profanity as my XMPP
client. It’s a light and easy to configure/use console client.
Official MAXS Website
MAXS on Google Play
If you want to kill a process by its name instead of its PID number,
which is easier if you have to kill processes of the same binary,
here are the commands depending on your operating system:
FreeBSD / Linux
$ killall process_name
OpenBSD
$ pkill process_name
Solaris
Be careful with Solaris killall. With no argument, the command will
send a signal to every active process, which is not something you
want.
$ killall process_name
At work, the sound of my laptop is not muted because I need sound
from time to time. But browsing the internet with Firefox can
sometimes trigger undesired sounds, which is very annoying in the
office. The extension Mute Tab auto-mutes new tabs in Firefox so they
won’t play sound. The auto-mute must be activated in the plugin
options; it’s un-checked by default.
You can find it here, no restart required: Firefox Mute Tab addon
I also use FlashStopper which blocks flash and HTML5 videos by
default, so you have to click on them to play them; no autoplay.
Firefox FlashStopper addon
I will talk about security/pwgen for the current Port of the
week. It’s a very light executable to generate passwords. But it’s
not just a dumb password generator; it has options to choose what
kind of password you want.
Here is a list of options with their flag, you will find a lot more in
the nice man page of pwgen:
- -A : don’t use capital letters
- -B : don’t use characters which could be misread (O/0, I/l/1 …)
- -v : don’t use vowels
- etc…
You can also use a seed to generate your “random” passwords (which
aren’t very random in this case); you may need this to be able to
reproduce a password you lost for an ftp/http access, for
example.
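A sketch of that use case, assuming your pwgen build has the -H
option (it seeds the generator with the sha1 of a file plus an
optional string after #):
$ pwgen -H ~/mysecretfile#ftp 10 1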
Example of pwgen output generating 5 passwords of 10 characters. The
-1 parameter makes it display only one password per line; otherwise
it displays a grid (columns and multiple lines) of passwords.
$ pwgen -1 10 5
fohchah9oP
haNgeik0ee
meiceeW8ae
OReejoi5oo
ohdae2Eisu
My website is now available with the Gopher protocol! I really like
this protocol. If you don’t know it, I encourage you to read this
page: Why is Gopher still relevant?.
This has been made possible by modifying the tool generating the
website pages to make it generate gopher compatible pages. This was
a bit of work but I am now proud to have it working.
I have also made a “big” change in the generator: it now relies on a
“markdown-to-html” tool, which saddens me a bit. Before that, I was
using ham-mode in emacs, which converted html on the fly to
markdown so I could edit in markdown, and exported back to html on
save. This had pros and cons. Nothing more than a lisp interpreter
was needed on the system generating the files, but I was sometimes
struggling with ham-mode because the conversion was
destructive. Multiple edits in a row of the same file would break
code blocks, because they weren’t exported the same way each time,
until they weren’t code blocks anymore. There are some articles that
I update sometimes to keep them up-to-date or to fix an error, and it
was boring to fix the code every time. Having the original markdown
text was mandatory for the gopher export, and it is now easier to
edit with any tool.
There is a link to my gopher site on the right of this page. You will
need a gopher client to connect to it. There is a working Android
client, and Firefox can get an extension to become compatible
(gopher support was native before it was dropped). You can find
a list of clients on
Wikipedia.
Gopher is nice, don’t let it die.
Today I will talk about graphics/feh, a tool to view pictures
which can also be used to set an image as background.
I use this command line, invoked by stumpwm when my session starts,
so I can have a nice background with cubes :)
$ feh --bg-scale /home/solene/Downloads/cubes.jpg
feh has a lot of options and is really easy to use. I still prefer
sxiv for viewing, but I use feh for my background.
If you ever need to modify the tags of your music library (made of
MP3s), I would recommend audio/puddletag. This tool lets you
see all your music metadata like a spreadsheet and just modify the
cells to change the artist name, title etc… You can also select
multiple cells, type one text and it will be applied to all the
selected cells. There is also a tool to extract data from the
filename with a regex. This tool is very easy and pleasant to use.
There is an option in the configuration panel that is good to be
aware of: by default, when you change the tag of a file, the
modification time isn’t changed, so if you use some kind of backup
relying on the modification time, the file won’t be synchronized. In
the configuration panel, you will find an option to check which bumps
the modification timestamp when you change a tag on a song.
Profanity is a command-line ncurses based XMPP (Jabber) client. It’s
easy to use and its interface seems inspired by irssi. It’s
available in net/profanity.
It’s really easy to use and the documentation on its website is really
clear.
To log-in, just type /connect myusername@mydomain and after the
password prompt, you will be connected. Easy.
Profanity official website
When you use Google search and you click on a link, you are
redirected to a google server that will take care of saving your
navigation choice from their search engine into their database.
- This is bad for your privacy
- This slows down the use of the search engine, because you get a
redirection (that you don’t see) when you want to visit a link
There is a firefox extension that will fix the links in the results of
the search engine so when you click, you just go on the website
without saying “hello Google I clicked there”:
Google Search Link Fix
You can also use another search engine if you don’t like Google. I
keep it because I get the best results when searching technical
topics. I tried Yahoo, Bing, Exalead, Qwant and DuckDuckGo, each one
for a few days, and Google has the best results so far.
OpenSCAD is a software for creating 3D objects with a programming
language, with the possibility to preview your creation.
I am personally interested in 3D things; I have been playing with 3ds
Max and Blender for creating 3d objects, but I never felt really
comfortable with them. I discovered pov-ray a few years ago, which is
used to create rendered pictures instead of objects. Pov-ray
uses its own “programming language” to describe the scene and make
the render. Now, I have a 3D printer and I would like to create
things to print, but I don’t like the GUI stuff of Blender, and
Pov-ray doesn’t create objects, so… OpenSCAD! This is the pov-ray of
objects!
Here is a simple example that create an empty box (difference of 2
cubes) and a screw propeller:
width = 3;
height = 3;
depth = 6;
thickness = 0.2;
difference() {
    cube( [width,depth,height], true);
    translate( [0,0,thickness] )
        cube( [width-thickness, depth-thickness, height], true);
}
translate( [ width , 0 , 0 ])
    linear_extrude(twist = 400, height = height*2)
        square(2,true);
The following picture is made from the code above:
There are scad-mode and scad-preview for emacs for editing OpenSCAD
files. scad-mode provides the coloration/syntax and scad-preview
renders the OpenSCAD preview inside an Emacs pane. Personally, I use
OpenSCAD opened in some corner of the screen with the option set to
render on file change, and I edit with emacs. Of course you can use
any editor, or the embedded editor, which is a Scintilla one and
pretty usable.
OpenSCAD website
OpenSCAD gallery
Today the Port of the week is x11/arandr, a very simple tool
to set up your screen display when using multiple monitors. It’s very
handy when you want to make something complicated or don’t want to
use xrandr on the command line. There is not much to say because it’s
very easy to use!
It can generate your current configuration as a script that you will
find under the ~/.screenlayout/ directory. This is quite useful to
configure your screens from your ~/.xsession file in case a monitor
is connected.
xrandr | grep "HDMI-2 connected" && .screenlayout/dual-monitor.sh
If HDMI-2 has a screen connected when I log in to my session, I will
have my dual-monitor setup!
Port of the week is now presenting x2x, which stands for X to
X connection. This is a really tiny tool, in one executable file,
that lets you move your mouse and use your keyboard on another X
server than yours. It’s like the other tool synergy, but easier to
use and open-source (I think synergy isn’t open source anymore).
If you want to use the computer on your left, just use the following
command (x2x must be installed on it and ssh available):
$ ssh -CX the_host_address "x2x -west -to :0.0"
and then you can move your cursor to the left of your screen and you
will see that you can use your cursor or type with the keyboard on
your other computer! I am using it to manage a wall of screens made
of first generation Raspberry Pis. I used to connect to them with VNC
but it was very, very slow.
Here is my git cheat sheet! Because I don’t like git, I never
remember how to do X or Y with it, so I need to write down simple
commands! (I am used to darcs and mercurial, but with the “git trend”
I need to learn it and use it.)
Undo uncommitted changes on a tracked file
$ git reset --hard
Get the latest version before working
$ git pull
Make a commit containing all tracked files
$ git commit -m "Commit message" -a
Send the commit to the repository
$ git push
I switched to mu4e to manage my mails at work, and also to send
mails. But in our corporation we all have a signature that includes
our logo and some hypertext links, so I couldn’t just insert my
signature and be done with it. There is a simple way to deal with
this problem: I fetched the html part of my signature (which includes
an image in base64) and pasted it into my emacs config file this way.
(setq mu4e-compose-signature
"<#part type=text/html><html><body><p>Hello ! I am the html signature which can contains anything in html !</p></body></html><#/part>" )
I pasted my signature instead of the hello world text of course, but
you only have to use the part tag and you are done ! The rest of your
mails will be plain text, except this part.
I want to talk about stumpwm, a window manager written in Common
LISP. I think one must at least like emacs to like stumpwm. Stumpwm
is a tiling window manager in which you create “panes” on the screen,
like windows in Emacs. A single pane takes 100% of the screen; then
you can split it into 2 panes vertically or horizontally, resize
them, and split again and again. There is no “automatic”
tiling. By default, if you have ONE pane, you will only have ONE
window displayed; this is a bit different from the other tiling wm I
had tried. Also, virtual desktops are named groups; nothing special
here, you can create/delete groups and rename them. Finally, stumpwm
is not minimalistic.
To install it, you need to get the sources of stumpwm, install a
common lisp interpreter (sbcl, clisp, ecl etc…), install quicklisp
(which is not in packages), install the quicklisp packages cl-ppcre
and clx, and then you can compile stumpwm. That will produce a huge
binary which embeds a common lisp interpreter (that’s a way to share
common lisp executables: the interpreter can create an executable
from itself and include the files you want to execute). I would like
to make a package for OpenBSD, but packaging quicklisp and its
packages seems too difficult for me at the moment.
Here is my config file in ~/.stumpwmrc.
Updated: 23rd January 2018
(defun chomp(text) (subseq text 0 (- (length text) 1)))

(defmacro cmd(command)
  `(progn `(:eval (chomp (stumpwm:run-shell-command ,,command t)))))

(defun get-latence()
  (let ((now (get-universal-time)))
    (when (> (- now *latence-last-update*) 30)
      (setf *latence-last-update* now)
      (when (probe-file "/tmp/latenceresult")
        (with-open-file (x "/tmp/latenceresult"
                           :direction :input)
          (setf *latence* (read-line x))))))
  *latence*)

(defvar *latence-last-update* (get-universal-time))
(defvar *latence* "nil")
(set-module-dir "~/dev/stumpwm-contrib/")
(stumpwm:run-shell-command "setxkbmap fr")
(stumpwm:run-shell-command "feh --bg-fill red_damask-wallpaper-1920x1080.jpg")
(defvar color1 "#886666")
(defvar color2 "#222222")
(setf
stumpwm:*mode-line-background-color* color2
stumpwm:*mode-line-foreground-color* color1
stumpwm:*mode-line-border-color* "#555555"
stumpwm:*screen-mode-line-format* (list "%g | %v ^>^7 %B | " '(:eval (get-latence)) "ms %d ")
stumpwm:*mode-line-border-width* 1
stumpwm:*mode-line-pad-x* 6
stumpwm:*mode-line-pad-y* 1
stumpwm:*mode-line-timeout* 5
stumpwm:*mouse-focus-policy* :click
;;stumpwm:*group-format* "%n·%t
stumpwm:*group-format* "%n"
stumpwm:*time-modeline-string* "%H:%M"
stumpwm:*window-format* "^b^(:fg \"#7799AA\")<%25t>"
stumpwm:*window-border-style* :tight
stumpwm:*normal-border-width* 1
)
(stumpwm:set-focus-color "#7799CC")
(stumpwm:grename "Alpha")
(stumpwm:gnewbg "Beta")
(stumpwm:gnewbg "Tau")
(stumpwm:gnewbg "Pi")
(stumpwm:gnewbg "Zeta")
(stumpwm:gnewbg "Teta")
(stumpwm:gnewbg "Phi")
(stumpwm:gnewbg "Rho")
(stumpwm:toggle-mode-line (stumpwm:current-screen) (stumpwm:current-head))
(set-prefix-key (kbd "M-a"))
(define-key *root-map* (kbd "c") "exec urxvtc")
(define-key *root-map* (kbd "RET") "move-window down")
(define-key *root-map* (kbd "z") "fullscreen")
(define-key *top-map* (kbd "M-&") "gselect 1")
(define-key *top-map* (kbd "M-eacute") "gselect 2")
(define-key *top-map* (kbd "M-\"") "gselect 3")
(define-key *top-map* (kbd "M-quoteright") "gselect 4")
(define-key *top-map* (kbd "M-(") "gselect 5")
(define-key *top-map* (kbd "M--") "gselect 6")
(define-key *top-map* (kbd "M-egrave") "gselect 7")
(define-key *top-map* (kbd "M-underscore") "gselect 8")
(define-key *top-map* (kbd "s-l") "exec slock")
(define-key *top-map* (kbd "s-t") "exec urxvtc")
(define-key *top-map* (kbd "M-S-RET") "exec urxvtc")
(define-key *top-map* (kbd "M-C") "exec urxvtc")
(define-key *top-map* (kbd "s-s") "exec /home/solene/dev/screen_up.sh")
(define-key *top-map* (kbd "s-Left") "gprev")
(define-key *top-map* (kbd "s-Right") "gnext")
(define-key *top-map* (kbd "M-ISO_Left_Tab")"other")
(define-key *top-map* (kbd "M-TAB") "fnext")
(define-key *top-map* (kbd "M-twosuperior") "next-in-frame")
(load-module "battery-portable")
(load-module "stumptray")
I use a function to get the latency from a script that is run every
20 seconds, to display the network latency, or nil if I don’t have
internet access.
I use the rxvt-unicode daemon (urxvtd) as a terminal emulator, so the
terminal command is urxvtc (for client); it’s lighter and faster to
load.
I also use a weird “alt+tab” combination:
- Alt+Tab switches between panes
- Alt+² (the key above Tab) circles through the windows of the current pane
- Alt+Shift+Tab switches to the previously selected window
StumpWM website
This Port of the week is a bit special because, sadly, the port isn’t
available on OpenBSD. The port is mbuffer (which you can find as
misc/mbuffer on systems that package it).
I discovered it while looking for a way to enhance one of my network
stream scripts. I have some scripts that get a dump of a postgresql
base through SSH, copy it from stdin to a file with tee and send it
to the local postgres; the command line looks like
I also use the same kind of command to receive a ZFS snapshot from
another server.
But there is an issue: the end server is relatively slow; postgresql
and ZFS will eat a lot of data from stdin and then stop for some
time while writing to the disk, and when they are ready to take new
data, it’s slow to fill them again. This is where mbuffer takes
place. This tool permits adding a buffer that will take data from
stdin and fill its memory (whose size you set on the command line),
so when the slowest part of the command is ready to take data,
mbuffer will empty its memory into the pipe, and the slowest command
isn’t waiting to get filled before working again.
The new command looks like this for a buffer of 300 MB:
ssh remote-base-server "pg_dump my_base | gzip -c -f -" | gunzip -f | tee dumps/my_base.dump | mbuffer -s 8192 -m 300M | psql my_base
mbuffer also comes with a nice console output, showing:
- bandwidth in
- bandwidth out
- percentage/consumption of memory filled
- total transferred
in @ 1219 KiB/s, out @ 1219 KiB/s, 906 MiB total, buffer 0% full
In this example the server is too fast so there is no wait, the buffer
isn’t used (0% full).
mbuffer can also listen on TCP or a unix socket, and it has a lot of
parameters that I didn’t try; if you think that can be useful for
you, just go for it!
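For example, based on the man page (I did not try this myself), a
network copy without ssh could look like this, with the receiver
listening on TCP port 8000 (host name, pool and snapshot names are
placeholders):
On the receiver:
$ mbuffer -I 8000 -m 300M | zfs receive tank/backup
On the sender:
$ zfs send tank/data@snap | mbuffer -O receiver-host:8000 -m 300M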
I had a problem with my 3 latest R430 Dell servers, which all have a
PERC H730P Mini raid controller. The installer could barely work, and
slowly; 2 servers were booting and crashing with FS corruption,
while the last one just didn’t boot and the raid was cleared.
It is a problem with a driver of the raid controller. I don’t
understand the problem exactly, but I found a fix.
From man page mfi(4)
A tunable is provided to adjust the mfi driver's behaviour when attaching
to a card. By default the driver will attach to all known cards with
high probe priority. If the tunable hw.mfi.mrsas_enable is set to 1,
then the driver will reduce its probe priority to allow mrsas to attach
to the card instead of mfi.
In order to install the system, you have to set
hw.mfi.mrsas_enable=1 on the install media, and set this on the
installed system before booting it.
There are two ways to do that:
- if you use a usb media, you can mount it, edit /boot/loader.conf
and add
hw.mfi.mrsas_enable=1
- at the boot screen with the FreeBSD logo, choose 3) Escape to
loader prompt, type
set hw.mfi.mrsas_enable=1
and boot
You will have to edit /boot/loader.conf to add the line on the
installed system from the live system of the installer.
I have been struggling a long time before understanding the problem.
I hope this message will save somebody else some time.
This week we will have a quick look at the tool rdesktop. Rdesktop
is an RDP client (RDP stands for Remote Desktop Protocol), which is
used to share your desktop with another machine. RDP is a Microsoft
thing and it’s mostly used on Windows.
I am personally using it because sometimes I need to use Microsoft
Word/Excel or Windows-only software, and I have a dedicated virtual
machine for this. So I use rdesktop to connect in fullscreen to
the virtual machine and I can work on Windows. The RDP protocol is
very efficient; on a LAN network there is no lag. I enjoy using the
VM with RDP much more than with VNC.
You can also have RDP servers within virtual machines. VirtualBox
lets you have an RDP server for a VM (with an additional package to
add on the host). Maybe VmWare provides RDP servers too. I know that
Xen and KVM can give access through VNC or Spice, but not RDP.
For its usage, if you want to connect to a RDP server whose IP address
is 192.168.1.100 in fullscreen with max quality, type:
$ rdesktop -f -x 0x80 192.168.1.100
The -x 0x80 bit is needed to set the quality to maximum. If the
machine needs a username and password, you can add
-u my_user -p my_plaintext_pass
to log in automatically. I have an alias in my zsh shell; I just type
“windows” and I get logged in, in fullscreen, to the windows machine.
To exit fullscreen, type ctrl+alt+return to switch to windowed mode,
and again to go back to fullscreen mode. I wasn’t able to remember
the keyboard shortcut the first few times and was stuck in Windows! ;-)
In the OpenBSD ports tree, check x11/rdesktop.
I have not found any answer about this, so I share my fix. I wanted
to use mbsync with an IMAP server and encountered the following
error:
IMAP command 'AUTHENTICATE DIGEST-MD5' returned an error: NO Authentication failed
A fix is to add the following to your ~/.mbsyncrc IMAPAccount
declaration.
AuthMechs LOGIN
Using LOGIN instead of DIGEST-MD5 is still secure if you have an
encrypted connection (IMAPS or STARTTLS): the login is then sent in
plaintext only inside the encrypted connection.
I am using FreeBSD in virtual machines and sometimes I need to
increase the disk capacity of the storage. From your VM Host, increase
the capacity of the storage backend, then on the FreeBSD system (10.3
when writing), you should see this in the last line of dmesg.
GEOM_PART: vtbd0 was automatically resized.
Use `gpart commit vtbd0` to save changes or `gpart undo vtbd0` to revert them.
Here is the gpart show output on the system:
=>         34  335544253  vtbd0  GPT  (160G)
           34       1024      1  freebsd-boot  (512K)
         1058  159382528      2  freebsd-ufs  (76G)
    159383586    8388540      3  freebsd-swap  (4.0G)
    167772126  167772161         - free -  (80G)
The process is a bit harder here because my swap partition is at
the end of the storage, so if I want to increase the size of the ufs
partition, I need to remove the swap partition, increase the data
partition and recreate the swap. This is not that hard, but having
the freebsd-ufs partition at the end would have been easier.
1. swapoff the device:
swapoff /dev/vtbd0p3
2. delete the swap partition:
gpart delete -i 3 vtbd0
3. resize the freebsd-ufs partition:
gpart resize -i 2 -a 4k -s 156G vtbd0
4. create the swap:
gpart add -t freebsd-swap -a 4k vtbd0
5. swapon:
swapon /dev/vtbd0p3
6. tell UFS to resize:
growfs /
If freebsd-ufs had been the last partition in the gpart order, only
steps 3 and 6 would have been necessary.
Sources: FreeBSD Handbook and gpart(8)
Hello
You have a git repository where you work, and you would like to
work on a clone of it and push the data back to it? You may encounter
issues if your git repository isn’t a bare one. I have been facing
this problem by using gitit, which works with a non-bare git
repository.
What is a bare git repository ?
Here is how to create a bare repository and what it looks like.
$ git init --bare repo
$ ls -a repo/
. HEAD config hooks objects
.. branches description info refs
You can’t work in this, but this is the kind of repository that should
be used to store/push/clone etc..
What is a non-bare git repository ?
Here is how to create a non-bare repository and what it looks like.
$ git init repo2
$ ls -a repo2
. .. .git
You may use this one for local work, but you may want to clone it
later, work with the clone and do pushes/pulls. That’s how
gitit works: it has a folder “wikidata” that should be initialized as
a git repository, and it works locally. But if you want to clone it
on your computer, work on the documentation and then push your
changes to gitit, you may get this error when pushing:
Problem when pushing
I cloned the repository, made changes, committed and now I want to
push, but no…
Counting objects: 3, done.
Writing objects: 100% (3/3), 232 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
! [remote rejected] master -> master (branch is currently checked out)
git is unhappy, I can’t push
Solution
You can fix this “problem” by changing a config in the server
repository with this command :
$ git config --local receive.denyCurrentBranch updateInstead
Now you should be able to push to your non-bare repository.
Source: Stack Overflow link where I found the solution
This week I will talk about the command line image viewer
sxiv. While it’s a command line tool, of course it spawns an X
window to display the pictures. It’s very light and easy to use;
it’s my favorite image viewer.
Quick start: (you should read the man page for more informations)
- sxiv file1 file2… : sxiv opens only the files given as
parameters or filenames from stdin
- p/n : previous/next
- f : fullscreen
- 12 G : go to 12th image of the list
- Return : switch to the thumbnails mode / select the image from the thumbnails mode
- q : quit
- a lot more in the well written man page !
For power users who have a LOT of pictures to sort: sxiv has a nice
function that lets you mark the images you see and dump the list of
marked images in a file (see parameter -o).
- Tip for zsh users: if you want to read every jpg file in a tree,
you can use ** (recursive globbing, as seen in the Zsh cheat sheet):
sxiv **/*.jpg
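For example, to review a folder, mark the pictures to keep (with the
m key) and save the list in a file:
$ sxiv -o *.jpg > keep.txt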
In OpenBSD ports tree, check graphics/sxiv.
I am starting a periodic posting for something I have wanted to do
for a long time: take a port from the tree and introduce it quickly.
There are tons of ports in the tree that we don’t know about. So, I
will write frequently about ports that I use frequently and find
useful; if you read this, maybe you will add a new tool to your
collection of “useful programs”. :-)
For a first one, I would like to present net/bwm-ng. Its name
stands for “BandWidth Monitor next-generation”; it allows the user
to watch the bandwidth usage of the different network interfaces in
real time. By default, it will update the display every 0.5
second. You can change the update frequency by pressing the keys ‘+’
and ‘-’.
Let’s see the bindings of the interactive mode:
- ‘t’ will cycle between current rate, maximum peak, sum, average
on 30 seconds.
- ‘n’ will cycle between data sources; on OpenBSD it defaults to
“getifaddrs” and you can also choose “sysctl” or “netstat -i”.
- ‘d’ will change the unit; by default it shows KB but you can
change to another unit that suits your current data better.
Summary output after downloading a file
bwm-ng v0.6.1 (probing every 5.700s), press 'h' for help
input: getifaddrs type: sum
-        iface                   Rx                   Tx                Total
==============================================================================
          lo0:               0.00 B               0.00 B               0.00 B
          em0:             19.89 MB            662.82 KB             20.54 MB
       pflog0:               0.00 B               0.00 B               0.00 B
------------------------------------------------------------------------------
        total:             19.89 MB            662.82 KB             20.54 MB
It’s available on *BSD, Linux and maybe others.
In OpenBSD ports tree, look for net/bwm-ng.
I am learning mutt and I am lost. If you are like me, you may like
the following cheat sheet!
I am using it through imap; it may be different with a local mailbox.
Case is important!
- Change folder : Y
- Filter the display : l (for limit) and then a filter like the following
- ~d <2w : ~d for date and <2w for “less than 2 weeks”, no space in
<2w!
- ~b “hello mate” : ~b is for body and the string is something to
find in the body
- ~f somebody@zxy.abc : ~f for from and you can make an expression
- ~s “Urgent” : ~s stands for subject and uses a pattern
- Delete messages with a filter : D; if you used limit before, it
will propose the limit filter by default
- Delete a message : d (it will be marked as Deleted)
Deleted messages will be removed when you change the folder or when
you exit. Pressing $ does it manually.
I may add new things in the future, as they come for me, if I find new
features useful.
How to repeat a command n times
repeat 5 curl http://localhost/counter_add.php
How to expand recursively
If you want to find every file ending in .lisp in the folder and its
subfolders, you can use the following syntax. Using ** inside a
pattern will do a recursive globbing:
ls **/*.lisp
Work with temp files
If you want to work on some command outputs without having to manage
temporary files, zsh can do it for you with the following syntax:
=(command that produces stdout).
In the example we will use emacs to open the list of the files in our
personal folder.
emacs =(find ~ -type f)
This syntax will produce a temp file that will be removed when emacs
exits.
My ~/.zshrc
Here is my ~/.zshrc, very simple (I didn’t paste the aliases I
have); I have a 1000-line history that skips duplicates.
HISTFILE=~/.histfile
HISTSIZE=1000
SAVEHIST=1000
setopt hist_ignore_all_dups
setopt appendhistory
bindkey -e
zstyle :compinstall filename '/home/solene/.zshrc'
autoload -Uz compinit
compinit
export LANGUAGE=fr_FR.UTF-8
export LANG=fr_FR.UTF-8
export LC_ALL=fr_FR.UTF-8
export LC_CTYPE=fr_FR.UTF-8
export LC_MESSAGES=fr_FR.UTF-8
Here is a dump of my emacs config file. That may be useful for some
emacs users who begin.
If you don’t want to have your_filename.txt~ files with a tilde at
the end (these are default backup files), add this:
; I don't want to have backup files everywhere with filename~ name
(setq backup-inhibited t)
(setq auto-save-default nil)
To have parenthesis highlighting on match, which is very useful, you
will need this
; show match parenthesis
(show-paren-mode 1)
I really like this one. It will save the cursor position in every file
you edit. When you edit it again, you start exactly where you leaved
the last time.
; keep the position of the cursor after editing
(setq save-place-file "~/.emacs.d/saveplace")
(setq-default save-place t)
(require 'saveplace)
If you write in utf-8 (which is very common now) you should add this.
; utf8
(prefer-coding-system 'utf-8)
Emacs modes are used depending on the extension of a file. Sometimes
you need to edit files with a custom extension but want to use a
specific mode for them. You just need to add lines like these to get
your mode automatically when you load the file.
; associate extension - mode
(add-to-list 'auto-mode-alist '("\\.md\\'" . markdown-mode))
(add-to-list 'auto-mode-alist '("\\.tpl$" . html-mode))
My Org-mode part in the config file
(require 'org)
(define-key global-map "\C-ca" 'org-agenda)
(setq org-log-done t)
(setq org-agenda-files (list "~/Org/work.org" "~/Org/home.org"))
Stop mixing tabs and space when indenting
(setq indent-tabs-mode nil)
If someday under FreeBSD you have a system with multiple IP addresses
on the same network and you need to use a specific IP for a route,
you have to use the -ifa parameter in the route command.
In our example, we have to use the address 192.168.1.140 to access
the network 192.168.30.0 through the router 192.168.1.1; this
is as easy as the following:
route add -net 192.168.30.0 192.168.1.1 -ifa 192.168.1.140
You can add this specific route like any other route in your rc.conf
as usual; just add the -ifa X.X.X.X parameter.
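With the example values above, the rc.conf entries would look like
this ("net30" is just an arbitrary name for the route):
static_routes="net30"
route_net30="-net 192.168.30.0 192.168.1.1 -ifa 192.168.1.140"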