Introduction
This article features the very useful OpenBSD-specific program "checkrestart". Its purpose is to display the programs, along with their PID, whose running binary no longer exists on disk.
Why would a binary be absent? The obvious case is that the program was removed, but where checkrestart really shines is when you upgrade a package whose binaries are running: the old binary is deleted and the new one installed. In that case, you have to stop the running programs and start them again. Hence the name "checkrestart".
Installation
Installing it is as simple as running the following command as root:
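# pkg_add checkrestart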
Usage
This is simple too: when you run checkrestart, you get a list of PID numbers along with the binary name.
For example, on my system, checkrestart tells me which programs were updated and should be restarted to run the new binary.
69575 lagrange
16033 lagrange
9664 lagrange
77211 dhcpleased
6134 dhcpleased
21860 dhcpleased
Real world usage
If you run OpenBSD -stable, you will want to use checkrestart after running pkg_add -u. After a package update, most often for daemons, you will have to restart the related services.
On my server, in the daily script that updates packages and runs syspatch, I use it to automatically restart some services.
checkrestart | grep php && rcctl restart php-fpm
checkrestart | grep postgres && rcctl restart postgresql
checkrestart | grep nginx && rcctl restart nginx
Introduction
I would like to introduce you to a very nice game I discovered a few months ago: Shapez.io, a "factory" game, a genre popularized by the famous game Factorio. In this game you have to extract shapes and colors, rework the shapes, mix colors and combine the whole thing to produce the requested pieces.
The game
The gameplay is very cool. The early game is an introduction to the game mechanics: you can extract shapes, cut them, rotate pieces, merge conveyor belts into one, paint shapes etc... and logic circuits!
In this kind of game, you have to learn how to make efficient factories and, above all, "tile-able" installations. A tile-able setup means that if you copy a setup and paste it next to the original, the whole still works, only bigger, so you can extend it to infinity (except that the input conveyors will starve at some point).
It can be quite addictive to improve your setups over and over. This game is non-violent and doesn't require any reflexes, but you need to think. You can't lose; it sits somewhere between a puzzle and a management game.

Where to get it
On OpenBSD, since version 6.9 (not released yet as I publish this), you can install the package shapezio and find a launcher in your desktop environment's Games menu.
I also compiled a web version that you can play in your web browser without installing anything (I discourage using Firefox due to performance issues). This is legal because the game is open source :)
Play shapez.io in the web browser
The game is also sold on Steam, pre-compiled and ready to run. If you prefer that, it's also a nice way to support the developer.
shapez.io on Steam
More content
Official website
Youtube video of "Real civil engineer" explaining the game
Introduction
In this tutorial I will explain how to use Nginx as a TCP or UDP relay, as an alternative to HAProxy or relayd. This means Nginx will accept requests on a port (TCP/UDP) and relay them to a backend without knowing anything about the content. It can also negotiate a TLS session with the client and relay to a non-TLS backend. In this example I will configure Nginx to accept TLS requests and pass them to my Gemini server Vger; the Gemini protocol requires TLS.
I will explain how to install and configure Nginx and how to parse logs to obtain useful information. I will use an OpenBSD system for the examples.
It is important to understand that in this context Nginx is not doing anything related to HTTP.
Installation
On OpenBSD we need the package nginx-stream; if you are unsure which package is required on your system, search for the package that provides the file ngx_stream_module.so. To enable Nginx at boot, you can use rcctl enable nginx.
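In practice, installing the package and enabling the service looks like this (as root):
# pkg_add nginx-stream
# rcctl enable nginx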
Nginx stream module core documentation
Nginx stream module log documentation
Configuration
The default configuration file for Nginx is /etc/nginx/nginx.conf. We want it to listen on port 1965 and relay to 127.0.0.1:11965.
worker_processes 1;
load_module modules/ngx_stream_module.so;

events {
    worker_connections 5;
}

stream {
    log_format basic '$remote_addr $upstream_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time';

    access_log logs/nginx-access.log basic;

    upstream backend {
        hash $remote_addr consistent;
        server 127.0.0.1:11965;
    }
    server {
        listen 1965 ssl;
        ssl_certificate /etc/ssl/perso.pw:1965.crt;
        ssl_certificate_key /etc/ssl/private/perso.pw:1965.key;
        proxy_pass backend;
    }
}
In the previous configuration file, the upstream block "backend" defines the destination; multiple servers could be defined there, with weights and timeouts, but there is only one in this example.
The server block tells Nginx on which port to listen and whether it has to handle TLS (named ssl for historical reasons); the usual TLS configuration directives can be used here. Then we tell Nginx to which backend it has to relay the connections.
The configuration file also defines a custom log format that is useful for TLS connections: it includes the remote host, backend destination, connection status, bytes transferred and session duration.
Log parsing
Using awk to calculate time performance
I wrote a rather long shell command that parses the log format defined earlier and displays the number of requests and the median/min/max session time.
$ awk '{ print $NF }' /var/www/logs/nginx-access.log | sort -n | awk '{ data[NR] = $1 } END { print "Total: "NR" Median:"data[int(NR/2)]" Min:"data[2]" Max:"data[NR] }'
Total: 566 Median:0.212 Min:0.000 Max:600.487
Find bad clients using awk
Sometimes the logs show clients that obtain a status 500, meaning the TLS connection wasn't established correctly. It may be some scanner that doesn't even try a TLS connection. If you want statistics about those, to see whether it would be worth blocking them when they make too many attempts, awk makes it easy to get the list.
awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log
Using goaccess for real time log visualization
It is also possible to use the program Goaccess to view the logs in real time with a lot of information; it is really an awesome program.
goaccess --date-format="%d/%b/%Y" \
--time-format="%H:%M:%S" \
--log-format="%h %r [%d:%t %^] TCP %s %^ %b %L" /var/www/logs/nginx-access.log
Goaccess official website
Conclusion
I was using relayd before trying Nginx with the stream module; while relayd worked fine, it doesn't provide any of the logs Nginx offers. I am really happy with this use of Nginx because it is a very versatile program that has shown over time to be more than an HTTP server. For a minimal setup I would still recommend a lighter daemon such as relayd.
Introduction
In this Port of the Week I will introduce you to the IRC client catgirl. While there are already many IRC clients available (and good ones), there was a niche in the terminal world that wasn't filled yet, between minimalism (ii, ircII) and full-featured clients (irssi, weechat). Here comes catgirl, a simple IRC client with enough features to be comfortable for heavy IRC users.
Catgirl has the following features: tab completion, split scrolling, URL detection, nick coloring and an ignore filter. On the other hand, it doesn't support non-TLS networks, CTCP, multiple networks or dynamic configuration. If you want to use catgirl with multiple networks, you have to run one instance per network.
Catgirl will be available as a package in OpenBSD starting with version 6.9.
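Once you run 6.9 or later, installing it should be as simple as (assuming the package keeps the project name):
# pkg_add catgirl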
OpenBSD security bonus: catgirl makes very good use of unveil to reduce file system access to the minimum required (configuration + logs + certs), reducing the severity of an exploit. It also has a restricted mode, enabled with the -R parameter, that disables features like notifications or URL handling and tightens the pledge list (the allowed system calls).
Catgirl official website

Configuration
A simple configuration file to connect to the irc.tilde.chat server would look like the following; it must be stored as ~/.config/catgirl/tilde:
nick = solene_nickname
real = Solene
host = irc.tilde.chat
join = #foobar-channel
You can then run catgirl with this configuration by passing the config file name as a parameter.
$ catgirl tilde
Usage and tips
I recommend reading the catgirl man page, everything is well explained there. I will cover the most basic needs here.
Catgirl man page
Catgirl only displays one window at a time; it is not possible to split the display. However, if you scroll up, the bottom part keeps showing the latest lines and the live text stream while the upper part displays the history: it is a neat way to browse the history without cutting yourself off from what's going on in the channel.
Channels can be browsed from the keyboard using Ctrl+N or Ctrl+P like in Irssi, or by typing /window NUMBER, NUMBER being the buffer number. Alt+NUMBER can also be used to switch directly to buffer NUMBER.
You can search within a buffer by typing a word in your input and using Ctrl+R to search backward or Ctrl+S to search forward (provided you are in the history, of course).
Finally, my favorite feature, which is missing in minimal clients, is Alt+A: it jumps to the next buffer with unread messages (and yes, catgirl keeps a status line telling how many unread messages each channel has since you last read it). Even better, when you press Alt+A while there is nothing left to read, you jump back to the channel you last selected manually; this allows you to quickly read what you missed and return to the channel you spend all your time on.
Conclusion
I really love this IRC client. It easily replaced Irssi, which I had used for years, because most of the key bindings are the same, and I am also very happy to use a client that is a lot safer (on OpenBSD). It can be used with tmux for persistence, which also makes connecting to multiple servers manageable.
Introduction
This article gives a short description of EVERY service available as part of an OpenBSD default installation (= no packages installed).
From this whole list, the following services are started by default: cron, pflogd, sndiod, openssh, ntpd, syslogd and smtpd. Among them, the network-related daemons smtpd (localhost only), openssh and ntpd (as a client) are running.
Service list
I extracted the list of base install services by looking at /etc/rc.conf.
$ grep _flags /etc/rc.conf | cut -d '_' -f 1
amd
This daemon automatically mounts a remote NFS server when someone wants to access it, and it can provide a replacement in case the file system is not reachable. More information is available with "info amd".
amd man page
apmd
This is the daemon responsible for CPU frequency scaling. It is important to run it on workstations and especially on laptops; it can also trigger automatic suspend or hibernation in case of a low battery.
apmd man page
apm man page
bgpd
This is a BGP daemon used by network routers to exchange routes with other routers. This is largely what makes the Internet work: every hosting company announces its IP ranges and how to reach them, and in return it receives the paths to connect to all the other addresses.
OpenBGPD website
bootparamd
This daemon is used for diskless setups on a network; it provides clients with information such as which NFS mount points to use for the swap or root devices.
Information about a diskless setup
cron
This daemon reads each user's crontab and the system crontabs to run scheduled commands. User crontabs are modified using the crontab command.
Cron man page
Crontab command
Crontab format
dhcpd
This is a DHCP server used to automatically provide IPv4 addresses on a network for systems using a DHCP client.
dhcrelay
This is a DHCP request relay, used to relay DHCP requests received on one network interface to another interface.
dvmrpd
This is a multicast routing daemon, in case you need multicast to span beyond your local LAN. It is mostly replaced by PIM nowadays.
eigrpd
This daemon implements EIGRP, an interior gateway routing protocol similar in purpose to OSPF but compatible with Cisco routers.
ftpd
This is an FTP server providing many features. While FTP is getting abandoned and obsolete (certainly because it doesn't play well with NAT), it can be used to provide anonymous read/write access to a directory (and many other things).
ftpd man page
ftpproxy
This is an FTP proxy daemon meant to run on a NAT system; it automatically adds PF rules to connect an incoming request to the server behind the NAT. This is part of the FTP madness.
ftpproxy6
Same as above but for IPv6. Using IPv6 behind NAT makes no sense.
hostapd
This is the daemon that turns OpenBSD into a WiFi access point.
hostapd man page
hostapd configuration file man page
hotplugd
hotplugd is an amazing daemon that triggers actions when devices are connected or disconnected. This can be scripted, for example to automatically run a backup when some conditions are met, like a USB disk matching a known name being inserted, or to mount a drive.
hotplugd man page
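As a hedged illustration (the device name and the backup script are made up for the example, check the hotplugd man page for the exact interface), an /etc/hotplug/attach script receives the device class and device name as arguments and could look like this:
#!/bin/sh
# $1 = device class, $2 = device name (for example sd1)
case "$2" in
sd1)
    # hypothetical backup disk: mount it and run a backup script
    mount /dev/sd1i /backup && /usr/local/bin/run-backup.sh
    ;;
esac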
httpd
httpd is an HTTP(S) daemon which supports a few features like FastCGI, rewrites and SNI. While it doesn't have all the features of a web server like Nginx, it is able to host PHP programs such as Nextcloud, Roundcube mail or MediaWiki.
httpd man page
httpd configuration file man page
identd
Identd is a daemon for the Identification Protocol which returns the login name of a user who initiated a connection; this can be used on IRC to authenticate which user started an IRC connection.
ifstated
This daemon monitors the state of network interfaces and can take actions upon changes. This can be used to react to an interface losing connectivity; I used it to trigger a route change to a 4G device when a ping over the uplink interface failed.
ifstated man page
ifstated configuration file man page
iked
This daemon is used to provide IKEv2 authentication for IPSec tunnel establishment.
OpenBSD FAQ about VPN
inetd
This daemon is often forgotten but is very useful. Inetd can listen on TCP or UDP ports and run a command upon connection on the related port; incoming data is passed to the program's standard input and the program's standard output is returned to the client. This is an easy way to turn a program into a network service. It is not widely used because it doesn't scale well: running a new program for every connection can push a system to its limits.
inetd man page
isakmpd
This daemon is used to provide IKEv1 authentication for IPSec tunnel establishment.
iscsid
This daemon is an iSCSI initiator which connects to an iSCSI target (let's call it a network block device) and exposes it locally through the vscsi device. OpenBSD doesn't provide an iSCSI target daemon in its base system but there is one in ports.
ldapd
This is a light LDAP server, offering version 3 of the protocol.
ldap client man page
ldapd daemon man page
ldapd daemon configuration file man page
ldattach
This daemon attaches a line discipline to a serial line, which allows using devices connected over a serial port, such as GPS receivers.
ldomd
This daemon is specific to the sparc64 platform and provides services for the logical domains (ldom) feature.
lockd
This daemon is used as part of an NFS environment to support file locking.
ldpd
This daemon implements the Label Distribution Protocol, used by MPLS routers to exchange labels.
lpd
This daemon is used to manage print access to a line printer.
mountd
This daemon is used by remote NFS clients to learn what the system is currently offering. The showmount command can be used to see what mountd is currently exposing.
mountd man page
showmount man page
mopd
This daemon is used to distribute MOP images, which seems related to the Alpha and VAX architectures.
mrouted
Similar to dvmrpd.
nfsd
This server services NFS requests from NFS clients. Statistics about NFS (client or server) can be obtained with the nfsstat command.
nfsd man page
nfsstat man page
npppd
This daemon is used to establish connections using PPP but also to create tunnels with L2TP, PPTP and PPPoE. PPP is used by some modems to connect to the Internet.
nsd
This daemon is an authoritative DNS nameserver, which means it holds all the information about a domain name and its subdomains. It receives queries from recursive servers such as unbound / unwind etc... If you own a domain name and you want to manage it from your own system, this is what you want.
nsd man page
nsd configuration file man page
ntpd
This daemon is an NTP service that keeps the system clock at the correct time. It can use NTP servers or sensors (like GPS) as time sources, and also supports using remote servers to challenge the time sources. It can act as a server providing time to other NTP clients.
ntpd man page
ospfd
It is a daemon for the OSPF routing protocol (Open Shortest Path First).
ospf6d
Same as above for IPv6.
pflogd
This daemon receives packets from PF rules that carry the "log" keyword and stores the data into a log file that can be read with tcpdump later. Every packet in the log file contains information about which rule triggered it, so it is very practical for analysis.
pflogd man page
tcpdump
portmap
This daemon is used as part of an NFS environment.
rad
This daemon is used on IPv6 routers to send router advertisements so clients can automatically pick up routes.
radiusd
This daemon is used to offer RADIUS protocol authentication.
rarpd
This daemon is used for diskless setups, in which it helps associate an Ethernet address with an IP address and hostname (reverse ARP).
Information about a diskless setup
rbootd
Per the man page: « rbootd services boot requests from Hewlett-Packard workstations over LAN ».
relayd
This daemon accepts incoming connections and distributes them to backends. It supports many protocols and can act transparently; its purpose is to be a front end that dispatches connections to a list of backends while also checking the backends' status. It has many uses and can also be combined with httpd to add HTTP headers to a request, or to apply conditions on HTTP request headers to choose a backend.
relayd man page
relayd control tool man page
relayd configuration file man page
ripd
This is a routing daemon for RIP, an old but widely supported protocol.
route6d
Same as above but for IPv6.
sasyncd
This daemon is used to keep IPSec gateways synchronized in case a failover is required. This can be used with carp devices.
sensorsd
This daemon gathers monitoring information from the hardware like temperature or disk status. If a check exceeds a threshold, a command can be run.
sensorsd man page
sensorsd configuration file man page
slaacd
This daemon automatically picks up IPv6 autoconfiguration (SLAAC) on the network.
slowcgi
This daemon is used to expose a CGI program as a FastCGI service, allowing the httpd HTTP server to run CGI programs. It is an equivalent of inetd, but for FastCGI.
slowcgi man page
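As a hedged sketch (the paths are the common defaults, adjust them to your setup), an httpd.conf server using slowcgi's default socket could look like this:
server "default" {
    listen on * port 80
    location "/cgi-bin/*" {
        fastcgi socket "/run/slowcgi.sock"
        root "/"
    }
}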
smtpd
This daemon is the SMTP server used to deliver mail locally or to remote email servers.
smtpd man page
smtpd configuration file man page
smtpd control command man page
sndiod
This is the daemon handling sound from various sources. It also supports sending local sound to a remote sndiod server.
sndiod man page
sndiod control command man page
mixerctl man page to control an audio device
OpenBSD FAQ about multimedia devices
snmpd
This daemon is an SNMP server exposing some system metrics to SNMP clients.
snmpd man page
snmpd configuration file man page
spamd
This daemon acts as a fake SMTP server that will delay, block or pass emails depending on some rules. It can be used to add IPs to a block list if they try to send an email to a specific address (like a honeypot), pass emails from servers within an accept list, or delay connections from unknown servers (grey list) to force them to reconnect a few times before the email is passed to the real SMTP server. This is a quite effective way to prevent spam, but it becomes less relevant as senders use whole ranges of IPs to send emails: if you want to receive an email from a big email provider, you will greylist server X.Y.Z.1 but then X.Y.Z.2 will retry, and so on, so none of them will ever pass the grey list.
spamlogd
This daemon is dedicated to updating the spamd whitelist.
sshd
This is the well-known SSH server, allowing secure shell connections from remote clients. It has many features that deserve to be better known, such as restricting commands per public key in the ~/.ssh/authorized_keys file, or SFTP-only chrooted access.
sshd man page
sshd configuration file man page
statd
This daemon is used in an NFS environment together with lockd, in order to check whether remote hosts are still alive.
switchd
This daemon is used to control a switch pseudo device.
switch pseudo device man page
syslogd
This is the logging server that receives messages from local programs and stores them in the corresponding log files. It can be configured to pipe some messages to a command (programs like sshlockout use this method to learn about IPs that must be blocked), and it can also listen on the network to aggregate logs from other machines. The newsyslog program is used to rotate files (move a file, compress it, allow a new file to be created and remove archives that are too old). Scripts can use the logger command to send text to syslog.
syslogd man page
syslogd configuration file man page
newsyslog man page
logger man page
tftpd
This daemon is a TFTP server, used to provide kernels over the network for diskless machines or to push files to appliances.
Information about a diskless setup
tftpproxy
This daemon is used to manipulate the firewall PF to relay TFTP requests to a TFTP server.
unbound
This daemon is a recursive DNS server, the kind of server listed in /etc/resolv.conf, whose responsibility is to translate a fully qualified domain name into the IP address behind it, asking one server at a time. For example, to resolve www.dataswamp.org, it first asks the .org authoritative server which server is authoritative for dataswamp (within the .org top-level domain), then the dataswamp.org DNS server is asked for the address of www.dataswamp.org. It can also keep queries in a cache and validate queries and replies; it is a good idea to have such a server on a LAN with many clients to share the query cache.
unbound man page
unbound configuration file man page
unwind
This daemon is a local recursive DNS server that does its best to give valid replies. It is designed for nomadic users who may encounter hostile environments, like captive portals or DHCP-provided DNS servers that prevent DNSSEC from working, etc. Unwind regularly polls a few DNS sources (recursive resolution from the root servers, DHCP-provided servers, stub or DNS-over-TLS servers from the configuration file) and chooses the fastest one. It also acts as a local cache and can't listen on the network to be used by other clients. It also supports a list of blocked domains as input.
unwind man page
unwind configuration file man page
unwind control command man page
vmd
This is the daemon that allows running virtual machines using vmm. As of OpenBSD 6.9 it is capable of running OpenBSD and Linux guests, without a graphical interface and with only one core per guest.
vmd man page
vmd configuration file man page
vmd control command man page
vmm driver man page
OpenBSD FAQ about virtualization
watchdogd
This daemon is used to trigger watchdog timer devices, if any are present.
wsmoused
This daemon provides mouse support in the console.
xenodm
This daemon is used to start the X server and lets users authenticate themselves and log into their session.
xenodm man page
ypbind
This daemon is used with a Yellow Pages (YP) server to keep and maintain a binding information file.
ypldap
This daemon offers a YP service using an LDAP backend.
ypserv
This daemon is a YP server.
Introduction
In this text I will explain what makes OpenBSD secure by default when you install it. Do not take this as a security analysis, but rather as a guide to help you understand what OpenBSD does to provide a secure environment. The purpose of this text is not to compare OpenBSD to other OSes, but to state what you can honestly expect from OpenBSD.
There is no security without a threat model; I always consider the following cases: a computer stolen at home by a thief, remote attacks trying to exploit running services, and exploitation of the user's network client programs.
Security matters
Here is a list of features that I consider important for operating system security. While not every item in the following list is strictly a security feature, they help having a strict system that prevents software from misbehaving and wandering into unknown lands.
In my opinion, security is not only about preventing remote attackers from penetrating the system, but also about preventing programs or users from making the system unusable.
Pledge / unveil on userland
Pledge and unveil are often mentioned together although they can be used independently. Pledge is a system call that restricts the permissions of a program at some point in its source code; permissions can't be regained once pledge has been called. Unveil is a system call that hides the whole file system from the process except the paths that are unveiled; it is possible to choose which permissions are allowed for each path.
Both are very effective and powerful surgical security tools, but they require modifications to the software's source code, and adding them requires a deep understanding of what the software is doing. It is not always possible to forbid system calls to software that needs to do almost anything; software designed with privilege separation makes a better candidate for a proper pledge addition because each part has its own job.
Some packaged software has received pledge and/or unveil support, Chromium and Firefox being the most well known.
OpenBSD presentation about Unveil (BSDCan2019)
OpenBSD presentation of Pledge and Unveil (BSDCan2018)
Privilege separation
Most of the base system services in OpenBSD run using a privilege separation pattern. Each part of a daemon is restricted to the minimum required. A monolithic daemon would have to read/write files, accept network connections and send messages to the log; in case of a security breach this leaves a huge attack surface. By separating a daemon into multiple parts, each worker can be controlled in a more fine-grained way, and using the pledge and unveil system calls it is possible to set limits and greatly reduce the damage in case a worker is compromised.
Clock synchronization
The ntpd daemon is started by default to keep the clock synchronized with time servers. A reference TLS server is used to challenge the time servers. Keeping a computer's clock synchronized is very important. This is not really a security feature, but you can't be serious about using a computer on a network without its time synchronized.
X display not as root
If you use X, it drops privileges to the _x11 user: it runs as an unprivileged user instead of root, so in case of a security issue this prevents an attacker exploiting an X11 bug from accessing more than it should.
Resources limits
Default resource limits prevent a program from using too much memory, too many open files or too many processes. While this can prevent some huge programs from running with the default settings, it also helps finding file descriptor leaks, and prevents a fork bomb or a simple daemon from eating all the memory and leading to a crash.
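You can check the limits that apply to your current shell with the ksh built-in ulimit; the default values come from the login classes defined in /etc/login.conf.
$ ulimit -a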
Genuine full disk encryption
When you install OpenBSD using a full disk encryption setup, everything is locked behind the passphrase at the bootloader step; you can't access the kernel or anything else on the system without the passphrase.
W^X
Most programs on OpenBSD aren't allowed to map memory with the Write AND Execute bits at the same time (W^X means Write XOR Exec); this can prevent an interpreter from having its memory modified and then executed. Some packages aren't compliant with this and must be linked with a specific library to bypass the restriction AND must be run from a partition mounted with the "wxallowed" option.
OpenBSD presentation « Kernel W^X Improvements In OpenBSD »
Only one reliable randomness source
When your system requires a random number (and it does very often), OpenBSD provides only one API to get one; the numbers are really random and the source can't be exhausted. A good random number generator (RNG) is important for many cryptographic requirements.
OpenBSD presentation about arc4random
Accurate documentation
OpenBSD comes with full documentation in its man pages. One should be able to fully configure their system using only the man pages. Man pages sometimes come with CAVEATS or BUGS sections; it is important to pay attention to them. It is better to read the documentation and understand what has to be done to configure a system than to follow an outdated and anonymous text found on the Internet.
OpenBSD man pages online
EuroBSDcon 2018 about « Better documentation »
IPSec and Wireguard out of the box
If you need to set up a VPN, you can use the IPSec or WireGuard protocols with the base system alone, no package required.
Memory safeties
OpenBSD has many safeguards around memory allocation and very aggressively prevents use-after-free and other unsafe memory usage. This is often a source of crashes for some packaged software, because OpenBSD is very strict about memory use. This helps find memory misuse and kills misbehaving software.
Dedicated root account
When you install the system, a root account is created and you are asked for its password; then you create a user that will be a member of the "wheel" group, allowing it to switch to root with root's password. doas (the OpenBSD base system equivalent of sudo) isn't configured by default. With the default installation, the root password is required for any root action. I think a dedicated root account that can be logged into without doas/sudo is better than a misconfigured doas/sudo that allows everything as long as you know the user's password.
Small network attack surface
The only services listening on the network that can be enabled at installation time are OpenSSH (asked at install time, default = yes), dhclient (if you choose DHCP) and slaacd (if you use IPv6 in automatic configuration).
Encrypted swap
By default the OpenBSD swap is encrypted, meaning that if program memory is sent to the swap, nobody can recover it later.
SMT disabled
Due to the large number of security issues caused by SMT (like Hyper-Threading), the default installation disables the logical cores to prevent any data leak.
Meltdown: one of the first security issue related to speculative execution in the CPU
Microphone and webcam disabled
With the default installation, the microphone and the webcam won't actually record anything, only blank sound/video, until you set a sysctl to allow it.
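If I remember correctly, the sysctls in question are kern.audio.record and kern.video.record; to allow recording you would run, as root:
# sysctl kern.audio.record=1
# sysctl kern.video.record=1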
Maintainability, release often, update often
The OpenBSD team publishes a new release every six months and only the last two releases receive security updates. This allows upgrading often and without pain: the upgrade process consists of small steps twice a year that keep the whole system up to date. This avoids the fear of a huge upgrade that never gets done, and I consider it a huge security bonus. Most OpenBSD installations around are running the latest versions.
Signify chain of trust
The installer, file sets and packages are signed using signify public/private keys. OpenBSD installations come with the keys for the current release and release n+1 to check the packages' authenticity. A key is used for only six months and new keys are received with each new release, building a chain of trust. Signify keys are very small and are published in many places, so they can be double-checked when you need to bootstrap this chain of trust.
Signify at BSDCan 2015
Packages
While most of the previous items were about the base system or the kernel, the packages also have a few tricks to offer.
Chroot by default when available
Most available daemons offering a chroot feature have it enabled by default. In some cases, like the Nginx web server, the software is patched by the OpenBSD team to enable chroot even though it is not an official feature.
Dedicated users for services
Most packages that provide a server also create a new dedicated user for that exact service, allowing more privilege separation in case of a security issue in one service.
Installing a service doesn't enable it
When you install a service, it doesn't get enabled by default. You have to configure the system to enable it at boot. There is a single /etc/rc.conf.local file that can be consulted to see what is enabled at boot, and it can be manipulated with the rcctl command. Forcing the user to enable services makes the system administrator fully aware of what is running on the system, which is a good point for security.
rcctl man page
Conclusion
Most of the previous "security features" should be considered good practices rather than features. Many of them could easily be implemented in most systems: limiting user resources, reducing daemon privileges, strictness about memory usage, providing good documentation, starting only the required services and giving the user a clean default installation.
There are also many other features that have been added which I don't fully understand, so I prefer to let the reader discover them.
« Mitigations and other real security features » by Theo de Raadt
OpenBSD innovations
OpenBSD events, often including slides or videos
This is a February 2021 update of a text originally published in April 2017.
Introduction
I will explain how to limit bandwidth on OpenBSD using the queuing capability of its firewall PF (Packet Filter). It is a very powerful feature but it may be hard to understand at first. What is very important to understand is that it's technically not possible to limit the incoming bandwidth of the whole system directly: once data reaches your network interface, it is already there and has already gone through your router. What is possible is to limit the upload rate, which in turn caps the download rate.
OpenBSD pf.conf man page about queuing
Prerequisites
My home internet access allows me to download at 1600 kB/s and upload at 95 kB/s. An easy way to limit bandwidth is to compute a percentage of your upload rate; roughly the same ratio should then apply to your download speed as well (this is not very precise and may require tweaks).
The PF syntax requires bandwidth to be defined in kilobits (kb) and not kilobytes (kB); multiply by 8 to convert from kB to kb.
Configuration
Edit the file /etc/pf.conf as root and add the following before any pass/match/block rules; in this example my main interface is em0.
# we define a main queue (requirement)
queue main on em0 bandwidth 1G
# set a queue for everything
queue normal parent main bandwidth 200K max 200K default
Then reload with `pfctl -f /etc/pf.conf` as root. You can monitor the queues at work with `systat queue`:
QUEUE BW/FL SCH PKTS BYTES DROP_P DROP_B QLEN
main on em0 1000M fifo 0 0 0 0 0
normal 1000M fifo 535424 36032467 0 0 60
More control (per user / protocol)
This is only a global queuing rule that applies to everything on the system. It can be greatly extended for specific needs. For example, I use the program "oasis", a daemon for a peer-to-peer social network; sometimes it has upload bursts because someone is syncing against my computer, so I use the following rules to limit the upload bandwidth of this user.
# within the queue rules
queue oasis parent main bandwidth 150K max 150K
# in your match rules
match on egress proto tcp from any to any user oasis set queue oasis
Instead of a user, the rule could match a "to" address; I used to have such rules when I wanted to limit my upload bandwidth for uploading videos through the Peertube web interface.
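As a hedged sketch of that idea (the queue name and the destination address below are made up for the example):
# within the queue rules
queue uploads parent main bandwidth 300K max 300K
# in your match rules, 203.0.113.10 being the remote server to slow down
match on egress proto tcp from any to 203.0.113.10 set queue uploads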
Introduction
In this text I will explain how to filter TCP connections by operating system using OpenBSD Packet filter.
OpenBSD pf.conf man page about OS Fingerprinting
Explanations
Every operating system has its own way of constructing certain SYN packets; this is called fingerprinting because it makes it possible to identify which OS sent which packet. To be clear, it is not a perfect filter and it can easily be bypassed if you want to.
Because specific packets are required to identify the operating system, only TCP connections can be filtered by OS. The OS list and SYN values can be found in the file /etc/pf.os.
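You can also ask PF itself which fingerprints it knows about:
# pfctl -s osfp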
How to setup
The keyword "os $value" must be used within the "from $address" keyword. I use it to restrict the ssh connection to my server only to OpenBSD systems (in addition to key authentication).
# only allow OpenBSD hosts to connect
pass in on egress inet proto tcp from any os OpenBSD to (egress) port 22
# allow connections from $home IP whatever the OS is
pass in on egress inet proto tcp from $home to (egress) port 22
This can be a very good way to stop unwanted traffic from spamming the logs, but it should be used with caution because you may accidentally block legitimate traffic.
This quick article will explain how to install pkgsrc packages on an OpenBSD installation. This is something regularly asked on the #openbsd freenode IRC channel. I am not convinced there is a relevant use for pkgsrc under OpenBSD, but why not :)
I will cover an unprivileged installation that doesn't require root. I will use packages from the 2020Q4 release; I may not update this text regularly so you will have to adapt to the current release.
$ cd ~/
$ ftp https://cdn.NetBSD.org/pub/pkgsrc/pkgsrc-2020Q4/pkgsrc.tar.gz
$ tar -xzf pkgsrc.tar.gz
$ cd pkgsrc/bootstrap
$ ./bootstrap --unprivileged
From now on you must add the path ~/pkg/bin to your $PATH environment variable. The pkgsrc tree is in ~/pkgsrc/ and all the files required for it to work are in ~/pkg/.
You can install programs by looking for the directory of the software you want under ~/pkgsrc/ and running "bmake install" there, for example in ~/pkgsrc/chat/irssi/ to install the irssi IRC client.
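For example, to build and install irssi as described above:
$ cd ~/pkgsrc/chat/irssi
$ bmake install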
I'm not sure X11 software compiles well: I ran into compilation errors building dbus as a dependency of x11/xterm, maybe clashing with Xenocara from the base system... I don't really want to investigate this further though.
Introduction
In this article I will explain how to add a bit more security to your OpenBSD system by requiring an extra step for users logging into the system, locally or over ssh. I will explain how to set up two-factor authentication (2FA) using TOTP on OpenBSD.
What is TOTP (Time-based One-Time Password)
When do you want or need this? It adds a burden in terms of usability: in addition to your password, you need a device pre-configured to generate the one-time passwords, and if you don't have it you won't be able to log in (that's the whole point). Let's say you activated 2FA for SSH connections on an important server: if your private SSH key gets stolen (and it has no passphrase, ouch!), the attacker will still not be able to connect to the SSH server without access to your TOTP generator.
TOTP software
Here is a quick list of TOTP software:
- command line: oathtool from package oath-toolkit
- GUI and multiplatform: KeepassXC
- Android: FreeOTP+, andOTP, OneTimePass etc... (found on F-Droid)
Setup
A package is required in order to provide the various programs needed. The package comes with a README file available at /usr/local/share/doc/pkg-readmes/login_oath with many explanations about how to use it. I will take a lot of information from there for the local login setup.
# pkg_add login_oath
You will have to add a new login class, depending on the kind of authentication you want. You can either allow password OR TOTP, or require password AND TOTP (in the form of TOTP_CODE/password as the password to type). From the README file, add the class you want to use to /etc/login.conf:
# totp OR password
totp:\
:auth=-totp,passwd:\
:tc=default:
# totp AND password
totppw:\
:auth=-totp-and-pwd:\
:tc=default:
If you have a /etc/login.conf.db file, you have to run cap_mkdb on /etc/login.conf to update it. Most people don't need this; it only helps a bit performance-wise when you have a great many rules in /etc/login.conf.
Local login
Local login means logging in on a TTY, in your X session or anything else requiring your system password. You can then switch the users you want to use TOTP to the corresponding login class with this command:
# usermod -L totp some_user
In the user's home directory, you have to generate a key and give it the correct permissions:
$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 .totp-key
The .totp-key file contains the secret that will be used by the TOTP generator, but most generators will only accept it encoded as base32. You can use the following python3 command to convert the secret into base32:
python3 -c "import base64; print(base64.b32encode(bytes.fromhex('YOUR SECRET HERE')).decode('utf-8'))"
SSH login
It is possible to require your users to use TOTP, or a public key + TOTP. When you refer to "password" in ssh, it is the same password as for login, so it can be the plain password for a regular user, the TOTP code for users in the totp class, and TOTP/password for users in the totppw class.
This allows fine-grained tuning of the login options. The password requirement in SSH can be enabled per user or globally by modifying the file /etc/ssh/sshd_config:
sshd_config man page about AuthenticationMethods
# enable for everyone
AuthenticationMethods publickey,password
# for one user
Match User solene
AuthenticationMethods publickey,password
Let's say you enabled the totppw class for your user and you use "publickey,password" in AuthenticationMethods in ssh. You will then need your SSH private key AND your password AND your TOTP generator.
Even without TOTP, this SSH setting lets you require users to use both their key and their system password in order to log in. TOTP only adds more strength to the connection requirements, but also more complexity for people who may not be comfortable with such security levels.
Conclusion
In this text we have seen how to enable 2FA for local login and for login over SSH. Be careful not to lock yourself out of your system by losing the 2FA generator.
In this post I will share my feelings about what I like in OpenBSD.
Privacy
There is no telemetry in OpenBSD, so I don't have to worry about my privacy. As a reminder, telemetry is a mechanism that reports information about users in order to analyze how a product is used.
Moreover, the system default is to disable the microphone entirely: unless root intervenes, the microphone records silence (which avoids having to block it through usage permissions). Coming in 6.9, the camera follows the same path and will be disabled by default. To me this is a strong signal about the need to protect the user.
Secure web browsers
With the security features (pledge and especially unveil) added to the Firefox and Chromium sources, I feel more at ease using them daily. Nowadays, using a web browser is almost unavoidable, yet browsers have become extremely complex and poorly controlled. With client-side code execution through JavaScript gaining ever more capabilities, performance and requirements, adding a bit of security to the equation was necessary. Although these additions are sometimes a bit inconvenient to use, I am really happy to benefit from them.
With these protections added (by default), the browsers mentioned above cannot browse directories beyond what is necessary for them to work, plus the ~/Downloads/ and /tmp/ directories. Locations such as ~/Documents or ~/.gnupg are completely inaccessible, which greatly limits the risk of data exfiltration by the browser.
One could roughly reproduce the same feature on Linux using AppArmor, but the integration is extremely complicated (whereas it is the default on OpenBSD) and a bit less effective: it is easier to act at the right moment from within the code than to wrap the whole program in a set of rules.
PF firewall
With PF, it is very easy to check the configuration file to understand the rules in place on a server or a desktop computer. Centralizing the rules in one file, together with the macro system, allows writing simple and readable rules.
I use the bandwidth management feature a lot to limit the throughput of some applications that don't offer such a setting. This is very important to me, since I am not the only user of the network and my connection is rather slow.
On Linux, it is possible to use the programs trickle or wondershaper to set up bandwidth limits; on the other hand, iptables is a nightmare to use as a firewall!
It is stable
Apart from use on uncommon hardware, OpenBSD is very stable and reliable. I can easily reach two weeks of uptime on my desktop with several suspends per day. My OpenBSD servers have been running 24/7 without problems for years.
I rarely go past two weeks of uptime because I have to update the system from time to time to continue development work on OpenBSD :)
Little maintenance
Keeping an OpenBSD system up to date is very simple. I run the syspatch and pkg_add -u commands every day to keep my servers updated. An upgrade is needed every six months to move to the next release, but apart from a few specific instructions that sometimes apply, an upgrade looks like this:
# sysupgrade
[..wait a little..]
# pkg_add -u
# reboot
Quality documentation
Installing OpenBSD with full disk encryption is very easy (I should write a post about the importance of encrypting disks and phones).
The official documentation explaining how to set up a router with NAT is a perfect step-by-step guide; it is a reference whenever a router has to be installed.
Every binary of the base system (this doesn't include packages) has documentation, as well as its configuration files.
The website, the official FAQ and the man pages are the only resources needed to get by. They represent a big chunk of reading, and it's not always easy to find your way around, but everything is there.
If I had to manage for a while without internet access, I would much rather be on an OpenBSD system. The man pages are usually enough to get by.
Imagine setting up a router doing traffic shaping on OpenBSD or Linux without any documents from outside the system. Personally, I choose OpenBSD 100% for that :)
Easy to contribute
I really love the way OpenBSD handles contributions. I fetch the sources on my system and make my changes, I generate a diff file (the difference between before/after) and I send it to the mailing list. All of this can be done from the console with tools I already know (git/cvs) and email.
Sometimes, new contributors may think that the people replying are really not friendly. **This is not true**. If you send a diff and you receive criticism, it already means someone is giving you their time to explain what can be improved. I can understand this may seem harsh to some people, but that's not it at all.
This year, I made a few modest contributions to the OpenIndiana and NixOS projects; it was an opportunity to discover how these projects handle contributions. Both use GitHub and their way of doing things is very interesting, but understanding it takes a lot of work because it is relatively complicated.
OpenIndiana official website
NixOS official website
The contribution method requires a GitHub account, forking the project, cloning the fork locally, creating a branch, making the changes locally, pushing the fork to your GitHub account and using the GitHub web interface to open a "pull request". That's the short version. On NixOS, my first attempt at a pull request ended up as a request containing six months of commits on top of my small change. With good documentation and some practice it is entirely manageable. This way of working has some advantages, such as contributor tracking, continuous integration and easier code review, but it is as off-putting as can be for newcomers.
Top quality packages
My opinion is surely biased here (much more than for the previous points) but I sincerely think the OpenBSD packages are of very good quality. Most of them work out of the box with sane default settings.
Packages that require specific instructions come with a readme file explaining what is needed, for example creating certain directories with specific permissions or how to upgrade from a previous version.
Even with the lack of contributors and time (in addition to some programs relying on too many Linuxisms to be easy to port), most major free software programs are available and work very well.
I take the opportunity of this post to criticize a trend in the Open Source world.
- programs distributed with flatpak / docker / snap work very well on Linux but are hostile to other systems. They often use Linux-specific features and their build methods are Linux-oriented. This makes porting these applications to other systems much harder.
- programs using nodeJS: they sometimes require hundreds or even thousands of libraries and some of them are rather shaky. It is really complicated to get these programs to work on OpenBSD. Some libraries even go as far as embedding Rust code or downloading a static binary from a remote server, with no way to build it if needed and without checking whether that binary is available in $PATH. You find unbelievable aberrations in there.
- programs requiring git to build: the build system of the OpenBSD ports tree does its best to stay clean. The user dedicated to building packages has no internet access at all (blocked by the firewall with a default rule) and cannot run git commands to fetch code. There is no reason why building a program should require downloading code in the middle of the build step!
Obviously I understand that these three points above exist because they make developers' lives easier, but if you write a program and publish it, it would be very nice to think about non-Linux systems. Don't hesitate to ask on social networks whether someone wants to test your code on a system other than Linux. We love "BSD friendly" developers who accept our patches to improve OpenBSD support.
What I would like to see improve
There are some areas where I would like to see OpenBSD get better. This list is personal and does not reflect the opinion of the OpenBSD project members.
- Better ARM support
- WiFi throughput
- Better performance (it improves a bit with every release though)
- FFS improvements (after a crash I sometimes end up with files in lost+found)
- A faster pkg_add -u
- Hardware video decoding support
- Better FUSE support with the ability to mount CIFS/samba shares
- More contributors
I am aware of all the work this requires, and I am certainly not the one who is going to do anything about it. I would like it to improve without complaining about the current situation :)
Unfortunately, everyone knows that OpenBSD moves forward through hard work, not by sending a wishlist to the developers :)
When you think about what a small team (about 150 developers involved in the latest releases) manages to achieve compared to other major systems, I think we are quite efficient!
In this article I will explain how to deploy your own cryptpad instance with OpenBSD.
Cryptpad official website
Cryptpad is a web office suite featuring easy real time collaboration on documents. Cryptpad is written in JavaScript and the daemon acts as a web server.
Pre-requisites
You need to install the packages git, node, automake and autoconf to be able to fetch the sources and run the program.
# pkg_add node git autoconf--%2.69 automake--%1.16
Another piece of web front-end software is required to allow TLS connections and secure network access to the Cryptpad instance. This can be relayd, haproxy, nginx or lighttpd. I'll cover the setup using httpd and relayd. Note that the Cryptpad developers only provide support to Nginx users.
Installation
I really recommend running daemons with dedicated users. We will create a new user with the command:
# useradd -m _cryptpad
Then we will continue the software installation as the `_cryptpad` user.
# su -l _cryptpad
We will mainly follow the official instructions with some exceptions to adapt to OpenBSD:
Official installation guide
$ git clone https://github.com/xwiki-labs/cryptpad
$ cd cryptpad
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install bower
$ node_modules/.bin/bower install
$ cp config/config.example.js config/config.js
Configuration
There are a few important variables to customize (a sketch of the corresponding excerpt follows this list):
- "httpUnsafeOrigin" should be set to the public address on which Cryptpad will be available. This will most likely be an HTTPS URL with a hostname. I will use https://cryptpad.kongroo.eu
- "httpSafeOrigin" should be set to a public address different from the previous one. Cryptpad requires two different addresses to work. I will use https://api.cryptpad.kongroo.eu
- "adminEmail" must be set to a valid email address used by the admin (most likely you)
Create an rc file to start the service
We want the service to start properly and automatically with the system.
Create the file /etc/rc.d/cryptpad:
#!/bin/ksh

daemon="/usr/local/bin/node"
daemon_flags="server"
daemon_user="_cryptpad"
location="/home/_cryptpad/cryptpad"

. /etc/rc.d/rc.subr

rc_start() {
    ${rcexec} "cd ${location}; ${daemon} ${daemon_flags}"
}

rc_bg=YES

rc_cmd $1
Enable the service and start it with rcctl:
# rcctl enable cryptpad
# rcctl start cryptpad
Operating
Make an admin account
Register yourself on your Cryptpad instance, then visit the *Settings* page of your profile and copy your public signing key.
Edit the Cryptpad file config.js and search for the pattern "adminKeys"; uncomment it by removing the surrounding "/* */", delete the example key and paste your key as follows:
adminKeys: [
"[solene@cryptpad.kongroo.eu/YzfbEYwZq6Xhl7ET6AHD01w3QqOE7STYgGglgSTgWfk=]",
],
Restart Cryptpad: the user is now an admin and has access to a new administration panel from the web application.
Backups
In the Cryptpad directory, you need to back up the `data` and `datastore` directories.
Extra configuration
In this section I will explain how to generate your TLS certificate with acme-client and how to configure httpd and relayd to publish Cryptpad. I consider it an aside to the current article: if you already use nginx and have a setup to generate certificates, you don't need it. If you start from scratch, it's the easiest way to get the job done.
Acme client man page
Httpd man page
Relayd man page
From here on, I assume you use OpenBSD and have blank configuration files.
I'll use the domain **kongroo.eu** as an example.
httpd
We will use httpd in a very simple way. It will only listen on port 80 for all domains, to allow acme-client to work and also to automatically redirect HTTP requests to HTTPS.
# cp /etc/examples/httpd.conf /etc/httpd.conf
# rcctl enable httpd
# rcctl start httpd
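A minimal configuration doing that could look roughly like this (a hedged sketch; adapt /etc/httpd.conf as needed, the copied example file may differ):
server "default" {
    listen on * port 80
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
    location "*" {
        block return 302 "https://$HTTP_HOST$REQUEST_URI"
    }
}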
acme-client
We will use the example file as a default:
# cp /etc/examples/acme-client.conf /etc/acme-client.conf
Edit `/etc/acme-client.conf` and change the last domain block: replace `example.com` and `secure.example.com` with your domains, for example `cryptpad.kongroo.eu` with `api.cryptpad.kongroo.eu` as an alternative name.
For convenience, you will want to change the path of the full chain certificate to `hostname.crt` instead of `hostname.fullchain.pem`, to match relayd's expectations.
On my setup, it looks like this:
domain kongroo.eu {
alternative names { api.cryptpad.kongroo.eu cryptpad.kongroo.eu }
domain key "/etc/ssl/private/kongroo.eu.key"
domain full chain certificate "/etc/ssl/kongroo.eu.crt"
sign with buypass
}
Note that with the default acme-client.conf file, you can use *letsencrypt* or *buypass* as a certification authority.
acme-client.conf man page
You should be able to create your certificates now.
# acme-client kongroo.eu
Done!
You will want the certificate to be renewed automatically and relayd to restart upon certificate change. As stated by acme-client.conf man page, add this to your root crontab using `crontab -e`:
~ * * * * acme-client kongroo.eu && rcctl reload relayd
relayd
This configuration is quite easy, replace `kongroo.eu` with your domain.
Create a /etc/relayd.conf file with the following content:
relayd.conf man page
tcp protocol "https" {
tls keypair kongroo.eu
}
relay "https" {
listen on egress port 443 tls
protocol https
forward to 127.0.0.1 port 3000
}
Enable and start relayd using rcctl:
# rcctl enable relayd
# rcctl start relayd
Conclusion
You should be able to reach your Cryptpad instance using the public URL now. Congratulations!
Introduction
In this article I will explain how to install and configure Vger, a gemini server.
What is the gemini protocol
Short introduction about Gemini: it's a very recent protocol designed to be simple and limited. Key features are: pages are written in a markdown-like format, TLS is mandatory, there are no headers, and the encoding is UTF-8 only.
Vger program
Vger source code
I wrote Vger to discover the protocol and the Gemini space. I had a lot of fun with it; it was the opportunity for me to rediscover the C language with a better approach. The sources include a full test suite. This test suite was invaluable for the development process.
Vger was really built with security in mind from the first lines of code, now it offers the following features:
- chroot and privilege dropping, and on OpenBSD it uses unveil/pledge all the time
- virtualhost support
- language selection
- MIME detection
- handcrafted man page, OpenBSD quality!
The name Vger is a reference to the 1979 first Star Trek movie.
Star Trek: The Motion Picture
Install Vger
Compile vger.c using clang or gcc
$ make
# install -o root -g bin -m 755 vger /usr/local/bin/vger
Vger receives requests on stdin and gives the result on stdout. It doesn't take the given hostname into account, but a request MUST start with `gemini://`.
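You can try it manually from the command line; a quick test could look like this (a sketch, assuming the default /var/gemini/ directory is populated with an index.md):
$ printf "gemini://localhost/\r\n" | vger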
vger official homepage
Setup on OpenBSD
Create the directory /var/gemini/; files will be served from there.
Create the `_gemini` user:
useradd -s /sbin/nologin _gemini
Configure vger in /etc/inetd.conf
11965 stream tcp nowait _gemini /usr/local/bin/vger vger
Inetd will run vger with the _gemini user. You need to take care that /var/gemini/ is readable by this user.
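For example, something like this is enough (a sketch; adjust ownership and modes to your taste):
# mkdir -p /var/gemini
# chown -R _gemini /var/gemini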
inetd is a wonderful daemon listening on ports and running commands upon connections. This means when someone connects on port 11965, inetd will run vger as _gemini and pass the network data to its standard input; vger will send the result to its standard output, which is captured by inetd and transmitted back to the TCP client.
Tell relayd to forward connections in relayd.conf
log connection
relay "gemini" {
listen on 163.172.223.238 port 1965 tls
forward to 127.0.0.1 port 11965
}
Make links to the certificates and key files according to relayd.conf documentation. You can use acme / certbot / dehydrate or any "Let's Encrypt" client to get certificates. You can also generate your own certificates but it's beyond the scope of this article.
# ln -s /etc/ssl/acme/cert.pem /etc/ssl/163.172.223.238\:1965.crt
# ln -s /etc/ssl/acme/private/privkey.pem /etc/ssl/private/163.172.223.238\:1965.key
Enable inetd and relayd at boot and start them
# rcctl enable relayd inetd
# rcctl start relayd inetd
From here, what's left is populating /var/gemini/ with the files you want to publish; the `index.md` file is special because it is the default file served when no file is requested.
In this article I will explain how to install an LSP plugin for kakoune to add language specific features such as autocompletion, syntax error reporting, easier navigation to definitions and more.
The principle is to use the "Language Server Protocol" (LSP) to communicate between the editor and a daemon specific to a programming language. This can also be done with emacs, vim and neovim using the corresponding plugins.
Language Server Protocol on Wikipedia
For python, _pyls_ would be used while for C or C++ it would be _clangd_.
The how-to will use OpenBSD as a base. The package names may certainly vary for other systems.
Pre-requisites
We need _kak-lsp_ which requires rust and cargo. We will need git too to fetch the sources, and obviously kakoune.
# pkg_add kakoune rust git
Building
Official building steps documentation
I recommend using a dedicated build user when building programs from sources: without a real audit you can't know exactly what happens during the build process. Mistakes could happen and do nasty things to your data.
$ git clone https://github.com/kak-lsp/kak-lsp
$ cd kak-lsp
$ cargo install --locked --force --path .
Configuration
There are a few steps: kak-lsp has its own configuration file, but the default one is good enough, and kakoune must be configured to run the kak-lsp program when needed.
Take care with the second command below: if you built as another user, you have to fix the path.
$ mkdir -p ~/.config/kak-lsp
$ cp kak-lsp.toml ~/.config/kak-lsp/
This configuration file tells which program must be used depending on the programming language.
[language.python]
filetypes = ["python"]
roots = ["requirements.txt", "setup.py", ".git", ".hg"]
command = "pyls"
offset_encoding = "utf-8"
Taking the configuration block for python, we can see the command used is _pyls_.
For kakoune configuration, we need a simple configuration in ~/.config/kak/kakrc
eval %sh{/usr/local/bin/kak-lsp --kakoune -s $kak_session}
hook global WinSetOption filetype=(rust|python|go|javascript|typescript|c|cpp) %{
lsp-enable-window
}
Note that I used the full path of the kak-lsp binary in the configuration file; this is due to a rust issue on OpenBSD.
Link to Rust issue on github
Trying with python
To support python programs you need to install python-language-server which is available with pip. There is no package for it on OpenBSD. If you install the program with pip, take care to have the binary in your $PATH (either by extending $PATH to ~/.local/bin/ or by copying the binary to /usr/local/bin/ or whatever suits you).
The pip command would be the following (your pip binary name may change):
$ pip3.8 install --user 'python-language-server[all]'
Then, opening a python source file should activate the analyzer automatically. If you make a mistake, you should see `!` or `*` in the leftmost column.
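For example, opening a trivial file like this one (a made-up name) with a deliberate error should make a diagnostic marker appear:
# lsp_test.py
def greet(name):
    return "Hello, " + nam  # 'nam' is undefined, pyls should flag this line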
Trying with C
To support C programs, the clangd binary is required. On OpenBSD it is provided by the clang-tools-extra package. If clangd is in your $PATH then you should have working support.
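On OpenBSD that means something like:
# pkg_add clang-tools-extra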
Using kak-lsp
Now that it is installed and working, you may want to read the documentation.
kak-lsp usage
I didn't look deep into it for now; autocompletion triggers automatically but may be slow in some situations.
Default keybindings for "gr" and "gd" are made respectively for "jump to reference" and "jump to definition".
Typing "diag" in the command prompt runs "lsp-diagnostics" which will open a new buffer explaining where errors are warnings are located in your source file. This is very useful to fix errors before compiling or running the program.
Debugging
The official documentation explains well how you can check what is wrong with the setup. It consists of starting kak-lsp in a terminal and kakoune separately, then checking the kak-lsp output. This helped me a lot.
Official troubleshooting guide
In this article I will explain how to download and run the FuguITA OpenBSD live-cd, which is not an official OpenBSD project (it is not endorsed by the OpenBSD project), but has been available for a long time and is carefully updated for every release and published errata.
FuguITA official homepage
I do like this project and I am running their European mirror; downloading it from Europe used to take really long before.
Please note that if you have issues with FuguITA, you must report it to the FuguITA team and not report it to the OpenBSD project.
Preparing
Download the img or iso file on a mirror.
Mirror list from official project page
The file is gzipped; run gunzip on the img file FuguIta-6.8-amd64-202010251.img.gz (the name may change over time because the images get updated to include new errata).
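For example:
$ gunzip FuguIta-6.8-amd64-202010251.img.gz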
Then, copy the file to your usb memory stick. This can be dangerous if you don't write the file to the correct disk!
To avoid mistakes, I plug in the memory stick when I need it, then I check the last lines of the output of dmesg command which looks like:
sd1 at scsibus2 targ 1 lun 0: removable serial.1b1c1a03800000000060
sd1: 15280MB, 512 bytes/sector, 31293440 sectors
This tells me my memory stick is the sd1 device.
Now I can copy the image to the memory stick:
# dd if=FuguIta-6.8-amd64-202010251.img of=/dev/rsd1c bs=10M
Note that I use /dev/rsd1c for the sd1 device. I've added an r to use the raw mode (as opposed to buffered mode) so it's faster, and the c stands for the whole disk (there is a historical explanation).
Starting the system
Boot on your usb memory stick. You will be prompted for a kernel; you can wait or type enter, the default is to use the multiprocessor kernel and there is no reason to use something else.
You will see a prompt "scanning partitions: sd0i sd1a sd1d sd1i" and be asked which is the FuguIta operating device, proposing a default that should be the correct one.
FROM HERE, YOUR KEYBOARD IS IN QWERTY.
Just type enter.
The second question will be the memory disk allowed size (using TMPFS), just press enter for "automatic".
Then, a boot mode will be shown: the best is mode 0 for a livecd experience.
Official documentation in regards to FuguITA specifics options
Keyboard type will be asked; just type the layout you want. Then answer the questions:
- root password
- hostname (you can just press enter)
- IP to use (v4, v6, both [default])
When prompted for your network interfaces, WIFI may not work because the livecd doesn't have any firmware.
Finally, you will be prompted for C for console or X for xenodm. THERE ARE NO USERS except root, so if you start X you can only use root as a user, which I STRONGLY discourage.
You can log in on the console as root, use the two commands "useradd -m username" and "passwd username" to give a password to that user, and then start xenodm.
The livecd can restore data from a local hard drive, this is explained in the start guide of the FuguITA project.
Conclusion
Having FuguITA around is very handy. You can use it to check your hardware compatibility with OpenBSD without installing it. Packages can be installed so it's perfect to check how OpenBSD performs for you and if you really want to install it on your computer.
You can also use it as a usb live system to transport OpenBSD anywhere (the system must be compatible) by using the persistent mode, encryption being a feature! This may be very useful for people traveling a lot who don't necessarily want to travel with an OpenBSD laptop.
As I said in the introduction, the team is doing a very good job at producing FuguITA releases shortly after the OpenBSD release, and they continuously update every release with new erratas.
In this article I will share my opinion about things I like in OpenBSD; this may include a short rant about recent open source practices that don't help non-Linux support.
Privacy
There is no telemetry on OpenBSD. It's good for privacy, there is nothing to turn off to disable reporting information because there is no need to.
The default system settings prevent the microphone from recording sound, and the webcam can't be accessed without user consent because the device is owned by root by default.
Secure firefox / chromium
While the security features added (pledge and mainly unveil) to the market dominating web browsers can be cumbersome sometimes, this is really a game changer compared to using them on other operating systems.
With those security features enabled (by default), the web browsers are only able to retrieve files in a few user defined directories like ~/Downloads or /tmp/ by default, plus some other directories required for the browsers to work.
This means your ~/.ssh or ~/Documents and everything else can't be read by an exploit in a web browser or a malicious extension.
It's possible to replicate this on Linux using AppArmor, but it's absolutely not out of the box and requires a lot of tweaks from the user to get a usable Firefox. I did try; it worked, but it requires a very good understanding of Firefox's needs and of the AppArmor profile syntax to get it to work.
PF firewall
With this firewall, I can quickly check the rules of my desktop or server and understand what they are doing.
I also make heavy use of the bandwidth management feature to throttle programs that don't provide any rate limiting of their own. This is very important to me.
Linux users could use software such as trickle or wondershaper for this.
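To give an idea of what this looks like, here is a rough sketch of a pf.conf queue setup throttling outgoing rsync traffic; the interface name, rates and port are made up for the example and are not my real configuration:
queue main on em0 bandwidth 20M
queue  rsyncq parent main bandwidth 1M max 1M
queue  stdq   parent main bandwidth 19M default
match out on em0 proto tcp to port 873 set queue rsyncq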
It's stable
Apart from the use of some funky hardware, OpenBSD has proven to be very stable and reliable for me. I can easily reach two weeks of uptime on my desktop with a few suspend/resumes every day. My servers have been running 24/7 without incident for years.
I rarely go further than two weeks on my workstation because I use the development version -current and I need to upgrade once in a while.
Low maintenance
Keeping my OpenBSD up to date is very easy. I run syspatch and pkg_add -u twice a day to keep the system up to date. A release upgrade every six months requires a bit of work.
Basically, upgrading every six months looks like this, apart from some specific instructions explained in the upgrade guide (a database server major upgrade for example):
# sysupgrade
[..wait..]
# pkg_add -u
# reboot
Documentation is accurate
Setting up an OpenBSD system with full disk encryption is easy.
Documentation to create a router with NAT is explained step by step.
Every binary or configuration file has its own up-to-date man page.
The FAQ, the website and the man pages should contain everything one needs. This represents a lot of information, it may not be easy to find what you need, but it's there.
If I had to be without internet for some time, I would prefer an OpenBSD system. The embedded documentation (man pages) should help me achieve what I want.
Consider configuring a router with traffic shaping on OpenBSD and another one with Linux, without Internet access. I'd 100% prefer reading the PF man page.
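When you don't know which man page covers a topic, apropos(1) can search the page names and descriptions, for example:
$ apropos "packet filter"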
Contributing is easy
This has been a hot topic recently. I really enjoy the way OpenBSD manages contributions. I download the sources on my system, anywhere I want, modify them, generate a diff and send it to the mailing list. All of this can be done from a console with tools I already use (git/cvs) and email.
There could be an entry barrier for new contributors: you may feel people replying are not kind to you. **This is not true.** If you sent a diff and received criticism (reviews) of your code, this means some people spent time to teach you how to improve your work. I do understand some people may feel it is rude, but it's not.
This year I modestly contributed to the projects OpenIndiana and NixOS; this was the opportunity to compare how contributions are handled. Both projects use github. The workflow is interesting, but understanding and mastering it is extremely complicated.
OpenIndiana official website
NixOS official website
One has to make a github account, fork the project, create a branch, make the changes for the contribution, commit locally, push to the fork, and use the github interface to do a merge request. This is only the short story. On NixOS, my first attempt ended up as a pull request involving 6 months of old commits. With good documentation and training, this could be overcome, and I think this method has some advantages like easy continuous integration of the commits and easy review of code, but it's a real entry barrier for new people.
High quality packages
My opinion may be biased on this (even more than for the previous items), but I really think OpenBSD packages quality is very high. Most packages should work out of the box with sane defaults.
Packages requiring specific instructions have a README file installed with them explaining how to set up the service or the quirks that could happen.
Even if we lack some packages due to lack of contributors and time (in addition to some packages relying too much on Linux to be easy to port), major packages are up to date and working very well.
I will take the opportunity of this article to publish a complaint about a general trend in Open Source.
- programs distributed only using flatpak / docker / snap are really Linux friendly but hostile to non-Linux systems. They often make use of Linux-only features and their build systems are made for the Linux distribution methods.
- nodeJS programs: they are made out of hundreds or even thousands of libraries and are often fragile even on Linux. It is a real pain to get them working on OpenBSD. Some node libraries embed rust programs, some will download a static binary and use it with no fallback solution, or will even try to compile source code instead of using that library/binary from the system when installed.
- programs using git to build: our build process does its best to be clean, the dedicated build user **HAS NO NETWORK ACCESS** and won't run those git commands. There is no reason a build system has to run git to download sources in the middle of the build.
I do understand that the three items above exist because it is easy for developers. But if you write software and publish it, it would be very kind of you to think about how it works on non-Linux systems. Don't hesitate to ask on social media if someone is willing to build your software on a different platform than yours if you want to improve support. We do love BSD friendly developers who won't reject OpenBSD specific patches.
What I would like to see improved
This is my own opinion and doesn't represent the OpenBSD team members' opinions. There are some things I wish OpenBSD would improve.
- Better ARM support
- Wifi speed
- Better performance (gently improving every release)
- FFS improvements in regards to reliability (I often get files in lost+found)
- Faster pkg_add -u
- hardware video decoding/encoding support
- better FUSE support and mount cifs/smb support
- scaling up the contributions (more contributors and reviewers for ports@)
I am aware of all the work required here, and I'm certainly not the person who will improve those. This is not a complaint but a wish list.
Unfortunately, everyone knows OpenBSD features come from hard work and not from wishes submitted to the developers :)
When you consider how small the team is in comparison to the other major OSes, I really think a good and efficient job is done here.
Since my previous article about a continuous integration service to track OpenBSD ports contributions, I made a simple proof of concept that allowed me to track what works and what doesn't work.
The continuous integration goal
A first step for the CI service would be to create a database of diffs sent to ports. This would allow people to track what has been sent and not yet committed and what the state of each contribution is (builds/doesn't build, applies/doesn't apply). I would proceed following this logic:
- a mail arrives and is sent to the pipeline
- it's possible to find a pkgpath out of the file
- the diff applies
- distfiles can be fetched
- portcheck is happy
Step 1 is easy: it could be mails dumped into a directory that gets scanned every X minutes.
Step 2 is already done in my POC using a shell script. It's quite hard and required tuning. Submitted diffs are made with diff(1), cvs diff or git diff. The important part is to retrieve the pkgpath like "lang/php/7.4"; this allows testing that the port exists.
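As an illustration only (not the actual POC script), for a cvs-style diff carrying "Index: lang/php/7.4/Makefile" style headers, the idea could be something like:
# take the first Index: header, strip the optional ports/ prefix
# and the trailing file name
grep '^Index: ' submission.diff | head -n 1 | \
  sed -e 's/^Index: //' -e 's,^ports/,,' -e 's,/[^/]*$,,'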
Step 3 is important; I found three cases so far when applying a diff:
- it works, we can then register in the database that it can be used for a build
- it doesn't work, human investigation required
- the diff is already applied and patch thinks you want to reverse it. It's already committed!
Being able to check if a diff is applied is really useful. When building the contributions database, a daily check of patches that are known to apply can be done. If a reverse patch is detected, this means it's committed and the entry can be deleted from the database. This would be rather useful to keep the database clean automatically over time.
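A rough way to do this classification without modifying the tree is patch(1)'s -C (check) flag; a sketch of the idea, with made-up paths and flags to adapt, could be:
#!/bin/sh
# classify a submitted diff against the ports tree (illustration only)
cd /usr/ports || exit 1
if patch -C -f -p0 -s < /tmp/submission.diff; then
        echo "diff applies"
elif patch -C -f -p0 -s -R < /tmp/submission.diff; then
        echo "reverse-applies: probably already committed"
else
        echo "does not apply: needs a human"
fi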
Step 4 is an inexpensive extra check to be sure the distfiles can be downloaded over the internet.
Step 5 is also an inexpensive check: running portcheck can report easy-to-fix mistakes.
All the steps only require a ports tree. Only step 4 could be abused by someone malicious, using a patch to make the system download huge files or files with legal concerns, but that mail would also appear on the mailing list so the risk is quite limited.
To go further in the automation, building the port is required, but it must be done in a clean virtual machine. We could then report in the database whether the diff produced a package correctly and, if not, provide the compilation log.
Automatic VM creation
Automatically creating an OpenBSD-current virtual machine was tricky but I've been able to sort this out using vmm, rsync and upobsd.
The script downloads the latest sets using rsync; that directory is then served by a local web server. I use upobsd to create an automatic installation with a bsd.rd including my autoinstall file. Then it gets tricky :)
The VM must be started with its storage disk AND the bsd.rd; as it's an auto install, it would reboot after the install finishes and then install again and again.
I found that using the parameter "-B disk" makes the vm shut down after installation for some reason. I can then wait for the vm to stop and start it again without bsd.rd.
My vmm VM creation sequence:
upobsd -i autoinstall-vmm-openbsd -m http://localhost:8080/pub/OpenBSD/
vmctl stop -f -w integration
vmctl start -B disk -m 1G -L -i 1 -d main.qcow2 -b autobuild_vm/bsd.rd integration
vmctl wait integration
vmctl start -m 1G -L -i 1 -d main.qcow2 integration
The whole process is long though. A derived qcow image could be used after creation to try each port faster until we want to update the VM again.
Multiple VMs could be used at once for parallel testing and to make good use of the host resources.
What's done so far
I'm currently able to deposit emails as files in a directory and run a script that will extract the pkgpath, try to apply the patch, download distfiles, run portcheck and run the build on the host using PORTS_PRIVSEP. If the port compiled fine, the email file is deleted and a proper diff is made from the port and moved into a staging directory where I'll review the diffs known to work.
This script stops on a blocking error and writes a short text report for each port. I intended to send this as a reply to the mailing list at first, but maintaining a parallel website for people working on ports seems a better idea.
Simple article for posterity or future-me. I will share here my tweaks to make the iBook G4 laptop (Apple keyboard) suitable for OpenBSD; this should work for Linux too as long as you run X.
Command should be alt+gr
I really need the alt+gr key which is not there on the keyboard; I solved this by using this line in my ~/.xsession:
xmodmap -e "keycode 115 = ISO_Level3_Shift"
i3 and mod4
As the touchpad is incredibly bad by nowadays standards (and it only has 1 button and no scrolling feature!), I am using a window manager that can be entirely keyboard driven. While I'm not familiar with tiling window managers, i3 was easy to understand and light enough. Long time readers may remember I am familiar with stumpwm, but it's not really a dynamic tiling window manager; I can only tolerate i3 using the tabs mode.
But an issue arises: there is no "super" key on the keyboard, and using "alt" would collide with way too many programs. One solution is to use "caps lock" as a "super" key.
I added this in my ~/.xsession file:
xmodmap ~/.Xmodmap
with ~/.Xmodmap having the following instructions:
clear Lock
keycode 66 = Hyper_L
add mod4 = Hyper_L
clear Lock
This will disable the "toggling" effect of caps lock, and will turn it into a "Super" key that will be referred to as mod4 in i3.
Today's post is about Brutaldon, a Mastodon/Pleroma interface in old fashioned HTML like in the web 1.0 era. I will explain how it works and how to install it. Tested and approved on a 16 year old PowerPC laptop, using Mastodon with the w3m or dillo web browsers!
Introduction
Brutaldon is a mastodon client running as a web server. This means you have to connect to a running brutaldon server; you can use a public one like Brutaldon.online and then you will have two ways to connect to your account:
- using oauth, which will redirect through a dedicated API page of your mastodon instance and give back a token once you have logged in properly. This is totally safe to use, but requires javascript to be enabled to work because of the login page on the instance
- the "old login" method, in which you have to provide your instance address, your account login and password. This is not really safe because the brutaldon instance will know your credentials, but you can use any web browser with it. There are not many security issues if you use a local brutaldon instance
How to install it
The installation is quite easy, I wish it could be this easy more often. You need a python3 interpreter and pipenv. If you don't have pipenv, you need pip to install pipenv. On OpenBSD this translates to:
$ pip3.8 install --user pipenv
Note that on some systems, pip3.8 could be pip3, or pip. Due to the coexistence of python2 and python3 for some time, until we can get rid of python2, most python related commands have a suffix telling which python version they use.
If you install pipenv with pip, the path will be ~/.local/bin/pipenv.
Now, very easy to proceed! Clone the code, run pipenv to get the
dependencies, create a sqlite database and run the server.
$ git clone http://git.carcosa.net/jmcbray/brutaldon.git
$ cd brutaldon
$ pipenv install
$ pipenv run python ./manage.py migrate
$ pipenv run python ./manage.py runserver
And voilà! Your brutaldon instance is available on http://localhost:8000, you only need to open it in your web browser and log in to your instance.
As explained in the INSTALL.md
file of the project, this method
isn’t suitable for a public deployment. The code is a Django webapp
and could be used with wsgi and a proper web server. This setup is
beyond the scope of this article.
In this article I will tell you about the
Scuttlebutt social network,
what makes it special and how to join it using OpenBSD. From here,
I’ll refer to Scuttlebutt as SSB.
Introduction to the protocol
You can find all the related documentation on
the official website.
I will make a simplification of the protocol to present it.
SSB is decentralized, meaning there is no central server with clients around it (think about the Twitter model), nor a constellation of servers federating with each other (Fediverse: mastodon, pleroma, peertube…). SSB uses a peer to peer model, meaning nodes exchange data with other nodes. A device with an account is a node; someone using SSB acts as a node.
The protocol requires people to be mutual followers for the private messaging system to work (messages are encrypted end-to-end).
This peer to peer paradigm has specific implications:
- Internet is not required for SSB to work. You could use it with other people in a local network. For example, you could visit a friend's place and exchange your SSB data over their network.
- Nodes own the data: when you join, it can take a very long time to download the content of nodes close to you (relative to the people you follow) because the SSB client will download the data and then serve everything locally. This means you can use SSB while being offline, but also that, in the case seen previously at your friend's place, you can exchange data from mutual friends. Example: if A visits B, B receives A's updates. When you visit B, you will receive B's updates but also A's updates if you follow B on the network.
- Data is immutable: when you publish something on the network, it will be spread across nodes and you can't modify that data. It is important to think twice before publishing.
- Moderation: there is no moderation as there is no authority in control, but people can block nodes they don't want to get data from, and this blocking is published, so other people can easily see who gets blocked and block them too. It seems to work; I don't have an opinion about this.
- You discover parts of the network by following people, giving you access to the people they follow. This makes the discovery of the network quite organic and should create some communities by itself. Birds of a feather flock together!
- It's complicated to share an account across multiple devices because you need to share all your data between the devices, so most people use one account per device.
SSB clients
There are different clients; the top ones I found were Patchwork, Oasis and Manyverse.
There are also lots of applications using the protocol, you can find a list on this link.
One particularly interesting project is git-ssb, hosting a git
repository on the network.
Most of the code related to SSB is written in NodeJS.
In my opinion, Patchwork is the most user-friendly client but Oasis
is very nice too. Patchwork has more features, like being able to
publish pictures within your messages which is not currently possible
with Oasis.
Manyverse works fine but is rather limited in terms of features.
The developer community working on the projects seems rather small
and would be happy to receive some help.
How to install Oasis on OpenBSD
I’ve been able to get the Oasis client to run on OpenBSD. The NodeJS
ecosystem is quite hostile to anything non linux but following the
path of qbit (who solved few libs years
ago), this piece of software works.
$ doas pkg_add libvips git node autoconf--%2.69 automake--%1.16 libtool
$ git clone https://github.com/fraction/oasis
$ cd oasis
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install --only=prod
There is currently ONE issue that requires a hack to start Oasis: the lo0 interface must not have any IPv6 address. You can use the following command as root to remove the IPv6 addresses.
# ifconfig lo0 -inet6
I reported this bug as I’ve not been able to fix it myself.
How to use Oasis on OpenBSD
When you want to use Oasis, you have to run
$ node /path/to/oasis_sources
You can add --help to see the usage output, or --offline if you don't want oasis to do networking.
When you start oasis, you can then open http://localhost:3000 to access the network. Beware that this address is available to anyone having access to your system.
You have to use an invitation from someone to connect to a node
and start following people to increase your range in this small
world.
You can use a public server which acts as a 24/7 node to connect
people together on
https://github.com/ssbc/ssb-server/wiki/Pub-Servers.
How to backup your account
You absolutely need to backup your ~/.ssb/ directory if you don't want to lose your account. There is no central server able to help you recover your account in case of data loss.
If you want to use another client on another computer, you have
to copy this directory to the new place.
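Something as simple as this should do for both a backup and a move to another machine (the hostname is made up):
$ rsync -a ~/.ssb/ othermachine:.ssb/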
I don’t think the whole directory is required, but I have not
been able to find more precise information.
In this long blog post, I will write about the technical details of the OpenBSD stable packages building infrastructure. I have set up the infrastructure with the help of Theo De Raadt who provided me the hardware in summer 2019; since then, OpenBSD users can upgrade their packages using pkg_add -u for critical updates that have been backported by the contributors. Many thanks to them, without their work there would be no packages to build. Thanks to pea@ who is my backup for operating this infrastructure in case something happens to me.
The total lines of code used is around 110 lines of shell.
Original design
In the original design, the process was the following. It was done
separately on each machine (amd64, arm64, i386, sparc64).
Updating ports
The first step is to update the ports tree using cvs up from a cron job and capture its output. If the output shows changes, the process continues to the next steps (the output itself is then discarded).
With CVS being per-directory and not using a database like git or svn, it is not possible to "poll" for an update except by checking every directory for new versions of files. This check is done three times a day.
Make a list of ports to compile
This step is the most complicated of the process and accounts for a third of the total lines of code.
The script uses cvs rdiff between the cvs release and stable branches to show what changed since release, and its output is
branches to show what changed since release, and its output is
passed through a few grep and awk scripts to only retrieve the
“pkgpaths” (the pkgpath of curl is net/curl) of the packages
that were updated since the last release.
From this raw output of cvs rdiff:
File ports/net/dhcpcd/Makefile changed from revision 1.80 to 1.80.2.1
File ports/net/dhcpcd/distinfo changed from revision 1.48 to 1.48.2.1
File ports/net/dnsdist/Makefile changed from revision 1.19 to 1.19.2.1
File ports/net/dnsdist/distinfo changed from revision 1.7 to 1.7.2.1
File ports/net/icinga/core2/Makefile changed from revision 1.104 to 1.104.2.1
File ports/net/icinga/core2/distinfo changed from revision 1.40 to 1.40.2.1
File ports/net/synapse/Makefile changed from revision 1.13 to 1.13.2.1
File ports/net/synapse/distinfo changed from revision 1.11 to 1.11.2.1
File ports/net/synapse/pkg/PLIST changed from revision 1.10 to 1.10.2.1
The script will produce:
net/dhcpcd
net/dnsdist
net/icinga/core2
net/synapse
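The text processing is roughly in this spirit (a simplified sketch, not the actual script), assuming the cvs rdiff output was saved in rdiff.log:
# keep the path of each "File ports/..." line, drop the "ports/" prefix
# and the trailing file name, then deduplicate
grep '^File ports/' rdiff.log | awk '{ print $2 }' | \
  sed -e 's,^ports/,,' -e 's,/[^/]*$,,' | sort -u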
From here, for each pkgpath we have sorted out, the sqlports database is queried to get the full list of pkgpaths of each package; this includes all flavors, subpackages and multi-packages. This is important because an update to the editors/vim pkgpath will trigger this long list of packages:
editors/vim,-lang
editors/vim,-main
editors/vim,gtk2
editors/vim,gtk2,-lang
[...40 results hidden for readability...]
editors/vim,no_x11,ruby
editors/vim,no_x11,ruby,-lang
editors/vim,no_x11,ruby,-main
Once we gathered all the pkgpaths to build and stored them in a
file, next step can start.
Preparing the environment
As the compilation is done on the real system (using PORTS_PRIVSEP though) and not in a chroot, we need to remove all installed packages except the minimum required for the build infrastructure, which are rsync and sqlports.
dpb(1) can't be used because it didn't give good results for building the delta of packages between release and stable.
The various temporary directories used by the ports infrastructure
are cleaned to be sure the build starts in a clean environment.
Compiling and creating the packages
This step is really simple. The ports infrastructure is used
to build the packages list we produced at step 2.
env SUBDIRLIST=package_list BULK=yes make package
In the script there is some code to manage the logs of the previous
batch but there is nothing more.
Every new run of the process will pass over all the packages which
received a commit, but the ports infrastructure is smart enough to
avoid rebuilding ports which already have a package with the correct
version.
Transfer the packages to the signing team
Once the packages are built, we need to pass only the newly built packages to the person who will manually sign them before publishing and letting the mirrors sync.
From the package list, the package file lists are generated and
reused by rsync to only copy the packages generated.
env SUBDIRLIST=package_list show=PKGNAMES make | grep -v "^=" | \
grep ^. | tr ' ' '\n' | sed 's,$,\.tgz,' | sort -u
The system keeps all the -release packages in ${PACKAGE_REPOSITORY}/${MACHINE_ARCH}/all/ (like /usr/ports/packages/amd64/all) to avoid rebuilding all the dependencies required for building a package update, thus we can't simply copy all the packages from the directory where packages are moved after compilation.
Send a notification
The last step is to send an email with the output of rsync, telling which machine built which packages, so the people signing the packages know that some packages are available.
As this process is done on each machine, and the machines don't necessarily build the same packages (no firefox on sparc64) and don't build at the same speed (arm64 is slower), mails from the four machines could arrive at very different times, which led to a small design change.
The whole process is automatic from building to delivering the
packages for signature. The signature step requires a human to be
done though, but this is the price for security and privilege
separation.
Current design
In the original design, all the servers were running their separate
cron job, updating their own cvs ports tree and doing a very long
cvs diff. The result was working but not very practical for the
people signing who were receiving mails from each machine for each
batch.
The new design only changed one thing: one machine was chosen to run the cron job, produce the package list and then copy that list to the other machines, which update their ports tree and run the build. Once all machines have finished building, the initiator machine gathers the outputs and sends a single mail with a summary for each machine. This makes it easier to compare the output of each architecture, and once you receive the email it means every machine finished its job and the signing can be done.
Having the summary of all the building machines resulted in another improvement: in the logic of the script, it is possible to send an email telling that absolutely no package has been built even though the process was triggered, which means something went wrong. From there, I need to check the logs to understand why the last commit didn't produce a package. This can be failures like a distinfo file update forgotten in the commit.
Also, this permitted fixing one issue: as the distfiles are shared through a common NFS mount point, if multiple machines try to fetch a distfile at the same time, they will all fail to build. Now, the initiator machine downloads all the required distfiles before starting the build on every node.
All of the previous scripts were reused, except the one
sending the email which had to be rewritten.
What if you plan to use an OpenVPN tunnel to reach your default gateway, which would put the tun interface in the egress group, and use tun0 in your pf.conf, which is loaded before OpenVPN starts? Here are the few tips I use to solve the problems.
Remove your current default gateway
We don’t want a default gateway on the system. You need to know
the remote address of the VPN server.
If you have a /etc/mygate file, remove it. The /etc/hostname.if file (with if being your interface name, like em0 for example) should look like this:
192.168.1.200
up
!route add -host A.B.C.D 192.168.1.254
- The first line is the IP on my lan
- The second line brings the interface up.
- The third line means you want to reach A.B.C.D via 192.168.1.254, with the IP A.B.C.D being the remote VPN server.
Create the tun0 interface at boot
Create a /etc/hostname.tun0 file with only up as content; that will create tun0 at boot and make it available to pf.conf, so the pf configuration can load even though OpenVPN hasn't started yet.
You may think one could use “egress” instead of the interface name,
but this is not allowed in queuing.
Don’t let OpenVPN manage the route
Don’t use redirect-gateway def1 bypass-dhcp
from the OpenVPN
configuration, this will create a route which is not default
and
so the tun0 interface won’t be in the egress group, which is not
something we want.
Add those two lines in your configuration file, to execute
a script once the tunnel is established, in which we will make
the default route.
script-security 2
up /etc/openvpn/script_up.sh
In /etc/openvpn/script_up.sh you simply have to write:
#!/bin/sh
/sbin/route add -net default X.Y.Z.A
If you have IPv6 connectivity, you have to add this line:
/sbin/route add -inet6 2000::/3 fe80::%tun0
(not sure it’s 100% correct for IPv6 but it works fine for me! If
it’s wrong, please tell me how to make it better).
After modest contributions to the NixOS operating system, which taught me about its contribution process, I found it enjoyable to have an automatic report and feedback about the quality of the submitted work. While on NixOS this requires GitHub, I think this could be applied as well to OpenBSD and its mailing list contribution system. I made a prototype before starting the real work and I'm actually happy with the result.
This is what I get after feeding the script with a mail containing
a patch:
Determining package path ✓
Verifying patch isn't committed ✓
Applying the patch ✓
Fetching distfiles ✓
Distfile checksum ✓
Applying ports patches ✓
Extracting sources ✓
Building result ✓
It requires a lot of checks to find a patch in the file, because we have patches generated from cvs or git which have slightly different outputs. And then, we need to find from where to apply this patch.
The idea would be to retrieve mails sent to ports@openbsd.org by
subscribing, then store metadata about that submission into a
database:
- Sender
- Date
- Diff (raw text)
- Status (already committed, doesn't apply, applies, compiles)
Then, another program will pick a diff from the database, prepare a VM using a qcow2 disk derived from a base image so it always starts fresh, clean and ready, and do the checks within the VM.
Once it is finished, a mail could be sent as a reply to the original
mail to give the status of each step until error or last check. The
database could be reused to make a web page to track what compiles
but is not yet committed. As it’s possible to verify if a patch is
committed in the tree, this can automatically prune committed patches
over time.
I really think this can improve tracking patches sent to ports@ and
ease the contribution process.
DISCLAIMER
- This would not be an official part of the project, I do it on my own
- This may be cancelled
- This may be a bad idea
- This could be used "as a service" instead of pulling automatically from ports, meaning people could send mails to it to receive an automatic review. Ideally this should be done in portcheck(1) but I'm not sure how to verify a diff applies on the ports tree without enforcing requirements
- Human work will still be required to check the content and verify
the port works correctly!
There is one very handy package on OpenBSD named pkglocatedb which provides the command pkglocate.
If you need to find a file or binary/program and you don't know which package contains it, use pkglocate.
$ pkglocate */bin/exiftool
p5-Image-ExifTool-12.00:graphics/p5-Image-ExifTool:/usr/local/bin/exiftool
With the result, I know that the package p5-Image-ExifTool will provide me the command exiftool.
Another example looking for files containing the pattern “libc++”
$ pkglocate libc++
base67:/usr/lib/libc++.so.5.0
base67:/usr/lib/libc++abi.so.3.0
comp67:/usr/lib/libc++.a
comp67:/usr/lib/libc++_p.a
comp67:/usr/lib/libc++abi.a
comp67:/usr/lib/libc++abi_p.a
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.app
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/Info.plist.lib
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qmake.conf
qt4-4.8.7p23:x11/qt4,-main:/usr/local/lib/qt4/mkspecs/unsupported/macx-clang-libc++/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++-32/qplatformdefs.h
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qmake.conf
qtbase-5.13.2p0:x11/qt5/qtbase,-main:/usr/local/lib/qt5/mkspecs/linux-clang-libc++/qplatformdefs.h
As you can see, base sets are also in the database used by pkglocate,
so you can easily find if a file is from a set (that you should
have) or if the file comes from a package.
Find which package installed a file
Klemens Nanni (kn@) told me it's possible to find which package installed a file present in the filesystem using the pkg_info command which comes from the base system. This can be handy to know which package an installed file comes from, without requiring pkglocatedb.
$ pkg_info -E /usr/local/bin/convert
/usr/local/bin/convert: ImageMagick-6.9.10.86p0
ImageMagick-6.9.10.86p0 image processing tools
This tells me the convert binary was installed by the ImageMagick package.
I manage my birthday list in a calendar file so I don't forget about them and so I can use it in scripts.
The calendar file format is easy, but sadly it only works with English month names.
This is an example file with different spacings:
7 August This is 7 august birthday!
8 August This is 8 august birthday!
16 August This is 16 august birthday!
Now that you have a calendar file, you can use the calendar binary on it and show incoming events in the next n days using the -A flag.
calendar -A 20
Note that the default file is ~/.calendar/calendar, so if you use this file you don't need to use the -f flag in calendar.
Now, I also use it in crontab with xmessage to show a popup once a
day with incoming birthdays.
30 13 * * * calendar -A 7 -f ~/.calendar/birthdays | grep . && calendar -A 7 -f ~/.calendar/birthdays | env DISPLAY=:0 xmessage -file -
You have to set the DISPLAY variable so it appears on the screen.
It’s important to check if calendar will have any output before
calling xmessage to prevent having an empty window.
While no one would expect this, there are huge efforts from a small team to bring more games to OpenBSD. In fact, some commercial games now work natively, thanks to Mono or Java. There is no wine or Linux emulation layer in OpenBSD.
Here is a small list of most well known games that run on OpenBSD:
- Northguard (RTS)
- Dead Cells (Side scroller action game)
- Stardew Valley (Farming / Roguelike)
- Slay The Spire (Card / Roguelike)
- Axiom Verge (Side scroller, metroidvania)
- Crosscode (top view twin stick shooter)
- Terraria (Side scroller action game with craft)
- Ion Fury (FPS)
- Doom 3 (FPS)
- Minecraft (Sandbox - not working using latest version)
- Tales Of Maj’Eyal (Roguelike with lot of things in it - open source and free)
I would also like to feature the recently made compatible games from the developer Zachtronics; those are ingenious puzzle games requiring efficiency. There are games involving assembly code, pseudo code, molecules etc…
- Opus Magnum
- Exapunks
- Molek-Syntez
Finally, there are good RPGs running thanks to devoted developers spending their free time working on game engine reimplementations:
- Elder Scroll III: Morrowind (openmw engine)
- Baldur’s Gate 1 and 2 (gemrb engine)
- Planescape: Torment (gemrb engine)
There is a Peertube (opensource decentralized Youtube alternative) channel where I started publishing gaming videos recorded on OpenBSD. Now videos from other people are also published there. OpenBSD Gaming channel
The full list of running games is available in the Shopping guide webpage, including information on how they run, on which store you can buy them and whether they are compatible.
Big thanks to thfr@ who works hard to keep the shopping guide up to date and
who made most of this possible. Many thanks to all the other people in the
OpenBSD Gaming community :)
Note that it seems the last Terraria release/update doesn't work on OpenBSD yet.
While the title may appear quite strange, this article is about installing a package to get a new random wallpaper every time you start the X session!
First, you need to install a package named openbsd-backgrounds which is quite large with a size of 144 MB. This package made by Marc Espie contains lots of pictures shot by some OpenBSD developers.
You can automatically set a picture as a background when xenodm starts and prompts for your username by uncommenting a few lines in the file /etc/X11/xenodm/Xsetup_0:
Uncomment this part
if test -x /usr/local/bin/openbsd-wallpaper
then
/usr/local/bin/openbsd-wallpaper
fi
The command openbsd-wallpaper will display a different random picture on every screen (if you have multiple screens connected) every time you run it.
This article is exceptional in that it is about a French-speaking OpenBSD community, and it was originally published in French.
Hello everyone.
Exceptionally, I am publishing a post in French on my blog because I want to spread the word about the French community obsd4a.
You can, for example, find almost the entire OpenBSD FAQ translated at this address.
On the site's home page you will find links to the forum, the wiki, the blog, the mailing list, and also the information to join the IRC channel (#obsd4* on freenode).
https://openbsd.fr.eu.org/
Hello, as there are so many questions about OpenBSD -current on IRC, Mastodon or reddit, I'm writing this FAQ in the hope it will help people.
The official FAQ already contains answers about -current like Following
-current and using snapshots and
Building the system from
sources.
What is OpenBSD -current?
OpenBSD -current is the development version of OpenBSD. Lots of people use it for everyday tasks.
How to install OpenBSD -current?
OpenBSD -current refers to the last version built from sources obtained with
CVS, however, it’s also possible to get a pre-built system (a snapshot) usually
built and pushed on mirrors every 1 or 2 days.
You can install OpenBSD -current by getting an installation media like usual, but from the path /pub/OpenBSD/snapshots/ on the mirror.
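For example, on the amd64 architecture the installer files can be found on any mirror under a path like this one (file names change with every version bump):
https://cdn.openbsd.org/pub/OpenBSD/snapshots/amd64/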
How do I upgrade from -release to -current?
There are two ways to do so:
- Download the bsd.rd file from the snapshots directory and boot it to upgrade, like for a -release to -release upgrade
- Run the sysupgrade -s command as root; this will basically download all the sets under /home/_sysupgrade and boot on bsd.rd with an autoinstall(8) config.
How do I upgrade my -current snapshot to a newer snapshot?
Exactly the same process as going from -release to -current.
Can I downgrade to a -release if I switch to -current?
No.
What issues can I expect in OpenBSD -current?
There are a few possible issues that one can expect:
Out of sync packages
If a library gets updated in the base system and you want to update packages, they won't be installable until packages are rebuilt with that new library; this usually takes 1 to 3 days.
This only creates issues if you want to install a package you don't already have.
The other way around, you can have an old snapshot and packages that are not installable because the libraries linked to by the packages are newer than what is available in your system; in this case you have to upgrade your snapshot.
Snapshots sets are getting updated on the mirror
If you download the sets from the mirror to update your -current version, you may have an issue with the sha256 sum; this is because the mirror is getting updated and the sha256 file is the first to be transferred, so the sets you are downloading are not the ones the sha256 file refers to.
Unexpected system breakage
Sometimes, very rarely (maybe 2 or 3 times a year?), some snapshots are borked and will prevent the system from booting or lead to regular crashes. In that case, it's important to report the issue with the sendbug utility.
You can fix this by using an older snapshot from the archives server and prevent it from happening by reading the bugs@ mailing list before updating.
Broken package
Sometimes a package update will break it or some other packages; this is often quickly fixed for popular packages, but for some niche packages you may be the only one using it on -current and the only one who can report about it.
If you find breakage in something you use, it may be a good idea to report the problem on the ports@openbsd.org mailing list if nobody did it before. By doing so, the issue will be fixed and the next -release users will be able to install a working package.
Is -current stable enough for a server or a workstation?
It’s really up to you. Developers are all using -current and are forbidden to
break it, so the system should totally be usable for everyday use.
What may be complicated on a server is keeping it updated regularly and facing issues that require troubleshooting (like a major database upgrade which was missing a quirk).
For a workstation I think it’s pretty safe as long as you can deal with
packages that can’t be installed until they are in sync.
This is a little story that happened a few days ago; it explains well how I usually get involved in ports in OpenBSD.
1 - Lurking into ports/graphics/
At first, I was looking at the various ports in the graphics category, searching for an image editor that would run correctly on my offline laptop. Grafx2 is laggy when using the zoom mode and GIMP won't run, so I just open ports randomly to read their pkg/DESCR file.
This way, I often find gems I reuse later; sometimes I have less luck and I just tried 20 ports which are useless to me. Sometimes I find issues in ports by looking randomly like this…
2 - Find the port « comix »
Then, the second or third port I look at is « comix », here is the DESCR file.
Comix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer. It
reads images in ZIP, RAR or tar archives (also gzip or bzip2 compressed)
as well as plain image files.
That looked awesome: I have lots of books as PDFs I want to read but it's not convenient in a "normal" PDF reader, so maybe comix would help!
3 - Using comix
Once comix was compiled (a mix of python and gtk), I start it and I get errors
opening PDFs… I start it again from console, and in the output I get the
explanation that PDF files are not usable in comix.
Then I read about the CBZ or CBT files, they are archives (zip or tar)
containing pictures, definitely not what a PDF is.
4 - mcomix > comix
After a few searches on the Internet, I found that comix's last release is from 2009 and it never supported PDF, so nothing wrong here, but I also found comix had a fork named mcomix.
mcomix forked a long time ago from comix to fix issues and add support for new features (like PDF support); while its last release is from 2016, it works and still receives commits (the last is from late 2019). I'm going to use mcomix!
5 - Installing mcomix from ports
The best way to install a program on OpenBSD is to make a port, so it's correctly packaged, can be deinstalled, and can be submitted to the ports@ mailing list later.
I copied the comix folder to mcomix, used a brain-dead sed command to replace all occurrences of comix with mcomix, and it mostly worked! I won't explain the little details, but I got mcomix to work within a few minutes and I was quite happy!
Fun fact: the comix port Makefile was mentioning mcomix as a suggested upgrade.
6 - Enjoying a CBR reader
With mcomix installed, I was able to read some PDFs; it was a good experience and I was pretty happy with it. I spent a few hours reading, only moments after mcomix was installed.
7 - mcomix works but not all the time
After reading 2 long PDFs, I got issues with the third: some pages were not rendered and not displayed. After digging into this issue a bit, I learned about mcomix internals. PDF reading is done by rendering every page of the PDF using the mutool binary from the mupdf software; this is quite CPU intensive, and for some reason in mcomix the command execution fails while I can run the exact same command a hundred times with no failure. Worse, the issue is not reproducible in mcomix: sometimes some pages will fail to be rendered, sometimes not!
8 - Time to debug some python
I really want to read those PDFs, so I took my favorite editor and started debugging some python, adding more debug output (mcomix has a -W parameter to enable debug output, which is very nice), to try to understand why it fails at getting the output of a working command.
Sadly, my python foo is too low and I wasn't able to pinpoint the issue. I just found that it fails, sometimes, but I wasn't able to understand why.
9 - mcomix on PowerPC
While mcomix is clunky with PDFs, I wanted to check if it was working on
PowerPC. It took some time to get all the dependencies installed on my old
computer, but finally I got mcomix displayed on the screen… and dying on PDF
loading! The crash seems related to GTK and I don’t want to touch that, nobody
will want to patch GTK for that anyway, so I’ve lost hope there.
10 - Looking for alternative
Once I knew about mcomix, I was able to search the Internet for alternatives to
it and also for CBR readers. A program named zathura seems well known here and
we have it in the OpenBSD ports tree.
The weird thing is that it comes with two different PDF plugins, one named
mupdf and the other one poppler. I did try quickly on my amd64 machine
and zathura was working.
11 - Zathura on PowerPC
As Zathura was working nicely on my main computer, I installed it on the PowerPC,
first with the poppler plugin: I was able to view PDFs, but installing this
plugin pulled in so many package dependencies that it was a bit sad. I
deinstalled the poppler PDF plugin and installed the mupdf plugin.
I opened a PDF and… error. I tried again, starting zathura from the
terminal, and I got the message that PDF is not a supported format, with a lot
of lines related to the mupdf.so file not being usable. The mupdf plugin works
on amd64 but is not usable on powerpc; this is a bug I need to report, I don’t
understand why this issue happens but it’s here.
12 - Back to square one
It seems that reading PDFs is a mess, so why couldn’t I convert the PDFs to CBT
files and then use any CBT reader out there, and not have to deal with that
PDF madness!!
13 - Use big calibre for the job
I found on the Internet that Calibre is the most used tool to convert a
PDF into CBT files (or into something else, but I don’t really care here). I
installed calibre, which is not lightweight, started it and wanted to change
the default library path, and the software hung when it displayed the file
dialog. This won’t stop me: I restarted calibre, kept the default path, clicked
on « Add a book » and then it hung again on the file dialog. I did report
this issue on the ports@ mailing list, but it didn’t solve the issue and this
means calibre is not usable.
14 - Using the command line
After all, CBT files are images in a tar file, it should be easy to reproduce
the mcomix process involving mutool to render pictures and make a tar of that.
IT WORKED.
I found two ways to proceed, one is extremely fast but may not make pages in
the correct order, the second requires CPU time.
Making CBT files - easiest process
The first way is super easy, it requires mutool (from the mupdf package) and it
will extract the pictures from the PDF, given it’s not a vector PDF; I’m not
sure what would happen on those. The issue is that in the PDF, the embedded
pictures have a name (which is a number in the few examples I found), and it’s
not necessarily in the correct order. I guess this depends on how the PDF is made.
$ mutool extract The_PDF_file.pdf
$ tar cvf The_PDF_file.tar *jpg
That’s all you need to have your CBT file. In my PDF there were jpg files,
but it may be png in others, I’m not sure.
Making CBT files - safest process (slow)
The other way of making pictures out of the PDF is the one used in mcomix: call
mutool to render each page as a PNG file using the width/height/DPI you
want. That’s the tricky part, you may not want to produce pictures with a larger
resolution than the original pictures (and mutool won’t automatically help you
with this) because you won’t get any benefit. The same goes for the DPI. I
think this could be done automatically with a script checking each PDF
page resolution and using mutool to render the page at the exact same
resolution.
As a rule of thumb, it seems that rendering using the same width as your screen
is enough to produce pictures of the correct size. If you use larger values, it’s
not really an issue, but it will create bigger files and take more time for
rendering.
$ mutool draw -w 1920 -o page%d.png The_PDF_file.pdf
$ tar cvf The_PDF_file.tar page*.png
You will get PNG files for each page, correctly numbered, with a width of 1920
pixels. Note that instead of tar, you can use zip to create a zip file.
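To avoid retyping this for every book, the two commands can be wrapped in a tiny script. This is only a sketch of the process described above; the script name is arbitrary and it simply reuses mutool draw and tar as shown:
#!/bin/sh
# pdf2cbt.sh: render every page of a PDF at a 1920 pixel width
# and pack the pages into a CBT (tar) file next to the original
# usage: sh pdf2cbt.sh The_PDF_file.pdf
set -e
pdf="$1"
out="${pdf%.pdf}.tar"
mutool draw -w 1920 -o page%d.png "$pdf"
tar cvf "$out" page*.png
rm -f page*.png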
15 - Finally reading books again
After all this LONG process, I was finally able to read my PDFs with any CBR
reader out there (even on a phone), and once the conversion is done, viewing
files uses no CPU, unlike mcomix which renders all the pages when you open a file.
I have to use zathura on PowerPC, even if I like it less due to the continuous
pages display (it can’t be turned off), but mcomix definitely works great when
not dealing with PDFs. I’m still unsure it’s worth committing mcomix to the
ports tree if it fails randomly on random pages with PDFs.
16 - Being an open source activist is exhausting
All I wanted was to read a PDF book with a warm cup of tea at hand.
It ended into learning new things, debugging code, making ports, submitting
bugs and writing a story about all of this.
What is this article about?
For some time I have wanted to share how I manage my personal laptop and
systems. I got into the habit of creating a lot of users for just about
everything, for security reasons.
Creating a new user is fast, I can connect as this user using doas
or ssh -X if I need an X app, and this prevents some code from
stealing data from my main account.
Maybe I went too far this way: I have a dedicated irssi user which
is only for running irssi, same with mutt. I also have a user with
a stupid name that I can use for testing X apps, and I can wipe
the data in its home directory (to try fresh firefox profiles in
case of a ports update for example).
How to proceed?
Creating a new user is as easy as this command (as root):
# useradd -m newuser
# echo "permit nopass keepenv solene as newuser" >> /etc/doas.conf
Then, from my main user, I can do:
$ doas -u newuser 'mutt'
and it will run mutt as this user.
This way, I can easily manage lots of services from packages which
don’t come with dedicated daemon users.
For this to be effective, it’s important to have a chmod 700 on
your main user account’s home directory, so other users can’t browse your files.
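For instance, assuming the main account is named solene (the home path is the OpenBSD default), a single command is enough:
$ chmod 700 /home/solene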
Graphical software with dedicated users
It becomes more tricky for graphical software. There are two options there:
- allow another user to use your X session: it will have native performance, but
in case of a security issue in the software your whole X session is accessible
(recording keys, screenshots etc…)
- run the software through ssh -X, which restricts X access for the software,
but the rendering will be a bit sluggish and not suitable for some uses.
Example of using ssh -X compared to ssh -Y:
$ ssh -X foobar@localhost scrot
X Error of failed request: BadAccess (attempt to access private resource denied)
Major opcode of failed request: 104 (X_Bell)
Serial number of failed request: 6
Current serial number in output stream: 8
$ ssh -Y foobar@localhost scrot
(nothing output but it made a screenshot of the whole X area)
Real world example
On a server I have the following new users running:
- torrents
- idlerpg
- searx
- znc
- minetest
- quake server
- an awk cron job parsing http logs
They all can have their own crontabs.
Maybe I use this too much, but it’s fine with me.
This blog post is about an nginx rtmp module for turning your nginx
server into a video streaming server.
The official website of the project is located on github at:
https://github.com/arut/nginx-rtmp-module/
I use it to stream video from my computer to my nginx server, then
viewers can use mpv rtmp://perso.pw/gaming
in order to view the
video stream. But the nginx server will also relay to twitch for
more scalability (and some people prefer viewing there for some
reasons).
The module is already installed with the nginx package since OpenBSD
6.6 (not yet released at this time).
There is no package to install the rtmp module before 6.6.
On other operating systems, check for something like “nginx-rtmp” or
“rtmp” in an nginx context.
Install nginx on OpenBSD:
pkg_add nginx
Then, add the following to the file /etc/nginx/nginx.conf
load_module modules/ngx_rtmp_module.so;
rtmp {
    server {
        listen 1935;
        buflen 10s;

        application gaming {
            live on;

            allow publish 176.32.212.34;
            allow publish 175.3.194.6;
            deny publish all;
            allow play all;

            record all;
            record_path /htdocs/videos/;
            record_suffix %d-%b-%y_%Hh%M.flv;
        }
    }
}
The previous configuration sample is a simple example allowing
176.32.212.34 and 175.3.194.6 to stream through nginx, and it will
record the videos under /htdocs/videos/ (nginx is chrooted in
/var/www).
You can add the following line in the “application” block to relay the
stream to your Twitch broadcasting server, using your API key.
push rtmp://live-ams.twitch.tv/app/YOUR_API_KEY;
I made simple scripts generating thumbnails of the videos and
generating an html index file.
Every 10 minutes, a cron job checks if files have to be generated,
makes thumbnails for the videos (it tries at 05:30 into the video and then
at 00:03 if that doesn’t work, to handle very short videos) and then
creates the html.
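The cron entry itself is nothing special; a sketch could be the following, where the path and the name check.sh (standing for the first script below) are hypothetical:
*/10 * * * * sh /home/user/dev/videos/check.sh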
The script checking for new stuff and starting html generation:
#!/bin/sh

cd /var/www/htdocs/videos

for file in $(find . -mmin +1 -name '*.flv')
do
    echo $file
    PIC=$(echo $file | sed 's/flv$/jpg/')
    if [ ! -f "$PIC" ]
    then
        ffmpeg -ss 00:05:30 -i "$file" -vframes 1 -q:v 2 "$PIC"
        if [ ! -f "$PIC" ]
        then
            ffmpeg -ss 00:00:03 -i "$file" -vframes 1 -q:v 2 "$PIC"
            if [ ! -f "$PIC" ]
            then
                echo "problem with $file" | mail user@my-tld.com
            fi
        fi
    fi
done

cd ~/dev/videos/ && sh html.sh
This one makes the html:
#!/bin/sh

cd /var/www/htdocs/videos

PER_ROW=3
COUNT=0
INROW=0

cat << EOF > index.html
<html>
<body>
<h1>Replays</h1>
<table>
EOF

for file in $(find . -mmin +3 -name '*.flv')
do
    if [ $COUNT -eq 0 ]
    then
        echo "<tr>" >> index.html
        INROW=1
    fi

    COUNT=$(( COUNT + 1 ))
    SIZE=$(ls -lh $file | awk '{ print $5 }')
    PIC=$(echo $file | sed 's/flv$/jpg/')
    echo $file

    echo "<td><a href=\"$file\"><img src=\"$PIC\" width=320 height=240 /><br />$file ($SIZE)</a></td>" >> index.html

    if [ $COUNT -eq $PER_ROW ]
    then
        echo "</tr>" >> index.html
        COUNT=0
        INROW=0
    fi
done

if [ $INROW -eq 1 ]
then
    echo "</tr>" >> index.html
fi

cat << EOF >> index.html
</table>
</body>
</html>
EOF
Hello, for a long time I have wanted to work on a special project using an
offline device.
I started using computers before my parents had internet access and
I was enjoying it. Would it still be the case if I was using a laptop
with no internet access?
When I think about an offline laptop, I immediately think I will miss
IRC, mails, file synchronization, Mastodon and remote ssh to my servers.
But do I really need it _all the time_?
As I started thinking about preparing an old laptop for the experiment,
different ideas with their pros and cons came to my mind.
Over the years, I produced digital data and I can not deny this. I
don't need all of it, but I still want some (some music, my texts,
some of my programs). How would I synchronize data from the offline
system to my main system (which has replicated backups and such)?
At first I was thinking about using a serial line between the two
laptops to synchronize files, but both laptops lack serial ports and
buying gear for that would cost too much for its purpose.
I ended up thinking that using an IP network _is fine_, if I connect only for a
specific purpose. This extended a bit further because I also need to
install packages, and using a usb memory stick from another computer
to get packages and let the offline system use it is _tedious_
and ineffective (downloading packages and their correct dependencies is a
hard task on OpenBSD when you only want the files). I also
came across a really specific problem: my offline device is an old
Apple PowerPC laptop which is big-endian while amd64 is little-endian.
While this does not seem like a problem at first, the OpenBSD FFS filesystem is
dependent on endianness, so I could not share a usb memory device
using FFS between the two; the alternatives are fat, ntfs or ext2, so it is a
dead end.
Finally, using the super slow wireless network adapter from that
offline laptop allows me to connect only when I need to, for a few file
transfers. I am using the system firewall pf to limit access to the outside.
In my pf.conf, I only have rules for DNS, NTP servers, my remote server,
the OpenBSD mirror for packages and my other laptop on the lan. I only
enable wifi if I need to push an article to my blog or if I need to
pull a bit more music from my laptop.
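As an illustration, a minimal pf.conf for this use case could look like the following sketch; all the addresses are placeholders to adapt to your own network:
# block everything, then allow only the few destinations I need
block all
pass out proto { tcp udp } to any port { 53 123 }            # DNS and NTP
pass out proto tcp to 203.0.113.10 port { 22 443 }           # my remote server
pass out proto tcp to ftp.fr.openbsd.org port { 80 443 }     # OpenBSD mirror
pass out proto tcp to 192.168.1.20 port 22                   # other laptop on the lan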
This is not entirely _offline_ then, because I can get access to the
internet at any time, but it helps me keep the device offline.
There is no modern web browser on powerpc, so I restricted packages to
the minimum.
So far, when using this laptop, there is no other distraction than the
stuff I do myself.
At the time I write this post, I only use xterm and tmux, with moc as a
music player (the audio system of the iBook G4 is surprisingly good!),
writing this text with ed and a 72-character-long prompt in order to wrap
words correctly by hand (I already talked about that trick!).
As my laptop has a short battery life, roughly two hours, this also
helps having "sessions" of a reasonable duration. (Yes, I can still
plug the laptop in somewhere).
I have not used this laptop a lot so far, I only started the experiment
a few days ago; I will write about it from time to time.
I plan to work on my gopher space to add new content only available
there :)
Hi,
I’m happy to announce the OpenBSD project will now provide -stable binary
packages. This means that if you run the latest release (with or without
syspatch applied), pkg_add -u will update packages to get security fixes.
Remember to restart services that may have been updated, to be sure to run the
new binaries.
Link to official announcement
I said I would rewrite the ttyplot examples to
make them work on OpenBSD.
Here they are, but a small notice before:
Examples using systat will only work for 10000 seconds; either increase that
-d parameter, or wrap it in an infinite loop so it restarts (but don’t loop
systat with a single run at a time, it needs to run for at least one full cycle
to produce results).
The systat examples won’t work before OpenBSD 6.6, which is not yet
released at the time I’m writing this, but they’ll work on -current after 20 July 2019.
I made a change to systat so it flushes output at every cycle; it was not
possible to parse its output in realtime before.
Enjoy!
Examples list
ping
Replace test.example by the host you want to ping.
ping test.example | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"
cpu usage
vmstat 1 | awk 'NR>2 { print 100-$(NF); fflush(); }' | ttyplot -t "Cpu usage" -s 100
disk io
systat -d 1000 -b iostat 1 | awk '/^sd0/ && NR > 20 { print $2/1024 ; print $3/1024 ; fflush }' | ttyplot -2 -t "Disk read/write in kB/s"
load average 1 minute
{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($8,0,length($8)-1) ; fflush }' | ttyplot -t "load average 1"
load average 5 minutes
{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($9,0,length($9)-1) ; fflush }' | ttyplot -t "load average 5"
load average 15 minutes
{ while :; do uptime ; sleep 1 ; done } | awk '{ print $10 ; fflush }' | ttyplot -t "load average 15"
wifi signal strengh
Replace iwm0 by your interface name.
{ while :; do ifconfig iwm0 | tr ' ' '\n' ; sleep 1 ; done } | awk '/%$/ { print ; fflush }' | ttyplot -t "Wifi strength in %" -s 100
cpu temperature
{ while :; do sysctl -n hw.sensors.cpu0.temp0 ; sleep 1 ; done } | awk '{ print $1 ; fflush }' | ttyplot -t "CPU temperature in °C"
pf state searches rate
systat -d 10000 -b pf 1 | awk '/state searches/ { print $4 ; fflush }' | ttyplot -t "PF state searches per second"
pf state insertions rate
systat -d 10000 -b pf 1 | awk '/state inserts/ { print $4 ; fflush }' | ttyplot -t "PF state inserts per second"
network bandwidth
Replace trunk0 by your interface.
This is the same command as in my previous article.
netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"
Tip
You can easily use those examples over ssh for gathering data, and leave the
plot locally as in the following example:
ssh remote_server "netstat -b -w 1 -I trunk0" | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"
or
ssh remote_server "ping test.example" | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"
If for some reason you want to visualize your bandwidth traffic on an
interface (in or out) in a terminal with a nice graph, here is a small script
to do so, involving ttyplot, a nice piece of software making graphs in a terminal.
The following works on OpenBSD.
You can install ttyplot with pkg_add ttyplot
as root; the ttyplot package has been available
since OpenBSD 6.5.
For Linux, the ttyplot official website
contains tons of examples.
Example
Output example while updating my packages:
IN Bandwidth in KB/s
↑ 1499.2 KB/s#
│ #
│ #
│ #
│ ##
│ ##
│ 1124.4 KB/s##
│ ##
│ ##
│ ##
│ ##
│ ##
│ 749.6 KB/s ##
│ ##
│ ##
│ ## #
│ ## # # # # ##
│ ## # ### # ## # # # ## ## # # ##
│ 374.8 KB/s ## ## #### # # ## # # ### ## ## ### # ## ### # # # # ## # ##
│ ## ### ##### ########## ############# ### # ## ### ##### #### ## ## ###### ## ##
│ ## ### ##### ########## ############# ### #### ### ##### #### ## ## ## ###### ## ###
│ ## ### ##### ########## ############## ### #### ### ##### #### ## ## ######### ## ####
│ ## ### ##### ############################## ######### ##### #### ## ## ############ ####
│ ## ### #################################################### #### ## #####################
│ ## ### #################################################### #############################
└────────────────────────────────────────────────────────────────────────────────────────────────────→
# last=422.0 min=1.3 max=1499.2 avg=352.8 KB/s Fri Jul 19 08:30:25 2019
github.com/tenox7/ttyplot 1.4
In the following command, we will use trunk0 with INBOUND traffic as the
interface to monitor.
At the end of the article, there is a command for displaying both in and out at
the same time, and also instructions for customizing to your need.
Article update: the following command is extremely long and complicated, at
the end of the article you can find a shorter and more efficient version,
removing most of the awk code.
You can copy/paste this command in your OpenBSD system shell, this will produce
a graph of trunk0 inbound traffic.
{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0) { print ($5-old)/1024 ; fflush ; old = $5 } if(old==-1) { old=$5 } }' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"
The script will run an infinite loop doing netstat -ibn
every second and
sending that output to awk.
You can quit it with Ctrl+C.
Explanations
Netstat output contains the total bytes (in or out) since the system has
started, so awk needs to remember the last value and display the difference
between two outputs, skipping the first value because it would make a huge
spike (aka the total network traffic transferred since boot time).
If I decompose the awk script, this is a lot more readable.
Awk is very readable if you take care to format it properly, as with any source code!
#!/bin/sh
{ while :;
  do
      netstat -i -b -n
      sleep 1
  done
} | awk '
BEGIN {
    old=-1
}
/^trunk0/ {
    if(!index($4,":") && old>=0) {
        print ($5-old)/1024
        fflush
        old = $5
    }
    if(old==-1) {
        old = $5
    }
}' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"
Customization
- replace trunk0 by your interface name
- replace both instances of $5 by $6 for OUT traffic (see the example after this list)
- replace /1024 by /1048576 for MB/s values
- remove /1024 for B/s values
- replace 1 in sleep 1 by another value if you want to have the value every
n seconds
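For example, the OUT traffic version is simply the same pipeline with $6 instead of $5 and an adjusted title:
{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0) { print ($6-old)/1024 ; fflush ; old = $6 } if(old==-1) { old=$6 } }' | ttyplot -t "OUT Bandwidth in KB/s" -u "KB/s" -c "#"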
IN/OUT version for both data on the same graph + simpler
Thanks to leot on IRC, netstat can be used in a much more efficient way, removing all the awk parsing!
ttyplot supports having two graphs at the same time, one being in the opposite color.
netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"
I write this blog post as I spent too much time setting up nginx and
SSL on OpenBSD with acme-client, due to nginx being chrooted and not
stripping the challenge path, which is not easy to get right.
First, you need to set up /etc/acme-client.conf correctly. Here is
mine for the domain ports.perso.pw:
authority letsencrypt {
    api url "https://acme-v02.api.letsencrypt.org/directory"
    account key "/etc/acme/letsencrypt-privkey.pem"
}

domain ports.perso.pw {
    domain key "/etc/ssl/private/ports.key"
    domain full chain certificate "/etc/ssl/ports.fullchain.pem"
    sign with letsencrypt
}
This example is for OpenBSD 6.6 (which is current when I write this)
because of the Let’s Encrypt API URL. If you are running 6.5 or 6.4,
replace v02 by v01 in the api url.
Then, you have to configure nginx this way; the most important part in
the following configuration file is the location block handling the
acme-challenge requests. Remember that nginx is chrooted in /var/www so
the path to the acme directory is /acme.
http {
    include mime.types;
    default_type application/octet-stream;
    index index.html index.htm;
    keepalive_timeout 65;
    server_tokens off;

    upstream backendurl {
        server unix:tmp/plackup.sock;
    }

    server {
        listen 80;
        server_name ports.perso.pw;

        access_log logs/access.log;
        error_log logs/error.log info;

        root /htdocs/;

        location /.well-known/acme-challenge/ {
            rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
            root /acme;
        }

        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name ports.perso.pw;

        access_log logs/access.log;
        error_log logs/error.log info;

        root /htdocs/;

        ssl_certificate /etc/ssl/ports.fullchain.pem;
        ssl_certificate_key /etc/ssl/private/ports.key;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        [... stuff removed ...]
    }
}
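With nginx reloaded, you can request the certificate and automate renewal. acme-client exits 0 only when the certificate changed, so the reload only happens after a renewal; a sketch:
# acme-client -v ports.perso.pw && rcctl reload nginx
The same line (without -v) can go into root's crontab to renew automatically, for example once a day:
0 2 * * * acme-client ports.perso.pw && rcctl reload nginx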
That’s all! I wish I could have found that on the Internet, so I share
it here.
This blog post is an update (OpenBSD 6.5 at that time) of this very same
article I published in June 2018. Due to rtadvd being replaced by rad, the old
text was not useful anymore.
I subscribed to a VPN service from the french association Grifon (Grifon
website[FR]) to get IPv6 access to the world and play
with IPv6. I will not talk about the VPN service, it would be pointless.
I now have an IPv6 prefix of 48 bits, which can theoretically hold 2^80 addresses.
I would like the computer connected through the VPN to let other computers on
my network have IPv6 connectivity.
On OpenBSD, this is very easy to do. If you want to provide IPv6 to Windows
devices on your network, you will need one more daemon (see the DHCPv6 part below).
In my setup, I have a tun0 device which has the IPv6 access and re0 which is my
LAN network.
First, configure IPv6 on your lan:
# ifconfig re0 inet6 autoconf
That’s all; you can add a new line “inet6 autoconf” to your file
/etc/hostname.if
to get it at boot.
Now, we have to allow IPv6 to be routed through the different
interfaces of the router.
# sysctl net.inet6.ip6.forwarding=1
This change can be made persistent across reboots by adding
net.inet6.ip6.forwarding=1
to the file /etc/sysctl.conf.
Automatic addressing
Now we have to configure the rad daemon to advertise the prefix we are routing;
devices on the network should be able to get an IPv6 address from its
advertisements.
The minimal configuration of /etc/rad.conf is the following:
interface re0 {
    prefix 2a00:5414:7311::/48
}
In this configuration file we only define the prefix available, this is
equivalent to a dhcp address range. Other attributes could provide DNS
servers to use for example, see the rad.conf man page.
Then enable the service at boot and start it:
# rcctl enable rad
# rcctl start rad
Tweaking resolv.conf
By default OpenBSD will ask for IPv4 when resolving a hostname (see
resolv.conf(5) for more explanation). So, you will never have IPv6
traffic unless you use software which explicitly requests an IPv6
connection or the hostname only has an AAAA record.
# echo "family inet6 inet4" >> /etc/resolv.conf.tail
The file resolv.conf.tail is appended at the end of resolv.conf
when dhclient modifies resolv.conf.
Microsoft Windows
If you have Windows systems on your network, they won’t get addresses
from rad. You will need to deploy a dhcpv6 daemon.
The configuration file for what we want to achieve here is pretty
simple, it consists of telling what range we want to allow on DHCPv6
and a DNS server. Create the file /etc/dhcp6s.conf:
interface re0 {
    address-pool pool1 3600;
};
pool pool1 {
    range 2a00:5414:7311:1111::1000 to 2a00:5414:7311:1111::4000;
};
option domain-name-servers 2001:db8::35;
Note that I added “1111” into the range because it should not be on the
same network as the router. You can replace 1111 by what you want, even CAFE
or 1337 if you want to bring some fun to network engineers.
Now, you have to install and configure the service:
# pkg_add wide-dhcpv6
# touch /etc/dhcp6sctlkey
# chmod 400 /etc/dhcp6sctlkey
# echo SOME_RANDOM_CHARACTERS | openssl enc -base64 > /etc/dhcp6sctlkey
# echo "dhcp6s -c /etc/dhcp6s.conf re0" >> /etc/rc.local
The OpenBSD package wide-dhcpv6 doesn’t provide an rc file to
start/stop the service, so it must be started from a command line; one
way to do it is to put the command in /etc/rc.local
which is run at
boot.
The openssl command is needed for dhcpv6 to start, as it requires a
base64 string as a secret key in the file /etc/dhcp6sctlkey.
I am happy to announce there is now a RSS feed for getting news about new
packages available on my repository
https://stable.perso.pw/
The file is available at https://stable.perso.pw/rss.xml.
I take the occasion of this blog post to explain how the file is generated, as
I did not find an easy tool for this task, so I ended up doing it myself.
I chose to use XSLT, which is not quite common. Briefly, XSLT allows applying
some kind of XML template to an XML data file; this allows loops,
filtering etc… It requires only two parts: the template and the data.
Simple RSS template
The following file is a template for my RSS file; we can see a few tags
starting with xsl: like xsl:for-each or xsl:value-of.
It’s interesting to note that xsl:for-each
can use a condition like
position() < 10
in order to limit the loop to the first 10 items.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
      <channel>
        <description></description>

        <!-- BEGIN CONFIGURATION -->
        <title>OpenBSD unofficial stable packages repository</title>
        <link>https://stable.perso.pw/</link>
        <atom:link href="https://stable.perso.pw/rss.xml" rel="self" type="application/rss+xml" />
        <!-- END CONFIGURATION -->

        <!-- Generating items -->
        <xsl:for-each select="feed/news[position()&lt;10]">
          <item>
            <title><xsl:value-of select="title"/></title>
            <description><xsl:value-of select="description"/></description>
            <pubDate><xsl:value-of select="date"/></pubDate>
          </item>
        </xsl:for-each>
      </channel>
    </rss>
  </xsl:template>
</xsl:stylesheet>
Simple data file
Now, we need some data to use with the template.
I’ve added a comment block so I can copy / paste it to add a new entry into the
RSS easily. As the date is in a painful format to write by hand, the Makefile
driving the build first calls a script replacing the string DATE by the current
date in the correct format.
<feed>
  <news>
    <title>www/mozilla-firefox</title>
    <description>Firefox 67.0.1</description>
    <date>Wed, 05 Jun 2019 06:00:00 GMT</date>
  </news>

  <!-- copy paste for a new item
  <news>
    <title></title>
    <description></description>
    <date></date>
  </news>
  -->
</feed>
Makefile
I love makefiles, so I share it even if this one is really short.
all:
	sh replace_date.sh
	xsltproc template.xml news.xml | xmllint -format - | tee rss.xml
	scp rss.xml perso.pw:/home/stable/

clean:
	rm rss.xml
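The replace_date.sh script is not shown here; a hypothetical reconstruction only needs date and sed to substitute the DATE placeholder with the RFC 822 date format RSS expects:
#!/bin/sh
# replace the DATE placeholder in news.xml by the current date
NOW=$(date -u "+%a, %d %b %Y %H:%M:%S GMT")
sed "s/DATE/$NOW/" news.xml > news.xml.tmp && mv news.xml.tmp news.xml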
When I want to add an entry, I copy / paste the comment block in news.xml,
write the new entry with DATE as a placeholder, run make,
and it’s uploaded :)
The command xsltproc is available from the package libxslt on OpenBSD.
And then, after writing this, I realise that manually editing the result file
rss.xml is as much work as editing the news.xml file and then processing it with
xslt… But I keep this blog post as it can be useful for more complicated
cases. :)
This article explains how to set up a simple samba server to have a CIFS /
Windows shared folder accessible by everyone. This is useful in some cases, but
samba configuration is not straightforward when you only need it for a one-shot
use or for this particular case.
The important point covered here is that no users are needed. The trick comes
from the map to guest = Bad User
configuration line in the [global]
section. This
option will automatically map an unknown user, or no provided user, to the guest
account.
Here is a simple /etc/samba/smb.conf
file to share /home/samba to
everyone; except for map to guest and the shared folder, it’s the stock file with
comments removed.
[global]
   workgroup = WORKGROUP
   server string = Samba Server
   server role = standalone server
   log file = /var/log/samba/smbd.%m
   max log size = 50
   dns proxy = no
   map to guest = Bad User

[myfolder]
   browseable = yes
   path = /home/samba
   writable = yes
   guest ok = yes
   public = yes
If you want to set up this on OpenBSD, it’s really easy:
# pkg_add samba
# rcctl enable smbd nmbd
# vi /etc/samba/smb.conf (you can use previous config)
# mkdir -p /home/samba
# chown nobody:nobody /home/samba
# rcctl start smbd nmbd
And you are done.
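You can quickly check the guest access from any machine with smbclient (from the samba package); the server name is a placeholder here:
$ smbclient -N -L //myserver
$ smbclient -N //myserver/myfolder -c 'ls'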
I switched from a homemade script using mblaze to neomutt (after having used mutt, alpine and mu4e) and it’s difficult to remember everything. So, let’s do a cheatsheet!
- Mark as read: Ctrl+R
- Mark to delete: d
- Execute deletion: $
- Tag a mail: t
- Operation on tagged mails: ;[OP] with OP being the key for that operation.
- Move a mail: s (for save, which is a copy + delete)
- Save a mail: c (for copy)
Delete mails based on date
- use T to enter a date range, with the format [before]-[after], before/after being in DD/MM/YYYY format (YYYY is optional)
- ~d 24/04- marks mails after 24/04 of this year
- ~d -24/04 marks mails before 24/04 of this year
- ~d 24/04-25/04 marks mails between 24/04 and 25/04 (inclusive)
- ;d tells neomutt we want to delete the marked mails
- $ makes the deletion happen
I use ssh tunneling A LOT, for everything. Yesterday, I removed the
public access of my IMAP server; it’s now only available through ssh
tunneling to access the daemon listening on localhost. I have plenty
of daemons listening only on localhost that I can only reach through an
ssh tunnel. If you don’t want to bother with ssh and redirecting the ports you
need, you can also set up a VPN (using ssh, openvpn, iked, tinc…)
between your system and your server. I tend to avoid setting up a VPN for
this use case as it requires more work and more maintenance than
running an ssh server and an ssh client.
The last change, for my IMAP server, created an issue. I want my phone
to access the IMAP server but I don’t want to connect to my main
account from my phone for security reasons. So, I need a dedicated
user that will only be allowed to forward ports.
This is done very easily on OpenBSD.
The steps are:
1. generate ssh keys for the new user
2. add an user with no password
3. allow public key for port forwarding
Obviously, you must allow users (or only this one) to do port forwarding in
your sshd_config.
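If you prefer to keep forwarding disabled globally, a Match block in sshd_config can enable it only for this user; a minimal sketch:
Match User tunnel
    AllowTcpForwarding yes
    X11Forwarding no
    PermitTTY no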
Generating ssh keys
Please generate the keys in a safe place, using
ssh-keygen
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:SOMETHINGSOMETHINSOMETHINSOMETHINSOMETHING user@myhost
The key's randomart image is:
+---[RSA 3072]----+
| |
| ** |
| * ** . |
| * * |
| **** * |
| **** |
| |
| |
| |
+----[SHA256]-----+
This will create your public key in ~/.ssh/id_rsa.pub and the private key in
~/.ssh/id_rsa
Adding a user
On OpenBSD, we will create a user named tunnel; this is done with the
following command as root:
# useradd -m tunnel
This user has no password, so it can’t log in over ssh with a password.
Allow the public key to port forward only
We will use the command restriction in the authorized_keys file to
allow the previously generated key to only forward ports.
Edit /home/tunnel/.ssh/authorized_keys as follows:
command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE
This will print “Tunnel only!” and abort the connection if the user tries to
open a shell or run a command.
Connect using ssh
You can connect with ssh(1) as usual but you
will require the flag -N to not start a shell on the remote server.
$ ssh -N -L 10000:localhost:993 tunnel@host
If you want the tunnel to stay up in the most automated way possible, you can
use autossh from ports, which will do a great job at keeping ssh up.
$ autossh -M 0 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "TCPKeepAlive yes" -N -v -L 9993:localhost:993 tunnel@host
This command will start autossh and restart the tunnel if forwarding doesn’t
work, which is likely to happen when you lose connectivity: it takes some time
for the remote server to effectively disable the forwarding. It also uses
keep-alive checks so the tunnel stays up (this is particularly useful on
wireless connections like 4G/LTE).
The other flags are regular ssh parameters: don’t start a shell, and make a
local forwarding. Don’t forget that as a regular user, you can’t bind to
ports lower than 1024; that’s why I redirect the remote port 993 to the local
port 9993 in the example.
Making the tunnel on Android
If you want to access your personal services from your Android phone, you can
use the ConnectBot ssh client. It’s really easy:
- upload your private key to the phone
- add it in ConnectBot from the main menu
- create a new connection with the user and your remote host
- choose to use public key authentication and choose the registered key
- uncheck “start a shell session” (this is equivalent to the -N ssh flag)
- from the main menu, long touch the connection and edit the forwarded ports
Enjoy!
The following guide is a real world example of drist usage. We will
create a script to deploy munin-node on OpenBSD systems.
We need to create a script that will install munin-node package but
also configure it using the default proposal. This is done easily
using the script file.
#!/bin/sh

# checking munin not installed
pkg_info | grep munin-node

if [ $? -ne 0 ]; then
    pkg_add munin-node
    munin-node-configure --suggest --shell | sh
    rcctl enable munin_node
fi

rcctl restart munin_node
The script contains some simple logic to avoid trying to install
munin-node each time we run it, and also to avoid re-configuring it
automatically every time. This is done by checking if the pkg_info output
contains munin-node.
We also need to provide a munin-node.conf file to allow our munin
server to reach the nodes. For this how-to, I’ll dump the
configuration in the commands using cat, but of course, you can use
your favorite editor to create the file, or copy an original
munin-node.conf file and edit it to suit your needs.
mkdir -p files/etc/munin/
cat <<EOF > files/etc/munin/munin-node.conf
log_level 4
log_file /var/log/munin/munin-node.log
pid_file /var/run/munin/munin-node.pid
background 1
setsid 1
user root
group wheel
ignore_file [\#~]$
ignore_file DEADJOE$
ignore_file \.bak$
ignore_file %$
ignore_file \.dpkg-(tmp|new|old|dist)$
ignore_file \.rpm(save|new)$
ignore_file \.pod$
allow ^127\.0\.0\.1$
allow ^192\.168\.1\.100$
allow ^::1$
host *
port 4949
EOF
Now, we only need to use drist on the remote host:
drist root@myserver
The last version of drist also supports privilege escalation using
doas instead of connecting to root over ssh:
drist -s -e doas user@myserver
Thanks to hard work from thfr@, it is now possible to play the commercial game **Slay The Spire** on OpenBSD.
A small introduction to the game: it's a solo deck building game where you need to climb a tower. Each floor may contain enemies, a treasure, a merchant, an elite (harder enemies) or an event.
There are four playable characters, each unlocked after playing with the previous one. The game is really easy to understand; every game (or run) restarts from the beginning with your character, and at every new floor you may earn items and cards to build a deck for this run.
When you die, you can unlock some new items per character and unlock cards for the next runs. The goal is to reach the top of the tower. Each character is really different to play and each allows a few obvious deck builds.
The game requires OpenBSD 6.5 at minimum, but this method using libgdx works since 6.9. For this you will need to:
1. Buy Slay The Spire on GOG or Steam
2. Copy files from a Slay The Spire installation (Windows or Linux) to your OpenBSD system or unzip the linux installer .sh file
3. Install some packages with pkg_add: openal jdk-11 lwjgl libgdx
4. Search for the .jar file (biggest file), then run libgdx-setup to extract data from the jar file and prepare the game.
5. Run the game with libgdx-run
6. Don't forget to eat, hydrate yourself and sleep. This game is time consuming :)
All settings and saves are stored in the game folder, so you may want to back it up if you don't want to lose your progression.
Again, thanks to thfr@ for his huge work on making games work on OpenBSD!
This article explains how to use haproxy to add a TLS layer to any TCP
protocol. This includes http or gopher. The following example shows
the minimal setup required in order to make it work; haproxy has a lot
of options and I won’t use them.
The idea is to let haproxy manage the TLS part and let your http server
(or any daemon listening on TCP) reply within the wrapped connection.
You need a simple haproxy.cfg which can look like this:
defaults
    mode tcp
    timeout client 50s
    timeout server 50s
    timeout connect 50s

frontend haproxy
    bind *:7000 ssl crt /etc/ssl/certificat.pem
    default_backend gopher

backend gopher
    server gopher 127.0.0.1:7070 check
The idea is that haproxy waits on port 7000 and will use the file
/etc/ssl/certificat.pem as a certificate, and forward requests to the
backend on 127.0.0.1:7070. That is ALL. If you want to do https, you need
to listen on port 443 and forward to your port 80 backend.
The PEM file is made from the private key concatenated with the full chain
certificate. If you use a self-signed certificate, you can make it with the
following command:
cat secret.key certificate.crt > cert.pem
One can use a folder containing PEM certificate files instead of a single file.
This will allow haproxy to receive connections for ALL the certificates
loaded.
For more security, I recommend using the chroot feature and a DH parameters
file, but that’s out of scope here.
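To verify the TLS layer answers as expected, you can connect to the frontend by hand (adjust the address and port to your setup):
$ openssl s_client -connect 127.0.0.1:7000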
Hi,
In this article I will explain how to setup a gopher server supporting
TLS. Gopher TLS support is not “official” as there is currently no RFC
to define it. The community recently chose how to make
it work, while keeping compatibility with old servers / clients.
The way to do it is really simple.
Client A tries to connect to Server B: Client A tries a TLS handshake.
If Server B answers the TLS handshake correctly, then Client A
sends the gopher request and Server B answers the gopher request. If
Server B doesn’t understand the TLS handshake, it will probably
output a regular gopher page, which is thrown away, and Client A
retries the connection using plaintext gopher and Server B answers the
gopher request.
This is easy to achieve because gopher protocol doesn’t require the
server to send anything to the client before the client sends its
request.
The TLS layer and the dispatching can be added using
sslh and relayd. You could use haproxy instead of relayd, but
the latter is in the OpenBSD base system so I will use it. Thanks parazyd
for sharing about sslh for this use case.
sslh is a protocol demultiplexer: it listens on a port and,
depending on what it receives, it will try to guess the protocol used
by the client and send it to the corresponding backend. Its first purpose
was to make ssh available on port 443 while still having an https daemon
working on that server.
Here is a schema of the setup
+→ relayd for TLS + forwarding
↑ ↓
↑ tls? ↓
client -> sslh TCP 70 → + ↓
↓ not tls ↓
↓ ↓
+→ → → → → → → gopher daemon on localhost
This method allows wrapping any server to make it TLS compatible. The
best case would be to have TLS-compatible servers which do all the
work without requiring sslh and something to add the TLS, but it’s
currently a way to show that TLS for gopher is real.
Relayd
The relayd(1) part is easy: you first need a x509 certificate for the
TLS part. I will not explain here how to get one, there are already
plenty of how-tos, and one can use Let’s Encrypt with acme-client(1) to
get one on OpenBSD.
We will write our configuration in /etc/relayd.conf
log connection
relay "gopher" {
    listen on 127.0.0.1 port 7000 tls
    forward to 127.0.0.1 port 7070
}
In this example, relayd listens on port 7000 and our gopher daemon
listens on port 7070. According to relayd.conf(5), relayd will look
for the certificate at /etc/ssl/private/$LISTEN_ADDRESS:$PORT.key and
/etc/ssl/$LISTEN_ADDRESS:$PORT.crt; with the current example you
will need the files /etc/ssl/private/127.0.0.1:7000.key and
/etc/ssl/127.0.0.1:7000.crt
relayd can be enabled and started using rcctl:
# rcctl enable relayd
# rcctl start relayd
Gopher daemon
Choose your favorite gopher daemon, I recommend geomyidae but any
other valid daemon will work, just make it listening on the correct
address and port combination.
# pkg_add geomyidae
# rcctl enable geomyidae
# rcctl set geomyidae flags -p 7070
# rcctl start geomyidae
SSLH
We will use sslh_fork (but sslh_select would be valid too, they have
different pros/cons). The --tls
parameter tells where to forward a
TLS connection while --ssh
will forward to the gopher daemon. This
works because the ssh protocol is already configured within sslh and,
in the author's view, behaves like a gopher daemon here: the client
doesn’t expect the server to send data first.
# pkg_add sslh
# rcctl enable sslh_fork
# rcctl set sslh_fork flags --tls 127.0.0.1:7000 --ssh 127.0.0.1:7070 -p 0.0.0.0:70
# rcctl start sslh_fork
Client
You can easily test if this works using openssl to connect by hand to port 70:
$ openssl s_client -connect 127.0.0.1:70
You should see a lot of output, which is the TLS handshake, then you
can send a gopher request like “/” and you should get a result. Using
telnet on the same address and port should give the same result (without TLS).
My gopher client clic already supports gopher TLS and is available
at git://bitreich.org/clic and only requires the ecl common lisp
interpreter to compile.
This is the second article of the series about iSCSI. In this one, you will
learn how to connect to an iSCSI target using the OpenBSD base daemon iscsid.
The configuration file of iscsid doesn’t exist by default; its location is
/etc/iscsi.conf. It can be easily written using the following:
target1="100.64.2.3"
myaddress="100.64.2.2"

target "disk1" {
    initiatoraddr $myaddress
    targetaddr $target1
    targetname "iqn.1994-04.org.netbsd.iscsi-target:target0"
}
While most lines are really obvious, it is mandatory to have the
initiatoraddr line; many thanks to cwen@ for pointing this out when I was stuck
on it.
The targetname value will depend on the iSCSI target server. If you use
netbsd-iscsi-target, then you only need to care about the last part, aka
target0, and replace it by the name of your target (which is target0 for the
default one).
Then we can enable the daemon and start it:
# rcctl enable iscsid
# rcctl start iscsid
In your dmesg, you should see a line like:
sd4 at scsibus0 targ 1 lun 0: <NetBSD, NetBSD iSCSI, 0> SCSI3 0/direct fixed t10.NetBSD_0x5c6cf1b69fc3b38a
If you use netbsd-iscsi-target, the whole line should be identical except for the
sd4 part which can change depending on your hardware.
If you don’t see it, you may need to reload the iscsid configuration file with
iscsictl reload.
Warning: iSCSI is a bit of a pain to debug. If it doesn’t work, double check the
IPs in /etc/iscsi.conf, and check your PF rules on the initiator and the
target. You should at least be able to telnet into the target IP on port 3260.
Once you found your new sd device, you can format it and mount it as a regular
disk device:
# newfs /dev/rsd4c
# mount /dev/sd4c /mnt
iSCSI is far more efficient and faster than NFS, but it has a totally different
purpose. I’m using it on my powerpc machines to build packages on it. This
reduces the usage of their old IDE disks while giving better response times and
equivalent speed.
This is the first article of a series about iSCSI.
iSCSI is a protocol designed for sharing a block device across the
network as if it was a local disk. This doesn’t permit using that
disk from multiple places at once though, except if you use a
specific filesystem like GFS2 or OCFS2 (Linux only). In this article,
we will learn how to create an iSCSI target, which is the “server”
part of iSCSI; the target is the system holding the disk and making
it available to others on the network.
OpenBSD does not have a target server in base, we will have to use
net/netbsd-iscsi-target for this. The setup is really simple.
First, we obviously need to install the package, and we will activate the daemon
so it starts automatically at boot, but don’t start it yet:
# pkg_add netbsd-iscsi-target
# rcctl enable iscsi_target
The configuration files are in the /etc/iscsi/ folder, which contains the files
auths and targets. The default configuration files are the same. By
looking at the source code, it seems that auths is used there but it seems
to have no use at all. We will just overwrite it every time we modify
targets to keep them in sync.
Default /etc/iscsi/targets (with comments stripped):
extent0 /tmp/iscsi-target0 0 100MB
target0 rw extent0 10.4.0.0/16
The first line defines the file holding our disk in the second field, and the
last field defines its size. When iscsi-target is started, it will
create the files as required with the size defined here.
The second line defines permissions: in that case, the extent0 disk can be used
read/write by the network 10.4.0.0/16. For this example, I will only change the
netmask to suit my network, then I copy targets over auths.
Let’s start the daemon:
# rcctl start iscsi_target
# rcctl check iscsi_target
iscsi_target(ok)
If you want to restrict ports using PF, you only have to allow TCP port
3260 from the network that will connect to the target. The corresponding line
would look like this:
pass in proto tcp to port 3260
Done!
It’s been a long time since I last wrote a “port of the week”.
This week, I am happy to present you sct, a very small utility software to
set the color temperature of your screen. You can install it on OpenBSD with
pkg_add sct
and its usage is really simple: just run sct $temp
where $temp is the
temperature you want to get on your screen.
The default temperature is 6500; if you lower this value, the screen will
shift toward red, meaning your screen will appear less blue and this may be
more comfortable for some people. The temperature you want to use depends on
the screen and on your feeling: I have one screen which is correct at 5900
but another old screen which turns too red below 6200!
You can add sct 5900
to your .xsession file to run it when you start your
X11 session.
There is an alternative to sct whose name is redshift; it is more complicated
as you need to tell it your location with latitude and longitude, and, as a
daemon, it will continuously adjust your screen temperature depending on the
time of day. This is possible because when you know your location on earth and
the time, you can compute the sunrise and dawn times. sct is not a daemon:
you run it once and it does not change the temperature until you call it again.
Hi, I rarely post about external links or other people's work, but at FOSDEM
2019 Vincent Delft gave a
talk about running OpenBSD as a full featured NAS.
I do use OpenBSD on my NAS, and I have wanted to write an article about it for a
long time but never did. Thanks to Vincent, I can just share his work, which is
very interesting if you plan to make your own NAS.
Videos can be downloaded directly with the following links provided by Fosdem:
In this third Tor article, we will discover the web browser Tor
Browser.
The Tor Browser is an official Tor project. It is a modified
Firefox, including some default settings changes and some extensions.
The changes are all related to privacy and anonymity. It has
been made to be easy to browse the Internet through Tor without
leaving behind any information which could help identify you, because
there is much more information than your public IP address which
could be used against you.
It requires tor daemon to be installed and running, as I covered in my
first Tor article.
Using it is really straightforward.
How to install tor-browser
$ pkg_add tor-browser
How to start tor-browser
$ tor-browser
It will create a ~/TorBrowser-Data folder at launch. You can remove it
whenever you want; it doesn’t contain anything sensitive but is required for
the browser to work.
If you are using opensmtpd on a device not
always connected to the internet, you may want to see which mails did not go
out, and force them to be delivered NOW when you are finally connected to the
Internet.
We can use smtpctl to show the current queue.
$ doas smtpctl show queue
1de69809e7a84423|local|mta|auth|so@tld|dest@tld|dest@tld|1540362112|1540362112|0|2|pending|406|No MX found for domain
The previous command will report nothing if the queue is empty.
In the previous output, we see that there is one mail from me to
dest@tld which is pending due to “No MX found for domain” (which is
normal as I had no internet when I sent the mail).
We need to extract the first field, which is 1de69809e7a84423 in the
current example.
In order to tell opensmtpd to deliver it now, we will use the
following command:
$ doas smtpctl schedule 1de69809e7a84423
1 envelope scheduled
$ doas smtpctl show queue
My mail was delivered; it’s not in the queue anymore.
If you wish to deliver all envelopes in the queue, this is as simple as:
$ doas smtpctl schedule all
In this second Tor article, I will present an interesting Tor feature
named hidden services. The principle of a hidden service is to
make a network service available from anywhere, with the only
prerequisites being that the computer must be powered on, tor must not be
blocked, and it has network access.
This service will be available through an address that doesn't disclose
anything about the server's internet provider or its IP; instead, a
hostname ending in .onion will be provided by tor for
connecting. This hidden service will only be accessible through Tor.
There are a few advantages of using hidden services:
- privacy, hostname doesn’t contain any hint
- security, secure access to a remote service not using SSL/TLS
- no need for running some kind of dynamic dns updater
The drawback is that it’s quite slow and it only works for TCP
services.
From here, we assume that Tor is installed and working.
Running a hidden service requires modifying the Tor daemon
configuration file, located at /etc/tor/torrc on OpenBSD.
Add the following lines in the configuration file to enable a hidden
service for SSH:
HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
The directory /var/tor/ssh_service will be created. The
directory /var/tor is owned by the user _tor and not readable by
other users. The hidden service directory can be named as you want,
but it should be owned by user _tor with restricted
permissions. The tor daemon will take care of creating the directory with
correct permissions once you reload it.
Now you can reload the tor daemon to make the hidden service
available.
$ doas rcctl reload tor
In the /var/tor/ssh_service directory, two files are created. What
we want is the content of the file hostname which contains the
hostname to reach our hidden service.
$ doas cat /var/tor/ssh_service/hostname
piosdnzecmbijclc.onion
Now, we can use the following command to connect to the hidden service
from anywhere.
$ torsocks ssh piosdnzecmbijclc.onion
In the Tor network, this feature doesn’t use an exit node. Hidden services
can be used for various services like http, imap, ssh, gopher etc…
Using a hidden service isn’t illegal nor does it make the computer relay
tor traffic; as previously, just check if you can use Tor on your
network.
Note: it is possible to have a version 3 .onion address which will
prevent hostname collisions, but this produces very long
hostnames. This can be done like in the following example:
HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
HiddenServiceVersion 3
This will produce a really long hostname like
tgoyfyp023zikceql5njds65ryzvwei5xvzyeubu2i6am5r5uzxfscad.onion
If you want to have both the short and long hostnames, you need to specify
the hidden service twice, with different folders.
Take care: if you run an ssh service on your public IP and use this
same ssh daemon on the hidden service, the host keys will be the same,
implying that someone could theoretically associate both and know that
this public IP runs this hidden service, breaking anonymity.
Tor is a network service allowing you to hide your traffic. People
sniffing your network will not be able to know what server you reach
and people on the remote side (like the administrator of a web
service) will not know where you are from. Tor helps keep your
anonymity and privacy.
To make it quick, tor makes use of an entry point that you reach
directly, then servers acting as relays which are not able to decrypt the data
relayed, up to an exit node which will do the real request for
you, and the network response will go the opposite way.
You can find more details on the
Tor project homepage.
Installing tor is really easy on OpenBSD. We need to install it
and start its daemon. The daemon will listen by default on localhost
on port 9050. On other systems, it may be quite similar: install the
tor package and enable the daemon if not enabled by default.
# pkg_add tor
# rcctl enable tor
# rcctl start tor
Now, you can use your favorite program, look at the proxy settings and
choose “SOCKS” proxy, v5 if possible (it manages the DNS queries), and
use the default address 127.0.0.1 with port 9050.
If you need to use tor with a program that doesn’t support setting a
SOCKS proxy, it’s still possible to use torsocks to wrap it; that
will work with most programs. It is very easy to use.
# pkg_add torsocks
$ torsocks ssh remoteserver
This will make ssh go through the tor network.
Using tor won’t make you relay anything, and it is legal in most
countries. Tor is like a VPN; some countries have laws about VPNs, so check
your country's laws if you plan to use tor. Also, note that using
tor may be forbidden on some networks (companies, schools etc..)
because it allows escaping filtering, which may be against some kind
of “usage agreement” of the network.
I will cover the relaying part later, which can lead to legal
uncertainty.
Note: as torsocks is a bit of a hack, because it uses LD_PRELOAD to
wrap network system calls, there is a cleaner way to do it with
ssh (or any program supporting a custom command to initialize the
connection) using netcat.
ssh -o ProxyCommand='/usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p' address.onion
This can be simplified by adding the following lines to your
~/.ssh/config file, in order to automatically use the proxy
command when you connect to a .onion hostname:
Host *.onion
    ProxyCommand /usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p
This netcat command is tested under OpenBSD; there are different
netcat implementations, so the flags may differ or may not even
exist.
The default OpenBSD partition layout uses a pre-defined template. If
you have a disk bigger than 356 GB, you will have unused space with the
default layout (346 GB before 6.4).
It’s possible to create a new partition to use that space if you did
not modify the default layout at installation. You only need to start
disklabel with the flag -E and type a to add a partition; the
default will use all remaining space for the partition.
# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
> a
partition: [m]
offset: [741349952]
size: [258863586]
FS type: [4.2BSD]
> w
> q
No label changes.
The new partition here is m. We can format it with:
# newfs /dev/rsd0m
Then, you should add it to your /etc/fstab. Use the same DUID as for
the other partitions (here 52fdd1ce48744600), followed by the letter
of the new partition; the line would look something like this:
52fdd1ce48744600.m /data ffs rw,nodev,nosuid 1 2
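If you are not sure about the DUID of your disk, it is shown in the
disklabel output; for example, assuming the disk is sd0:
# disklabel sd0 | grep duid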
It will be automatically mounted at boot; you only need to create the
folder /data. Now you can run
# mkdir /data
# mount /data
and /data is usable right now.
You can read disklabel(8) and
newfs(8) for more information.
Simple command line to display your installed packages listed by size
from smallest to biggest.
$ pkg_info -sa | paste - - - - | sort -n -k 5
Thanks to sthen@ for the command, I was previously using one involving
awk which was less readable. paste is often forgotten; it has very
specific uses which can't easily be mimicked with other tools, its
purpose being to join multiple lines into one following some specific rules.
You can easily modify the output to convert the size from bytes to
megabytes with awk:
$ pkg_info -sa | paste - - - - | sort -n -k 5 | awk '{ $NF=$NF/1024/1024 ; print }'
This divides the last element (using space separator) of each line
twice by 1024 and displays the line.
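If you only want the few biggest packages, appending tail to the same
pipeline works fine:
$ pkg_info -sa | paste - - - - | sort -n -k 5 | tail -n 10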
Following a discussion on the OpenBSD misc mailing list, today I
will write about how to manage the priority (as in nice priority) of
your daemons or services.
In man page rc(8), one can read:
Before init(8) starts rc, it sets the process priority, umask, and
resource limits according to the “daemon” login class as described in
login.conf(5). It then starts rc and attempts to execute the sequence of
commands therein.
Using /etc/login.conf we can manage some limits for services and
daemons, using their rc script name.
For example, to run jenkins at the lowest priority (so it doesn't
cause trouble when it builds), the following line will set it to nice 20.
jenkins:priority=20
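The login class is applied when the daemon is started, so the service
needs a restart for the change to take effect; a way to check the
resulting nice value could look like this (the ps keywords used here
are my assumption, see ps(1)):
# rcctl restart jenkins
$ ps -ax -o nice,args | grep jenkins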
If you have a file /etc/login.conf.db you have to update it from
/etc/login.conf using cap_mkdb. This creates a hashed database for
faster information retrieval when the file is big. By default, that
file doesn't exist and you don't have to run cap_mkdb. See
login.conf(5) for more information.
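In that case, updating the database boils down to a single command:
# cap_mkdb /etc/login.conf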
In this article I will show how to configure OpenSMTPD, the default mail server
on OpenBSD, to relay mail sent locally to your smtp server. In practice, this
allows sending mail through “localhost” via the right relay, which also makes
it possible to queue mail even if your computer isn't connected to the
internet. Once connected, opensmtpd will send the mails.
All you need to understand the configuration and write your own is in the
man page smtpd.conf(5). This is only a
highlight of what is possible and how to achieve it.
In the OpenBSD 6.4 release, the configuration of opensmtpd changed drastically:
now you have to define matching rules and the action to take when a mail
matches a rule, and you have to define those actions.
In the following example, we will see two kinds of relay: the first goes
through smtp over the Internet, which is what you will most likely want to
set up; the other one relays to a remote server that doesn't allow relaying
from outside.
/etc/mail/smtpd.conf
table aliases file:/etc/mail/aliases
table secrets file:/etc/mail/secrets
listen on lo0
action "local" mbox alias <aliases>
action "relay" relay
action "myserver" relay host smtps://myrelay@perso.pw auth <secrets>
action "openbsd" relay host localhost:2525
match mail-from "@perso.pw" for any action "myserver"
match mail-from "@openbsd.org" for any action "openbsd"
match for local action "local"
match for any action "relay"
I defined two relay actions: the first, “myserver”, uses the credentials
labeled “myrelay” and auth <secrets> to tell opensmtpd it needs
authentication. The other action, “openbsd”, will only relay to localhost
on port 2525.
To use them, I define two matching rules of the very same kind: if the mail
I want to send matches the mail-from @domain-name, then the relay “myserver”
or “openbsd” is chosen.
The “openbsd” relay is only available when I create an SSH tunnel binding
port 25 of the remote server to my local port 2525, with the flag
-L 2525:127.0.0.1:25.
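For reference, such a tunnel could be opened with something along these
lines, the remote host name being a placeholder:
$ ssh -N -L 2525:127.0.0.1:25 user@remoteserver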
For a relay using authentication, the login and password must be defined in
the file /etc/mail/secrets like this: myrelay login:Pa$$W0rd
smtpd.conf(5) explains creation
of /etc/mail/secrets like this:
touch /etc/mail/secrets
chmod 640 /etc/mail/secrets
chown root:_smtpd /etc/mail/secrets
Now, restart smtpd. Then if you need to send mail, just use the “mail”
command or localhost as an smtp server. Depending on your From address, a
different relay will be used.
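A quick way to restart smtpd and send a test mail could be the following,
the recipient address being a placeholder:
# rcctl restart smtpd
$ echo "test" | mail -s "test from localhost" someone@example.com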
Deliveries can be checked in /var/log/maillog log file.
See mails in the queue:
doas smtpctl show queue
Try to deliver them now:
doas smtpctl schedule all
I wrote a script generating an RSS file from the content of the page
https://www.openbsd.org/faq/current.html
This allows being notified when a big change is made in -current.
The file is available at this place: https://perso.pw/openbsd-current.xml
Today I will cover a specific topic on OpenBSD networking. If you are using a
laptop, you may switch from ethernet to wireless network from time to time.
There is a simple way to keep the network up instead of having to disconnect /
reconnect every time.
It's possible to aggregate your wireless and ethernet devices into one trunk
pseudo-device in failover mode, which gives ethernet the priority when connected.
To achieve this, it's quite simple. If you have devices em0 and iwm0,
create the following files.
/etc/hostname.em0
up
/etc/hostname.iwm0
join "office_network" wpakey "mypassword"
join "my_home_network" wpakey "9charshere"
join "roaming phone" wpakey "something"
join "Public Wifi"
up
/etc/hostname.trunk0
trunkproto failover trunkport em0 trunkport iwm0
dhcp
As you can see in the wireless device configuration, we can specify multiple
networks to join; this is a new feature that will be available starting with
the 6.4 release.
You can enable the new configuration by running sh /etc/netstart
as root.
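You can then check which physical port is currently active on the trunk with:
$ ifconfig trunk0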
This setup is explained in the trunk(4)
man page and in the
OpenBSD FAQ as well.
Old article
Hello, it turned out that this article is obsolete. The security used in it is
not safe at all, so the goal of this backup system isn't achievable; thus it
should not be used and I need another backup system.
One of the most important features of dump for me was keeping track of the
inode numbers. A solution is to save the list of inode numbers and their paths
in a file before doing a backup. This can be achieved with the following command.
$ doas ncheck -f "\I \P\n" /var
If you need a backup tool, I would recommend the following:
Duplicity
It supports remote backends like ftp/sftp, which is quite convenient as you
don't need any configuration on the other side. It supports compression and
incremental backups. I think it has some GUI tools available.
Restic
It supports remote backends like cloud storage providers or sftp, and it
doesn't require any special tool on the remote side. It supports deduplication
of files and is able to manage multiple hosts in the same repository; this
means that if you back up multiple computers, the deduplication will work
across them. This is the only backup software I know of allowing this (I do
not count backuppc, which I find really unusable). A short usage sketch of
restic is given after this list.
Borg
It supports a remote backend over ssh, but only if borg is installed on the
other side. It supports compression and deduplication, but it is not possible
to save multiple hosts inside the same repository without a lot of hacks
(which I won't recommend).
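For what it's worth, here is a minimal sketch of a restic workflow over sftp;
the repository location and the backed up paths are placeholders:
$ restic -r sftp:user@backupserver:/backups init
$ restic -r sftp:user@backupserver:/backups backup /home /etc
$ restic -r sftp:user@backupserver:/backups snapshots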
This article will explain quickly how to bind a folder to access it
from another path. It can be useful to give access to a specific
folder from a chroot without moving or duplicating the data into the
chroot.
Real world example: “I want to be able to access my 100GB folder
/home/my_data/ from my httpd web server chrooted in /var/www/”.
The trick on OpenBSD is to use NFS on localhost. It’s pretty simple.
# rcctl enable portmap nfsd mountd
# echo "/home/my_data -network=127.0.0.1 -mask=255.255.255.255" > /etc/exports
# rcctl start portmap nfsd mountd
The order is really important. You can check that the folder is
available through NFS with the following command:
$ showmount -e
Exports list on localhost:
/home/my_data 127.0.0.1
If you don't see any line after “Exports list on localhost:”, you
should kill mountd with pkill -9 mountd and start it again. I
experienced this twice when starting all the daemons from the same
command, but I'm not able to reproduce it. By the way, mountd only
supports reload.
If you modify /etc/exports, you only need to reload mountd using
rcctl reload mountd.
Once you have checked that everything is alright, you can mount the
exported folder on another folder with the command:
# mount localhost:/home/my_data /var/www/htdocs/my_data
You can add the -ro parameter to the export line in /etc/exports
if you want it to be read-only where you mount it.
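For example, the export line would then become the following, and mountd
needs a reload afterwards:
/home/my_data -ro -network=127.0.0.1 -mask=255.255.255.255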
Note: on FreeBSD/DragonflyBSD, you can use mount_nullfs /from /to;
there is no need to set up a local NFS server. On Linux you can use
mount --bind /from /to, and there are other ways that I won't cover here.
If you have enough memory on your system and you can afford to use a
few hundred megabytes to store temporary files, you may want to mount
an mfs filesystem on /tmp. That will help save your SSD drive, and if
you use an old hard drive or a memory stick, it will reduce your disk
load and improve performance. You may also want to mount a ramdisk on
other mount points like ~/.cache/ or a database directory for some
reason, but I will just explain how to achieve this for /tmp, which is
a very common use case.
First, you may have heard about tmpfs, but it was disabled in
OpenBSD years ago because it wasn't stable enough and nobody fixed
it. Instead, OpenBSD has a special filesystem named mfs, which is an FFS
filesystem on a reserved memory space. When you mount an mfs
filesystem, the size of the partition is reserved and can't be used
for anything else (tmpfs, like the one on Linux, doesn't reserve the
memory).
Add the following line in /etc/fstab (following fstab(5)):
swap /tmp mfs rw,nodev,nosuid,-s=300m 0 0
The permissions of the mountpoint /tmp should be fixed before
mounting it, meaning that the /tmp folder on the / partition
should be changed to 1777:
# umount /tmp
# chmod 1777 /tmp
# mount /tmp
This is required because mount_mfs inherits permissions from the
mountpoint.
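Once remounted, you can confirm that /tmp is now living on mfs:
$ mount | grep /tmp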
Frequently asked questions (with answers) on #openbsd IRC channel
Please read the official OpenBSD FAQ
I am writing this to answer questions asked too many times.
If some answers get good enough, maybe we could try to merge them into the
OpenBSD FAQ if the topic isn't covered there.
If the topic is covered, then a link to the official FAQ should be used.
If you want to participate, you can fetch the page using gopher protocol and
send me a diff:
$ printf '/~solene/article-openbsd-faq.txt\r\n' | nc dataswamp.org 70 > faq.md
OpenBSD features / not features
Here is a list for newcomers telling what OpenBSD is and what it is not
See OpenBSD Innovations
Packet Filter : super awesome firewall
Sane defaults : you install, it works, no tweak
Stability : upgrades go smooth and are easy
pledge and unveil : security features to reduce privileges of software, lots of ports are patched
W^X security
Microphone muted by default, unlockable by root only
Video devices owned by root by default, not usable by users until permission change
Only has the FFS file system, which is slow and has no “features”
No Wine for Windows compatibility
No Linux compatibility
No Bluetooth support
No USB 3 full speed performance
No VM guest additions
Only the in-house VMM for being a VM host, which only supports OpenBSD and some Linux guests
Poor fuse support (it crashes quite often)
No Nvidia support (Nvidia’s fault)
No containers / docker / jails
Does OpenBSD have a Code of Conduct?
No, and there is no known plan to have one.
This is a topic that upsets OpenBSD people; just don't ask about it and send
patches.
What is the OpenBSD release process?
OpenBSD FAQ official information
The last two releases are called “-release” and are officially supported
(patches for security issues are provided).
The -stable version is the latest release with the base system patches applied;
the -stable ports tree has some patches backported from -current, mainly to fix
security issues. Official packages for -stable are built and are picked up
automatically by pkg_add(1).
What is -current?
It's the development version with the latest packages and latest code.
You shouldn't use it just to get the latest package versions.
How do I install -current ?
OpenBSD FAQ about current
- download the latest snapshot install .iso or .fs file from your
favorite mirror, under the /snapshots/ directory
- boot from it
How do I upgrade to -current?
OpenBSD FAQ about current
You can use the script sysupgrade -s; note that the flag is only
needed if you are not running -current right now, but it is harmless
otherwise.
When you fetch OpenBSD src or ports from CVS and you want to save
bandwidth during the process, there is a little trick that changes
everything: compression.
Just add -z9
to the parameters of your cvs command line and the
remote server will send you compressed files, saving roughly 10 times the
bandwidth, or speeding up the transfer 10 times, or both (in my
case I have different users on my network and I limit my
incoming bandwidth so other people can have bandwidth too, so it is
important to reduce the data transferred when possible).
The command line should look like:
$ cvs -z9 -qd anoncvs@anoncvs.fr.openbsd.org:/cvs checkout -P src
Don’t abuse this, this consumes CPU on the mirror.
Today OpenBSD 6.1 has been released; I won't copy & paste the change
list but, in a few words, it gets better.
Link to the official announcement
I already upgraded a few servers, with both methods: one with a bsd.rd
upgrade, which requires physical access to the server, and the other
method, well explained in the upgrade guide, which requires untarring
the files and moving some of them around. I recommend using bsd.rd if
possible.
Hello,
I have a pfSense appliance (Netgate 2440) with a USB console port;
while it used to be a serial port, devices now seem to have a USB
one. If you plug a USB cable from an OpenBSD box into it, you will see this in your dmesg:
uslcom0 at uhub0 port 5 configuration 1 interface 0 "Silicon Labs CP2104 USB to UART Bridge Controller" rev 2.00/1.00 addr 7
ucom0 at uslcom0 portno 0
To connect to it from OpenBSD, use the following command:
# cu -l /dev/cuaU0 -s 115200
And you’re done
Let's Encrypt is a free service which provides free SSL
certificates. It is fully automated and there are a few tools to
generate your certificates with it. In the following lines, I will
just explain how to get a certificate in a few minutes. You can find
more information on the Let's Encrypt website.
To make it simple, the tool we will use will generate some keys on the
computer and send a request to the Let's Encrypt service, which will use
an http challenge (there are also dns and other kinds of challenges)
to check that you really own the domain for which you want the
certificate. If the challenge process succeeds, you get the certificate.
Please, if you don't understand the following commands, don't type
them.
While the following is correct for OpenBSD, it may change slightly for
other systems. acme-client is part of the base system; you can read
the man page acme-client(1).
Prepare your http server
For each certificate you request, you will be challenged for each
domain on port 80. A file must be made available under the path
“/.well-known/acme-challenge/”.
You must have this in your httpd config file. If you use another
web server, you will need to adapt it.
server "mydomain.com" {
root "/empty"
listen on * port 80
location "/.well-known/acme-challenge/*" {
root { "/acme/" , request strip 2 }
}
}
The request strip 2
part is IMPORTANT. (I lost 45 minutes figuring
out why root “/acme/” wasn't working without it.)
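After adding this block, it doesn't hurt to check the configuration syntax
and reload httpd, assuming httpd is already enabled and running:
# httpd -n
# rcctl reload httpd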
Prepare the folders
As stated in the acme-client man page, if you don't need to change the
paths, you can run the following commands with root privileges:
# mkdir /var/www/acme
# mkdir -p /etc/ssl/acme/private /etc/acme
# chmod 0700 /etc/ssl/acme/private /etc/acme
Request the certificates
As root, type the following to generate the certificates. The verbose
flag is interesting as you will see whether the challenge step
works. If it doesn't work, you should try to fetch manually a file at
the same path that Let's Encrypt tried, and run the command again once
you succeed.
# acme-client -vNn mydomain.com www.mydomain.com mail.mydomain.com
Use the certificates
Now, you can use your SSL certificates for your mail server, imap
server, ftp server, http server… There is a little drawback: if you
generate a certificate for a lot of domains, they are all written in
the certificate. This implies that if someone visits one page and looks
at the certificate, that person will know every domain you have under
SSL. I think it's possible to request each certificate independently,
but you will have to play with acme-client flags and write some kind of
script to automate this.
The certificate file is located at /etc/ssl/acme/fullchain.pem and
contains the full certification chain (as its name makes explicit). The
private key is located at /etc/ssl/acme/private/privkey.pem.
Restart the services using the certificate.
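For example, with httpd a TLS-enabled server block could look like this
sketch, to be adapted to your setup:
server "mydomain.com" {
    listen on * tls port 443
    tls certificate "/etc/ssl/acme/fullchain.pem"
    tls key "/etc/ssl/acme/private/privkey.pem"
    root "/htdocs"
}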
Renew certificates
Certificates are valid for 3 months. Just type
acme-client mydomain.com www.mydomain.com mail.mydomain.com
Restart your ssl services
EASY !
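To automate the renewal, a weekly cron(8) entry along these lines is a
common approach; it relies on acme-client's exit status (non-zero when
nothing changed), so the reload only happens after an actual renewal,
and the service to reload is up to you:
0 3 * * 1 acme-client mydomain.com && rcctl reload httpd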