About me: My name is Solène Rapenne, pronouns she/her. I like learning and
sharing knowledge. Hobbies: '(BSD OpenBSD Qubes OS Lisp cmdline gaming security QubesOS internet-stuff). I
love percent and lambda characters. OpenBSD developer solene@. No AI is involved in this blog.
Contact me: solene at dataswamp dot org or
@solene@bsd.network (mastodon).
I recently made a very hard decision: I moved my emails to Proton Mail.
This will certainly be a shock for people who have followed this blog for a long time; it was a shock for me as well! It was actually pretty difficult to think about this topic objectively, so I would like to explain how I came to this decision.
I have been self-hosting my own email server since I bought my first domain name, back in 2009. The server has been migrated multiple times, from one hosting company to another, regularly changing the underlying operating system for fun. It has been running on: Slackware, NetBSD, FreeBSD, NixOS and Guix.
First, I need to explain my previous self-hosted setup, and what I do with my emails.
I have two accounts:
one for my regular emails, mailing lists, friends, family
one for my company to reach client, send quotes and invoices
Ideally, all my emails would be retrieved locally and not stored on the server. But I use a lot of devices (most are disposable), and keeping everything on a single computer would not work for me.
Because my emails are stored remotely and contain a lot of private information, I have never been really happy with how email works at all. My Dovecot server has access to all my emails, unencrypted, and a single password is enough to connect to it. Adding a VPN helps protect Dovecot if it is not exposed publicly, but the server could still be compromised by other means. OpenBSD's smtpd had critical vulnerabilities patched a few years ago, one of them basically allowing root access; since then I have never been really comfortable with my email setup.
I have been looking for ways to secure my emails, which is how I came up with the setup encrypting incoming emails with GPG. It is far from ideal, and I stopped using it quickly: it breaks searches, requires a lot of CPU on the server, and does not even encrypt all the information.
Someone showed me a Dovecot plugin to encrypt emails completely; however, my understanding of its encryption is that the IMAP client must authenticate with a plain text password that Dovecot then uses to unlock an asymmetric encryption key. The security model is questionable: if the Dovecot server is compromised, the users' passwords are available to the attacker, who can then decrypt all the emails. It would still be better than nothing, except if the attacker has root access.
One thing I need from my emails is for them to reach their recipients. My emails were almost always marked as spam by big email providers (Gmail, Microsoft); this has been an issue for me for years, but recently it became a real problem for my business. My email servers were always perfectly configured with everything required to be considered as legitimate as possible, but it never fully worked.
Why did I choose Proton Mail over another email provider? There are a few reasons; I evaluated several providers before deciding.
Proton Mail is a paid service, and this is actually an argument in itself: I would not trust a good service to work for free. That would be too good to be true, so it would either be a scam or be making money on my data, who knows.
They offer zero-knowledge encryption and MFA, which is exactly what I wanted. Only I should be able to read my emails, even if the provider is compromised; adding MFA on top is just perfect because it requires two secrets to access the data. Their zero-knowledge security could be criticized for a few things; ultimately, there is no guarantee they do it as advertised.
Long story short, when you create your account, Proton Mail generates an encryption key on their servers, protected by your account password. When you log in to use the service, the encrypted key is sent to you so all crypto operations happen locally, but there is no way to verify whether they kept your private key unencrypted at the beginning, or whether they modified their web apps to log the password you type. The native applications are less vulnerable to the second problem, as it would impact many users and leave evidence. I do trust them to do things right, although I have no proof.
I did not choose Proton Mail for end-to-end encryption; I only use GPG occasionally and I could already use it before.
IMAP is possible with Proton Mail when you have a paid account, but you need to use their "bridge": a client that connects to Proton with your credentials, downloads all encrypted emails locally, then exposes an IMAP and SMTP server on localhost with dedicated credentials. All emails are saved locally and it syncs continuously; it works great, but it is not lightweight. There is an alternative implementation named hydroxide, but it did not work for me. The bridge does not support CalDAV and CardDAV, which is not great but not really an issue for me anyway.
Before migrating, I verified that reversibility was possible, i.e. being able to migrate my emails away from Proton Mail. In case they stop providing their export tool, I would still have a local copy of all my IMAP emails, which is exactly what I would need to move them somewhere else.
There are certainly better alternatives than Proton with regard to privacy, but Proton is not _that_ bad on this topic, it is acceptable enough for me.
I did not know I would appreciate scheduled email sending, but it is a thing, and I do not need to keep my computer on.
It is possible to generate aliases (10 or unlimited depending on the subscription). What is great about this is that it takes a couple of seconds to generate a unique alias, and replying to an email received on an alias automatically uses that alias as the From address (a webmail feature). On my own server, I had been using a lot of different addresses with a "+" sub-address tag, which was rarely accepted by online forms, so I switched to a dot, but these are not real aliases. So I started managing smtpd aliases through Ansible, and it was really painful to add a new alias every time I needed one. Did I mention I like this alias feature? :D
If I want to send an end-to-end encrypted email without GPG, there is an option to protect the content with a password: the recipient actually receives a link leading to a Proton Mail interface that asks for the password to decrypt the content and allows them to reply. I have no idea if I will ever use it, but at least it is a more user-friendly end-to-end encryption method. Tuta offers the same feature, but there it is the only e2e method.
Proton offers logs of login attempts on my account, which surprised me.
There is an onion access to their web services in case you prefer to connect using tor.
The web interface is open source; one should be able to build it locally and connect to Proton's servers. I guess it should work?
Proton Mail cannot be used as an SMTP relay by my servers, except through the open source bridge hydroxide.
The calendar only works on the website and the smartphone app; it does not integrate with the phone's calendar, although in practice I did not find this to be an issue, everything works fine. Contacts support is weaker on Android: contacts are confined to the Mail app, and I still have my CardDAV server.
The web app is the first-class citizen, but at least it is good.
Nothing prevents Proton Mail from reading your incoming and outgoing emails; you need to use end-to-end encryption if you REALLY need to protect your emails from that.
I was using two accounts, which would require the more expensive "Duo" subscription on Proton Mail. I solved this by creating two identities plus label and filter rules to separate my two "accounts" (personal and professional). I do not really like this, although it is not really an issue at the moment as one of them gets relatively low traffic.
The price is certainly high: the "Mail Plus" plan is 4€ / month (48€ / year) if you subscribe for 12 months, but it is limited to 1 domain, 10 aliases and 15 GB of storage. The "Proton Unlimited" plan is 10€ / month (120€ / year) but comes with the kitchen sink: unlimited aliases, 3 domains, 500 GB of storage, and access to all Proton services (that you may not need...) like VPN, Drive and Pass. In comparison, hosting your email service on a cheap server should not cost you more than 70€ / year, and you can also self-host a Nextcloud / Seafile (equivalent to Drive, although data is stored encrypted there), a VPN and a Vaultwarden instance (equivalent to Pass) in addition to the emails.
Emails are limited to 25 MB, which is low given that I always configured my own server to allow 100 MB attachments. That said, large attachments created delivery issues on most recipient servers, so it is not a _real_ issue, but I prefer being able to decide on this kind of limitation myself.
If I was to self-host again (which may be soon! Who knows), I would do it differently to improve the security:
one front server with the SMTP server, cheap and disposable
one server for IMAP
one server to receive and analyze the logs
Only the SMTP server would be publicly reachable, all other ports would be closed on every server, the servers would communicate with each other through a VPN, and they would export their logs to a machine used only for forensics and detecting security breaches.
Such a setup would be an improvement if I were self-hosting my emails again, but the cost and the time to operate it are non-negligible. It is also ecological nonsense to need three servers for a single person's emails.
I started this blog post by saying the decision was hard, so hard that I was not able to decide until a day before renewing my email server for one year. I wanted to give Proton a chance for a month to evaluate it completely, and I have to admit I like the service much more than I expected...
My Unix hacker heart hurts terribly on this one. I would like to go back to self-hosting, but I know I cannot reach the level of security I am looking for, simply because email sucks in the first place. A solution would be to get rid of the huge archive burden I am carrying, but I regularly search for information in this archive and I have not found any usable "mail archive system" that could digest everything and serve it locally.
I wrote this blog post two days ago, and I have not stopped thinking about this topic since the migration.
The real problem certainly lies in my use case: not keeping my emails on the remote server would solve my problems. I need to figure out how to handle that. Stay tuned :-)
A domain name must expose some information through WHOIS queries: basically which registrar is responsible for it, and who can be contacted for technical or administrative matters.
Almost every registrar offers a feature to hide your personal information; you certainly do not want your full name, address and phone number exposed by a single WHOIS request.
You can perform a WHOIS request using the link below, a lookup service directly managed by ICANN.
If you use TLS certificates for your services, and ACME (Let's Encrypt or alternatives), all the domains for which a certificate was issued can easily be queried from the public certificate transparency logs.
You can visit the following website, type a domain name, and you will immediately get the list of certificates issued for it, revealing its subdomains.
If you use a custom domain in your email, it is highly likely that you have some IT knowledge and that you are the only user of your email server.
Based on this assumption (IT person + only user of the domain), someone who has your email address can quickly search for anything related to your domain and figure out it is related to you.
Anywhere you connect, your public IP is known to the remote servers.
Some bored sysadmin could take a look at the IPs in their logs and check whether some public service is running on them; probing for TLS services (HTTPS, IMAPS, SMTPS) will immediately reveal the domain names associated with that IP through the certificates, and then they could search even further.
There are not many solutions to prevent this, unfortunately.
The public IP situation could be mitigated either by keeping the hosting at home while renting a cheap server with a public IP, establishing a VPN between the two and using the server's public IP for your services, or by moving your services to such a remote server entirely. This is an extra cost, of course. When possible, you could also expose the service as a Tor hidden service or over I2P if that works for your use case; then you would not need to rent a server at all.
The TLS certificate names being public could easily be solved by generating self-signed certificates locally and dealing with it. Depending on your services, this may be just fine, but if strangers use the services, having to trust the certificate on first use (TOFU) may appear dangerous. Some software fails to connect to services with self-signed certificates and offers no way to bypass the verification...
Self-hosting at home can be practical for various reasons: reusing old hardware, better local throughput, high performance for cheap... but you need to be aware of potential privacy issues that could come with it.
If you use Proton VPN with the paid plan, you have access to their port forwarding feature. It allows you to expose a TCP and/or UDP port of your machine on the public IP of your current VPN connection.
This can be useful for multiple use cases, let's see how to use it on Linux and OpenBSD.
If you do not have a privacy need with regard to the service you need to expose to the Internet, renting a cheap VPS is a better solution: cheaper price, stable public IP, no weird script for port forwarding, use of standard ports allowed, reverse DNS, etc...
Proton VPN's port forwarding feature is not really practical, at least not as practical as port forwarding on your local router. The NAT is done using the NAT-PMP protocol (an alternative to UPnP): you are given a random port number for 60 seconds, and the same port number is used for TCP and UDP.
There is a NAT-PMP client named natpmpc (available almost everywhere as a package) that needs to run in an infinite loop to renew the port lease before it expires.
This is rather impractical for multiple reasons:
you get a random port assigned, so you must reconfigure your daemon every time
the lease renewal script must run continuously
if something goes wrong (script failure, short network outage) and the lease is not renewed, you will get a new random port
Although it has shortcomings, it is a useful feature that was dropped by other VPN providers because of abuses.
Install the package natpmpd to get the NAT-PMP client.
Create a script with the following content, and make it executable:
#!/bin/sh
PORT=$(natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk '/Mapped public/ { print $4 }')
# check if the current port is correct
grep "$PORT" /var/i2p/router.config || /etc/rc.d/i2p stop
# update the port in I2P config
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," /var/i2p/router.config
# make sure i2p is started (in case it was stopped just before)
/etc/rc.d/i2p start
while true
do
date # use for debug only
natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo "error Failure natpmpc $(date)"; break ; }
sleep 45
done
The script searches for the assigned port number in the I2P configuration, and stops the service if the port is not found there. Then the port lines are updated with sed (in all cases, it does not hurt). Finally, i2p is started; this only does something if i2p was stopped just before, otherwise nothing happens.
Then, in an infinite loop running every 45 seconds, the TCP and UDP port forwardings are renewed. If something goes wrong, the script exits.
If you want to use supervisord to start the script at boot and maintain it running, install the package supervisor and create the file /etc/supervisord.d/nat.ini with the following content:
[program:natvpn]
command=/etc/supervisord.d/continue_nat.sh ; choose the path of your script
autorestart=unexpected ; when to restart if exited after running (def: unexpected)
Enable supervisord at boot, start it and verify it started (a configuration error prevents it from starting):
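On OpenBSD this should boil down to the following commands (assuming the rc script installed by the supervisor package is named supervisord):
rcctl enable supervisord
rcctl start supervisord
rcctl check supervisord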
The setup is exactly the same as for OpenBSD, just make sure the package providing natpmpc is installed.
Depending on your distribution, if you want to automate the script running / restart, you can run it from a systemd service with auto restart on failure, or use supervisord as explained above.
If you use a different network namespace, just make sure to prefix the commands using the VPN with ip netns exec vpn.
Here is the same example as above but using a network namespace named "vpn" to start i2p service and do the NAT query.
#!/bin/sh
PORT=$(ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk '/Mapped public/ { print $4 }')
FILE=/var/i2p/.i2p/router.config
grep "$PORT" $FILE || sudo -u i2p /var/i2p/i2prouter stop
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," $FILE
ip netns exec vpn sudo -u i2p /var/i2p/i2prouter start
while true
do
date
ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 && ip netns exec vpn natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo "error Failure natpmpc $(date)"; break ; }
sleep 45
done
Proton VPN's port forwarding feature is useful when you need to expose a local network service on a public IP. Automating it is required to make it work reliably due to the unusual implementation.
In this blog post, you will learn how to configure your email server to encrypt all incoming emails using each user's GPG public key (when one exists). This will prevent anyone from reading the emails unless they own the corresponding GPG private key. This is known as "encryption at rest".
This setup, while effective, has limitations: headers will not be encrypted, searching in emails will break as the content is encrypted, and you obviously need to have the GPG private key available when you want to read your emails (if you read emails on your smartphone, you need to decide whether you really want your GPG private key there).
Encryption is CPU intensive (and memory intensive too for large emails). I tried it on an openbsd.amsterdam virtual machine, and it was working fine until someone sent me emails with 20 MB attachments. On a bare-metal server, there is absolutely no issue. Maybe GPG makes use of hardware-accelerated cryptography that is not available in virtual machines running under the OpenBSD hypervisor vmm.
This is not an original idea: Etienne Perot wrote about a similar setup in 2012 and enhanced the gpgit script we will use here. While his blog post is obsolete by now because of all the changes that happened in Dovecot, the core idea remains the same. Thank you very much Etienne for your work!
This setup is useful to protect your emails stored on the IMAP server: if the server or your IMAP account is compromised, the content of your emails will be encrypted and unusable.
You must be aware that email headers are not encrypted: recipients / senders / date / subject will remain in clear text even after encryption. If you already use end-to-end encryption with your recipients, there is no benefit to this setup.
An alternative is to not leave any emails on the IMAP server, although they could still be recovered as they are written to the disk until you retrieve them.
Personally, I keep many emails on my server, and I am afraid that a 0-day vulnerability could be exploited on my email server, allowing an attacker to retrieve the content of all my emails. OpenSMTPD had critical vulnerabilities a few years ago, including a remote code execution, so it is a realistic threat.
I wrote a privacy guide (for a client) explaining all the information shared through emails, with possible mitigations and their limitations.
This setup makes use of the program gpgit, a Perl script that encrypts emails received on its standard input using GPG; this is a complicated task because the email structure can be very complex. I have not been able to find any alternative to this script. The gpgit repository also contains a script to encrypt an existing mailbox (maildir format); that script must be run on the server, and I have not tested it yet.
You will configure a "global" sieve rule (not user-defined) that processes all emails before any other sieve filter. This sieve script triggers a filter (a program allowed to modify the email) and passes the email on the standard input of the shell script encrypt.sh, which in turn runs gpgit with the matching username after verifying that a gnupg directory exists for them. If there is no gnupg directory, the email is not encrypted; this allows multiple users on the email server without enforcing encryption for everyone.
If a user has multiple addresses, it is the system account name that is used as the local part of the GPG key address.
All the following paths will be relative to the directory /usr/local/lib/dovecot/sieve/, you can cd into it now.
Create the file encrypt.sh with this content, replace the variable DOMAIN with the domain configured in the GPG key:
#!/bin/sh
DOMAIN="puffy.cafe"
NOW=$(date +%s)
DATA="$(cat)"
if test -d ~/.gnupg
then
echo "$DATA" | /usr/local/bin/gpgit "${USER}@${DOMAIN}"
NOW2=$(date +%s)
echo "Email encryption for user ${USER}: $(( NOW2 - NOW )) seconds" | logger -p mail.info
else
echo "$DATA"
echo "Email encryption for user for ${USER} none" | logger -p mail.info
fi
Make the script executable with chmod +x encrypt.sh. This script will add a new line to your email logs every time an email is processed, including the username and the time required for encryption (when encryption happened). You could extend the script to discard the Subject header from the email if you want to hide it; I do not provide the implementation as I expect this task to be trickier than it looks if you want to handle all corner cases.
You may have sieve_global_extensions already set, in that case update its value.
The variable sieve_filter_exec_timeout allows the script encrypt.sh to run for 200 seconds before being stopped; you should adapt the value to your system. I came up with 200 seconds to be able to encrypt emails with 20 MB attachments on an openbsd.amsterdam virtual machine. On a bare-metal server with a Ryzen 5 CPU, it takes less than one second for the same email.
The full file should look like the following (in case you followed my previous email guide):
##
## Plugin settings
##
# All wanted plugins must be listed in mail_plugins setting before any of the
# settings take effect. See <doc/wiki/Plugins.txt> for list of plugins and
# their configuration. Note that %variable expansion is done for all values.
plugin {
sieve_plugins = sieve_imapsieve sieve_extprograms
# From elsewhere to Spam folder
imapsieve_mailbox1_name = Spam
imapsieve_mailbox1_causes = COPY
imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve
# From Spam folder to elsewhere
imapsieve_mailbox2_name = *
imapsieve_mailbox2_from = Spam
imapsieve_mailbox2_causes = COPY
imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve
sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve
# for GPG encryption
sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve
sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment +vnd.dovecot.filter
sieve_before = /usr/local/lib/dovecot/sieve/global.sieve
sieve_filter_exec_timeout = 200s
}
Open the file /etc/dovecot/conf.d/10-master.conf, uncomment the variable default_vsz_limit and set its value to 1024M. This is required as GPG uses a lot of memory, and without this the process would be killed and the email lost. I found 1024M to work with attachments up to 45 MB; however, you should raise this value if you plan to receive bigger attachments.
Restart dovecot to take account of the changes: rcctl restart dovecot.
You need to create a GPG keyring for each user who wants encryption; the simplest method is to set up a passwordless keyring and import your public key:
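A minimal sketch, assuming the system user is solene, the key address is solene@puffy.cafe and the public key was copied to /tmp/pubkey.asc; marking the key as ultimately trusted avoids encryption failures due to an untrusted key (whether this is needed depends on the GPG options gpgit uses):
# run as the target user on the server
gpg --import /tmp/pubkey.asc
# mark the key as ultimately trusted: type "trust", then "5", "y" and "save"
gpg --edit-key solene@puffy.cafe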
If you use a spam filter such as rspamd or SpamAssassin relying on a Bayes filter, it will only work if it processes the emails before they arrive at Dovecot. In my email setup this is the case, as rspamd runs as an OpenSMTPD filter and sees the emails before they are delivered to Dovecot.
Such a service can have privacy issues, especially if you use encryption. A Bayes filter works by splitting an email's content into tokens (not really words, but almost) and looking for patterns using these tokens; basically, each email is split and stored in the anti-spam local database in small parts. I am not sure one could recreate the emails based on the tokens, but if an attacker is able to access the token list, they may get some insight into your email content. If this is part of your threat model, disable your anti-spam Bayes filter.
This setup is quite helpful if you want to protect all your stored emails. Full disk encryption on the server does not prevent anyone able to connect over SSH (as root or as the email user) from reading the emails, and even file recovery is possible while the volume is unlocked (not on the raw disk, but on the software-encrypted volume); this is where encryption at rest is beneficial.
I know from experience that it is complicated to use end-to-end encryption with tech-savvy users, and that it is even unthinkable with regular users. This setup is a first step if you need this kind of security (see the threat model section), but you need to remember that a copy of all your emails almost certainly exists on the servers used by the people you exchange emails with.
Firefox has an interesting feature for developers: the ability to connect the Firefox developer tools to a remote Firefox instance. This can be really interesting for a remote kiosk display, for instance.
The remote debugging does not provide a display of the remote, but it gives you access to the developer tools for tabs opened on the remote.
The remote Firefox you want to connect to must be started with the command line parameter --start-debugger-server. This makes it listen on TCP port 6000 on 127.0.0.1. Be careful: there is another option named --remote-debugging-port which is not what you want here, and the names can be confusing (trust me, I wasted too much time because of this).
Before starting Firefox, a few knobs must be modified in its configuration. Either search for the options in about:config, or create a user.js file in the Firefox profile directory with the following content:
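To my knowledge, the two preferences involved are devtools.debugger.remote-enabled and devtools.debugger.prompt-connection, so the user.js would contain:
user_pref("devtools.debugger.remote-enabled", true);
user_pref("devtools.debugger.prompt-connection", false);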
This enables remote management and removes the prompt shown upon each connection; while the prompt is a good safety measure, it is not practical for remote debugging.
When you start Firefox, the URL input bar should have a red background.
Now, you need to make an SSH tunnel to the remote host where Firefox is running in order to reach that port. Depending on your use case, a local NAT rule could expose the port on a network or VPN interface instead, but pay attention to security as this would allow anyone on the network to control the Firefox instance.
The SSH tunnel is quite standard: ssh -L 6001:127.0.0.1:6000 exposes the remote port 6000 locally as 6001. This matters because your own Firefox may already be using port 6000 for some reason.
In your own local Firefox instance, visit the page about:debugging, add the remote instance localhost:6001 and then click on Connect on its name on the left panel. Congratulations, you have access to the remote instance for debugging or profiling websites.
This blog post is a guide explaining how to set up a full-featured email server on OpenBSD 7.5. It was commissioned by a customer of my consultancy who wanted it to be published on my blog.
Setting up a modern email stack that does not appear as a spam platform to the world can be a daunting task; this guide will cover what you need for a secure, functional and low-maintenance email system.
The features list can be found below:
email access through IMAP, POP or Webmail
secure SMTP server (mandatory server to server encryption, personal information hiding)
state-of-the-art setup to be considered as legitimate as possible
firewall filtering (bot blocking, all ports closed but the required ones)
anti-spam
In this example, I will set up a temporary server for the domain puffy.cafe, with the server itself using the subdomain mail.puffy.cafe. From there, you can adapt it to your own domain.
I prepared a few diagrams explaining how all the components work together, in three cases: when sending an email, when the SMTP server receives an email from the outside, and when you retrieve your emails locally.
Packet Filter is OpenBSD's firewall. In our setup, we want all ports to be blocked except the few ones required for the email stack.
The following ports will be required:
opensmtpd 25/tcp (smtp): used for email delivery from other servers, supports STARTTLS
opensmtpd 465/tcp (smtps): used to establish a TLS connection to the SMTP server to receive or send emails
opensmtpd 587/tcp (submission): used to send emails to external servers, supports STARTTLS
httpd 80/tcp (http): used to generate TLS certificates using ACME
dovecot 993/tcp (imaps): used to connect to the IMAPS server to read emails
dovecot 995/tcp (pop3s): used to connect to the POP3S server to download emails
dovecot 4190/tcp (sieve): used to allow remote management of a user's Sieve rules
Depending on what services you will use, only the opensmtpd ports are mandatory. In addition, we will open the port 22/tcp for SSH.
set block-policy drop
set loginterface egress
set skip on lo0
# packet normalization
match in all scrub (no-df random-id max-mss 1440)
antispoof quick for { egress }
tcp_ports = "{ smtps smtp submission imaps pop3s sieve ssh http }"
block all
pass out inet
pass out inet6
# allow ICMP (ping)
pass in proto icmp
# allow IPv6 to work
pass in on egress inet6 proto icmp6 all icmp6-type { routeradv neighbrsol neighbradv }
pass in on egress inet6 proto udp from fe80::/10 port dhcpv6-server to fe80::/10 port dhcpv6-client no state
# allow our services
pass in on egress proto tcp from any to any port $tcp_ports
# default OpenBSD rules
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010
# Port build user does not need network
block return out log proto {tcp udp} user _pbuild
The MX records list the servers that outside SMTP servers should use to send us emails; this is the public list of servers accepting emails for a given domain. Each record has a preference value: the server with the lowest value should be tried first, and if it does not respond, the server with the next higher value is used. This simple mechanism allows setting up a hierarchy.
I highly recommend setting up at least two servers, so that if your main server is unreachable (host outage, hardware failure, ongoing upgrade), the emails will be sent to the backup server. Dovecot bundles a program to synchronize mailboxes between servers, one-way or two-way, one-shot or continuously.
If you have no MX records in your domain name, it is not possible to send you emails. It is like asking someone to send you a post card without giving them any clue about your real address.
Your server hostname can be different from the domain apex (raw domain name without a subdomain), a simple example would be to use mail.domain.example for the server name, this will not prevent it from receiving/sending emails using @domain.example in email addresses.
In my example, the mail server for the domain puffy.cafe will be mail.puffy.cafe, giving this MX record in my DNS zone:
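For example (the preference value 10 is arbitrary):
puffy.cafe.    IN    MX    10 mail.puffy.cafe.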
The SPF record is certainly the most important piece of the email puzzle for detecting spam. With SPF, the domain owner defines which servers are allowed to send emails for that domain. A properly configured spam filter will give a high spam score to incoming emails sent from servers not listed in the sender domain's SPF record.
To ease the configuration, the record can automatically include all the MX servers defined for a domain, and also its A/AAAA records; so if you only send from your MX servers, a simple configuration allowing the MX servers to send is enough.
In my example, only mail.puffy.cafe should be legitimate for sending emails, and any future MX server should also be allowed to send, so we configure the SPF record to allow all defined MX servers to be senders.
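Such a record would look like this (using ~all instead of -all asks receivers to be lenient rather than reject):
puffy.cafe.    IN    TXT    "v=spf1 mx -all"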
DKIM is a mechanism allowing a receiver to authenticate the sender, based on asymmetric cryptographic keys. The sender publishes its public key in a TXT DNS record and signs all outgoing emails with the private key. By doing so, receivers can validate the email's integrity and make sure it was sent from a server of the domain claimed in the From header.
DKIM is mandatory to not be classified as a spamming server.
The following set of commands will create a 2048-bit RSA key in /etc/mail/dkim/private/puffy.cafe.key, with its public key in /etc/mail/dkim/puffy.cafe.pub; the umask 077 command makes sure any file created during the process is only readable by root. Finally, you need to make the private key readable by the group _rspamd.
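A sketch of these commands, using the paths from this guide (adapt the domain name):
umask 077
mkdir -p /etc/mail/dkim/private
openssl genrsa -out /etc/mail/dkim/private/puffy.cafe.key 2048
openssl rsa -in /etc/mail/dkim/private/puffy.cafe.key -pubout -out /etc/mail/dkim/puffy.cafe.pub
chgrp _rspamd /etc/mail/dkim/private/puffy.cafe.key
chmod 440 /etc/mail/dkim/private/puffy.cafe.key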
Note: the umask command will persist in your shell session, if you do not want to create files/directory only readable by root after this, either spawn a new shell, or run the set of commands in a new shell and then exit from it once you are done.
In this example, we will name the DKIM selector dkim to keep it simple. The selector is the name of the key, this allows having multiple DKIM keys for a single domain.
Add the DNS record like the following, the value in p is the public key in the file /etc/mail/dkim/puffy.cafe.pub, you can get it as a single line with the command awk '/PUBLIC/ { $0="" } { printf ("%s",$0) } END { print }' /etc/mail/dkim/puffy.cafe.pub:
Your registrar may offer to add the entry using a DKIM specific form. There is nothing wrong doing so, just make sure the produced entry looks like the entry below.
dkim._domainkey IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAo3tIFelMk74wm+cJe20qAUVejD0/X+IdU+A2GhAnLDpgiA5zMGiPfYfmawlLy07tJdLfMLObl8aZDt5Ij4ojGN5SE1SsbGC2MTQGq9L2sLw2DXq+D8YKfFAe0KdYGczd9IAQ9mkYooRfhF8yMc2sMoM75bLxGjRM1Fs1OZLmyPYzy83UhFYq4gqzwaXuTvxvOKKyOwpWzrXzP6oVM7vTFCdbr8E0nWPXWKPJhcd10CF33ydtVVwDFp9nDdgek3yY+UYRuo/iJvdcn2adFoDxlE6eXmhGnyG4+nWLNZrxIgokhom5t5E84O2N31YJLmqdTF+nH5hTON7//5Kf/l/ubwIDAQAB"
The DMARC record is an extra mechanism that comes on top of SPF/DKIM, while it does not do much by itself, it is important to configure it.
DMARC could be seen as a public notice explaining to servers receiving emails whose sender looks like your domain name (legit or not) what they should do if SPF/DKIM does not validate.
As of 2024, DMARC offers three actions for receivers:
do nothing but make a report to the domain owner
"quarantine" mode: tell the receiver to be suspicious without rejecting it, the result will depend on the receiver (most of the time it will be flagged as spam) and make a report
"reject" mode: tell the receiver to not accept the email and make a report
In my example, I want invalid SPF/DKIM emails to be rejected. It is quite arbitrary, but I prefer all invalid emails from my domain to be discarded rather than ending up in a spam directory, so p and sp are set to reject. In addition, if my own server is misconfigured I will be notified about delivery issues sooner than if emails were silently put into quarantine.
An email address should be provided to receive DMARC reports; they are barely readable and I never made use of them, but the address should exist, which is what the rua field is for.
The field aspf is set to r (relaxed): this allows any server whose hostname is a subdomain of puffy.cafe to send emails for @puffy.cafe, while if this field were set to s (strict), the sending server's domain would have to match the email address domain exactly (mail.puffy.cafe would only be allowed to send for @mail.puffy.cafe).
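Putting this together, a DNS record matching this policy would look like the following (the rua address is an example):
_dmarc.puffy.cafe.    IN    TXT    "v=DMARC1;p=reject;sp=reject;aspf=r;rua=mailto:postmaster@puffy.cafe;"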
An older mechanism used to prevent spam was to block, or consider as spam, any SMTP server whose advertised hostname did not match the result of the reverse lookup of its IP.
Let's say "mail.foobar.example" (IP: A.B.C.D) is sending an email to my server, if the result of the DNS request to resolve the PTR of A.B.C.D is not "mail.foobar.example", the email would be considered as spam or rejected. While this is superseded by SPF/DKIM and annoying as it is not always possible to define a PTR for a public IP, the reverse DNS setup is still a strong requirement to not be considered as a spamming platform.
Make sure the PTR matches the system hostname and not the domain name itself, in the example above the PTR should be mail.foobar.example and not foobar.example.
The first step is to obtain a valid TLS certificate; this requires configuring acme-client and httpd, and starting the httpd daemon.
Copy the acme-client example: cp /etc/examples/acme-client.conf /etc/
Modify /etc/acme-client.conf and edit only the last entry to configure your own domain, mine looks like this:
#
# $OpenBSD: acme-client.conf,v 1.5 2023/05/10 07:34:57 tb Exp $
#
authority letsencrypt {
api url "https://acme-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-privkey.pem"
}
authority letsencrypt-staging {
api url "https://acme-staging-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-staging-privkey.pem"
}
authority buypass {
api url "https://api.buypass.com/acme/directory"
account key "/etc/acme/buypass-privkey.pem"
contact "mailto:me@example.com"
}
authority buypass-test {
api url "https://api.test4.buypass.no/acme/directory"
account key "/etc/acme/buypass-test-privkey.pem"
contact "mailto:me@example.com"
}
domain mail.puffy.cafe {
# you can remove the line "alternative names" if you do not need extra subdomains
# associated to this certificate
# imap.puffy.cafe is purely an example, I do not need it
alternative names { imap.puffy.cafe pop.puffy.cafe }
domain key "/etc/ssl/private/mail.puffy.cafe.key"
domain full chain certificate "/etc/ssl/mail.puffy.cafe.fullchain.pem"
sign with letsencrypt
}
Now, configure httpd, starting from the OpenBSD example: cp /etc/examples/httpd.conf /etc/
Edit /etc/httpd.conf: we want the first block to match all domains rather than just "example.com", and we do not need the second block listening on 443/tcp (unless you want to run an HTTPS server with some content, but then you are on your own). The resulting file should look like the following:
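A minimal sketch of the resulting file, keeping only what is needed for the ACME challenge (the "*" server name is intended as a catch-all glob; adapt if you prefer listing your hostnames explicitly):
server "*" {
listen on * port 80
location "/.well-known/acme-challenge/*" {
root "/acme"
request strip 2
}
}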
Enable and start httpd with rcctl enable httpd && rcctl start httpd.
Run acme-client -v mail.puffy.cafe to generate the certificate with some verbose output (if something goes wrong, you will have a clue).
If everything went fine, you should have the full chain certificate in /etc/ssl/mail.puffy.cafe.fullchain.pem and the private key in /etc/ssl/private/mail.puffy.cafe.key.
You will use rspamd to filter spam and sign outgoing emails for DKIM.
Install rspamd and the filter to plug it to opensmtpd:
pkg_add rspamd-- opensmtpd-filter-rspamd
You need to configure rspamd to sign outgoing emails with your DKIM private key. To proceed, create the file /etc/rspamd/local.d/dkim_signing.conf (the filename is important):
# our usernames do not contain the domain part
# so we need to enable this option
allow_username_mismatch = true;
# this configures the domain puffy.cafe to use the selector "dkim"
# and where to find the private key
domain {
puffy.cafe {
path = "/etc/mail/dkim/private/puffy.cafe.key";
selector = "dkim";
}
}
For better performance, you need to use redis as a cache backend for rspamd:
rcctl enable redis
rcctl start redis
Now you can start rspamd:
rcctl enable rspamd
rcctl start rspamd
For extra information about rspamd (like statistics or its web UI), I wrote about it in 2021:
If you do not want to use rspamd, it is possible to replace the DKIM signing part with opendkim, dkimproxy or opensmtpd-filter-dkimsign. The spam filter could be replaced either by the feature-rich SpamAssassin, available as a package, or partially by the base system program spamd (which does not analyze email contents).
This guide only focuses on rspamd, but it is important to know that alternatives exist.
OpenSMTPD configuration file on OpenBSD is /etc/mail/smtpd.conf, here is a working configuration with a lot of comments:
## this defines the paths for the X509 certificate
pki puffy.cafe cert "/etc/ssl/mail.puffy.cafe.fullchain.pem"
pki puffy.cafe key "/etc/ssl/private/mail.puffy.cafe.key"
pki puffy.cafe dhe auto
## this defines how the local part of email addresses can be split
# defaults to '+', so solene+foobar@domain matches user
# solene@domain. Due to the '+' character being a regular source of issues
# with many online forms, I recommend using a character such as '_',
# '.' or '-'. This feature is very handy to generate infinite unique emails
# addresses without pre-defining aliases.
# Using '_', solene_openbsd@domain and solene_buystuff@domain lead to the
# same address
smtp sub-addr-delim '_'
## this defines an external filter
# rspamd does dkim signing and spam filter
filter rspamd proc-exec "filter-rspamd"
## this defines which file will contain aliases
# this can be used to define groups or redirect emails to users
table aliases file:/etc/mail/aliases
## this defines all the ports to use
# mask-src hides system hostname, username and public IP when sending an email
listen on all port 25 tls pki "puffy.cafe" filter "rspamd"
listen on all port 465 smtps pki "puffy.cafe" auth mask-src filter "rspamd"
listen on all port 587 tls-require pki "puffy.cafe" auth mask-src filter "rspamd"
## this defines actions
# either deliver to lmtp or to an external server
action "local" lmtp "/var/dovecot/lmtp" alias <aliases>
action "outbound" relay
## this defines what should be done depending on some conditions
# receive emails (local or from external server for "puffy.cafe")
match from any for domain "puffy.cafe" action "local"
match from local for local action "local"
# send email (from local or authenticated user)
match from any auth for any action "outbound"
match from local for any action "outbound"
In addition, you can configure the advertised hostname by editing the file /etc/mail/mailname: for instance, my machine's hostname is ryzen, so I need this file to advertise mail.puffy.cafe instead.
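For example, with the hostname used in this guide:
echo "mail.puffy.cafe" > /etc/mail/mailname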
For ports using STARTTLS (25 and 587), there are different options with regard to TLS encryption.
do not allow STARTTLS
offer STARTTLS but allow not using it (option tls)
require STARTTLS: drop the connection when the remote peer does not ask for STARTTLS (option tls-require)
require STARTTLS: drop connection when no STARTTLS, and verify the remote certificate (option tls-require verify)
It is recommended to enforce STARTTLS on port 587 as it is used by authenticated users to send emails; this prevents them from sending emails without network encryption.
On port 25, used by external servers to reach yours, it is important to allow STARTTLS because most servers will deliver emails over an encrypted TLS session; however, it is your choice whether to enforce it or not.
Enforcing STARTTLS might break email delivery from some external servers that are outdated or misconfigured (or bad actors).
By default, OpenSMTPD is configured to deliver emails to valid users of the system. In my example, if the user solene exists, then the address solene@puffy.cafe delivers emails to the solene user's mailbox.
Of course, as you do not want the system daemons to receive emails, a file contains aliases to redirect emails from one user to another, or to simply discard them.
In /etc/mail/aliases, you can redirect emails to your username by adding a new line, in the example below I will redirect root emails to my user.
root: solene
It is possible to redirect to multiple users by separating them with commas; this is handy if you want to create a local group delivering emails to several users, as in the example below.
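For instance, a hypothetical group alias delivering to two local users would look like this:
sysadmins: solene, anotheruser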
Instead of a user, it is also possible to append the incoming emails to a file, pipe them to a command or return an SMTP code. The aliases(5) man page contains all you need to know.
If you need to handle emails for multiple domains, this is rather simple:
Add this line to the file /etc/mail/smtpd.conf by changing puffy.cafe to the other domain name: match from any for domain "puffy.cafe" action "local"
Configure the other domain DNS MX/SPF/DKIM/DMARC
Configure /etc/rspamd/local.d/dkim_signing.conf to add a new block with the other domain, the dkim selector and the dkim key path
The PTR does not need to be modified as it should match the machine hostname advertised over SMTP, which is a unique value anyway
If you want to use a different aliases table for the other domain, you need to create a new aliases file and configure /etc/mail/smtpd.conf accordingly where the following lines should be added:
table lambda file:/etc/mail/aliases-lambda
action "local_mail_lambda" lmtp "/var/dovecot/lmtp" alias <lambda>
match from any for domain "lambda-puffy.eu" action "local_mail_lambda"
Note that the users will be the same for all the domains configured on the server. If you want to have separate users per domains, or that "user a" on domain A and "user a" on domain B could be different persons / logins, you would need to setup virtual users instead of using system users. Such setup is beyond the scope of this guide.
It is possible to not use Dovecot at all. Such a setup can suit users who would like to download the maildir directory to their local computer using rsync; this is a one-way process and does not allow sharing a mailbox across multiple devices. It reduces maintenance and attack surface at the cost of convenience.
This may work as a two-way access (untested) when using a software such as unison to keep both the local and remote directories synchronized, but be prepared to manage file conflicts!
If you want this setup, replace the following line in smtpd.conf
action "local" lmtp "/var/dovecot/lmtp" alias <aliases>
by this line if you want to store the emails in the maildir format (a directory per email folder, a file per email); emails will be stored in the directory "Maildir" in users' home directories:
action "local" maildir "~/Maildir/" junk alias <aliases>
or by a line using the mbox format (a single file with emails appended to it, not practical); the emails will then be stored in /var/mail/$user. See the sketch below.
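Such a line would look like this (a sketch keeping the same aliases table; the junk option is specific to maildir and is omitted here):
action "local" mbox alias <aliases>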
Dovecot is an important piece of software for the domain's end users: it provides protocols like IMAP and POP3 to read emails from a client. It is the most popular open source IMAP/POP server available (the other one being Cyrus IMAP).
Install dovecot with the following command line:
pkg_add dovecot-- dovecot-pigeonhole--
Dovecot has a lot of configuration files in /etc/dovecot/conf.d/; most of them are commented and ready to be modified, and you will have to edit a few of them. This guide provides the content of the files with empty lines and comments stripped, so you can quickly check whether your file is correct; you can use the command awk '$1 !~ /^#/ && $1 ~ /./' on a file to display only its "useful" content (awk will not modify the file).
Modify /etc/dovecot/conf.d/10-ssl.conf and search the lines ssl_cert and ssl_key, change their values to your certificate full chain and private key.
Generate a Diffie-Hellman parameters file for perfect forward secrecy; this makes each TLS key exchange unique, so if the private key ever leaks, past TLS communications will remain safe.
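A sketch of the command, assuming the parameters file is stored as /etc/dovecot/dh.pem and referenced by the ssl_dh setting of 10-ssl.conf (ssl_dh = </etc/dovecot/dh.pem):
# generating the parameters can take a long time
openssl dhparam -out /etc/dovecot/dh.pem 4096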
Modify /etc/dovecot/conf.d/10-mail.conf: search for the commented line mail_location, uncomment it and set the value to maildir:~/Maildir. This tells Dovecot where users' mailboxes are stored and in which format; we want the maildir format.
Modify the file /etc/dovecot/conf.d/20-lmtp.conf; LMTP is the protocol used by OpenSMTPD to hand incoming emails over to Dovecot. Search for the commented variable mail_plugins and uncomment it, setting the value to mail_plugins = $mail_plugins sieve:
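The resulting block should look similar to this sketch:
protocol lmtp {
mail_plugins = $mail_plugins sieve
}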
IMAP is an efficient protocol that returns the headers of the emails in a directory, so you do not have to download all your emails to view the folder listing; emails are downloaded when read (by default in most email clients). It allows some cool features like server-side search, incoming email sorting with sieve filters, and multi-device access.
Edit /etc/dovecot/conf.d/20-imap.conf and configure the last lines according to the result shown below:
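A sketch of what the end of the file could look like; imap_sieve is needed for the spam/ham learning triggers configured later in this guide, and the value 25 matches the explanation below:
protocol imap {
mail_plugins = $mail_plugins imap_sieve
mail_max_userip_connections = 25
}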
The number of connections per user/IP should be high if you have an email client tracking many folders: in IMAP, a connection is required for each folder, so the number of connections can quickly increase. On top of that, if you have multiple devices behind the same public IP, you could quickly reach the limit. I found 25 to work fine for me with 3 devices.
POP3 is a pretty old protocol that is rarely considered by users; I still consider it a viable alternative to IMAP depending on your needs.
A major incentive for using POP is that it downloads all emails locally and then removes them from the server. As we have no tooling to encrypt emails stored on remote email servers, POP3 is a must if you do not want to leave any email on the server. POP3 does not support remote folders, so you cannot use Sieve filters on the server to sort your emails and then download them as-is: a POP3 client downloads the Inbox and sorts the emails locally.
It can support multiple devices under some conditions: if you delete the emails after X days, your devices should synchronize before the emails are removed. In that case each device will have all the emails stored locally, but they will not be synced with one another: if both computers A and B are up to date and you delete an email on A, it will still be on B.
There are no changes required for POP3 in Dovecot as the defaults are good enough.
For information, a replacement for IMAP called JMAP is in development; it is meant to be better than IMAP in every way and also includes calendar and address book management.
JMAP implementations are young but exist, although support in email clients is almost non-existent. For instance, it seems Mozilla Thunderbird is not interested in it: an issue about JMAP in their bug tracker, open since December 2016, only has a couple of comments from people who would like to see it happen, nothing more.
Dovecot has a plugin offering Sieve filters: rules applied to received emails going into your mailbox, whether you want to sort them into dedicated directories, mark them read or block some addresses. That plugin is called Pigeonhole.
You will need Sieve to enable the spam filter learning system: moving emails to or from the Junk folder triggers a Sieve rule. This improves the ability of rspamd's Bayes filter (a method using tokens to understand information; the story of the person behind it is interesting) to detect spam accurately.
Edit /etc/dovecot/conf.d/90-plugin.conf with the following content:
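A sketch of the plugin block, adapted from the Dovecot documentation referenced below and matching the folder and script names used in this guide:
plugin {
sieve_plugins = sieve_imapsieve sieve_extprograms
# From elsewhere to Spam folder
imapsieve_mailbox1_name = Spam
imapsieve_mailbox1_causes = COPY
imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve
# From Spam folder to elsewhere
imapsieve_mailbox2_name = *
imapsieve_mailbox2_from = Spam
imapsieve_mailbox2_causes = COPY
imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve
sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve
sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment
}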
This piece of configuration was taken from the official Dovecot documentation: https://doc.dovecot.org/configuration_manual/howto/antispam_with_sieve/ . It will trigger shell scripts calling rspamd to teach it what spam looks like and what is legitimate (ham). One script runs when an email is moved out of the spam directory (ham), another one when an email is moved into the spam directory (spam).
Modify /etc/dovecot/conf.d/15-mailboxes.conf to add the following snippet inside the namespace inbox { ... } block; it will mark the Spam folder with the special-use flag \Junk and automatically create it if it does not exist:
mailbox Spam {
auto = create
special_use = \Junk
}
To make this work completely, you need to write the two extra sieve filters that will trigger the scripts:
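Here is a sketch adapted from the Dovecot documentation linked above; the helper script names learn-spam.sh and learn-ham.sh are my own choice. All files go into /usr/local/lib/dovecot/sieve/ and the shell scripts must be executable.
report-spam.sieve:
require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "learn-spam.sh";
report-ham.sieve:
require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "learn-ham.sh";
learn-spam.sh:
#!/bin/sh
# the moved email arrives on standard input, feed it to rspamd's Bayes filter
exec /usr/local/bin/rspamc learn_spam
learn-ham.sh:
#!/bin/sh
exec /usr/local/bin/rspamc learn_ham
If Dovecot cannot write the compiled .svbin files next to the sieve scripts, compile them manually with the sievec command.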
By default, Sieve rules are stored in a file in the user's home directory; however, there is a standard protocol named "managesieve" to manage Sieve filters remotely from an email client.
It is enabled out of the box in Dovecot's configuration, although you need to make sure the port 4190/tcp is open in the firewall if you want users to use it.
A webmail allows your users to read / send emails from a web interface instead of having to configure a local email client. While convenient, it adds a larger attack surface and webmails are regularly affected by vulnerabilities, so you may prefer to avoid running one on your server.
The two most popular open source webmails are Roundcube and SnappyMail (a fork of the abandoned Rainloop); they both have pros and cons.
Roundcube is packaged in OpenBSD, it will pull in all required dependencies and occasionally receive backported security updates.
Install the package:
pkg_add roundcubemail
When installing the package, you will be prompted for a database backend for PHP. If you have one or two users, I highly recommend choosing SQLite as it works fine without requiring a running daemon, meaning less maintenance and fewer server resources locked. If you plan to have a lot of users, there is no wrong pick between MySQL and PostgreSQL, but if you already have one of them running, it is better to reuse it for Roundcube.
Specific instructions for installing Roundcube are provided by the package README in /usr/local/share/doc/pkg-readmes/roundcubemail.
We need to enable a few PHP modules to make Roundcube work:
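On OpenBSD, PHP modules are enabled by copying or linking their sample ini files into the active PHP configuration directory. A sketch assuming PHP 8.2 and the SQLite backend; the module list (intl, pdo_sqlite, zip) is my assumption and may differ depending on the Roundcube features you use:
# enable the modules by linking their sample configuration files
for mod in intl pdo_sqlite zip; do
ln -sf /etc/php-8.2.sample/$mod.ini /etc/php-8.2/$mod.ini
done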
Note that more PHP modules may be required if you enable extra features and plugins in Roundcube.
PHP is ready to be started:
rcctl enable php82_fpm
rcctl start php82_fpm
Add the following blocks to /etc/httpd.conf, make sure you opened the port 443/tcp in your pf.conf and that you reloaded it with pfctl -f /etc/pf.conf:
server "mail.puffy.cafe" {
listen on egress tls
tls key "/etc/ssl/private/mail.puffy.cafe.key"
tls certificate "/etc/ssl/mail.puffy.cafe.fullchain.pem"
root "/roundcubemail"
directory index index.php
location "*.php" {
fastcgi socket "/run/php-fpm.sock"
}
}
types {
include "/usr/share/misc/mime.types"
}
Restart httpd with rcctl restart httpd.
You need to configure Roundcube to use a 24-byte security key and to configure the database; edit the file /var/www/roundcubemail/config/config.inc.php:
Search for the variable des_key and replace its value with the output of the command tr -dc '[:print:]' < /dev/urandom | fold -w 24 | head -n 1, which generates a 24-byte random string. If the string contains a quote character, either escape it by prefixing it with a \ or generate a new string.
For the database, you need to search for the variable db_dsnw and set it according to the backend you chose.
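For example, with the SQLite backend suggested earlier (the path and mode are illustrative, in the style of Roundcube's sample configuration; make sure the directory exists and is writable by the www user):
$config['db_dsnw'] = 'sqlite:////var/www/roundcubemail/db/sqlite.db?mode=0660';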
To make sure the files cert.pem and openssl.cnf copied into the web server chroot stay in sync after upgrades, add the two copy commands to the file /etc/rc.local and make this file executable. This script runs at every boot and is the best place for this kind of file copy.
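I assume the two commands in question are the ones from the package README copying the TLS-related files into the web server chroot; they would look something like this:
# keep the chrooted copies used by PHP in sync with the system files
mkdir -p /var/www/etc/ssl
cp -p /etc/ssl/cert.pem /etc/ssl/openssl.cnf /var/www/etc/ssl/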
If your IMAP and SMTP hosts are not on the same server where Roundcube is installed, adapt the variables imap_host and smtp_host to the server name.
If Roundcube is running on the same server as OpenSMTPD, you need to disable certificate validation, because "localhost" will not match the certificate and authentication would fail. Change the smtp_host line to $config['smtp_host'] = 'tls://127.0.0.1:587'; and add the snippet shown below to the configuration file:
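A snippet achieving this with Roundcube's standard smtp_conn_options setting would look like this:
$config['smtp_conn_options'] = array(
'ssl' => array(
'verify_peer' => false,
'verify_peer_name' => false,
),
);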
It is always possible to improve the security of this stack, all the following settings are not mandatory, but they can be interesting depending on your needs.
7.1. Always allow the sender per email or domain
It is possible to configure rspamd to force it to accept emails from a given email address or domain, bypassing the anti-spam.
To proceed, edit the file /etc/rspamd/local.d/multimap.conf to add this content:
local_wl_domain {
type = "from";
filter = "email:domain";
map = "$CONFDIR/local.d/whitelist_domain.map";
symbol = "LOCAL_WL_DOMAIN";
score = -10.0;
description = "domains that are always accepted";
}
local_wl_from {
type = "from";
map = "$CONFDIR/local.d/whitelist_email.map";
symbol = "LOCAL_WL_FROM";
score = -10.0;
description = "email addresses that are always accepted";
}
Create the files /etc/rspamd/local.d/whitelist_domain.map and /etc/rspamd/local.d/whitelist_email.map using the command touch.
Restart the service rspamd with rcctl restart rspamd.
The created files use a simple syntax, add a line for each entry you want to allow:
a domain name in /etc/rspamd/local.d/whitelist_domain.map to allow the domain
an email address in /etc/rspamd/local.d/whitelist_email.map to allow this address
There is no need to restart or reload rspamd after changing the files.
The same technique can be reused to block domains/addresses directly in rspamd, by giving them a high positive score.
If you want to improve your email setup security further, the best method is to split each part into dedicated systems.
As dovecot is responsible for storing and exposing emails to users, this component would be safer in a dedicated system, so if a component of the email stack (other than dovecot) is compromised, the mailboxes will not be exposed.
If this does not go against the usability for the email server users, I strongly recommend limiting the publicly open ports in the firewall to the minimum: 25, 80, 465, 587. This prevents attackers from exploiting a network related 0-day or an unpatched vulnerability in non-exposed services such as Dovecot.
A VPN should be deployed to allow users to reach Dovecot services (IMAP, POP) and other services if any.
The SSH port could be removed from the public ports as well; however, it would be safer to make sure your hosting provider offers serial access / VNC / remote access to the system, because if the VPN stops working, you will not be able to log into the system using SSH to debug it.
There is an online service that provides you with a random email address to send a test email to; you can then check on their website whether the SPF, DKIM, DMARC and PTR records are correctly configured.
The score you want to see displayed on their website is no less than 10/10. The service can report meaningless issues like "the email was poorly formatted" or "you did not include an unsubscribe link", which are not relevant for this test.
While it used to be completely free the last time I used it, it now asks you to pay after three free checks if you do not want to wait 24h. The limit is tracked using your public IP address.
The processes of the mail stack should always be running: using a program like monit, zabbix or reed-alert to notify you when they stop working could be a good idea.
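As a minimal sketch (the daemon list below is an assumption based on this guide, adjust it to your setup), a cron job could rely on rcctl check and log a message when something is down:
# run this from root's crontab, for example every 5 minutes
rcctl check smtpd dovecot rspamd httpd php82_fpm >/dev/null || \
    logger -t mail-stack "a mail stack daemon is not running"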
In addition, the TLS certificate should be renewed regularly as ACME generated certificates are valid for a few months. Edit root crontab with crontab -e as root to add this line:
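The crontab line is not reproduced here; a hedged reconstruction, assuming acme-client is used for the domain mail.puffy.cafe as in the rest of this guide (acme-client exits 0 only when the certificate actually changed, so the restart runs only upon renewal):
10 4 * * 0 acme-client mail.puffy.cafe && rcctl restart dovecot httpd smtpd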
This will try to renew the certificate for mail.puffy.cafe every Sunday at 04h10 and upon renewal restart the services using the certificate: dovecot, httpd and smtpd.
Finally, OpenSMTPD will stop delivering emails locally if the /var partition has less than 4% of free disk space. Be sure to monitor the disk space of this partition, otherwise you may stop receiving emails for a while before noticing something is wrong.
Congratulations, you configured a whole email stack that will allow you to send emails to the world, using your own domain and hardware. Keeping your system up to date is important as you have network services exposed to the wild Internet.
Even with a properly configured setup featuring SPF/DKIM/DMARC/PTR, your emails are not guaranteed to avoid the spam directory of your recipients. The IP reputation of your SMTP server also counts, and so does the domain name extension (I have a .pw domain, and I learned too late that it is almost always considered spam because it is not mainstream).
The Xbox Ultimate subscription bundles a game library for Xbox and Windows games with high-priced titles, which makes the subscription itself quite cheap compared to the price of the available games, as a single high-priced game costs more than four months of subscription. However, I have mixed feelings about the associated streaming service: on one hand it works perfectly fine (no queue, input lag is ok), but the video quality is not fantastic on a 1080p screen. The service seems perfectly fitted to be played on smartphones: every touchscreen-compatible game has a specific layout customized for that game, making the touchscreen a lot more usable than displaying a full controller overlay when you only need a few buttons; in addition to the low bandwidth usage, it makes a good service for handheld devices. On desktop, you may want to use the streaming to try a game before installing it, but not much more.
There is no client for Android TV, so you can not use these devices unless you can run a web browser on them.
Really, with a better bitrate, the service would be a blast (not for 4k and/or 120 fps users though), but at the moment it is only ok as a game library, or as a streaming service to play on small or low resolution screens.
The service could be good with a better bitrate, the input lag is ok and I did not experience any waiting time. The hardware specs seem good except the loading times, it feels like the data are stored on a network storage with poor access time or bandwidth. The bitrate is so bad that I can not recommend playing anything in first person view or moving too fast as it would look like a pixel mess. However, playing slow paced games is perfectly fine.
They have a killer feature that is unique to their service: you can invite a friend to play a game in streaming with you by just sending them a link, they will join your game, and you can start playing together in a minute. While it is absolutely cool, the service lacks fun games to play in couch coop...
As you can use Luna if you have Amazon Prime, I think it is a good fit for casual players who do not want to pay for games but would enjoy a session from time to time on any hardware.
I mentioned the subscription cancelling process twice, here are the facts: on your account you click on unsubscribe, then it asks if you are really sure because you will lose access to the service, you have to agree, then it reminds you that you are about to cancel and that maybe it is a mistake, so you need to agree again, and then there is a trick. The web page says that your account will be cancelled and that you can still use it up to the cancel date; it looks fine here, but it is not: there is a huge paragraph of blah blah below and a button to confirm the cancellation! Then you are done. The first time I cancelled I did not pass that third step as I thought it was fine; when double-checking my account status before the renewal, I saw I had missed something.
I wrote a review of their services a few months ago. Since then, I renewed my account with 6 months of priority tier. I mostly use it to play resource intensive games when it is hot at home (so my computer does not heat at all), at night when I want to play a bit in silence without fan noise, finally I enjoy it a lot with slow paced games like walking simulators on my TV.
On one hand, Luna seems to target casual users: people who may not notice the bad quality or input lag and who will just play what is available.
On the other hand, the Xbox service is a game library first, with a streaming feature. It is quite perfect for people playing Xbox library games on PC / Xbox who want to play on a smartphone / tablet occasionally, but not for customers looking only to play streamed games.
Both services would not need much to be _good_ streaming services, the minimum upgrade should be a higher bitrate. Better specs would be appreciated too: improved loading times for Luna, and Xbox games running on a better platform than Xbox Series S.
This guide explains how to setup a WireGuard tunnel on Linux using a dedicated network namespace so you can choose to run a program on the VPN or over clearnet.
I have been able to figure out the setup thanks to the following blog post, which I enhanced a bit using scripts and sudo rules.
By default, when you connect a WireGuard tunnel, its "AllowedIPs" field will be used as a route with a higher priority than your current default route. It is not always ideal to have everything routed through a VPN, so you will create a dedicated network namespace that uses the VPN as a default route, without affecting all other software.
Unfortunately, compared to OpenBSD rdomains (which provide the same feature in this situation), network namespaces are much more complicated to deal with and require root to run a program under a namespace.
You will create a SAFE sudo rule to allow your user to run commands under the new namespace, making it more practical for daily use.
You need a wg-quick compatible WireGuard configuration file, but do not make it automatically used at boot.
Create a script (for root use only) with the following content, then make it executable:
#!/bin/sh
# your VPN configuration file
CONFIG=/etc/wireguard/my-vpn.conf
# this directory is used to have a per netns resolver file
mkdir -p /etc/netns/vpn/
# cleanup any previous VPN in case you want to restart it
ip netns exec vpn ip l del tun0
ip netns del vpn
# information to reuse later
DNS=$(awk '/^DNS/ { print $3 }' $CONFIG)
IP=$(awk '/^Address/ { print $3 }' $CONFIG)
# the namespace will use the DNS defined in the VPN configuration file
echo "nameserver $DNS" > /etc/netns/vpn/resolv.conf
# now, create the namespace and configure it
ip netns add vpn
ip -n vpn link set lo up
ip link add tun0 type wireguard
ip link set tun0 netns vpn
ip netns exec vpn wg setconf tun0 <(wg-quick strip "$CONFIG")
ip -n vpn a add "$IP" dev tun0
ip -n vpn link set tun0 up
ip -n vpn route add default dev tun0
# show the netns addresses for a quick check
ip -n vpn address show
# extra check if you want to verify the DNS used and the public IP assigned
#ip netns exec vpn dig ifconfig.me
#ip netns exec vpn curl https://ifconfig.me
This script automatically configures the network namespace, the VPN interface and the DNS server to use. There are extra checks at the end of the script that you can uncomment if you want to take a look at the public IP and the DNS resolver used just after connection.
Running this script will make the netns "vpn" available for use.
The command to run a program under the namespace is ip netns exec vpn your command, it can only be run as root.
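The exact sudo rule is not reproduced here; as a hedged example, assuming your username is solene and the ip binary lives in /usr/sbin, a sudoers entry could look like this (it lets you run any command inside the vpn namespace, dropped back to your own user):
solene ALL=(root) NOPASSWD: /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene *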
When using this command line, you MUST use full paths exactly as in the sudo configuration file, this is important otherwise it would allow you to create a script called ip with whatever commands and run it as root, while /usr/sbin/ip can not be spoofed by a local script in $PATH.
If I want a shell session with the VPN, I can run the following command:
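A hedged example matching the rule above (the shell path and user name are assumptions):
sudo /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene /bin/bash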
It is not a real limitation, but you may be caught by it: if you make a program listen on localhost in the netns vpn, you can only connect to it from another program in the same namespace. There are methods to connect two namespaces, but I do not plan to cover them; if you need to research this setup, it can be done using socat (this is explained in the blog post linked earlier) or a local bridge interface.
Network namespaces are a cool feature on Linux, but they are overly complicated in my opinion; unfortunately I have to deal with them, but at least it all works fine in practice.
The Old Computer Challenge 4th edition will run from 13th July to 20th July 2024. It will be the prequel to the Olympics; I was not able to get the challenge accepted there, so we will do it our way.
While the three previous editions had different rules, I came to an agreement with the community for this year: choose your own rules!
When I did the challenge for the first time, I did not expect it to become a yearly event, nor that it would gather aficionados during the trip. The original point of the challenge was just to see if I could use my oldest laptop as my main computer for a week; there was no incentive, it was not a contest and I did not have any written rules.
Previous editions' rules were about using an old laptop, using a computer with limited hardware (with tips to slow down a modern machine) or limiting Internet access to a single hour per day. I always insist on the fact it should not hinder your job, so people participating do not have to "play" during work. Smartphones became complicated to handle, especially with the limited Internet access; all I can recommend to people is to define some rules you want to stick to, and follow them the best you can. If you realllyyyy need once to use a device that would break the rules, so be it if it is really important, nobody will yell at you.
People doing the OCC enjoy it for multiple reasons, find yours! Some find the opportunity to disconnect a bit, change their habit, do some technoarcheology to run rare hardware, play with low-tech, demonstrate obsolescence is not a fatality etc...
Some ideas if you do not know what to do for the challenge:
use your oldest device
do not use graphical interface
do not use your smartphone (and pick a slow computer :P)
limit your Internet access time
slow down your Internet access
forbid big software (I intended to do this for the 4th OCC but it was hard to prepare; the idea was to set up an OpenBSD mirror where software with more than some arbitrary number of lines of code in their sources would be banned, resulting in a very small set of packages due to missing transitive dependencies)
You can join the community and share your experience.
There are many ways! It's the opportunity to learn how to use Gopher or Gemini to publish content, to join the mailing list and participate with the others, or simply to come to the IRC channel to chat a bit.
Well, as nobody forces you to do the OCC, you can just do it when you want, even in December if it suits your calendar better than mid-July, nobody will complain.
There is a single rule: do it for fun! Do not impede yourself for weird reasons, it is here for fun, and doing the whole week is as good as failing and writing about why you failed. It is not a contest, just try and see how it goes, and tell us your story :)
If you ever happen to mount a .iso file on OpenBSD, you may wonder how to proceed as the command mount_cd9660 requires a device name.
While the solution is entirely documented in the man pages and in the official FAQ, it may not be easy to find at first glance, especially since most operating systems allow mounting an iso file in a single step whereas OpenBSD requires an extra step.
On OpenBSD you need to use the command vnconfig to map a file to a device node, allowing interesting actions such as using a file as a storage disk (which you can encrypt) or mounting a .iso file.
This command must be used as root as it manipulates files in /dev.
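As a quick example (the iso path is a placeholder, and the vnd device may differ if vnd0 is already in use):
vnconfig vnd0 /path/to/image.iso
mount_cd9660 /dev/vnd0c /mnt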
If you are done with the file, you have to umount it with umount /mnt and destroy the vnd device using vnconfig -u vnd0.
5. Going further: Using a file as an encrypted disk §
If you want to use a single file as a file system, you have to provision the file with disk space using the command dd, you can fill it with zeroes but if you plan to use encryption on top of it, it's better to use random data. In the following example, you will create a file my-disk.img of a size of 10 GB (1000 x 10 MB):
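A hedged example of the dd invocation, writing 1000 blocks of 10 MB of random data:
dd if=/dev/urandom of=my-disk.img bs=10m count=1000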
Now you can use vnconfig to expose it as a device:
vnconfig vnd0 my-disk.img
Finally, the command bioctl can be used to configure encryption on the disk, disklabel to partition it and newfs to format the partitions. You can follow the OpenBSD FAQ guides, just make sure to use the device name /dev/vnd0 instead of wd0 or sd0 from the examples.
This blog post explains how to configure an OpenBSD workstation with extreme privacy in mind.
This is an attempt to turn OpenBSD into a Whonix or Tails alternative, although if you really need that level of privacy, use a system from that list and not the present guide. It is easy to spot OpenBSD using network fingerprinting; this can not be defeated, you can not hide from network operators the fact that you use OpenBSD.
I did this guide as a challenge for fun, but I also know some users have a use for this level of privacy.
Note: this guide covers steps to increase the privacy of OpenBSD and its base system, it will not explain how to configure a web browser or how to choose a VPN.
OpenBSD does not have much network activity with a default installation, but the following programs generate traffic:
the installer connects to 199.185.178.80 to associate chosen timezone with your public IP to reuse the answer for a future installation
ntpd (for time sync) uses pool.ntp.org, 9.9.9.9, 2620:fe::fe, www.google.com and time.cloudflare.com
fw_update connects to firmware.openbsd.org (resolves as openbsd.map.fastlydns.net), fw_update is used at the end of the installer, and at the end of each sysupgrade
sysupgrade, syspatch and pkg_* tools use the address defined in /etc/installurl (defaults to cdn.openbsd.org)
During the installation, do not configure the network at all. You want to avoid syspatch and fw_update to run at the end of the installer, and also ntpd to ping many servers upon boot.
Once OpenBSD booted after the installation, you need to take a decision for ntpd (time synchronization daemon).
you can disable ntpd entirely with rcctl disable ntpd, but it is not really recommended as it can create issues with some network software if the time is desynchronized
you can edit the file /etc/ntpd.conf which contains the list of servers used to keep the time synchronized, and choose which server to connect to (if any)
you can configure ntpd to use a sensor providing time (like a GPS receiver) and disable everything else
Whonix (maybe Tails too?) uses a custom tailored program named sdwdate to update the system clock over Tor (because Tor only supports TCP while NTP uses UDP); it is unfortunately not easily portable to OpenBSD.
Next step is to edit the file /etc/hosts to disable the firmware server whose hostname is hard-coded in the program fw_update, add this line to the file:
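The entry is the same one shown later in the firmware section of this article:
127.0.0.9 firmware.openbsd.org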
The firmware installation and OpenBSD mirror configuration using Tor and I2P are covered in my previous article, it explains how to use tor or i2p to download firmware, packages and system sets to upgrade.
There is a chicken / egg issue with this though, on a fresh install you have neither tor nor i2p, so you can not download tor or i2p packages through it. You could download the packages and their dependencies from another system and install them locally using USB.
Wi-Fi and some other devices requiring a firmware may not work until you run fw_update, you may have to download the files from another system and pass the network interface firmware over a USB memory stick to get network. A smartphone with USB tethering is also a practical approach for downloading firmware, but you will have to download it over clearnet.
DNS is a huge topic for privacy-oriented users. I can not really recommend a given public DNS server because they all have pros and cons; I will use 1.1.1.1 and 9.9.9.9 for the example, but use your favorite DNS.
Enable the daemon unwind, it is a local DNS resolver with some cache, and supports DoT, DoH and many cool features. Edit the file /etc/unwind.conf with this configuration:
forwarder { 1.1.1.1 9.9.9.9 }
As I said, DoT and DoH is supported, you can configure it directly in the forwarder block, the man page explains the syntax:
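As a hedged illustration of what a DoT forwarder entry could look like (the port and keyword are assumptions from memory, check unwind.conf(5) for the exact grammar):
forwarder { 1.1.1.1 port 853 DoT }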
A program named resolvd is running by default, when it finds that unwind is running, resolvd modifies /etc/resolv.conf to switch DNS resolution to 127.0.0.1, so you do not have anything to do.
A sane firewall configuration for workstations is to block all incoming connections. This can be achieved with the following /etc/pf.conf: (reminder, last rule matches)
set block-policy drop
set skip on lo
match in all scrub (no-df random-id max-mss 1440)
antispoof quick for egress
# block all traffic (in/out)
block
# allow reaching the outside (IPv4 + IPv6)
pass out quick inet
pass out quick inet6
# allow ICMP (ping) for MTU discovery
pass in proto icmp
# uncomment if you use SLAAC or ICMP6 (IPv6)
#pass in on egress inet6 proto icmp6
#pass in on egress inet6 proto udp from fe80::/10 port dhcpv6-server to fe80::/10 port dhcpv6-client no state
When you upgrade your OpenBSD system from a release to another or to a newer snapshot using sysupgrade, the command fw_update will automatically be run at the very end of the installer.
It will bypass any /etc/hosts changes as it runs from a mini root filesystem. If you do not want fw_update to be used over clearnet at this step, the only method is to disable the network, which can be done by using sysupgrade -n to prepare the upgrade without rebooting, and then:
disconnect your computer's Ethernet cable if any; if you use Wi-Fi and you have a physical kill switch, this will be enough to disable Wi-Fi
if you do not have such a killswitch and Wi-Fi is configured, rename its configuration file in /etc/hostname.if to another invalid name, you will have to rename it back after sysupgrade.
You could use this script to automate the process:
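The script itself is not reproduced here; a hedged sketch of what it could look like, assuming the network configuration lives in /etc/hostname.* files and that /etc/rc.firsttime is used to restore them at the first boot after the upgrade:
#!/bin/sh
# move the network configuration away so the installer has no network
mkdir -p /root/netconf
mv /etc/hostname.* /root/netconf/
# prepare the upgrade without rebooting (the running interfaces stay up)
sysupgrade -n
# restore the configuration and start the network at the next boot
cat >> /etc/rc.firsttime <<'EOF'
mv /root/netconf/hostname.* /etc/
sh /etc/netstart
EOF
reboot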
It will move all your network configuration in /root/, run sysupgrade, and configure the next boot to restore the hostname files back to place and start the network.
By default, OpenBSD "filters" webcam and microphone use, if you try to use them, you get a video stream with a black background and no audio on the microphone. This is handled directly by the kernel and only root can change this behavior.
To toggle microphone recording, change the sysctl kern.audio.record to 1 or 0 (default).
To toggle webcam recording, change the sysctl kern.video.record to 1 or 0 (default).
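For example, to allow microphone recording immediately and make the change persistent across reboots (run as root):
sysctl kern.audio.record=1
echo kern.audio.record=1 >> /etc/sysctl.conf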
What is cool with this mechanism is that it keeps software happy when they make the webcam/microphone a requirement: the devices exist, but just record nothing.
Congratulations, you achieved a high privacy level with your OpenBSD installation! If you have money and enough trust in some commercial services, you could use a VPN instead (or as a base) of Tor/I2P, but it is not in the scope of this guide.
I did this guide after installing OpenBSD on a laptop connected to another laptop doing NAT and running Wireshark to see exactly what was leaking over the network. It was a fun experience.
For an upcoming privacy related article about OpenBSD I needed to setup an access to an OpenBSD mirror both from a Tor hidden service and I2P.
The server does not contain any data, it only acts as a proxy fetching files from a random existing OpenBSD mirror, so it does not waste bandwidth mirroring everything (the server does not have the required storage anyway). There is a little cache to keep the most requested files locally.
It is only useful if you can not reach OpenBSD mirrors, or if you really need to hide your network activity. Tor or I2P will be much slower than connecting to a mirror using HTTP(s).
However, as they exist now, let me explain how to start using them.
If you want to install or update your packages from tor, you can use the onion address in /etc/installurl. However, it will not work for sysupgrade and syspatch, and you need to export the variable FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050" in your environment to make pkg_* programs able to use the mirror.
To make sysupgrade or syspatch able to use the onion address, you need to have the program torsocks installed, and patch the script to use torsocks:
sed -i 's,ftp -N,/usr/local/bin/torsocks &,' /usr/sbin/sysupgrade for sysupgrade
sed -i 's,ftp -N,/usr/local/bin/torsocks &,' /usr/sbin/syspatch for syspatch
These patches will have to be reapplied after each sysupgrade run.
If you want to install or update your packages from i2p, install i2pd with pkg_add i2pd, edit the file /etc/i2pd/i2pd.conf to set notransit = true except if you want to act as an i2p relay (high cpu/bandwidth consumption).
Replace the file /etc/i2pd/tunnels.conf by the following content (or adapt your current tunnels.conf if you configured it earlier):
[MIRROR]
type = client
address = 127.0.0.1
port = 8080
destination = 2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p
destinationport = 8081
keys = mirror.dat
Now, enable and start i2pd with rcctl enable i2pd && rcctl start i2pd.
After a few minutes to let i2pd establish tunnels, you should be able to browse the mirror over i2p using the address http://127.0.0.1:8080/. You can configure the port 8080 to another you prefer by modifying the file tunnels.conf.
You can use the address http://127.0.0.1:8080/pub/OpenBSD/ in /etc/installurl to automatically use the I2P mirror for installing/updating packages, or keeping your system up to date with syspatch/sysupgrade.
Note: from experience, the I2P mirror works fine to install packages, but did not play well with fw_update, syspatch and sysupgrade, maybe because they use the ftp command, which seems to drop the connection easily. Downloading the files locally using a proper HTTP client supporting transfer resume would be better. On the other hand, this issue may be related to the attack the I2P network is facing as of the time of writing (May 2024).
OpenBSD pulls firmware from a different server than the regular mirrors, the address is http://firmware.openbsd.org/firmware/, the files on this server are signed packages, they can be installed using fw_update $file.
Both i2p and tor hidden service hostname can be reused, you only have to change /pub/OpenBSD/ by /firmware/ to browse the files.
The proxy server does not cache any firmware, it directly proxies to the genuine firmware web server. The firmware files are kept on a separate server for legal matters, it seems to be a grey area.
For maximum privacy, you need to neutralize firmware.openbsd.org DNS lookup using a hosts entry. This is important because fw_update is automatically used after a system upgrade (as of 2024).
In /etc/hosts add the line:
127.0.0.9 firmware.openbsd.org
The IP in the snippet above is not a mistake: using 127.0.0.9 instead of 127.0.0.1 avoids fw_update trying to connect to a local web server, if you run one.
If you are using SSH quite often, it is likely you use an SSH agent which stores your private key in memory so you do not have to type your password every time.
This method is convenient, but it comes at the expense of your SSH key use security, anyone able to use your session while the agent holds the key unlocked can use your SSH key. This scenario is most likely to happen when using a compromised build script.
However, it is possible to harden this process at a small expense of convenience, make your SSH agent ask for confirmation every time the key has to be used.
The tooling provided with OpenSSH comes with a simple SSH agent named ssh-agent. On OpenBSD, the agent is automatically started and asks to unlock your key upon graphical login if it finds an SSH key in the default path (like ~/.ssh/id_rsa).
Usually, the method to run ssh-agent is the following: in a shell script defining your environment at an early stage, either your interactive shell configuration file or the script running your X session, you use eval $(ssh-agent -s). This command runs ssh-agent and also exports the environment variables needed to make it work.
Once your ssh-agent is correctly configured, it is required to add a key into it, now, here are two methods to proceed.
If you want to have a GUI confirmation upon each SSH key use, just add the flag -c to this command line: ssh-add -c /path/to/key.
In OpenBSD, if you have your key at a standard location, you can modify the script /etc/X11/xenodm/Xsession to change the first occurrence of ssh-add to ssh-add -c. You will still be greeted with a prompt for your key password upon login, but you will also be asked for confirmation on each use of the key.
It turns out the password manager KeepassXC can hold SSH keys; it works great, I have been using it this way for a while. KeepassXC can either store the private key within its database or load a private key from the filesystem using a path and unlock it using a stored password, the choice is up to you.
You need to have the ssh-agent variables in your environment to have the feature work, as KeepassXC will replace ssh-add only, not the agent.
KeepassXC documentation has a "SSH Agent integration" section explaining how it works and how to configure it.
I would recommend automatically deleting the key from the agent after some time, this is especially useful if you do not actively use your SSH key.
In ssh-add, this can be achieved using -t time flag (it's tea time, if you want to remember about it), where time is a number of seconds or a time format specified in sshd_config, like 5s for 5 seconds, 10m for 10 minutes, 16h for 16 hours or 2d for 2 days.
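For example, combining confirmation and expiry (the key path is just an illustration):
ssh-add -c -t 30m ~/.ssh/id_ed25519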
In KeepassXC, it's in the key settings, within the SSH agent tab, you can configure the delay before the key is removed from the agent.
The ssh-agent is a practical piece of software that eases the use of SSH keys without much compromise with regard to security, but some extra security can be useful in certain scenarios, especially for developers running untrusted code as the user holding the SSH key.
While the extra confirmation could still be manipulated by a rogue script, it would come with a greater complexity at the cost of being spotted more easily. If you really want to protect your SSH keys, you should use them from a hardware token requiring a physical action to unlock it. While I find those tokens not practical and expensive, they have their use and they can not be beaten by a pure software solution.
This program is particularly useful when you have repeated tasks to achieve in a terminal, or if you want to automate your tmux session to save your fingers from always typing the same commands.
tmuxinator is packaged in most distributions and requires tmux to work.
tmuxinator requires a configuration file for each "session" you want to manage with it. It provides a command line parameter to generate a file from a template:
$ tmuxinator new name_here
By default, it will create the yaml file for this project in $HOME/.config/tmuxinator/name_here.yml, if you want the project file to be in a directory (to make it part of a versioned project repository?), you can add the parameter --local.
Here is a tmuxinator configuration file I use to automatically do the following tasks, the commands include a lot of monitoring as I love watching progress and statistics:
update my ports tree using git before any other task
run a script named dpb.sh
open a shell and cd into a directory
run an infinite loop displaying ccache statistics
run an infinite loop displaying a MFS mount point disk usage
display top
display top for user _pbuild
I can start all of this using tmuxinator start dpb, or stop only these "parts" of tmux with tmuxinator stop dpb which is practical when using tmux a lot.
Here is my file dpb.yml:
name: dpb
root: ~/
# Runs on project start, always
on_project_start: cd /usr/ports && doas -u solene git pull -r
windows:
  - dpb:
      layout: tiled
      panes:
        - dpb:
          - cd /root/packages/packages
          - ./dpb.sh -P list.txt -R
        - watcher:
          - cd /root/logs
          - ls -altrh locks
          - date
        - while true ; do clear && env CCACHE_DIR=/build/tmp/pobj/.ccache/ ccache -s ; sleep 5 ; done
        - while true ; do df -h /build/tmp/pobj_mfs/ | grep % ; sleep 10 ; done
        - top
        - top -U _pbuild
Tmuxinator could be used to ssh into remote servers, connect to IRC, open your email client, clean stuff, there are no limits.
This is particularly easy to configure as it does not try to run commands, but only sends the keys to each tmux pane, which means it sends keystrokes as if you typed them. In the example above, you can see how the pane "dpb" can cd into a directory and then run a command, or how the pane "watcher" can run multiple commands and leave the shell as is.
I knew about tmuxinator for a while, but I never gave it a try before this week. I really regret not doing it earlier. Not only does it allow me to "script" my console usage, but I can also embed some development configuration into my repositories. While you can use it as an automation method, I would not rely too much on it though, as it only types blindly on the keyboard.
If you use a commercial VPN, you may have noticed they all provide WireGuard configurations in the wg-quick format, which is not directly usable on OpenBSD.
As I currently work a lot for a VPN provider, I often have to play with configurations and I really needed a script to ease my work.
I made a shell script that turns a wg-quick configuration into a hostname.if compatible file, for a full integration into OpenBSD. This is practical if you always want to connect to a given VPN server, not for temporary connections.
It is really easy to use, download the script and mark it executable, then run it with your wg-quick configuration as a parameter, it will output the hostname.if file to the standard output.
wg-quick-to-hostname-if fr-wg-001.conf | doas tee /etc/hostname.wg0
In the generated file, it uses a trick to dynamically figure the current default route which is required to keep a non-vpn route to the VPN gateway.
If you need your WireGuard VPN to be leakproof (= no network traffic should leave the network interface outside the VPN if it's not toward the VPN gateway), you should absolutely do the following:
your WireGuard VPN should be on rdomain 0
WireGuard VPN should be established on another rdomain
use PF to block traffic on the other rdomain that is not toward the VPN gateway (see the sketch after this list)
use the VPN provider DNS or a no-log public DNS provider
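As a hedged PF sketch for the third point, assuming the physical interface sits in rdomain 1 and the VPN endpoint is 203.0.113.10 port 51820 (both are placeholders to adapt):
# in /etc/pf.conf: only let the WireGuard handshake traffic leave rdomain 1
block drop on rdomain 1
pass out on rdomain 1 inet proto udp to 203.0.113.10 port 51820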
OpenBSD's ability to configure WireGuard VPNs with ifconfig has always been an incredible feature, but it was not always fun to convert wg-quick files. Now, using a commercial VPN got a lot easier thanks to a few pieces of shell.
I always had an interest in practical security on computers, be it workstations or servers. Many kinds of threats exist for users and system administrators, and it's up to them to define a threat model to know what is acceptable or not. Nowadays, we have choice in the operating system land to pick what works best for that threat model: OpenBSD with its continuous security mechanisms, Linux with hardened flags (too bad grsec isn't free anymore), Qubes OS to keep everything separated, immutable operating systems like Silverblue or MicroOS (in my opinion they don't bring much to the security table though) etc...
My threat model has always been the following: some exploit on my workstation remaining unnoticed almost forever, stealing data and capturing the keyboard continuously. This one would be particularly bad because I have access to many servers through SSH, like OpenBSD servers. Protecting against that is particularly complicated, the best mitigations I found so far are to use Qubes OS with disposable VMs or to restrict outbound network, but it's not practical.
My biggest gripe with computers has always been "states". What is a state? It is something that distinguishes one computer from another: installed software, configuration, data at rest (pictures, documents etc…). We use states because we don't want to lose work, and we want our computers to hold our preferences.
But what if I could go stateless? The best defense against data stealer is to own nothing, so let's go stateless!
My idea is to be able to use any computer around, and be able to use it for productive work, but it should always start fresh: stateless.
A stateless productive workstation obviously has challenges: How would it help with regard to security? How would I manage passwords? How would I work on a file over time? How to achieve this?
I have been able to address each of these questions. I am now using a stateless system.
States? Where we are going, we don't need states! (certainly Doc Brown in a different timeline)
It is obvious that we need to keep files for most tasks. This setup requires a way to store files on a remote server.
Here are different methods to store files:
Nextcloud
Seafile
NFS / CIFS over VPN
iSCSI over VPN
sshfs / webdav mount
Whatever works for you
Encryption could be done locally with tools like cryfs or gocryptfs, so only encrypted files would be stored on the remote server.
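For example, with gocryptfs, the encrypted directory can live inside the synchronized folder while the clear-text view is only mounted locally (paths are placeholders):
# one-time initialization of the encrypted directory
mkdir -p ~/Seafile/vault ~/vault
gocryptfs -init ~/Seafile/vault
# mount the clear-text view; unmount later with: fusermount -u ~/vault
gocryptfs ~/Seafile/vault ~/vault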
Nextcloud end-to-end encryption should not be used as of April 2024, it is known to be unreliable.
Seafile, a less known alternative to Nextcloud but focused only on file storage, supports end-to-end encryption and is reliable. I chose this one as I had a good experience with it 10 years ago.
Having access to the data storage in a stateless environment comes with an issue: getting the credentials to access the files. Passwords should be handled differently.
The main driving force for this project is to increase my workstation security, I had to think hard about this part.
Going stateless requires a few changes compared to a regular workstation:
data should be stored on a remote server
passwords should be stored on a remote server
a bootable live operating system
programs to install
This is mostly a paradigm change with pros and cons compared to a regular workstation.
Data and passwords stored in the cloud? This is not really an issue when using end-to-end encryption, this is true as long as the software is trustable and its code is correct.
A bootable live operating system is quite simple to acquire. There is a ton of Linux distributions able to boot from a CD or from USB, and non-Linux live systems exist too. A bootable USB device could be compromised while a CD is an immutable medium, but there are USB devices such as the Kanguru FlashBlu30 with a physical switch to make the device read-only. A USB device could be removed immediately after the boot, making it safe. As for physically protecting the USB device in case you would not trust it anymore, just buy a new USB memory stick and reflash it.
As for installed programs, it is fine as long as they are packaged and signed by the distribution, the risks are the same as for a regular workstation.
The system should be more secure than a typical workstation because:
the system never has access to all data at once, the user is supposed to only pick what they need for a given task
any malware that would succeed to reach the system would not persist to the next boot
The system would be less secure than a typical workstation because:
remote servers could be exploited (or offline, not a security issue but…), this is why end-to-end encryption is a must
To circumvent this, I only have the password manager service reachable from the Internet, which then allows me to create a VPN to reach all my other services.
I think it is a dimension that deserves to be analyzed for such setup. A stateless system requires remote servers to run, and use bandwidth to reinstall programs at each boot. It is less ecological than a regular workstation, but at the same time it may also enforce some kind of rationalization of computer usage because it is a bit less practical.
Here is a list of setups that already exist which could provide a stateless experience, with support for either a custom configuration or a mechanism to store files (like SSH or GPG keys, but a USB smart card would be better for those):
NixOS with impermanence, this is an installed OS, but almost everything on disk is volatile
NixOS live-cd generated from a custom config
Tails, comes with a mechanism to locally store encrypted files, privacy-oriented, not really what I need
Alpine with LBU, comes with a mechanism to locally store encrypted files and cache applications
FuguITA, comes with a mechanism to locally store encrypted files (OpenBSD based)
Guix live-cd generated from a custom config
Arch Linux generated live-cd
Ubuntu live-cd, comes with a mechanism to retrieve files from a partition named "casper-rw"
Otherwise, any live system could just work.
Special bonus to NixOS and Guix generated live-cds as you can choose which software will be in there, in its latest version. Similar bonus with Alpine and LBU: packages are always installed from a local cache, which means you can update them.
A live-cd generated a few months ago is certainly not really up to date.
I decided to go with Alpine with its LBU mechanism, it is not 100% stateless but hits the perfect spot between "I have to bootstrap everything from scratch" and "I can reduce the burden to a minimum". The setup relies on two USB memory sticks:
one with Alpine installer, upgrading to a newer Alpine version only requires me to write the new version on that stick
a second to store the packages cache and some settings such as the package list and specific changes in /etc (user name, password, services)
While it is not 100% stateless, the files on the second memory stick are just a way to have a working customized Alpine.
This is a pretty cool setup, it boots really fast as all the packages are already in cache on the second memory stick (packages are signed, so it is safe). I made a Firefox profile with settings and extensions, so it is always fresh and ready when I boot.
I decided to go with the following stack, entirely self-hosted:
Vaultwarden for passwords
Seafile for data (behind VPN)
Nextcloud for calendar and contacts (behind VPN)
Kanboard for task management (behind VPN)
Linkding for bookmarks (behind VPN)
WireGuard for VPN
This setup offered me freedom. Now, I can bootstrap into my files and passwords from any computer (a trustable USB memory stick is advisable though!).
I can also boot using any kind of operating system on any of my computers, it became so easy it's refreshing.
I do not make use of dotfiles or stored configurations because I use vanilla settings for most programs, a git repository could be used to fetch all settings quickly though.
A tricky part with this setup is to proceed with serious backups. The method will depend on the setup you chose.
With my self-hosted stack, restic makes a daily backup to two remote locations, but I should be able to reach the backup if my services are not available due to a server failure.
If you use proprietary services, it is likely they handle backups for you, but it is better not to trust them blindly and to check out all your data on a regular schedule to make a proper backup.
This is an interesting approach to workstations management, I needed to try. I really like how it freed me from worrying about each workstation, they are now all disposable.
I made a mind map for this project, you can view it below, it may be useful to better understand how things articulate.
Yesterday, Red Hat announced that the xz library was compromised badly, and could be used as a remote code execution vector. It's still not clear exactly what's going on, but you can learn about this in the following GitHub discussion that also links to the original posts:
As far as we currently know, xz-5.6.0 and xz-5.6.1 contain some really obfuscated code that would trigger only in sshd, and this only happens when:
the system is running systemd
openssh is compiled with a patch to add a feature related to systemd
the system is using glibc (this is mandatory for systemd systems afaik anyway)
xz package was built using release tarballs published on GitHub and not auto-generated tarballs, the malicious code is missing in the git repository
So far, it seems openSUSE Tumbleweed, Fedora 40 and 41 and Debian sid were affected and vulnerable. Nobody knows what the vulnerability is doing exactly yet, when security researchers get their hands on it, we will know more.
OpenBSD, FreeBSD, NixOS and Qubes OS (dom0 + official templates) are unaffected. I didn't check for others, but Alpine and Guix shouldn't be vulnerable either.
It is really unfortunate that a piece of software as important and harmless in appearance got compromised. This made me think about how we could best protect against this kind of issue, I came to the conclusion:
packages should be built from source code repository instead of tarballs whenever possible (sometimes tarballs contain vendoring code which would be cumbersome to pull otherwise), at least we would know what to expect
public network services that should be only used by known users (like openssh, imap server in small companies etc..) should be run behind a VPN
OpenBSD style to have a base system developed as a whole by a single team is great, such kind of vulnerability is barely possible to happen (on base system only, ports aren't audited)
whenever possible, separate each network service within their own operating system instance (using hardware machines, virtual machines or even containers)
avoid daemons running as root as much as possible
use opensnitch on workstations (linux only)
control outgoing traffic whenever you can afford to
I don't have much opinion about what could be done to protect the supply chain. As a packager, it's not possible to audit the code of each software we update. My take on this is we have to deal with it, xz is certainly not the only vulnerable library running in production.
However, the risks could be reduced by:
using less programs
using less complex programs
compiling programs with less options to pull in less dependencies (FreeBSD and Gentoo both provide this feature and it's great)
I actually have two systems that were running the vulnerable libs on openSUSE MicroOS, which updates very aggressively (daily update + daily reboot). There is no magic balance between "update as soon as possible" and "wait for some people to take the risks first".
I'm going to rework my infrastructure and expose the bare minimum to the Internet, and use a VPN for all my services that are for known users. The peace of mind obtained will be far greater than the burden of setting up WireGuard VPNs.
While testing the cloud gaming service GeForce Now, I've learned that PlayStation also had an offer.
Basically, if you use a PlayStation 4 or 5, you can subscribe to the first two tiers to benefit from some services and a games library, but the last tier (premium) adds more content AND allows you to play video games on a computer with their client, no PlayStation required. I already had the second tier subscription, so I paid the small extra to switch to premium in order to experiment with the service.
Compared to GeForce Now, while you are subscribed you have a huge game library at hand. This makes the service a lot cheaper if you are happy with the content. The service costs 160$€ / year if you pay for 12 months, which is roughly the price of 2 AAA games nowadays...
The service is only available using the PlayStation Plus Windows program. It's possible to install it on Linux, but it will use more CPU because hardware decoding doesn't seem to work on Wine (even wine-staging with vaapi compatibility checked).
There are no clients for Android, and you can't use it in a web browser. The Xbox Game Pass streaming and GeForce now services have all of that.
Sadness will start here. The service is super promising, but the application is currently a joke.
If you don't plug a PS4 controller (named a dualshock 4), you can't use the "touchpad" button, which is mandatory to start a game in Tales of Arise, or very important in many games. If you have a different controller, on Windows you can use the program "DualShock 4 emulator" to emulate it, on Linux it's impossible to use, even with a genuine controller.
A PS5 controller (dualsense) is NOT compatible with the program, the touchpad won't work.
There are absolutely no settings in the application, you can run a game just by clicking on it; did I mention there is no way to search for a game?
I guess games are started in 720p, but I'm not sure, putting the application full screen didn't degrade the quality, so maybe it's 1080p but doesn't go full screen when you run it...
Frame rate... this sucks. Games seem to run on a PS4 fat, not a PS4 pro that would allow 60 fps. On most games you are stuck with 30 fps and an insane input lag. I've not been able to cope with AAA games like God of War or Watch Dogs Legion as it was horrible.
Independent games like Alex Kidd remaster, Monster Boy or Rain World did feel very smooth though (60fps!), so it's really an issue with the hardware used to run the games.
Don't expect any PS5 games in streaming from Windows, there are none.
The service allows PlayStation users to play all games from the library (including PS5 games) in streaming up to 2160p@120fps, but not the application users. This feature is only useful if you want to try a game before installing it, or if your PlayStation storage is full.
The fun continues here. There are game saves in the PlayStation Plus program cloud, but if you also play on a PlayStation, those saves are sent to a different storage than the PlayStation cloud saves.
There is a horrible menu to copy saves from one pool to the other.
This is not an issue if you only use the streaming application or the PlayStation, but it gets very hard to figure out where your save is if you play on both.
I have been highly disappointed by the streaming service (outside PlayStation use). The Windows program required me to sign in twice before working (I tried on 5 devices!), most interesting games run poorly due to the PS4 hardware, and there is no way to enable the performance mode that was added to many games to support the PS4 Pro. This is pretty curious as the streaming from a PlayStation device is a stellar experience: it's super smooth, high quality, no input lag, no waiting, crystal clear picture.
No Android application? Curious... No support for a genuine PS5 controller, WTF?
The service is still young, I really hope they will work at improving the streaming ecosystem.
At least, it works reliably and pretty well for simpler games.
It could be a fantastic service if the following requirements were met:
proper hardware to run games at 60fps
greater controller support
allow playing in a web browser, or at least allow people to run it on smartphones with a native application
I'm finally done with ADSL now as I got access to optical fiber last week! It was time for me to try cloud gaming again and see how it improved since my last use in 2016.
If you are not familiar with cloud gaming, please do not run away, here is a brief description. Cloud gaming refers to a service allowing one to play locally a game running on a remote machine (either locally or over the Internet).
There are a few commercial services available, mainly: GeForce Now, PlayStation Plus Premium (other tiers don't have streaming), Xbox game pass Ultimate and Amazon Luna. Two major services died in the long run: Google Stadia and Shadow (which is back now with a different formula).
A note on Shadow, they are now offering access to an entire computer running Windows, and you do what you want with it, which is a bit different from other "gaming" services listed above. It's expensive, but not more than renting an AWS system with equivalent specs (I know some people doing that for gaming).
This article is about the service Nvidia GeForce Now (not sponsored, just to be clear).
I tried the free tier, premium tier and ultimate tier (thanks to people supporting me on Patreon, I could afford the price for this review).
This is the first service I tried in 2016 when I received an Nvidia Shield HTPC, the experience was quite solid back in the days. But is it good in 2024?
The answer is clear, yes, it's good, but it has limitations you need to be aware of. The free tier allows playing for a maximum of 1 hour in a single session, and with a waiting queue that can be fast (< 1 minute) or long (> 15 minutes), but the average waiting time I had was like 9 minutes. The waiting queue also displays ads now.
The premium tier at 11€$/month removes the queue system by giving you priority over free users, always assigns an RTX card and allows playing up to 6 hours in a single session (you just need to start a new session if you want to continue).
Finally, the ultimate tier costs 22€$/month and allows you to play in 4K@120fps on a RTX 4080, up to 8h.
The tiers are quite good in my opinion, you can try and use the service for free to check if it works for you, then the premium tier is affordable to be used regularly. The ultimate tier will only be useful to advanced gamers who need 4K, or higher frame rates.
Nvidia just released a new offer early March 2024, a premium daily pass for $3.99 or ultimate daily pass for 8€. This is useful if you want to evaluate a tier before deciding if you pay for 6 months. You will understand later why this daily pass can be useful compared to buying a full month.
I tried the service using a Steam Deck, a Linux computer over Wi-Fi and Ethernet, a Windows computer over Ethernet and in a VM on Qubes OS. The latency and quality were very different.
If you play in a web browser (Chrome based, Edge, Safari), make sure it supports hardware acceleration video decoding, this is the default for Windows but a huge struggle on Linux, Chrome/Chromium support is recent and can be enabled using chromium --enable-features=VaapiVideoDecodeLinuxGL --use-gl=angle. There is a Linux Electron App, but it does nothing more than bundling the web page in chromium, without acceleration.
On a web browser, the codec used is limited to h264, which does not work great with dark areas; it is less effective than advanced codecs like av1 or hevc (commonly known as h265). If your web browser can't handle the stream, it will lose packets, and the GeForce service will instantly reduce the quality until you stop losing packets, which makes things very ugly until it recovers, and then it drops again. Using hardware acceleration solves the problem almost entirely!
Web browser clients are also limited to 60 fps (so ultimate tier is useless), and Windows web browsers can support 1440p but no more.
On Windows and Android you can install a native Geforce Now application, and it has a LOT more features than in-browser. You can enable Nvidia reflex to remove any input lag, HDR for compatible screens, 4K resolution, 120 fps frame rate etc... There is also a feature to add color filters for whatever reason... The native program used AV1 (I only tried with the ultimate tier), games were smooth with stellar quality and not using more bandwidth than in h264 at 60 fps.
I took a screenshot while playing Baldur's Gate 3 on different systems, you can compare the quality:
In my opinion, the best looking one is surprisingly the Geforce Now on Windows, then the native run on Steam and finally on Linux where it's still acceptable. You can see a huge difference in terms of quality in the icons in the bottom bar.
When I upgraded from free to premium tier, I paid for 1 month and was instantly able to use the service as a premium user.
Premium gives you priority in the queues, I saw the queue display a few times for a few seconds, so there is virtually no queue, and you can play for 6 hours in a row.
When I upgraded from premium to ultimate tier, I was expecting to pay the price difference between my current subscription and the new one, but it was totally different. I had to pay for a whole month of ultimate tier, and my current remaining tier was converted as an ultimate tier, but as ultimate costs a bit more than twice premium, a pro rata was applied to the premium time, resulting in something like 12 extra days of ultimate for the premium month.
Ultimate tier allows reaching a 4K resolution and a 120 fps refresh rate, allows saving video settings in games so you don't have to tweak them every time you play, and provides an Nvidia 4080 for every session, so you can always set the graphics settings to maximum. You can also play up to 8 hours in a row. Additionally, you can record gaming sessions or the past n minutes; there is a dedicated panel using Ctrl+G. It's possible to achieve 240 fps for compatible monitors, but only at 1080p resolution.
Due to the tier upgrade method, the ultimate pass can be interesting, if you had 6 months of premium, you certainly don't want to convert it into 2 months of ultimate + paying 1 month of ultimate just to try.
As a gamer, I'm highly sensitive to latency, and local streaming has always felt poor with regard to latency, and I've been very surprised to see I can play an FPS game with a mouse on cloud gaming. I had a ping of 8-75 ms with the streaming servers, which was really OK. Games featuring "Nvidia reflex" have no sensitive input lag, this is almost magic.
When using a proper client (native Windows client or a web browser with hardware acceleration), the quality was good, input lag barely noticeable (none in the app), it made me very happy :-)
Using the free tier, I always had a rig good enough to put the graphics quality on High or Ultra, which surprised me for a free service. On premium and later, I had an Nvidia 2080 minimum which is still relevant nowadays.
The service can handle multiple controllers! You can use any kind of controller, and even mix Xbox / PlayStation / Nintendo controllers, no specific hardware required here. This is pretty cool as I can visit my siblings, bring controllers and play together on their computer <3.
Another interesting benefit is that you can switch your gaming session from a device to another by connecting with the other device while already playing, Geforce Now will switch to the new connecting device without interruption.
This is where GeForce now is pretty cool, you don't need to buy games to them. You can import your own libraries like Steam, Ubisoft, Epic store, GOG (only CD Projekt Red games) or Xbox Game Pass games. Not all games from your libraries will be playable though! And for some reasons, some games are only available when run from Windows (native app or web browser), like Genshin Impact which won't appear in the games list if connected from non-Windows client?!
If you already own games (don't forget to claim weekly free Epic store games), you can play most of them on GeForce Now, and thanks to cloud saves, you can sync progression between sessions or with a local computer.
There are a bunch of free-to-play games that are good (like Warframe, Genshin Impact, some MMOs), so you could enjoy playing video games without having to buy one (until you get bored?).
If you don't currently own a modern gaming computer and you subscribe to the premium tier (9.17 $€/month when signing up for 6 months), this costs you 110 $€ / year.
Given that an equivalent GPU costs at least 400 $€ and could cope with games in High quality for 3 years (I'm optimistic), the GPU alone costs more than subscribing to the service. Of course, a local GPU can also be used for data processing nowadays, could be sold second hand, or could keep running older games for many years.
If you add the whole computer around the GPU, renewed every 5 or 6 years (we are targeting to play modern games in high quality here!), you can add 1200 $€ / 5 years (or 240 $€ / year).
When using the ultimate tier, you instantly get access to the best GPU available (currently a Geforce 4080, retail value of 1300 $€). Cost wise, this is impossible to beat with owned hardware.
I did some math to figure out how much money you can save on electricity: the average gaming rig draws approximately 350 watts when playing, while a GeForce Now thin client plus a monitor would use 100 watts in the worst case scenario (a laptop alone would be closer to 35 watts). So you save 0.25 kWh per hour of gaming; if one plays 100 hours per month (that's 20 days playing 5 hours, or 3.33 hours / day), they save 25 kWh. At the official French rate of 0.25 € / kWh, that is a 6.25 € saving on electricity, which effectively lowers the cost of the monthly subscription. Obviously, if you play less, the savings are smaller.
Most of the time, the streaming was using between 3 and 4 MB/s for a 1080p@60fps (full-hd resolution, 1920x1080, at 60 frames per second) in automatic quality mode. Playing at 30 fps or on smaller resolutions will use drastically less bandwidth. I've been able to play in 1080p@30 on my old ADSL line! (quality was degraded, but good enough). Playing at 120 fps slightly increased the bandwidth usage by 1 MB/s.
I remember a long tech article about ecology and cloud gaming which concluded that cloud gaming is more "eco-friendly" than running locally only if you play less than a dozen hours. However, it assumed you already had a capable gaming computer at home whether you use cloud gaming or not, which is a huge bias in my opinion. It also didn't account for the fact that one may install a video game multiple times, and that a single game now weighs 100 GB (bandwidth wise, that's the equivalent of 20 hours of cloud gaming!). The biggest cons were the bandwidth requirements and the worldwide maintenance needed to keep high speed lines for everyone. I do think cloud gaming is more efficient as it allows pooling gaming hardware instead of everyone owning their own.
As a comparison, 4K streaming at Netflix uses 25 Mbps of network (~ 3.1 MB/s).
Geforce Now allows you to play any compatible game on Android, but is it worth it? I tried it with a Bluetooth controller on my BQ Aquaris X running LineageOS (a 7-year-old phone with average specs and a 720p screen).
I was able to play over Wi-Fi using the 5 GHz network, and it felt perfect except that I had to find a comfortable way to position the smartphone screen. This was draining the battery at a rate of 0.7% / minute, but this is an old phone, I expect newer hardware to do better.
On 4G, the battery usage was less than Wi-Fi with 0.5% / minute. The service at 720p@60fps used an average of 1.2 MB/s of data for a gaming session of Monster Hunter world. At this rate, you can expect a data usage of 4.3 GB / hour of gameplay, which could be a lot or cheap depending on your usage and mobile subscription.
Globally, playing on Android was very good, but only if you have a controller. There are interesting folding controllers that sandwich the smartphone between two parts, turning it into something looking like a Nintendo Switch, this can be a very interesting device for players.
You can use "Ctrl+G" to change settings while in game or also display information about the streaming.
In GeForce Now settings (not in-game), you can choose the servers location if you want to try a different datacenter. I set it to choose the nearest, otherwise I could land on a remote one with a bad ping.
GeForce Now even works on OpenBSD or Qubes OS qubes (more on that later on Qubes OS forum!).
GeForce Now is a pretty neat service: the free tier is good enough for occasional gamers who play once in a while for a short session, and the paid tiers provide a cheaper alternative to keeping a gaming rig up to date. I really like that they let me use my own library instead of having to buy games on their own store.
I'm preparing another blog post about local and self-hosted cloud gaming, and I have to admit I haven't been able to do better than Geforce Now, even on my local network... Engineers at Geforce Now certainly know their stuff!
The experience was solid and enjoyable even on a 10-year-old laptop. A "cool" feature when playing is the surrounding silence, as no local CPU/GPU is crunching for rendering! My GPU is still capable of handling modern games at average quality at 60 FPS, but I may consider using the premium tier in the future instead of replacing my GPU.
As a daily Qubes OS user, I often feel the need to expose a port of a given qube to my local network. However, the process is quite painful because it requires adding the NAT rules on each layer (usually net-vm => sys-firewall => qube), it's a lot of wasted time.
I wrote a simple script, to be used from dom0, that does all the work: opening the ports on the qube, and, for each NetVM on the path, opening and redirecting the ports.
It's quite simple to use, the hardest part will be to remember how to copy it to dom0 (download it in a qube and use qvm-run --pass-io from dom0 to retrieve it).
Make the script executable with chmod +x nat.sh. Now, if you want to redirect port 443 of a qube, run ./nat.sh qube 443 tcp. That's all.
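For instance, the whole dance could look like this from a dom0 terminal (the qube name "work" and the path of the script inside the qube are assumptions, adapt them to your setup):

qvm-run --pass-io work 'cat /home/user/nat.sh' > nat.sh
chmod +x nat.sh
./nat.sh work 443 tcp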
Be careful, the changes ARE NOT persistent. This is on purpose, if you want to always expose ports of a qube to your network, you should script its netvm accordingly.
The script is not altering the firewall rules handled by qvm-firewall, it only opens the ports and redirects them (this happens at a different level). This can be cumbersome for some users, but I decided not to touch rules hard-coded by users in order to not break any expectations.
Running the script should not break anything. It works for me, but it was only slightly tested though.
The avahi daemon uses the UDP port 5353. You need this port to discover devices on a network. This can be particularly useful to find network printers or scanners and use them in a dedicated qube.
It could be possible to use this script in qubes-rpc, this would allow any qube to ask for a port forwarding. I was going to write it this way at first, but then I thought it may be a bad idea to allow a qube to run a dom0 script as root that requires reading some untrusted inputs, but your mileage may vary.
The following list of features are not all OpenBSD specific as some can be found on other BSD systems. Most of the knowledge will not be useful to Linux users.
The secure level is a sysctl named kern.securelevel, it has 4 different values from level -1 to level 2, and it's only possible to increase the level. By default, the system enters the secure level 1 when in multi-user (the default when booting a regular installation).
It's then possible to escalate to the last secure level (2), which will enable the following extra security:
all raw disks are read-only, so it's not possible to try to make a change to the storage devices
the time is almost locked, it's only possible to adjust the clock slowly, by small steps (maybe 1 second max every so often)
the PF firewall rules can't be modified, flushed or altered
This feature is mostly useful for dedicated firewalls whose rules rarely change. Preventing the time from changing is really useful for remote logging as it allows being sure of "when" things happened, and you can be assured the past logs weren't modified.
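For reference, raising the level at runtime is a single sysctl call as root; to make level 2 the default at boot, my understanding is that the usual place to set it is /etc/rc.securelevel (double-check rc(8) on your release before relying on it):

# raise the securelevel immediately; it cannot be lowered again without a reboot
sysctl kern.securelevel=2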
The default secure level 1 already enables some extra security: for instance, the "immutable" and "append-only" file flags can't be removed. These overlooked flags (applied with chflags) can lock down files to prevent anyone from modifying them. The append-only flag is really useful for logs: existing content can't be modified, only new content can be added, so history can't be rewritten this way.
OpenBSD's memory allocator can be tweaked, system-wide or per command, to add extra checks. This could be either used for security reasons or to look for memory allocation related bugs in a program (this is VERY common...).
There are two methods to apply the changes:
system-wide by using the sysctl vm.malloc_conf, either immediately with the sysctl command, or at boot in /etc/sysctl.conf (make sure you quote its value there, some characters such as > will create troubles otherwise, been there...)
on the command line by prepending env MALLOC_OPTIONS="flags" program_to_run
The man page gives the list of flags to use as options, the easiest one being S (for security checks). The man page states that a program misbehaving with any flag other than X is buggy, so it's not YOUR fault if you use malloc options and a program crashes (except if you wrote the code ;-) ).
You are certainly used to files attributes like permissions or ownership, but on many file systems (including OpenBSD ffs), there are flags as well!
The file flags can be altered with the command chflags, there are a couple of flags available:
nodump: prevent the files from being saved by the command dump (except if you use a flag in dump to bypass this)
sappnd: the file can only be written to in append mode, only root can set / remove this flag
schg: the file cannot be changed, it becomes immutable, only root can alter this flag
uappnd: same as sappnd mode but the user can alter the flag
uchg: same as schg mode but the user can alter the flag
As explained in the secure level section above, at secure level 1 (the default!), the flags sappnd and schg can't be removed; you would need to boot in single user mode to remove these flags.
Tip: remove the flags on a file with chflags 0 file [...]
You can check the flags on files using ls -ol; the output shows an extra column with the flags and looks something like this (illustrative example: the file names and sizes are made up, and the exact column layout may differ slightly):
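-rw-r--r--  1 solene  wheel  uchg   1234 Jan 10 10:29 notes.txt
-rw-r--r--  1 solene  wheel  -       512 Jan 10 10:29 regular.txt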
OpenBSD crontab format received a few neat additions over the last years.
random number for time field: you can use ~ in a field instead of a number or * to generate a random value that will remain stable until the crontab is reloaded. Things like ~/5 work. You can force the random value within a range with 20~40 to get values between 20 and 40.
only send an email if the return code isn't 0 for the cron job: add -n between the time and the command, like in 0 * * * * -n /bin/something.
only run one instance of a job at a time: add -s between the time and the command, like in * * * * * -s /bin/something. This is incredibly useful for cron job that shouldn't be running twice in parallel, if the job duration is longer than usual, you are ensured it will never start a new instance until the previous one is done.
no logging: add -q between the time and the command, like in * * * * * -q /bin/something, the effect will be that this cron job will not be logged in /var/cron/log.
It's possible to use a combination of flags like -ns. The random time is useful when you have multiple systems, and you don't want them to all run a command at the same time, like in a case they would trigger a huge I/O on a remote server. This was created to prevent the usual 0 * * * * sleep $(( $RANDOM % 3600 )) && something that would run a sleep command for a random time up to an hour before running a command.
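As an illustration, a crontab entry combining these features could look like the following (the backup script path is made up):

# random (but stable) minute within the 3 AM hour, never two instances at once, mail only on failure
~ 3 * * * -ns /usr/local/bin/backup.sh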
One cool feature on OpenBSD is the ability to easily create an installation media with pre-configured answers. This is done by injecting a specific file in the bsd.rd install kernel.
There is a simple tool named upobsd that was created by semarie@ to easily modify such bsd.rd file to include the autoinstall file, I forked the project to continue its maintenance.
In addition to automatically installing OpenBSD with users, ssh configuration, sets to install, etc., it's also possible to add a site.tgz archive along with the usual sets archives; it contains files you want to add to the system, and can include a script run at first boot to trigger some automation!
These features are a must-have if you run OpenBSD in production and have many machines to manage; enrolling a new device into the fleet should be as automated as possible.
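For reference, an autoinstall(8) answer file is just a list of question/answer pairs; a rough sketch could look like this (all values are placeholders, and the installer matches the question text loosely, so check autoinstall(8) for the exact wording):

System hostname = demo
Password for root account = use_a_real_password_or_hash_here
Setup a user = solene
Password for user solene = use_a_real_password_or_hash_here
Allow root ssh login = no
What timezone are you in = Europe/Paris
Location of sets = http
HTTP Server = cdn.openbsd.org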
Apmd is certainly running on most OpenBSD laptops and desktops around, but it has features that aren't exposed as command line flags, so you may have missed them.
There are different file names that can contain a script to be run upon some event such as suspend, resume, hibernate etc...
A classic usage is to run xlock in one's X session on suspend, so the system will require a password on resume.
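As a minimal sketch, the suspend hook is just an executable shell script at a known path (the user name, display and the xlock locker are assumptions; see apmd(8) for the full list of hooks):

#!/bin/sh
# /etc/apm/suspend -- run by apmd before suspending
# assumes the X session belongs to user solene on display :0 and that xlock is installed
su solene -c "env DISPLAY=:0 xlock" &

Don't forget to make it executable with chmod +x /etc/apm/suspend.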
A bit similar to apmd running a script upon events, hotplugd is a service that allows running a script when a device is added / removed.
A typical use is to automatically mount a USB memory stick when it is plugged into the system, or start the cups daemon when powering on your USB printer.
The script receives two parameters that represent the device class and device name, so you can use them in your script to know what was connected. The example provided in the man page is a good starting point.
The scripts aren't really straightforward to write: you need to make a precise list of the hardware you expect and what to run for each, and don't forget to skip unknown hardware. Also make the scripts executable, otherwise nothing will happen.
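To give an idea, a minimal /etc/hotplug/attach could look like this (the device class number, disk name and mount point are assumptions to verify against hotplugd(8) and your own hardware):

#!/bin/sh
# called by hotplugd with the device class and device name as arguments
DEVCLASS=$1
DEVNAME=$2

case "$DEVCLASS" in
2)      # assumed to be the disk device class, check hotplugd(8)
        case "$DEVNAME" in
        sd1)
                # assumption: my USB stick always shows up as sd1 with an MSDOS partition on sd1i
                mount -t msdos /dev/sd1i /mnt/usb
                ;;
        esac
        ;;
esac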
Finally, there is a feature that looks pretty cool. In the daily script, if an OpenBSD partition /altroot/ exists in /etc/fstab and the daily script environment has the variable ROOTBACKUP=1, the root partition will be duplicated to it. This permits keeping an extra root partition in sync with the main root partition. Obviously, it's more useful if the altroot partition is on another drive. The duplication is done with dd; you can look at the exact code in the script /etc/daily.
However, it's not clear how to boot from this partition if you didn't install a bootloader or created an EFI partition on the disk...
OpenBSD comes with a program named "talk": it creates a 1-to-1 chat with another user, either on the local system or a remote one (the remote setup is more complicated). It is not asynchronous; both users must be logged in to the system to use talk.
This program isn't OpenBSD specific and can be used on Linux as well, but it's so fun, effective and easy to setup I wanted to write about it.
The communication happens on localhost over UDP ports 517 and 518, don't open them to the Internet! If you want to allow a remote system, use a VPN to encrypt the traffic and allow ports 517/518 only on the VPN interface.
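For the record, restricting these ports to a VPN interface in pf could look like this (the interface name wg0 is an assumption, adapt it to your ruleset):

pass in on wg0 inet proto udp to port { 517 518 }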
The usage is simple, if you want alice and bob to talk to each other:
alice types talk bob, and bob must be logged in as well
bob receives a message in their terminal saying that alice wants to talk
bob types talk alice
a terminal UI appears for both users; what they write appears in the top half of the UI, and the messages from the other person appear in the bottom half
This is a bit archaic, but it works fine and comes with the base system. It does the job when you just want to speak to someone.
There are interesting features on OpenBSD that I wanted to highlight a bit, maybe you will find them useful. If you know cool features that could be added to this list, please reach me!
I've been doing a simple speed test using dd to measure the write speed compare to a tmpfs.
The vramfs mount point was able to achieve 971 MB/s, it was CPU bound by the FUSE program because FUSE isn't very efficient compared to a kernel module handling a file system.
t470 /mnt/vram # env LC_ALL=C dd if=/dev/zero of=here.disk bs=64k count=30000
30000+0 records in
30000+0 records out
1966080000 bytes (2.0 GB, 1.8 GiB) copied, 2.02388 s, 971 MB/s
Meanwhile, the good old tmpfs reached 3.2 GB/s without using much CPU, this is a clear winner.
t470 /mnt/tmpfs # env LC_ALL=C dd if=/dev/zero of=here.disk bs=64k count=30000
30000+0 records in
30000+0 records out
1966080000 bytes (2.0 GB, 1.8 GiB) copied, 0.611312 s, 3.2 GB/s
I tried to use the vram mount point as a temporary directory for portage (the Gentoo tool building packages), but it failed with an error. After this error, I had to umount and recreate the mount point, otherwise I was left with an irremovable directory. There are bugs in vramfs, no doubt here :-)
Arch Linux wiki has a guide explaining how to use vramfs to store a swap file, but it seems to be risky for the system stability.
It's pretty cool to know that on Linux you can do almost what you want, even store data in your GPU memory.
However, I'm still trying to figure out a real use case for vramfs, except that it's pretty cool and impressive. If you find a useful situation, please let me know.
This guide explains how to install the PHP web service Shaarli on OpenBSD.
Shaarli is a bookmarking service and RSS feed reader: you can easily add new links, associate text / tags with them, and share them with others or keep each entry private if you prefer.
Extract the archive and move the directory Shaarli in /var/www/.
Change the owner of the directories Shaarli needs to write to, so they belong to the user www; it's required for Shaarli to work properly. For security's sake, don't chown all of Shaarli's files to www, it's safer when a program can't modify itself.
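For instance, assuming the writable directories listed in Shaarli's documentation (double-check the list for your Shaarli version):

cd /var/www/Shaarli
chown -R www:www cache data pagecache tmp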
By default, on OpenBSD the PHP modules aren't enabled, you can do it with:
for i in gd curl intl opcache; do ln -s "/etc/php-8.3.sample/${i}.ini" /etc/php-8.3/ ; done
Now, enable and start PHP service:
rcctl enable php83_fpm
rcctl start php83_fpm
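The web server side isn't detailed here; if you use the base httpd(8), a minimal sketch could look like the following (the server name is a placeholder, and the php-fpm socket path should be verified against the php pkg-readme):

server "shaarli.example.org" {
        listen on * port 80
        root "/Shaarli"
        directory index index.php
        location "*.php*" {
                fastcgi socket "/run/php-fpm.sock"
        }
}

Then enable and start it with rcctl enable httpd and rcctl start httpd.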
If you want Shaarli to be able to do outgoing connections to fetch remote content, you need to make some changes in the chroot directory to make it work, everything is explained in the file /usr/local/share/doc/pkg-readmes/php-INSTALLED.VERSION.
Now you should have a working Shaarli upon opening http://YOUR_HOSTNAME_HERE/index.php/, all lights should be green, and you are now able to configure the instance as you wish.
Shaarli is a really handy piece of software, especially for active RSS readers who may have a huge stream of news to read. What's cool is the share service, and you may allow some people to subscribe to your own feed.
We need some kind of "not AI powered" label :D I'll add something like that to my template.
There is one exception as I wrote one blog post about machine learning, and obviously the pictures in it were generated/colored by a program to demonstrate the tools.
I have no incentive to add an AI to the writing process: I make mistakes, I may write poor sentences, and I have my own style, for better or worse. I think throwing an AI into this would just make the result bland.
For a pretty similar reason, I keep my custom website generator and template instead of using a program like Hugo with an awesome template because I need to have this "authentic" feeling for my blog.
This blog is my own space, it represents who I am.
It's hard to stay confident in your own skills when you feel you accomplished nothing in your life or career. I would recommend everyone keep a very detailed CV/Résumé up to date, with all the projects you worked on. When you feel in doubt about your own skills, just check this list, and you will certainly be surprised by what you achieved in the past.
If you are a developer, looking at your project histories in git/hg/svn/whatever is also a nice way to review your own past work. There are dedicated git tools to produce such nice reports, even across multiple repositories.
When I look back at my blog index, I realize how many things I learned. I forgot about most of the previous content and topics I wrote about! This is my own list, it's really helpful to me.
It seems imposter syndrome exists because it's hard to differentiate "low value general knowledge" from what we know and should know as technicians, the knowledge that makes us professionals in our job. In IT it's really hard to evaluate a work/product/service, compared to, let's say, a sculpted piece of wood. I'm not saying sculpting wood is easy, but at least it doesn't require an audit by a dedicated team to know if it was nicely done in the state of the art.
My confidence got better when I started spending time with my new colleagues after joining a new company. Seeing how others worked helped me evaluate my own work, and it was also the opportunity to ask them to review my work and methods. Honest feedback from a competent person is invaluable.
By spending more time with my colleagues, I was finally able to establish some kind of reference to auto-evaluate my work more accurately.
Moving to a new job is also the opportunity to meet real slackers with poor skills, and in most cases you will notice they don't even care. After all, if they got a job and their boss is happy, your own work will compare favorably, so there is no reason not to stay confident in yourself.
This seems boring and obvious, but you need to stay confident in yourself to start building some confidence. If you succeeded in a project in the past, there is no reason for you to fail in another project later.
Being able to overcome failures is an important part of the process. It's common for anyone to fail at something, but instead of lamenting about it, see it as the opportunity to improve yourself for the next time. There is a lot more to learn from failures than from successes.
When you see someone's work/article/video, you may be impressed by it and feel bad that you would never be able to achieve something similar because it's "too hard". But did you ever think that you only saw the tip of the iceberg, and that you dismissed all the hard work and research done in order to succeed?
For instance, maybe that person spent hundreds of hours making a two-minute video: the result looks incredible to you, and it's only two minutes, so you immediately think "I would never be able to do this myself", but what if you had hundreds of hours and the skills to do it? Could you?
If you ever feel bad listening to someone's story that makes you feel incompetent and useless, you could think: "do they know how to do [this], and [this]?" ([this] being something you know how to do).
Yes, they are a compiler expert, but do they know how to cook like I do? Do they know how to change a car wheel? Do they know how to grow vegetables?
I'm not a psychologist, a personal coach or an imposter syndrome specialist. But I've been able to work around it, and I'm now gradually getting rid of it for good. It's really refreshing!
It's important to not feel over-confident in the process, there is a balance to keep, but don't think about it too early ;)
Have fun, you are awesome in your own way, like everyone else!
2023 was a special year for me: I was terribly sick in early January, and this motivated me to change a lot of things in my life. I stuck to this idea the whole year, and I'm still looking for things to change in my life.
I left the company I was working for, and started to work as a freelance DevSecOps/DevOps. The word "Sysadmin" would be the best job title for me, but people like buzzwords and nobody talks about system administrators anymore.
Since the end of the year, I also work as a technical writer for a VPN provider (that I consider ethical), and it makes me think that in the future, I may have a career shift to being a technical writer "only".
Since 2023, I have a page on Patreon allowing my readers to support me financially, in exchange for a few days of early access to most blog posts. This is a way to reward my supporters without being a loss for all other readers. Patreon helps me a lot as it allows me to plan on a monthly income and spend more time on my blog or contributing to open source projects. I also added other payment options, as some wanted to support me using more free (as in freedom) methods like liberapay, BTC or XMR.
The blog also received a few technical changes, mostly in the HTML rendering like captions on pictures or headers numbering. I'm quite pleased with the result right now, and the use of GemText (from Gemini) markup was a right choice a few years ago as it gives a simple structure enforcing clarity (of course it's bad if you need a complex layout).
The content finally got a proper license: CC-BY-4.0, I'm an open source person, but my own content was under no license, what a shame for all this time...
Last year, I started using Qubes OS as it's the best operating system for my needs (a blog post will cover this "soon") and I got involved into the community and in testing the 4.2 release that got out a few weeks ago by now.
I'm still contributing to OpenBSD, but not as much as I want, simply because of lack of hardware (and a bit of time), but this is now solved after my deal with NovaCustom. I still maintain the packages updates build cluster.
In 2023, I entirely dropped NixOS. I preferred not to write a blog post about it to avoid a flame war, but maybe I'll write one eventually. In a few words, I didn't like the governance of the project: it seems company driven to me, and from my point of view that's harmful for the open source project. The technology is awesome, but the "core team" struggles to get somewhere. I'll investigate Guix more, as I always enjoyed this project, and they proved to be a reliable and solid project able to maintain their pace over time.
It's my favorite pet project, even though it's a lot of work to publish a single issue.
Working with Prahou on the special Halloween issue was really fun: instead of writing the content, I had to give some direction to keep the issue on rails as a Webzine issue, while being able to enjoy it like any other reader since I didn't make the content itself.
For no particular reason, I decided to experiment with a vegetarian diet until the end of February (I still eat eggs, milk, butter, cheese and, rarely, fish). I'm bad at cooking and I don't enjoy it much, mostly because I have no idea what to cook. This forces me to learn about new food and recipes I was not aware of. Buying a recipe book is definitely a must for this :-). I never really enjoyed meat, and it's possible that I'll keep the vegetarian diet for a longer time.
I'd like to thank all my readers. I regularly receive emails about what you enjoyed, typo reports, or suggestions to improve the content; this really drives me to keep writing.
Hello! Today, I present you a quite special blog post, resulting from a partnership with the PC manufacturer NovaCustom. I offered to write an honest review of their product and share my feedback as a user, in exchange for a NV41 laptop. This is an exceptional situation, and I insist that it's not a sponsorship: I actually needed a laptop for my freelance work, and it turns out they agreed. In our agreement, I added that I would return the laptop if I didn't like it, as I don't want to generate electronic waste and waste the company's money for nothing.
I have no plans to turn my blog into an advertisement platform and do this on a regular basis. Stars aligned well here, NovaCustom is making the only modern laptop Qubes OS certified, and the CEO is a very open source friendly person.
In this blog post, I'll share my experience using a NV41 laptop from NovaCustom, I tried many operating systems on it for a while, run some benchmarks, and ultimately used Qubes OS on it for a month and half for my freelance work.
This is a 14-inch laptop, the best form factor in my opinion for being comfortable when used for a long time while being easy to carry.
It looks great with its metal look with blueish reflection and the engraved logo "NV" on the cover (logo can be customized).
The frame feels solid and high-end, I'm not afraid to carry it or manipulate it. Compared to my ThinkPad T470, that's a change, I always fear to press its plastic frame too much when carrying with a single hand.
The power button is on the right side, which is quite unusual, but it looks great; there are LEDs around the power plug near the power button that indicate the state of the system (running, off, sleeping) and whether the battery is running low or charging.
It's running the open-source Firmware Dasharo coreboot, and optionally the security oriented firmware Heads can be installed.
The machine came in a box containing a box containing the actual box with the laptop inside, it was greatly packaged.
The laptop screen had a removable sleeve that can be reused, I appreciated this as it's smart because it's possible to put it back in case you don't use the laptop for a long time or want to sell it later.
The box contained the laptop, the power supply and the power plug, the full length of the power supply is 2 meters which is great, I hate laptops chargers that only have 1 meter of cable.
The default wireless card is an Intel AX-200/201 compatible with Wi-Fi 6 and Bluetooth 5.2, but I received the blob-free card which was convenient for most operating systems as it doesn't need a firmware (works out of the box on Guix for instance).
There are options to remove the webcam or add a slider to it, a screen privacy filter or secure screws+tape for the packaging to be sure the laptop hasn't been intercepted during transit.
You can also choose the keyboard layout from a large list, or even have your own layout.
Kudos to NovaCustom for guaranteeing the sale of replacement parts for at least 7 years after you buy a laptop from them! They also provide a PDF with full details about the internals.
This is my very first Hybrid CPU, it has 4 Performance cores capable of hyperthreading, and 8 Efficient cores that should draw less power at the expense of being slower.
I made a benchmark, only on Qubes OS, to compare the different cores to a Ryzen 5 5600X and my T470 i5-7300U.
If your operating system doesn't know how to make use of E/P cores (Linux does, OpenBSD and FreeBSD don't), it will use them as if they were identical, so no worries here. However, the performance and battery savings won't be optimal because the system won't balance the load to the right cores.
TL;DR: the P cores compete with my desktop Ryzen 5 5600X! And the E cores are faster than the i5-7300U! Linux and Xen (in Qubes OS) do a great job of balancing the workload to the right place, so you don't have to worry about pinning a specific task to the P or E core pool.
I think this deserves an entry because it's a plague on many modern computers. If you don't know about it, it's an electric noise that happens under certain conditions. On my T470, it's when charging the battery.
I've been able to get some coil whine noise, only if I forced the CPU frequency to the maximum in the operating system, instead of letting the computer scaling the frequency. This resulted in no performance improvement and some coil whine noise.
In my daily "normal" use with Linux or Qubes OS, I never heard a coil whine. But on OpenBSD for which the frequency management is still not good with these modern CPUs (intel p-state support isn't great) there is a constant noise. However, using obsdfreqd reduced the noise to almost nothing, but still appeared a bit on CPU load.
There is a dedicated forum topic where coil whine on this laptop was discussed; a fix was provided by NovaCustom using heat pads (sent for free to their customers) placed at a specific spot. I don't think this should be required unless your operating system has poor support for frequency scaling.
The screen coloring is excellent, which is expected as it covers 98% of sRGB palette, it's really bright, and I rarely turn the brightness more than 50%. I didn't try to use it outdoor, but the brightness at full level should allow reading the screen.
However, it has noticeable ghosting, which makes it annoying for playing video games (not really the purpose of this model though), or if you are really sensitive to it. I'm used to a 144 Hz display on my desktop and I became really sensitive to refresh rate. However, I have to admit the ghosting isn't really annoying for productivity work, development or browsing the web. Watching a video is fine too.
One slightly annoying limitation is that it's not possible to open the screen beyond a 140° angle. This sounds reasonable, but I got used to my T470 screen opening to ~180°. It's not a real issue, but if you have a weird setup in which you store your laptop vertically against your desk AND with the screen opened, you won't be able to use the screen.
I've been surprised by the speakers, the audio quality is good up to ~80% of the max volume, but then the quality drops when you set it too high.
I have no way to measure it, but the speakers appear to be quite loud compared to my other laptops when set to 100%, I don't recommend doing it though due to quality drop, but it can be handy sometimes.
The headphones port works fine, there are no noises, and it's able to drive my DT 770 Pro 80 ohm.
I've been able to figure out an equalizer setting that improves the audio quite a bit (that's subjective). I'm absolutely not an audio expert, but it sounded a lot better for pop, rock, metal or piano.
31 Hz: 0 dB
63 Hz: 0 dB
125 Hz: 0 dB
250 Hz: 0 dB
500 Hz: -4 dB
1 kHz: -5 dB
2 kHz: -8 dB
4 kHz: -3 dB
8 kHz: -3 dB
16 kHz: +2 dB
The idea is to lower the trebles instead of pushing the bass, which quickly saturates. Depending on what you listen to and your tastes, you could try +1 or +2 dB on the first four bands, but it may produce saturated sound.
I think the cooling system is one of the best parts of the laptop: it's always running at 10% of its speed and is inaudible.
Under a huge load, the fan can be heard, but it's still less loud than my idling silent desktop...
There is a special key combination (Fn+1) that triggers the turbo fan mode, forcing the fans to run at 100%. It is recommended if the laptop has to run at full CPU load 24/7 or for a very long period of time; however, this is as loud as a 1U rack server! For a more relatable comparison, let's say it is as annoying as a running microwave oven.
I was surprised that the laptop never burned my knees, although under heavy load for 30 minutes it felt a bit too hot to keep it on my bare skin without fabric between, that's a genuine lap-top laptop, compatible with short skirts :D.
The keyboard isn't bad, but not good either. Typing on it is pleasant, but it's no match against my mechanical keyboards. The touch is harder than on my Lenovo T470 laptop, I think it feels like most modern laptop keyboards.
Check the layout for the keys like "home", "end", "page up/down", on mine they are tiny keys near the arrows, this may not be to your taste.
Typing is quite silent, and there are 5 levels of keyboard backlight; I don't really like this feature, so I turned it off, but it's there if you like it.
There are NO indicators for the status of caps lock or num lock (neither for scroll lock, but do people really use it?), this can be annoying for some users.
The touchpad may be a no-go for many, there are no extra physical buttons but you can physically click on the bottom area to make/hold a click. It also features no trackpoint (the little joystick in the middle of the keyboard).
However, it has a large surface and can make use of multitouch clicks. While I was annoyed at first because I was used to ThinkPad's extra physical buttons, over time I got used to multitouch click (click is different depending on the number of fingers used), or the "split-area" click, where a click in a bottom left does a left click, in the middle it does a middle click, and in the bottom right it does a right click.
It reacts well to movements and clicks and does the job, it's not the greatest touchpad I ever used, but it's good enough.
Unfortunately, it's not possible for NovaCustom to propose a variant touchpad featuring extra physical buttons.
Nothing special to say about it, it's like most laptop webcams, it has a narrow angle and the image quality is good enough to show your face during VoIP meetings.
I tested the battery using different operating systems (OpenBSD, Qubes OS, Fedora, Ubuntu) and different methods, there are more details later in the text, but long story short, you can expect the following:
battery life when idling: 6h00
battery life with normal usage: 3h00-5h00 for viewing videos, browsing the web, playing emulated games, code development and some compilation
battery life in continuous heavy use: 2h00 (I accidentally played a long video with no hardware-acceleration, it was using 500% CPU)
On the I/O, the laptop is well-equipped. I appreciated seeing an Ethernet port on a modern laptop.
On the left side:
1x Thunderbolt 4 / USB-c (supports external screen and charging)
1x USB
anti-theft system
Ethernet port
Multi-card reader (a SD card plugged in doesn't go completely inside, so it's not practical for a persistent extra storage)
On the right side:
1x USB-c (supports external screen)
1x headphone
Charge port
Power button and two status LEDs
1x HDMI
1x USB
The rear of the laptop is fully used by the cooling system, and there is nothing on the front (thankfully! I hate connecting headphones on the front side).
The laptop ships with the Dasharo coreboot firmware (that's the correct name, on modern devices, for what we used to call the BIOS), an open-source firmware that lets you manage your own Secure Boot keys and disable some Intel features like the "ME".
I guess their website will be a better place to understand what it's doing compared to a proprietary firmware.
NovaCustom is building laptops based on Clevo (a manufacturer doing high-end laptop frames, but they rarely sell directly) while ensuring compatibility with Linux systems, especially Qubes OS for this specific model as it's certified (it guarantees the laptop and all its features will work correctly).
They contribute to dasharo development for their own laptops.
They ship their products worldwide, and from what I heard from some users, the customer support is quite responsive.
Fedora Linux support (tested with Fedora 39) was excellent, GNOME worked fine. The Wi-Fi network worked immediately even during the installer, Bluetooth was working as well with my few devices. Changing the screen brightness from the GNOME panel was working. However, after a Dasharo update, the keyboard slider in GNOME stopped working, it's a known bug that also affects System76 laptops if I've read correctly, this may be an issue with the Linux driver itself.
The touchpad was working on multitouch out of the box, suspending and resuming the laptop never produced any issue.
Enabling Secure Boot worked out of the box with Fedora, which is quite enjoyable.
Ubuntu 23.10 support was excellent as well, it's absolutely identical to the Fedora report above.
Note: if you use VLC from the Snap store, it won't have hardware decoded acceleration and will use a lot of CPU (and draw battery, and waste watts for nothing), I guess it's an Ubuntu issue here. VLC from Flatpak worked fine, as always.
Alpine Linux support (tested with Alpine 3.18.4) was excellent, I installed GNOME and everything worked out of the box. The Atheros card worked without firmware (this is expected for a blob free device), CPU scheduling was correctly handled for Efficient/Performance cores as the provided kernel is quite recent.
The touchpad default behavior was to click left/right/middle depending on the number of fingers used to click, suspend and resume worked fine, playing video games was also easy thanks to flatpak and Steam.
It's possible to enable Secure Boot by generating your own keys.
Guix support is mixed. I've been able to install it with no issue, thanks to the blob-free atheros network interface, it worked without having to use guix-nonfree repository (that contains firmware).
However, I was surprised to notice that graphical acceleration wasn't working; it seems Intel Xe GPUs aren't blob free. This only means you can't play video games and that any kind of GPU-related encoding/decoding won't work, but it didn't prevent GNOME from working fine.
Suspend and resume was OK, and the touchpad worked out-of-the-box in multi-tap mode.
Secure Boot didn't work, and I have no idea how a Secure Boot setup with your own keys would look like on Guix, but it's certainly achievable with enough Grub-foo.
Trisquel is a 100% libre GNU/Linux distribution, which means it doesn't provide proprietary software or drivers, nor any device firmware.
I've been able to install Trisquel and use it, the Wi-Fi was working out of the box because of the blob-free Atheros card.
The main components of the system: CPU / Memory / Storage were correctly detected, the default kernel isn't too old, and it was able to make use of the Efficient/Performance core of the CPU.
When not using the laptop, I was able to suspend it to reduce the battery usage, and then resume instantly the session when I needed, this worked flawlessly.
The touchpad worked great using the "3 zones" mode, in which you tap the bottom left/center/right of the touchpad to make a left/middle/right click; this is actually as convenient as using 1, 2 or 3 fingers depending on the click you want to make, and this behavior can be configured either way.
Sound was working out of the box, the audio jack is also working fine when plugging in headphones.
There is one issue with the webcam, when trying to use it, X crashes instantly. This may be an issue in Trisquel software stack because it works fine on other OS.
A major issue right now is the lack of graphical hardware acceleration, I'm not sure if it's due to the i7-1260P integrated GPU needing a proprietary firmware or if the linux-libre kernel didn't catch up with this GPU yet.
Qubes OS support (tested with 4.1, 4.2-RC2 to RC5 and 4.2) is excellent, this is exactly what I expected for a Qubes OS certified laptop (the only modern and powerful certified laptop as of January 2024!).
Qubes OS is my main OS as I use it for writing this blog, for work (freelancer with different clients) and general use except gaming, so I needed a reliable system that would be fast, with a pretty good battery life.
So far, I never experienced issues except one related to the Atheros Wi-Fi card (this is not the stock Wi-Fi device): 1 time out of 10 when I suspend and resume, the card is missing, and I need to restart the qube sys-net to have it again. I didn't try with the latest Dasharo update though, it may be solved.
Watching 1080p x265 10-bit encoded videos is smooth and only draws ~40% of a CPU, without any kind of GPU accelerated decoding.
The battery life when using the system to write emails, browse the Internet and look at some videos was of 3 hours, if I only do stuff in LibreOffice offline it lasts 5h30.
I'm able to have smooth videoconferences with the integrated webcam and a USB headset, this kind of task may be the most CPU consuming popular job that Qubes OS need, and it worked well.
The 64 GB are very appreciated, I "only" have 32 GB on my desktop computer, but sometimes it lacks memory... 64 GB allows to not ever think about memory anymore.
The touchpad is working fine, by default on the split-area behavior (left/middle/right click depending on the touchpad area you click on).
There is a single USB controller that drives the webcam, the card reader and the USB ports, including a USB-c dock connected to either the thunderbolt or the USB-c port. The thunderbolt device is on a separate controller, but if you attach it to a qube (other than sys-usb), you lose all USB connectivity from a dock connected to it (the other plain USB-c port still works). The qube sys-usb isn't even required to run if you don't use any USB devices (this saves many headaches and annoying moments).
Connecting a USB-c dock to the thunderbolt port gives USB passthrough with sys-usb, an additional ethernet port and a working external screen with sound; it's also capable of charging the computer. The plain USB-c port can only carry USB devices or the integrated ethernet port of my dock; it should be able to drive a screen, but I guess it's not working on Qubes OS. I didn't try adding more than one screen on either port, I guess it should work on the thunderbolt one.
I tried OpenBSD and FreeBSD with the laptop. I always have bad luck with NetBSD, so I preferred to not try it, and DragonFly BSD support should be pretty close to FreeBSD for which it didn't work well.
I tried OpenBSD 7.4 and -current, everything went really well except the Atheros WiFi card that isn't supported, but this was to be expected. If you want the NV41 with OpenBSD, you need to take the Intel AX-200/201 which is supported by the iwx driver.
Suspend and resume works fine, the touchpad is using the "3 zones" behavior by default where you need to tap left/center/right bottom to make a left/middle/right click. The webcam and sound card were working fine too.
The GPU is fully supported, you can use it for 3D rendering: I've been able to play a PSP game using PPSSPP emulator. OpenBSD doesn't support hardware accelerated video encoding/decoding at all, so I didn't test it.
I installed FreeBSD 14.0 RC4 with ZFS on root and full disk encryption, the process went fine, I had Wi-Fi at the installer step (thanks to the blob free Atheros card).
However, once I booted into the system, I didn't manage to get X to run: the GPU isn't supported yet, and using the VESA display didn't work for me. Suspend and resume didn't work either.
I gave another try with GhostBSD 23.10.1, hoping I had done something wrong on FreeBSD 14 RC4 like a misconfiguration, as I never had a good experience setting up FreeBSD on a desktop. But GhostBSD failed to start X and kept displaying its logo on screen; only booting in safe mode allowed me to figure out what was wrong.
I was really surprised that the hardware is still "too new" for FreeBSD while OpenBSD support is almost excellent.
I tried the freshly released OpenIndiana Hipster 2023.10 liveUSB.
After letting the bootloader display and start the boot process, the init process seemed stuck and was printing errors about CPU every minute. I haven't been able to get past this step.
I had fun measuring a lot of things like power usage at the outlet, battery duration with many workloads and gaming FPS (Frames per Second, 30 is okayish depending on people, 40 is acceptable, 60 is perfect as it's the refresh rate of the screen).
I measured the power usage in watts using a watt-o-meter in different situations:
power supply connected, but not to the laptop: 0 watt (some power supplies draw a few watts doing nothing... hello Nintendo Switch with its 2.1 watts!)
charging, sleeping: 30 watts
charging, idling: 37 watts
charging and heavy use: 79 watts
connected to AC (not charging), sleeping: 1 watt
connected to AC (not charging), idling, screen at full brightness: 17 watts
connected to AC (not charging), downloading a file over Wi-Fi, screen at full brightness: 22 watts
This is actually good in my opinion; as a comparison point, a standard 24-inch monitor alone usually draws around 40 watts.
The power consumption of the laptop itself is within the range of other laptops. I was happy to see the power supply draws nothing when connected to AC but not to the computer, and that the laptop only draws 1 watt when sleeping on AC; I have another laptop drawing 7 watts in the same state!
One method was to play a 2160p x265 10 bits encoded video using VLC, 1h39 long, with full brightness and no network.
With hardware accelerated decoding support: 33% of the battery was used, so the battery life would theoretically be almost 6 hours (299 minutes) while playing a video at full brightness
Without hardware acceleration: 90% of the battery was used (VLC was using 480% of the CPU, but I didn't notice it as the fans were too silent!), this would mean a battery life of 1h49 (110 minutes) using the computer under heavy load
The other method was to play the video game "Risk of Rain Returns" with a USB PS5 controller, at full brightness, for a given duration (measured over 20 minutes).
Risk of Rain Returns: 15% of battery used in 20 minutes, which means I should have been able to play 2h13 (133 minutes) before having to charge.
I did play a bit on the laptop on Linux using Steam on Flatpak. I tested it on Fedora 39, Ubuntu 23.10 and Alpine Linux 3.18.3, results were identical.
A big surprise while playing was that the fans remained almost silent, they were spinning faster than usual of course, but that didn't require me to increase the moderate volume I used in my gaming session.
Baldur's Gate 3: Playable at stable 30 FPS with all settings to low and FSR2.2 enabled in ultra performance mode
Counter Strike 2: Stable 60 FPS in 1600x900 with all settings set to minimum
Spin Rhythm XD: Stable at 60 FPS
Rain world: Stable at 60 FPS
HELLDIVERS: Stable at 60 FPS with native resolution and graphical settings set to maximum
BeamNG.drive: Playable with a mix of low/normal settings at 30 FPS
Resident Evil: Solid 45 FPS with the few settings set to maximum, better lock the game at 30 FPS though
Risk of Rain Returns: Stable 60 FPS
Risk of Rain 2: Stable 60 FPS using 1600x900 with almost all settings to lowest
Endless Dungeon: with the lowest settings and resolution lowered to 1600x900, it was able to maintain stable 30 FPS, it was kinda playable
I didn't try using an external GPU on the thunderbolt port, but you can expect way better performance as the games were never CPU bound.
I'm glad I dared to ask NovaCustom about this partnership around the NV41, this is exactly the laptop I needed. It's reliable, has no weird features, is almost fully open source (at least for the software stack?), is very powerful, and I can buy replacement parts for at least 7 years if I break something. It's also SILENT; I despise laptops with a high-pitched fan noise.
I still have to play with Dasharo coreboot, I'm really new to this open-source firmware world, so I have to learn before trying weird and dangerous things (I would like to try Heads for its anti-evil maid features, it should be possible to install it on Dasharo systems "soon").
Writing this blog post was extremely hard, I had to stay mindful that this must be an HONEST and NEUTRAL review: writing about a product you are happy with leads to some excitement moments and one may forget to share some little annoyance because it's "not _that_ bad", but I did my best to stay neutral when writing. And this is the agreement I had with NovaCustom.
Honesty is an important value to me. You, dear readers, certainly trust me to some point, I don't want to lose your trust.
Feel free to pick any tweak you find useful for your use-case, many are certainly overkill for most people, but depending on the context, these changes could make sense for others.
In some cases, it may be desirable to have multi-factor authentication; this means that in order to log in to your system, you would need a TOTP generator (typically a phone app, or a password manager such as KeePassXC) in addition to your regular password.
This would protect against people nearby who may be able to guess your system password.
I already wrote a guide explaining how to add TOTP to an OpenBSD login.
By default, it's good practice to disable all incoming traffic except the responses to established sessions (so servers can reply to your requests). This protects against someone on your local network / VPN accessing network services that would be listening on the network interfaces.
In /etc/pf.conf you would have to replace the default:
block return
pass
By the following:
block all
pass out inet
# allow ICMP because it's useful
pass in proto icmp
Then, reload with pfctl -f /etc/pf.conf, if you ever need to allow a port on the network, add the according rule in the file.
It may be useful and effective to block outbound traffic, but this only works well if you know exactly what you need, because you will have to allow hosts and remote ports manually.
It would protect against a program trying to exfiltrate data using a non-allowed port/host.
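A sketch of what this could look like in /etc/pf.conf, on top of the inbound policy above (the list of allowed destination ports is only an example):

block out
pass out inet proto { tcp udp } to port { 22 53 80 443 }
pass out inet proto icmp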
Disabling network access by default is an important mitigation in my opinion. It protects against any program you run that tries to act rogue: if it can't figure out there is a proxy, it won't be able to connect to the Internet.
This could also save you from mistaken commands that would pull stuff from the network, like pip, npm and co. I think it's always great to have tight control over which programs should do networking and which shouldn't. On Linux this is actually easy to do, but on OpenBSD we can't restrict a single program, so a proxy is the only solution.
This can be done by creating a new user named _proxy (or whatever the name you prefer) using useradd -s /sbin/nologin -m _proxy and adding your SSH key to its authorized_keys file.
Add this rule at the end of your file /etc/pf.conf and then reload with pfctl -f /etc/pf.conf:
block return out proto {tcp udp} user solene
Now, if you want to allow a program to use the network, you need to:
toggle the proxy ON with the command: ssh -N -D 10000 _proxy@localhost which is only possible if your SSH private key is unlocked
Most programs will honor a proxy configured in a variable named http_proxy, https_proxy or all_proxy; however, it's not a good idea to define these variables globally for your user, as that would make it a lot easier for a program to use the proxy automatically, which is against the essence of this proxy.
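Instead, the variable can be set for a single command when you actually want it to go through the tunnel; for example with curl (the port matches the ssh -D command above):

env all_proxy=socks5://localhost:10000 curl https://www.openbsd.org/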
If you didn't configure GNOME proxy settings, Chromium / Ungoogled Chromium won't use a proxy, except if you add a command line parameter --proxy-server=socks5://localhost:10000.
I tried to manually modify the dconf database (where the "GNOME" settings live) to configure the proxy, but I didn't get it to work (it used to work for me, but I can't make it work anymore).
If you use syncthing, you need to proxy all its traffic through the SSH tunnel. This is done by setting the environment variable all_proxy=socks5://localhost:10000 in the program environment.
It's possible to have most of your home directory be a temporary file system living in memory, with a few directories with persistency.
This change would prevent anyone from using temporary files or cache left over from a previous session.
The most efficient method to achieve this is to use the program home-impermanence that I wrote for this use case, it handles a list of files/directories that should be persistent.
If you only want to start fresh using a template (that doesn't evolve on use), you can check the flag -P of mount_mfs which allows populating the fresh memory based file system using an existing directory.
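For reference, a one-shot invocation populating a 1 GB memory file system from a template directory could look like this (the paths and size are arbitrary examples):

mount_mfs -s 1g -P /home/solene.skel swap /home/solene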
Good news! I take the opportunity here to remind you that OpenBSD disables by default the video and audio recording of the various capable devices; instead, they appear to work but record an empty stream of data.
They can be manually enabled by changing the sysctls kern.audio.record or kern.video.record to 1 when you need to use them.
Some laptop manufacturers offer a physical switch to disable the microphone and webcam, so you can be confident about their state (Framework). Some others also allow ordering without any webcam or microphone at all (NovaCustom, Nitropad). Finally, open source firmware like coreboot can offer a setting to disable these peripherals, which should be trustworthy in my opinion.
If you need to protect your system from malicious USB devices (usually in an office environment), you should disable them in the BIOS/Firmware if possible.
If it's not possible, then you could still disable the kernel drivers at boot time using this method.
Create the file /etc/bsd.re-config and add the content to it:
disable usb
disable xhci
This will disable the support for USB 3 and 2 controllers. On a desktop computer, you may want to use PS/2 peripherals in these conditions.
While this one may make you smile, if there is a chance it saves you once, I think it's still a valuable addition to any kind of hardening. A downloaded attachment from an email, or rogue JPG file could still harm your system.
OpenBSD ships a fully working clamav service, don't forget to enable freshclam, the viral database updater.
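On OpenBSD, this boils down to enabling the two rc services shipped with the clamav package (the service names here are from memory, double-check them in the package's pkg-readme):

rcctl enable freshclam
rcctl enable clamd
rcctl start freshclam
rcctl start clamd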
I already covered it in a previous article about anacron, but in my opinion, auto-updating the packages and base system daily on a computer is the minimum that should be done everywhere.
The OpenBSD malloc system allows you to enable some extra checks, like use after free, heap overflow or guard pages, and they can all be enabled at once. This is really efficient for security as most security exploits rely on memory management issues, BUT it may break software that has memory management issues (there is a lot of it). Using this mode will also impact performance negatively, as the system needs to do more checks for each piece of allocated memory.
In order to enable it, add this to /etc/sysctl.conf:
vm.malloc_conf=S
It can be enabled immediately with sysctl vm.malloc_conf=S, and disabled by setting an empty value: sysctl vm.malloc_conf="".
The program ssh and sshd always run with this flag enabled, even if it's disabled system-wide.
It could be possible to have different proxy users, each restricted to the remote ports they are allowed to reach; we could imagine proxies like:
http / https / ftp
ssh only
imap / smtp
etc....
Of course, this is even more tedious than the multipurpose proxy, but at least, it's harder for a program to guess what proxy to use, especially if you don't connect them all at once.
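To give an idea, the client side could define one SOCKS port per purpose in ~/.ssh/config; the host and user names below are hypothetical:
Host proxy-web
    HostName proxy.example.com
    User proxy_web
    DynamicForward 10001

Host proxy-mail
    HostName proxy.example.com
    User proxy_mail
    DynamicForward 10002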
I wrote a bit about this in the past: for command line programs, running them as dedicated local users over SSH makes sense, as long as it's still practical.
But if you need to run graphical programs, this becomes tricky. Using ssh -Y gives the remote program full access to your display server, which can see everything else running on it, not great... You could still rely on ssh -X, which enables the X11 Security extension, but you have to trust the implementation, and it comes with issues like no shared clipboard, poor performance and programs crashing when attempting to access a legitimate resource that is blocked by the security protocol...
In my opinion, the best way to achieve isolation for graphical programs would be to run a dedicated VNC server in the local user, and connect from your own user. This should be better than running on your own X locally.
In a setup where the computer is used by multiple people, full system encryption may be tedious because everyone has to remember the main passphrase, and you have no guarantee one of them won't write it down on a post-it... In that case, it may be better to have a personal encrypted volume for each user.
I don't have an implementation yet, but I got a nice idea. Adding a volume for a user would look like the following:
take a dedicated USB memory stick for this user, this will be used as a "key" to unlock their data directory
overwrite the memory stick with random data
create an empty disk file on the system, it will contain the encrypted virtual disk, use a random part of the USB disk for the passphrase (you will have to write down the length + offset)
write a rc file that looks for the USB disk volume if present, if so, tries to unlock and mount the partition upon boot
This way, you only need to have your USB memory stick plugged in when the system is booting, and it should automatically unlock and mount your personal encrypted volume. Note that if you want to switch user, you would have to reboot to unlock their drive if you don't want to mess with the command line.
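To give a rough idea of the unlock step, here is an untested sketch; the device names (vnd0, sd1, sd2), offsets and paths are all assumptions, and the disk image must already contain a RAID partition for the softraid crypto volume:
#!/bin/sh
# attach the disk image containing the encrypted volume
vnconfig vnd0 /home/alice.img
# derive the passphrase from a random area of the USB memory stick
dd if=/dev/rsd1c bs=1 skip=4096 count=64 2>/dev/null | od -An -tx1 | tr -d ' \n' | \
    bioctl -s -c C -l /dev/vnd0a softraid0
# the decrypted volume appears as a new sd(4) device, for example sd2
mount /dev/sd2a /home/alice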
It's always possible to harden a system more and more, but the balance between real world security and actual usability should always be studied.
No one will use an overly hardened system if they can't work on it efficiently; on the other hand, users expect their system to protect them against the most common threats.
Depending on one's environment and threat model, it's important to configure their system accordingly.
With the recent release of Qubes OS 4.2, I took the opportunity to migrate to a newer laptop (from a Thinkpad T470 to a NovaCustom NV41) so I had to backup all the qubes from the T470 and restore them on the NV41.
The fastest way to proceed is to create the backups on the new laptop directly from the old one, which is quite complicated to achieve due to Qubes OS compartmentalization.
In this guide, I'll share how I created a qube with a network file server to allow one laptop to send the backups to the new laptop.
Of course, this whole process could be avoided by using a NAS or external storage, but they are in my opinion slower than directly transferring the files on the new machine, and you may not want to leave any trace of your backups.
As the new laptop has a very fast NVME disk, I thought it would be nice to use it for saving the backups as it will offload a bit of disk activity for the one doing backups, and it shouldn't be slowed down during the restore process even if it has to write and read the backups at the same time.
The setup consists in creating a dedicated qube on the new laptop offering an NFS v4 share, make the routing at the different levels, and mount this disk in a qube on the old laptop, so the backup could be saved there.
I used a direct Ethernet connection between the two computers, as it allows me to not think much about NFS security.
On the new laptop, create a standalone qube with the name of your choice (I'll refer to it as nfs), the following commands have been tested with the fedora-38-xfce template. Make sure to give it enough storage space for the backup.
First we need to configure the NFS server, we need to install the related package first:
$ sudo dnf install nfs-utils
After this, edit the file /etc/exports to export the path /home/user/backup to other computers, using the following content:
/home/user/backup *(rw,sync)
Create the directory we want to export, and make user the owner of it:
install -d -o user /home/user/backup
Now, enable and start the NFS server so it runs immediately and at boot time:
systemctl enable --now nfs-server
You can verify the service started successfully by using the command systemctl status nfs-server
You can check that the different components of the NFS server are running correctly: if the two following commands produce an output, it means it's working:
ss -lapteun | grep 2049
ss -lapteun | grep 111
Allow the NFS server at the firewall level, run the following commands AND add them at the end of /rw/config/rc.local:
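As a sketch, assuming the default Qubes OS 4.2 nftables ruleset with its qubes table and custom-input chain, the commands could look like this:
nft add rule ip qubes custom-input tcp dport 2049 accept
nft add rule ip qubes custom-input tcp dport 111 accept
nft add rule ip qubes custom-input udp dport 111 accept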
Now that the service is running within the qube, we need to allow the remote computer to reach it; by default, the network path looks like this:
We will make sys-net NAT the UDP port 111 and the TCP port 2049 to sys-firewall, which will NAT them to the nfs qube, which already accepts connections on those ports.
Write the following script inside the sys-net qube of the destination system, make sure to update the value of the variable DESTINATION with sys-firewall's IP address, it can be found by looking at the qube settings.
Write the following script inside the sys-firewall qube of the destination system, make sure to update the value of the variable DESTINATION with nfs's IP address, it can be found by looking at the qube settings.
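As an illustration of what such a NAT script can look like, here is a rough sketch relying on nftables; the chain names assume the default Qubes OS 4.2 ruleset and the DESTINATION value is a placeholder, so adapt it to your actual setup:
#!/bin/sh
DESTINATION=10.137.0.2   # replace with the IP of the next hop (sys-firewall or nfs)

nft add chain ip qubes custom-dnat-nfs "{ type nat hook prerouting priority dstnat; policy accept; }"
nft add rule ip qubes custom-dnat-nfs tcp dport 2049 dnat to "$DESTINATION"
nft add rule ip qubes custom-dnat-nfs udp dport 111 dnat to "$DESTINATION"
nft add rule ip qubes custom-forward tcp dport 2049 accept
nft add rule ip qubes custom-forward udp dport 111 accept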
On the source system, we need to have a running qube that will mount the remote NFS server, this can be a disposable qube, an AppVM qube with temporary changes, a standalone etc...
In this step, you need to configure the network over the direct Ethernet cable so the two systems can speak to each other. Please disconnect from any Wi-Fi network, as you didn't set up any access control for the file transfer (it's encrypted, but still).
You can choose any address as long as the two hosts are in the same subnet, an easy pick could be 192.168.0.2 for the source system, and 192.168.0.3 for the new system.
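For example, in sys-net of each laptop, a quick way to set the address by hand (the interface name ens6 is an assumption, check yours with ip link) would be:
sudo ip address add 192.168.0.2/24 dev ens6   # use 192.168.0.3/24 on the new laptop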
Now, both systems should be able to ping each other, it's time to execute the scripts in sys-firewall and sys-net to enable the routing.
On the "mounting" qube, run the following command as root to mount the remote file system:
mount.nfs4 192.168.0.3:/home/user/backup /mnt
You can verify it worked if the output of df shows a line starting with 192.168.0.3:/home/user/backup, and you can ensure your user can actually write in this remote directory by running touch /mnt/test with the regular user user.
Now, we can start the backup tool to send the backup to the remote storage.
In the source system dom0, run the Qubes OS backup tool, choose the qubes you want to transfer, uncheck "Compress backups" (except if you are tight on storage for the new system) and click on "Next".
In the field "Target qube", select the "mounting qube" and set the path to /mnt/, choose an encryption passphrase and run the backup.
If everything goes well, you should see a new file named qubes-backup-YYYY-MM-DDThhmmss in the directory /home/user/backup/ of the nfs qube.
In the destination system dom0, you can run the Restore backup tool to restore all the qubes; if the old sys-net and sys-firewall have any value to you, you may want to delete yours first, otherwise the restored ones will be renamed.
When you backup and restore dom0, only the directory /home/ is part of the backup, so it's only about the desktop settings themselves and not the Qubes OS system configuration. I actually use versioned files in the salt directories to have reproducible Qubes OS machines because the backups aren't enough.
When you restore dom0, it creates a directory /home/solene/home-restore-YYYY-MM-DDThhmmss on the new dom0 that contains the previous /home/ directory.
Restoring this directory verbatim requires some clever trick as you should not be logged in for the operation!
reboot Qubes OS
don't log in, instead press ctrl+alt+F2 to run commands as the root user in a console (tty)
move the backup outside /home/solene with mv /home/solene/home-restore* /home/
delete your home directory /home/solene with rm -fr /home/solene
put the old backup at the right place with mv /home/home-restore*/dom0-home/solene /home/
press ctrl+alt+F1
log-in as user
Your desktop environment should be like you left it at the time of the backup. If you used specific packages or a specific desktop environment, make sure you also installed the corresponding packages in the new dom0.
Moving my backup from the old system to the new one was pretty straightforward once the NFS server was established, I was able to quickly have a new working computer that looked identical to the previous one, ready to be used.
If you ever required continuous integration pipelines to do some actions in an OpenBSD environment, you certainly figured out that most Git "forges" don't provide OpenBSD as a host environment for the CI.
It turns out that sourcehut is offering many environments, and OpenBSD is one among them, but you can also find Guix, NixOS, NetBSD, FreeBSD or even 9front!
Note that the CI is only available to paid accounts, the minimal fee is "$2/month or $20/year". There are no tiers, so as long as you pay something, you have a paid account. Because sourcehut offers a clutter-free web interface and develops an open source product that is also capable of running OpenBSD in a CI environment, I decided to support them (I really rarely subscribe to any kind of service).
Upon each CI trigger, a new VM is created, it's possible to define the operating system and version you want for the environment, and then what to do in it.
The CI works when you have a "manifest" file at the path .build.yml at the root of your project; it contains all the information about what to do.
Here is a simple example of a manifest file I use to build a website using the static generator hugo, and then push the result on a remote server.
image: openbsd/latest
packages:
  - hugo--
  - rsync--
secrets:
  - f20c67ec-64c2-46a2-a308-6ad929c5d2e7
sources:
  - git@git.sr.ht:~solene/my-project
tasks:
  - init: |
      cd my-project
      git clone https://github.com/adityatelange/hugo-PaperMod themes/PaperMod --depth=1
  - build: |
      cd my-project
      echo 'web.perso.pw ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRj0NK7ZPMQgkgqw8V4JUcoT4GP6CIS2kjutB6xdR1P' | tee -a ~/.ssh/known_hosts
      make
On the example above, we can notice different parts:
image: this tells the manifest which OS to use, openbsd/latest means latest release.
packages: this tells which packages to install, it's OS-agnostic. I use extra dashes because some alternate versions of these packages exist, and I just want the simple flavour of each.
secrets: this tells which secret I want among the secrets stored in sourcehut. This is a dedicated private SSH key in this case.
sources: this tells which sources to clone in the CI. Be careful though, if a repository is private, the CI needs to have an SSH key allowed to access the repository. I spent some time figuring this out the hard way.
tasks: this defines which commands to run, they are grouped in jobs.
If you use SSH, don't forget to either use ssh-keyscan to generate the content for ~/.ssh/known_hosts, or add the known fingerprint manually like I did, which would require an update if the SSH host key changes.
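For instance, the known_hosts line embedded in the manifest above could be generated once with a command like:
$ ssh-keyscan -t ed25519 web.perso.pw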
A cool thing is that when a CI job fails, the environment will continue to live for at least 10 minutes while offering SSH access for debugging purposes.
I finally found a Git forge that is ethical and supportive of niche operating systems. Its interface may be rough, with fewer features, but it loads faster and is easier to understand. The price ($20/year) is higher than the competition (GitHub or GitLab), which can be used for free (up to some point), but they don't offer the same choice of CI environments nor the elegant workflow sourcehut has.
In earlier blog posts, I covered the program Syncthing and its features, then how to self-host a discovery server. I'll finish the series with the syncthing relay server.
The Syncthing relay is the component that receives files from a peer and transmits them to the other one when the two peers can't establish a direct connection. By default Syncthing uses its huge worldwide community pool of relays; however, while the data is encrypted, this leaks some information, and some relays may be malicious and store files until it becomes possible to make use of the content (weakness in an encryption algorithm, better computers, etc…).
Running your own Syncthing relay server will allow you to secure the whole synchronization between peers.
A simple use case for a relay: you have Syncthing configured between a smartphone on its WAN network and a computer behind a NAT, it's unlikely they will be able to communicate to each other directly, they will need a relay to synchronize.
On OpenBSD, you will need the binary strelaysrv provided by the package syncthing.
# pkg_add syncthing
There is no rc file to start the relay as a service on OpenBSD 7.3; I added one to -current, and it will be available from OpenBSD 7.5. Create an rc file /etc/rc.d/syncthing_relay with the following content:
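If you are on 7.3 and need to write it yourself, here is a minimal sketch modeled on the usual package rc scripts (the committed rc file may differ slightly):
#!/bin/ksh

daemon="/usr/local/bin/strelaysrv"
daemon_flags="-pools=''"
daemon_user="_syncthing"

. /etc/rc.d/rc.subr

rc_bg=YES
rc_reload=NO

rc_cmd $1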
The special flag -pools='' is there to NOT join the community pool. If you want to contribute to the pool, remove this flag.
There is nothing else to configure, except enabling the service at boot and starting it; the only extra step is to retrieve a piece of information from its runtime output:
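Something like the following should do, with the relay printing its URI (relay://...) in the debug output, which you will need for the clients:
# rcctl enable syncthing_relay
# rcctl -d start syncthing_relay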
You need to open the port TCP/22067 for the relay to work; in addition, you can open the port TCP/22070, which can be used to display JSON statistics.
To reach the status page, you need to visit the page http://$SERVER_IP:22070/status
On the client Web GUI, click on "Actions" and "Settings" to open the settings panel.
In the "Connections tab", you need to enter the relay URI in the first field "Sync Protocol Listen Addresses", you can add it after default by separating the two values with a comma, that would add your own relay in addition to the community pool. You could entirely replace the value with the relay URI, in such situation, all peers must use the same relay, if they need a relay.
Don't forget to check the option "Enable relaying", otherwise the relay won't be used.
Syncthing is very modular, and it's pretty cool to be able to self-host all of its components separately. In addition, it's also easy to contribute to the community pool if one decides to.
My relay is set up within a VPN where all my networks are connected, so my data are never leaving the VPN.
It's possible to use a shared passphrase to authenticate with the remote relay, this can be useful in the situation where the relay is on a public IP, but you only want the nodes holding the shared secret to be able to use it.
You may already have encountered emails in raw text that contained weird character sequences like =E3 or =09, especially if you work with patch files embedded as text in emails.
There is nothing wrong with the text itself, or with the sender's email client. In fact, this shows the email client is doing the right thing by applying RFC 1521: non-ASCII characters should be escaped in some way in emails.
This is where qprint comes into action: it can be used to encode content using quoted-printable, or to decode such content. The software can be installed on OpenBSD with the package named qprint.
If you search for an email from the OpenBSD mailing list, and display it in raw format, you may encounter this encoding. There isn't much you can do with the file, it's hard to read and can't be used with the program patch.
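Decoding such a file is a one-liner; as a sketch (the file names are examples, and I assume the -d flag of qprint for decoding):
$ qprint -d raw_email.txt > readable.txt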
In a previous article, I covered the software Syncthing and mentioned a specific feature named "discovery server".
The discovery server is used to help clients find and connect to each other through NATs; this is NOT a relay server (which is a different service) acting as a proxy between clients.
A motivation to run your own discovery server(s) would be for security, privacy or performance reasons.
security: using global servers with the software synchronizing your data can be dangerous if a remote exploit is found in the protocol, running your own server will reduce the risks
privacy: the global servers know a lot about your client if you sync online: time of activity, IP address, number of remote nodes, the ID of everyone involved etc...
performance: in my specific use case, I have two Qubes OS computers with multiple Syncthing instances inside; they can't see each other as they are in separate networks, and I don't want the data to go through my slow ADSL to sync locally...
Let's see how to install your own Syncthing discovery daemon on OpenBSD.
On OpenBSD, the binary we need is provided by syncthing package.
# pkg_add syncthing
The discovery service is provided by the binary stdiscosrv; you need to create a service file to enable it at boot. We can use the syncthing service file as a template for the new one. In OpenBSD-current, and from OpenBSD 7.5 onward, the rc file is installed with the package.
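A minimal sketch of such an rc file, adapted from the usual package rc script template (the packaged version may differ):
#!/bin/ksh

daemon="/usr/local/bin/stdiscosrv"
daemon_user="_syncthing"

. /etc/rc.d/rc.subr

rc_bg=YES
rc_reload=NO

rc_cmd $1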
You created a service named syncthing_discovery, it's time to enable and start it.
# rcctl enable syncthing_discovery
You need to retrieve the line "Server device ID is XXXX-XXXX......" from the output, and keep the ID (which is the XXXX-XXXX-XXXX-XXXX part) because we will need it later. We will start the service in debug mode to display the binary output in the terminal.
# rcctl -d start syncthing_discovery
Make sure your firewall is correctly configured to let pass incoming connections on port TCP/8443 used by the discovery daemon.
On the client Web GUI, click on "Actions" and "Settings" to open the settings panel.
In the "Connections tab", you need to change the value of "Global Discovery servers" from "Default" to https://IP:8443/?id=ID where IP is the IP address where the discovery daemon is running, and ID is the value retrieved at the previous step when running the daemon.
Depending on your use case, you may want to have the global discovery server plus yours, it's possible to use multiple servers, in which case you would use the value default,https://IP:8443/?id=ID.
If you replace the default discovery server with your own, make sure all the peers can reach it, otherwise your Syncthing clients may not be able to connect to each other.
By default, the discovery daemon will generate a self-signed certificate; you could use a Let's Encrypt certificate if you prefer.
There are some other options like prometheus export for getting metrics or changing the connection port, you will find all the extra options in the documentation / man page.
As stated earlier, Syncthing is a network daemon that synchronizes files between computers/phones. Each Syncthing instance must know the other instances' IDs to trust them and find them over the network. The transfers are encrypted and efficient, and the storage itself can be encrypted.
Some Syncthing vocabulary:
a folder: a local directory that is shared with a remote device,
a remote device: a remote computer running Syncthing, each of them has a unique ID and a user-defined name, and you can choose which shared folders you want to synchronize with them
an item: this word appears when syncing two remotes, an item can be either a directory or a file that isn't synchronized yet
a discovery server: a server which helps remotes finding known remotes over the Internet, or in the worst case scenario, relays data from a remote to another if they can't communicate directly
When you need to add a new remote, you need to add the remote's ID on your Syncthing and trust your ID on the remote one. The ID is a human-readable representation of the Syncthing instance certificate fingerprint. When you exchange IDs, you are basically asked to review each certificate and allow each instance to trust the other.
All network transfers occurring between two Syncthing instances are encrypted using TLS; as the remote certificate can be checked, the incoming data can be verified for integrity and authenticated.
I guess this is Syncthing's killer feature. Connecting two remotes is very easy, and file transfers between them can bypass firewalls and NATs.
This works because Syncthing offers a default discovery server which has two purposes:
if the two servers could potentially communicate to each other but are behind NATs, it does what we call "hole punching" to establish a connection between the two remotes and allow them to transfer directly from one to the other
if the two servers can't communicate to each other, the discovery server acts as a relay for the data
The file transfer is still encrypted, but having a third party server involved may raise privacy issues, and security risks if a vulnerability can be exploited.
My next blog post will show how to self-host your own Syncthing relay, for better privacy and even more complicated setups!
Note that the discovery server or the relaying can be disabled! You could also build a mesh VPN and run Syncthing on each node without using any relay or discovery server.
On a given Syncthing instance, you can enable per shared folder a retention policy, aka file versioning in the interface.
Basically, if a file is modified / removed in the share by a remote, the local instance can keep a hidden copy for a while.
There are different versioning modes, from a simple "trash bin" style keeping the files for n days, to more elaborate policies like you could have in backup tools.
For each share, it's possible to write an exclusion filter, this allows you to either discard sync changes for some pattern (like excluding vim swap files) or entire directories if you don't want to retrieve all the shared folder.
The filter works both ways: if you accept a remote, you could write a filter before starting the synchronization and exclude some huge directories you may not want locally. But this also allows preventing a directory from being sent to the remotes, like a temporary directory for instance.
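As an illustration, a small .stignore file could look like this (the patterns are just examples):
// don't sync vim swap files
*.swp
// don't retrieve this huge directory locally
/Videos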
This is a topic I covered before with a very specific use case: syncing only a single file in a directory.
A pretty cool feature I found recently was the support for encrypted shared folders per remote. I'm using syncthing to keep my KeepassXC databases synchronized between my computers.
As I don't always have at least two of my computers turned ON at the same time, they can't always synchronize directly with each other, so I use a remote dedicated server as a buffer to hold the files, Syncthing encryption is activated for this remote, both my computers can exchange data with it, but on the server itself you can't get my KeepassXC databases.
This is also pretty cool as it doesn't leave any readable data on the storage drive if you use 3rd party systems.
Taking the opportunity here, KeepassXC has a cool feature that allows you to add a binary file as a key in addition to a password / FIDO key. If this binary file isn't part of the synchronized directory, even someone who could access your KeepassXC database and steal your password shouldn't be able to use it.
When Syncthing scans a directory, it will hash all the files into chunks and synchronize these chunks with the other remotes; this is basically how BitTorrent works too.
This may sound boring, but basically, this allows Syncthing to move or rename files on a remote instead of transferring the data again when you rename / move files in a local shared directory. Indeed, only the list of changed paths and the chunks used by the files are sent; as the files already exist on the remote, the data chunks don't have to be transferred again.
Note that this doesn't work for encrypted remotes, as the chunks contain some path information: once encrypted, the same file with different paths will look like two different encrypted chunks.
The Syncthing GUI allows you to define inbound or outbound bandwidth limits, either globally or per remote. If, like me, you have a slow ADSL line with slow upload, you may want to limit the bandwidth used to send data to the non-local remotes.
This may sound more niche, but it's important for some users: Syncthing can synchronize file permissions, ownership or even extended attributes. This is not enabled by default as Syncthing requires elevated privileges (typically running as root) to make it work.
Syncthing is a Go program, it's a small binary with no dependencies, it's quite portable and runs on Linux, all the BSD, Android, Windows, macOS etc... There is nothing worse than a synchronization utility that can't be installed on a specific computer...
I really love this software, especially since I figured the file versioning and the encrypted remotes, now I don't fear conflicts or lost files anymore when syncing my files between computers.
My computers also use a local discovery server that allows my Qubes OS to be kept in sync together over the LAN.
When you install Syncthing on your system, you can enable the service for your user; this will make Syncthing start properly when you log in:
As Syncthing has to listen for every file change, you will need to increase the maximum open files limit for your user, and maybe the kernel limit using the corresponding sysctl.
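As a sketch, on OpenBSD this usually means raising kern.maxfiles in /etc/sysctl.conf (the value below is an arbitrary example):
kern.maxfiles=65535
and raising the openfiles limits inside your user's login class entry in /etc/login.conf:
:openfiles-cur=4096:\
:openfiles-max=8192:\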
You can find more detailed information about using Syncthing on OpenBSD in the file /usr/local/share/doc/pkg-readmes/syncthing.
I often see a lot of confusion with regard to OpenBSD: it is either assimilated to a Linux distribution or mixed up with FreeBSD.
Let's be clear: OpenBSD is a standalone operating system. It came as a fork of NetBSD in 1994, and there isn't much in common between the two nowadays.
While OpenBSD and the other BSDs are independent projects, they share some very old roots in their core and regularly see source code changes in one being imported into another, but this really represents a very small amount of the daily code changes.
a complete operating system with X, network services, compilers, all out of the box
100% community driven
more than 11000 packages with stuff like GNOME, Xfce, LibreOffice, Chromium, Firefox, KDE applications, GHC etc... (and KDE Plasma SOON!)
a release every 6 months
sandboxed web browsers
stack smash memory protection
where OpenSSH is developed
accurate manual pages for everything
It's used with success on workstations, either for personal or professional use. It's also widely used as a server, be it for network services or just routing/filtering the network!
You can install OpenBSD on your system, or on a spare computer you don't use anymore. You need at least 48 MB of memory for it to work, and many architectures are supported, like arm64, amd64, i386, sparc64, powerpc, riscv...
You can rent an OpenBSD VM on OpenBSD Amsterdam, a company doing OpenBSD hosting on OpenBSD servers using the OpenBSD hypervisor! And they give money to the OpenBSD project for each VM they host!
We are in October 2023, let's celebrate the first OctOpenBSD event, the month where OpenBSD users show the world that our favorite operating system is still relevant.
The event will occur from 1st October up to 31st October. A surprise will be revealed on the OpenBSD Webzine for the last day!
Dear Firefox users, what if I told you it's possible to harden Firefox by changing a lot of settings? Something really boring to explain and hard to reproduce on every computer. Fortunately, someone did the job of automating all of that under the name Arkenfox.
Arkenfox's design is simple: it's a Firefox configuration file (more precisely a user.js file) that you have to drop in your profile directory to override many Firefox defaults with a lot of curated settings hardening privacy and security. Cherry on the cake, it features an updater and a way to override some of its values with a user-defined file.
This makes Arkenfox easy to use on any system (including Windows), but also easy to tweak or distribute across multiple computers.
The official documentation contains more information, but basically the steps are the following:
find your Firefox profile directory: open about:support and search for an entry named "Profile Directory"
download the latest Arkenfox user.js release archive
if the profile is not new, there is an extra step to clean it using scratchpad-scripts/arkenfox-cleanup.js which contains instructions at the top of the file
save the file user.js in the profile directory
add updater.sh to the profile directory, so you can update user.js easily later
create user-overrides.js in the profile directory if you want to override some settings and keep them, the updater is required for the override
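Once in place, keeping it up to date is a matter of running the updater from the profile directory; a sketch, assuming the updater script shipped by the Arkenfox project is named updater.sh:
$ cd ~/.mozilla/firefox/xxxxxxxx.default-release   # your profile directory
$ sh updater.sh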
Basically, Arkenfox disables a lot of persistency such as cache storage, cookies and history. But it also enforces a canvas of fixed size to render the content, resets the preferred languages to English only (which defines the language used to display a multilingual website), and makes many more changes.
You may want to override some settings because you don't like them. In the project's Wiki, you can find all Arkenfox overrides, with the explanation of its new value, and which value you may want to use in your own override.
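To give an idea of the format, a user-overrides.js is just a list of user_pref() lines; for example, the following (the pref name is my assumption of what Arkenfox toggles) would re-enable searching from the address bar:
user_pref("keyword.enabled", true);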
By default, cookies aren't saved, so if you don't want to log in every time you restart Firefox, you have to specifically allow cookies for each website.
The easiest method I found is to press Ctrl+I, visit the Permissions tab, and uncheck the "Default permissions" relative to cookies. You could also do it by visiting Firefox settings, and search for an exception button in which you can enter a list of domains where cookies shouldn't be cleared on shutdown.
By default, entering text in the address bar won't trigger a search anymore, so instead of using Ctrl+L to type in the bar, you can use Ctrl+K to type a search.
The Arkenfox wiki recommends using only the uBlock Origin and Skip Redirect extensions, with some details. I agree they both work well and do the job.
It's possible to harden uBlock Origin by disabling 3rd party scripts / frames by default, while giving you the opportunity to allow some sources per domain or globally; this is called blocking mode. I found it to be way more usable than NoScript.
I found that Arkenfox was a bit hard to use at first because I didn't fully understand the scope of its changes, but it didn't break any website even if it disables a lot of Firefox features that aren't really needed.
This reduces Firefox attack surface, and it's always a welcome improvement.
Arkenfox user.js isn't the only set of Firefox settings around, there is also Betterfox (thanks prx!), which provides different profiles, even one for performance. I haven't tried any of these profiles yet. Arkenfox and Betterfox are parallel projects, not forks; it's actually complicated to compare which one would be better.
I recently wanted to improve Qubes OS accessibility to new users a bit, yesterday I found why GNOME Software wasn't working in the offline templates.
Today, I'll explain how to install programs from Flatpak in a template to provide to other qubes. I really like flatpak as it provides extra security features and a lot of software choice, and all the data created by Flatpak packaged software are compartmentalized into their own tree in ~/.var/app/program.some.fqdn/.
Make the proxy environment variable persistent for the user user; this will allow GNOME Software to work with Flatpak and all flatpak command lines to automatically pick up the proxy.
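One way to do it (assuming a bash shell in the Fedora template) is to append the export to the user's shell profile, reusing the qubes update proxy address:
$ echo 'export all_proxy=http://127.0.0.1:8082/' >> /home/user/.bashrc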
In order to circumvent a GNOME Software bug, if you want to use it to install packages (Flatpak or not), you need to add the following line to /rw/config/rc.local:
If you install or remove flatpak programs, either from the command line or with the Software application, you certainly want them to be easily available to add in the qubes menus.
Here is a script to automatically keep the applications list in sync every time a change is made to the flatpak applications.
If you don't want to use the automated script, you will need to run /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh, or click on "Sync applications" in the template qube settings after each flatpak program installation / deinstallation.
For the setup to work, you will have to install the package inotify-tools in the template, this will be used to monitor changes in a flatpak directory.
#!/bin/sh
# when a desktop file is created/removed
# - links flatpak .desktop in /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0
inotifywait -m -r \
    -e create,delete,close_write \
    /var/lib/flatpak/exports/share/applications/ |
while IFS=':' read event
do
    find /var/lib/flatpak/exports/share/applications/ -type l -name "*.desktop" | while read line
    do
        ln -s "$line" /usr/share/applications/
    done
    find /usr/share/applications/ -xtype l -delete
    /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done
You have to mark this file as executable with chmod +x /usr/local/sbin/sync-app.sh.
You can automatically run flatpak upgrade after a template update. After a dnf change, all the scripts in /etc/qubes/post-install.d/ are executed.
Create /etc/qubes/post-install.d/05-flatpak-update.sh with the following content, and make the script executable:
#!/bin/sh
# abort if not in a template
if [ "$(qubesdb-read /type)" = "TemplateVM" ]
then
    export all_proxy=http://127.0.0.1:8082/
    flatpak upgrade -y --noninteractive
fi
Every time you update your template, Flatpak packages will be upgraded afterward, and the application menus will also be updated if required.
With this setup, you can finally install programs from flatpak in a template to provide it to other qubes, with bells and whistles to not have to worry about creating desktop files or keeping them up to date.
Please note that while well-made Flatpak programs like Firefox will add extra security, the repository flathub allows anyone to publish programs. You can browse flathub to see who is publishing which software, they may be the official project team (like Mozilla for Firefox) or some random people.
This article is meant to be a simple guide explaining how to make use of the OpenBSD specific feature pledge in order to restrict a software's capabilities for more security.
While pledge falls into the sandboxing category, it's different from the traditional sandboxing we are used to seeing, because it happens within the source code itself and can be really tightened. Many programs actually require lots of privileges, like reading files or doing DNS, when initializing; those privileges can then be dropped, which is possible with pledge but not with traditional sandboxing wrappers.
In OpenBSD, most of the base userland has support for pledge, and more and more packaged software (including Chromium and Firefox) received some code to add pledge. If a program tries to use a system call that isn't in its pledge promises list, it dies, and the violation is reported in the system logs.
What makes pledge pretty cool is how easy it is to implement in your software: it has a simple mechanism of system call families, so you don't have to worry about listing every system call, only their categories (named promises), like reading a file, writing a file, executing binaries etc...
I found a small utility that I will use to illustrate how to add pledge to a program. The program is qprint, a C quoted printable encoder/decoder. This kind of converter is quite easy to pledge because most of the time, they only take an input, do some computation and make an output, they don't run forever and don't do network.
When extracting the sources, we can find a bunch of files; we will focus on reading the *.c files, and the first thing we want to find is the function main().
It happens the main function is in the file qprint.c. It's important to call pledge as soon as possible in the program, most of the time after variable initialization.
Adding pledge to a program requires understanding how it works, because some features that aren't often used may be broken by pledge, and programs that do live reloading or change their behavior at runtime are complicated to pledge.
Within the function main(), below the variable declarations, we will add a call to pledge with the promise stdio because the program can display the result on the standard output, rpath because it can read files, and wpath as it can also write files.
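The call would look something like the following sketch (where exactly it lands in qprint.c is left as an assumption):
#include <unistd.h>   /* provides pledge(2) */

	/* early in main(), right after the variable declarations */
	pledge("stdio rpath wpath", NULL);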
It's ok, we included the header providing pledge and called it from within main(). But what if the pledge call fails for some reason? We need to ensure it worked, or abort the program. Let's add some checks.
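A sketch of the checked version, using err(3) to abort with an explicit message if the call fails:
#include <err.h>
#include <unistd.h>

	if (pledge("stdio rpath wpath", NULL) == -1)
		err(1, "pledge");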
This is a lot better now: if the pledge call fails, the program will stop, and we will be warned about it. I don't know exactly under which circumstances it could fail, but maybe if a promise name changes or doesn't exist anymore; it would be bad if pledge silently failed.
Now that we made some changes to the program, we need to verify it still works as expected.
Fortunately, qprint comes with a test suite, which can be run with make wringer. If the test suite passes and the tests have good coverage, it means we probably haven't broken anything. If the test suite fails, we should see an error in the output of dmesg telling us why.
And, it failed!
qprint[98802]: pledge "cpath", syscall 5
This error (which killed the PID instantly) indicates that the pledge list is missing cpath; this makes sense because the program has to create new files when you specify an output file.
After adding cpath to the list and running the test suite again, all tests pass! Now, we know exactly that the software can't do anything except use the system calls we whitelisted.
We could tighten pledge more by dropping rpath if the file is read from stdin, and cpath wpath if the output is sent to stdout. I left this exercise to the reader :-)
It's actually possible to call pledge() in other programming languages. Perl has a library provided in the OpenBSD base system that will work out of the box. For some others, such a library may already be packaged (for Python and Go at least). If you use something less common, you can define an interface to call the library.
It's possible to find which running programs are currently using pledge() by using ps auxww | awk '$8 ~ "p" { print }', any PID with a state containing p indicates it's pledged.
If you want to add pledge to a packaged program on OpenBSD, make sure it still fully works.
Adding pledge to a program that needs most of the promises won't achieve much...
Now, if you want to practice, you can tighten the pledge calls to only allow qprint to use the pledge stdio only in the case it's used in a pipe for input and output like this: ./qprint < input.txt > output.txt.
Ideally, it should add the pledge cpath wpath only when it writes into a file, and rpath only when it has to read a file, so in the case of using stdin and stdout, only stdio would have been added at the beginning.
Good luck, Have fun! Thanks to Brynet@ for the suggestion!
The system call pledge() is a wonderful security feature that is reliable, and as it must be done in the source code, the program isn't run from within a sandboxed environment that may be possible to escape. I can't say pledge can't be escaped, but I think it's a lot less likely to be escaped than any other sandbox mechanism (especially since the program immediately dies if it tries to escape).
Next time, I'll present its companion system called unveil which is used to restrict access to the filesystem, except some developer defined files.
I wanted to share my list of favorite games of all time. Making the list wasn't easy though, so I've set some rules to help me decide.
Here are the criteria:
if you show me the game, I'd be happy to play it again
if it's a multiplayer game, let's assume we could still play it
the nostalgia factor should be discarded
let's try to avoid selecting multiple similar games
I'd love being able to forget the story to play it again from a fresh point of view
Trivia: I'm not a huge gamer. I still play many games nowadays, but I only play each of them for a couple of hours to see what they have to offer in terms of gameplay and mechanics, and to see if they are innovative in some way. If a game is able to surprise me or give me something new, I may spend a bit more time on it.
Here is the list of the top 20 games I enjoyed, and which I'd happily play again anytime.
I tried to single out some games as a bit better than the others, so there is a top 3, a top 10, and a top 20. I haven't been able to rank them from 1 to 20, so I just made tiers.
I spent so many hours playing with my brother or friends, sharing the mouse each turn so everyone could play with a single computer.
And not only was the social factor nice, the game itself was cool: there are many different factions to play, and there is real strategy at play to win. A must have.
The Sega Saturn wasn't very popular, but it had some good games, and one of them is Saturn Bomberman. Of all the games in the Bomberman franchise, this one really looks like the best: it featured dinosaurs with unique abilities that could grow up, some weird items, and many maps.
And it had an excellent campaign that was long to play, and could be played in coop! The campaign was really really top notch for this kind of game, with unique items you couldn't find in multiplayer.
I guess this is a classic. I played the Nintendo 64 version a lot, and now we have the 1+2 games in one, with a high refresh rate, HD textures and still the same good music.
This may sound like heresy, but I never played the campaign of this game. I just played skirmish or multiplayer with friends, and with the huge choice of factions with different gameplay, it's always cool even if the graphics have aged a bit.
Being able to send a dreadnought from space directly into the ork base, or send legions of necrons at that Tau player, is always a source of joy.
Street Fighter 2 Special Champion Edition
A classic on the Megadrive/Genesis: it's smooth, and the music is good. So many characters and stages, incredible soundtracks. The combos were easy to remember, just enough to give each character their own identity and allow players to onboard quickly.
Maybe the Super NES version is superior, but I always played it on the Megadrive.
Maybe the game which demonstrated that great deck-based video games can be made.
Playing a character with a set of skills as cards, gathering items while climbing a tower, it can get a bit repetitive over time though, but the game itself is good and doing a run occasionally is always tempting.
The community made a lot of mods, even adding new characters with very specific mechanics, I highly recommend it for anyone looking for a card based game.
My first Monster Hunter game, on 3DS. I absolutely loved it, insane fights against beloved monsters (we need to study them carefully, so we need to hunt a lot of them :P).
While Monster Hunter World showed better graphics and smoother gameplay, I still prefer the more rigid MH games like MH4U or MH Generations Ultimate.
The 3D effect on the console was working quite well too!
A very good card game with multiple factions, but not like Slay the Spire.
There are lots of combos to create as cards are persistent within the train, and runs don't depend that much on RNG (random number generation), which makes it a great game.
A classic among RPGs. I wanted to put an Elder Scrolls game in the list, and I went with Oblivion. In my opinion, this was the coolest one compared to Morrowind or Skyrim. I have to say, I hesitated with Morrowind, but because of all Morrowind's flaws and issues, Oblivion turned out to be the better game. Skyrim was just bad for me, really boring and not interesting.
Oblivion gave the opportunity to discover many cities with a day/night cycle and NPCs that had homes and were at work during the day; the game was incredible when it was released, and I think it's still really good.
Trivia: I never did the story of Morrowind or Oblivion, and yet I spent a lot of time playing them!
The greatest puzzle game I ever played. It's like chess, but actually fun. Moving some mechas on a small tiled board when it's your turn, you must think about everything that will happen and in which order.
The number of mechas and pieces of equipment you find in the game makes it really replayable, and game sessions can be short, so it's always tempting to start yet another run.
My first Yakuza / Like a dragon game, I didn't really know what to expect, and I was happy to discover it!
A Japanese RPG / turn based game featuring the most stupid skills or quests I've ever seen. The story was really engaging, unlocking new jobs / characters leads to more stupidity around.
A super NES classic, and it was possible to play in coop with a friend!
The game had so much content: lots of weapons, magic, and monsters, and the soundtrack is just incredible all along. Even better, at some point in the game you get the opportunity to leave your current location by riding a dragon in a 3D view over the planet!
At the moment, it's the best RPG I have played, and it's turn-based, just how I like them.
I'd have added Neverwinter Nights, but BG3 does better in every way, so I kept BG3 instead.
Every new game could be played a lot differently than the previous one, there are so many possibilities out there, it's quite the next level of RPG compared to what we had before.
After hesitating between Factorio and Dyson Sphere Program in the list, I chose to retain Factorio, because DSP is really good, but I can't see myself starting it again and again like Factorio. DSP has a very very slow beginning, while Factorio provides fun much faster.
Factorio invented a new genre of game: automation. I get crazy with automation, optimization. It's like doing computer stuff in a game, everything is clean, can be calculated, I could stare at conveyor belts transporting stuff like I could stare at Gentoo compilation logs for hours. The game is so deep, you can do crazy things, even more when you get into the logic circuits.
While I finished the game, I'm always up for a new world with some goals, and modding community added a lot of high quality content.
The only issue with this game is that it's hard to stop playing.
While I played Streets of Rage 2 a lot more than the 4th, I think this modern installment is just better.
You can play with a friend almost immediately, the fun is there, and brawling bad guys is pretty cool. The music is good, the character roster is complete, and it's just 100% fun to play again and again.
That's one game I wish I could forget, just to play it again...
It gave me a truly unique experience as a gamer.
It's an adventure game featuring a time loop of 15 minutes; the only thing you acquire in the game is knowledge, in your own mind. With that knowledge, you can complete the game in different ways, but first you need to find clues leading to other clues, leading to some pieces of the whole puzzle.
There are some games I really enjoyed but, for some reason, haven't been able to put in the list; it could be replayability issues, or maybe a nostalgia factor that was too high?
Let me show you a very practical feature of the qcow2 virtual disk format, available in OpenBSD's vmm, allowing you to easily create derived disks from an original image (also called delta disks).
A derived disk image is a new storage file that inherits all the data from the original file, without ever modifying the original: it's like stacking a fresh new disk on top of the previous one, with all the changes now written to the new one.
This allows interesting use cases such as using a golden image to provide a base template, like a fresh OpenBSD install, or creating temporary disks to try changes without harming the original file (and without having to back up a potentially huge file).
This is NOT OpenBSD specific, it's a feature of the qcow2 format, so while this guide is using OpenBSD as an example, this will work wherever qcow2 can be used.
First, you need to have a qcow2 file with something installed in it, let's say you already have a virtual machine with its storage file /var/lib/vmm/alpine.qcow2.
We will create a derived file /var/lib/vmm/derived.qcow2 using the vmctl command:
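The command should look like this (vmctl create accepts a base image with -b):
# vmctl create -b /var/lib/vmm/alpine.qcow2 /var/lib/vmm/derived.qcow2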
The derived disk will stop working if the original file is modified, so once you make derived disks from a base image, you shouldn't modify the base image.
However, it's possible to merge changes from a derived disk to the base image using the qemu-img command:
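With qemu-img installed (from the qemu package on OpenBSD), committing the derived disk back into its base should look like this:
$ qemu-img commit /var/lib/vmm/derived.qcow2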
Derived images can be useful in some scenarios: if you have an image and want to experiment without making a full backup, just use a derived disk. If you want to provide a golden image as a starting point, like an installed OS, this works too.
One use case I had was with OpenKuBSD: I had a single OpenBSD install as a base image, and each VM had a derived disk as its root, removed and recreated at every boot, plus a dedicated disk for /home; this allowed me to keep all the VMs clean while having only a single system to manage.
Merging multiple PDFs into a single PDF also uses the sub command cat. In the following example, you will concatenate the PDF first.pdf and second.pdf into a merged.pdf result:
pdftk first.pdf second.pdf cat output merged.pdf
Note that they are concatenated in their order in the command line.
Pdftk comes with a very powerful way to rotate PDF pages. You can specify pages or ranges of pages to rotate, the whole document, or only odd/even pages etc...
If you want to rotate all the pages of a PDF clockwise (east), you need to specify the range 1-end, which means first to last page:
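The command should look like this (input.pdf and rotated.pdf are example names):
pdftk input.pdf cat 1-endeast output rotated.pdf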
If you want to select even or odd pages, you can add the keyword even or odd between the range and the rotation direction: 1-10oddwest or 2-8eveneast are valid rotations.
If you want to reverse the order of the pages in your PDF, you can use the special range end-1, which goes through the pages from the last to the first one; with the subcommand cat, this simply creates a new PDF:
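For example:
pdftk input.pdf cat end-1 output reversed.pdf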
Pdftk has some other commands; most people will only need to extract / merge / rotate pages, but take a look at the documentation to learn about all of pdftk's features.
PDFs are usually a pain to work with, but pdftk makes it very fast and easy to apply transformations to them. What a great tool :-)
As some may know, I'm an XMPP user, an instant messaging protocol which used to be known as Jabber. My server is running Prosody XMPP server on OpenBSD. Recently, I got more users on my server, and I wanted to improve performance a bit by switching from the internal storage to SQLite.
Actually, Prosody comes with a tool to switch from one storage backend to another, but I found the documentation lacking, and on OpenBSD the migration tool isn't packaged (yet?).
The switch to SQLite drastically reduced Prosody's CPU usage on my small server, and it went pain-free.
For the migration to be done, you will need a few prerequisites:
know your current storage, which is "internal" by default
know the future storage you want to use
know where prosody stores its files
the migration tool
On OpenBSD, the migration tool can be retrieved by downloading the sources of prosody. If you have the ports tree available, just run make extract in net/prosody and cd into the newly extracted directory. The directory path can be retrieved using make show=WRKSRC.
The migration tool can be found in the subdirectory tools/migration of the sources, the program gmake is required to build the program (it's only replacing a few variables in it, so no worry about a complex setup).
In the migration directory, run gmake, you will obtain the migration tool prosody-migrator.install which is the program you will run for the migration to happen.
In the migration directory, you will find a file migrator.cfg.lua.install, this is a configuration file describing your current prosody deployment and what you want with the migration, it defaults to a conversion from "internal" to "sqlite" which is what most users will want in my opinion.
Make sure the variable data_path in the file refers to /var/prosody which is the default directory on OpenBSD, and check the hosts in the "input" part which describe the current storage. By default, the new storage will be in /var/prosody/prosody.sqlite.
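After the migration, don't forget to point Prosody at the new storage and restart it. The relevant bits of /etc/prosody/prosody.cfg.lua should look like the following sketch (the SQL storage also requires the LuaDBI SQLite driver to be available):
storage = "sql"
sql = { driver = "SQLite3", database = "prosody.sqlite" }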
Prosody comes with a migration tool to switch from a storage backend to another, that's very handy when you didn't think about scaling the system correctly at first.
The migrator can also be used to migrate from the server ejabberd to prosody.
Thanks prx for your report about some missing steps!
This means there are 244 MB of memory currently in use, and 158 MB in the swap file.
The cache column displays how much file system data is cached in memory; this is extremely useful because every time you open a program, it avoids seeking it on the storage media if it's already in the memory cache, which is way faster. This memory is freed when needed if there is not enough free memory available.
The "free" column only tells you that this RAM is completely unused.
The number 733M indicates the total real memory, which includes memory in use that could be freed if required; however, if someone finds a clearer explanation, I'd be happy to read it.
The command systat is OpenBSD specific, often overlooked but very powerful, it has many displays you can switch to using left/right arrows, each aspect of the system has its own display.
The default display has a "memory totals in (KB)" area about your real, free or virtual memory.
When one looks at OpenBSD memory usage, it's better to understand the various fields before reporting a wrong amount, or claiming that OpenBSD uses too much memory. But we have to admit the documentation explaining each field is quite lacking.
It's common knowledge that SSH connections are secure; however, they always had a flaw: when you connect to a remote host for the first time, how can you be sure it's the right one and not a tampered system?
SSH uses what we call TOFU (Trust On First Use), when you connect to a remote server for the first time, you have a key fingerprint displayed, and you are asked if you want to trust it or not. Without any other information, you can either blindly trust it or deny it and not connect. If you trust it, the key's fingerprint is stored locally in the file known_hosts, and if the remote server offers you a different key later, you will be warned and the connection will be forbidden because the server may have been replaced by a malicious one.
Let's try an analogy. It's a bit like if you only had a post-it with, supposedly, your bank phone number on it, but you had no way to verify if it was really your bank on that number. This would be pretty bad. However, using an up-to-date trustable public reverse lookup directory, you could check that the phone number is genuine before calling.
What we can do to improve the TOFU situation is to publish the server's SSH fingerprint over DNS, so when you connect, SSH will try to fetch the fingerprint if it exists and compare it with what the server is offering. This only works if the DNS server uses DNSSEC, which guarantees the DNS answer hasn't been tampered with in the process. It's unlikely that someone would be able to simultaneously hijack your SSH connection to a different server and also craft valid DNSSEC replies.
The setup is really simple: we need to securely gather the fingerprints of each key (there is one per key algorithm) on a server, and publish them as SSHFP DNS entries.
If the server has new keys, you need to update its SSHFP entries.
We will use the tool ssh-keygen which contains a feature to automatically generate the DNS records for the server on which the command is running.
For example, on my server interbus.perso.pw, I run ssh-keygen -r interbus.perso.pw. to get the records:
$ ssh-keygen -r interbus.perso.pw.
interbus.perso.pw. IN SSHFP 1 1 d93504fdcb5a67f09d263d6cbf1fcf59b55c5a03
interbus.perso.pw. IN SSHFP 1 2 1d677b3094170511297579836f5ef8d750dae8c481f464a0d2fb0943ad9f0430
interbus.perso.pw. IN SSHFP 3 1 98350f8a3c4a6d94c8974df82144913fd478efd8
interbus.perso.pw. IN SSHFP 3 2 ec67c81dd11f24f51da9560c53d7e3f21bf37b5436c3fd396ee7611cedf263c0
interbus.perso.pw. IN SSHFP 4 1 cb5039e2d4ece538ebb7517cc4a9bba3c253ef3b
interbus.perso.pw. IN SSHFP 4 2 adbcdfea2aee40345d1f28bc851158ed5a4b009f165ee6aa31cf6b6f62255612
You certainly noted I used an extra dot, this is because they will be used as DNS records, so either:
Use the full domain name with an extra dot to indicate you are not giving a subdomain
Use only the subdomain part, this would be interbus in the example
If you use interbus.perso.pw without the dot, this would be for the domain interbus.perso.pw.perso.pw because it would be treated as a subdomain.
Note that the -r argument is only used as raw text in the output; it doesn't make ssh-keygen fetch the keys from a remote host.
Now, just add each of the generated entries in your DNS.
By default, if you connect to my server, you should see this output:
> ssh interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
It's telling you the server isn't known in known_hosts yet, and you have to trust it (or not, but you wouldn't connect).
However, with the option VerifyHostKeyDNS set to yes, the fingerprint will automatically be accepted if the one offered is found in an SSHFP entry.
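You can enable it globally or per host in your ssh client configuration; a minimal ~/.ssh/config entry could look like this (the host pattern is just an example, adapt it):

Host *.perso.pw
    VerifyHostKeyDNS yes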
As I explained earlier, this only works if the DNS answer is valid with regard to DNSSEC, otherwise, the setting "VerifyHostKeyDNS" automatically falls back to "ask", asking you to manually check the DNS SSHFP found and if you want to accept or not.
For example, without a working DNSSEC, the output would look like this:
$ ssh -o VerifyHostKeyDNS=yes interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
Matching host key fingerprint found in DNS.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
With a working DNSSEC, you should immediately connect without any TOFU prompt, and the host fingerprint won't be stored in known_hosts.
SSHFP is a simple mechanism to build a chain of trust using an external service to authenticate the server you are connecting to. Another method to authenticate a remote server would be to use an SSH certificate, but I'll keep that one for later.
We saw that VerifyHostKeyDNS is reliable, but it doesn't save the fingerprint in the file ~/.ssh/known_hosts. This can be an issue if you later need to connect to the same server without a working DNSSEC resolver: you would have to blindly trust the server.
However, while DNSSEC is working, you can generate the known_hosts entries from the server, so next time you won't rely only on DNSSEC.
Note that if the server is replaced by another one and its SSHFP records are updated accordingly, SSH will still ask you what to do if you have the old keys in known_hosts.
To gather the fingerprints, connect to the remote server (remote-server.local in the example) and add the command output to your known_hosts file:
ssh-keyscan localhost 2>/dev/null | sed 's/^localhost/remote-server/'
We omit the .local in the remote-server.local hostname because it's a subdomain of the DNS zone. (thanks Francisco Gaitán for spotting it).
Basically, ssh-keyscan can gather keys remotely, but here we want the local keys of the server, so we need to modify its output to replace localhost with the actual server name used to ssh into it.
This article explains a setup I made for our family vacation place, I wanted to turn an old laptop (a Dell Vostro 1500 from 2008) into a retrogaming station. That's actually easy to do, but I wanted to make it "childproof" so it will always work even if we let children alone with the laptop for a moment, that part was way harder.
This is not a tutorial explaining everything from A to Z, but mostly what worked / didn't work from my experimentation.
First step is to pick an operating system. I wanted to use Alpine, with the persistent mode I described last week, this would allow having nothing persistent except the ROM files. Unfortunately, the packages for Retroarch on Alpine were missing the cores I wanted, so I dropped Alpine. A retroarch core is the library required to emulate a given platform/console.
Then, I wanted to give FreeBSD a try before switching to a more standard Linux system (Alpine uses the libc musl which makes it "non-standard" for my use case). The setup was complicated as FreeBSD barely does anything by itself at install time, but after I got a working desktop, Retroarch had an issue: I couldn't launch any game even though the cores were loaded. I can't explain why this wasn't working, everything seemed fine. On top of this issue, gamepad support was really random, so I gave up.
Finally, I installed Debian 12 using the netinstall ISO, and without installing any desktop and graphical server like X or Wayland, just a bare Debian.
To achieve a more children-proof environment, I decided to run Retroarch directly from a TTY, without a graphical server.
This removes a lot of issues:
no desktop you could lock
no desktop you could log out from
no icons / no menus to move / delete
nothing fancy, just retroarch in full screen
In addition to all the benefits listed above, this also reduces the emulation latency, and makes the system lighter by not having to render through X/Wayland. I had to install the retroarch package and some GL / vulkan / mesa / sdl2 related packages to have it working.
One major painful issue I had was to figure out a way to start retroarch on tty1 at boot. Actually, this is really hard, especially since it must start under a dbus session to have all features enabled.
My solution is a hack, but good enough for the use case. I overrode the getty@tty1 service to automatically log in the user, and modified that user's ~/.bashrc to exec retroarch. If retroarch quits, tty1 is reset and retroarch starts again, so you can't escape it.
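For reference, here is a minimal sketch of that hack on a systemd system; the user name player is a placeholder, adapt it to your setup:

# /etc/systemd/system/getty@tty1.service.d/override.conf
# (can be created with: systemctl edit getty@tty1)
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin player --noclear %I $TERM

# appended at the end of /home/player/.bashrc
exec retroarch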
I can't describe all the tweaks I did in retroarch, some were for pure enhancement, some for "hardening". Here is a list of things I changed:
pre-configure all the controllers you want to use with the system
disable all menus except the playlists, they automatically group games by platform which is fine
set the default core for each playlist, this removes an extra weird step for non-technical users
set a special shortcut to access the quick menu from the controller, something like select+start should be good, this allows dropping/pausing a game from the controller
In addition to all of that, there is a lovely kiosk mode. This basically just allows you to password protect all the settings in Retroarch: once you are done with the configuration, enable the kiosk mode and nothing can be changed (except adding a ROM to the favorites).
Grub can be a major issue if a child boots up the laptop and presses a key at grub time. Just set GRUB_TIMEOUT=0 to disable the menu prompt, so it will boot directly into Debian.
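On Debian, this is quick to do (a sketch, don't forget to regenerate the grub configuration afterwards):

# /etc/default/grub (excerpt)
GRUB_TIMEOUT=0

# then apply the change
update-grub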
The computer doesn't need to connect to any network, so I disabled all the services related to network, this reduced the boot time by a few seconds, and will prevent anything weird from happening.
It may be wise to lock the BIOS, so even children who know how to boot something on a computer wouldn't be able to do that. This also prevents mistakes in the BIOS, better be careful. Don't lose that password.
If you want your gaming console to have this extra thing that will turn the boring and scary boot process text into something cool, you can use Plymouth.
I found a nice splash screen featuring Optimus' head from Transformers displayed while the system is booting, it looks pretty cool! And surely, this gives the system some charm and personality compared to the systemd boot process. It delays the boot by a few seconds though.
Retroarch is a fantastic software for emulation, and you can even run it from a TTY for lower latency. Its controller mapping is really smart: you have to configure each controller against some kind of "reference" controller, and then each core has a map from the reference controller to the controller of the console you are emulating. This means you don't have to map your controller for each console, just once.
Making a childproof kiosk computer wasn't easy, I'm sure there is room for improvement, but I'm happy that I turned a 15-year-old laptop into something useful that will bring joy for kids, and memories for adults, without them fearing that the system will be damaged by the kids (except physical damage but hey, I won't put the thing in a box).
Now, I have to do some paint job for the laptop behind-the-screen part to look bright and shiny :)
Hi! I've not been very communicative about my week during the Old Computer Challenge v3, the reason is that I failed it. Time for a postmortem (analysis of what happened) to understand the failure!
For context, the last time I used restricted hardware was for the first edition of the challenge two years ago. Last year's challenge was about reducing Internet connectivity.
I have to admit, I didn't prepare anything. I thought I could simply limit the requirements on my laptop, either on OpenBSD or openSUSE and enjoy the challenge. It turned out it was more complicated than that.
OpenBSD memory limitation code wasn't working on my system for some reason (I should report this issue)
openSUSE refused to boot in under 30 minutes with 512 MB of memory, even after adding swap, and I couldn't log in through GDM once there
I had to figure out a backup plan, which turned out to be Alpine Linux installed on a USB memory stick: memory and CPU core restrictions worked out of the box, and while figuring out how to effectively reduce the CPU frequency was hard, I finally did it.
From this point, I had a non-encrypted Alpine Linux on a poor storage medium. What would I do with this? Nothing much.
It turns out that in 2 years, my requirements evolved a bit. 512 MB wasn't enough to use a web browser with JavaScript, and while I thought it wouldn't be such a big deal, it WAS.
I regularly need to visit some websites, and doing it on my untrusted smartphone is a no-go, so I need a computer, and Firefox on 512 MB just doesn't work. Chromium almost works, but it depends on the page, and WebKit browsers often didn't work well enough.
Here is a sample of websites I needed to visit:
OVH web console
Patreon web page
Bank service
Some online store
Mastodon (I have such a huge flow that CLI tools don't work well for me)
Kanban tool
Deepl for translation
Replying to people on some open source project Discourse forums
Managing stuff in GitHub (gh tool isn't always on-par with the web interface)
For this reason, I often had to use my "work" computer to do the tasks, and ended up inadvertently continuing on this computer :(
In addition to web browsing, some programs like LanguageTool (a java GUI spellcheck program) required too much memory to be started, so I couldn't even spell check my blog posts (Aspell is not as complete as LanguageTool).
At first when I thought about the rules for the 3rd edition, the CPU frequency seemed to be the worst part. In practice, the system was almost swapping continuously but wasn't CPU bound. Hardware acceleration was fast enough to play videos smoothly.
If you can make good use of the 512 MB of memory, you certainly won't have CPU problems.
This is not related to the challenge itself, but I felt a bit stuck with my untrusted Alpine Linux, I have some ssh / GPG keys that are secured on two systems and my passwords, I almost can't do anything without them, and I didn't want to take the risk of compromising my security chain for the challenge.
In fact, since I started using Qubes OS, I have become reluctant to mix all my data on a single system, even the other one I'm used to working with (which has all the credentials too), but Qubes OS is the anti-old-computer-challenge, as you need to throw as much hardware as you can at it to make it useful.
However, the challenge wasn't such a complete failure for me. While I can't say I played by the rules, it definitely helped me to realize the changes in my computer use over the last years. This was the point when I started the "offline laptop" project three years ago, which transformed into the old computer challenge the year after.
I tried to use the computer less as I wasn't able to fulfill the challenge requirements, and did some stuff IRL at home and outside; the week went SUPER FAST, I was astonished to realize it's already over. This also forced me to look for solutions, so I spent *a LOT* of time trying to make Firefox fit in 512 MB, TLDR it didn't work.
The LEAST memory I'd need nowadays is 1 GB: it's still not much compared to what we have now (my main system has 32 GB), but it's twice the first requirement I had set.
It seems everyone had a nice week with the challenge, I'm very happy to see the community enjoying this every year. I may not be the challenge paragon for this year, but it was useful to me, and since then I couldn't stop thinking about how to improve my computer usage.
In this guide, I'd like to share with you how to install Alpine Linux, so it runs entirely from RAM, but using its built-in tool to handle persistency. Perfect setup for a NAS or router, so you don't waste a disk for the system, and this can even be used for a workstation.
Basically, we want to get the Alpine installer on a writable disk formatted in FAT instead of a read only image like the official installers, then we will use the command lbu to handle persistency, and we will see what needs to be configured to have a working system.
This is only a list of steps, they will be detailed later:
boot from an Alpine installer (if you are already using Alpine, you don't need to)
format a USB memory drive with an ESP partition and make it bootable
run setup-bootable to copy the bootloader from the installer to the freshly formatted drive
reboot on the USB drive
run setup-alpine
you are on your new Alpine system
run lbu commit to make changes persistent across reboot
For this step you have to download an Alpine Linux installer, take the one that suits your needs, if unsure, take the "Extended" one. Don't forget to verify the file checksum.
Once you have the ISO file, create the installation media:
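The usual dd dance works; a sketch, assuming the installer stick shows up as /dev/sdb (double check the device name and adapt the ISO file name before running it):

# replace the ISO name with the file you downloaded, and /dev/sdb with your USB stick
dd if=alpine-extended-VERSION-x86_64.iso of=/dev/sdb bs=4M
sync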
In this step, we will need to boot on the Alpine installer to create a new Alpine installer, but writable.
You need another USB media for this step, the one that will keep your system and data.
On Alpine Linux, you can use setup-alpine to configure your network, key map and a few things for the current system. You only have to say "none" when you are asked what you want to install, where, and if you want to store the configuration somewhere.
Run the following commands on the destination USB drive (networking is required to install a package), this will format it and use all the space as a FAT32 partition. In the example below, the drive is /dev/sdc.
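A possible way to do it is with parted, matching the description below (this is a sketch, the exact commands may differ from what I originally used; the device name comes from the example):

# install parted (this is why networking is needed)
apk add parted
# create a GPT table, a single FAT32 partition spanning the whole drive, marked as ESP
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart ESP fat32 1MiB 100%
parted -s /dev/sdc set 1 esp on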
This creates a GPT table on /dev/sdc, then creates a first partition as FAT32 from the first megabyte up to the full disk size, and finally marks it bootable. This guide is only for UEFI compatible systems.
We actually have to format the drive as FAT32, otherwise it's just a partition type without a way to mount it as FAT32:
mkfs.vfat /dev/sdc1
modprobe vfat
Final step: we use an Alpine tool to copy the bootloader from the installer to our new disk. In the example below, your installer may be mounted on /media/usb and the destination is /dev/sdc1; you can figure out the former using mount.
setup-bootable /media/usb /dev/sdc1
At this step, you made a USB disk in FAT32 containing the Alpine Linux installer you were using live. Reboot on the new one.
On your new installation media, run setup-alpine as if you were installing Alpine Linux, but answer "none" when you are asked which disk you want to use. When asked "Enter where to store configs", your new device should be proposed by default, accept it. Immediately after, you will be prompted for an APK cache, accept it as well.
At this point, we can say Alpine is installed! Don't reboot yet, you are already on your new system!
Just use it, and run lbu commit when you need to save changes done to packages or /etc/. lbu commit creates a new tarball on your USB disk containing the files listed in /etc/apk/protected_paths.d/; this tarball is loaded at boot time, and your packages are quickly reinstalled from the local cache.
Please take extra care: if you include more files, they will have to be stored on your USB media every time you commit the changes. You could modify the fstab to add an extra disk/partition for persistent data on a faster drive.
The kernel can't be upgraded using apk, you have to use the script update-kernel which creates a "modloop" file in the boot partition containing the boot image. You can't roll back this file.
You will need a few gigabytes in your in-memory filesystem, or use a temporary build directory by pointing the TMPDIR variable to persistent storage.
By default, tmpfs on root is set to 1 GB, this can be increased given you have enough memory using the command: mount -o remount,size=6G /.
The script should have the boot directory as a parameter, so it should look like update-kernel /media/usb/boot in a default setup, if you use an external partition, this would look like env TMPDIR=/mnt/something/ update-kernel /media/usb/boot.
By default, lbu will only keep the last version you save. By setting BACKUP_LIMIT to a number n, you will always have the last n versions of your system stored on the boot media, which is practical if you want to roll back a change.
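This is set in lbu's configuration file; a sketch, assuming the default location /etc/lbu/lbu.conf:

# /etc/lbu/lbu.conf (excerpt)
# keep the last 3 commits on the boot media
BACKUP_LIMIT=3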
If the boot media is referenced by its device name, your system may have trouble if you use it on a different computer or if you plug another USB disk into it. Fix this by using the UUID of your partition: you can find it with the program blkid from the eponymous package, and fix the fstab like this:
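A sketch of such an fstab line, with a placeholder UUID and assuming the media is mounted on /media/usb:

UUID=1234-ABCD /media/usb vfat noauto,ro 0 0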
If you added a user during setup-alpine, its home directory has been automatically added to /etc/apk/protected_paths.d/lbu.list, when you run lbu commit, its whole home is stored. This may not be desired.
If you don't want to save the whole home directory, but only a selection of files/directories, here is how to proceed:
edit /etc/apk/protected_paths.d/lbu.list to remove the line adding your user directory
you need to create the user directory at boot with the correct permissions: echo "install -d -o solene -g solene -m 700 /home/solene" | doas tee /etc/local.d/00-user.start
in case you have persistency set on at least one user subdirectory, it's important to fix the permissions of all the user data after boot: echo "chown -R solene:solene /home/solene" | doas tee -a /etc/local.d/00-user.start
you need to mark this script as executable: doas chmod +x /etc/local.d/00-user.start
you need to run the local scripts at boot time: doas rc-update add local
save the changes: doas lbu commit
I'd recommend the use of a directory named Persist and adding it to the lbu list. Doing so, you have a place to store some important data without having to save all your home directory (including garbage such as cache). This is even nicer if you use ecryptfs as explained below.
Because Alpine Linux is packaged in a minimalistic manner, you may have to install a lot of extra packages to have all the fonts, icons, emojis, cursors etc... working correctly as you would expect for a standard Linux desktop.
Fortunately, there is a community guide explaining each section you may want to configure.
Alpine insists on a qwerty keyboard layout in X until you log into your session, which can make typing passwords complicated.
You can create a file /etc/X11/xorg.conf.d/00-keyboard.conf like in the linked example and choose your default keyboard layout. You will have to create the directory /etc/X11/xorg.conf.d first.
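Such a file is just a standard Xorg InputClass section; a sketch, with a French layout as an example:

Section "InputClass"
        Identifier "keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "fr"
EndSection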
You could use ecryptfs to either encrypt the home partition of your user, or just give it a Private directory that could be unlocked on demand AND made persistent without pulling all the user files at every configuration commit.
$ doas apk add ecryptfs-utils
$ doas modprobe ecryptfs
$ ecryptfs-setup-private
Enter your login passphrase [solene]:
Enter your mount passphrase [leave blank to generate one]:
[...]
$ doas lbu add $HOME/.Private
$ doas lbu add $HOME/.ecryptfs
$ echo "install -d -o solene -g solene -m 700 /home/solene/Private" | doas tee /etc/local.d/50-ecryptfs.start
$ doas chmod +x /etc/local.d/50-ecryptfs.start
$ doas rc-update add local
$ doas lbu commit
Now, when you need to access your private directory, run ecryptfs-mount-private and you have your $HOME/Private directory which is encrypted.
You could use ecryptfs to encrypt the whole user directory, this requires extra steps and changes into /etc/pam.d/base-auth, don't forget to add /home/.ecryptfs to the lbu include list.
Let's be clear, this setup isn't secure! The weak part is the boot media, which doesn't use secure boot, could easily be modified, and has nothing encrypted (except the local backups, but NOT BY DEFAULT).
However, once the system has booted, if you remove the boot media, nothing can be damaged as everything lives in memory, but you should still use passwords for your users.
Alpine is a very good platform for this kind of setup, and they provide all the tools out of the box! It's a very fun setup to play with.
Don't forget that by default everything runs from memory without persistency, so be careful if you generate data you don't want to lose (passwords, downloads, etc...).
The lbu configuration can be encrypted, this is recommended if you plan to carry your disk around, especially if it contains sensitive data.
You can use the fat32 partition only for the bootloader and the local backup files, but you could have an extra partition that could be mounted for /home or something, and why not a layer of LUKS for encryption.
You may want to use zram if you are tight on memory, this creates a compressed block device that could be used for swap, it's basically compressed RAM, it's very efficient but less useful if you have a slow CPU.
If you reached this page, you may be interested in this new category of Linux distributions labeled "immutable".
In this category, one can find by age (oldest → youngest) NixOS, Guix, Endless OS, Fedora Silverblue, OpenSUSE MicroOS, Vanilla OS and many new to come.
I will give examples of immutability implementation, then detail my thoughts about immutability, and why I think this naming can be misleading. I spent a few months running all of those distributions on my main computers (NAS, Gaming, laptop, workstation) to be able to write this text.
The word immutability itself refers to an object that can't change.
However, when it comes to an immutable operating system, the definition immediately becomes vague. What would be an operating system that can't change? What would you be supposed to do with it?
We could say that a Linux LIVE-CD is immutable, because every time you boot it, you get the exact same programs running, and you can't change anything as the disk media is read only. But while the LIVE-CD is running, you can make changes to it, you can create files and directories, install packages, it's not stuck in an immutable state.
Unfortunately, this example was nice but the immutability approach of those Linux distributions is totally different, so we need to think a bit further.
There are three common principles in these systems:
system upgrades aren't done on the live system
package changes are applied on the next boot
you can roll back a change
Depending on the implementation, a system may offer more features. But this list is what a Linux distribution should have to be labelled "immutable" at the moment.
In this section, I'm mixing NixOS and Guix as they both rely on the same implementation. NixOS is based on Nix (first appeared in 2003), which was forked in the early 2010s into the Guix package manager to be 100% libre, which gave birth to an eponymous operating system, also 100% free.
These two systems are really different from the traditional Unix-like systems we are used to, and immutability is a main principle. To make it quick, they are based on their package manager (Nix or Guix), which stores every package or built file in a special read-only directory (where only the package manager can write), where each package has its own unique entry, and the operating system itself is a byproduct of the package manager.
What does that imply? If the operating system is built, this is because it's made of source code, you literally describe what you want your system to be in a declarative way. You have to list users, their shells, installed packages, running services and their configurations, partitions to mount with which options etc... Fortunately, it's made a lot easier by the use of modules which provide sane defaults, so if you create a user, you don't have to specify its UID, GID, shell, home etc...
So, as the system is built and stored in the special read-only directory, all your system is derived from that (using symbolic links), so all the files handled by the package manager are read-only. A concrete example is that /etc/fstab or /bin/sh ARE read-only, if you want to make a change in those, you have to do it through the package manager.
I'm not going into details, because this store based package manager is really different than everything else but:
you can switch between two configurations on the fly as it's just a symlink dance to go from a configuration to another
you can select your configuration at boot time, so you can roll back to a previous version if something is wrong
you can't make changes to a package file or system file as they are read only
the mount points, except the special store directory, are all mutable, so you can write changes in /home or /etc or /var etc... You can replace the system symlinks with a modified version, but you can't modify the symlink target in the store itself.
This is the immutability as seen through the Nix lens.
I've spent a few years running NixOS systems, this is really a blast for me, and the best "immutable" implementation around, but unfortunately it's too different, so its adoption rate is very low, despite all the benefits.
While this one is not the oldest immutable OS around, it's the first one to be released for the average user, while NixOS and Guix are older but for a niche user category. The company behind Endless OS is trying to offer a solid and reliable system, free and open source, that can work without Internet, to be used in countries with low Internet / power grid coverage. They even provide a version with "offline internet included" containing Wikipedia dumps, class lessons and many things to make a computer useful while offline (I love their work).
Endless OS is based on Debian, but uses the OSTree tool to make it immutable. OSTree allows you to manage a core system image, and add layers on top of it, think of packages as layers. But it can also prepare a new system image for the next boot.
With OSTree, you can apply package changes in a new version of the system that will be available at next boot, and revert to a previous version at boot time.
The partitions are mounted writable, except for /usr, the land of packages handled by OSTree, which is mounted read-only. There are no rollbacks possible for /etc.
Programs meant to be for the user (not the packages to be used by the system like grub, X display or drivers) are installed from Flatpak (which also uses OSTree, but unrelated to the system), this avoids the need to reboot each time you install a new package.
My experience with Endless OS is mixed, it is an excellent and solid operating system, it's working well, never failed, but I'm just not the target audience. They provide a modified GNOME desktop that looks like a smartphone menu, because this is what most non-tech users are comfortable with (but I hate it). And installing DevOps tools isn't practical but not impossible, so I keep Endless OS for my multimedia netbook and I really enjoy it.
This Linux distribution is the descendant of Project Atomic, an old initiative to make Fedora / CentOS / RHEL immutable. It's now part of the Fedora releases along with Fedora Workstation.
Fedora Silverblue is also using OSTree, but with a twist. It's using rpm-OSTree, a tool built on top of OSTree to let your RPM packages apply the changes through OSTree.
The system consists of a single core image for the release, let's say fedora-40, and for each package installed, a new layer is added on top of the core. At any time, you can list all the layers to know what packages have been installed on top of the core. If you remove a package, the whole stack is generated again without it (which is terribly SLOW), and there is absolutely no leftover after a package removal.
On boot, you can choose an older version of the system, in case something broke after an upgrade. If you install a package, you need to reboot to have it available as the change isn't applied on the current booted system, however rpm-OSTree received a nice upgrade, you can temporarily merge the changes of the next boot into the live system (using a tmpfs overlay) to use the changes.
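A few illustrative commands to show the workflow (a sketch, not exhaustive; check rpm-ostree's documentation):

# add a package as a new layer, active after the next boot
rpm-ostree install htop
# merge the pending deployment into the running system
rpm-ostree apply-live
# list the deployments, or go back to the previous one
rpm-ostree status
rpm-ostree rollback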
The mount point management is a bit different: everything is read-only except /etc, /root and /var, but your home directory is by default in /var/home, which sometimes breaks expectations. There are no rollbacks possible for /etc as it is not managed by rpm-ostree. A nice surprise was to discover that /usr/local/ is a symbolic link to a directory in /var/, allowing to easily inject custom changes without going through an rpm file.
As installing a new package is slow due to rpm-OSTree and requires a reboot to be fully usable (the live apply feature stores the extra changes in memory), they recommend using Flatpak for programs, or toolbox, some kind of wrapper that creates a rootless Fedora container where you can install packages and use them in your terminal. toolbox is meant to provide development libraries or tools you wouldn't have in Flatpak, but that you wouldn't want to install in your base Fedora system.
My experience with Fedora Silverblue has been quite good, it's stable, the updates are smooth even if they are slow. toolbox was working fine, but using it is a habit to learn.
This spin of OpenSUSE Tumbleweed (rolling-release OpenSUSE) features immutability, but with its own implementation. The idea of MicroOS / Aeon is really simple, the whole system except a few directories like /home or /var lives on a btrfs snapshot, if you want to make a change to the system, the current snapshot is forked into a new snapshot, and the changes are applied there, ready for the next boot.
What's interesting here is that /etc IS part of the snapshots, and can be rolled back, which wasn't possible in the OSTree based systems. It's also possible to make changes to any file of the file system (in a new snapshot, not the live one) using a shell, which can be very practical for injecting files to solve a driver issue. The downside is that it's not guaranteed that your system is "pure" if you start making changes, because they won't be tracked: the snapshots are just numbered, and you don't know what changes were made in each of them.
Changes must be done through the command transactional-update, which does all the snapshot work for you: you can either add/remove packages, or just start a shell in the new snapshot to make all the changes you want. I said /etc is part of the snapshots, it's true, but it's never read-only, so you could make a change live in /etc, then create a new snapshot, and the change would be immediately inherited. This can create troubles if you roll back to a previous state after an upgrade when you also made changes to /etc just before.
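A few illustrative commands (a sketch; the package name is just an example):

# install a package into a new snapshot, used at the next boot
transactional-update pkg install htop
# open a shell inside a new snapshot to make arbitrary changes
transactional-update shell
# go back to the previous snapshot
transactional-update rollback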
The default approach of MicroOS is disturbing at first, a reboot is planned every day after a system update, this is because it's a rolling-release system and there are updates every day, and you won't benefit from them until you reboot. While you can disable this automatic reboot, it makes sense to use the newest packages anyway, so it's something to consider if you plan to use MicroOS.
There is currently no way to apply the changes to the live system (like Silverblue offers), it's still experimental, but I'm confident this will be doable soon. As such, it's recommended to use distrobox with rootless containers of various distributions to install your favorite tools for your users, instead of using the base system packages. I don't really like this because it adds maintenance, and I often had issues with distrobox refusing to start a container after a reboot, which I had to destroy and recreate entirely to solve.
My experience with OpenSUSE MicroOS has been wonderful, it's in dual-boot with OpenBSD on my main laptop, it's my Linux gaming OS, and it's also my NAS operating system, so I don't have to care about updates. I like that the snapshot system doesn't restrict me, while OSTree systems just don't allow you to make changes without installing a package.
Finally, the really new (but mature enough to be usable) system in the immutable family is Vanilla OS based on Ubuntu (but soon on Debian), using ABroot for immutability. With Vanilla OS, we have another implementation that really differs from what we saw above.
ABroot's name is well thought out: the idea is to have a root partition A, another root partition B, and a partition for persistent data like /home or /var.
Here is the boot dance done by ABroot:
first boot is done on A, it's mounted in read-only
changes to the system like new packages or file changes in /etc are done on B (and can be applied live using a tmpfs overlay)
upon reboot, if the previous boot was A, you boot on B; then, if the boot is successful, ABroot scans for all the changes between A and B, and applies them from B to A
when you are using your system, until you make a change, A and B are always identical
This implementation has downsides: you can only roll back a change until you boot on the new version; after that, the changes are also applied to the other partition, and you can't roll back anymore. This implementation mostly protects you from a failing upgrade, or from changes you tried live but prefer to roll back.
Vanilla OS features the package manager apx, written by distrobox author. That's for sure an interesting piece of software, allowing your non-root user to install packages from many distributions (arch linux, fedora, ubuntu, nix, etc...) and integrates them into the system as if they were installed locally. I suppose it's some kind of layer on top of distrobox.
My experience wasn't very good, I didn't find ABroot to be really useful, and the version 22.10 I tried was using an old Ubuntu LTS release which didn't make my gaming computer really happy. The overall state of Vanilla OS, ABroot and apx is that they are young, I think it can become a great distribution, but it still has some rough edges.
I don't want to go much into details, but here is the short version: you can use Alpine Linux installer as a base system to boot from, and create tarballs of "saved configurations" that are automatically applied upon boot (it's just tarred directories and some automation to install packages). At every boot, everything is untarred again, and packages are installed again (you should use an apk cache directory), everything in live memory, fully writable.
What does this achieve? You always start from a clean state, changes are applied on top of it at every boot, and you can roll back the changes and start fresh again. Immutability as we defined it above isn't achieved because changes are applied on the base system, but it's quite close to fulfilling (my own) requirements.
I've been using it a few days only, not as my main system, and it requires a very good understanding of what you are doing because the system is fully in memory, and you need to take care about what you want to save/restore, which can create big archives.
Now that I gave some details about all the major immutable (Linux based) systems around, I think it's time to list the real pros and cons I found from my experimentation.
configuration management tools (ansible, salt, puppet etc.) integrate VERY badly; they received updates to know how to apply package changes, but you will mostly hit walls if you want to manage those like regular systems.
having to reboot after a change is annoying (except for NixOS and Guix which don't require rebooting for each change).
OSTree based systems aren't flexible, my netbook requires some extra files in alsa directories to get sound (fortunately Endless OS has them!), you just can't add the files without making a package deploying them.
blind rollbacks, it's hard to figure out what was done in each version of the system, so when you roll back it's hard to know what you are doing exactly.
it can be hard to install programs like Nix/Guix which require a directory at the root of the file system, or install non-packaged software system-wide (this is often bad practice, but sometimes a necessary evil).
immutability is a lie, many parts of the systems are mutable, although I don't know how to describe this family with a different word (transactional something?).
immutable doesn't imply stateless.
NixOS / Guix are doing it right in my opinion, you can track your whole system through a reliable package manager, and you can use a version control system on the sources, it has the right philosophy from the ground up.
immutability is often associated with security benefits, I don't understand why. If someone obtains root access on your system, they can still manipulate the live system and have fun with the /boot partition, nothing prevents them from installing a backdoor for the next boot.
immutability requires discipline and maintenance, because you have to care about the versioning, and you have extra programs like apx / distrobox / devbox that must be updated in parallel with the system (while this is all integrated into NixOS/Guix).
Immutable operating systems are making the news in our small community of open source systems, but behind this word lies various implementations with different use cases. The word immutable certainly creates expectations from users, but it's really nothing more than transactional updates for your operating system, and I'm happy we can have this feature now.
But transactional updates aren't new, I think it started a while ago with Solaris and ZFS allowing you to select a system snapshot at boot time, then I'm quite sure FreeBSD implemented this a decade ago, and it turns out that on any linux distribution with regular btrfs snapshots you could select a snapshot at boot time.
In the end, what's REALLY new is the ability to apply a transactional change to a non-live environment, integrate this into the bootloader, and give the user the tooling to handle it easily.
For Qubes OS, the simplest way to proceed is to use the qube sys-net (which is UNTRUSTED) for the scanner operations. Scanning in it isn't less secure than having a dedicated qube, as the network traffic toward the scanner isn't encrypted anyway, and this also eases the network setup a lot.
All the instructions below will be done in sys-net, with the root user.
Note that sys-net should be either an AppVM with persistent /home or a fully disposable system, so you will have to do all the commands every time you need your scanner. If you need it really often (I use mine once in a while), you may want to automate this in the template used by sys-net.
We need to install the program sane-airscan used to discover network scanners, and also all the backends/drivers for devices. On Fedora, this can be done using the following command, the package list may differ for other systems.
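Something along these lines should work (package names are from memory and may differ on your template):

# dnf install sane-airscan sane-backends sane-backends-drivers-scanners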
Make sure the service avahi-daemon is installed and running, the default Qubes OS templates have it, but not running. It is required for network devices discovery.
# systemctl start avahi-daemon
An extra step is required, avahi requires the port UDP/5353 to be opened on the system to receive discovery replies, if you don't do that, you won't find your network scanner (this is also required for printers).
You need to figure out the name of your network interface: open a console and type ip -4 -br a | grep UP, the first column is the interface name, and the lines starting with vif can be discarded. Run the following command, making sure to replace INTERFACE_NAME with the real name you just found.
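On a template still using iptables, the rule could look like this (a sketch; if your template uses nftables instead, you'll need the equivalent nft rule):

# iptables -I INPUT -p udp --dport 5353 -i INTERFACE_NAME -j ACCEPT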
You can run the command scanimage as a regular user to use your remote scanner, by default, it selects the first device available, so if you have a single scanner, you don't need to specify its long and complicated name/address.
You can scan and save as a PDF file using this command:
$ scanimage --format pdf > my_document.pdf
On Qubes OS, you can open a file manager in sys-net and right-click on the file to move it to the qube where you want to keep the document.
Using a network scanner is quite easy when it's supported by SANE, but you need direct access to the network because of the avahi discovery requirement, which is not practical when you have a firewall or use virtual machines in sub networks.
Hi! Today, I started the 3rd edition of the Old Computer Challenge. And it's not going well, I didn't prepare a computer before, because I wanted to see how easy it would be.
main computer (Ryzen 5 5600X with 32 GB of memory) running Qubes OS: well, Qubes OS may be the worst OS for that challenge because it needs so much memory as everything is done in virtual machines, just handling USB devices requires 400 MB of memory
main laptop (a t470) running OpenBSD 7.3: for some reasons, the memory limitation isn't working, maybe it's due to the hardware or the 7.3 kernel
main laptop running OpenSUSE MicroOS (in dual boot): reducing the memory to 512 MB prevents the system from unlocking the LUKS drive!
The thing is that I have some other laptops around, but I'd have to prepare them with full disk encryption and file synchronization to have my passwords, GPG and SSH keys around.
With this challenge, in its first hour, I realized my current workflows don't allow me to use computers with 512 MB of memory, this is quite sad. A solution would be to use the iBook G4 laptop that I've been using since the beginning of the challenges, or my T400 running OpenBSD -current, but they have really old hardware, and the challenge is allowing some more fancy systems.
I'd really like to try Alpine Linux for this challenge, let's wrap something around this idea.
Let me share an installation guide on OpenBSD for a product I like: kanboard. It's a Kanban board written in PHP, it's easy to use, light, effective, the kind of software I like.
While there is a docker image for easy deployment on Linux, there is no guide to install it on OpenBSD. I did it successfully, using httpd as the web server.
Extract the archive, and move the extracted content into /var/www/htdocs/kanboard; the file /var/www/htdocs/kanboard/cli should exist if you did it correctly.
Now, you need to fix the permissions for a single directory inside the project to allow the web server to write persistent data.
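The writable directory should be data/ (check kanboard's documentation if in doubt); on OpenBSD the web server runs as the www user, so something like this should do:

chown -R www:www /var/www/htdocs/kanboard/data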
For kanboard, we will need PHP and a few extensions. They can be installed and enabled using the following command: (for the future, 8.2 will be obsolete, adapt to the current PHP version)
pkg_add php-zip--%8.2 php-curl--%8.2 php-gd--%8.2 php-pdo_sqlite--%8.2
for mod in pdo_sqlite opcache gd zip curl
do
ln -s /etc/php-8.2.sample/${mod}.ini /etc/php-8.2/
done
rcctl enable php82_fpm
rcctl start php82_fpm
Now you have the service php82_fpm (chrooted in /var/www/) ready to be used by httpd.
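I won't detail my full httpd configuration, but a minimal server block in /etc/httpd.conf could look like this sketch (the server name is a placeholder):

server "kanboard.example.com" {
        listen on * port 80
        root "/htdocs/kanboard"
        directory index "index.php"
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
        }
}

Then enable and start the web server with rcctl enable httpd and rcctl restart httpd.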
If you want to use one of the first two methods, you will have to add a few files to the chroot like /bin/sh; you can find accurate and up to date information about the specific changes in the file /usr/local/share/doc/pkg-readmes/php-8.2.
Kanboard is a fine piece of software, I really like the kanban workflow to organize. I hope you'll enjoy it as well.
I'd also add that installing software without docker is still a thing, this requires you to know exactly what you need to make it run, and how to configure it, but I'd consider this a security bonus point. Think that it will also have all its dependencies updated along with your system upgrades over time.
When you need to regularly run a program on a workstation that isn't powered 24/7, or not even every day, you can't rely on a cron job for that task.
Fortunately, there is a good old tool for this job (first released in June 2000): it's called anacron, and it tracks when each configured task last ran.
I'll use OpenBSD as an example for the setup, but it's easily adaptable to any other Unix-like system.
The first step is to install the package anacron, this will provide the program /usr/local/sbin/anacron we will use later. You can also read OpenBSD specific setup instructions in /usr/local/share/doc/pkg-readmes/anacron.
Configure root's crontab to run anacron at system boot, we will use the flag -d to not run anacron as a daemon, and -s to run each task in a sequence instead of in parallel.
The crontab entry would look like this:
@reboot /usr/local/sbin/anacron -ds
If your computer is occasionally on for a few days, anacron won't run at all after the boot, so it would make sense to run it daily too just in case:
# at each boot
@reboot /usr/local/sbin/anacron -ds
# at 01h00 if the system is up
0 1 * * * /usr/local/sbin/anacron -ds
Now, you will configure the tasks you want to run, and at which frequency. This is configured in the file /etc/anacrontab using a specific format, different from crontab.
There is a man page named anacrontab for official reference.
The format consists of the following ordered fields:
the frequency in days at which the task should be started
the delay in minutes after which the task should be started
a readable name (used as an internal identifier)
the command to run
I said it before but it's really important to understand, the purpose of anacron is to run daily/weekly/monthly scripts on a system that isn't always on, where cron wouldn't be reliable.
Usually, anacron is started at the system boot and run each task from its anacrontab file, this is why a delay field is useful, you may not want your backup to start immediately upon reboot, while the system is still waiting to have a working network connection.
Some variables can be used like in crontab, the most important are PATH and MAILTO.
Anacron keeps the last run date of each task in the directory /var/spool/anacron/ using the identifier field as a filename, it will contain the last run date in the format YYYYMMDD.
I really like the example provided in the OpenBSD package. By default, OpenBSD has some periodic tasks to run every day, week and month at night, we can use anacron to run those maintenance scripts on our workstations.
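It could look like this sketch, with made-up delays and identifiers (this is not the package example verbatim, adapt it to your needs):

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
MAILTO=""

# frequency (days), delay (minutes), identifier, command
1   10  daily_maintenance    /bin/sh /etc/daily
7   15  weekly_maintenance   /bin/sh /etc/weekly
30  20  monthly_maintenance  /bin/sh /etc/monthly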
If you are running an OpenSMTPD email server on OpenBSD, you may want to ban IPs used by bots trying to bruteforce logins. OpenBSD doesn't have fail2ban available in packages, and sshguard isn't extensible enough to support the multiline log format used by OpenSMTPD.
Here is a short script that looks for authentication failures in /var/log/maillog and adds the IPs into the PF table bot after too many failed logins.
Write the following content in an executable file, this could be /usr/local/bin/ban_smtpd but this doesn't really matter.
#!/bin/sh
TRIES=10
EXPIRE_DAYS=5
awk -v tries="$TRIES" '
/ smtp connected / {
ips[$6]=substr($9, 9)
}
/ smtp authentication / && /result=permfail/ {
seen[ips[$6]]++
}
END {
for(ip in seen) {
if(seen[ip] > tries) {
print ip
}
}
}' /var/log/maillog | xargs pfctl -T add -t bot
# if the file exists, remove IPs listed there
if [ -f /etc/mail/ignore.txt ]
then
cat /etc/mail/ignore.txt | xargs pfctl -T delete -t bot
fi
# remove IPs from the table after $EXPIRE_DAYS days
pfctl -t bot -T expire "$(( 60 * 60 * 24 * $EXPIRE_DAYS ))"
This parses the maillog file, which by default is rotated every day; you could adapt the script to your log rotation policy to match what you want. IPs failing with permfail are banned after a number of tries, configurable with $TRIES.
I added support for an ignore list, to avoid blocking yourself out, just add IP addresses in /etc/mail/ignore.txt.
Finally, banned IPs are unbanned after 5 days, you can change it using the variable EXPIRE_DAYS.
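The script expects a PF table named bot that is actually used to block traffic; in pf.conf this could be something along these lines (a sketch, adapt it to your own ruleset):

table <bot> persist
block in quick from <bot>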
Now, edit root's crontab, you want to run this script at least every hour, and get a log if it fails.
~ * * * * -sn /usr/local/bin/ban_smtpd
This cron job will run every hour at a random minute (defined each time crond restarts, so it stays consistent for a while). The periodicity may depend on the number of scans your email server receives, and also on the log size versus the CPU power.
This would be better to have an integrated banning system supporting multiple logfiles / daemons, such as fail2ban, but in the current state it's not possible. This script is simple, fast, extensible and does the job.
Qubes OS is like a meta system emphasizing security and privacy. You start on an almost empty XFCE interface on a system called dom0 (the administrative domain of the Xen hypervisor) with no network access: this is your desktop, from which you will start virtual machines whose windows integrate into the dom0 display, in order to do what you need to do with your computer.
Virtual Machines in Qubes OS are called qubes, most of the time, you want them to be using a template (Debian or Fedora for the official ones). If you install a program in the template, it will be available in a Qube using that template. When a Qube is set to only have a persistent /home directory, it's called an AppVM. In that case, any change done outside /home will be discarded upon reboot.
By default, the physical network devices are attached to a special Qube named sys-net. sys-net's purpose is to be disposable and to provide outside network access to the VM named sys-firewall, which does some filtering.
All your qubes using Internet will have to use sys-firewall as their network provider. A practical use case if you want to use a VPN but not globally is to create a sys-vpn Qube (pick the name you want), connect it to the Internet using sys-firewall, and now you can use sys-vpn as the network source for qubes that should use your VPN, it's really effective.
If you need to use a USB device like a microphone or webcam in a Qube, there is a systray app to handle USB pass-through from the special Qube sys-usb (which manages the physical USB controllers) to the Qube of your choice. This allows you to plug anything USB into the computer, and if you need to analyze it, you can start a disposable VM and check what's in there.
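The same operation can also be done from a dom0 terminal with the qvm-usb tool; a quick sketch, where the qube name and the device identifier are made up:

qvm-usb list
qvm-usb attach work sys-usb:2-5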
Efficient VM management due to the use of templates.
Efficient resource usage due to Xen (memory ballooning, para-virtualization).
Built for being secure.
Disposable VMs.
Builtin integration with Tor (using whonix).
Secure copy/paste between VMs.
Security (network is handled by a VM which gets the physical devices attached, hypervisor is not connected).
Practical approach: if you need to run a program you can't trust because you have to (this happens sometimes), you can do that in a disposable VM and not worry.
Easy update management + rollback ability in VMs.
Easy USB pass-through to VMs.
Easy file transfer between VMs.
Incredible VM windows integration into the host.
Qubes-rpc to set up things like split-ssh, where the ssh key is stored in an offline VM, with user approval for each use.
Modular networking: I can make a VPN in a VPN and assign it to some VMs but not all.
Easily extensible as all templates and VMs are managed by Salt Stack.
I tried Qubes OS in early 2022; it felt very complicated and not efficient, so I abandoned it after only a few hours. This year, I wanted to try again for a longer time, reading the documentation and trying to understand everything.
The more I used it, the more I got hooked by the idea, and how clean it was. I basically don't really want to use a different workflow anymore, that's why I'm currently implementing OpenKuBSD to have a similar experience on OpenBSD (even if I don't plan to have as many features as Qubes OS).
My workflow is the following, this doesn't mean it's the best one, but it fits my mindset and the way I want to separate things:
a Qube for web browsing with privacy plugins and Arkenfox user.js, this is what I use to browse websites in general
a Qube for communication: emails, XMPP and Matrix
a Qube for development which contains my projects source code
a Qube for each work client which contains their projects source code
an OpenBSD VM to do ports work (it's not as integrated as the other though)
a Qube without network for the KeePassXC databases (personal and per-client), SSH and GPG keys
a Qube using a VPN for some specific network tasks, it can be connected 24/7 without having all the programs going through the VPN (or without having to write complicated ip rules to use this route only in some case)
disposable VMs at hand to try things
I've configured my system to use split-SSH and split-GPG, so some qubes can request the use of my SSH key in the dom0 GUI, and I have to manually accept that one-time authorization on each use. It may appear annoying, but at least it gives me a visual indicator that the key is requested, from which VM, and it's not automatically approved (I only have to press Enter though).
I'm not afraid of mixing up client work with my personal projects since they live in different VMs. If I need to experiment, I can create a new Qube or use a disposable one, and this won't affect my working systems. I always feel dirty and unsafe when I need to run a package manager like npm to build a program on a regular workstation...
Sometimes I want to try a new program, but I have no idea if it's safe to install it manually or with "curl | sudo bash". In a disposable, I just don't care: everything is destroyed when I close its terminal, and it doesn't contain any information.
What I really like is that when I say I'm using Qubes OS, for real I'm using Fedora, OpenBSD and NixOS in VMs, not "just" Qubes OS.
However, Qubes OS is super bad for multimedia in general. I have a dual boot with a regular Linux if I want to watch videos or use 3D programs (like Stellarium or Blender).
This is a question that seems to pop quite often on the project forum. It's hard to reply because Qubes OS has an important learning curve, it's picky with regard to hardware compatibility and requirements, and the pros/cons weight can differ greatly depending on your usage.
When you want important data to be kept almost physically separated from running programs, it's useful.
When you need to run programs you don't trust, it's useful.
When you prefer to separate contexts to avoid mixing up files / clipboard, like sharing some personal data in your workplace Slack, this can be useful.
When you want to use your computer without having to think about security and privacy, it's really not for you.
When you want to play video games, use 3D programs, benefit from GPU hardware acceleration (for machine learning, video encoding/decoding), this won't work, although with a second GPU you could attach it to a VM, but it requires some time and dedication to get it working fine.
Qubes OS's security model relies on virtualization software (currently Xen); however, hypervisors are known to regularly have security issues. It can be debated whether virtualization is secure or not.
I think Qubes OS has a unique offer with its compartmentalization paradigm. However, the required mindset and discipline to use it efficiently make me warn that it's not for everyone, but more for a niche user base.
The security achieved here is relatively higher than in other systems if used correctly, but it really hinders the system usability for many common tasks. What I like most is that Qubes OS gives you the tools to easily solve practical problems like having to run proprietary and untrusted software.
In a previous article, I explained how to use Fossil version control system to version the files you may write in dom0 and sync them against a remote repository.
I figured out how to synchronize a git repository between an AppVM and dom0; from the AppVM it can then be synchronized remotely if you want. This can be done using the git feature named bundle, which bundles git artifacts into a single file.
In this setup, you will create a git repository (this could be a clone of a remote repository) in an AppVM called Dev, and you will clone it from there into dom0.
Then, you will learn how to send and receive changes between the AppVM repo and the one in dom0, using git bundle.
The first step is to have git installed in your AppVM and in dom0.
For the sake of simplicity for the guide, the path /tmp/repo/ refers to the git repository location in both dom0 and the AppVM, don't forget to adapt to your setup.
In the AppVM Dev, create a git repository using cd /tmp/ && git init repo. We need a first commit for the setup to work because we can't bundle commits if there is nothing. So, commit at least one file in that repo, if you have no idea, you can write a short README.md file explaining what this repository is for.
In dom0, use the following commands:
qvm-run -u user --pass-io Dev "cd /tmp/repo/ && git bundle create - master" > /tmp/git.bundle
cd /tmp/ && git clone -b master /tmp/git.bundle repo
Congratulations, you cloned the repository into dom0 using the bundle file. The path /tmp/git.bundle is important because it's automatically set as the URL for the remote named "origin". If you want to manage multiple git repositories this way, you should use a different name for this exchange file for each repo.
Back to the AppVM Dev, run the following command in the git repository, this will configure the bundle file to use for the remote dom0. Like previously, you can pick the name you prefer.
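Based on the remote name and bundle path used below, the command looks like this:

git remote add dom0 /tmp/dom0.bundle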
In the script push.sh, git bundle is used to write a bundle to stdout containing the artifacts from the last known AppVM commit up to the latest commit in the current repository, hence the origin/master..master range. This data is piped into the file /tmp/dom0.bundle in the AppVM, which was configured earlier as a remote for the repository.
Then, in the AppVM, the command git pull -r dom0 master is used to fetch the changes from the bundle and rebase the repository, exactly like you would do with a "real" remote over the network.
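The script isn't more than a couple of lines; here is a sketch of what push.sh can look like, run from the dom0 repository (paths and names follow the setup above, the exact script may differ):

#!/bin/sh
# push.sh: run from /tmp/repo/ in dom0
# 1) send the commits the AppVM doesn't have yet, as a bundle, into the AppVM
git bundle create - origin/master..master | \
    qvm-run -u user --pass-io Dev "cat > /tmp/dom0.bundle"
# 2) make the AppVM repository fetch and rebase from that bundle
qvm-run -u user --pass-io Dev "cd /tmp/repo/ && git pull -r dom0 master"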
In the script pull.sh, we run git bundle from within the AppVM Dev to generate on stdout a bundle from the last known state of dom0 up to the latest commit in the branch master, and pipe it into the dom0 file /tmp/git.bundle; remember that this file is the remote origin in dom0's clone.
After the bundle creation, a regular git pull -r is used to fetch the changes, and rebase the repository.
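Similarly, here is a sketch of what pull.sh can look like, also run from the dom0 repository (again, treat it as an illustration rather than the exact script):

#!/bin/sh
# pull.sh: run from /tmp/repo/ in dom0
# 1) ask the AppVM to bundle everything dom0 doesn't know yet, refreshing dom0's origin bundle
qvm-run -u user --pass-io Dev "cd /tmp/repo/ && git bundle create - dom0/master..master" > /tmp/git.bundle
# 2) fetch and rebase from the refreshed bundle (origin points to /tmp/git.bundle)
git pull -r origin master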
I find this setup really elegant: the safe qvm-run is used to exchange static data between dom0 and the AppVM, and no network is involved in the process. Now there is no reason to leave dom0 configuration files untracked in a version control system :)
Here is a summary of my progress for writing OpenKuBSD. So far, I've had a few blockers but I've been able to find solutions, more or less simple and nice, but overall I'm really excited about how the project is turning out.
As a quick introduction to OpenKuBSD in its current state, it's a program to install on top of OpenBSD, using mostly base system tools.
OpenBSD templates can be created and configured
Kubes (VMs) inherit an OpenBSD template for the disk, except for a dedicated persistent /home, any changes outside of /home will be reset on each boot
Kubes have a nice name like "www.kube" to connect to
NFS storage per Kube in /shared/ , this allows data to be shared with the host, which can then move files between Kubes via the shared directories
Xephyr based compartmentalization for GUI display. Each program run has its own Xephyr server.
Clipboard manipulation tool: a utility for copying the clipboard from one Xephyr to another one. This is a secure way to share the clipboard between Kubes without leakage.
On-demand start and polling for ssh connection, so you don't have to pre-start a Kube before running a program.
Executable /home/openkubsd/rc.local script at boot time to customize an environment at kube level rather than template level
Desktop entry integration: a script is available to create desktop entries to run program X on Kube Y, directly from the menu
The Xephyr trick was hard to figure out and implement correctly. Originally, I used ssh -Y, which worked fine and integrated very well with the desktop, however:
ssh -Y allows any window to access the X server, meaning any hacked VM could access all other running programs
ssh -X is secure, but super bad: slow, can't have a custom layout, and it crashes when trying to access X in some cases. (Fun fact: on Fedora, ForwardX11Trusted seems to be set to yes by default, so ssh -X does ssh -Y!)
Xephyr worked, but running a program in it didn't use the full display, so a window manager was required. But all the tiling window managers I used (to automatically use all the screen) couldn't resize when Xephyr was resized.... except stumpwm!
Stumpwm needed a custom configuration to quit when it has no more windows displayed: if you exit your programs, stumpwm quits, and then Xephyr stops.
I'm really getting satisfied with the current result. It's still far from being ready to ship or feature complete, but I think the foundations are quite cool.
Next steps:
tighten the network access for each Kube using PF (only NAT + host access + prevent spoofing)
allow a Kube to not have NAT (communication would be restricted to the host only for ssh access), this is the most "no network" implementation I can achieve.
allow a Kube to have a NAT from another Kube (to handle a Kube VPN for a specific list of Kubes)
figure how to make a Tor VPN Kube
allow to make disposable Kubes using the Tor VPN Kube network
Mid term steps:
support Alpine Linux (with features matching what OpenBSD Kubes have)
Long term steps:
rewrite the OpenKuBSD shell implementation into a daemon/client model, easier to install and more robust
define a configuration file format to declare all the infrastructure
The project is still in its beginning, but I made important progress over the last two weeks; I may reduce the pace a bit now to get everything stabilized. I started using OpenKuBSD on my own computer, and this helps a lot to refine the workflow, see which features matter, and which designs are wrong or correct.
I got an idea today (while taking a shower...) about _partially_ reusing Qubes OS design of using VMs to separate contexts and programs, but doing so on OpenBSD.
To make explanations CLEAR, I won't reimplement Qubes OS entirely on OpenBSD. Qubes OS is an interesting operating system with a very strong focus on security (from a very practical point of view), but it's in my opinion overkill for most users, and hence not always practical or usable.
In the meantime, I think the core design could be reused and made easy for users, as we are used to doing in OpenBSD.
I like the way Qubes OS allows to separate things and to easily run a program using a VPN without affecting the rest of the system. Using it requires a different mindset, one has to think about data silos, what do I need for which context?
However, I don't really like that Qubes OS has so many open issues, that its governance isn't clear, and that Xen seems to create a lot of trouble with regard to hardware compatibility.
I'm sure I can provide a similar but lighter experience, at the cost of "less" security. My threat model is more preventing data leak in case of a compromised system/software, than protecting my computer from a government secret agency.
After spending two months using "immutable" distributions (openSUSE MicroOS, Vanilla OS, Silverblue), where they all want you to use rootless containers (with podman) through distrobox, I hate that idea: it integrates poorly with the host, it's a nightmare to maintain, it can create issues due to different versions of programs altering your user data directory, and it doesn't bring much to the table except allowing users to install software without being root (and without having to reboot on those systems).
Here is a list of features that I think would be good to implement.
vmd based OpenBSD and Alpine templates (installation automated); thanks to the qcow2 format used for VM disks, it's possible to create a disk derived from another, a must for templates (see the sketch after this list)
disposable VMs, they are started from the template but using a derived disk of the template, destroyed after use
AppVM, a VM created with a persistent /home, and the rest of the system is inherited from the template using a derived qcow2 from template
VPN VMs that could be used by other VMs as their network source (Tor VPN template should be provided)
Simple configuration file describing your templates, your VMs, the packages installed (in templates), and which network source to use for which VM
Installing software in templates will create .desktop files in menus to easily start programs (over ssh -Y)
OpenBSD host should be USABLE (hardware acceleration, network handling, no perf issues)
OpenBSD host should be able to transfer files between VMs using ssh
Audio disabled by default on VMs, sndio could be allowed (by the user in a configuration file) to send the sound to the host
Should work with at least 4 GB of memory (I would like to make just 2 as a requirement if possible)
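As an illustration of the template/derived-disk idea mentioned in the list above, vmd's qcow2 support makes it a one-liner; file names here are placeholders:

# create the template disk, then a per-Kube disk derived from it
vmctl create -s 20G template-openbsd.qcow2
vmctl create -b template-openbsd.qcow2 www.qcow2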
Some kind of quick diagram explaining the relationships between the various components. This doesn't show the whole picture because it wouldn't be easy to represent (and I didn't have time to try doing so yet):
HVM support and passthrough; this could be done one day if vmd supports passthrough, but it creates too many problems, and only helps security for a niche use case I don't want to focus on
USB passthrough, too complex to implement, and too niche a use case
VM RPC, except for the host being able to copy files from one vm to the other using ssh
An OpenBSD distribution, OpenKuBSD must be installable on top of OpenBSD with the least friction possible, not as a separate system
Hi! It's that time of the year when I announce a new Old Computer Challenge :)
If you don't know about it, it's a weird challenge I've done twice in the past 3 years that consists in limiting my computer performance using old hardware, or limiting Internet access to 60 minutes a day.
I want this challenge to be accessible. The first one wasn't easy for many because it required using an old machine, but many readers didn't have a spare old computer (weird, right? :P). The second one, with the Internet time limitation, was hard to set up.
This one is a bit back to the roots: let's use a SLOW computer for 7 days. This will be achieved by various means with any hardware:
Limit your computer's CPU to use only 1 core. This can be set in the BIOS most of the time; on Linux you can use maxcpus=1 on the boot command line, and on OpenBSD you can boot the bsd.sp kernel for the duration of the challenge.
Limit your computer's memory to 512 MB of memory (no swap limit). This can be set on Linux using the boot command line mem=512MB. On OpenBSD, this can be achieved a bit similarly by using datasize-max=512M in login.conf for your user's login class.
Set your CPU frequency to the lowest minimum (which is pretty low on modern hardware!). On Linux, use the "powersave" frequency governor, in modern desktop environments the battery widget should offer an easy way to set the governor. On OpenBSD, run apm -L (while apmd service is running). On Windows, in the power settings, set the frequency to minimum.
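For reference, here is what the OpenBSD side can look like: a memory cap in /etc/login.conf (using a hypothetical "challenge" login class assigned to your user), plus the frequency command:

# /etc/login.conf: cap each process of the class at 512 MB of memory
challenge:\
        :datasize-max=512M:\
        :datasize-cur=512M:\
        :tc=default:

# lowest CPU frequency, while the apmd service is running
apm -L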
I got the idea when I remembered a few people reporting these tricks to do the first challenge, like in this report:
Since I'm using Qubes OS, I have always faced an issue: I need proper tracking of my system's configuration files, which can be done using Salt as I explained in a previous blog post. But what I really want is a version control system allowing me to synchronize changes to a remote repository (it's absurd to back up dom0 for every change I make to a salt file). So far, git has been too complicated to achieve that.
I gave fossil a try, a tool I like (I wrote about it too ;) ), and it was surprisingly easy to set up remote access leveraging Qubes' qvm-run.
In this blog post, you will learn how to set up a remote fossil repository, and how to use it from your dom0.
Now, we will clone this remote repository in our dom0; I'm personally fine with storing such files in the /root/ directory.
In the following example, the file my-repo.fossil was created on the machine 10.42.42.200 with the path /home/solene/devel/my-repo.fossil. I'm using the AppVM qubes-devel to connect to the remote host using SSH.
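The clone command looks roughly like this; the --ssh-command option is what lets fossil go through the AppVM, and the exact qvm-run flags may differ from what I actually used:

fossil clone --ssh-command "qvm-run --pass-io qubes-devel ssh" \
    ssh://solene@10.42.42.200//home/solene/devel/my-repo.fossil /root/my-repo.fossil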
This command clones a remote fossil repository by piping the SSH connection through the qubes-devel AppVM, allowing fossil to reach the remote host.
Cool fact with fossil's clone command, it keeps the proxy settings, so no further changes are required.
With a Split SSH setup, I'm asked for confirmation every time fossil synchronizes; by default fossil has the "autosync" mode enabled, so for every commit the database is synced with the remote repository.
4. Open the repository (reminder about fossil usage) §
As I said, fossil works with repository files. Now that you have cloned the repository into /root/my-repo.fossil, you could for instance open it in /srv/ to manage all your custom changes to the dom0 salt.
This can be achieved with the following command:
[root@dom0 ~#] cd /srv/
[root@dom0 ~#] fossil open --force /root/my-repo.fossil
The --force flag is needed because we need to open the repository in a non-empty directory.
Finally, I figured out a proper way to manage my dom0 files, and my whole host. I'm very happy with this easy and reliable setup, especially since I'm already a fossil user. I don't really enjoy git, so demonstrating that alternatives work fine always feels great.
If you want to use Git, I have a hunch that something could be done using git bundle, but this requires some investigation.
Download an ISO file to install OpenBSD, do it from an AppVM. You can use the command cksum -a sha256 install73.iso in the AppVM to generate a checksum to compare with the file SHA256 to be found in the OpenBSD mirror.
In the XFCE menu > Qubes Tools > Create Qubes VM GUI, choose a name, use the type "StandaloneVM (fully persistent)", use "none" as a template and check "Launch settings after creation".
In the "Basic" tab, configure the "system storage max size", that's the storage size OpenBSD will see at installation time. OpenBSD storage management is pretty limited, if you add more space later it will be complicated to grow partitions, so pick something large enough for your task.
Still in the "Basic" tab, you have all the network information, keep them later (you can open the Qube settings after the VM booted) to configure your OpenBSD.
In "Firewall rules" tab, you can set ... firewall rules that happens at Qubes OS level (in the sys-firewall VM).
In the "Devices" tab, you can expose some internal devices to the VM (this is useful for networking VMs).
In the "Advanced" tab, choose the memory to use and the number of CPU. In the "Virtualization" square, choose the mode "HVM" (it should already be selected). Finally, click on "Boot qube from CD-ROM" and pick the downloaded file by choosing the AppVM where it is stored and its path. The VM will directly boot when you validate.
You should get into your working OpenBSD VM with functional network.
Be careful: it doesn't have any specific integration with Qubes OS like the clipboard, USB passthrough, etc. However, it's an HVM system, so you could give it a USB controller or a dedicated GPU.
It's perfectly possible to run OpenBSD in Qubes OS with very decent performance; the setup is straightforward when you know where to look for the network information (and that the netmask is /8 and not /32 like on Linux).
As a recent Qubes OS user, but also a NixOS user, I want to be able to reproduce my system configuration instead of fiddling with files everywhere by hand and being clueless about what I changed since the installation time.
Fortunately, Qubes OS is managed internally with Salt Stack (it's similar to Ansible if you didn't know about Salt), so we can leverage salt to modify dom0 or Qubes templates/VMs.
In this example, I'll show how to write simple Salt state files, allowing you to create/modify system files, install packages, add repositories, etc.
Everything will happen in dom0, so you may want to install your favorite text editor in it. Note that I'm still trying to figure out a nice way to keep this configuration in a git repository and synchronize it somewhere, but I still can't find a solution I like.
The dom0 salt configuration can be found in /srv/salt/, this is where we will write:
a .top file that is used to associate state files to apply to which hosts
a state file that contains the actual instructions to run
Quick extra explanation: there is a directory /srv/pillar/, where you store things named "pillars", see them as metadata you can associate to remote hosts (AppVM / Templates in the Qubes OS case). We won't use pillars in this guide, but if you want to write more advanced configurations, you will surely need them.
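For illustration, a minimal top file applying a dom0.sls state to dom0 could look like this (the file names are assumptions matching what I use below), and the top file then has to be enabled:

base:
  'dom0':
    - dom0

qubesctl top.enable custom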
On my computer, I added the following piece of configuration to /srv/salt/dom0.sls to automatically assign the USB mouse to dom0 instead of being asked every time, this implements the instructions explained in the documentation link below:
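Roughly, it relies on Salt's file.line module; treat this as a sketch rather than the exact state:

/etc/qubes-rpc/policy/qubes.InputMouse:
  file.line:
    - content: "sys-usb dom0 allow"
    - mode: ensure
    - before: "^sys-usb dom0 ask"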
This snippet makes sure that the line sys-usb dom0 allow in the file /etc/qubes-rpc/policy/qubes.InputMouse is present above the line matching ^sys-usb dom0 ask. This is a more reproducible way of adding lines to a configuration file than editing it by hand.
Now, we need to apply the changes by running salt on dom0:
qubesctl --target dom0 state.apply
You will obtain a list of operations done by salt, with a diff for each task, it will be easy to know if something changed.
Note: state.apply used to be named state.highstate (for people who used salt a while ago, don't be confused, it's the same thing).
Using the same method as above, we will add a match for the fedora templates in the custom top file:
In /srv/salt/custom.top add:
'fedora-*':
  - globbing: true
  - fedora
This example is slightly different from the one for dom0, where we matched the host named "dom0". As I want my salt files to require the least maintenance possible, I won't write the template names verbatim; I'd rather use globbing (the name for simple wildcards like foo*) matching everything starting with fedora-. I currently have fedora-37 and fedora-38 on my computer, so they both match.
In order to apply, we can type qubesctl --all state.apply, this will work but it's slow as salt will look for changes in each VM / template (but we only added changes for fedora templates here, so nothing would change except for the fedora templates).
For a faster feedback loop, we can specify one or multiple targets, for me it would be qubesctl --targets fedora-37,fedora-38 state.apply, but it's really a matter of me being impatient.
An interesting setup with Qubes OS is to have your SSH key in a separate VM, and use Qubes OS internal RPC to use the SSH from another VM, with a manual confirmation on each use. However, this setup requires modifying files at multiple places, let's see how to manage everything with salt.
Reusing the file /srv/salt/custom.top created earlier, we add split_ssh_client.sls for the AppVMs that will use the split SSH setup. Note that you should not deploy this state to your vault: it would reference itself for SSH and would prevent the agent from starting (been there :P):
Create /srv/salt/split_ssh_client.sls: this will add two files to load the environment variables from /rw/config/rc.local and ~/.bashrc. It's actually easier to separate the bash snippets in separate files and use source, rather than using salt to insert the snippets directly in place where needed.
Now, run qubesctl --all state.apply to configure all your VMs, which are the template, dom0 and the matching AppVMs. If everything went well, you shouldn't have errors when running the command.
Create /srv/salt/default_www.sls with the following content, this will run xdg-settings to set the default browser:
xdg-settings set default-web-browser browser_vm.desktop:
  cmd.run:
    - runas: user
Now, run qubesctl --target fedora-38,dom0 state.apply.
From there, you MUST reboot the VMs that will be configured to use the WWW AppVM as the default browser: they need to have the new file browser_vm.desktop available for xdg-settings to succeed. Then run qubesctl --target vault,qubes-communication,qubes-devel state.apply.
Congratulations, you will now get an RPC prompt when an AppVM wants to open a file, asking you if you want to open it in your browsing AppVM.
This method is a powerful way to handle your hosts, and it's ready to use on Qubes OS. Unfortunately, I still need to figure out a nicer way to export the custom files written in /srv/salt/ and track the changes properly in a version control system.
Erratum: I found a solution to manage the files :-) stay tuned for the next article.
Recently, OpenBSD package manager received a huge speed boost when updating packages, but it's currently only working in -current due to an issue.
Fortunately, espie@ fixed it for the next release, I tried it and it's safe to fix yourself. It will be available in the 7.4 release, but for 7.3 users, here is how to apply the change.
There is a single file modified, just download the patch and apply it on /usr/libdata/perl5/OpenBSD/PackageRepository/Installed.pm with the command patch.
On -current, there is a single directory to look for packages, but on release for architectures amd64, aarch64, sparc64 and i386, there are two directories: the packages generated for the release, and the packages-stable directory receiving updates during the release lifetime.
The code wasn't working in the two-path case, preventing pkg_add from building a local package signature list to compare with the remote signature database shipped in the "quirks" package when looking for updates. The old behavior was still used, making pkg_add fetch the first dozen kilobytes of each installed package to compare signatures package by package, while now everything is stored in quirks.
If you have any issue, just revert the patch by adding -R to the patch command, and report the problem TO ME only.
This change is not officially supported for 7.3, so you are on your own if there is an issue, but it's not harmful to do. If you were to have an issue, reporting it to me would help solve it for 7.4 for everyone, but really, it just works, and is harmless in the worst case scenario.
I hope you will enjoy this change so you don't have to wait for 7.4. It makes OpenBSD's pkg_add feel a bit more modern, compared to some package managers that are now almost instant to install/update packages.
As a reed-alert user monitoring my servers, while emails work efficiently, I wanted more instant notifications for critical issues. I'm also a happy XMPP user, so I looked for a solution to send XMPP messages from the command line.
I will explain how to use the program go-sendxmpp to send messages from the command line; it's a newer drop-in replacement for the old Perl sendxmpp, which doesn't seem to work anymore.
Following the go-sendxmpp documentation, you need Go installed, then run go install salsa.debian.org/mdosch/go-sendxmpp@latest to compile the binary into ~/go/bin/go-sendxmpp. Because it's a static binary, you can move it to a directory in $PATH.
If I'm satisfied with it, I'll import go-sendxmpp into the OpenBSD ports tree to make it available as a package for everyone.
Now, your user should be ready to use go-sendxmpp. I recommend always enabling the -t flag to use TLS when connecting to the server, but you should really choose an XMPP server that is TLS-only anyway.
The program usage is simple: echo "this is a message for you" | go-sendxmpp dest@remote, and you are done. It's easy to integrate it in shell tasks.
Note that go-sendxmpp allows you to get the password from a command instead of storing it in plain text, which may be more convenient and secure in some scenarios.
Back to reed-alert, using go-sendxmpp is as easy as declaring a new alert type, especially using the email template:
(alert xmpp "echo -n '[%state%] Problem with %function% %date% %params%' | go-sendxmpp user@remote")
;; example of use
(=> xmpp ping :host "dataswamp.org" :desc "Ping to dataswamp.org")
XMPP is a very reliable communication protocol, I'm happy that I found go-sendxmpp, a modern, working and simple way to programmatically send me alerts using XMPP.
I'm still playing with Qubes OS; today I had to figure out how to install Nix, because I rely on it for some tasks. It turned out to be a rather difficult task for a Qubes beginner like me when not using a fully persistent VM.
Here is how to install Nix in an AppVM (only /home/ is persistent), and some links to the documentation about bind-dirs, an important component of Qubes OS that I didn't know about.
Behind this unfriendly name is a smart framework to customize templates or AppVMs. It allows running commands upon VM start, but also making directories explicitly persistent.
The configuration can be done at the local or template level, in our case, we want to create /nix and make it persistent in a single VM, so that when we install nix packages, they will stay after a reboot.
The implementation is rather simple: the persistent data lives under the /rw partition (ext4), whose subdirectories can be mounted elsewhere. So, if the script finds /rw/bind-dirs/nix, it will mount this directory on /nix in the root filesystem, making it persistent, without having to copy files at start and sync them on stop.
A limitation for this setup is that we need to install nix in single user mode, without the daemon. I suppose it should be possible to install Nix with the daemon, but it should be done at the template level as it requires adding users, groups and systemd units (service and socket).
In your AppVM, run the following commands as root:
mkdir -p /rw/config/qubes-bind-dirs.d/
echo "binds+=( '/nix' )" > /rw/config/qubes-bind-dirs.d/50_user.conf
install -d -o user -g user /rw/bind-dirs/nix
This creates an empty directory nix owned by the regular Qubes user named user, and we tell bind-dirs that this directory is persistent.
/!\ It's not clear if it's a bug or a documentation issue, but the creation of /rw/bind-dirs/nix wasn't obvious. Someone already filed a bug about this, and funnily enough, they reported it using the Nix installation as an example.
Now, reboot your VM; you should have a /nix directory owned by your user. This means it's persistent, and you can confirm it by looking at the output of mount | grep /nix, which should show a line.
Finally, install nix in single user mode, using the official method:
sh <(curl -L https://nixos.org/nix/install) --no-daemon
Now, we need to fix the bash code to load Nix into your environment. The installer modified ~/.bash_profile, but it isn't used when you start a terminal from dom0; it's only used in a full login shell with bash -l, which doesn't happen on Qubes OS.
Copy the last line of ~/.bash_profile in ~/.bashrc, this should look like that:
if [ -e /home/user/.nix-profile/etc/profile.d/nix.sh ]; then . /home/user/.nix-profile/etc/profile.d/nix.sh; fi # added by Nix installer
Now, open a new shell, you have a working Nix in your environment \o/
You can try it using nix-shell -p hello and run hello. If you reboot, the same command should work immediately without need to download packages again.
Installing Nix in a Qubes OS AppVM is really easy, but you need to know about some advanced features like bind-dirs. This is a powerful feature that will allow me to do a lot of fun stuff with Qubes now, and using Nix is one of them!
If you plan to use Nix like this in multiple AppVM, you may want to set up a local substituter cache in a dedicated VM, this will make your bandwidth usage a lot more efficient.
If you use Qubes OS, you already know that software installed in templates is available in the XFCE menu for each VM, and can be customized from the Qubes Settings panel.
However, if you want to install software locally, either by compiling it or using a tarball, you won't have an application entry in the Qubes Settings, and running this program from dom0 will require opening an extra terminal in the VM. But we can actually add the icon/shortcut by creating a file in the right place.
In this example, I'll explain how I made a menu entry for the program DeltaChat, "installed" by downloading an archive containing the binary.
In the VM (with a non-volatile /home) create the file /home/user/.local/share/applications/deltachat.desktop, or in a TemplateVM (if you need to provide this to multiple VMs) in the path /usr/share/applications/deltachat.desktop:
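Here is what the file can contain; the Exec path is just an example and must point to wherever you extracted the archive:

[Desktop Entry]
Type=Application
Name=DeltaChat
Comment=DeltaChat messenger (local install)
Exec=/home/user/Apps/deltachat/deltachat-desktop
Terminal=false
Categories=Network;InstantMessaging;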
This will create a desktop entry for the program named DeltaChat, with the path to the executable and a few other details. You can add an Icon= attribute with a link to an image file; I didn't have one for DeltaChat.
Knowing how to create desktop entries is useful, not only on Qubes OS but for general Linux/BSD use. Being able to install custom programs with a launcher in Qubes dom0 is better than starting yet another terminal to run a GUI program from there.
These days, I've been playing a lot with Qubes OS. It has an interesting concept of deploying VMs (using Xen) in a well integrated and transparent manner in order to strictly separate the different tasks you need.
By default, you get environments such as Personal, Work and an offline Vault, plus special VMs to handle the USB proxy, network and firewall. What is cool here is that when you run a program from a VM, only its window is displayed in your window manager (XFCE), and not the whole VM desktop.
The cool factor with this project is their take on real-world privacy and security needs, allowing users to run what they need to run (proprietary software, random binaries) while still protecting them. Its goal is totally different from OpenBSD's and Tails'. Did I say you can also route a VM's network through Tor out of the box? =D
If you want to learn more, you can visit Qubes OS website (or ask if you want me to write about it):
If you know me, you should know I'm really serious about backups. It is incredibly important to have backups.
Qubes OS has a backup tool that can be used out of the box; it just dumps the VMs' storage into an encrypted file. It's easy, but not efficient or practical enough for me.
If you want to learn more about the format used by Qubes OS (and how to open them outside of Qubes OS), they wrote some documentation:
Now, let's see how to store the backups in Restic or Borg in order to have proper backups.
/!\ While both programs support deduplication, it doesn't work well in this case because the stored data is already compressed and encrypted, which gives it very high entropy (it's hard to find duplicated patterns).
Qubes OS's backup tool offers compression and encryption out of the box, and when it comes to the storage location, we can actually provide a command that receives the backup on its stdin, and guess what, both restic and borg support receiving data on their standard input!
I'll demonstrate how to proceed both with restic and borg with a simple example, I recommend to build your own solution on top of it the way you need.
As we are running Qubes OS, I prefer to create a dedicated backup VM using the Fedora template, it will contain the passphrase to the repository and an SSH key for remote backup.
You need to install restic/borg in the template to make it available in that VM.
If you don't know how to install software in a template, it's well documented:
In order to keep the backup command simple in the backup tool configuration (it's a single input line), without sacrificing features like pruning, we will write a script on the backup VM doing everything we need.
While I'm using a remote repository in the example, nothing prevents you from using a local/external drive for your backups!
The script usage will be simple enough for most tasks:
./script init to create the repository
./script backup to create the backup
./script list to display snapshots
./script restore $snapshotID to restore a backup, the output file will always be named stdin
Write a script in /home/user/restic.sh in the backup VM, it will allow simple customization of the backup process.
#!/bin/sh
export RESTIC_PASSWORD=mysecretpass
# double // is important to make the path absolute
export RESTIC_REPOSITORY=sftp://solene@10.42.42.150://var/backups/restic_qubes
KEEP_HOURLY=1
KEEP_DAYS=5
KEEP_WEEKS=1
KEEP_MONTHS=1
KEEP_YEARS=0
case "$1" in
init)
restic init
;;
list)
restic snapshots
;;
restore)
restic restore --target . $2
;;
backup)
cat | restic backup --stdin
restic forget \
--keep-hourly $KEEP_HOURLY \
--keep-daily $KEEP_DAYS \
--keep-weekly $KEEP_WEEKS \
--keep-monthly $KEEP_MONTHS \
--keep-yearly $KEEP_YEARS \
--prune
;;
esac
Obviously, you have to change the password; you can even store it in another file and use the corresponding restic option to load the passphrase from a file (or from a command). Since the Qubes OS backup tool forces you to encrypt the backup (which will be stored in restic), encrypting the restic repository won't add any more security, but it can add privacy by hiding what's in the repo.
/!\ You need to run the script with the parameter "init" the first time, in order to create the repository:
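Following the usage summary above, that means running, from the backup VM:

./restic.sh init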
Write a script in /home/user/borg.sh in the backup VM, it will allow simple customisation of the backup process.
#!/bin/sh
export BORG_PASSPHRASE=mysecretpass
export BORG_REPO=ssh://solene@10.42.42.150/var/solene/borg_qubes
KEEP_HOURLY=1
KEEP_DAYS=5
KEEP_WEEKS=1
KEEP_MONTHS=1
KEEP_YEARS=0
case "$1" in
init)
borg init --encryption=repokey
;;
list)
borg list
;;
restore)
borg extract ::$2
;;
backup)
cat | borg create ::{now} -
borg prune \
--keep-hourly $KEEP_HOURLY \
--keep-daily $KEEP_DAYS \
--keep-weekly $KEEP_WEEKS \
--keep-monthly $KEEP_MONTHS \
--keep-yearly $KEEP_YEARS
;;
esac
Same explanation as with restic: you can save the password elsewhere or get it from a command, but Qubes backup already encrypts the data, so the repository encryption will mostly only add privacy.
/!\ You need to run the script with the parameter "init" the first time, in order to create the repository:
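As with restic, that means running, from the backup VM:

./borg.sh init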
While it's nice to have backups, it's important to know how to use them. The setup doesn't add much complexity, and the helper script will ease your life.
On the backup VM, run ./borg.sh list (or the restic version) to display available snapshots in the repository, then use ./borg.sh restore $snap with the second parameter being a snapshot identifier listed in the earlier command.
You will obtain a file named stdin, this is the file to use in Qubes OS restore tool.
If you don't always back up all the VMs and you keep the retention policy from the example above, you may lose data.
For example, if you have KEEP_HOURLY=1, create a backup of all your VMs, and just after that back up a single specific VM, you will lose the previous full backup due to the retention policy.
In some cases, it may be better to not have any retention policy, or a simply time-based one (keep snapshots newer than n days).
Using this configuration, you get all the features of an industry-standard backup solution such as integrity checks, retention policies and remote encrypted storage.
In case of an issue with the backup command, Qubes backup will display a popup message with the command output, this helps a lot debugging problems.
An easy way to check if the script works by hand is to run it from the backup VM:
echo test | ./restic.sh backup
This will create a new backup with the data "test" (and prune older backups, so take care!); if something doesn't work, this is a simple way to trigger a new backup to debug your issue.
Hi! Back on my OpenBSD desktop, I miss being able to use my Bluetooth headphones (especially the Shokz ones that let me listen to music without anything on my ears).
Unfortunately, OpenBSD doesn't have a Bluetooth stack, but I have a smartphone (and a few other computers), so why not stream my desktop sound to another device that does have Bluetooth? Let's see what we can do!
I'll often refer to the "monitor" input source, which is the name of an input that provides "what you hear" from your computer.
While it would be easy to just allow a remote device to play music files, I want to stream the computer's monitor input, so it could be literally anything, and not just music files.
This method can be used on any Linux distribution, and certainly on other BSDs, but I will only cover OpenBSD.
One simple setup is to use icecast, the program used by most web radios, and ices, a companion program to icecast, in order to stream your monitor input to the network.
The pros:
it works with anything that can read OGG from the network (any serious audio client or web browser can do this)
it's easy to set up
you can have multiple clients at once
secure (icecast is in a chroot, and other components are sending data or playing music)
The cons:
there is a ~10s delay, which prevents you from watching a video on your computer while listening to the audio from another device (you could still set a 10s offset, but it's not constant)
reencoding happens, which can slightly reduce the sound quality (if you are able to tell the difference)
The default sound server in OpenBSD, namely sndiod, supports network streaming!
Too bad: to get Bluetooth as an output, you would have to run sndiod on Linux on the receiving end (which is perfectly fine), but you can't use Bluetooth with sndiod, even on Linux.
So, no sndiod. Between two OpenBSD, or OpenBSD and Linux, it works perfectly well without latency, and it's a super simple setup, but as Bluetooth can't be used, I won't cover this setup.
This sound server is available as a port on OpenBSD, and has two streaming modes: native-protocol-tcp and RTP. The former exchanges pulseaudio's internal protocol from one server to another, which isn't ideal and is prone to problems over a bad network; the latter is more efficient and resilient.
However, the RTP sender doesn't work on OpenBSD, and I have no interest in finding out why (the bug doesn't seem to be straightforward), but the native protocol works just fine.
The pros:
almost no latency (it may depend on the network and the remote hardware)
Snapcast is an amazing piece of software that you can use to broadcast your audio toward multiple clients (using snapcast or a web page), with the twist that the audio is synchronized on each client, allowing a multi-room setup at no cost.
Unfortunately, I've not been able to build it on OpenBSD :(
The pros:
multi room setup with synchronized clients
compatible with almost any client able to display an HTML5 page
On the local OpenBSD, you need to install pulseaudio and ffmpeg packages.
You also need to set sndiod flags, using rcctl set sndiod flags -s default -m play,mon -s mon, this will allow you to use the monitor input through the device snd/0.mon.
Now, when you want to stream your monitor to a remote pulseaudio, run this command in your terminal:
This will load the module accepting network connections. The auth-anonymous option is there to simplify connecting to the server; otherwise you would have to share the pulseaudio cookie between computers, which I recommend doing, but on a smartphone this can be really cumbersome, and it's out of scope here.
The other option is pretty obvious, just give a list of IPs you want to allow to connect to the server.
If you want the changes to be persistent, edit /etc/pulse/default.pa to add the line load-module module-native-protocol-tcp auth-anonymous=1 auth-ip-acl=192.168.1.0/24.
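For a one-off test on the machine accepting the stream, the same module can be loaded at runtime with pactl; the IP range is an example, use your LAN's:

pactl load-module module-native-protocol-tcp auth-anonymous=1 auth-ip-acl=192.168.1.0/24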
On Android, you can install pulseaudio using Termux (available on f-droid), using the commands:
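The commands boil down to installing the package and starting the daemon with the network module loaded; this is a sketch assuming Termux's pkg tool and standard pulseaudio flags:

pkg install pulseaudio
pulseaudio --start --exit-idle-time=-1 --load="module-native-protocol-tcp auth-anonymous=1"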
There is a project named PulseDroid; the original project has been unmaintained for 13 years, but someone took it over quite recently. Unfortunately no APK is provided, and I'm still trying to build it to give it a try; it should provide an easier way to run pulseaudio on Android.
Using icecast, you will have to set up an icecast server, and locally use the ices2 client to broadcast your monitor input. Then, any client can play the stream URL.
in the <authentication> node, change all the passwords. The only one you will need is the source password used to send the audio to icecast, but set all other passwords to something random.
in the <hostname> node, set the IP or hostname of the computer with icecast.
add a <bind-address> node to <listen-socket> following the example given for 127.0.0.1, but use the IP of the icecast server; this will allow others to connect.
Keep in mind this is the bare minimum for a working setup, if you want to open it to the wide Internet, I'd strongly recommend reading icecast documentation before. Using a VPN may be wiser if it's only for private use.
Then, to configure ices2, copy the file /usr/local/share/examples/ices2/ices-sndio.xml somewhere you feel comfortable for storing user configuration files. The example file is an almost working template to send sndio sources to icecast.
Edit the file, under the <instance> node:
modify <hostname> with the hostname used in icecast.
modify <password> with the source password defined earlier.
modify <mount> to something ending in .ogg of your liking, this will be the filename in the URL (can be /stream.ogg if you are out of ideas).
set <yp> to 0, otherwise the stream will appear on the icecast status page (you may want to have it displayed though).
Now, search for <channels> and set it to 2 because we want to broadcast stereo sound, and set <downmix> to 0 because we don't need to merge both channels into a mono output. (If those values aren't in sync, you will have funny results =D)
When you want to broadcast, run the command:
env AUDIORECDEVICE=snd/0.mon ices2 ices-sndio.xml
With any device, open the url http://<hostname>:8000/file.ogg with file.ogg being what you've put in <mount> earlier. And voilà, you have a working local audio streaming!
With these two setups, you have options for occasionally streaming your audio to another device, which may have Bluetooth support or something else making it interesting enough to go through the setup.
I'm personally happy to be able to use bluetooth headphones through my smartphone to listen to my OpenBSD desktop sound.
If you want to directly attach Bluetooth headphones to your OpenBSD machine, you can buy a USB dongle that pairs with the headphones and appears as a sound card to OpenBSD.
While I like Alpine because it's lean and minimal, I have always struggled to install it for a desktop computer because of the lack of "meta" packages that install everything.
However, there now is a nice command that just picks your desktop environment of choice and sets everything up for you.
This article is mostly a cheat sheet to help me remember how to install Alpine using a desktop environment, NetworkManager, man pages etc... Because Alpine is still a minimalist distribution and you need to install everything you think is useful.
By default, the installer will ask you to set up networking, but if you want NetworkManager, you need to install it, enable it and disable the other services.
As I prefer to avoid duplication of documentation, please refer to the relevant Wiki page.
By default, Alpine Linux sticks to Long Term Support (LTS) kernels, which is fine, but for newer hardware, you may want to run the latest kernel available.
Fortunately, the Alpine community repository provides the linux-edge package for the latest version.
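Assuming the community repository is enabled in /etc/apk/repositories, it's a single command:

apk add linux-edge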
If you want to keep all the installed packages in cache (so you could keep them for reinstalling, or share on your network), it's super easy.
Run setup-apkcache and choose a location (or even pass it as a parameter), and you're done. It's very handy for me: when I need to use Alpine in a VM, I just hook it to my LAN cache and I don't have to download packages again and again.
Alpine Linux is becoming a serious, viable desktop Linux distribution, not just for containers or servers. It's still very minimalist and doesn't hold your hand, so while it's not for everyone, it's becoming accessible to enthusiasts and not just hardcore users.
I suppose it's a nice choice for people who enjoy minimalism and don't like SystemD.
Calendar and contact syncing is something I pushed away for too long, but when I lost data on my phone, and my contacts with it, setting up a local CalDAV/CardDAV server was the first thing I did.
Today, I'd like to show you how to set up the radicale server to have your own.
On OpenBSD 7.3, the latest version of radicale is radicale 2, available as a package with all the service files required for a quick and efficient setup.
You can install radicale with the following command:
# pkg_add radicale
After installation, you will have to edit the file /etc/radicale/config in order to make a few changes. The syntax looks like INI files, with sections between brackets and then key/values on separate lines.
For my setup, I made my radicale server listen on the IP 10.42.42.42 and port 5232, and I chose to use htpasswd files with bcrypt-hashed passwords to manage users. This was accomplished with the following piece of configuration:
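Here is the kind of configuration this translates to; treat it as a sketch, the option names come from radicale 2's documentation and the paths may need adapting to the OpenBSD package defaults:

[server]
hosts = 10.42.42.42:5232

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt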
After saving the changes, you need to generate the file /etc/radicale/users to add users and their passwords; this is done using the command htpasswd.
In order to add the user solene to the file, use the following command:
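Assuming OpenBSD's base htpasswd(1), which hashes passwords with bcrypt, the command looks like this (it prompts for the password):

# htpasswd /etc/radicale/users solene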
Now you should be able to reach radicale at the address it's listening on, in my example http://10.42.42.42:5232/, and use your credentials to log in.
Then, just click on the link "Create new addressbook or calendar", and complete the form.
Back on the index, you will see each item managed by radicale and the URL to access it. When you configure your devices to use CalDAV and CardDAV, you will need the credentials and the URL.
Radicale is very lightweight and super easy to configure, and I finally have proper calendar synchronization on my computers and smartphone, which turned out to be very practical.
If you want to setup HTTPS for radicale, you can either use a certificate file and configure radicale to use it, or use a reverse http proxy such as nginx and handle the certificate there.
As the owner of a Steam Deck (a handheld PC gaming device), I wanted to explore alternatives to the pre-installed SteamOS you can find on it. Fortunately, this machine is a plain PC with UEFI firmware, allowing you to boot whatever you want.
It's like a Nintendo Switch, but much bigger. The "deck" is a great name because it's really what it looks like, with two touchpads and four extra buttons behind the deck. By default, it's running SteamOS, an ArchLinux based system working in two modes:
Steam gamepadUI mode, with a program named gamescope as a Wayland compositor; everything is well integrated like you would expect from a gaming device. Special buttons trigger menus, and there is integration with monitoring tools to view FPS, watts consumption, TDP limits, screen refresh rate...
Desktop mode, using KDE Plasma, and it acts like a regular computer
Unfortunately for me, I don't like ArchLinux and I wanted to understand how the different modes were working, because on Steam, you just have a button menu to switch from Gaming to Desktop, and a desktop icon to switch from desktop to gaming.
Here is a picture I took to compare a Nintendo Switch and a Steam Deck, it's really beefy and huge, but while its weight is higher than the Switch, I prefer how it holds and the buttons' placement.
This project's purpose is to reimplement SteamOS as closely as it can, but only using open source components. They also target alternative devices if you want to have a Steam Deck experience.
My experience with it wasn't great: once installation was done, I had to log into Steam, and at every reboot it asked me to log in again. As the project mostly provides the same ArchLinux-based experience, I wasn't really interested in looking into it further.
This project's purpose is to give Steam Deck users (or similar device owners) an OS that fits the device. It currently offers a similar experience, but I've read plans to offer alternative UIs. On top of that, they integrated a web server to manage emulation ROMs, or Epic Games and GOG installers, instead of having to fiddle with Lutris, minigalaxy or Heroic games launcher to install games from these stores.
The project also has many side-projects such as gamescope-session, chimera or forks with custom patches.
This project is truly amazing, it's currently what I'm running on my own devices. Let's use NixOS with some extra patches to run your Deck, and it's just working fine!
Jovian-NixOS (in reference to Jupiter, the Deck's codename) is a set of configurations to use with NixOS to adapt it to the Steam Deck, or any similar handheld device. The installation isn't as smooth as the two others above because you have to install NixOS from the console and write a bit of configuration, but the result is great. It's not for everyone though.
Obviously, my experience is very good. I'm in full control of the system, thanks to NixOS declarative approach, no extra services running until I want to, it even makes a great Nix remote builder...
3.4. Plain Linux installed like a regular computer §
The first attempt was to install openSUSE on the Deck like I would do on any computer. The experience was correct, installation went well, and I got in GNOME without issues.
However, some things you must know about the Deck:
patches are required on the Linux kernel to have proper fan control; they work out of the box now, but the fan curve isn't ideal, e.g. the fan never stops even at low temperature
in Desktop mode, the controller is seen as a poor mouse with triggers to click; the touchscreen works, but Linux isn't really ready to be used like a tablet, so you need Steam in Big Picture mode to make the controller useful
many patches here and there (Mesa, mangohud, gamescope) are useful to improve the experience
In order to switch between Desktop and Gaming mode, I found a weird setup that was working for me:
gaming mode is started by automatically logging in my user on tty1, with the user's .bashrc checking whether it runs on tty1 and then starting Steam inside gamescope (sketched after this list)
desktop mode is started by setting automatic login in GDM
a script started from a .desktop file toggles between gaming and desktop mode, either by killing gamescope and starting GDM, or by stopping GDM and starting tty1. The .desktop file was added to Steam, so from Steam or GNOME I was able to switch to the other. It worked surprisingly well.
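Roughly, the tty1 check in .bashrc looked like this; the gamescope and Steam flags shown here are indicative, not necessarily the exact ones I used:

# start the gaming session only when logging in on tty1
if [ "$(tty)" = "/dev/tty1" ]; then
    exec gamescope -e -- steam -gamepadui
fi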
It turned out that the "Switch to desktop mode" button in Steam's GamepadUI with gamescope uses a dbus signal to switch to desktop; the distributions above handle it correctly.
Although it was mostly working, my main issues were:
No fan curve control, because it's not easy to find the kernel patches and then run the utility to control the fans; my Deck was constantly making fan noise, and it was irritating
I had no idea how to allow firmware updates (the OSes above support that)
Integration with mangohud was bad, and performance control in Gaming mode wasn't working
Sometimes, XWayland would crash or stay stuck when starting a game from Gaming mode
But despite these issues, performance was perfectly fine, as well as battery life. Still, usability should be a priority for such a device, and it didn't work very well here.
If you already enjoy your Steam Deck the way it is, I recommend sticking with SteamOS. It does the job fine, allows you to install programs from Flatpak, and you can also root it if you really need to install system packages.
If you want to do more on your Deck (use it as a server maybe? Who knows), you may find it interesting to get everything under your control.
I'm using syncthing on my Steam Deck and other devices to synchronize GOG/Epic save games; Steam Cloud is neat, but with one minute per game to configure syncthing, you get something similar.
Nintendo Switch emulation works fine on Steam Deck, more about that soon :)
A small selection of haikus that I published on Mastodon; that said, they are not always well crafted, but they are my first ones, so let's hope experience helps me do better later on.
A blackbird hunting
A blue sky tinted with white
The thyme in blossom
Snow-covered plateaus
Warm and sheltered inside -
A violent storm
As you may have understood by now, I like efficiency on my systems, especially when it comes to network usage due to my poor slow ADSL internet connection.
Flatpak is nice, I like it for many reasons, and what's cool is that it can download only updated files instead of the whole package again.
Unfortunately, when you start using more and more packages that are updated daily, and which require subsystems like NVIDIA drivers, Mesa, etc., this adds up to quite a lot of daily downloads; multiply that by a few computers and you get a lot of network traffic.
But don't worry, you can cache it on your LAN to download updates only once.
As usual for this kind of job, we will use Nginx on a local server on the network, and configure it to act as a reverse proxy to the flatpak repositories.
This requires modifying the URL of each flatpak repository on the machines, it's a one time operation.
Here is the configuration you need on your Nginx to proxy Flathub:
map $status $cache_header {
    200     "public";
    302     "public";
    default "no-cache";
}

server {
    listen 0.0.0.0:8080; # you may want to listen on port 80, or add TLS
    server_name my-cache.local; # replace this with your hostname, or system IP

    # flathub cache
    set $flathub_cache https://dl.flathub.org;

    location /flathub/ {
        rewrite ^/flathub/(.*) /$1 break;
        proxy_cache flathub;
        proxy_cache_key "$request_filename";
        add_header Cache-Control $cache_header always;
        proxy_cache_valid 200 302 300d;
        expires max;
        proxy_pass $flathub_cache;
    }
}

proxy_cache_path /var/cache/nginx/flathub/cache levels=1:2
    keys_zone=flathub:5m
    max_size=20g
    inactive=60d
    use_temp_path=off;
This will cause nginx to proxy requests to the flathub server, but keep files in a 20 GB cache.
You will certainly need to create the /var/cache/nginx/flathub directory, and make sure it has the correct ownership for your system configuration.
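On each client, the flathub remote then has to be pointed at the cache; the hostname and port must match the nginx configuration above, and I'm assuming here the upstream repository path is /repo/:

flatpak remote-modify flathub --url=http://my-cache.local:8080/flathub/repo/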
If you want to support another flatpak repository (like Fedora's), you need to create a new location, and new cache in your nginx config.
Please note that if you add the flathub repo, you must first use the official URL to get the correct configuration, and then you can change its URL with the command shown above.
If you use OpenBSD and administrate machines, you may be aware that packages can install dedicated users and groups, and that if you remove such a package, the users/groups won't be deleted; instead, pkg_delete displays instructions for deleting them.
In order to keep my OpenBSD systems clean, I wrote a script looking for users and groups installed by packages (they start with the character _), checking whether the related package is still installed; if not, it outputs instructions that can be run in a shell to clean up your system.
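Here is a sketch of the idea (not necessarily the exact script): it assumes package users/groups are declared with @newuser/@newgroup annotations in /var/db/pkg/*/+CONTENTS and have IDs of 500 or more, and it only prints commands, so review the output before running anything.

#!/bin/sh
# list "_" users with uid >= 500 that no installed package declares
awk -F: '$1 ~ /^_/ && $3 >= 500 { print $1 }' /etc/passwd | while read -r user; do
    grep -q "^@newuser ${user}:" /var/db/pkg/*/+CONTENTS 2>/dev/null || echo "userdel ${user}"
done
# same for groups, using @newgroup annotations
awk -F: '$1 ~ /^_/ && $3 >= 500 { print $1 }' /etc/group | while read -r group; do
    grep -q "^@newgroup ${group}:" /var/db/pkg/*/+CONTENTS 2>/dev/null || echo "groupdel ${group}"
done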
Write the content of the script above in a file, mark it executable, and run it from the shell; it should display a list of userdel and groupdel commands for all the extra users and groups.
Smokeping is a Perl daemon that regularly runs a command (fping, some DNS check, etc.) multiple times to check the availability of remote hosts, but also the quality of the link, including the standard deviation of the response time.
It becomes very easy to know if a remote host is flaky, or if the link where Smokeping runs isn't stable any more when you see that all the remote hosts have connectivity issues.
Let me explain how to install and configure it on OpenBSD 7.2 and 7.3.
Smokeping comes in two parts, shipped in the same package: the daemon component that runs 24/7 to gather metrics, and the fcgi component used to render the website for visualizing data.
First step is to install the smokeping package.
# pkg_add smokeping
The package will also install the file /usr/local/share/doc/pkg-readmes/smokeping giving explanations for the setup. It contains a lot of instructions, from the setup to advanced configuration, but without many explanations if you are new to smokeping.
Once you installed the package, the first step is to configure smokeping by editing the file /etc/smokeping/config as root.
Under the *** General *** section, you can change the variables owner and contact. This information is displayed on Smokeping's HTML interface, so if you are in a company and colleagues look at the graphs, they can find out whom to reach if there is an issue with smokeping or with the links. This is not useful if you use it for yourself.
Under the *** Alerts *** section, you can configure email notifications by setting to to your email address and from to a custom sender address for smokeping's emails.
Then, under *** Targets *** section, you can configure each host to monitor. The syntax is unusual though.
lines starting with + SomeSingleWord will create a category with attributes and subcategories. The attribute title is used to give it a name when showing the category, and menu is the name displayed in the sidebar on the website.
lines starting with ++ SomeSingleWord will create a subcategory for a host. The attributes title and menu work the same as at the first level, and host defines the remote host to monitor; it can be a hostname or an IP address.
That's for the simplest configuration file. It's possible to add new probes such as "SSH Ping", DNS, Telnet or LDAP...
Let me show a simple example of targets configuration I'm using:
*** Targets ***
probe = FPing
menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing
+ Remote
menu = Remote
title = Remote hosts
++ Persopw
menu = perso.pw
title = My server perso.pw
host = perso.pw
++ openportspl
menu = openports.pl
title = openports.pl VM at openbsd.amsterdam
host = openports.pl
++ grifonfr
menu = grifon.fr
title = grifon.fr VPN endpoint
host = 89.234.186.37
+ LAN
menu = Lan
title = Lan network at home
++ solaredge
menu = solaredge
title = solaredge
host = 10.42.42.246
++ modem
menu = ispmodem
title = ispmodem
host = 192.168.1.254
Now that Smokeping is configured, you need to enable and start the service.
# rcctl enable smokeping
# rcctl start smokeping
If everything is alright, rcctl check smokeping shouldn't fail; if it does, you can read /var/log/messages to find out why. Usually, the culprit is a + line containing an unauthorized character or a space.
I recommend always adding a public host from a big platform known to be reliably up, to serve as a comparison point against all your other hosts.
Now that the daemon is running, you certainly want to view the graphs produced by Smokeping. Reusing the example from the pkg-readme file, you can configure the httpd web server with this:
server "smokeping.example.org" {
listen on * port 80
location "/smokeping/smokeping.cgi*" {
fastcgi socket "/run/smokeping.sock"
root "/"
}
}
Your service will be available at the address http://smokeping.example.org/smokeping/smokeping.cgi.
For this to work, we need to run a separate FCGI server, fortunately packaged as an OpenBSD service.
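If I remember correctly, this is just another rc service shipped with the package; the service name below is an assumption, double-check it in the pkg-readme:
# rcctl enable smokeping_fcgi
# rcctl start smokeping_fcgi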
Note that there is a way to pre-render all the HTML interface with a cron job, but I don't recommend it as it will waste a lot of CPU, unless you have many users viewing the interface and they don't need interactive zoom on the graphs.
Smokeping is very effective because of the way it renders data: you can easily spot issues in your network that a simple ping or response time check wouldn't catch.
Please note it's better to have two Smokeping setups at different places so each can monitor the other's link quality. Otherwise, if a remote host appears flaky, you can't be entirely sure whether the flakiness comes from the Internet access of the Smokeping host, from the remote host itself, or from a peering issue.
Here is the 10-day graph for a device I have on my LAN but connected to the network using power line networking.
Don't forget to read /usr/local/share/doc/pkg-readmes/smokeping and the official documentation if you want a more complicated setup.
It's rare, but this is a rant.
Needing a training course, and to complete the online procedures on a CPF account (Compte Formation Professionnelle), I have to get an "identité numérique +" (digital identity +).
In principle, this is fine: it's a way to create an account by validating the person's identity with an ID document. So far, it's normal and rather well thought out.
Having freed my Android phone from Google thanks to LineageOS, I chose not to install Google Play in order to be 100% Google-free, and I install my applications from the F-droid repository, which covers all my needs.
In my situation, there is a solution to install the (thankfully very rare) applications required by certain services: using "Aurora Store" on my phone to download an APK from Google Play (the application installation file) and install it. No problem, I was able to install the La Poste application.
The problem is that when I launch it, I get this wonderful message: "Erreur, vous devez installer l'application depuis Google Play" (Error, you must install the application from Google Play), and at that point I can do absolutely nothing but quit the application.
And there I am, stuck: the State forces me to use Google to use its own services 🙄. My options are the following:
install Google services on my phone, which would really pain me as it goes against my values
install the application in an Android emulator with Google services; it's not practical at all, but it solves the problem
give up the money in my training account (€500 per year)
raise the issue publicly, hoping it changes something, at least so the application can be installed without Google services
Why would you do that in the first place? Well, this would allow me to take time off my job, and spend it either writing on the blog, or by contributing to open source projects, mainly OpenBSD or a bit of nixpkgs.
I've been publishing on the blog for almost 7 years now, and for the most recent years I've been writing a lot here, and I still enjoy doing so! However, I have less free time now, and I'd prefer to continue writing here instead of working at my job full time. I've occasionally received donations for my blog work, and I appreciate one-shot gifts :-), but they won't help me as much as regular monthly income that I can count on and plan around with my job.
I chose Patreon because the platform is reliable and offers ways to manage some extras for the people supporting me.
Let me be clear about the advantages:
you will occasionally be offered to choose the topic of the blog post I'm writing. I often can't decide what to write about when I look at my pipeline of ideas.
you will have access to the new blog posts a few days in advance.
you give me an incentive to write better content, so you feel your money is well spent.
It's hard for me to frame exactly what I'll be working on. I include the OpenBSD Webzine as an extension of the blog, and sometimes ports work too, because when I write about a program I go down the rabbit hole of updating it, and then there is a whole story to tell.
To conclude, let me thank you if you plan to support me financially, every bit will help, even small sponsors. I'm really motivated by this, I want to promote community driven open source projects such as OpenBSD, but I also want to cover a topic that matters a lot to me which is old hardware reuse. I highlighted this with the old computer challenge, but this is also the core of all my self-hosting articles and what drives me when using computers.
In this article, I'd like to share with you about the Linux specific feature ecryptfs, which allows users to have encrypted directories.
While disk encryption done with cryptsetup/LUKS is very performant and secure, there are some edge cases in which you may want to use ecryptfs, whether the disk is LUKS encrypted or not.
I've been able to identify a few use cases making ecryptfs relevant:
a multi-user system, people want their files to be private (and full disk encryption wouldn't help here)
an encrypted disk on which you want to have an encrypted directory that is only available when needed (preventing a hacked live computer from leaking important files)
a non-encrypted disk on which you want to have an encrypted directory/$HOME instead of reinstalling with full disk encryption
In this configuration, you want all the files in the $HOME directory of your user to be encrypted. This works well, especially as it integrates with PAM (the "login manager" in Linux), so the files are unlocked upon login.
I tried the following setup on Gentoo Linux, the setup is quite standard for any Linux distribution packaging ecryptfs-utils.
In this configuration, you will have ecryptfs encrypting a single directory named Private in the home directory.
That can be useful if you already have an encrypted disk but also have very secret files that must stay encrypted when you don't need them; this protects against file leaks on a compromised running system, unless you unlock the directory while the system is compromised.
This can also be used on a disposable system (like my netbook) that isn't encrypted, but where I may want to save a few private files.
install a package named ecryptfs-utils (may depend on your distribution)
run ecryptfs-setup-private --noautomount
Type your login password
Press enter to use an auto generated mount passphrase (you don't use this one to unlock the directory)
Done!
The mount passphrase is used in addition to the login passphrase to encrypt the files; you may need it to unlock backed-up encrypted files, so better save it in your password manager if you make backups of the encrypted files.
You can unlock the access to the directory ~/Private by typing ecryptfs-mount-private and type your login password. Congratulations, now you have a local safe for your files!
Ecryptfs was available in older Ubuntu installer releases as an option to encrypt a user's home directory without the full disk; it seems to have been abandoned for performance reasons.
I didn't make extensive benchmarks here, but I compared the writing speed of random characters into a file on an unencrypted ext4 partition and in the ecryptfs private directory on the same disk. The unencrypted directory was writing at 535 MB/s while ecryptfs was only writing at 358 MB/s, which is almost 33% slower. However, it's still fast enough for a daily workstation. I didn't measure the time to read or browse many files, but it must be slower. A LUKS encrypted disk should only have a performance penalty of a few percent, so ecryptfs is really not efficient in comparison, but it's still fast enough as long as you don't run database operations on it.
There are extra security shortcomings with ecryptfs: while your encrypted files are unlocked and in use, they may be copied into swap, temporary directories, or caches.
If you use the Private encrypted directory, for instance, keep in mind that most image viewers will create a thumbnail in your HOME directory, so pictures in Private may have a local copy available outside the encrypted directory. Some text editors may also cache a backup file in another directory.
If your system is running a bit low on memory, data may be written to the swap; if it's not encrypted, one may be able to recover files that were opened during that time. The command ecryptfs-setup-swap from the ecryptfs package checks whether the swap devices are encrypted and, if not, proposes to encrypt them using LUKS.
One major source of leakage is the /tmp/ directory, that may be used by programs to make a temporary copy of an opened file. It may be safe to just use a tmpfs filesystem for it.
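For example, a classic fstab entry like this one (the size is an arbitrary choice) keeps /tmp/ in memory:
tmpfs /tmp tmpfs defaults,noatime,mode=1777,size=2G 0 0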
Finally, if you only have a Private directory encrypted, don't forget that if you use a file browser to delete a file, it may end up in a trash directory on the unencrypted filesystem.
If you get the error setreuid: Operation not permitted when running ecryptfs commands, it means the ecryptfs binaries don't have the setuid bit. On Gentoo, you have to compile ecryptfs-utils with the suid USE flag.
Ecryptfs can be useful in some real life scenarios, and it doesn't have many alternatives. It's especially user-friendly when used to encrypt the whole home directory, because users don't even have to know about it.
Of course, for a private encrypted directory, the most tech-savvy can just create a big raw file, format it with LUKS, and mount it on demand, but this means managing the disk file as a separate partition with its own size, plus scripts to mount/umount the volume, while ecryptfs offers an easy, secure alternative with a performance drawback.
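For reference, the raw file + LUKS alternative mentioned above could be sketched like this; the path, size and mount point are arbitrary:
# create a 2 GB container file, format it with LUKS, then format and mount it
dd if=/dev/zero of=~/vault.img bs=1M count=2048
cryptsetup luksFormat ~/vault.img
cryptsetup luksOpen ~/vault.img vault
mkfs.ext4 /dev/mapper/vault
mount /dev/mapper/vault /mnt/vault
# when done working with the files
umount /mnt/vault && cryptsetup luksClose vault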
In this blog post, I'd like to share how I had fun using GitHub actions in order to maintain a repository of generic x86-64 Gentoo packages up to date.
Built packages are available at https://interbus.perso.pw/ and can be used in your binrepos.conf for a generic x86-64 packages provider, it's not building many packages at the moment, but I'm open to add more packages if you want to use the repository.
I don't really like GitHub, but if we can use their CPU for free for something useful, why not? The whole implementation and setup looked fun enough that I should give it a try.
I was already using a similar setup locally to build packages for my Gentoo netbook on a more powerful computer, so I knew it was achievable and had to try. I don't have much use for it myself, but maybe a reader will enjoy the setup and do something similar (maybe not for Gentoo).
My personal infrastructure is quite light, with only an APU router plus a small box with an Atom CPU as a NAS, so I was looking for a cheap way to keep their Gentoo systems running without having to compile locally.
Building a generic Gentoo packages repository isn't straightforward for a few reasons:
compilation flags must match all the consumers' architecture
default USE flags must be useful for many
no support for remote builders
the whole repository must be generated on a single machine with all the files (can't be incremental)
Fortunately, there are Gentoo container images that can be used to start a fresh Gentoo and, from there, build packages from a clean system every time. The previously built packages have to be added into the container before each run, otherwise the Packages file generated as the repository index won't contain all the files.
Using a -march=x86-64 compiler flag allows targeting all the amd64 systems, at the cost of less optimized binaries.
For the USE flags, which are a big part of Gentoo, I chose a default profile and simply stuck with it. People using the repository can still change their USE flags, and only pick the binary packages from the repo that still match their expectations.
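As an illustration, the relevant part of /etc/portage/make.conf for such a generic builder could look like this sketch (the exact flags used by the repository may differ):
# generic optimization targeting any amd64 CPU
COMMON_FLAGS="-O2 -pipe -march=x86-64"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
# produce a binary package for everything that gets emerged
FEATURES="buildpkg"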
We will use GitHub actions (Free plan) to build packages for a given Gentoo profile, and then upload it to a remote server that will share the packages over HTTPS.
The plan is to use a Docker image of a Gentoo stage3 provided by the project gentoo-docker-images, pull previously built packages from my server, build new packages or update existing ones, and push the changes to my server. Meanwhile, my server serves the packages over HTTPS.
GitHub Actions is a GitHub feature that makes setting up Continuous Integration easy by providing "actions" (reusable components made by others) that you organize into steps.
For the job, I used the following steps on an Ubuntu system:
Deploy SSH keys (used to pull/push packages to my server) stored as secrets in the GitHub project
Checkout the sources of the project
Make a local copy of the packages repository
Create a container image based on the Gentoo stage3 + instructions to run
Run the image that will use emerge to build the packages
Copy the new repository on the remote server (using rsync to copy the diff)
While the idea is simple, I faced a lot of build failures, here is a list of problems I remember.
5.1. Go is failing to build (problem is Docker specific) §
For some reason, Go was failing to build with a weird error; this is due to some sandboxing done by emerge that isn't allowed in the Docker environment.
The solution is to loosen the sandboxing with FEATURES="-ipc-sandbox -pid-sandbox -sandbox -usersandbox" in /etc/portage/make.conf. That's not great.
The starter image is a Gentoo stage3, which is quite bare; one critical package needed to build others, but never pulled as a dependency, is the kernel sources.
You need to install sys-kernel/gentoo-sources if you want builds to succeed for many packages.
The gentoo-docker-images repository doesn't provide merged-usr profiles (yet?), so I had to install merged-usr and run it to get an environment matching the selected profile.
The job time is limited to 6 hours on the free plan, so I added a timeout on the emerge doing the build to stop a bit earlier, leaving some time to push the packages to the remote server; this saves time for the next run. Of course, this only works as long as no single package requires more than the timeout to build (which is quite unlikely given the CI is fast enough).
One has to trust GitHub Actions: GitHub employees may have access to jobs running there and could potentially compromise built packages using a rogue container image. While it's unlikely, it is a possibility.
Also, please note that the current setup doesn't sign the packages. This is something that could be added later, you can find documentation on the Gentoo Wiki for this part.
Another interesting area for security is the rsync access the GitHub action uses to synchronize the packages with the builder. It's possible to restrict an SSH key to a single command, like one rsync invocation with no room to change a single parameter. Unfortunately, the setup requires using rsync in two different ways, downloading and pushing files, so I had to write a wrapper looking at the SSH_ORIGINAL_COMMAND variable and allowing either the "pull" rsync or the "push" rsync.
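A sketch of such a wrapper, set as the forced command in authorized_keys; the exact rsync argument prefixes are assumptions, as they depend on how the job invokes rsync:
#!/bin/sh
# only allow the two rsync server modes the CI needs, reject anything else
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server --sender "*)   # the CI downloading the existing packages
        exec $SSH_ORIGINAL_COMMAND ;;
    "rsync --server "*)            # the CI pushing the new packages
        exec $SSH_ORIGINAL_COMMAND ;;
    *)
        echo "command rejected" >&2
        exit 1 ;;
esac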
The GitHub free plan allows you to run a builder 24/7 (with no parallel execution), which is really fast enough to keep a non-desktop @world up to date. If you have a pro account, the GitHub cache may not be limited, and you may be able to keep the built packages there, removing the "pull packages" step.
If you really want to use this, I'd recommend using a schedule in the GitHub action to run it every day. It's as simple as adding this in the GitHub workflow.
on:
schedule:
- cron: '0 2 * * *' # every day at 02h00
I would like to thank Jonathan Tremesaygues who wrote most of the GitHub actions pieces after I shared with him about my idea and how I would implement it.
Here is a simple script I use to turn a local Linux machine into a Gentoo builder for the box you run it from. It uses a Gentoo stage3 docker image, populated with packages from the local system and its /etc/portage/ directory.
Note that you have to use app-misc/resolve-march-native to generate the compiler command line parameters that replace -march=native, because you want the remote host to build with the flags correct for your machine and not its own -march=native; you should also make sure those flags work on the remote system. From my experience, any remote builder newer than your machine should be compatible.
I like my servers to run the least code possible, and as few services as possible in general; this eases maintenance and leaves room for other things to run. I recently wrote about monitoring software to gather metrics and render them, but they are all overkill if you just want to keep track of a single value over time and graph it for visualization.
Fortunately, we have an old and robust tool doing the job fine, it's perfectly documented and called RRDtool.
RRDtool stands for "Round Robin Database Tool"; it's a set of programs and a specific file format to gather metrics. The trick with RRD files is that they have a fixed size: when you create one, you need to define how many values you want to store in it, at which frequency, and for how long. This can't be changed after the file creation.
In addition, RRD files allow you to create derived time series to keep track of computed values over a longer timespan, but with a lower resolution. Think of the following use case: you want to monitor your home temperature every 10 minutes for the past 48 hours, but you also want to keep track of it for the past year; you can tell RRD to store the average temperature per hour for a week, the average per four hours for a month, and the average per day for a year. All of this stays at a fixed size.
RRD files can be dumped as XML, this will give you a glimpse that may ease the understanding of this special file format.
Let's create a file to monitor the battery level of your computer every 20 seconds, keeping the last 5 values; don't focus on understanding the whole command line for now:
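The command isn't reproduced here, but it could look like this sketch; the 40 second heartbeat and the 0-100 bounds are assumptions:
rrdtool create battery.rrd --step 20 DS:battery:GAUGE:40:0:100 RRA:AVERAGE:0.5:1:5
rrdtool dump battery.rrd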
The most important thing to understand here is that we have a "ds" (data series) named battery, of type GAUGE, with no last value (I never updated it), but also an "RRA" (Round Robin Archive) for our average, containing timestamps with no value associated to each. You can see that internally, the 5 slots already exist with a null value. If I update the file, the first null value will disappear, and a new record will be added at the end with the actual value.
In this guide, I would like to share my experience using rrdtool to monitor my solar panel power output over the last few hours, so it can easily be displayed on my local dashboard. The data is also collected and sent to a Grafana server, but it's not local, and querying it just to see the last values wastes resources and bandwidth.
First, you need rrdtool to be installed, you don't need anything else to work with RRD files.
Creating the RRD file is the most tricky part, because you can't change it afterward.
I want to collect a data point every 5 minutes (300 seconds); it's an absolute value between 0 and 4000, so we define a step of 300 seconds to tell the file it must receive a value every 300 seconds. The type of the value will be GAUGE, because it's just a value that doesn't depend on the previous one. If we were monitoring power change over time, we would use DERIVE instead, because it computes the delta between values.
Furthermore, we need to configure the file to give up on a value slot if it's not updated within 600 seconds.
Finally, we want to be able to graph each measurement; this is done by adding an AVERAGE computed value to the file, with a resolution of 1 value and 240 measurements stored. What this means is that each time we add a value to the RRD file, the AVERAGE field is computed with only the last value as input, and we keep 240 of them, allowing us to graph up to 240 * 5 minutes of data back in time.
rrdtool create solar-power.rrd --step 300 ds:value:gauge:600:0:4000 rra:average:0.5:1:240
The ds definition reads as: a data source named value, of type GAUGE (the measurement type), considered unknown after 600 seconds without an update, with a minimum of 0 and a maximum of 4000.
The rra definition reads as: an archive applying the AVERAGE function (it can be AVERAGE, MAX, MIN, LAST, or mathematical operations), with an xfiles factor of 0.5 (how much percent of unknown values we accept when computing a value), using 1 previous value per computed point (so it averages a single value: itself), and keeping 240 values.
And then, you have your solar-power.rrd file created. You can inspect it with rrdtool info solar-power.rrd or dump its content with rrdtool dump solar-power.rrd.
Now that we have prepared the file to receive data, we need to populate it with something useful. This can be done using the command rrdtool update.
CURRENT_POWER=$(some-command-returning-a-value)
rrdtool update solar-power.rrd "N:${CURRENT_POWER}"
In the update argument "N:${CURRENT_POWER}", N is the timestamp of the measurement (N means now), and the value after the colon fills the first field of the RRD file (we created a single field).
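To feed the file regularly, a cron entry is enough; the script path below is hypothetical, it would simply contain the two lines above:
*/5 * * * * /usr/local/bin/update-solar-rrd.sh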
The trickiest part, though the least risky, is to generate a usable graph from the data. The operation is not destructive as it doesn't modify the file, so we can experiment a lot without affecting the content.
We will generate something simple like the picture below. Of course, you can add a lot more information, color, axis, legends etc.. but I need my dashboard to stay simple and clean.
rrdtool graph --end now -l 0 --start end-14000s --width 600 --height 300 \
/var/www/htdocs/dashboard/solar.svg -a SVG \
DEF:ds0=/var/lib/rrdtool/solar-power.rrd:value:AVERAGE \
"LINE1:ds0#0000FF:power" \
"GPRINT:ds0:LAST:current value %2.1lf"
I think most flags are explicit, if not you can look at the documentation, what interests us here are the last three lines.
The DEF line associates the RRA AVERAGE of the variable value in the file /var/lib/rrdtool/solar-power.rrd to the name ds0 that will be used later in the command line.
The LINE1 line associates a legend, and a color to the rendering of this variable.
The GPRINT line adds a text in the legend, here we are using the last value of ds0 and format it in a printf style string current value %2.1lf.
RRDtool is very nice: it's the storage engine of monitoring software such as collectd or munin, but we can also use it on the spot with simple scripts. However, it has drawbacks: when you start to create many files it doesn't scale well, generates a lot of I/O, and consumes CPU if you need to render hundreds of pictures. That's why a daemon named rrdcached was created to mitigate the load by handling updates to many RRD files in a more sequential way.
I encourage you to look at the official project website; all the other commands can be very useful, and rrdtool also exports data as XML or JSON if needed, which is perfect for plugging it into other software.
The Linux kernel has an integrated firewall named netfilter, but you manipulate it through command line tools such as the good old iptables, or nftables, which will eventually supersede iptables.
Today, I'll share my experience in using nftables to manage my Linux home router, and my workstation.
I won't explain much in this blog post because I just want to introduce nftables and show what it looks like, and how to get started.
I added comments in my configuration files, I hope it's enough to get a grasp and make you curious to learn about nftables if you use Linux.
With nftables, you write the ruleset in a file with nft -f in the shebang; running the file atomically replaces the whole ruleset if it's valid.
Depending on your system, you may need to run the script at boot, but on Gentoo, for instance, a systemd service is provided to save the rules upon shutdown and restore them at boot. Here is the ruleset I use on my home router:
#!/sbin/nft -f
flush ruleset
table inet filter {
# defines a list of networks for further reference
set safe_local {
type ipv4_addr
flags interval
elements = { 10.42.42.0/24 }
}
chain input {
# drop by default
type filter hook input priority 0; policy drop;
ct state invalid drop comment "early drop of invalid packets"
# allow connections to work when initiated from this system
ct state {established, related} accept comment "accept all connections related to connections made by us"
# allow loopback
iif lo accept comment "accept loopback"
# remove weird packets
iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
iif != lo ip6 daddr ::1/128 drop comment "drop connections to loopback not coming from loopback"
# make ICMP work
ip protocol icmp accept comment "accept all ICMP types"
ip6 nexthdr icmpv6 accept comment "accept all ICMP types"
# only for known local networks
ip saddr @safe_local tcp dport {22, 53, 80, 2222, 19999, 12344, 12345, 12346} accept
ip saddr @safe_local udp dport {53} accept
# allow on WAN
iif eth0 tcp dport {80} accept
iif eth0 udp dport {7495} accept
}
# allow NAT to get outside
chain lan_masquerade {
type nat hook postrouting priority srcnat;
meta nfproto ipv4 oifname "eth0" masquerade
}
# port forwarding
chain lan_nat {
type nat hook prerouting priority dstnat;
iif eth0 tcp dport 80 dnat ip to 10.42.42.102:8080
}
}

And here is the ruleset I use on my workstation:
#!/sbin/nft -f
flush ruleset
table inet filter {
set safe_local {
type ipv4_addr
flags interval
elements = { 10.42.42.0/24, 10.43.43.1/32 }
}
chain input {
# drop by default
type filter hook input priority 0; policy drop;
ct state invalid drop comment "early drop of invalid packets"
# allow connections to work when initiated from this system
ct state {established, related} accept comment "accept all connections related to connections made by us"
# allow loopback
iif lo accept comment "accept loopback"
# remove weird packets
iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
iif != lo ip6 daddr ::1/128 drop comment "drop connections to loopback not coming from loopback"
# make ICMP work
ip protocol icmp accept comment "accept all ICMP types"
ip6 nexthdr icmpv6 accept comment "accept all ICMP types"
# only for known local networks
ip saddr @safe_local tcp dport 22 accept comment "accept SSH"
ip saddr @safe_local tcp dport {7905, 7906} accept comment "accept musikcube"
ip saddr @safe_local tcp dport 8080 accept comment "accept nginx"
ip saddr @safe_local tcp dport 1714-1764 accept comment "accept kdeconnect TCP"
ip saddr @safe_local udp dport 1714-1764 accept comment "accept kdeconnect UDP"
ip saddr @safe_local tcp dport 22000 accept comment "accept syncthing"
ip saddr @safe_local udp dport 22000 accept comment "accept syncthing"
ip saddr @safe_local tcp dport {139, 775, 445} accept comment "accept samba"
ip saddr @safe_local tcp dport {111, 775, 2049} accept comment "accept NFS TCP"
ip saddr @safe_local udp dport 111 accept comment "accept NFS UDP"
# for my public IP over VPN
ip daddr 78.224.46.36 udp dport 57500-57600 accept comment "accept mosh"
ip6 daddr 2a00:5854:2151::1 udp dport 57500-57600 accept comment "accept mosh"
}
# drop anything that looks forwarded
chain forward {
type filter hook forward priority 0; policy drop;
}
}
Fossil is a DVCS (decentralized version control software), an alternative to programs such as darcs, mercurial or git. It's developed by the same people who make SQLite and relies on SQLite internally.
Why not? I like diversity in software, and I'm unhappy to see Git dominating the field. Fossil is a viable alternative, with a simplified workflow that works very well for my use case.
One feature I really like is autosync: when a remote is configured, fossil automatically pushes changes to it, which makes it feel like a centralized version control system such as SVN, and for my usage it's really practical. Of course, you can disable autosync if you don't want this feature. I suppose this could be reproduced in git using a post-commit hook that runs git push.
Fossil is opinionated, so you may not like it if that doesn't match your workflow, but when it does, it's a very practical software that won't get in your way.
A major and disappointing fact at first is that a fossil repository is a single file. In order to check out the content of the repository, you need to run fossil open /path/to/repo.fossil in the directory where you want to extract the files.
Fossil supports multiple checkout of different branches in different directories, like git worktrees.
Because I'm used to other versioning software, I need a simple cheatsheet to remember most operations; they are easy to learn, but I prefer to note them down somewhere.
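Here is a short sketch of the everyday commands, to give an idea of the workflow:
fossil init my-project.fossil      # create a new repository (a single file)
fossil open my-project.fossil      # check out the files into the current directory
fossil add some-file.txt           # track a new file
fossil commit -m "some message"    # commit, autosync pushes it if a remote is set
fossil update                      # pull changes and update the checkout
fossil timeline                    # show the recent history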
Copy the .fossil file to a remote server (I'm using ssh), and in your fossil checkout, type fossil remote add my-remote ssh://hostname//home/solene/my-file.fossil, and then fossil remote my-remote.
Note that the remote server must have the fossil binary available in $PATH.
fossil ui will open your web browser and log in as admin user, you can view the timeline, bug trackers, wiki, forum etc... Of course, you can enable/disable everything you want.
Fossil doesn't allow staging and committing partial changes in a file like with git add -p, the official way is to stash your changes, generate a diff of the stash, edit the diff, apply it and commit. It's recommended to use a program named patchouli to select hunks in the diff file to ease the process.
Quick blog entry to remember about something that wasn't as trivial as I thought. I needed to use syncthing to keep a single file in sync (KeePassXC database) without synchronizing the whole directory.
You have to use the ignore patterns (exclusions) feature to make this possible. Put simply, you need the share to exclude every file except the one you want to sync.
This configuration happens in the .stignore file in the synchronized directory, but can also be managed from the Web interface.
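As a sketch, assuming the database file is named passwords.kdbx, the .stignore content would be:
// keep only the database file, ignore everything else
!/passwords.kdbx
*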
I always wanted to have a simple rollback method on Linux systems, NixOS gave me a full featured one, but it wasn't easy to find a solution for other distributions.
Fortunately, with BTRFS, it's really simple thanks to snapshots being mountable volumes.
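For reference, creating such a snapshot could look like this; the paths assume the filesystem top level is mounted on /mnt/btrfs-root and the root subvolume is named gentoo, matching the example below:
# read-only snapshot of the root subvolume, named after the date
btrfs subvolume snapshot -r /mnt/btrfs-root/gentoo /mnt/btrfs-root/gentoo/.snapshots/ROOT.20230103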
When you are in the bootloader (GRUB, systemd-boot, LILO, etc.), edit the kernel command line and add the following option (or replace it if it already exists); the example uses the snapshot ROOT.20230103:
rootflags=subvol=gentoo/.snapshots/ROOT.20230103
Boot with the new command line, and you should be on your snapshot as the root filesystem.
This is mostly a reminder for myself. I installed Gentoo on a machine, but I reused the same BTRFS filesystem where NixOS is already installed, the trick is the BTRFS filesystem is composed of two partitions (a bit like raid 0) but they are from two different LUKS partitions.
It wasn't straightforward to unlock that thing at boot.
Grub was trying to autodetect the root partition to add root=/dev/something, but as my root filesystem requires /dev/mapper/ssd1 and /dev/mapper/ssd2, it was simply adding root=/dev/mapper/ssd1 /dev/mapper/ssd2, which is wrong.
This required a change in the file /etc/grub.d/10_linux where I entirely deleted the root= parameter.
A mistake I made was to try to boot without systemd compiled with cryptsetup support, this was just failing because in the initramfs, some systemd services were used to unlock the partitions, but without proper support for cryptsetup it didn't work.
In /etc/default/grub, I added a line containing the UUIDs of both LUKS partitions needed, a root=/dev/dm-0 (which is, unexpectedly, the path of the first unlocked device) and rd.luks=1 to enable LUKS support.
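The exact line isn't reproduced here, but it looked something like this sketch, with placeholder UUIDs:
GRUB_CMDLINE_LINUX="rd.luks=1 rd.luks.uuid=<uuid-of-first-luks-partition> rd.luks.uuid=<uuid-of-second-luks-partition> root=/dev/dm-0"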
It's working fine now. I thought it would require me to write a custom initrd script, but dracut provided all I needed; there were many quirks along the way, though, with no really helpful messages to understand what was failing.
Now I can enjoy my dual boot Gentoo / NixOS (they are quite antagonistic :D), but they share the same filesystem and I really enjoy this weird setup.
As a flatpak user, but also someone with a slow internet connection, I was looking for a way to export a flatpak program in order to install it on another computer. It turns out flatpak supports this, but for some reason it's called "create-usb".
So today, I'll show how to export a flatpak program from a computer to another.
For some reason, the default flathub parameters don't associate a "Collection ID" with the remote, which is required for the create-usb feature to work, so we need to associate a "Collection ID" with the flathub remote repository on both systems.
We can use the example from the official documentation:
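From memory, the documented command is the following; double-check the collection ID in the flatpak documentation:
flatpak remote-modify --collection-id=org.flathub.Stable flathub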
The export process is simple, create a directory in which you want the flatpak application to be exported, we will use ~/export/ in the examples, with the program org.mozilla.firefox.
flatpak create-usb ~/export/ org.mozilla.firefox
The export process will display a few lines and tell you when it finished.
If you export multiple programs into the same directory, the export process will be smart and skip already existing components.
Take the ~/export/ directory, either on a USB drive, or copy it using rsync, share it over NFS/Samba etc... It's up to you. In the example, ~/export/ refers to the same directory transferred from the previous step onto the new system.
Now, we can run the import command to install the program.
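Assuming the exported data sits in ~/export/ on the new system, the import should look like this sketch (the option and paths may need adjusting to your flatpak version):
flatpak install --sideload-repo=~/export/.ostree/repo flathub org.mozilla.firefox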
The flatpak components/dependencies of a program can differ depending on the host (for example, if you have an NVIDIA card, it will pull some NVIDIA dependencies), so if you export a program from a non-NVIDIA system to an NVIDIA one, the export won't be complete enough to work reliably on the new system. The missing parts can still be downloaded from the Internet, so the export still reduces the bandwidth requirement.
I kinda like Flatpak: it's convenient and reliable, and allows managing installed programs without privilege escalation. The programs can be big, so it's cool to be able to save/export them for later use.
A neat feature in OpenBSD is the program authpf, an authenticating gateway using SSH.
Basically, it allows dynamically configuring the local firewall PF by connecting to / disconnecting from a user account over SSH, either toggling an IP in a table or loading rules through a PF anchor.
This program is very useful for the following use case:
firewall rules dedicated to authenticated users
enabling NAT to authenticated users
using a different bandwidth queue for authenticated users
logging, or not logging network packets of authenticated users
Of course, you can be creative and imagine other use cases.
This method is quite different from using a VPN: it doesn't have the extra cost of encryption, but it is less secure in the sense that it only authenticates an IP or username, so if you use it over the Internet, the triggered rule may also benefit other people using the same IP as yours. However, it's much simpler to set up, because users only have to share their public SSH key, while setting up a VPN is another level of complexity and troubleshooting.
In the following example, you manage a small office OpenBSD router, but you only want Chloe's workstation to reach the Internet through the NAT. We need to create a dedicated account for her, set its shell to authpf, deploy her SSH key and configure PF.
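A sketch of the account creation on OpenBSD (adjust it to your own conventions):
# create the user with authpf as its shell
useradd -m -s /usr/sbin/authpf chloe
# authpf expects these files to exist, they can stay empty
mkdir -p /etc/authpf
touch /etc/authpf/authpf.conf /etc/authpf/authpf.rules
# then put Chloe's public SSH key into ~chloe/.ssh/authorized_keys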
Now, you can edit /etc/pf.conf and use the default table name authpf_users. With the following PF snippet, we will only allow authenticated users to go through the NAT.
table <authpf_users> persist
match out on egress inet from <authpf_users> to any nat-to (egress)
Reload your firewall, and when Chloe will connect, she will be able to go through the NAT.
The program authpf is an efficient tool for the network administrator's toolbox. And with the use of PF anchors, you can really extend its potential as you want, it's really not limited to tables.
It's possible to ban users: for various reasons you may want to block someone with a message asking them to reach the help desk. This can be done by creating a file named after the username, like in the following example for user chloe: /etc/authpf/banned/chloe; the text content of the file is displayed to the user upon connection.
It's possible to write a custom greeting message displayed upon connection, this can be global or per user, just write a message in /etc/authpf/authpf.message for a global one, or /etc/authpf/users/chloe/authpf.message for user chloe.
I have remote systems where only /home is an encrypted partition; the reason is that it eases remote management a lot when there is no serial access. It's not ideal if you have critical files, but in my use case it's good enough.
In this blog post, I'll explain how to get the remote system to prompt you the unlocking passphrase automatically when it boots. I'm using OpenBSD in my example, but you can achieve the same with Linux and cryptsetup (LUKS), if you want to push the idea on Linux, you could do this from the initramfs to unlock your root partition.
on the remote system generate ssh-keys without a passphrase on your root account using ssh-keygen
copy the content of /root/.ssh/id_rsa.pub for the next step (or the public key file if you chose a different key algorithm)
edit ~/.ssh/authorized_keys on your workstation
create a new line with: restrict,command="/usr/local/bin/zenity --forms --text='Unlock t400 /home' --add-password='passphrase' --display=:0" $THE_PUBLIC_KEY_HERE
The new line allows the ssh key to connect to our local user, but it gets restricted to a single command: zenity, which is a GUI dialog program used to generate forms/dialogs in X sessions.
In the example, this creates a simple form in an X window with a label "Unlock t400 /home" and a password field hiding the typed text, shown on display :0 (the default one). Upon connection from the remote server, the form is displayed; you type the passphrase and validate, then the content is written to the SSH command's stdout on the remote server and piped to the bioctl command, which unlocks the disk.
On the server, creates the file /etc/rc.local with the following content (please adapt to your system):
#!/bin/sh
ssh solene@10.42.42.102 | bioctl -s -c C -l 1a52f9ec20246135.k softraid0
if [ $? -eq 0 ]
then
mount /home
fi
In this script, solene@10.42.42.102 is my user@laptop-address, and 1a52f9ec20246135.k is my encrypted partition. The file /etc/rc.local is run at boot after most of the services, including networking.
You should get a display like this when the system boots:
With this simple setup, I can reboot my remote systems and wait for the passphrase to be asked quite reliably. Because of ssh, I can authenticate which system is asking for a passphrase, and it's sent encrypted over the network.
It's possible to get more in depth in this idea by using a local password database to automatically pick the passphrase, but you lose some kind of manual control, if someone steals a machine you may not want to unlock it after all ;) It would also be possible to prompt a Yes/No dialog before piping the passphrase from your computer, do what feels correct for you.
This blog post is for Mastodon users who may not like the official Mastodon web interface. It has a lot of features, but it's using a lot of CPU and requires a large screen.
Fortunately, there are alternative front-ends to Mastodon; this is possible because the front-end talks to the instance API. I would like to introduce you to Pinafore.
Pinafore is a "web application" consisting of a static website: nothing is actually stored on the server hosting Pinafore. Think of it as a page loaded in your browser that stores data in your browser and makes API calls from your browser.
This design is elegant because it delegates everything to the browser and requires absolutely no processing on the Pinafore hosting server, it's just a web server there serving static files once.
As I said previously, Pinafore is a Mastodon front-end (it also extends to other Fediverse instances whenever possible) that comes with a bunch of features.
There are two ways to use it, either by using the official hosted service, or by hosting it yourself.
Whether you choose the official or the self-hosted version, the principle is the following: the first time, you enter your account's instance address, which triggers an OAuth authentication on your instance and asks whether you want Pinafore to use your account through the API (this can be revoked later from your Mastodon account). Accept, and that's it!
The official service is run by the developers and kept up to date. You can use it without installing anything, simply visit the address below and go through the login process.
This is a very convenient way to use Pinafore, but it comes with a tradeoff: it involves a third party between your social network account and your client. While pinafore.social is trustable, this doesn't mean it can't be compromised and act as a "man in the middle". As I mentioned earlier, no data is stored by Pinafore because everything is in your browser, but nothing prevents a malicious attacker from modifying the hosted Pinafore code to redirect data from your browser to a remote server they control in order to steal information.
It's possible to build Pinafore's static files on your own system and host them on any web server. While this is more secure than pinafore.social (if your host is secure), it still involves extra code that could "potentially" be compromised through a rogue commit, but this is not a realistic concern when using Pinafore release versions.
For this step, I'll link to the according documentation in the project:
This blog post is a republication of the article I published on my employer's blog under CC BY 4.0. I'm grateful to be allowed to publish NixOS related content there, but also to be able to reuse it here!
This guide explains how to install NixOS on a computer, with a twist.
If you use the same computer in different contexts, let's say for work and for your private life, you may wish to install two different operating systems to protect your private life data from mistakes or hacks from your work. For instance a cryptolocker you got from a compromised work email won't lock out your family photos.
But then you have two different operating systems to manage, and you may consider that it's not worth the effort and simply use the same operating system for your private life and for work, at the cost of the security you desired.
I offer you a third alternative, a single NixOS managing two securely separated contexts. You choose your context at boot time, and you can configure both context from either of them.
You can safely use the same machine at work with your home directory and confidential documents, and you can get into your personal context with your private data by doing a reboot. Compared to a dual boot system, you have the benefits of a single system to manage and no duplicated package.
For this guide, you need a system either physical or virtual that is supported by NixOS, and some knowledge like using a command line. You don't necessarily need to understand all the commands. The system disk will be erased during the process.
You can find an example of NixOS configuration files to help you understand the structure of the setup on the following GitHub repository:
We will create a 512 MB space for the /boot partition that will contain the kernels, and allocate the space left for an LVM partition we can split later.
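The partitioning commands are not detailed here; a sketch with parted, on a UEFI system with a disk named /dev/sda, could be:
parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
parted /dev/sda -- set 1 esp on
parted /dev/sda -- mkpart primary 512MiB 100%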
We will use LVM so we need to initialize the partition and create a Volume Group with all the free space.
pvcreate /dev/sda2
vgcreate pool /dev/sda2
We will then create three logical volumes, one for the store and two for our environments:
lvcreate -L 15G -n root-private pool
lvcreate -L 15G -n root-work pool
lvcreate -l 100%FREE -n nix-store pool
NOTE: The sizes to assign to each volume are up to you; the nix store should have at least 30 GB for a system with graphical sessions. LVM allows you to keep free space in your volume group, so you can grow your volumes later when needed.
We will enable encryption for the three volumes, but we want the nix-store partition to be unlockable with either of the keys used for the two root partitions. This way, you don't have to type two passphrases at boot.
cryptsetup luksFormat /dev/pool/root-work
cryptsetup luksFormat /dev/pool/root-private
cryptsetup luksFormat /dev/pool/nix-store # same password as work
cryptsetup luksAddKey /dev/pool/nix-store # same password as private
We unlock our partitions to be able to format and mount them. Which passphrase is used to unlock the nix-store doesn't matter.
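The unlock commands should look like this; the mapper names are taken from the mount commands used later, and the note below explains why the nix-store one has no crypto prefix:
cryptsetup luksOpen /dev/pool/root-work crypto-work
cryptsetup luksOpen /dev/pool/root-private crypto-private
cryptsetup luksOpen /dev/pool/nix-store nix-store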
Please note we don't encrypt the boot partition, which is the default on most encrypted Linux setup. While this could be achieved, this adds complexity that I don't want to cover in this guide.
Note: the nix-store partition isn't called crypto-nix-store because we want the nix-store partition to be unlocked after the root partition to reuse the password. The code generating the ramdisk takes the unlocked partitions' names in alphabetical order, by removing the prefix crypto the partition will always be after the root partitions.
We format each partition using ext4, a performant file-system which doesn't require maintenance. You can use other filesystems, like xfs or btrfs, if you need features specific to them.
The boot partition should be formatted using fat32 when using UEFI with mkfs.fat -F 32 /dev/sda1. It can be formatted in ext4 if you are using legacy boot (MBR).
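As a sketch, on a UEFI system:
mkfs.fat -F 32 /dev/sda1
mkfs.ext4 /dev/mapper/crypto-work
mkfs.ext4 /dev/mapper/crypto-private
mkfs.ext4 /dev/mapper/nix-store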
Mount the partitions onto /mnt and its subdirectories to prepare for the installer.
mount /dev/mapper/crypto-work /mnt
mkdir -p /mnt/etc/nixos /mnt/boot /mnt/nix
mount /dev/mapper/nix-store /mnt/nix
mkdir /mnt/nix/config
mount --bind /mnt/nix/config /mnt/etc/nixos
mount /dev/sda1 /mnt/boot
We generate a configuration file:
nixos-generate-config --root /mnt
Edit /mnt/etc/nixos/hardware-configuration.nix to change the following parts:
We need two configuration files to describe our two environments, we will use hardware-configuration.nix as a template and apply changes to it.
sed '/imports =/,+3d' /mnt/etc/nixos/hardware-configuration.nix > /mnt/etc/nixos/work.nix
sed '/imports =/,+3d ; s/-work/-private/g' /mnt/etc/nixos/hardware-configuration.nix > /mnt/etc/nixos/private.nix
rm /mnt/etc/nixos/hardware-configuration.nix
Edit /mnt/etc/nixos/configuration.nix to make the imports code at the top of the file look like this:
imports =
[
./work.nix
./private.nix
];
Remember we removed the file /mnt/etc/nixos/hardware-configuration.nix so it shouldn't be imported anymore.
Now we need to hook each configuration to a different boot entry, using the NixOS feature called specialisation. The environment you want as the default boot entry will be the non-specialised configuration, written so it isn't inherited by the other, and the other environment will be a specialisation.
For the hardware configuration files, we need to wrap them with some code to create a specialisation, and the "non-specialisation" case that won't propagate to the other specialisations.
Starting from the generated hardware file, some code must be added at the top and bottom of each file, depending on whether you want it to be the default context or not.
It's now the time to configure your system as you want. The file /mnt/etc/nixos/configuration.nix contains shared configuration, this is the right place to define your user, shared packages, network and services.
The files /mnt/etc/nixos/private.nix and /mnt/etc/nixos/work.nix can be used to define context specific configuration.
During the numerous installation tests I've made to validate this guide, on some hardware I noticed an issue with LVM detection, add this line to your global configuration file to be sure your disks will be detected at boot.
The partitions are mounted and you configured your system as you want it, we can run the NixOS installer.
nixos-install
Wait for the copy process to complete; you will then be prompted for the root password of the current crypto-work environment (or whichever one you mounted here). You also need to define your user's password now, by chrooting into your NixOS system.
# nixos-enter --root /mnt -c "passwd your_user"
New password:
Retype new password:
passwd: password updated successfully
# umount -R /mnt
From now on, you have a password set for root and your user in the crypto-work environment, but no passwords are defined in the crypto-private environment.
We will rerun the installation process with the other environment mounted:
mount /dev/mapper/crypto-private /mnt
mkdir -p /mnt/etc/nixos /mnt/boot /mnt/nix
mount /dev/mapper/nix-store /mnt/nix
mount --bind /mnt/nix/config /mnt/etc/nixos
mount /dev/sda1 /mnt/boot
As the NixOS configuration is already done and is shared between the two environments, just run nixos-install, wait for the root password to be prompted, apply the same chroot sequence to set a password to your user in this environment.
You can reboot, you will have a default boot entry for the default chosen environment, and the other environment boot entry, both requiring their own passphrase to be used.
Now, you can apply changes to your NixOS system using nixos-rebuild from both work and private environments.
Congratulations for going through this long installation process. You can now log in to your two contexts and use them independently, and you can configure them by applying changes to the corresponding files in /etc/nixos/.
With this setup, I chose not to cover swap space, because a shared swap could leak secrets between the contexts. If you need swap, you will have to create a swap file on the root partition of your current context, and add the corresponding code to that context's filesystems.
If you want to use hibernation in which the system stops after dumping its memory into the swap file, your swap size must be larger than the memory available on the system.
It's possible to have a single swap for both contexts by using a random encryption at boot for the swap space, but this breaks hibernation as you can't unlock the swap to resume the system.
As you noticed, you had to run passwd in both contexts to define your user's and root's passwords. It is possible to define their passwords declaratively in the configuration file; refer to the documentation of users.mutableUsers and users.extraUsers.<name>.initialHashedPassword.
If something is wrong when you boot the first time, you can reuse the installer to make changes to your installation: run the cryptsetup luksOpen and mount commands again to get access to your filesystems, then edit your configuration files and run nixos-install again.
This may appear like a very niche use case, in my quest of software conservancy for nixpkgs I didn't encounter many people understanding why I was doing this.
I would like to present you a project I made to easily download all the sources files required to build packages from nixpkgs, allowing to keep offline copies.
Why would you want to keep a local copy? If upstream disappears, you can't access the sources anymore, except maybe through Hydra, but then you rely on a third party. So it's still valuable to keep local copies of the software you care about. It's not absolutely useful for everyone, but it's always important to have such tools available.
After cloning and 'cd-ing' into the directory, simply run ./run.sh some package list | ./mirror.pl. The command run.sh will generate a JSON structure containing all the dependencies used by the packages listed as arguments, and the script mirror.pl will iterate over the JSON list and use nix's fetcher to gather the sources in the nix store, verifying the checksum on the go. This will create a directory distfiles containing symlinks to the sources files stored in the store.
The symlinks are very important as they will prevent garbage collection from the store, and it's also used internally to quickly check if a file is already in the store.
To delete a file from the store, remove its symlink and run the garbage collector.
I still need to figure out how to get a full list of all the packages; I currently have a work in progress relying on nix search --json, but for some reason it doesn't work on 100% of the packages.
It's currently not possible to easily trim distfiles that aren't useful anymore; I plan to maybe add this someday.
This task is natively supported by the OpenBSD tool building packages (dpb): it can fetch multiple files in parallel and automatically removes files that aren't used anymore. It was really complicated to figure out how to replicate this with nixpkgs.
Let me introduce you to a nice project I found while lurking on the Internet. It's called nushell and is a non-POSIX shell, so most of your regular shells knowledge (zsh, bash, ksh, etc…) can't be applied on it, and using it feels like doing functional programming.
It's a good tool for creating robust data manipulation pipelines, you can think of it like a mix of a shell which would include awk's power, behave like a SQL database, and which knows how to import/export XML/JSON/YAML/TOML natively.
You may want to try nushell only as a tool, and not as your main shell, it's perfectly fine.
With a regular shell, iterating over a command's output can be complex when it involves spaces or newlines; that's why find and xargs have a -print0 parameter providing a special delimiter between "items", but it doesn't compose well with other tools. Nushell handles this situation correctly because it manipulates data as indexed entries, provided you correctly parsed the input at the beginning.
Nushell is a rust program, so it should work on every platform where Rust/Cargo are supported. I packaged it for OpenBSD, so it's available on -current (and will be in releases after 7.3 is out), the port could be used on 7.2 with no effort.
With Nix, it's packaged under the name nushell, the binary name is nu.
For other platforms, it's certainly already packaged, otherwise you can find installation instructions to build it from sources.
At first run, you are prompted to use default configuration files, I'd recommend accepting, you will have files created in ~/.config/nushell/.
The only change I made from now is to make Tab completion case-sensitive, so D[TAB] completes to Downloads instead of asking between dev and Downloads. Look for case_sensitive_completions in .config/nushell/config.nu and set it to true.
If you are like me, and you prefer learning by doing instead of reading a lot of documentation, I prepared a bunch of real world use case you can experiment with. The documentation is still required to learn the many commands and syntax, but examples are a nice introduction.
Help from nushell can be parsed directly with nu commands, it's important to understand where to find information about commands.
Use help a-command to learn from a single command:
> help help
Display help information about commands.
Usage:
> help {flags} ...(rest)
Flags:
-h, --help - Display this help message
-f, --find <String> - string to find in command names, usage, and search terms
[cut so it's not too long]
Use help commands to list all available commands (I'm limiting the output to 5 because there are a lot of commands):
help commands | last 5
╭───┬─────────────┬────────────────────────┬───────────┬───────────┬────────────┬───────────────────────────────────────────────────────────────────────────────────────┬──────────────╮
│ # │ name │ category │ is_plugin │ is_custom │ is_keyword │ usage │ search_terms │
├───┼─────────────┼────────────────────────┼───────────┼───────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────────┤
│ 0 │ window │ filters │ false │ false │ false │ Creates a sliding window of `window_size` that slide by n rows/elements across input. │ │
│ 1 │ with-column │ dataframe or lazyframe │ false │ false │ false │ Adds a series to the dataframe │ │
│ 2 │ with-env │ env │ false │ false │ false │ Runs a block with an environment variable set. │ │
│ 3 │ wrap │ filters │ false │ false │ false │ Wrap the value into a column. │ │
│ 4 │ zip │ filters │ false │ false │ false │ Combine a stream with the input │ │
╰───┴─────────────┴────────────────────────┴───────────┴───────────┴────────────┴───────────────────────────────────────────────────────────────────────────────────────┴──────────────╯
Add sort-by category to list them... sorted by category.
help commands | sort-by category
Use where category == filters to only list commands from the filters category.
help commands | where category == filters
Use find foobar to return lines containing foobar.
Here is a task that is complicated with a regular shell: recursively find files matching a pattern and then run a given command on each of them, in parallel. This is exactly what you need if you want to convert your music library into another format; let's convert everything from FLAC to OPUS in this example.
In the following command line, we will look for every .flac file in the subdirectories, then run ffmpeg on each of them in parallel using par-each, converting from the current name to the same name with .flac changed to .opus.
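A sketch of what such a pipeline can look like (my exact command line may have differed slightly):
let convert = (ls **/*.flac | par-each { |file|
    do -i { ffmpeg -i $file.name ($file.name | str replace ".flac" ".opus") } | complete
})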
The let convert and | complete commands are used to store the output of each command into a result table, and store it in the variable convert so we can query it after the job is done.
Now, we have a structure in convert that contains the columns stdout, stderr and exit_code, so we can look if all the commands did run correctly using the following query.
$convert | where exit_code != 0
4.2.4. Synchronize a music library to a compressed one §
I had a special need for my phone and my huge music library, I wanted to have a lower quality version of it synced with syncthing, but I needed this to be easy to update when adding new files.
It takes all the music files in /home/user/Music/ and creates a 64K opus file in /home/user/Stream/ by keeping the same file tree hierarchy, and if the opus destination file exists it's skipped.
cd /home/user/Music/
let dest = "/home/user/Stream/"
let convert = (ls **/* |
where name =~ ".(mp3|flac|opus|ogg)$" |
where name !~ "(Audiobook|Piano)" |
par-each {
|file| do -i {
let new_name = ($file.name | str replace -r ".(flac|ogg|mp3)" ".opus")
if (not ([$dest, $new_name] | str join | path exists)) {
mkdir ([$dest, ($file.name | path dirname)] | str join)
ffmpeg -i $file.name -b:a 64K ([$dest, $new_name] | str join)
} | complete
}
})
$convert
4.2.5. Convert PDF/CBR/CBZ pages into webp and CBZ archives §
I have a lot of digitized books/mangas/comics; this conversion is a handy operation, reducing the size of the files by 40% (up to 70%).
def conv [] {
if (ls | first | get name | str contains ".jpg") {
ls *jpg | par-each {|file| do -i { cwebp $file.name -o ($file.name | str replace jpg webp) } | complete }
rm *jpg
}
if (ls | first | get name | str contains ".ppm") {
ls *ppm | par-each {|file| do -i { cwebp $file.name -o ($file.name | str replace ppm webp) } | complete }
rm *ppm
}
}
ls * | each {|file| do -i {
if ($file.name | str contains ".cbz") { unzip $file.name -d ./pages/ } ;
if ($file.name | str contains ".cbr") { unrar e -op./pages/ $file.name } ;
if ($file.name | str contains ".pdf") { mkdir pages ; pdfimages $file.name pages/page } ;
cd pages ; conv ; cd ../ ; ^zip -r $"($file.name).webp.cbz" pages ; rm -fr pages
} }
Nushell is very fun, it's terribly different from regular shells, but it comes with a powerful language and tooling. I always liked shells because of pipes, allowing me to construct a complex transformation/analysis step by step, easily inspect any step, or replace a step with another.
With nushell, it feels like I finally have a better tool to create more reliable, robust, portable and faster command pipelines. The learning curve didn't feel too hard, but maybe that's because I'm already used to functional programming.
This blog post aims to be a quick clarification about the website openports.pl: an online database that could be used to search for OpenBSD packages and ports available in -current.
The software used by openports.pl is the package ports-readmes-dancer which uses the sqlite database from the sqlports package.
The host is running OpenBSD -current through snapshots; it tries twice a day to upgrade when possible, and regularly tries to upgrade all packages, so it's as fresh as it can be through snapshots.
The program packaged in ports-readmes-dancer has been created by espie@, using a Perl web framework named Dancer. It's open source software and you can contribute to it if you want to enhance openports.pl itself.
For security reasons, as it's running "too much" unaudited code server side, it's not possible to host it in the OpenBSD infrastructure under the domain .openbsd.org.
The main alternative is OpenBSD.app, a website but also a command line tool, using sqlports package as a data source, and it supports -stable and -current.
I wrote a GUI application named AppManager (the package name is appmanager) that allows viewing all packages available for the running OpenBSD version, and installing/removing them. It also has a surprisingly effective heuristic to tell whether search results are GUI/CLI/other programs.
Let's have fun doing OpenBSD kiosks! As explained in a recent article, a kiosk is a computer dedicated to display things or to be used interactively without being able to escape the current program.
I modified the script surf-display, which runs the web browser surf in full screen and runs various commands to sanitize the environment to prevent users from escaping surf, to make it compatible with OpenBSD.
Edit ~/.xsession to use /usr/local/bin/surf-display as a window manager (a minimal example follows).
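A hedged example of what such a ~/.xsession can contain:
#!/bin/sh
exec /usr/local/bin/surf-display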
You will also need dependencies:
pkg_add surf wmctrl blackbox xdotool unclutter
Now, when you log in as your user, surf will be started automatically, and you can't escape it, so you will need to switch to a TTY (or go through ssh) if you want to disable it.
The configuration is relatively simple for a single screen setup. Edit the file /etc/surf-display and put the URL you want to display as the value of DEFAULT_WWW_URI=; this file is loaded by surf-display when it runs, otherwise the OpenBSD website is displayed.
It's still a bit rough for OpenBSD, I'd like to add xprintidle to automatically restart the session if the user has been inactive, but it's working really well already!
The question may call for a different answer depending on the context. For an operating system, I think most people want a boring one which works, and doesn't ever require fighting it.
On that ground, NixOS is extremely boring. It just works; when you don't want something anymore, remove it from its config, and it's gone. Auto upgrades are reliable, and in case of a rare issue after an update, you can still easily roll back.
In two years running the unstable version, I may have had one major issue.
NixOS can be bent in many ways, but can still get its shape back once you are done. It's very annoying to me because it's so smooth I can't find anything to repair.
This is disappointing to me, because I used to have fun with my computers by breaking them, and then learning how to repair them, which often involves various areas of knowledge, but this just never happens with NixOS.
Most people will certainly enjoy something super reliable.
Here is the story of the biggest problem I had when running NixOS. My disk was full, and I had to delete a few files to make some room, that's it. It wasn't very straightforward because it requires knowing which old profiles to delete before running the garbage collector manually, but nothing more serious ever happened.
This blog post may look like an ode to NixOS, but I'm really disappointed. Actually, now I need to find something to do on my computer which is not in the list ["fix the operating system"].
I suppose someone enjoying mechanics may feel the same when using a top-notch electric bike with high grade components made to be reliable.
As shown in my previous article about the NILFS file system, continuous snapshots are great and practical as they can save you losing data accidentally between two backups jobs.
Today, I'll demonstrate how to do something quite similar using BTRFS and regular snapshots.
For the configuration, I'll show the NixOS code using the tool btrbk to handle snapshot retention correctly.
Snapshots are not backups! It is important to understand this. If your storage is damaged, or the file system gets corrupted, or the device is stolen, you will lose your data. Backups are archives of your data that live on another device, and which can be used when the original device is lost/destroyed/corrupted. However, snapshots are superfast and cheap, and can be used to recover accidentally deleted files.
The program btrbk is simple: it requires a configuration file, /etc/btrbk.conf, defining which volumes you want to snapshot regularly, where to make the snapshots accessible and how long you want to keep them.
In the following example, we will keep the snapshots for 2 days, and create them every 10 minutes. A systemd service will be scheduled using a timer in order to run btrbk run, which handles snapshot creation and pruning. Snapshots will be made available under /.snapshots/.
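Here is a sketch of what this can look like in a NixOS configuration (the volume and subvolume names are placeholders to adapt to your layout, and the /.snapshots directory must already exist):
environment.etc."btrbk.conf".text = ''
  timestamp_format        long
  snapshot_preserve_min   2d
  snapshot_dir            .snapshots
  volume /
    subvolume home
'';

systemd.services.btrfs-snapshot = {
  path = [ pkgs.btrbk ];
  serviceConfig.Type = "oneshot";
  script = "btrbk run";
};

systemd.timers.btrfs-snapshot = {
  wantedBy = [ "timers.target" ];
  timerConfig.OnCalendar = "*:0/10";
};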
Rebuild your system, you should now have systemd units btrfs-snapshot.service and btrfs-snapshot.timer available.
As the configuration file will be at the standard location, you can use btrbk as root to manually list or prune your snapshots in case you need to, like immediately reclaiming disk space.
After publishing this blog post, I realized a NixOS module existed to simplify the setup and provide more features. Here is how the behavior of the code above can be replicated with it.
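A sketch using that module (option names as I understand the services.btrbk module, double check them against your NixOS release):
services.btrbk.instances.btrbk = {
  onCalendar = "*:0/10";
  settings = {
    timestamp_format = "long";
    snapshot_preserve_min = "2d";
    volume."/" = {
      snapshot_dir = ".snapshots";
      subvolume = "home";
    };
  };
};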
btrbk is a powerful tool: not only can you create snapshots with it, it can also stream them to a remote system with optional encryption. It can also manage offline backups on removable media and a few other non-simple cases. It's really worth taking a look.
A kiosk, in the sysadmin jargon, is a computer that is restricted to a single program so anyone can use it for the sole provided purpose. You may have seen kiosk computers here and there, often wrapped in some kind of box with just a touch screen available. ATM are kiosks, most screens showing some information are also kiosks.
What if you wanted to build a kiosk yourself? For having done a bunch of kiosk computers a few years ago, it's not an easy task, you need to think about:
how to make boot process bullet proof?
which desktop environment to use?
will the system show notifications you don't want?
can the user escape from the kiosk program?
Nowadays, we have more tooling available to ease kiosk making. There is also a distinction to be made between kiosks used for displaying things, and kiosks used interactively by users. The latter is more complicated and requires a lot of work; the former is a bit easier, especially with the new tools we will see in this article.
Using cage, we will be able to start a program in fullscreen, and only it, without having any notification, desktop, title bar etc...
In my case, I want to open firefox to open a local file used to display monitoring information. Firefox can still be used "normally" because hardening it would require a lot of work, but it's fine because I'm at home and it's just to display gauges and diagrams.
Here is the piece of code that will start the firefox window at boot automatically. Note that you need to disable any X server related configuration.
services.cage = {
enable = true;
user = "solene";
program = "${pkgs.firefox}/bin/firefox -kiosk -private-window file:///home/solene/monitoring.html";
};
Firefox has a few special flags, such as -kiosk to disable a few components, and -private-window to not mix with the current history. This is clearly not enough to prevent someone from using Firefox for whatever they want, but it's fine for handling the display of a single page reliably.
I wish I had something like Cage available back when I had to make kiosks. I can now enjoy my low power netbook just displaying monitoring graphs at home.
Today, I'll share about a special Linux file system that I really enjoy. It's called NILFS and has been imported into Linux in 2009, so it's not really a new player, despite being stable and used in production it never got popular.
In this file system, there is a unique system of continuous checkpoint creation. A checkpoint is a snapshot of your system at a given point in time, but it can be deleted automatically if some disk space must be reclaimed. A checkpoint can be transformed into a snapshot that will never be removed.
This mechanism works very well for workstations or file servers on which redundancy is nonexistent, and on which backups are done every day/week, which leaves room for unrecoverable mistakes.
NILFS is a Copy-On-Write (CoW) file system, which means that when you make a change to a file, the original chunk on the disk isn't modified but a new chunk is created with the new content; this plays well with keeping a history of the files.
From my experience, it performs very well on SSD devices on a desktop system, even during heavy I/O operation.
The continuous checkpoint creation system may be very confusing, so I'll explain how to learn about this mechanism and how to tame it.
The concept of a garbage collector may seem obvious to most people, but if it doesn't speak to you, let me give a quick explanation. In computer science, a garbage collector is a task that looks at unused memory and makes it available again.
On NILFS, as a checkpoint is created every few seconds, used data is never freed and one would run out of disk pretty quickly. This is where the nilfs_cleanerd program, the garbage collector, comes in: it looks at the oldest checkpoints and deletes them to reclaim disk space under certain conditions. Its default strategy is to keep checkpoints as long as possible, until it needs to make some room to avoid issues; it may not suit a workload creating a lot of files, and that's why it can be tuned very precisely. For most desktop users, the defaults should work fine.
The garbage collector is automatically started on a volume upon mount. You can use the command nilfs-clean to control that daemon, reload its configuration, stop it etc...
When you delete a file on a NILFS file system, it doesn't free up any disk space because the file is still available in a previous checkpoint; you need to wait for the corresponding checkpoints to be removed to have some space freed.
4. How to find the current size of your data set §
As the output of df for a NILFS file system gives you the real disk usage of your data AND the snapshots/checkpoints, it can't be used to know how much disk space is available or used by your current data.
In order to figure out the current disk usage (without accounting for older checkpoints/snapshots), we will use the command lscp to look at the number of blocks contained in the most recent checkpoint. On Linux, a block is 4096 bytes; we can then turn the total in bytes into gigabytes by dividing three times by 1024 (bytes -> kilobytes -> megabytes -> gigabytes).
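As an illustration, here is a sketch of that calculation, assuming the device is /dev/sda1 and that BLKCNT is the 6th column of the lscp output:
# list checkpoints most recent first (-r) and convert the block count to GB
lscp -r /dev/sda1 | awk 'NR==2 { printf("%.2f GB\n", $6 * 4096 / 1024 / 1024 / 1024) }'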
Let's say you deleted an important in-progress work, you don't have any backup and no way to retrieve it. Fortunately, you are using NILFS and a checkpoint was created every few seconds, so the files are still there and within reach!
The first step is to pause the garbage collector to avoid losing the files: nilfs-clean --suspend. After this, we can think slowly about the next steps without having to worry.
The next step is to list the checkpoints using the command lscp and look for the date/time at which the files still existed, preferably in their latest version, so the best is to pick the checkpoint just before the deletion.
Then, we can mount the checkpoint (let's say number 12345 for the example) on a different directory using the following command:
mount -t nilfs2 -r -o cp=12345 /dev/sda1 /mnt
If it went fine, you should be able to browse the data in /mnt to recover your files.
Once you finished recovering your files, umount /mnt and resume the garbage collector with nilfs-clean --resume.
I recently got interested into what's possible with machine learning programs, and this has been an exciting journey. Let me share about a few programs I added to my toolbox.
They all work well on NixOS, but they might require specific instructions to work, except for upscayl and whisper which are in nixpkgs. It's not that hard, but it may not be accessible to everyone.
This program analyzes the audio content of an audio or video file, and makes a transcript of it. It supports many languages; I tried it with English, French and Japanese, and it worked very reliably.
Not only does it create a transcript text file, it also generates a subtitles (.srt) file, so you can create video subtitles automatically. It has a translation function which passes all the transcript text to Google translate and gives you the result in English.
It's quite slow using a CPU, but it definitely works, using a GPU gives an 80 times speed boost.
It requires a weight file to work; weights exist in different sizes (tiny, small, base, medium, large), and each has an English-only variant that is smaller. They are downloaded automatically on demand into the ~/.cache/whisper/ directory.
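A transcription run can be as simple as the following (the file name and model choice are placeholders); the transcript and subtitle files are written in the current directory:
whisper --model medium --language French interview.mp4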
This program can be used to generate pictures from a sentence, it's actually very effective. You need a weight file which is like a database on how to interpret stuff in the sentence.
You need an account on https://huggingface.co/CompVis/stable-diffusion-v-1-4-original to download the free weight file (4 GB).
This program can be used to colorize a picture. The weights are provided. This works well without a GPU.
I tried to use it on mangas, and it works to some extent: it adds some shading and identifies things with colors, but the colorization isn't reliable and colors may be weird. However, this improves readability for me 👍🏻.
This program upscales a picture to 4 times its resolution, the result can be very impressive, but in some situation it gives a "plastic" and unnatural feeling.
I've been very impressed by it, I've been able to improve some old pictures taken with a poor phone.
Fail2ban is a wonderful piece of software: it can analyze logs from daemons and ban the offending IP addresses in the firewall. It's triggered by certain conditions, like a single IP found in too many lines matching a pattern (such as a login failure) within a certain time.
What's even cooler is that writing new filters is super easy! In this text, I'll share how to write new filters for NixOS.
Before continuing, if you are not familiar with fail2ban, here are the few important keywords to understand:
action: what to do with an IP (usually banning an IP)
filter: set of regular expressions and information used to find bad actors in logs
jail: what ties together filters and actions in a logical unit
For instance, a sshd jail will have a filter applied on sshd logs, and it will use a banning action. The jail can have more information like how many times an IP must be found by a filter before using the action.
The easiest part is to enable fail2ban. Take the opportunity to declare IPs you don't want to block, and also block IPs on all ports if it's something you want.
services.fail2ban = {
enable = true;
ignoreIP = [
"192.168.1.0/24"
];
};
# needed to ban on IPv4 and IPv6 for all ports
services.fail2ban = {
extraPackages = [pkgs.ipset];
banaction = "iptables-ipset-proto6-allports";
};
Now we can declare fail2ban jails with each filter we created (an example jail follows the list below). If you use a log file, make sure to set backend = auto, otherwise the systemd journal is used and this won't work.
The most important settings are:
filter: choose your filter using its filename minus the .conf part
maxretry: how many times an IP should be reported before taking an action
findtime: how long should we keep entries to match in maxretry
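As an illustration, here is a hedged sketch of a jail and its filter for a hypothetical service, using the services.fail2ban.jails option (the filter name, regular expression and log path are made up for the example):
environment.etc."fail2ban/filter.d/myservice.conf".text = ''
  [Definition]
  failregex = ^.*Login failed for .* from <HOST>.*$
'';

services.fail2ban.jails.myservice = ''
  enabled = true
  filter = myservice
  backend = auto
  logpath = /var/log/myservice.log
  findtime = 600
  maxretry = 3
'';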
Fail2ban is a fantastic tool to easily create filtering rules to ban the bad actors. It turned out most stock rules didn't work out of the box, or were too narrow for my use case, so I had to extend fail2ban, which fortunately was quite straightforward.
Since I switched my server from OpenBSD to NixOS, I was missing a feature: the previous server was using iblock, a program I made to block IPs connecting to a list of ports, because I don't like people knocking randomly on ports.
iblock is simple, if you connect to any port on which it's listening, you get banned in the firewall.
Iptables provides a feature adding an IP to a set if the address connects n times within s seconds. Let's just configure it so a single connection is enough to get the address banned.
For the record, a "set" is an extra iptables feature allowing many IP addresses to be grouped, like an OpenBSD PF table. We need separate sets for IPv4 and IPv6, they don't mix well.
The configuration isn't stateless, it creates a file /var/lib/ipset.conf, so if you want to make changes to the sets (like the expiration time) while they already exist, you will need to use ipset yourself.
And most importantly, because of the way the firewall service is implemented, if you don't use this file anymore, the firewall won't reload.
I've lost a lot of time figuring out why: when NixOS reloads the firewall service, it uses the new reload script which doesn't include the cleanup from stopCommand, and this fails because the NixOS service didn't expect anything in the INPUT chain.
In this case, you have to manually delete the rules in the INPUT chain for both IPv4 and IPv6, or reboot your system so it starts with a fresh set, or flush all rules in iptables and restart the firewall service.
Within operating system kernels, at least for Linux and the BSDs, there is a mechanism called "out of memory killer" which is triggered when the system is running out of memory and some room must be made to make the system responsive again.
However, in practice this OOM mechanism doesn't work well. If the system is running out of memory, it becomes totally unresponsive; sometimes the OOM killer helps, but it may take something like 30 minutes, and sometimes it stays stuck forever.
Today, I stumbled upon a nice project called "earlyoom", which is an OOM manager working in the user land instead of inside the kernel, which gives it a lot more flexibility about its actions and the consequences.
earlyoom is simple: it's a daemon running as root, using nearly no memory, that regularly polls the remaining swap and RAM; if the current levels are below both thresholds, actions are taken.
What's cool is you can tell it to prefer some processes to terminate first, and some processes to avoid as much as possible. For some people, it may be preferable to terminate a web browser or an instant messaging client first rather than their development software.
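Here is a sketch of such a command line (the thresholds and process names match the description that follows, the exact regular expressions are illustrative):
earlyoom -m 2 -s 2 --prefer '(electron|libreoffice|gimp)' --avoid '^(X|plasma.*|konsole|kwin)$'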
The command line above means that if my system has less than 2% of its RAM and less than 2% of its swap available, earlyoom will try to terminate existing programs whose binary matches electron/libreoffice/gimp etc., and avoid programs named X/Plasma.*/konsole/kwin.
For configuring it properly as a service, explanations can be found in the project README file.
This program is a pleasant surprise to me. I often run out of memory on my laptop because I'm running some software requiring a lot of memory for good reasons, and while the laptop has barely enough memory to run them, I need to close most of the other software to make everything fit. However, when I forget to close things, the system just locks up for a while, which most often requires a hard reboot. Being able to avoid this situation is a big plus for me. Of course, adding some swap space would help, but I prefer to avoid adding more swap as it's terribly inefficient and only postpones the problem.
Keeping an OpenBSD system up-to-date requires two daily operations:
updating the base system with the command: /usr/sbin/syspatch
updating the packages (if any) with the command: /usr/sbin/pkg_add -u
However, OpenBSD isn't very friendly with regard to what to do after upgrading: modified binaries should be restarted to use the new code, and a new kernel requires a reboot.
It's not useful to update if the newer binaries are never used.
I wrote a small script to automatically reboot if syspatch deployed a new kernel. Instead of running syspatch from a cron job, you can run a script with this content:
#!/bin/sh
OUT=$(/usr/sbin/syspatch)
SUCCESS=$?
if [ "$SUCCESS" -eq 0 ]
then
if echo "$OUT" | grep reboot >/dev/null
then
reboot
fi
fi
It's not much: it runs syspatch, and if the output contains "reboot", the system is rebooted.
This works well for system services, except when the binary name is different from the service name, like for prosody, in which case you must know the exact name of the binary.
But for long-lived commands like a 24/7 emacs or an IRC client, there isn't any mechanism to handle it. At best, you can email yourself the checkrestart output, or run checkrestart upon SSH login.
All the computers above used to run OpenBSD; let me explain why I migrated. It was a very complicated choice for me, because I still like OpenBSD even though I uninstalled it.
NixOS offers more software choice than OpenBSD, this is especially true for recent software, and porting them to OpenBSD is getting difficult over time.
After spending so much time with OpenBSD, I wanted to explore a whole new world; NixOS being super different, it was a good opportunity. As a professional IT worker, it's important for me to stay up to date, and the Linux ecosystem evolved a lot over the past ten years. What's funny is that OpenBSD and NixOS share similar issues, such as not being able to use binaries found on the Internet (but for different reasons).
NixOS maintenance is drastically reduced compared to OpenBSD
NixOS helps me to squeeze more from my hardware (speed, storage capacity, reliability)
systemd: I bet this one will be controversial, but since I learned how to use it, I really like it (and NixOS make it even greater for writing units)
Security is hard to measure, but it's the main argument in favor of OpenBSD; however, it is possible to enable mitigations on Linux as well, such as a hardened memory allocator or a hardened kernel. OpenBSD isn't practical for separating services, they all run in the same system, while on Linux you can easily sandbox services. In the end, the security mechanisms are different, but I feel the result is pretty similar for my threat model of protecting against script kiddies.
I give a bonus point to Linux for its ability to account CPU/Memory/Swap/Disk/network usage per user, group and process. This allows spotting unusual activity. Security is about protection, but also about being aware of intrusions, and OpenBSD isn't very good at that at the moment.
One issue I had migrating my mail server and the router was to find out what changes had been made in /etc. I was able to figure out which services were enabled, but not really all the steps done a few years ago to configure them. I had to scrape all the configuration files to see if they looked like verbatim default configuration or something I changed manually.
This is where NixOS shines for maintenance and configuration: everything is declarative, so you never touch anything in /etc. At any time, even in a few years, I'll be able to tell exactly what I need for each service, without having to dig into /etc/ and compare with default files. This is a saner approach, and it also eases migration toward another system (OpenBSD? ;) ) because I'd just have to apply these changes to configuration files.
Working with NixOS can be disappointing. Most of the system is read-only, you need to learn a new language (Nix) to configure services, you have to "rebuild" your system to make a change as simple as adding an entry in /etc/hosts, not very "Unix-like".
Your biggest friend is the man page configuration.nix which contains all the possible configurations settings available in NixOS, from Kernel choice and grub parameters, to Docker containers started at boot or your desktop environment.
The workflow is pretty easy: take your configuration.nix file, apply changes to it, and run "nixos-rebuild test" (or switch if you prefer) to try the changes. Then, you may want something more elaborate, like tracking your changes in a git or darcs repository, and start sharing pieces of configuration between machines.
But in the end, you just declare some configuration. I prefer to keep my configurations very easy to read; I still don't have any modules or many variables, the common pieces are just .nix files imported by the systems needing them. It's super easy to track and debug.
After a while, I found it very tedious to have to run nixos-rebuild on each machine to keep them up to date, so I started using the autoUpgrade module, which basically does it for you in a scheduled task.
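For reference, a minimal sketch of that module in a configuration (the schedule is an example):
system.autoUpgrade = {
  enable = true;
  dates = "daily";
  allowReboot = false;
};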
But then, I needed to centralize each configuration file somewhere, and have fun with ssh keys because I don't like publishing my configuration files publicly. This isn't optimal either: if you make a change locally, you need to push the changes, then connect to the remote host to pull them and rebuild immediately, instead of waiting for the auto upgrade process.
So, I wrote bento, which allows me to manage all the configuration files in a single place; but better than that, I can build the configurations locally to ensure they will work once shipped. I quickly added a way to track the status of each remote system to be sure they picked up and applied the changes (every 10 minutes). Later, I improved the network efficiency by using the central management computer as a local binary cache, so other systems now download packages from it locally, instead of downloading them again from the Internet.
The coolest thing ever is that I can manage offline systems such as my work laptop, I can update its configuration file in the weekend for an update or to improve the environment (it mostly shares the same configuration as my main laptop), and it will automatically pick it up when I boot it.
Moving to NixOS was a very good and pleasant experience, but I had some knowledge about it before starting. It might confuse a lot of people, and you certainly need to get into the NixOS mindset to appreciate it.
This is my work computer with a big Nix store, and some build programs involving a lot of cache files and many git repositories.
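The report below is the kind of output the compsize tool produces; the invocation is presumably as simple as this (run against the BTRFS mount point you want to inspect):
compsize -x /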
Processed 3570629 files, 894690 regular extents (1836135 refs), 2366783 inline.
Type Perc Disk Usage Uncompressed Referenced
TOTAL 61% 55G 90G 155G
none 100% 35G 35G 52G
zlib 37% 20G 54G 102G
prealloc 100% 138M 138M 67M
The output reads that the real disk usage is 61%, so compression saved 39% of the space. We have more details per compression algorithm about the content: none represents uncompressed data and zlib the files compressed using that algorithm.
Files compressed with zlib are down to 37% of their real size, which is not bad. I made a mistake when creating the BTRFS mount point: I used the zlib compression algorithm, which is quite obsolete nowadays. For the record, zlib is the library providing the "deflate" compression algorithm found in zip or gzip.
Let's change the compression to use zstd algorithm instead. This can be changed with the command btrfs filesystem defrag -czstd -r /. Basically, all files are scanned, if they can be compressed with zstd, they are rewritten on the disk with the new algorithm.
My own laptop has a huge Nix store, a lot of binary files (music, pictures), and a few hundred gigabytes of video games. I suppose it's quite a realistic and balanced environment.
Processed 1804099 files, 755845 regular extents (1295281 refs), 980697 inline.
Type Perc Disk Usage Uncompressed Referenced
TOTAL 93% 429G 459G 392G
none 100% 414G 414G 332G
zstd 34% 15G 45G 59G
prealloc 100% 92M 92M 91M
The saving due to compression is 30 GB, but this only amounts to 7% of the whole file system. That's not impressive compared to the other computer, but having an extra 30 GB for free is clearly something I enjoy.
NixOS is cool, but it's super cool because it has modules for many services, so you don't have to learn how to manage them (except if you want to run them in production), and you don't need to update them like a container image.
But this is specific to NixOS: while the modules are defined in the nixpkgs repository, you can't use them if you are not running NixOS.
There is a trick though: it's called arion, and it's able to generate containers that leverage the power of NixOS modules in them, without being on NixOS. You just need to have Nix installed locally.
Long story short, docker is a tool to manage containers but requires going through a local socket and root daemon to handle this. Podman is a docker drop-in alternative that is almost 100% compatible (including docker-compose), and can run containers in userland or through a local daemon for more privileges.
Arion works best with podman; this is because it relies on some systemd features to handle capabilities, and docker diverges from this while podman doesn't.
Arion can create different kinds of containers, using more or fewer parts of NixOS. You can run systemd services from NixOS, or a full blown NixOS and its modules; the latter is what I want to use here.
There are examples of the various modes that are provided in arion sources, but also in the documentation.
We are now going to create a container to run a Netdata instance:
Create a file arion-compose.nix
{
project.name = "netdata";
services.netdata = { pkgs, lib, ... }: {
nixos.useSystemd = true;
nixos.configuration = {
  boot.tmpOnTmpfs = true;
  services.netdata.enable = true;
};
# required for the service, arion tells you what is required
service.capabilities.SYS_ADMIN = true;
# required for network
nixos.configuration.systemd.services.netdata.serviceConfig.AmbientCapabilities =
lib.mkForce [ "CAP_NET_BIND_SERVICE" ];
# bind container local port to host port
service.ports = [
"8080:19999" # host:container
];
};
}
And a file arion-pkgs.nix
import <nixpkgs> {
system = "x86_64-linux";
}
And then, run arion up -d, you should have Netdata reachable over http://localhost:8080/ , it's managed like any docker / podman container, so usual commands work to stop / start / export the container.
Of course, this example is very simple (I choose it for this reason), but you can reuse any NixOS module this way.
If you change the network parts, you may need to delete the previous network created in docker. Just use docker network ls to find the id, and docker network rm to delete it, then run arion up -d again.
Arion is a fantastic tool that allows reusing NixOS modules anywhere. These modules are a huge part of NixOS's appeal, and being able to use them outside of it is a good step toward a ubiquitous Nix, not only to build programs but also to run services.
This program is a simple service to run on a computer: it will automatically gather a ton of metrics and make them easily available over the local TCP port 19999. You just need to run Netdata and nothing else, and you will have every metric you can imagine from your computer, and some explanations for each of them!
That's pretty cool because Netdata is very efficient: it uses nearly no CPU while gathering a few thousand metrics every few seconds, it is memory efficient, and it can be constrained to a dozen megabytes.
While you can export its metrics to something like Graphite or Prometheus, you lose the nice display, which is absolutely a blast compared to Grafana (in my opinion).
Update: as pointed out by a reader (thanks!), it's possible to connect Netdata instances to only one used for viewing metrics. I'll investigate this soon.
Netdata also added some machine learning anomaly detection; it's simple and doesn't use many resources or require a GPU, it only builds statistical models to be able to report if some metrics have an unusual trend. It takes some time to gather enough data, and after a few days it starts to work.
Here is a simple configuration on NixOS to connect a headless node without persistency, sending everything to a main Netdata server which stores the data and also displays it.
You need to generate an UUID with uuidgen, replace UUID in the text with the result. It can be per system or shared by multiple Netdata instances.
My networks are 10.42.42.0/24 and 10.43.43.0/24, I'll allow everything matching 10.* on the receiver, I don't open port 19999 on a public interface.
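A sketch of the idea, using the services.netdata.configDir option and Netdata's stream.conf format (the destination address is a placeholder, UUID is to be replaced as explained above, and pkgs is assumed to be in scope):

# on the headless sender node
services.netdata = {
  enable = true;
  config.global."memory mode" = "none";
  configDir."stream.conf" = pkgs.writeText "stream.conf" ''
    [stream]
      enabled = yes
      destination = 10.42.42.1:19999
      api key = UUID
  '';
};

# on the receiving server
services.netdata = {
  enable = true;
  configDir."stream.conf" = pkgs.writeText "stream.conf" ''
    [UUID]
      enabled = yes
      allow from = 10.*
  '';
};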
The Netdata company started a "cloud" offer that is free; they plan to keep it free and to propose more services to paying subscribers. The free plan is just a convenience to see metrics from multiple nodes in the same place: they don't store any metrics apart from metadata (server name, OS version, kernel, etc.), and when you look at your metrics, they just relay them from your server to your web browser without storing the data.
The free cloud plan also offers a correlating feature, which I didn't have the opportunity to try yet, and email alerting when an alarm is triggered.
I strongly dislike this method, as I'm not a huge fan of downloading scripts to run as root that are not provided by my system.
When you want to add a new node, you will be given a long command line and a token, keep that token somewhere. NixOS Netdata package offers a script named netdata-claim.sh (which seems to be part of Netdata source code) that will generate a pair of RSA keys, and look for the token in a file.
Netdata is really a wonderful tool; ideally I'd like it to replace the whole Grafana + storage + agent stack, but it doesn't provide persistent centralized storage compatible with its dashboard. I'm going to experiment with their Netdata cloud service; I'm not sure it would add value for me, and while they have a very correct data privacy policy, I prefer to self-host everything.
Hello 👋🏻, it's been a long time since I last had to look at monitoring servers. I set up a Grafana server six years ago, and I was using Munin for my personal servers.
However, I recently moved my server to a small virtual machine which has CPU and memory constraints (1 core / 1 GB of memory), and Munin didn't work very well. I was curious to learn if the Grafana stack changed since the last time I used it, and YES.
There is that project named Prometheus which is used absolutely everywhere, it was time for me to learn about it. And as I like to go against the flow, I tried various changes to the industry standard stack by using VictoriaMetrics.
In this article, I'm using NixOS configuration for the examples, however it should be obvious enough that you can still understand the parts if you don't know anything about NixOS.
VictoriaMetrics is a Prometheus drop-in replacement that is a lot more efficient (faster and using fewer resources), and which also provides various APIs such as Graphite or InfluxDB. It's the component storing the data. It comes with various programs, like the VictoriaMetrics agent, to replace various parts of Prometheus.
Update: a dear reader showed me that VictoriaMetrics can scrape remote agents without the VictoriaMetrics agent, which reduces the memory usage and configuration required.
Prometheus is a time series database, which also provides a collecting agent named Node Exporter. It's also able to pull (scrape) data from remote services offering a Prometheus API.
NixOS is an operating system built with the Nix package manager, it has a declarative approach that requires to reconfigure the system when you need to make a change.
4. Setup 2: VictoriaMetrics + node-exporter in pull model §
In this setup, a VictoriaMetrics server is running on a server along with Grafana. A VictoriaMetrics agent is running locally to gather data from remote servers running node_exporter.
Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB, VictoriaMetrics 30 MB and its agent 13.8 MB.
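For illustration, here is a minimal sketch of the server side of such a setup, simplified as per the update above (VictoriaMetrics scraping a node_exporter directly, no agent); option and flag names are how I understand the victoriametrics module and its -promscrape.config flag, so double check them, and pkgs is assumed to be in scope:
services.prometheus.exporters.node.enable = true;

services.victoriametrics = {
  enable = true;
  extraOptions = [
    "-promscrape.config=${pkgs.writeText "scrape.yaml" ''
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: [ "127.0.0.1:9100" ]
    ''}"
  ];
};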
5. Setup 3: VictoriaMetrics + node-exporter in push model §
In this setup, a VictoriaMetrics server is running on a server along with Grafana, on each server node_exporter and VictoriaMetrics agent are running to export data to the central VictoriaMetrics server.
Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB, VictoriaMetrics 30 MB and its agent 13.8 MB, which is exactly the same as the setup 2, except the VictoriaMetrics agent is running on all remote servers.
In this setup, a VictoriaMetrics server is running on a server along with Grafana, servers are running Collectd sending data to VictoriaMetrics graphite API.
Running it on my server, Grafana takes 67 MB, VictoriaMetrics 30 MB and Collectd 172 kB (yes).
The server requires VictoriaMetrics to run exposing its graphite API on port 2003.
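With the NixOS module, that presumably boils down to a single extra flag (flag name taken from the VictoriaMetrics documentation):
services.victoriametrics.extraOptions = [ "-graphiteListenAddr=:2003" ];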
Note that in Grafana, you will have to escape "-" characters using "\-" in the queries. I also didn't find a way to automatically discover hosts in the data to use variables in the dashboard.
UPDATE: Using the write_tsdb exporter in collectd, and exposing a TSDB API with VictoriaMetrics, you can set a label for each host, and then use the query "label_values(status)" in Grafana to automatically discover hosts.
The first section, named "#!/bin/introduction", is on purpose and not a mistake. It felt super fun when I started writing the article, and I wanted to keep it that way.
The Collectd setup is the most minimalistic while still powerful, but it requires a lot of work to make the dashboards and configure the plugins correctly.
bento is now a single script, easy to package and add to $PATH. Before that, it was a set of scripts with a shared shell file containing functions, not very practical…
the hosts directory can contain directories with flakes in them, which may contain multiple hosts; this is now handled. If there is no flake, then the machine is named after the directory name
bento supports rollbacks: if something goes wrong during the deployment, the previous system is rolled back
enhancement to the status output when you don't have a flaked system; as builds are not reproducible (without effort), we can't really compare local and remote builds
machine local version remote version state time
------- --------- ----------- ------------- ----
interbus non-flakes 1dyc4lgr 📌 up to date 💚 (build 11s)
kikimora 996vw3r6 996vw3r6 💚 sync pending 🚩 (build 5m 53s) (new config 2m 48s)
nas r7ips2c6 lvbajpc5 🛑 rebuild pending 🚩 (build 5m 49s) (new config 1m 45s)
t470 b2ovrtjy ih7vxijm 🛑 rollbacked 🔃 (build 2m 24s)
x1 fcz1s2yp fcz1s2yp 💚 up to date 💚 (build 2m 37s)
network measurements showed that polling for configuration changes costs 5.1 kB IN and OUT
many checks have been added for when something goes wrong
At work, we have a weekly "knowledge sharing" meeting, yesterday I talked about the state of NixOS deployments tools.
I had to look at all the tools we currently have at hand before starting my own, so it made sense to share all what I found.
This is a real topic, it doesn't make much sense to use regular sysadmins tools like ansible / puppet / salt etc... on NixOS, we need specific tools, and there is currently a bunch of them, and it can be hard to decide which one to use.
I was looking for a simple way to prevent pushing a specific git branch. A few searches on the Internet didn't give me good results, so let me share a solution.
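One way to do it, sketched here with a hypothetical branch named private, is a pre-push hook saved as .git/hooks/pre-push (and made executable):
#!/bin/sh
# git feeds this hook lines of "<local ref> <local sha> <remote ref> <remote sha>"
while read local_ref local_sha remote_ref remote_sha
do
    if [ "$remote_ref" = "refs/heads/private" ]
    then
        echo "refusing to push the branch private" >&2
        exit 1
    fi
done
exit 0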
Project update: the report is now able to compare whether the remote server is using the NixOS version we built locally. This is possible as NixOS builds are reproducible: I get the same result locally and on the remote system.
The tool is getting into better shape; the code received extra checks in a lot of places.
A bit later (blog post update), I added the possibility for the user to trigger the update.
With systemd, it's possible to trigger a command upon connecting to a socket. I made the bento systemd service listen on port TCP/51337; a connection starts the service bento-update.service and displays the output to the TCP client.
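This is not bento's actual unit definition, but a hedged illustration of the systemd mechanism involved: a socket unit accepting TCP connections and spawning a per-connection service whose output goes back to the client.
systemd.sockets.bento-update = {
  wantedBy = [ "sockets.target" ];
  socketConfig = {
    ListenStream = "127.0.0.1:51337";
    Accept = "yes";
  };
};

systemd.services."bento-update@" = {
  serviceConfig = {
    # placeholder command; bento runs its own update script here
    ExecStart = "/run/current-system/sw/bin/nixos-rebuild switch";
    StandardInput = "socket";
    StandardOutput = "socket";
    StandardError = "socket";
  };
};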
This totally works in the web browser, it's now possible to create a bookmark that just starts the update and give instant feedback about the update process. This will be particularly useful in case of a debug phone session to ask the remote person to trigger an update on their side instead of waiting for a timer.
It is now possible to differentiate the "not up to date" state into two categories:
the bento scripts were updated but the NixOS version didn't change; this is called "sync pending". Such a change could be distributing an updated file giving a new address for the remote server, so we can ensure they all received it.
the local NixOS version differs from the remote version, so a rebuild is required; this is called "rebuild pending"
The "sync pending" state is very fast, it only needs to copy the files, but won't rebuild anything.
machine local version remote version state time
------- --------- ----------- ------------- ----
kikimora 996vw3r6 996vw3r6 💚 sync pending 🚩 (build 5m 53s) (new config 2m 48s)
nas r7ips2c6 lvbajpc5 🛑 rebuild pending 🚩 (build 5m 49s) (new config 1m 45s)
t470 ih7vxijm ih7vxijm 💚 up to date 💚 (build 2m 24s)
x1 fcz1s2yp fcz1s2yp 💚 up to date 💚 (build 2m 37s)
Bento received a new feature: it is now able to report whether the remote hosts are up-to-date, how much time passed since their last update, and, if they are not up-to-date, how long it has been since the configuration change.
As Bento is using SFTP, it's possible to deposit information on the central server; I'm currently using the log files from the builds, and comparing their date to the date of the configuration.
This will be very useful to track deployments across the fleet. I plan to also check the version expected for a host and make hosts report their version after an update; this should be possible for flake systems at least.
I pushed a new version affecting all hosts on the SFTP server, and run the status report regularly.
This is the output 15 seconds after making the changes available.
status of kikimora not up to date 🚩 (last_update 15m 6s ago) (since config change 15s ago)
status of nas not up to date 🚩 (last_update 12m ago) (since config change 15s ago)
status of t470 not up to date 🚩 (last_update 16m 9s ago) (since config change 15s ago)
status of x1 not up to date 🚩 (last_update 16m 24s ago) (since config change 14s ago)
This is the output after two systems picked up the changes and reported a success.
status of kikimora not up to date 🚩 (last_rebuild 16m 46s ago) (since config change 1m 55s ago)
status of nas up to date 💚 (last_rebuild 8s ago)
status of t470 not up to date 🚩 (last_rebuild 17m 49s ago) (since config change 1m 55s ago)
status of x1 up to date 💚 (last_rebuild 4s ago)
This is the output after all systems reported a success.
status of kikimora up to date 💚 (last_rebuild 0s ago)
status of nas up to date 💚 (last_rebuild 1m 24s ago)
status of t470 up to date 💚 (last_rebuild 1m 2s ago)
status of x1 up to date 💚 (last_rebuild 1m 20s ago)
secure 🛡️: each client can only access its own configuration files (ssh authentication + sftp chroot)
efficient 🏂🏾: configurations can be built on the central management server to serve binary packages if it is used as a substituters by the clients
organized 💼: system administrators have all configuration files in one repository to ease management
peace of mind 🧘🏿: configurations validity can be verified locally by system administrators
smart 💡: secrets (arbitrary files) can (soon) be deployed without storing them in the nix store
robustness in mind 🦾: clients just need to connect to a remote ssh, there are many ways to bypass firewalls (corkscrew, VPN, Tor hidden service, I2P, ...)
extensible 🧰 🪡: you can change every component, if you prefer using GitHub repositories to fetch configuration files instead of a remote sftp server, you can change it
for all NixOS 💻🏭📱: it can be used for remote workstations, smartphones running NixOS, servers in a datacenter
The project is still bare right now, I started it yesterday and I have many ideas to improve it:
package it to provide commands in $PATH instead of adding scripts to your config repository
add a rollback feature in case an upgrade loses connectivity
upgrades can deposit a log file on the remote sftp server
upgrades could be triggered by the user by accessing a local socket, like opening a web page in a web browser; if it returned output, that would be better
provide more useful modules in the utility nix file (automatically use the host as a binary cache for instance)
have local information on how to ssh to the client to ease the rebuild trigger (like an SSH file containing the ssh command line)
a way to tell a client (when using flakes) to try to update flakes every time even if no configuration changed, to keep them up to date
Let's continue my series trying to design a NixOS fleet management.
Yesterday, I figured out 3 solutions:
periodic data checkout
pub/sub - event driven
push from central management to workstations
I retained only solutions 2 and 3 because they were the only ones providing instantaneous updates. However, I realized we could have a hybrid setup, because I didn't want to throw the KISS solution 1 away.
In my opinion, the best we can create is a hybrid setup of 1 and 3.
In this setup, all workstations will connect periodically to the central server to look for changes, and then trigger a rebuild. This simple mechanism can be greatly extended per-host to fit all our needs:
periodicity can be configured per-host
the rebuild service can be triggered on purpose manually by the user clicking on a button on their computer
the rebuild service can be triggered on purpose manually by a remote sysadmin having access to the system (using a VPN), this partially implements solution 3
the central server can act as a binary cache if configured per-host, it can be used to rebuild each configuration beforehand to avoid rebuilding on the workstations, this is one of Cachix Deploy arguments
using ssh multiplexing, remote checks for the repository can have a reduced bandwidth usage for maximum efficiency
a log of the update can be sent to the sftp server
the sftp server can be used to check connectivity and activate a rollback to previous state if you can't reach it anymore (like "magic rollback" with deploy-rs)
the sftp server is a de-facto available target for potential backups of the workstation using restic or duplicity
The mechanism is so simple, it could be adapted to many cases, like using GitHub or any data source instead of a central server. I will personally use this with my laptop as a central system to manage remote servers, which is funny as my goal is to use a server to manage workstations :-)
One important issue I didn't approach in the previous article is how to distribute the configuration files:
each workstation should be restricted to its own configuration only
how to send secrets, we don't want them in the nix-store
should we use flakes or not? Better to have the choice
the sysadmin on the central server should manage everything in a single git repository and be able to use common configuration files across the hosts
Addressing each of these requirements is hard, but in the end I've been able to design a solution that is simple and flexible:
The workflow is the following:
the sysadmin writes configuration files for each workstation in a dedicated directory
the sysadmin creates a symlink to a directory of common modules in each workstation directories
after a change, the sysadmin runs a program that will copy each workstation configuration into a directory in a chroot, symlinks have to be resolved
OPTIONAL: we can dry-build each host configuration to check if they work
OPTIONAL: we can build each host configuration to provide them as a binary cache
The directory holding the configuration is likely to have a flake.nix file (which can be a symlink to something generic), a configuration file, a directory with a hierarchy of files to copy as-is into the system (for things like secrets or configuration files not managed by NixOS), and a symlink to a directory of nix files shared by all hosts.
The NixOS clients will connect to their dedicated users over ssh using their private keys; this allows separating each client on the host system and restricting what they can access using the SFTP chroot feature.
A diagram of a real world case with 3 users would look like this:
The setup is very easy and requires only a few components:
a program to translate the configuration repository into separate directories in the chroot
some NixOS configuration to create the SFTP chroots: we just need a nix file with a list of pairs of values containing "hostname" "ssh-public-key" for each remote host, which will automate the creation of the ssh configuration file
a script on the user side that connects, looks for changes and runs nixos-rebuild if something changed; maybe rclone could be used to "sync" over SFTP efficiently
a systemd timer for the user script
a systemd socket triggering the user script, so people can just open http://localhost:9999 to trigger the socket and forcing the update, create a bookmark named "UPDATE MY MACHINE" on the user system
I absolutely love this design, it's simple, and each piece can easily be replaced to fit one's need. Now, I need to start writing all the bits to make it real, and offer it to the world 🎉.
There is a NixOS module named autoUpgrade; I'm aware of its existence, but while it's absolutely perfect for the average user workstation or server, it's not practical for managing a fleet of NixOS machines efficiently.
I'm not a consumer of proprietary social networks, but sometimes I have to access content hosted there, and in that case I prefer to use a front-end reimplementation of the service.
These front-ends are network services that act as a proxy to the proprietary service, and offer a different (usually cleaner) interface that also removes tracking / ads.
In your web browser, you can use the extension Privacy Redirect to automatically be redirected to such front-ends. But even better, you can host them locally instead of using public instances that may be unresponsive, on NixOS it's super easy.
As of September 2022, libreddit, invidious and nitter have NixOS modules to manage them.
The following pieces of code can be used in your NixOS configuration file (/etc/nixos/configuration.nix is the default location) before running "nixos-rebuild" to use the new configuration.
I focus on running the services locally and not exposing them on the network; you will need a bit more configuration to add HTTPS and to tune the performance if you need more users.
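A hedged sketch of such a configuration (the exact option names can vary with your NixOS release, invidious may need extra settings such as a domain depending on the module version, and the ports are arbitrary):
services.libreddit = {
  enable = true;
  port = 8081;
};

services.nitter = {
  enable = true;
  server.port = 8082;
};

services.invidious = {
  enable = true;
  port = 8083;
};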
I really enjoy these front-ends; they use a lot fewer resources when browsing these websites. I prefer to run them locally for performance reasons.
If you run such instances on your local computer, this doesn't help with regard to privacy. If you care about privacy, you should use public instances, or host your own public instance so many different users are behind the same service, which makes profiling harder. But if you want to host such an instance, you may need to tweak the performance, and add a reverse proxy and a valid TLS certificate.
I have a grand project in my mind, and I need to think about it before starting any implementation. The blog is a right place for me to explain what I want to do and the different solutions.
It's related to NixOS. I would like to ease the management of a fleet of NixOS workstations that could be anywhere.
This could be useful for companies using NixOS for their employees, to manage all the workstations remotely, but also for people who may manage NixOS systems in various places (cloud, datacenter, house, family computers).
With this central management, it makes sense not to give your users root access: they would have to call their technical support to ask for a change, and their system could be updated quickly to reflect the request. This can be super useful for remote family computers when they need an extra program not currently installed, given that you took the responsibility of handling their system...
With NixOS, this setup totally makes sense: you can potentially reproduce users' bugs as you have their configuration, stage new changes for testing, and users can roll back to a previous working state in case of a big regression.
The Cachix company made this possible before I figured out a solution. It's still not too late to propose an open source alternative.
The purpose of this project is to have a central management system on which you keep the configuration files for all the NixOS machines around, and to allow the administrator to make the remote NixOS systems pick up the new configuration as soon as possible when required.
We can imagine three different implementations at the highest level:
a scheduled job on each machine looking for changes in the source. The source could be a git repository, a tarball or anything that could be used to carry the configuration.
NixOS systems could connect to something like a pub/sub and wait for an event from the central management to trigger a rebuild; the event may or may not contain information / sources.
the central management system could connect to the remote NixOS to trigger the build / push the build
These designs have all pros and cons. Let's see them more in details.
this can lead to privacy issues as you know when each host is connected
this adds complexity to the server
this adds complexity on each client
firewalls usually don't like long-lived connections; an HTTPS-based solution would help traverse firewalls
2.3. Solution 3 - The central management pushes the updates to the remote systems §
In this scenario, the NixOS system would be reachable over a protocol allowing remote command execution, such as SSH. The central management system would run a remote upgrade on it, or push the changes using tools like deploy-rs, colmena, morph or similar...
offline systems may be complicated to update, you would need to try to connect to them often until they are reachable
you can connect to the remote machine and potentially spy on the user. In the alternatives above, you could potentially achieve the same by reconfiguring the computer to allow this, but it would have to be done on purpose
I tried to state the pros and cons of each setup, but I can't see a clear winner. However, I'm not convinced by Solution 1 as you don't have any feedback or direct control over the systems, so I prefer to abandon it.
Solutions 2 and 3 are still in competition; we basically end up with a choice between a PUSH and a PULL workflow.
This blog post is a republication of the article I published on my employer's blog under CC BY 4.0. I'm grateful to be allowed to publish NixOS related content there, but also to be able to reuse it here!
After the publication of the original post, the NixOS wiki got updated to contain most of this content; I added some extra bits for the specific use case of "options for the non-specialisation that shouldn't be inherited by specialisations" that wasn't covered in this text.
I often wished to be able to define different boot entries for different uses of my computer, be it for separating professional and personal use, testing kernels or using special hardware. NixOS has a unique feature that solves this problem in a clever way — NixOS specialisations.
A NixOS specialisation is a mechanism to describe additional boot entries when building your system, with specific changes applied on top of your non-specialised configuration.
You may have hardware occasionally connected to your computer, and some of these devices may require incompatible changes to your day-to-day configuration. Specialisations can create a new boot entry you can use when starting your computer with your specific hardware connected. This is common for people with external GPUs (Graphical Processing Unit), and the reason why I first used specialisations.
With NixOS, when I need my external GPU, I connect it to my computer and simply reboot my system. I choose the eGPU specialisation in my boot menu, and it just works. My boot menu looks like the following:
You can also define a specialisation which will boot into a different kernel, giving you a safe opportunity to try a new version while keeping a fallback environment with the regular kernel.
We can push the idea further by using a single computer for professional and personal use. Specialisations can have their own users, services, packages and requirements. This would create a hard separation without using multiple operating systems. However, by default, such a setup would be more practical than secure. While your users would only exist in one specialisation at a time, both users’ data are stored on the same partition, so one user could be exploited by an attacker to reach the other user’s data.
In a follow-up blog post, I will describe a secure setup using multiple encrypted partitions with different passphrases, all managed using specialisations with a single NixOS configuration. This will be quite awesome :)
As an example, we will create two specialisations, one having the user Chani using the desktop environment Plasma, and the other with the user Paul using the desktop environment Gnome. Auto login at boot will be set for both users in their own specialisations. Our user Paul will need an extra system-wide package, for example dune-release. Specialisations can use any argument that would work in the top-level configuration, so we are not limited in terms of what can be changed.
After applying the changes, run "nixos-rebuild boot" as root. Upon reboot, in the GRUB menu, you will notice two extra boot entries named “chani” and “paul” just above the last boot entry for your non-specialised system.
Rebuilding the system also creates scripts to switch from one configuration to another, and specialisations are no exception.
Run "/nix/var/nix/profiles/system/specialisation/chani/bin/switch-to-configuration switch" to switch to the chani specialisation.
When using the switch scripts, keep in mind that you may not have exactly the same environment as if you rebooted into the specialisation as some changes may be only applied on boot.
Specialisations are a perfect solution to easily manage multiple boot entries with different configurations. It is the way to go when experimenting with your system, or when you occasionally need specific changes to your regular system.
I recently switched my home "NAS" (single disk!) to BTRFS, it's a different ecosystem with many features and commands, so I had to write a bit about it to remember the various possibilities...
BTRFS is an advanced file-system supported in Linux, somewhat comparable to ZFS.
A BTRFS file-system can be made of multiple disks and aggregated in mirror or "concatenated", it can be split into subvolumes which may have specific settings.
Snapshots and quotas apply to subvolumes, so it's important to think ahead when creating BTRFS subvolumes; for most cases, one may want a subvolume for /home and one for /var.
It's possible to take an instant snapshot of a subvolume, this can be used as a backup. Snapshots can be browsed like any other directory. They exist in two flavors: read-only and writable. ZFS users will recognize writable snapshots as "clones" and read-only as regular ZFS snapshots.
Snapshots are an effective way to make a backup and to roll back changes in a second.
The raw file-system data can be sent / received over the network (or anything supporting a pipe) to allow incremental backups. This is a very effective way to do incremental backups without having to scan the entire file-system each time you run your backup.
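Here is a sketch of both features combined for a backup; the subvolume, snapshot names and the remote host are examples, not taken from my setup:
# read-only snapshot of the /home subvolume (required for btrfs send)
btrfs subvolume snapshot -r /home /home/.snapshots/home-2022-09-01
# full send of that snapshot to a remote BTRFS file-system over SSH
btrfs send /home/.snapshots/home-2022-09-01 | ssh backup-host btrfs receive /backup
# later, take a new snapshot and only send the differences against the previous one
btrfs subvolume snapshot -r /home /home/.snapshots/home-2022-09-02
btrfs send -p /home/.snapshots/home-2022-09-01 /home/.snapshots/home-2022-09-02 | ssh backup-host btrfs receive /backup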
I covered deduplication with bees, but one can also use the program "duperemove" (which works on XFS too!). They work a bit differently, but in the end they have the same purpose: bees operates on the whole BTRFS file-system, while duperemove operates on files, so they address different use cases.
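For example, a one-shot deduplication pass over a directory could look like this (the path is an example):
# scan /home recursively (-r) and actually submit the duplicated extents
# for deduplication (-d) instead of only reporting them
duperemove -dr /home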
BTRFS supports on-the-fly compression per subvolume, meaning the content of each file is stored compressed and decompressed on demand. Depending on the files, this can result in better performance because you store less content on the disk and are less likely to be I/O bound, and it also improves storage efficiency. This is really content dependent: you can't compress already-compressed binary files like pictures/videos/music, but if you have a lot of text and source files, you can achieve great ratios.
From my experience, compression is always helpful for a regular user workload, and newer algorithms are smart enough not to compress binary data that wouldn't yield any benefit.
There is a program named compsize that reports compression statistics for a file/directory. It's very handy to know if the compression is beneficial and to which extent.
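A sketch of both enabling compression and measuring it; the mount point, directory and algorithm are examples:
# enable zstd compression for anything newly written under a mount point
# (this is typically set permanently in /etc/fstab or the mount options)
mount -o remount,compress=zstd /home
# or force the compression property on a specific directory / subvolume
btrfs property set /home/solene/projects compression zstd
# report how well the stored data is compressed
compsize /home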
Fragmentation is a real thing and not specific to Windows; it matters a lot for mechanical hard drives but not really for SSDs.
Fragmentation happens when you create files on your file-system, and delete them: this happens very often due to cache directories, updates and regular operations on a live file-system.
When you delete a file, this creates a "hole" of free space; after some time, you may want to gather all these small parts of free space into big chunks, which matters for mechanical disks as the physical location of data is tied to the raw performance. The defragmentation process simply reorganizes data physically to order file chunks and free space into contiguous blocks.
Defragmentation can also be used to force compression in a subvolume, for instance if you want to change the compression algorithm or enabled compression after the files were already saved.
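For example, to recompress a whole directory tree with zstd while defragmenting (the path is an example):
# recursively (-r) defragment /home, show progress (-v)
# and rewrite the files compressed with zstd (-czstd)
btrfs filesystem defragment -r -v -czstd /home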
The scrubbing feature is one of the most valuable features provided by BTRFS and ZFS. Each file in these file-systems is associated with its checksum in some metadata index, which means you can actually check each file's integrity by comparing its current content with the checksum known in the index.
Scrubbing costs a lot of I/O and CPU because you need to compute the checksum of each file, but it's a guarantee for validating the stored data. In case of a corrupted file, if the file-system is composed of multiple disks (raid1 / raid5), it can be repaired from the mirrored copies; this should work most of the time because such file corruption is often related to the drive itself, so other drives shouldn't be affected.
Scrubbing can be started / paused / resumed, which is handy if you need to run heavy I/O and don't want the scrubbing process to slow it down. While the scrub commands can take a device or a path, the path parameter is only used to find the related file-system; it won't just scrub the files in that directory.
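The corresponding commands are short; the path only designates the file-system to scrub:
# start a scrub on the file-system containing /
btrfs scrub start /
# check the progress and the number of errors found so far
btrfs scrub status /
# "cancel" actually pauses the scrub, it can be picked up again with resume
btrfs scrub cancel /
btrfs scrub resume /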
When you aggregate multiple disks into one BTRFS file-system, some files are written to one disk and others to another; after a while, one disk may contain more data than the others.
The rebalancing purpose is to redistribute data across the disks more evenly.
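The balance subcommand does this; the mount point is an example:
# redistribute data across the devices of the file-system
btrfs balance start /mnt/nas
# or only rework the least filled block groups, which is much faster
btrfs balance start -dusage=50 /mnt/nas
# follow the progress from another terminal
btrfs balance status /mnt/nas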
You can't create a swap file on a BTRFS disk without a tweak. You must create the file in a directory with the special attribute "no COW" using "chattr +C /tmp/some_directory", then you can move it anywhere as it will inherit the "no COW" flag.
If you try to use a swap file with COW enabled on it, swapon will report a weird error, but you get more details in the dmesg output.
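A sketch of the whole procedure; the directory and the size are examples:
# create a directory carrying the "no COW" attribute
mkdir /swap
chattr +C /swap
# files created inside inherit the attribute
dd if=/dev/zero of=/swap/swapfile bs=1M count=4096
chmod 600 /swap/swapfile
mkswap /swap/swapfile
swapon /swap/swapfile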
It's possible to convert an ext2/3/4 file-system into BTRFS; obviously it must not be in use during the conversion. The process can be rolled back up to a certain point, for instance until you defragment or rebalance.
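A sketch with an example device:
# convert an unmounted ext4 partition in place
btrfs-convert /dev/sdb1
# roll back to the original ext4 file-system, possible as long as the
# saved "ext2_saved" subvolume created by the conversion is still there
btrfs-convert -r /dev/sdb1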
I occasionally get feedback about my blog; most of the time people are impressed with the rate of publication when they see the index page. I'm surprised it appears to be a huge effort, so I'll explain how I work on my blog.
I rarely spend more than 40 minutes on a blog post, the average blog post takes 20 minutes. Most of them share something I fiddled with in the day or week, so the topic is still fresh for me. The content of the short articles often consists of dumping a few commands / configuration bits I used, and writing a bit of text around them so the reader knows what to expect from the article, how to use the content and what the point of the topic is.
It's important to keep track of commands/configuration beforehand, so when I'm trying something new, and I think I could write about it, I keep a simple text file somewhere with the few commands I typed or traps I encountered.
My fear with regard to the blog is running out of ideas: this would mean I would have boring days and nothing to write about. Sometimes I look at package repository updates in different Linux distributions, and visit the project homepages of the names unknown to me. This is a fun way to discover new programs / tools and ideas. When something looks interesting, I write its name down somewhere and may come back to it later. I also write down any idea I get about some unusual setup I would like to try; if I come to try it, it will certainly end up as a new blog entry to share my experience.
There are two rules for the blog: having fun, and not lying / being accurate. Having fun? Yes, writing can be fun, organizing ideas and sharing them is a cool exercise. Watching the result is fun. Thinking too much about perfection is not fun.
I prefer to write most of the blog posts in one shot, quickly proofread and publish, and be done with it. If I save a blog post as a draft, I may not pick it up quickly, and it's not fun to get into the context to continue it. I occasionally abandon some posts because of that, or simply delete the file and start over.
Sometimes it happens that I'm wrong when writing; in that case I prefer to remove the blog post rather than keeping it online at all costs. When I know a text is terribly outdated, I either remove it from the index or update it.
I don't use any analytics services and I do the blog for free; the only incentive is to have fun and to know it will certainly help someone looking for information.
This website is generated with a custom blog generator I wrote a few years ago (cl-yag); the workflow to use it is very simple and it never fails me:
write the blog file in the format I want, I currently use GemText but in the past some blog posts were written in org-mode, man page or markdown
add an entry in the list of articles, this contains all the metadata such as the title, date, tags and description for the open graph protocol (optional)
run "make"
wait 30s, it's online on HTTP / gopher / Gemini
The program is really fast despite generating all the files every time: the "raw text to HTML" content is cached and reused when wrapping the HTML in the blog layout, the Gemini version is published as-is, and the gopher files are processed by a Perl script rewriting all the links and wrapping the text (which takes a while).
Before publishing, I read my text and run a spellcheck program on it; my favorite is LanguageTool because it finds so many more mistakes than aspell, which only finds obvious typos.
Some blog posts are more elaborate: they often describe a complex setup and I need to ensure readers can reproduce all the steps and get the same results as me. This kind of blog post takes a day to write; they often require using a spare computer for experimentation, formatting, installing, downloading things, adjusting the text, starting over because I changed the text...
If you want to publish a blog, my advice would be to have fun, to use a blog/website generator that doesn't get in your way, and to not be afraid to get started. It can be scary at first to publish texts on the wild Internet, and you may fear being wrong, but it happens; accept it, learn from your mistakes and improve for the next time.
There is a cool project related to NixOS, called Peerix. It's a local daemon exposed as a local substituter (a server providing binary packages) that will discover other Peerix daemons on the local network and use them as a source of binary packages.
Peerix is a simple way to reuse packages already installed somewhere on the network instead of downloading them again. Packages delivered by Peerix substituters are signed with a private key, so you need to import each computer's public key before being able to download/use its packages. While this can be cumbersome, it is also mandatory to prevent someone on the network from spoofing packages.
Peerix should be used wisely, because secrets in your store could be leaked to others.
There is nothing special to do, when you update your system, or use nix-shell, the nix-daemon will use the local Peerix substituter first which will discover other Peerix instances if any, and will use them when possible.
You can check the logs of the peerix daemons using "journalctl -f -u peerix.service" on both systems.
While Peerix isn't a big project, it has a lot of potential to help NixOS users with multiple computers use their bandwidth more efficiently, but also save build time. If you build the same project (with the same inputs) on one of your computers, you can pull the result from it on the others.
Dear readers, given the popular demand for a RSS feed with HTML in it (which used to be the default), I modified the code to generate a new RSS file using HTML for its content.
I submitted a change to the nix package manager last week, and it got merged! It's now possible to define a bandwidth speed limit in the nix.conf configuration file.
This kind of limit setting is very important for users who don't have a fast Internet access; it allows the service to download packages while keeping the network usable meanwhile.
Unfortunately, we need to wait for the next Nix version to be able to use it; fortunately it's easy to override the package definition to use the merge commit as a new version for nix.
Let's see how to configure NixOS to use a newer Nix version from git.
Minecraft is quite slow and unoptimized; fortunately, using the mod "Sodium", you get access to more advanced video settings that allow reducing the computer's power usage, or just make the game playable on older computers.
Sometimes it feels like I have specific use cases I need to solve alone. Today, I wanted to have a local Minecraft server running on my own workstation, but only when someone needs it. The point was that instead of having a big Java server running all the time, the Minecraft server would start upon connection from a player, and would stop when no player remains.
However, after looking a bit more into this topic, it seems I'm not the only one who needs this.
As often, I prefer not to rely on third party tools when I can, so I found a solution to implement this using only systemd.
Even better, note that this method can work with any daemon, given you can programmatically get the information about whether to keep it running or stop it. In this example, I'm using Minecraft, and the decision to stop the server is based on the connected player list fetched through rcon (a remote administration protocol).
I made a simple graph to show the dependencies, there are many systemd components used to build this.
The important part is the use of the systemd proxifier: it's a command that accepts a connection over TCP and relays it to another socket; meanwhile, you can do things such as starting a server and waiting for it to be ready. This is the key to this setup; without it, this wouldn't be possible.
Basically, listen-minecraft.socket listens on the public TCP port and runs listen-minecraft.service upon connection. This service needs hook-minecraft.service which is responsible for stopping or starting minecraft, but will also make listen-minecraft.service wait for the TCP port to be open so the proxifier will relay the connection to the daemon.
Then, minecraft-server.service is started alongside with stop-minecraft.timer which will regularly run stop-minecraft.service to try to stop the server if possible.
I used NixOS to configure my on-demand Minecraft server. This is something you can do on any systemd capable system, but I will provide a NixOS example; it shouldn't be hard to translate it to regular systemd configuration files.
{ config, lib, pkgs, modulesPath, ... }:
let
# check every 20 seconds if the server
# need to be stopped
frequency-check-players = "*-*-* *:*:0/20";
# time in seconds before we may stop the server
# this should give it time to spawn
minimum-server-lifetime = 300;
# minecraft port
# used in a few places in the code
# this is not the port that should be used publicly
# don't need to open it on the firewall
minecraft-port = 25564;
# this is the port that will trigger the server start
# and the one that should be used by players
# you need to open it in the firewall
public-port = 25565;
# a rcon password used by the local systemd commands
# to get information about the server such as the
# player list
# this will be stored plaintext in the store
rcon-password = "260a368f55f4fb4fa";
# a script used by hook-minecraft.service
# to start minecraft and the timer regularly
# polling for stopping it
start-mc = pkgs.writeShellScriptBin "start-mc" ''
systemctl start minecraft-server.service
systemctl start stop-minecraft.timer
'';
# wait 60s for a TCP socket to be available
# to wait in the proxifier
# idea found in https://blog.developer.atlassian.com/docker-systemd-socket-activation/
wait-tcp = pkgs.writeShellScriptBin "wait-tcp" ''
for i in `seq 60`; do
if ${pkgs.libressl.nc}/bin/nc -z 127.0.0.1 ${toString minecraft-port} > /dev/null ; then
exit 0
fi
${pkgs.busybox.out}/bin/sleep 1
done
exit 1
'';
# script returning true if the server has to be shutdown
# for minecraft, uses rcon to get the player list
# skips the checks if the service started less than minimum-server-lifetime
no-player-connected = pkgs.writeShellScriptBin "no-player-connected" ''
servicestartsec=$(date -d "$(systemctl show --property=ActiveEnterTimestamp minecraft-server.service | cut -d= -f2)" +%s)
serviceelapsedsec=$(( $(date +%s) - servicestartsec))
# exit if the server started less than 5 minutes ago
if [ $serviceelapsedsec -lt ${toString minimum-server-lifetime} ]
then
echo "server is too young to be stopped"
exit 1
fi
PLAYERS=`printf "list\n" | ${pkgs.rcon.out}/bin/rcon -m -H 127.0.0.1 -p 25575 -P ${rcon-password}`
if echo "$PLAYERS" | grep "are 0 of a"
then
exit 0
else
exit 1
fi
'';
in
{
# use NixOS module to declare your Minecraft
# rcon is mandatory for no-player-connected
services.minecraft-server = {
enable = true;
eula = true;
openFirewall = false;
declarative = true;
serverProperties = {
server-port = minecraft-port;
difficulty = 3;
gamemode = "survival";
force-gamemode = true;
max-players = 10;
level-seed = 238902389203;
motd = "NixOS Minecraft server!";
white-list = false;
enable-rcon = true;
"rcon.password" = rcon-password;
};
};
# don't start Minecraft on startup
systemd.services.minecraft-server = {
wantedBy = pkgs.lib.mkForce [];
};
# this waits for incoming connection on public-port
# and triggers listen-minecraft.service upon connection
systemd.sockets.listen-minecraft = {
enable = true;
wantedBy = [ "sockets.target" ];
requires = [ "network.target" ];
listenStreams = [ "${toString public-port}" ];
};
# this is triggered by a connection on TCP port public-port
# start hook-minecraft if not running yet and wait for it to return
# then, proxify the TCP connection to the real Minecraft port on localhost
systemd.services.listen-minecraft = {
path = with pkgs; [ systemd ];
enable = true;
requires = [ "hook-minecraft.service" "listen-minecraft.socket" ];
after = [ "hook-minecraft.service" "listen-minecraft.socket"];
serviceConfig.ExecStart = "${pkgs.systemd.out}/lib/systemd/systemd-socket-proxyd 127.0.0.1:${toString minecraft-port}";
};
# this starts Minecraft if required
# and waits for it to be available over TCP
# to unlock listen-minecraft.service proxy
systemd.services.hook-minecraft = {
path = with pkgs; [ systemd libressl busybox ];
enable = true;
serviceConfig = {
ExecStartPost = "${wait-tcp.out}/bin/wait-tcp";
ExecStart = "${start-mc.out}/bin/start-mc";
};
};
# create a timer running every frequency-check-players
# that runs stop-minecraft.service script on a regular
# basis to check if the server needs to be stopped
systemd.timers.stop-minecraft = {
enable = true;
timerConfig = {
OnCalendar = "${frequency-check-players}";
Unit = "stop-minecraft.service";
};
wantedBy = [ "timers.target" ];
};
# run the script no-player-connected
# and if it returns true, stop the minecraft-server
# but also the timer and the hook-minecraft service
# to prepare a working state ready to resume the
# server again
systemd.services.stop-minecraft = {
enable = true;
serviceConfig.Type = "oneshot";
script = ''
if ${no-player-connected}/bin/no-player-connected
then
echo "stopping server"
systemctl stop minecraft-server.service
systemctl stop hook-minecraft.service
systemctl stop stop-minecraft.timer
fi
'';
};
}
The OpenBSD operating system is known for being secure, but also for its accurate and excellent documentation. In this text, I'll try to figure out what makes the OpenBSD documentation so great.
After you install OpenBSD, when you log in as root for the first time, you are greeted by a message saying you received an email. In fact, there is an email from Theo de Raadt crafted at install time which welcomes you to OpenBSD. It gives you a few hints about how to get started, but most notably it leads you to the afterboot(8) man page.
The afterboot(8) man page is described as "things to check after the first complete boot", it will introduce you to the most common changes you may want to do on your system. But most importantly, it explains how to use the man page like looking at the SEE ALSO section leading to other man pages related to the current one.
The man pages are a way to ship documentation with a software, usually you find a man page with the same name as the command or configuration file you want to document. It seems man pages appeared in 1971, the "man" stands for manual.
The manual pages are literally the core of the OpenBSD documentation; they follow a standard and contain a lot of metadata. When you write a man page, you not only write text, but you describe your text. For instance, when we need to refer to another man page, we use the "cross-reference" tag; this rich format allows accurate rendering but also accurate searches.
When we refer to a page in a text discussion, we often write its name including the section, like man(1). If you see man(1), you understand it's a man page for "man" within the first section. There are 9 sections of man pages; this is an old way to sort them into categories, so if two things have the same name, you use the section to distinguish them. Here is an example: "man passwd" will display passwd(1), which is a program to change the password of a user, however you could want to read passwd(5), which describes the format of the file /etc/passwd; in this case you would use "man 5 passwd". I always found this way of referring to man pages very practical.
On OpenBSD, there are man pages for all the base system programs and all the configuration files. We always try to be very consistent in the way information is shown, and the wording is carefully chosen to be as clear as possible. They are a common effort involving multiple reviewers, and changes must be approved by at least one member of the team. When an OpenBSD program is modified, the man page must be updated accordingly. The pages are also occasionally updated to include more history explaining the origins of the commands, which is always very instructive.
When it comes to packages, there is no guarantee as we just bundle upstream software, which may not provide a man page. However, package maintainers offer a "pkg-readme" file for packages requiring very specific tuning; these files can be found in /usr/local/share/doc/pkg-readmes/.
One way to distribute information related to OpenBSD is the website: it explains what the project is about, on which hardware you can install it, why it exists and what it provides. It has a lot of information which is interesting before you install OpenBSD, so it can't be in a man page.
I chose to treat the Frequently Asked Questions part of the website as a different kind of documentation support. It's a special place that contains real world use cases: while the man pages are the reference for programs or configuration, they lack the big picture overview like "how to achieve XY on OpenBSD". The FAQ is particularly well crafted, with different categories such as multimedia, virtualization and VPNs...
The OpenBSD installation comes with a directory /etc/examples/ providing configuration file samples and comments. They are a good way to get started with a configuration file and understand the file format described in the according man page.
This part is not for end users, but for contributors. When a change is made in the sources, there is often a great commit message explaining the logic of the code and the reasons for the changes. I say often because some trivial changes don't require such explanations every time. The commit messages are a valuable source of information when you need to know more about a component.
Documentation is also about keeping users informed about important news. OpenBSD uses an opt-in method with the mailing lists. One list that is important for information is announce@openbsd.org: release announcements and errata are published there. This is a simple and reliable method that works for everyone having an email address.
This is an important point in my opinion: all the OpenBSD documentation is stored in the source trees, and changes must be committed by someone with commit access. Wikis often have orphan pages, outdated information, and duplicate pages with contradictory content. While they can be rich and useful, their content tends to rot if the community doesn't spend a huge amount of time maintaining them.
Finally, most of the above is possible because OpenBSD is developed by the same team. The team can enforce their documentation requirements from top to bottom, which leads to accurate and consistent documentation all across the system. This is more complicated on a Linux system where all components come from various teams with different methods.
When you get your hands on OpenBSD, you should be able to understand how to use all the components from the base system (= not the packages) with just the man pages; being offline doesn't prevent you from configuring your system.
What makes a good documentation? It's hard to tell. In my opinion, having a trustworthy source of knowledge is the most important, whatever the format or support. If you can't trust what you read because it may be outdated, or may not apply to your current version, it's hard to rely on it. Man pages are a good format, very practical, but only when they are well written, which is a difficult task requiring a lot of time.
BTRFS is a Linux file system that uses a Copy On Write (COW) model. It provides many features such as on-the-fly compression, volume management, snapshots, clones, etc.
However, BTRFS doesn't natively support deduplication, a feature that looks for chunks in files to see if another file shares that block; if so, only one chunk of data is used for both files. In some scenarios, this can drastically reduce the disk space usage.
This is where we can use "bees", a program that can do offline deduplication for BTRFS file systems. In this context, offline means it's done when you run a command, as opposed to live/on-the-fly deduplication applied as data is written. The HAMMER file system from DragonFly BSD does offline deduplication, while ZFS does it live. There are pros and cons for both models: the ZFS documentation recommends 1 GB of memory per terabyte of disk when deduplication is enabled, because it requires having all chunk hashes in memory.
Bees is a service you need to install and start on your system, it has some limitations and caveats documented, but it should work for most users.
You can define a BTRFS file system on which you want deduplication and a load target. Bees will work silently when your system load is below the threshold, and will stop when the load exceeds the limit; this is a simple mechanism to prevent bees from eating all your system resources when freshly modified/created files need to be scanned.
The first time you run bees on a file system that is not empty, it may take a while to scan everything, but after that it's really quiet except if you do heavy I/O operations like downloading big files, and even then it does a good job at staying behind the scenes.
The code supposes your root partition is labelled "nixos", that you want a hash table of 256 MB (this will be used by bees) and that you don't want bees to run when the system load is higher than 2.0.
You may want to tune the values, mostly the hash size, depending on your file system size. Bees is designed for terabyte-scale file systems, but this doesn't mean you can't use it on an average user's disks.
I tried on my workstation with a lot of build artifacts and git repositories, bees reduced the disk usage from 160 GB to 124 GB, so it's a huge win here.
Later, I tried again on some Steam games with a few Proton versions; it didn't save much on the games but saved a lot on the Proton installations.
On my local cache server, it saved nothing, but that is to be expected.
BTRFS is a solid alternative to ZFS, it requires less memory while providing volumes, snapshots and compression. The only thing it needed for me was deduplication, and I'm glad it's offline, so it doesn't use too much memory.
In this guide, I'll explain how to create a NixOS VM in the hosting company OpenBSD Amsterdam which only provides OpenBSD VMs hosted on OpenBSD.
I'd like to thank the team at OpenBSD Amsterdam who offered me a VM for this experiment. While they don't support NixOS officially, they are open to have customers running non-OpenBSD systems on their VMs.
You need to order a VM at OpenBSD Amsterdam first. You will receive an email with your VM name, its network configuration (IPv4 and IPv6), and explanations to connect to the hypervisor. We will need to connect to the hypervisor to have a serial console access to the virtual machine. A serial console is a text interface to a machine, you get the machine output displayed in your serial console client, and what you type is sent to the machine as if you had a keyboard connected to it.
It can be useful to read the onboarding guide before starting.
Our first step is to get into the OpenBSD installer, so we can use it to overwrite the disk with our VM.
Connect to the hypervisor and attach to your virtual machine's serial console using the following command; we assume your VM name is "vm40" in the example:
vmctl console vm40
You can leave the console anytime by typing "~~." to get back into your ssh shell. The keys sequence "~." is used to drop ssh or a local serial console, but when you need to leave a serial console from a ssh shell, you need to use "~~.".
You shouldn't see anything at first, because nothing is displayed until something is shown on the VM's first virtual tty; press "enter" and you should see a login prompt. We don't need it, but it confirms the serial console is working.
In parallel, connect to your VM using ssh, find the root password at the end of ~/.ssh/authorized_keys, use "su -" to become root and run "reboot".
You should see the shutdown sequence scrolling in the hypervisor ssh session displaying the serial console; wait for the machine to reboot and watch for the boot prompt, at which you will type bsd.rd:
Using drive 0, partition 3.
Loading......
probing: pc0 com0 mem[638K 3838M 4352M a20=on]
disk: hd0+
>> OpenBSD/amd64 BOOT 3.53
com0: 115200 baud
switching console to com0
>> OpenBSD/amd64 BOOT 3.53
boot> bsd.rd [ENTER] # you need to type bsd.rd
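From the bsd.rd installer prompt, choose (S)hell, bring the network up and copy the compressed NixOS image over the whole disk, then reboot. Here is a sketch of what it could look like; the interface name, addresses, image URL and disk device are placeholders you must adapt to your VM:
# inside the OpenBSD installer shell, configure the network by hand
ifconfig vio0 inet <your-ipv4> netmask 255.255.255.0
route add default <your-gateway>
# fetch the compressed NixOS disk image and write it over the raw disk
ftp -o - https://example.com/vm.disk.gz | gunzip | dd of=/dev/rsd0c bs=1m
reboot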
Once the NixOS image has been written to the disk and the VM rebooted, you should see a GRUB boot menu in the serial console; it will boot the first entry after a few seconds, and then NixOS will start booting. In this menu you can also access older versions of your system.
After the text stops scrolling, press enter. You should see a login prompt; you can log in with the username "root" and the default password "nixos" if you used my disk image.
If you used my template, your VM still doesn't have network connectivity, you need to edit the file /etc/nixos/configuration.nix in which I've put the most important variables you want to customize at the top of the file. You need to configure your IPv4 and IPv6 addresses and their gateways, and also your username with an ssh key to connect to it, and the system name.
Once you are done, run "nixos-rebuild switch", you should have network if you configured it correctly.
After the rebuild, run "passwd your_user" if you want to assign a password to your newly declared user.
You should be able to connect to your VM using its public IP and your ssh key with your username.
EXTRA: You may want to remove the profile minimal.nix which is imported: it disables documentation and the use of X libraries, but this may trigger package compilation as packages are not always built without X support.
Because we started with a small 2 GB raw disk to create the virtual machine, the partition still has 2 GB only. We will have to resize the partition /dev/vda1 to take all the disk space, and then resize the ext4 file system.
First step is to extend the partition to 50 GB, the size of the virtual disk offered at openbsd.amsterdam.
# nix-shell -p parted
# parted /dev/vda
(parted) resizepart 1
Warning: The partition /dev/vda1 is currently in use. Are you sure to continue?
Yes/No? yes
End? [2147MB]? 50GB
(parted) quit
Second step is to resize the file system to fill up the partition:
# resize2fs /dev/vda1
The file system /dev/vda1 is mounted on / ; Resizing done on the fly
old_desc_blocks = 1, new_desc_blocks = 6
The file system /dev/vda1 now has a size of 12206775 blocks (4k).
While I provide a bootable NixOS disk image at https://perso.pw/nixos/vm.disk.gz , you can generate yours with this guide.
create a raw disk of 2 GB to install the VM in it
qemu-img create -f raw vm.disk 2G
run qemu in a serial console to ensure it works (see the sketch after these steps); in the GRUB boot menu you will need to select the 4th choice, which enables the serial console in the installer. In this no-graphics qemu mode, you can stop qemu by pressing "ctrl+a" and then "c" to drop into qemu's own console, and type "quit" to stop the process.
edit the file /mnt/etc/nixos/configuration.nix , the NixOS install has nano available by default, but you can have your favorite editor by using "nix-shell -p vim" if you prefer vim. Here is a configuration file that will work:
we can run the installer, it will ask for the root password, and then we can shut down the VM
nixos-install
systemctl poweroff
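Here is a sketch of the qemu invocation I mean for the "run qemu" step; the installer ISO file name is an assumption, and -enable-kvm supposes your host has KVM available:
# boot the NixOS installer ISO with the freshly created disk attached,
# everything on the serial console (-nographic), forcing boot from CD (-boot d)
qemu-system-x86_64 -enable-kvm -m 2G -smp 2 -nographic -boot d -cdrom nixos-minimal-x86_64-linux.iso -drive file=vm.disk,format=raw,if=virtio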
Now, you have to host the disk file somewhere to make it available over HTTP or FTP in order to retrieve it from the openbsd.amsterdam VM. I'd recommend compressing the file by running gzip on it, which will drastically reduce its size from 2 GB to ~500 MB.
The ext4 file system offers a way to encrypt specific directories, it can be enough for most users.
However, if you want to enable full disk encryption, you need to use the guide above to generate your VM, but you need to create a separate /boot partition and create a LUKS volume for the root partition. This is explained in the NixOS manual, in the installer section. You should adapt the according bits in the configuration file to match your new setup.
Don't forget you will need to connect to the hypervisor to type your passphrase through the serial console every time you reboot.
There is an issue with the OpenBSD hypervisor and Linux kernels at the moment: when you reboot your Linux VM, the VM process on the OpenBSD host crashes. Fortunately, it crashes after the whole shutdown process is done, so it doesn't leave the file system in a weird state.
This problem is fixed in OpenBSD -current as of August 2022, and won't happen in OpenBSD 7.2 hypervisors that will be available by the end of the year.
A simple workaround is to open a tmux session in the hypervisor to run an infinite loop regularly checking if your VM is running, and starting it when it's stopped:
while true ; do vmctl status vm40 | grep stopped && vmctl start vm40 ; sleep 30 ; done
It's great to have more choice when you need a VM. The OpenBSD Amsterdam team is very kind, professional and regularly give money to the OpenBSD project.
This method should work for other hosting providers, given you can access the VM disk from a live environment (installer, rescue system, etc.). You may need to pay attention to the disk device, and if you can't obtain serial console access to your system, you need to get the network configuration right in the VM before copying it to the disk.
In the same vein, you can use this method to install any operating system supported by the hypervisor. I chose NixOS because I love this system, and it's easy to reproduce a result with its declarative paradigm.
So, I recently switched my home router to Linux, but had network issues for devices that would get/renew their IP with DHCP. They were obtaining an IP, but they couldn't reach the router for a while (between 5 seconds and a few minutes), which was very annoying and unreliable.
After spending some time with tcpdump on multiple devices, I found the issue: it was related to ARP (the protocol used to discover MAC addresses and associate them with IPs).
I have an unusual network setup at home as I use my ISP router for Wi-Fi, switch and as a modem; the issue here is that there are two subnets on its switch.
Because the modem is reachable over 192.168.1.0/24 and is used by the router on that switch, while the LAN network uses the same switch with 10.42.42.0/24, ARP packets arrive on two network interfaces of the router for addresses that are not routable there (ARP packets for 10.42.42.0/24 would arrive on the 192.168.1.0/24 interface and the opposite).
There is a simple solution, but it was very complicated to find as it's not obvious. We can configure the Linux kernel to discard ARP packets that are related to non-routable addresses, so the interface with a 192.168.1.0/24 address will discard packets for the 10.42.42.0/24 network and vice versa.
You need to define the sysctl net.ipv4.conf.all.arp_filter to 1.
sysctl net.ipv4.conf.all.arp_filter=1
This can also be set per interface if you have specific needs.
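For instance, a sketch where only one interface filters (the interface name and file path are examples), plus how to make the setting persistent across reboots:
# apply the filter to a single interface only
sysctl net.ipv4.conf.eth1.arp_filter=1
# persist the global setting for the next boots
echo "net.ipv4.conf.all.arp_filter = 1" > /etc/sysctl.d/99-arp-filter.conf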
This was a very annoying issue, incredibly hard to troubleshoot. I suppose OpenBSD has this strict behavior by default because I didn't have this problem when the router was running OpenBSD.
A while ago I wrote an OpenBSD guide to fairly share the Internet bandwidth with the LAN network; it was more or less working. Now that I switched my router to Linux, I wanted to achieve the same. Unfortunately, it's not documented as well as on OpenBSD.
The command needed for this job is "tc", an acronym for Traffic Control, the Jack of all trades when it comes to manipulating your network traffic. It can add delays or packet loss (this is fun when you want to simulate poor conditions), but also do traffic shaping and Quality of Service (QoS).
Fortunately, tc is not that complicated for what we will achieve in this how-to (fair share) and will give results way better than what I achieved with OpenBSD!
I don't want to explain how the whole stack involved works, but with tc we will define a queue on the interface we want to apply the QoS to; it will create a number of flows assigned to the active network streams, and each active flow will receive 1/total_active_flows of the bandwidth. It means that if you have three connections downloading data (from the same computer or three different computers), they should in theory receive 1/3 of the bandwidth each. In practice, you don't get exactly that, but it's quite close.
I made a script with variables to make it easy to reuse, it deletes any traffic control set on the interfaces and then creates the configuration. You are supposed to run it at boot.
It contains two variables, DOWNLOAD_LIMIT and UPLOAD_LIMIT, that should be approximately 95% of each maximum speed; they can be defined in bits with kbit/mbit or in bytes with kbps/mbps. The reason to use 95% is to leave the router some room for organizing the packets. It's like a "15 puzzle": you need one empty square to make it work.
#!/bin/sh
TC=$(which tc)
# LAN interface on which you have NAT
LAN_IF=br0
# WAN interface which connects to the Internet
WAN_IF=eth0
# 95% of maximum download
DOWNLOAD_LIMIT=13110kbit
# 95% of maximum upload
UPLOAD_LIMIT=840kbit
$TC qdisc del dev $LAN_IF root
$TC qdisc del dev $WAN_IF root
$TC qdisc add dev $WAN_IF root handle 1: htb default 1
$TC class add dev $WAN_IF parent 1: classid 1:1 htb rate $UPLOAD_LIMIT
$TC qdisc add dev $WAN_IF parent 1:1 fq_codel noecn
$TC qdisc add dev $LAN_IF root handle 1: htb default 1
$TC class add dev $LAN_IF parent 1: classid 1:1 htb rate $DOWNLOAD_LIMIT
$TC qdisc add dev $LAN_IF parent 1:1 fq_codel
tc is very effective but not really straightforward to understand. What's cool is that you can apply it on the fly without any adverse impact.
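To check that the queues are in place and watch them counting packets, you can display their statistics (using the interface names from the script above):
# show the qdiscs and their live statistics on the WAN interface
tc -s qdisc show dev eth0
# show the htb class carrying the upload rate limit
tc -s class show dev eth0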
It has been really effective for me: now if some device is downloading on the network, it doesn't affect the other devices much when they need to reach the Internet.
After lurking on the Internet looking for documentation about tc, I finally found someone who made a clear explanation about this tool. tc is documented, but it's too abstract for me.
At home, I'm running my own router to manage Internet access, run DHCP, do filtering and caching, etc. I'm using an APU2 running OpenBSD; it works great so far, but I was curious to know if I could manage to run NixOS on it without having to deal with the serial console and an installation.
It turned out it's possible! By configuring and creating a live NixOS USB image, one can plug the USB memory stick into the router and have an immutable NixOS.
Here is a diagram of my network. It's really simple except for the bridge part, which requires an explanation. The APU router has 3 network interfaces and I only need 2 of them (one for WAN and one for LAN), but my switch doesn't have enough ports for all the devices, just missing one, so I use the extra port of the APU to connect that device to the whole LAN by bridging the two network interfaces.
There is currently an issue when trying to use a non-default kernel: ZFS support is pulled in and creates errors. By redefining the list of supported file systems you can exclude ZFS from the list.
In order to reduce usage of the USB memory stick, upon boot all the content of the liveUSB will be loaded in memory, the USB memory stick can be removed because it's not useful anymore.
boot.kernelParams = [ "copytoram" ];
The service irqbalance is useful as it assigns certain IRQs to specific CPUs instead of letting the first CPU core handle everything. This is supposed to increase performance by hitting the CPU cache more often.
As my APU wasn't running Linux, I couldn't know the names of the interfaces without booting some Linux on it, attaching to the serial console and checking their names. By using this setting, the Ethernet interfaces are named "eth0", "eth1" and "eth2".
networking.usePredictableInterfaceNames = false;
Now, the most important part of the router setup, doing all the following operations:
- assign an IP for eth0 and a default gateway
- create a bridge br0 with eth1 and eth2 and assign an IP to br0
- enable NAT for br0 interface to reach the Internet through eth0
This creates a user solene with a predefined password, and adds it to the wheel and sudo groups in order to use sudo. Another setting allows wheel members to run sudo without a password; this is useful for testing purposes but should be avoided on production systems. You could add your SSH public key to ease and secure SSH access.
This enables the service unbound, a DNS resolver that is able to do some caching as well. We need to allow our network 10.42.42.0/24 and listen on the LAN facing interface to make it work, and not forget to open the ports TCP/53 and UDP/53 in the firewall. This caching is very effective on a LAN server.
This enables the service miniupnpd; it can be quite dangerous because its purpose is to allow computers on the network to create NAT forwarding rules on demand. Unfortunately, this is required to play some video games, and I don't really enjoy creating all the rules by hand for all the video games requiring it.
This enables the service munin-node and allows a remote server to connect to it. This service gathers various metrics and makes graphs from them. I like it because the agent running on the systems is very simple and easy to extend with plugins, and on the server side it doesn't need a lot of resources. As munin-node listens on port TCP/4949, we need to open it.
By building a NixOS live image using Nix, I can easily try a new configuration without modifying my router storage, but I could also use it to ssh into the live system to install NixOS without having to deal with the serial console.
Today we will learn about how to use sshfs, a program to mount a remote directory through ssh into our local file system.
But OpenBSD has a different security model than other Unix-like systems: you can't use FUSE (Filesystem in USErspace) file systems as a non-root user. And because you need to run your FUSE mount program as root, the mount point won't be reachable by other users because of the permissions.
Fortunately, with the correct combination of flags, this is actually achievable.
As root, we will run sshfs to mount a directory from t470-wifi.local (my laptop's Wi-Fi IP address on my LAN) and make it available to our user with uid 1000 and gid 1000 (these are the ids of the first user added); you can find the information about your users with the command "id". We will also use the allow_other mount option.
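Here is a sketch of the resulting command; the remote path and the local mount point are assumptions to adapt:
# as root: mount the remote directory and make it usable by the local
# user with uid/gid 1000, thanks to allow_other and the uid/gid mapping
sshfs -o allow_other,uid=1000,gid=1000 solene@t470-wifi.local:/home/solene /mnt/laptop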
This article will explain how to make the flakes enabled nix commands reusing the nixpkgs repository used as input to build your NixOS system. This will regularly save you time and bandwidth.
By default, nix commands using flakes such as nix shell or nix run are pulling a tarball of the development version of nixpkgs. This is the default value set in the nix registry for nixpkgs.
$ nix registry list | grep nixpkgs
global flake:nixpkgs github:NixOS/nixpkgs/nixpkgs-unstable
Because of this, when you run a command, you are likely to download a tarball of the nixpkgs repository including the latest commit every time you use flakes; this is particularly annoying because the tarball is currently around 30 MB. There is a simple way to automatically set your registry so the nixpkgs entry points to the local copy used by your NixOS configuration.
In the flake.nix file describing your system configuration, you should have something similar to this:
Edit /etc/nixos/configuration.nix and make sure you have "inputs" listed in the first line, such as:
{ lib, config, pkgs, inputs, ... }:
And add the following line to the file, and then rebuild your system.
nix.registry.nixpkgs.flake = inputs.nixpkgs;
After this change, running a command such as "nix shell nixpkgs#gnumake" will reuse the same nixpkgs from your nix store used by NixOS, otherwise it would have been fetching the latest archive from GitHub.
If you started using flakes, you may wonder why there are commands named "nix-shell" and "nix shell", they work totally differently.
nix-shell and the non-flakes commands use the nixpkgs offered in the NIX_PATH environment variable, which should be set to a directory managed by nix-channel, but channels are made obsolete by flakes...
Fortunately, in the same way we synchronized the system flake with the flakes commands, you can add this code to make nix-shell use the system nixpkgs:
This requires your user to log out from the current session to be effective. You can then check that nix-shell and nix shell use the same nixpkgs source with a small snippet: ask each command for the full path of the test program named "hello" and compare both results; they should match if they use the same nixpkgs.
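A minimal sketch of such a check, using the package "hello" (this is my reconstruction, not necessarily the exact snippet):
# both commands should print the very same /nix/store path for hello
nix-shell -p hello --run 'command -v hello'
nix shell nixpkgs#hello --command sh -c 'command -v hello'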
Flakes are awesome, and are on the way to becoming the future of Nix. I hope this article shed some light on the nix commands, and saved you some bandwidth.
I found this information in a blog post of the company Tweag (which is my current employer), in a series of articles about Nix flakes. It's a bit sad that I didn't find this information in the official NixOS documentation, but as flakes are still experimental, they are not really covered.
We will enable the attribute IPAccounting on the systemd service nix-daemon; this will make systemd account the bytes and packets received and sent by the service. However, when the service is stopped, the counters are reset to zero and the information is logged into the systemd journal.
In order to efficiently gather the network information over time into a database, we will run a script just before the service stops using the preStop service hook.
The script checks the existence of a sqlite database /var/lib/service-accounting/nix-daemon.sqlite, creates it if required, and then inserts the received bytes information of the nix-daemon service about to stop. The script uses the service attribute InvocationID and the current day to ensure that a tuple won't be recorded more than once, because if we restart the service multiple times a day, we need to distinguish all the nix-daemon instances.
Here is the code snippet to add to your /etc/nixos/configuration.nix file before running nixos-rebuild test to apply the changes.
systemd.services.nix-daemon = {
serviceConfig.IPAccounting = "true";
path = with pkgs; [ sqlite busybox systemd ];
preStop = ''
#!/bin/sh
SERVICE="nix-daemon"
DEST="/var/lib/service-accounting"
DATABASE="$DEST/$SERVICE.sqlite"
mkdir -p "$DEST"
# check if database exists
if ! dd if="$DATABASE" count=15 bs=1 2>/dev/null | grep -Ea "^SQLite format.[0-9]$" >/dev/null
then
cat <<EOF | sqlite3 "$DATABASE"
CREATE TABLE IF NOT EXISTS accounting (
id TEXT PRIMARY KEY,
bytes INTEGER NOT NULL,
day DATE NOT NULL
);
EOF
fi
BYTES="$(systemctl show "$SERVICE.service" -P IPIngressBytes | grep -oE "^[0-9]+$")"
INSTANCE="'$(systemctl show "$SERVICE.service" -P InvocationID | grep -oE "^[a-f0-9]{32}$")'"
cat <<EOF | sqlite3 "$DATABASE"
INSERT OR REPLACE INTO accounting (id, bytes, day) VALUES ($INSTANCE, $BYTES, date('now'));
EOF
'';
};
If you want to apply this to another service, the script has a single variable SERVICE that has to be updated.
Systemd services are very flexible and powerful thanks to the hooks provided to run scripts at the right time. While I was interested in network usage accounting, it's also possible to achieve a similar result with CPU usage and I/O accesses.
To be honest, this challenge was hard and less fun than the previous one, as we couldn't communicate about our experiences. It was so hard to schedule my Internet needs over the days that I tried to not use it at all, keeping some time for when I had an unexpected need to check something.
Nevertheless, it was still a good experience to go through; it helped me realize how many small daily things require Internet access without me paying attention anymore. Fortunately, I avoid most streaming services and my multimedia content is all local.
I spend a lot of time every day in instant messaging software; even if they work asynchronously, it often happens that someone answers within seconds, and then we start to chat and time passes. This was a huge consumer of the limited daily Internet time available in the challenge.
A few other people also took the challenge, and reading their reports was very interesting and fun.
Now that this second challenge is over, our community is still strong and has regained some activity. People are already thinking about the next edition and we need to find what to do next. A currently popular idea would be to reduce the Internet speed to RTC (~5 kB/s) instead of limiting time, but we still have some time to debate the next rules.
We waited one year between the first and second challenge, but this doesn't mean we can't do this more often!
To conclude this article and challenge, I would like to give special thanks to all the people who got involved or interested in the challenge.
It's often said that Docker is not very good with regard to security; let me illustrate a simple way to get root access to your Linux system through a docker container. This may be useful for people who have docker available to their user, but whose company doesn't give them root access.
This is not a Docker vulnerability being exploited, just plain Docker by design. It is not a way to become root from *within* the container, you need to be able to run docker on the host system.
If you use this to break your employer's internal rules, this is your problem, not mine. I write this to raise awareness about why giving Docker access to system users could be dangerous.
UPDATE: It has been possible to run the Docker daemon as a regular user (rootless mode) since October 2021.
We will start a simple Alpine docker container, and map the system root file system / on the /mnt container directory.
docker run -v /:/mnt -ti alpine:latest
From there, you can use the command chroot /mnt to obtain a root shell of your system.
You are now free to use "passwd" to change root password, or visudo to edit sudo rules, or you could use the system package manager to install extra software you want.
If you don't understand why this works, here is a funny analogy. Think about being in a room as a human being, but you have a super power that allows you to imagine some environment in a box in front of you.
Now, that box (docker) has a specific feature: it permits you to take a piece of your current environment (the filesystem) to project it in the box itself. This can be useful if you want to imagine a beach environment and still have your desk in it.
Now, project your whole room (the host filesystem) into your box: you become almighty for everything happening in the box, which turns out to be your own room (you are root, the super user).
Here is a draft for a protocol named PTPDT, an acronym standing for Pen To Paper Data Transfer. It comes with its companion specification Paper To Brain.
The protocol describes how a pen can be used to write data on a sheet of paper. Maybe it would be better named Brain To Paper Protocol.
The writer uses a pen on paper in order to duplicate information from their memory onto the paper.
We won't go into technical implementation details about how the pen transmits information onto the paper; we will assume some ink or equivalent is used in the process without altering the data.
When storing data with this protocol, papers should be incrementally numbered when ordered information doesn't fit on a single storage paper unit. The reader can then read the papers in the correct order by following the numbering.
It is advised to add markers before and after the data to delimit its boundaries. Such a mechanism increases the reliability of extracting data from paper, and helps to recover from mixed-up papers.
It is recommended to use a single encoding, often known as a language, for a single piece of paper. Abstract art is considered a blob, and hence doesn't have any encoding.
Data extraction from paper can reach different levels of quality:
lossless: all the information is extracted and can be used and replicated by the reader
lossy: all the information is extracted and can be used by the reader, but not replicated identically
partial: some pieces of information are extracted with no guarantee they can be replicated or used
In order to retrieve data from paper, the reader and the anoreader must use their eyesight to pass the paper data to their brain, which will decode the information and store it internally. If the reader's brain doesn't know the encoding, the extraction could be lossy or partial.
It's often required to make multiple read passes to achieve a lossless extraction.
There are different compression algorithms to increase the pen output bandwidth, the reader and anoreader must be aware of the compression algorithm used.
The protocol doesn't enforce encryption. The writer can encrypt the data on paper so the anoreader won't be able to read it, however this increases the mental load for both the writer and the reader.
As it's too tedious to monitor the time spent on the Internet, I'm now using a chronometer for the day... and I stopped using the Internet in small bursts. It's also currently super hot where I live, so I don't want to do much with the computer anyway...
I can handle most of my computer needs offline. When I use the Internet, it's now for a solid 15 minutes, except when I connect from my phone to check something quickly without starting my computer; there I rarely need to stay connected for more than a minute.
This is a very different challenge from the previous one because we can't stay online on IRC all day talking about tricks to improve our experience with the current challenge. On the other hand, it's an opportunity to show our writing skills and tell what we are going through.
I didn't write during the last few days because there wasn't much to say. I miss 24/7 Internet though, and I'll be happy to get back to the computer without having to track my time and stop after the hour, which always happens too soon!
I think my parents switched their Internet subscription from RTC to DSL around 2005, 17 years ago. It was a revolution for us, because not only was it multiple times faster (up to 16 kB/s!), but it was also unlimited in time! Since then, I have only had unlimited Internet (no time limit, no quota), and it became natural for me to expect to have Internet all the time.
Because of this, it's really hard for me to even think about tracking my Internet time. There are many devices in my home connected to the Internet, and I just don't think about it when I use them. I noticed I was checking emails or XMPP on my phone: I turned its Wi-Fi on in the morning and then forgot about it.
There is a high chance I used more than my quota yesterday because of my phone, but I also forgot to stop the time accounting script (in my defense, it had a bug preventing it from stopping correctly). And then I noticed yesterday evening that I was totally out of time, while I had to plan a trip for today, which involved looking at addresses and maps. Even though I have a local OpenStreetMap database, it's rarely enough to prepare a trip to a place you visit for the first time, when you know you will be short on time to figure things out on the spot.
Ah yes, my car also has an Internet connection with its own LTE access. I can't count it as part of the challenge because it's not really useful (I don't think I used it at all), but it's there.
And it's in my Nintendo Switch too, but it has an airplane mode to disable connectivity.
And Steam (the game library) requires being online when streaming video games locally (to play on the couch)...
So, there are many devices and programs silently (well, not always silently) relying on the Internet to work, and we don't always know exactly why they need it.
While I said I wasn't really restrained by only one hour of Internet, that was yesterday. I didn't feel like working on open source projects that day, but today I wanted to help review package updates/changes, and I couldn't. Packaging requires a lot of bandwidth and time: it requires searching whether errors are known or new, and it just can't be done offline because it relies on many external packages that have to be downloaded, and with a DSL line it takes a lot of time to keep a system up to date with its development branch.
Of course, with some base material like the project's main repository, it's possible to contribute, but not really to review packages.
I will add a 30 minute penalty to my counter for not tracking my phone's Internet usage today. I still have 750 seconds of Internet left while writing this blog post (including the penalty).
Yesterday I improved my blog deployment to reduce the time taken by the file synchronization process, from 18s to 4s. I'm using rsync, but I have four remote servers to synchronize: one for http, one for gemini, one for gopher and one for a gopher backup. As the output files of my blog are always regenerated from scratch, rsync was recopying all the files just to update their modification times; now I'm using -c to compare checksums and -I to ignore times, and it's significantly faster while still ensuring the changes are copied. I insist on the changes being copied, because if you rely on size only, it will work 99% of the time, except when you fix a single-letter typo that doesn't change the file size... been there.
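For reference, the synchronization command now looks something like this (the other flags, the local directory and the remote path are made up for the example):

rsync -a -c -I output/ user@webserver:/var/www/htdocs/blog/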
For now, I turned off my smartphone's Wi-Fi because it would be hard to account for its usage time.
My main laptop is using the very nice script from our community member prahou.
The script design is smart: it accounts for the time and displays the time consumed. It can be described as a state machine like this:
+------------+                     +---------------------------+
|  wait for  |                     | Accounting time for today |
|  input     |      Type Enter     | Internet is enabled       |
|            |-------------------->|                           |
|  Internet  |                     | display time used         |
|  offline   |                     | today                     |
+------------+                     +---------------------------+
       ^                                        v
       |              press ctrl+C             |
       |    (which is trapped to run a func)   |
       +----------------------------------------+
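To make the diagram more concrete, here is a minimal sketch of what such a script can look like. This is not prahou's actual script, just an illustration; the NETON and NETOFF functions it calls are the two empty functions described right after.

#!/bin/sh
# Minimal sketch of a daily Internet time accounting script.
# NETON and NETOFF are left empty on purpose, they are specific to each setup.
NETON()  { :; }
NETOFF() { :; }

QUOTA=3600      # seconds of Internet allowed for the day
used=0
online=0

go_offline() {
    # runs when ctrl+C is pressed
    if [ "$online" -eq 1 ]; then
        NETOFF
        used=$((used + $(date +%s) - start))
        online=0
    fi
    echo
    echo "time used today: ${used}s out of ${QUOTA}s"
}
trap go_offline INT

while [ "$used" -lt "$QUOTA" ]; do
    echo "Internet is offline, press enter to enable it"
    read -r dummy || break      # ctrl+C or EOF while offline quits the script
    start=$(date +%s)
    NETON
    online=1
    echo "Internet is enabled, press ctrl+C to disable it"
    # stay in this state until ctrl+C triggers the trap
    while [ "$online" -eq 1 ]; do sleep 1; done
done
NETOFF
echo "done for today"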
As the way to disable / enable the Internet is specific to everyone's setup, the script has two empty functions, NETON and NETOFF, which respectively enable and disable Internet access. On my Linux computer I found an easy way to achieve this: adding a bogus default route with a metric of 1, which takes precedence over my real default route. Because that default route doesn't work, my system can't reach the Internet, but it leaves my LAN in a working state.
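Here is one way to implement these two functions on Linux. It is only a sketch: it uses a blackhole route in the role of the bogus route, it assumes the real default route has a metric higher than 1, and the ip commands need enough privileges (run the script as root or prefix them with sudo/doas).

NETOFF() {
    # a blackhole default route with metric 1 wins over the real default
    # route, so the Internet breaks while the LAN keeps working
    ip route add blackhole default metric 1
}

NETON() {
    # remove the bogus route, the real default route is used again
    ip route del blackhole default metric 1
}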
So far, it's easy to remember that I don't have Internet all the time, and with my usage pattern it works fine. I use the script to "start" the Internet, check my emails, read IRC channels and reply, and then I disconnect. By using small amounts of time, I can cover most of my needs in less than a minute. However, that wouldn't be practical if I had to download anything big, and people with fast Internet access (= not me) would have an advantage.
My guess about why this first day felt easy is that since I don't use any streaming service, I don't need to be connected all the time. All my data is saved locally, and most of my communication needs can be handled asynchronously. Even publishing this blog post shouldn't consume more than 20 seconds.