<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>https://dataswamp.org/~solene/</link>
    <atom:link href="https://dataswamp.org/~solene/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>File transfer made easier with Tailscale</title>
  <description>
    <![CDATA[
<pre># Introduction

Since I started using Tailscale (using my own headscale server), I've been enjoying it a lot.  The file transfer feature is particularly useful with other devices.

This blog post explains my small setup to enhance the user experience.

# Quick introduction

Tailscale is a network service that enrolls devices into a mesh VPN based on WireGuard: every peer connects directly to every other peer, something that would require a lot of work to manage by hand.  It also offers automatic DNS assignment, access control, an SSH service and many other features.

Tailscale refers to both the service and the client.  The service is closed source, but the client is not.  There is a reimplementation of the server called Headscale that you can use with the Tailscale client.

=> https://tailscale.com/ Tailscale official website
=> https://headscale.net/ Headscale official website

# Automatically receive files

When you want to receive a file over Tailscale on your desktop system, you need to manually run `tailscale file get --wait $DEST`, which I find impractical and annoying.

I wrote a systemd user service that starts the tailscale command at boot.  It is nothing fancy, but it is not available out of the box.

In the directory `~/.config/systemd/user/`, create the file `tailscale-receiver.service` with this content:

```
[Unit]
Description=tailscale receive file
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/tailscale file get --wait --loop /%h/Documents/
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

The path `/%h/Documents/` expands to `$HOME/Documents/` (the leading `/` is redundant since `%h` is already absolute, but it is harmless); you can modify it to your needs.

Enable and start the service with the command:

```
systemctl --user daemon-reload
systemctl --user enable --now tailscale-receiver.service
```
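
To verify the service is running, a couple of optional checks (the exact output wording may differ between systemd versions):

```
# check the service state and recent log lines
systemctl --user status tailscale-receiver.service

# follow incoming transfers live
journalctl --user -f -u tailscale-receiver.service
```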

# Send files from Nautilus

When sending files, it is possible to use `tailscale file cp $file $target:`, but it is much more convenient to do it directly from the GUI, especially when you do not remember all the remote names.  It also makes things easier for family members who may not want to fire up a terminal to send a file.
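
For reference, a minimal sketch of the command line workflow (`mylaptop` is a placeholder peer name from your own tailnet):

```
# list the peers of the tailnet to find the destination name
tailscale status

# send a file to a peer (note the trailing colon)
tailscale file cp ./picture.jpg mylaptop:
```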

Someone wrote a short Python script that adds this "Send to" feature to Nautilus.

=> https://github.com/flightmansam/nautilus-sendto-tailscale-python Script flightmansam/nautilus-sendto-tailscale-python

Create the directory `~/.local/share/nautilus-python/extensions/` and save the file `nautilus-send-to-tailscale.py` in it.

Make sure you have the Nautilus Python bindings installed: the package is `nautilus-python` on Fedora and `python3-nautilus` on Ubuntu, so your mileage may vary.

Make sure to restart Nautilus: a `killall nautilus` should work, otherwise log the user out and back in.  In Nautilus' contextual menu (right click), you should now see "Send to Tailscale" with a submenu listing the hosts.

# Conclusion

Tailscale is a fantastic technology: a mesh VPN network secures access to internal services without exposing anything to the Internet.  And because it features direct connections between peers, it also enables interesting uses like fast file transfers or VOIP calls without a relay.
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2026-03-08-linux-integration-tailscale-file-transfer.html</guid>
  <link>https://dataswamp.org/~solene/2026-03-08-linux-integration-tailscale-file-transfer.html</link>
  <pubDate>Sun, 08 Mar 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Comparison of cloud storage encryption software</title>
  <description>
    <![CDATA[
<pre># Introduction

When using a cloud storage that is not end-to-end encrypted, you may want to store your files encrypted, so that if the cloud provider (which could be you, if you self-host a Nextcloud or Seafile instance) gets hacked, your data will not be readable by the attacker.

While there is encryption software like age or gpg, it is not practical for working with files transparently.  A specific class of encryption software exists for this: it creates a logical volume containing your files, which are transparently encrypted on the file system.

You will learn about Cryptomator, gocryptfs, CryFS and rclone.  They all give you a local directory, synced with the cloud provider, containing only encrypted files, and a mount point where you access your files in clear.  Your files are sent encrypted to the cloud provider, but you can use them as usual (with some overhead).

This blog post is a bit of a "yet another comparison", because each of these projects also provides its own comparison with the challengers.

=> https://nuetzlich.net/gocryptfs/comparison/ A comparison done by gocryptfs
=> https://cryptomator.org/comparisons/ A comparison done by cryptomator
=> https://www.cryfs.org/comparison A comparison done by cryfs

# Benchmark

My benchmark compares the following attributes and features of each software:

* number of files in the encrypted dir, always using the same input (837 MB from 4797 files made of pictures and a git repository)
* filename and file tree hierarchy obfuscation within the encrypted dir
* size of the encrypted dir compared to the 837 MB of the raw material
* cryptography used

# Software list

Here is the challenger list I decided to evaluate:

## Cryptomator

The main program (running on Linux) is open source, and there are clients for all major operating systems, including Android and iOS.  The Android app is not free (as in beer), the iOS app is free for read-only use, and the Windows / Linux / macOS program is free.  They also have a company-wide offering which can be convenient for some users.

Cryptomator features a graphical interface, making it easy to use.

The encryption suite is good: it uses AES-256-GCM and scrypt, featuring authentication of the encrypted data (which is important as it allows detecting whether a file was altered).  A salt is used.

Hierarchy obfuscation can be sufficient depending on your threat model.  The whole structure is flattened: you can guess the number of directories, the number of files per directory and the file sizes, but all the names are obfuscated.  This is not a huge security flaw, but it is something to consider.

=> https://docs.cryptomator.org/security/architecture/ Cryptomator implementation details

## gocryptfs

This software is written in Go and works on Linux; a C++ Windows port exists, and there is a beta version for macOS.

=> https://nuetzlich.net/gocryptfs/ gocryptfs official website

Hierarchy obfuscation is not great: the whole structure information is preserved, although the names are obfuscated.

Cryptography-wise, scrypt is used for key derivation and AES-256-GCM for authenticated encryption.

=> https://nuetzlich.net/gocryptfs/forward_mode_crypto/ gocryptfs implementation details
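
A minimal usage sketch, with example directory names: the encrypted directory is the one to hand over to the synchronization client.

```
# initialize the encrypted directory (prompts for a password)
mkdir cipher plain
gocryptfs -init cipher

# mount the clear-text view: work in plain/, sync cipher/
gocryptfs cipher plain

# unmount when done
fusermount -u plain
```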

## CryFS

I first learned about CryFS when using KDE Plasma: there is a graphical widget named "Vault" that can drive CryFS to create encrypted directories.  This GUI can also use gocryptfs, but defaults to CryFS.

=> https://www.cryfs.org/ CryFS official website

CryFS is written in C++, though an official rewrite in Rust is ongoing.  It works fine on Linux, and there are binaries for macOS and Windows as well.

The encryption suite is good: it uses AES-256-GCM and scrypt, and you can use xchacha20-poly1305 if you do not want AES-GCM.

It encrypts file metadata and splits all files into small blocks of fixed size.  It is the only software in this list that obfuscates all kinds of data (file names, directory names, tree hierarchy, sizes, timestamps), and it also protects against replaying an old file.

=> https://www.cryfs.org/howitworks CryFS implementation details
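
A minimal usage sketch with example directory names; on the first run, `cryfs` interactively asks for a password and creates the volume:

```
# create (first run) or mount the encrypted volume
cryfs cipher plain

# unmount when done (older versions use `fusermount -u plain`)
cryfs-unmount plain
```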

## rclone

It can be surprising to see rclone here: it is a file transfer program supporting many cloud providers, but it also features a few "fake" providers that can be combined with any other provider.  Those fake remotes can be used to encrypt files, but also to aggregate multiple remotes or split files into chunks.  We will focus on the "crypt" remote.

=> https://rclone.org/ Rclone official website

rclone is written in Go; it is available everywhere on desktop systems, but not on mobile devices.

Encryption is done through NaCl secretbox, using XSalsa20 for encryption and Poly1305 for authentication, with scrypt for key derivation.  A salt can be used, but it is optional: make sure to enable it.

Hierarchy obfuscation is not great: the whole structure information is preserved, although the names are obfuscated.

=> https://rclone.org/crypt/ rclone crypt remote implementation details
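
A minimal sketch of setting up a crypt remote on top of an existing remote; here `mydrive:` is an assumed, already-configured remote, and `rclone config create` will interactively ask for the missing values such as the passwords:

```
# wrap the existing remote into an encrypting "crypt" remote
rclone config create secret crypt remote=mydrive:encrypted

# files are encrypted locally before being uploaded
rclone copy ~/Documents secret:

# or mount the decrypted view
rclone mount secret: ~/mnt/secret
```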

## Other

ecryptfs is almost abandonware, so I did not cover it.

=> https://lore.kernel.org/ecryptfs/ef98d985-6153-416d-9d5e-9a8a8595461a@app.fastmail.com/ ecryptfs is unmaintained and untested

encfs is limited, and its developers recommend users switch to gocryptfs.

=> https://github.com/vgough/encfs?tab=readme-ov-file#about encFS GitHub page: anchor "about"

LUKS and VeraCrypt are not "cloud friendly": although you can keep a big local file encrypted with them and mount the volume locally, it will be synced as one huge blob to the remote service.

# Results

The source directories contained 4312 files and 480 directories, for a total of 847 MB.

* cryptomator ended up with 5280 files, 1345 directories for a total of 855 MB
* gocryptfs ended up with 4794 files, 481 directories for a total of 855 MB
* cryfs ended up with 57928 files, 4097 directories for a total of 922 MB
* rclone ended up with 4311 files, 481 directories for a total of 847 MB

Although Cryptomator has a few more files and directories in its encrypted output compared to the original, the obfuscation is really just all directories being flattened into a single one with obfuscated file names.  Some extra directories and files are created for Cryptomator's internal workings, which explains the small overhead.

I used the default settings for CryFS, with a block size of 16 kB, which is quite low and creates a huge overhead for synchronization software like the Nextcloud desktop client.  Increasing the block size is worth considering, depending on your file size distribution.  All the files are spread over a binary tree, allowing it to scale to a huge number of files without file system performance issues.

# Conclusion

In my opinion, the best choice from a security point of view is CryFS.  It features full data obfuscation, good encryption, and mechanisms that prevent replaying old files or swapping files.  The documentation is clear, and the design choices are explained well.

But to be honest, I would recommend Cryptomator to someone who wants a nice graphical interface and easy-to-use software, and whose threat model tolerates some metadata leakage.   It is also available everywhere (although not always for free), which is something to consider.

All of these tools use authenticated encryption, so you will know if a file was tampered with, although that does not protect against swapping files or replaying an old one; this is certainly not in everyone's threat model.  Most people just want to prevent whoever leaked the data from reading it, and the case of a cloud storage provider modifying your encrypted files is less likely.

# Going further


There is a GUI frontend for gocryptfs and cryfs called SiriKali.

=> https://mhogomchungu.github.io/sirikali/ SiriKali official project page
=> https://github.com/mhogomchungu/sirikali SiriKali GitHub project

Some self-hostable cloud storage providers exist with end-to-end encryption (files are encrypted/decrypted locally and only stored as blobs remotely):

The two major products I would recommend are Peergos and Seafile.  I am a Peergos user: it works well and features a Web UI.  Seafile's encryption is not as good, as using the Web UI requires sharing the password, and its metadata protection is poor too.

=> https://peergos.org/ Peergos official website
=> https://www.seafile.com/en/home/ Seafile official website
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2026-02-19-local-encrypted-volume-comparison.html</guid>
  <link>https://dataswamp.org/~solene/2026-02-19-local-encrypted-volume-comparison.html</link>
  <pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Revert fish shell deleting shortcuts behavior</title>
  <description>
    <![CDATA[
<pre># Introduction

In a recent fish shell change, the shortcuts to delete the last word were replaced by "delete the last big chunk" (I do not know exactly what it is called in this case).  This is the usual default behavior of the "command" key versus the "alt" key on macOS, which I guess is why it was changed in fish.

Unfortunately, this broke everyone's habits, and a standard keyboard does not even offer the new keybinding that received the old behavior.

There is an open issue asking to revert this change.

=> https://github.com/fish-shell/fish-shell/issues/12122 GitHub fish project: Revert alt-backspace behaviour on non-macOS systems #12122

I am using this snippet in `~/.config/fish/config.fish` to restore the previous behavior (the same as in all other shells, where M-d deletes the last word).  I built it from the GitHub issue comments; I had to add `$argv` for some reason.

```
if status is-interactive
 # Commands to run in interactive sessions can go here

 # restore delete behavior
 bind $argv alt-backspace backward-kill-word
 bind $argv alt-delete kill-word
 bind $argv ctrl-alt-h backward-kill-word
 bind $argv ctrl-backspace backward-kill-token
 bind $argv ctrl-delete kill-token
end
```
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2026-02-14-fish-shell-delete-behavior.html</guid>
  <link>https://dataswamp.org/~solene/2026-02-14-fish-shell-delete-behavior.html</link>
  <pubDate>Sat, 14 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Declaratively manage containers on Linux</title>
  <description>
    <![CDATA[
<pre># Introduction

When you have to deal with containers on Linux, two things often make you wonder how to handle them effectively: how to keep your containers up to date, and how to easily maintain the configuration of everything running.

It turns out podman offers systemd unit templates to declaratively manage containers, and podman can run in rootless user mode.  This combination gives the opportunity to create files, maintain them in git or deploy them with a configuration management tool like Ansible, and keep things separated per user.

It is also very convenient when you want to run a program shipped as a container on your desktop.

For some reason, this is called "quadlets".

=> https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html podman-systemd.unit man page

In this guide, I will create a kanboard service (a PHP software to run a kanban) under the kanban user.

# Setup (simple service)

You need to create files that declare containers and/or networks.  This can be done in various places depending on how you want to manage the files; the man page gives all the details, but basically you want to stick with the two following options:

* system-wide configuration: `/etc/containers/systemd/users/$(UID)`
* user configuration: `~/.config/containers/systemd/`

Both will run rootless containers under the user's UID, but the former keeps the files in `/etc/`, which may be more suitable for central management.

As systemd is used to run the containers, if you want to run a container for a user you are not logged in as, you need its systemd user session (and thus its services, including the containers) to be always running.  This is done by enabling "linger".

```
useradd -m kanban
loginctl enable-linger kanban
```

This will immediately create a session for that user and pop all related services.

Now, create a file `/etc/containers/systemd/users/1001/kanboard.container` (1001 being the UID of the kanban user) with this content:

```
[Container]
Image=docker.io/kanboard/kanboard:latest
Network=podman
PublishPort=10080:80
Volume=kanboard_data:/var/www/app/data
Volume=kanboard_plugins:/var/www/app/plugins
Volume=kanboard_ssl:/etc/nginx/ssl

[Service]
Restart=always

[Install]
WantedBy=default.target
```

This maps exactly to a very long podman command line that would use the image `docker.io/kanboard/kanboard:latest` in the network `podman`, declaring three different container volumes and their associated mount points.  The generator even allows you to add raw command line arguments in case an option is not available in the systemd format.

Because the user session is already running, the container will not start yet, unless you `disable-linger` and then `enable-linger` the kanban user again, which would not be ideal to be honest.  There is a better way: `systemctl --user --machine kanban@ daemon-reload`, which basically runs `systemctl --user daemon-reload` as the user `kanban`, except we do it from the root user, which is more convenient for automation.
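
Putting the commands together; note that the quadlet file `kanboard.container` generates a unit named `kanboard.service`:

```
# regenerate the user's units from the quadlet files
systemctl --user --machine kanban@ daemon-reload

# start the container now instead of waiting for the next boot
systemctl --user --machine kanban@ start kanboard.service
```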

Running the container this way will trigger exactly the same processes as if you started it manually with `podman run -v kanboard_data:/var/www/app/data/ [...] docker.io/kanboard/kanboard:latest`.

Note that you can omit the `[Install]` section if you do not want the container to start automatically and prefer starting/stopping it manually with `systemctl`; this is actually useful when the container runs under your regular user and you do not always need it.

# Setup (advanced service)

If you want to run a more complicated service that needs a couple of containers talking to each other, like a web server, a backend runner and a database, you only need to configure them on the same network.

If you need the containers of a group to start in a specific order, you can add systemd dependency declarations (such as `After=`) in the `[Unit]` section.

Podman runs a local DNS resolver that translates container names into working hostnames: this means if you have a PostgreSQL container called "db", you can refer to the PostgreSQL host as "db" from another container within the same network.  This works the same way as docker-compose.
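
As a sketch (the file names and images are examples), a `.network` quadlet file plus two containers attached to it; the app can then reach PostgreSQL using the host name `db`:

```
# mynet.network
[Network]

# db.container
[Container]
ContainerName=db
Image=docker.io/library/postgres:16
Network=mynet.network

# app.container
[Container]
Image=docker.io/library/myapp:latest
Network=mynet.network
Environment=DB_HOST=db
```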

# Ops

## Getting into a user shell

Getting an environment where `journalctl` or `systemctl` commands work requires using `machinectl shell kanban@`, otherwise the dbus environment variables will not be initialized.  Note that it also works when connecting with ssh, but that is not always ideal when working locally.

From this shell, you can run commands like `systemctl --user status kanboard.service` for our example, or `journalctl --user -f -u kanboard.service`, or run a shell in a container, inspect a volume, etc.

Using `sudo -u user` or `su - user` will not work.

## Disabling a user

If you want to disable the services associated with a user, use this command:

```
loginctl disable-linger username
```

This will immediately close all of the user's sessions and stop the services running under it.

## Automatic updates

This is the very first reason I started using quadlets for local services running in containers: I did not want to manually run `podman pull` over a list of images and then restart the related containers.

Podman provides a systemd service doing all of this for you; it works for containers with the parameter `AutoUpdate=registry` in the `[Container]` section.

Enable the timer of this service with: `systemctl --user enable --now podman-auto-update.timer` then you can follow the timer information with `systemctl --user status podman-auto-update.timer` or logs from the update service with `journalctl --user -u podman-auto-update.service`.

Make sure to pin your container image to a moving tag like "stable", "lts", or "latest" if you want the development version; the update mechanism will obviously do nothing if you pin the image to a specific version or checksum.
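
For our kanboard example, this amounts to one extra line in the quadlet's `[Container]` section:

```
[Container]
Image=docker.io/kanboard/kanboard:latest
AutoUpdate=registry
```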

# Conclusion

Quadlets made me switch to podman: they allowed me to deploy and maintain containers with Ansible super easily, and also to separate each service into a different user.

Prior to this, handling containers on a simple server or desktop was an annoying task: figuring out what should be running, how to start it, retrieving command lines from the shell history, or using a docker/podman compose file.  Quadlets also come with all the power of systemd, like querying a service's status or reading logs with journalctl.

# Going further

There is a program named "podlet" that converts other file formats into quadlet files; most notably, it is useful for transforming a `docker-compose.yml` file into quadlet files.

=> https://github.com/containers/podlet/ podlet GitHub page
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2026-02-10-podman-containers-with-systemd.html</guid>
  <link>https://dataswamp.org/~solene/2026-02-10-podman-containers-with-systemd.html</link>
  <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Hardware review: ergonomic mouse Logitech Lift</title>
  <description>
    <![CDATA[
<pre># Introduction

In addition to my regular computer mouse, at the end of 2024 I bought a Logitech Lift, a wireless ergonomic vertical mouse.  This was the first time I used such a mouse; although I regularly use a trackball, the experience is really different.

=> https://www.logitech.com/en-gb/shop/p/lift-vertical-ergonomic-mouse.910-006475 Logitech.com : Lift product

I wanted to write this article to give some feedback about this device: I enjoy it a lot and I cannot really go back to a regular mouse now.

# Specifications

The mouse works with a single AA / LR6 battery, which after nine months of heavy daily use is still reported as 30% charged.

The Lift connects using Bluetooth, but Logitech provides a small USB dongle for a perfect "out of the box" experience with any operating system.  The dongle can be stored inside the mouse when travelling or when not in use.  There is a small button on the bottom of the mouse and 3 LEDs, allowing the mouse to be switched between different computers: two over Bluetooth, one for the dongle (the first profile is always the dongle).  This lets you connect the mouse to two different computers over Bluetooth and switch between them, which works very well in practice.

About the buttons: nothing fancy with the standard two, there are extra "back / next" buttons within easy reach, and one button to cycle through the laser resolution / sensitivity settings.  The wheel is excellent, precise and easy to use; if you give it a good kick it will spin a lot without entering a free-wheel mode like some other wheels, which is super handy to scroll through a huge chunk of text.

Due to its design, the mouse is not ambidextrous, but Logitech makes a version for left-handed users and one for right-handed users.

# Experience

The first week with the mouse was really weird; I kept switching back and forth with my old SteelSeries mouse because I was less accurate and not used to it.

After a week, I became used to holding it, moving it, and it was a real joy and source of fun to go on the computer to use this mouse :)

Then, without noticing, I started using it exclusively.  A few months later, I realized I had not used the previous mouse for a long time and gave it a try.  It was a terrible experience: I was surprised at how poorly it fit my hand.  I disconnected it, and it has been stored in a box since then.

It is hard to describe the feeling of this ergonomic mouse.  The hand position is really different, but it feels so much more enjoyable that I do not consider using a non-ergonomic mouse ever again.

I was reluctant to use a wireless mouse at first, but not having to deal with the cable acting as a "spring" is really appreciable.

I can definitely play video games with this mouse, except fast-paced FPS games (maybe with some training?).

# Conclusion

The price tag could be a blocker for many, but at the same time a mouse is an essential peripheral when using your computer.  If you feel some pain in your hand when using your computer mouse, maybe give ergonomic mice a try.
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2025-09-05-hardware-review-logitech-lift.html</guid>
  <link>https://dataswamp.org/~solene/2025-09-05-hardware-review-logitech-lift.html</link>
  <pubDate>Fri, 05 Sep 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>URL filtering HTTP(S) proxy on Qubes OS</title>
  <description>
    <![CDATA[
<pre># Preamble

This article was first published as a community guide on Qubes OS forum.  Both are kept in sync.

=> https://forum.qubes-os.org/t/url-filtering-https-proxy/35846

# Introduction

This guide is meant for users who want to allow a qube to reach some websites but not the whole Internet, and who face the issue that the firewall does not work well with DNS names backed by frequently changing IPs.

⚠️ This guide is for advanced users who understand what a HTTP(s) proxy is, and how to type commands or edit files in a terminal.

The setup will create a `sys-proxy-out` qube that defines a list of allowed domains, and use qvm-connect-tcp to let client qubes use it as a proxy. Those qubes can have no netvm and still reach the filtered websites.

I based it on debian 12 xfce, so it's easy to set up and will be supported long term.

# Use case

* an offline qube that needs to reach a particular website
* a web browsing qube restricted to a list of websites
* mix multiple netvm / VPNs into a single qube

# Setup the template

* Install debian-12-xfce template
* Make a clone of it, let's call it debian-12-xfce-squid
* Start the qube and open a terminal
* Type `sudo apt install -y squid`
* Delete and replace `/etc/squid/squid.conf` with this content (the default file is not suitable at all)

```
acl localnet src 127.0.0.1/32

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl permit_list dstdomain "/rw/config/domains.txt"
http_access allow localnet permit_list

http_port 3128

cache deny all
logfile_rotate 0
coredump_dir /var/spool/squid
```

The configuration file only allows the proxy to be used for ports 80 and 443, and disables cache (which would only apply to port 80).

Close the template, you are done with it.

# Setup an out proxy qube

This step could be repeated multiple times, if you want to have multiple proxies with different lists of domains.

* Create a new qube, let's call it `sys-proxy-out`, based on the template you configured above (`debian-12-xfce-squid` in the example)
* Configure its firewall to allow the destination `*` and port TCP 443, and also `*` and port TCP 80 (this covers basic needs for doing http/https). This is an extra safety to be sure the proxy will not use another port.
* Start the qube
* Configure the domain list in `/rw/config/domains.txt` with this format:

```
# for a single domain
domain.example

# for all direct subdomains of qubes.org including qubes.org
# this works for doc.qubes-os.org for instance, but not foo.doc.qubes-os.org
.qubes-os.org
```

ℹ️ If you change the file, reload with `sudo systemctl reload squid`.

ℹ️ If you want to check that squid started correctly, type `systemctl status squid`.  You should read that it's active, and that there are no errors in the log lines.

⚠️ If you have a line with a domain included by another line, squid will not start as it considers it an error! For instance `.qubes-os.org` includes `doc.qubes-os.org`.

⚠️ As far as I know, it is only possible to allow a hostname or a wildcard of its direct subdomains, so you at least need to know the depth of the hostname. If you want to allow `anything.anylevel.domain.com`, you could use `dstdom_regex` instead of `dstdomain`, but it seems to be a regular source of configuration problems and should not be needed by most users.
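
For completeness, a hypothetical sketch of the regex variant; only the acl line changes (the existing `http_access allow localnet permit_list` line stays), and the pattern is an example to adapt:

```
# match domain.com and its subdomains at any depth
acl permit_list dstdom_regex (^|\.)domain\.com$
```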

In dom0, using the "Qubes Policy Editor" GUI, create a new file named 50-squid (or edit the file `/etc/qubes/policy.d/50-squid.policy`) and append the configuration lines that you need to adapt from the following example:

```
qubes.ConnectTCP +3128 MyQube @default allow target=sys-proxy-out
qubes.ConnectTCP +3128 MyQube2 @default allow target=sys-proxy-out
```

This will allow qubes `MyQube` and `MyQube2` to use the proxy from `sys-proxy-out`. Adapt to your needs here.

# How to use the proxy

Now that the proxy is set up and `MyQube` is allowed to use it, a few more things are required:

* Start qube `MyQube`
* Edit `/rw/config/rc.local` to add `qvm-connect-tcp ::3128`
* Configure http(s) clients to use `localhost:3128` as a proxy

It's possible to define the proxy user-wide, so it should be picked up by all running programs, using this:

```
mkdir -p /home/user/.config/environment.d/
cat <<EOF >/home/user/.config/environment.d/proxy.conf
all_proxy=http://127.0.0.1:3128/
EOF
```

# Going further

## Using a disposable qube for the proxy

The sys-proxy-out qube could be disposable. To proceed:

* mark sys-proxy-out as a disposable template in its settings
* create a new disposable qube using sys-proxy-out as a template
* adapt the dom0 rule to have the new disposable qube name in the target field

## Checking logs

In the proxy qube, you can check all the requests done in `/var/log/squid/access.log`; you can filter with `grep TCP_DENIED` to see denied requests, which can be useful to adapt the domain list.

## Test the proxy

### Check allowed domains are reachable

From the http(s) client qube, you can try this command to see if the proxy is working:

```
curl -x http://localhost:3128 https://a_domain_you_allowed/
```

If the output is not `curl: (56) CONNECT tunnel failed, response 403`, then it's working.

### Check non-allowed domains are denied

Use the same command as above, but with a domain you did not allow

```
curl -x http://localhost:3128 https://a_domain_you_did_not_allow/
```

The output should be `curl: (56) CONNECT tunnel failed, response 403`.

### Verify nothing is getting cached

In the qube `sys-proxy-out`, inspect `/var/spool/squid/`: it should be empty. If not, please report it, as this should not happen.

Some log files exist in `/var/log/squid/`; if you don't want any hints about queried domains, configure squid accordingly. Privacy-specific tweaks are beyond the scope of this guide.
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2025-08-29-qubes-os-filtering-out-proxy.html</guid>
  <link>https://dataswamp.org/~solene/2025-08-29-qubes-os-filtering-out-proxy.html</link>
  <pubDate>Fri, 29 Aug 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Introduction to Qubes OS when you do not know what it is</title>
  <description>
    <![CDATA[
<pre># Introduction

Qubes OS can appear weird and hard to figure out for people who have never used it.  With this article, I would like to help others understand what it is and when it is useful.

=> https://www.qubes-os.org/ Qubes OS official project page

Two years ago, I wrote something that was mostly a list of Qubes OS features, but it did not really help readers understand what Qubes OS is, beyond the fact that it does XYZ stuff.

While Qubes OS is often tagged as a security operating system, it really offers a canvas for making compartmentalized systems work as a whole.

Qubes OS gives its users the ability to do cyber risk management the way they want, which is unique.  A quick word about it if you are not familiar with risk management: when running software at any level, you should ask "can I trust this?".  Can you trust the packager?  The signing key?  The original developer?  The transitive dependencies involved?  It is not possible to entirely trust the whole chain, so you might want to take measures like handling sensitive data only when disconnected.  Or you might want to ensure that if your web browser is compromised, the data leak and damage will be reduced to a minimum.  This can go pretty far and is complementary to defense in depth or security hardening of operating systems.

=> https://dataswamp.org/~solene/2023-06-17-qubes-os-why.html 2023-06-17 Why one would use Qubes OS?

In this article, I will skip some features that I do not think are interesting for introducing Qubes OS or that could be too confusing, so no need to tell me I forgot to talk about XYZ feature :-)

# Meta operating system

I like to call Qubes OS a meta operating system, because it is not a Linux / BSD / Windows based OS: its core is Xen (a virtualization-oriented kernel).  Not only is it Xen based, but by design it is meant to run virtual machines, hence the name "meta operating system": an OS meant to run many OSes.

Qubes OS comes with a few virtual machines templates that are managed by the development team:

* debian
* fedora
* whonix (a debian-based distribution hardened for privacy)

There are also community templates for arch linux, gentoo, alpine, kali, kicksecure and certainly others you can find within the community.

Templates are not just templates: they are ready-to-work, one-click/one-command install systems that integrate well within Qubes OS.  It is time to explain how virtual machines interact together, as this is what makes Qubes OS great compared to any Linux system running KVM.

A virtual machine is named a "qube": it is a set of configuration and integration (template, firewall rules, resources, services, icons, ...).

# Virtual machines synergy and integration

The host system, which has some kind of "admin" powers with regard to virtualization, is named dom0 in Xen jargon.  On Qubes OS, dom0 is a Fedora system (using a Xen kernel) with very few things installed, no networking and no USB access.  Those two device classes are assigned to two qubes, respectively named "sys-net" and "sys-usb".  This is done to reduce the attack surface of dom0.

When running a graphical program within a qube, it shows up as a dedicated window in the dom0 window manager; there is no big single window per virtual machine, so running programs feels like a unified experience.  The seamless windows feature works through a specific graphics driver within the qube; official templates support it and there is a Windows driver for it too.

Each qube has its own X11 server, clipboard, kernel and memory.  There are features to copy the clipboard of one qube and transfer it to the clipboard of another qube.  This can be configured to prevent the clipboard from being used where it should not.  This is rather practical if you store all your passwords in a qube and want to copy/paste them.

There are also file copy capabilities between qubes, which go through Xen channels (an interconnection between Xen virtual machines allowing data transfer), so no network is involved.  File copy can also be configured: for instance, one qube may be able to receive files from any other, but never be allowed to send files out.
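
For illustration, here is a policy sketch in dom0, assuming the Qubes 4.x policy file format and a qube named "vault" (the file name and qube name are my own choices; check the Qubes OS documentation for the exact syntax):

```
# /etc/qubes/policy.d/30-user.policy
# "vault" may receive files from anywhere (after confirmation),
# but may never send files out; the first matching rule wins
qubes.Filecopy  *  @anyvm  vault   ask
qubes.Filecopy  *  vault   @anyvm  deny
```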

In operations involving RPC features like file copy, a GUI in dom0 asks the user for confirmation (with a tiny delay to prevent hitting Enter before being able to understand what is going on).

As mentioned above, USB devices are assigned to a qube named "sys-usb", it provides a program to pass a device to a given qube (still through Xen channels), so it is easy to dispatch devices where you need them.

# Networking

Qubes OS offers tree-like networking, with sys-net (holding the hardware network devices) at the root and a sys-firewall qube below; from there, you can attach qubes to sys-firewall to get network access.

Firewall rules can be configured per qube, and are applied by the qube providing network to the one configured; this prevents a qube from removing its own rules, because filtering happens one level higher in the tree.
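
As an illustration from dom0, this is roughly what it looks like with `qvm-firewall` (the qube name and host are assumptions; double check the syntax against the Qubes OS documentation):

```
# allow the qube "work" to reach a single host over HTTPS only;
# the rules are inserted before the default accept rule, and the
# final drop discards everything else
qvm-firewall work add --before 0 accept dsthost=vpn.example.com proto=tcp dstports=443
qvm-firewall work add --before 1 drop
qvm-firewall work list
```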

A tree-like networking system also allows running multiple VPNs in parallel, and assigning qubes to each VPN as you need.  In my case, when I work for multiple clients, they each have their own VPN, so I dedicate a qube to connecting to each client's VPN, then I attach the qubes I use to work for a client to the according VPN qube.  With a firewall rule set on the VPN qube to prevent any connection except to the VPN endpoint, I have the guarantee that all traffic of that client's work will go through their VPN.

It is also possible to give a qube no network at all, so it is offline and unable to connect to anything.

Qubes OS comes out of the box (unless you unchecked the box during installation) with a qube encapsulating all of its network traffic through the Tor network (incompatible traffic like UDP is discarded).

# Templates (in Qubes OS jargon)

I talked about templates earlier in the sense of "ready to be installed and used", but a "Template VM" in Qubes OS has a special meaning.  In order to keep things manageable when you have a few dozen qubes, like handling updates or installing software, Qubes OS introduced Template VMs.

A Template VM is a qube that you almost never use, except when you need to install software or make a system change within it.  The Qubes OS updater will also make sure, from time to time, that the installed packages are up-to-date.

So, what are they for if they are not used?  They are templates for a type of qube named "AppVM".  An AppVM is what you work with the most.  It is an instance of the template it is configured to use, always reset to pristine state when starting, with a few directories persistent across reboots for this AppVM.  These directories are all in `/rw/` and symlinked where useful: `/home` and `/usr/local/` by default.  You can have a single Template VM of Debian 13 and a dozen AppVMs, each with their own data: if you want to install "vim", you do it in the template, and then all AppVMs using the Debian 13 Template VM will have "vim" installed (after rebooting them following the change). Note that this also works for emacs :)
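
As a sketch of this workflow (the template and qube names are assumptions, adapt them to your setup):

```
# inside the template qube (e.g. "debian-13"):
sudo apt install vim

# in dom0, restart an AppVM based on that template so it picks up
# the change:
qvm-shutdown --wait my-appvm
qvm-start my-appvm
```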

With this mechanism, it is easy to switch an AppVM from one Linux distribution to another: just switch the qube's template from Debian to Fedora, reboot, done.  This is also useful when switching to a new major release of the distribution in the template: Debian 13 is buggy?  Switch back to Debian 12 until it is fixed and continue working (do not forget to write a bug report to Debian).
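
In dom0, the switch itself is a one-liner; the names here are assumptions:

```
# power off the qube, point it at another template, start it again
qvm-shutdown --wait work
qvm-prefs work template debian-12
qvm-start work
```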

# Disposables templates

You learned about Template VMs and how an AppVM inherits everything from its template, reset to a fresh state every time.  What about an AppVM that could itself be run from its pristine state the same way?  It exists: it is called a disposable qube.

Basically, a disposable qube is a temporary copy of an AppVM with all its storage discarded on shutdown.  This is the default for the sys-usb qube handling USB: if it gets infected by a device, it will be reset to a fresh state on the next boot.

Disposables have many use cases:

* running a command on a non-trusted file, to view it or try to convert it into something more trustworthy (a PDF into a BMP?)
* running a known-good system for a specific task, and being sure it will work exactly the same every time, like when using a printer
* as a playground to try stuff in an environment identical to another

# Automatic snapshot

Last but not least, a pretty nice but hidden feature is the ability to revert the storage of a qube to a previous state.

=> https://www.qubes-os.org/doc/volume-backup-revert/ Qubes OS documentation: volume backup and revert

Qubes use virtual storage that can stack multiple changes: a base image with different layers of changes stacked on top of it over time.  Once the number of revisions to keep is reached, the oldest layer above the base image is merged into it.  This simple mechanism allows reverting to any checkpoint between the base image and the last checkpoint.

Did you delete important files, and restoring a backup is way too much effort?  Revert the last volume.  Did a package update break an important software in a template? Revert the last volume.
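
In dom0, this looks roughly like the following (the qube name is an assumption):

```
# list the saved revisions of the private volume of "work"
qvm-volume info work:private

# with the qube powered off, revert to the latest saved revision
qvm-volume revert work:private
```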

Obviously, this comes at an extra storage cost: deleted files are only freed from storage once they no longer exist in any checkpoint.

# Downsides of running Qubes OS

Qubes OS has some drawbacks:

* it is slower than running a vanilla system, because all the virtualization involved has a cost; most notably, all 3D rendering is done on the CPU within qubes, which is terrible for eye-candy effects or video decoding.  It is possible, with a lot of effort, to assign a second GPU (when you have one) to a single qube at a time, but as this already too long sentence tells out loud, it is not practical.
* it requires effort to get into as it is different from your usual operating system, you will need to learn how to use it (this sounds rather logical when using a tool)
* hardware compatibility is a bit limited due to the Xen kernel; there is a compatibility list curated by the community

=> https://www.qubes-os.org/hcl/ Qubes OS hardware compatibility list

# Conclusion

I tried to give a simple overview of the major Qubes OS features.  The goal was not to make you, the reader, an expert or aware of every single feature, but to allow you to understand what Qubes OS can offer.
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2025-08-03-introduction-to-qubes-os.html</guid>
  <link>https://dataswamp.org/~solene/2025-08-03-introduction-to-qubes-os.html</link>
  <pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to trigger a command on a running Linux laptop when disconnected from power</title>
  <description>
    <![CDATA[
<pre># Introduction

After thinking about the BusKill product, which triggers a command once its USB cord gets disconnected, I have been thinking about a simple alternative.

=> https://www.buskill.in BusKill official project website

When using a laptop connected to power most of the time, you may want it to power off once it gets disconnected; this can be really useful if you use it in a public area like a bar or a train.  The idea is to protect the laptop if it gets stolen while in use and unlocked.

Here is how to proceed on Linux, using an udev rule that triggers on a change in the power_supply subsystem.

For OpenBSD users, it is possible to use apmd as I explained in this article:

=> https://dataswamp.org/~solene/2024-02-20-rarely-known-openbsd-features.html#_apmd_daemon_hooks Rarely known OpenBSD features: apmd daemon hooks

In the example, the script will just power off the machine; it is up to you to do whatever you want, like destroying the LUKS master key or triggering the coffee machine :D

# Setup

Create a file `/etc/udev/rules.d/disconnect.rules`; you can name it however you want as long as it ends with `.rules`:

```
SUBSYSTEM=="power_supply", ENV{POWER_SUPPLY_ONLINE}=="0", ENV{POWER_SUPPLY_TYPE}=="Mains", RUN+="/usr/local/bin/power_supply_off"
```

Create a file `/usr/local/bin/power_supply_off` that will be executed when you unplug the laptop:

```
#!/bin/sh
echo "Going off because power supply got disconnected" | systemd-cat
systemctl poweroff
```

This simple script will add an entry in journald before triggering the system shutdown.

Mark this script executable with:
```
chmod +x /usr/local/bin/power_supply_off
```

Reload udev rules using the following commands:

```
udevadm control --reload-rules
udevadm trigger
```

# Testing

If you unplug your laptop's power cord, it should power off, and you should find an entry in the logs.

If nothing happens, look at the systemd logs to see if something is wrong in udev, like a syntax error in the file you created or an incorrect path to the script.
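
A couple of standard commands can help debugging; nothing here is specific to this setup:

```
# watch power_supply events live while plugging/unplugging the cord
udevadm monitor --property --subsystem-match=power_supply

# check the udev service logs for rule parsing errors
journalctl -u systemd-udevd -e
```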

# Script ideas

Depending on your needs, here is a list of actions the script could do, from gentle to hardcore:

* Lock user sessions
* Hibernate
* Proper shutdown
* Instant power off (through sysrq)
* Destroy LUKS master key to make LUKS volume unrecoverable + Instant power off

# Conclusion

While BusKill is an effective / unusual product that is certainly useful for a niche, protecting a running laptop against thieves is an extra layer of defense when being outside.

Obviously, this use case works only when the laptop is connected to power.
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2025-05-31-linux-killswitch-on-power-disconnect.html</guid>
  <link>https://dataswamp.org/~solene/2025-05-31-linux-killswitch-on-power-disconnect.html</link>
  <pubDate>Sat, 31 May 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>PDF bruteforce tool to recover locked files</title>
  <description>
    <![CDATA[
<pre># Introduction

Today, I had to open a password protected PDF (a medical report); unfortunately, it is a few years old and I did not remember the password format (usually something based on name and birthdate -_-).

I found a nice tool that can try a lot of combinations, and it gets even better: if you know a bit about the password format, you can easily generate the patterns to test.

=> https://github.com/mufeedvh/pdfrip pdfrip GitHub page

# Usage

The project page offers binaries for some operating systems, but you can compile it using cargo.

The documentation in the project's README is quite clear and easy to understand.  It is possible to generate some simple patterns, try all combinations of random characters, or use a dictionary (some tools exist to generate dictionaries).

Inside a virtual machine with 4 vCPUs, I was able to achieve 36,000 checks per second; on bare metal I expect this to be higher.
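
From memory of the README, usage looks roughly like the following; treat the subcommand names and patterns as assumptions and check `pdfrip --help`:

```
# try all 4-digit PINs
pdfrip -f report.pdf range 1000 9999

# try a known name prefix followed by a year, e.g. MARTIN1984
pdfrip -f report.pdf custom-query MARTIN{1900-2010}

# try every word from a dictionary file
pdfrip -f report.pdf wordlist passwords.txt
```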
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2025-03-09-test-pdf-passwords.html</guid>
  <link>https://dataswamp.org/~solene/2025-03-09-test-pdf-passwords.html</link>
  <pubDate>Sun, 09 Mar 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Blog activity for 2025</title>
  <description>
    <![CDATA[
<pre># Introduction

Hello, you may have wondered why the blog has not been really active this year, let's talk about it :-)

# Retrospective

## Patreon

First, I decided to stop the Patreon page for multiple reasons.  It was an interesting experiment and helped me a lot in 2023 and part of 2024, as I went freelance and did not earn much money.  Now, the business is running fine, and I prefer my former patrons to support someone else who is more active / needs the money.

The way I implemented Patreon support was like this: people supporting me financially had access to blog posts 2 or 3 days earlier than the public release; the point was to give them a little something for their support without creating a paywall for some content.  I think it worked quite well in that regard.  A side effect of the "early access publishing" was that, almost every time, I used this extra delay to add more content / fix issues that I did not think about when writing.  As a reminder, I usually just write, then proofread quickly and publish.

Having people pay money for early access to my blog posts created some kind of expectation from them in my mind, so I tried to raise the bar in terms of content, to the point that I came to procrastinate because "this blog post will not be interesting enough" or "this will just take too long to write, I'm bored".  My writing cadence got delayed: I was able to sustain once a week at first, then moved to twice a month.  I have no idea if readers had "expectations", but I imagined it and acted as if it was a thing.

For each blog post I was publishing, this also created extra work for me:

* publish in early access
* write short news about it on Patreon
* wait a few days and republish not in early access

It is not much more work, but it was still more things to think about and schedule.

Cherry on the cake: Patreon was already bloated when I started using it, but it became more and more aggressive in terms of marketing and selling features, which disgusted me at some point.  I was not using all of this, but I felt bad that people supporting me had to deal with it.

I used Patreon to publish a "I stop Patreon support but the blog continues" news, but it seems freezing a creator's page is poorly handled on Patreon: subscribers are not able to see anything anymore once the page is frozen?!  Sorry for the lack of news, I thought it was working fine :/

## Different contribution place

The blog started and has lived as the place where I shared my knowledge during my continuous learning journey.  The thing is, I learn less nowadays, and what I learn is more complicated knowledge that is hard to share, because it is super niche and certainly not fascinating to most, and because sharing it correctly may be hard.

Most of the blog is about OpenBSD; there was no community place to share this, so I self-hosted it.  Then, I started to write about NixOS and got invited by the people I worked with at that time (at the company Tweag) to contribute to the NixOS documentation: after all, it made sense not to write something only I could update and that nobody else could fix.  I did it a bit, but also continued my blog in parallel to share experience and ideas, not really "documentation".

I have now been using Qubes OS daily for more than a year.  I wrote a bit about it, but I started to contribute actively to the community guides hosted on the project's forum.  As a result, there is less content to publish on the blog, because it just makes sense to centralize all the documentation in one place that can be managed by a team instead of here.

I spent a lot of time contributing to Qubes OS community guides, mostly about networking/VPNs, and in early 2025 I officially joined the Qubes OS core team as a documentation maintainer (concretely, this gives commit rights on some website/documentation related repositories).  The Qubes OS team is super nice, and the way the work is handled is cool.  I will spend a lot of contribution time there (there is a huge backlog of changes to review first), so still less time and incentive to write here.

## New real job and new place

As stated earlier, I finally found a workplace that I enjoy and that can keep me busy; my last two employers were not really able to figure out how to use my weird skill set.  In the previous years, I had a lot of time to kill during work, hence time to experiment and write; I just have a lot less time now because I am really busy at work doing cool things.

My family also moved to a new place in 2024; there is a lot of work and gardening to handle, so between this and the job, I just do not have many things to share on the blog at the moment.

# Conclusion

The blog is not dead.  I think I will be able to resume activity soon, now that I have turned the page on Patreon and identified why I was not writing here (I like writing here!).

I have a backlog of ideas, and I may also write simpler blog posts when I would like to share an idea or a cool project without covering it entirely.

Thank you everyone for your support!
</pre>
    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2025-02-16-2025-news.html</guid>
  <link>https://dataswamp.org/~solene/2025-02-16-2025-news.html</link>
  <pubDate>Sun, 16 Feb 2025 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>
