<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>https://dataswamp.org/~solene/</link>
    <atom:link href="https://dataswamp.org/~solene/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>Using a dedicated administration workstation for my infrastructure</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_The_need"> The need</a>
</li>
    <li>3. <a href="#_Setup"> Setup</a>
      <ul>
      <li>3.1. <a href="#_Workstation"> Workstation</a>
</li>
      <li>3.2. <a href="#_Servers"> Servers</a>
</li>
      <li>3.3. <a href="#_File_exchange"> File exchange</a>
    </li></ul></li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>As I moved my infrastructure to a whole new architecture, I decided to expose critical access only to dedicated administration systems (I have just one).  That workstation is dedicated to administering my infrastructure: it can only connect to my servers over a VPN and cannot reach the Internet.
</p>
<p>This blog post explains why I am doing this, and gives a high level overview of the setup.  Implementation details are not fascinating, as they only require basic firewall, HTTP proxy and VPN configuration.
</p>
<h1 id="_The_need">2. The need <a href="#_The_need">§</a></h1>
<p>I wanted my regular computer to be unable to handle any administration task, so I have a computer "like a regular person": no SSH keys, no VPN, and a password manager that does not mix personal credentials with administration credentials.  To limit the risks of credential leaks or malware, it makes sense to decouple the admin role from the "everything else" role.  So far, I have been using Qubes OS, which helped me do so at the software level, but I wanted to go further.
</p>
<h1 id="_Setup">3. Setup <a href="#_Setup">§</a></h1>
<p>This is a rather quick and simple explanation of what you have to do in order to set up a dedicated system for administration tasks.
</p>
<h2 id="_Workstation">3.1. Workstation <a href="#_Workstation">§</a></h2>
<p>The admin workstation I use is an old laptop; it only needs a web browser (unless you have no internal web services), an SSH client, and the ability to connect to a VPN.  Almost any OS can do the job, just pick the one you are the most comfortable with, especially with regard to the firewall configuration.
</p>
<p>The workstation has its own SSH key that is deployed on the servers.  It also has its own VPN access to the infrastructure core, and its own password manager.
</p>
<p>Its firewall is configured to block all in and out traffic except the following:
</p>
<ul>

  <li>UDP traffic to allow WireGuard</li>
  <li>HTTP proxy address:port through WireGuard interface</li>
  <li>SSH through WireGuard</li>
</ul>
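
<p>As an illustration, here is a minimal PF sketch of such a policy for an OpenBSD workstation (the interface names, addresses and ports are placeholders; adapt them to your VPN endpoint and proxy):
</p>
<pre><code># /etc/pf.conf -- deny everything, then allow only the VPN,
# and the proxy/SSH through the tunnel
vpn_endpoint = "192.0.2.10"  # placeholder: public IP of the VPN server
proxy = "10.0.0.1"           # placeholder: HTTP proxy inside the VPN

set skip on lo
block all
pass out on egress proto udp to $vpn_endpoint port 51820  # WireGuard
pass out on wg0 proto tcp to $proxy port 8080             # HTTP proxy
pass out on wg0 proto tcp to any port 22                  # SSH
</code></pre>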

<p>The HTTP proxy exposed by the infrastructure has a whitelist allowing only a few FQDNs.  I actually want to use the admin workstation for some tasks, like managing my domains through my registrar's web console.  Keeping the list as small as possible is important: you do not want to start using this workstation for browsing the web or reading emails.
</p>
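<p>For example, with Squid as the HTTP proxy, the whitelist could be sketched like this (the addresses and domain are placeholders):
</p>
<pre><code># squid.conf excerpt -- only allow a handful of destinations
acl admin_wks src 10.0.0.2/32              # placeholder: workstation VPN address
acl admin_dst dstdomain .registrar.example.com
http_access allow admin_wks admin_dst
http_access deny all
</code></pre>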
<p>On this machine, make sure to configure the system to use the HTTP proxy for updates and for installing packages.  The difficulty of doing so varies from one operating system to another.  While Debian only required a single file in <code>/etc/apt/apt.conf.d/</code> to configure apt to use the HTTP proxy, OpenBSD needed both <code>http_proxy</code> and <code>https_proxy</code> environment variables, and some scripts had to be patched as they do not use these variables; I had to check that fw_update, pkg_add, sysupgrade and syspatch were all working.
</p>
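<p>As a sketch (the proxy address is a placeholder), the Debian side is a single file, while on OpenBSD the variables go into the environment:
</p>
<pre><code># Debian: /etc/apt/apt.conf.d/80proxy
Acquire::http::Proxy "http://10.0.0.1:8080/";
Acquire::https::Proxy "http://10.0.0.1:8080/";

# OpenBSD: e.g. in /etc/profile
export http_proxy=http://10.0.0.1:8080/
export https_proxy=http://10.0.0.1:8080/
</code></pre>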
<p>Ideally, if you can afford it, configure remote logging of this workstation's logs to a central log server.  When available, <code>auditd</code> monitoring of access to and changes of important files in <code>/etc</code> can give precious information.
</p>
<h2 id="_Servers">3.2. Servers <a href="#_Servers">§</a></h2>
<p>My SSH servers are only reachable through a VPN; I do not expose SSH publicly anymore.  I also do IP filtering over the VPN, so only the VPN clients that have a reason to connect over SSH are allowed to do so.
</p>
<p>The web interfaces for services like Minio, Pi-hole and the monitoring dashboard are all restricted to the admin workstations only.  Sometimes, you have the opportunity to separate the admin part by adding an HTTP filter on an <code>/admin/</code> URI, or the service uses a different port for administration and for the service itself (like Minio).  When enabling a new service, think about everything you can restrict to the admin workstations only.
</p>
<p>Depending on your infrastructure's size and locations, you may want to use dedicated systems as SSH/VPN/HTTP proxy entry points; it is better if they are not shared with important services.
</p>
<h2 id="_File_exchange">3.3. File exchange <a href="#_File_exchange">§</a></h2>
<p>You will need to exchange data with the admin workstation (rarely the other way around); I found nncp to be a good tool for that.  You can imagine a lot of different setups, but I recommend picking one that:
</p>
<ul>

  <li>does not require a daemon on the admin workstation: this keeps the workstation's attack surface small</li>
  <li>allows encryption at rest: so you can easily use any deposit system for the data exchange</li>
  <li>is asynchronous: a synchronous connection could potentially be dangerous because it establishes a direct link between the sender and the receiver</li>
</ul>

<p><a href='https://dataswamp.org/~solene/2024-10-04-secure-file-transfer-with-nncp.html'>Previous blog post: Secure file transfer with NNCP</a></p>
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>I learned about this method while reading papers from ANSSI (the French national cybersecurity agency).  While it may sound extreme, it is a good practice I endorse.  It gives a use to old second-hand hardware I own, and it improves my infrastructure's security while giving me peace of mind.
</p>
<p><a href='https://cyber.gouv.fr/'>ANSSI website (in French)</a></p>
<p>In addition, if you want to allow some people to work on your infrastructure (maybe you want to set up some infra for an association?), you already have the framework to restrict their scope and trace what they do.
</p>
<p>Of course, the amount of complexity and resources you throw at this is up to you: you could totally have a single server, lock most of its services behind a VPN and call it a day, or have multiple servers worldwide and use dedicated servers as entry points to their software-defined network.
</p>
<p>Last thing: make sure that you can bootstrap into your infrastructure if the only admin workstation is lost or destroyed.  Most of the time, you will have physical/console access, which is enough (make sure the password manager is reachable from the outside for this case).
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-10-19-my-admin-workstation.html</guid>
  <link>https://dataswamp.org/~solene/2024-10-19-my-admin-workstation.html</link>
  <pubDate>Wed, 23 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Securing backups using S3 storage</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Quick_intro_to_object_storage"> Quick intro to object storage</a>
      <ul>
      <li>2.1. <a href="#_Open_source_S3-compatible_storage_implementations"> Open source S3-compatible storage implementations</a>
    </li></ul></li>
    <li>3. <a href="#_Configure_your_S3"> Configure your S3</a>
</li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>
</li>
    <li>5. <a href="#_Going_further"> Going further</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>In this blog post, you will learn how to make secure backups using Restic and an S3-compatible object storage.
</p>
<p>Backups are incredibly important: you may lose important files that only existed on your computer, or lose access to some encrypted accounts or drives.  When you need backups, you need them to be reliable and secure.
</p>
<p>There are two methods to handle backups:
</p>
<ul>

  <li>pull backups: a central server connects to each system and pulls data to store it locally; this is how rsnapshot, BackupPC or Bacula work</li>
  <li>push backups: each system runs the backup software locally to store the data on the backup repository (either local or remote); this is how most backup tools work</li>
</ul>

<p>Both workflows have pros and cons.  Pull backups are not encrypted, and a single central server has access to everything, which is rather bad from a security point of view.  Push backups handle encryption and only need access to the system where they run, but an attacker could destroy the backups using the backup tool.
</p>
<p>I will explain how to leverage S3 features to protect your backups from an attacker.
</p>
<h1 id="_Quick_intro_to_object_storage">2. Quick intro to object storage <a href="#_Quick_intro_to_object_storage">§</a></h1>
<p>S3 is the name of an AWS service used for object storage.  Basically, it is a huge key-value store in which you can put data and retrieve it; there is very little metadata associated with an object.  Objects are all stored in a "bucket"; they have a path, and you can organize the bucket with directories and subdirectories.
</p>
<p>Buckets can be encrypted, which is an important feature if you do not want your S3 provider to be able to access your data.  However, most backup tools already encrypt their repository, so adding encryption to the bucket is not really useful.  I will not explain how to use bucket encryption in this guide, although you can enable it if you want.  Using it requires storing more secrets outside of the backup system if you want to restore, and it does not provide real benefits because the repository is already encrypted.
</p>
<p>S3 was designed to be highly efficient for storing / retrieving data, but it is not a competitor to POSIX file systems.  A bucket can be public or private; you can host your website in a public bucket (and it is rather common!).  A bucket has permissions associated with it: you certainly do not want to allow random people to put files in your public bucket (or list the files), but you need to be able to do so yourself.
</p>
<p>The protocol designed around S3 was reused for what we call "S3-compatible" services on which you can directly plug any "S3-compatible" client, so you are not stuck with AWS.
</p>
<p>This blog post exists because I wanted to share a cool S3 feature (not really S3 specific, but almost everyone implemented it) that goes well with backups: a bucket can be versioned, so every change happening on it can be reverted.  Now, think about an attacker escalating to root privileges: they can access the backup repository, delete all the files there, then destroy the server.  With a backup on a versioned S3 storage, you could revert the bucket to just before the deletion happened and recover your backup.  To prevent this, the attacker would also need access to the S3 storage credentials, which are different from the credentials required to use the bucket.
</p>
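<p>To give an idea of what such a recovery looks like, here is a sketch using the AWS CLI against an S3-compatible endpoint (the bucket name, endpoint and version id are placeholders):
</p>
<pre><code># list the versions (including delete markers) of the repository objects
aws s3api list-object-versions --bucket my-backups \
    --endpoint-url https://s3.example.com

# retrieve a previous version of an object
aws s3api get-object --bucket my-backups --key restic/config \
    --version-id SOME_VERSION_ID --endpoint-url https://s3.example.com config
</code></pre>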
<p>Finally, restic supports S3 as a backend, and this is what we want.
</p>
<h2 id="_Open_source_S3-compatible_storage_implementations">2.1. Open source S3-compatible storage implementations <a href="#_Open_source_S3-compatible_storage_implementations">§</a></h2>
<p>Here is a list of open source and free S3-compatible storage implementations.  I played with them all; they have different goals and purposes, and they all worked well enough for me:
</p>
<p><a href='https://github.com/seaweedfs/seaweedfs'>Seaweedfs GitHub project page</a></p>
<p><a href='https://garagehq.deuxfleurs.fr/'>Garage official project page</a></p>
<p><a href='https://min.io/'>Minio official project page</a></p>
<p>A quick note about those:
</p>
<ul>

  <li>I consider seaweedfs to be the Swiss army knife of storage: you can mix multiple storage backends and expose them over different protocols (like S3, HTTP, WebDAV), it can replicate data over remote instances, and you can do tiering (based on last access time or speed) as well.</li>
  <li>Garage is a relatively new project; it is quite bare-bones in terms of features, but it works fine and supports high availability with multiple instances.  It only offers S3.</li>
  <li>Minio is the big player; it has a paid version (which is extremely expensive), although the free version should be good enough for most users.</li>
</ul>

<h1 id="_Configure_your_S3">3. Configure your S3 <a href="#_Configure_your_S3">§</a></h1>
<p>You need to pick an S3 provider: you can self-host it or use a paid service, it is up to you.  I like Backblaze as it is super cheap, at $6/TB/month, but I also have a local Minio instance for some needs.
</p>
<p>Create a bucket, enable the versioning on it and define the data retention, for the current scenario I think a few days is enough.
</p>
<p>Create an application key for your restic client with the following permissions: "GetObject", "PutObject", "DeleteObject", "GetBucketLocation", "ListBucket".  The names can change, but the key needs to be able to put/delete/list data in the bucket (and only this bucket!).  After this process is done, you will get a pair of values: an identifier and a secret key.
</p>
<p>Now, you will have to provide the following environment variables to restic when it runs:
</p>
<ul>

  <li><code>AWS_DEFAULT_REGION</code> which contains the region of the S3 storage, this information is given when you configure the bucket.</li>
  <li><code>AWS_ACCESS_KEY_ID</code> which contains the access key generated when you created the application key.</li>
  <li><code>AWS_SECRET_ACCESS_KEY</code> which contains the secret key generated when you created the application key.</li>
  <li><code>RESTIC_REPOSITORY</code> which will look like <code>s3:https://$ENDPOINT/$BUCKET</code> with $ENDPOINT being the bucket endpoint address and $BUCKET the bucket name.</li>
  <li><code>RESTIC_PASSWORD</code> which contains your backup repository passphrase to encrypt it, make sure to write it down somewhere else because you need it to recover the backup.</li>
</ul>
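
<p>For instance, you could keep these in a small environment file sourced before running restic (all values below are placeholders):
</p>
<pre><code># restic-env.sh -- source this file before running restic
export AWS_DEFAULT_REGION="us-west-002"
export AWS_ACCESS_KEY_ID="myKeyIdentifier"
export AWS_SECRET_ACCESS_KEY="mySecretKey"
export RESTIC_REPOSITORY="s3:https://s3.us-west-002.backblazeb2.com/my-backups"
export RESTIC_PASSWORD="my-long-passphrase-also-stored-elsewhere"
</code></pre>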

<p>If you want a simple script to backup some directories, and remove old data after a retention of 5 hourly, 2 daily, 2 weekly and 2 monthly backups:
</p>
<pre><code>restic backup -x /home /etc /root /var
restic forget --prune -H 5 -d 2 -w 2 -m 2
</code></pre>
<p>Do not forget to run <code>restic init</code> the first time, to initialize the restic repository.
</p>
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>I really like this backup system as it is cheap, very efficient and provides a fallback in case of a problem with the repository (mistakes happen, you do not always need an attacker to lose data ^_^').
</p>
<p>If you do not want to use S3 backends, you should know that Borg backup and Restic both support an "append-only" mode, which prevents an attacker from doing damage or even reading the backup.  However, I always found it hard to use, and you need another system to do the prune/cleanup on a regular basis.
</p>
<h1 id="_Going_further">5. Going further <a href="#_Going_further">§</a></h1>
<p>This approach could work on any backend supporting snapshots, like BTRFS or ZFS.  If you can recover the backup repository to a previous point in time, you will be able to access the last working state of the backup repository.
</p>
<p>You could also do a backup of the backup repository, on the backend side, but you would waste a lot of disk space.
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-10-19-secure-backups-with-s3.html</guid>
  <link>https://dataswamp.org/~solene/2024-10-19-secure-backups-with-s3.html</link>
  <pubDate>Tue, 22 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Snap integration in Qubes OS templates</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Setup_on_Fedora"> Setup on Fedora</a>
      <ul>
      <li>2.1. <a href="#_Snap_installation"> Snap installation</a>
</li>
      <li>2.2. <a href="#_Proxy_configuration"> Proxy configuration</a>
</li>
      <li>2.3. <a href="#_Run_updates_on_template_update"> Run updates on template update</a>
</li>
      <li>2.4. <a href="#_Qube_settings_menu_integration"> Qube settings menu integration</a>
</li>
      <li>2.5. <a href="#_Snap_store_GUI"> Snap store GUI</a>
    </li></ul></li>
    <li>3. <a href="#_Debian"> Debian</a>
</li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>The snap package format is interesting: while it used to have a bad reputation, I wanted to form my own opinion about it.  After reading its design and usage documentation, I find it quite good, and I have had a good experience using some programs installed with snap.
</p>
<p><a href='https://snapcraft.io/'>Snapcraft official website (store / documentation)</a></p>
<p>Snap programs can be packaged as either &quot;strict&quot; or &quot;classic&quot;: a strict snap has some confinement at work, which can be inspected on an installed snap using <code>snap connections $appname</code>, while a &quot;classic&quot; snap has no sandboxing at all.  Snap programs are completely decoupled from the host operating system where snap is running, so you can have old or new versions of a snap-packaged program without having to handle shared library versions.
</p>
<p>The following setup explains how to install snap programs in a template and run them from AppVMs, not how to install snap programs in AppVMs as a user; if you need the latter, please use the Qubes OS guide linked below.
</p>
<p>The Qubes OS documentation explains how to set up snap in a template, but with a helper allowing AppVMs to install snap programs in the user directory.
</p>
<p><a href='https://www.qubes-os.org/doc/how-to-install-software/#installing-snap-packages'>Qubes OS official documentation: install snap packages in AppVMs</a></p>
<p>In a previous blog post, I explained how to configure a Qubes OS template to install flatpak programs in it, and how to integrate it to the template.
</p>
<p><a href='https://dataswamp.org/~solene/2023-09-15-flatpak-on-qubesos.html'>Previous blog post: Installing flatpak programs in a Qubes OS template</a></p>
<h1 id="_Setup_on_Fedora">2. Setup on Fedora <a href="#_Setup_on_Fedora">§</a></h1>
<p>All commands are meant to be run as root.
</p>
<h2 id="_Snap_installation">2.1. Snap installation <a href="#_Snap_installation">§</a></h2>
<p><a href='https://snapcraft.io/docs/installing-snap-on-fedora'>Snapcraft official documentation: Installing snap on Fedora</a></p>
<p>Installing snap is easy, run the following command:
</p>
<pre><code>dnf install snapd
</code></pre>
<p>To allow "classic" snaps to work, you need to run the following command:
</p>
<pre><code>ln -s /var/lib/snapd/snap /snap
</code></pre>
<h2 id="_Proxy_configuration">2.2. Proxy configuration <a href="#_Proxy_configuration">§</a></h2>
<p>Now, you have to configure snap to use the template's HTTP proxy.  This command can take some time, because snap times out while trying to use the network when invoked...
</p>
<pre><code>snap set system proxy.http=&quot;http://127.0.0.1:8082/&quot;
snap set system proxy.https=&quot;http://127.0.0.1:8082/&quot;
</code></pre>
<h2 id="_Run_updates_on_template_update">2.3. Run updates on template update <a href="#_Run_updates_on_template_update">§</a></h2>
<p>You need to prevent snap from searching for updates on its own as you will run updates when the template is updated:
</p>
<pre><code>snap refresh --hold
</code></pre>
<p>To automatically update snap programs when the template is updating (or doing any dnf operation), create the file <code>/etc/qubes/post-install.d/05-snap-update.sh</code> with the following content and make it executable:
</p>
<pre><code>#!/bin/sh

if [ &quot;$(qubesdb-read /type)&quot; = &quot;TemplateVM&quot; ]
then
    snap refresh
fi
</code></pre>
<h2 id="_Qube_settings_menu_integration">2.4. Qube settings menu integration <a href="#_Qube_settings_menu_integration">§</a></h2>
<p>To add the menu entry of each snap program in the qube settings when you install/remove snaps, create the file <code>/usr/local/sbin/sync-snap.sh</code> with the following content and make it executable:
</p>
<pre><code>#!/bin/sh

# when a desktop file is created/removed
# - links snap .desktop in /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0

inotifywait -m -r \
-e create,delete,close_write \
/var/lib/snapd/desktop/applications/ |
while  IFS=&#x27;:&#x27; read event
do
    find /var/lib/snapd/desktop/applications/ -type l -name &quot;*.desktop&quot; | while read line
    do
        ln -sf &quot;$line&quot; /usr/share/applications/
    done
    find /usr/share/applications/ -xtype l -delete
    /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done
</code></pre>
<p>Install the package <code>inotify-tools</code> to make the script above work, and add this line to <code>/rw/config/rc.local</code> to run it at boot:
</p>
<pre><code>/usr/local/bin/sync-snap.sh &amp;
</code></pre>
<p>You can run the script now with <code>/usr/local/sbin/sync-snap.sh &amp;</code> if you plan to install snap programs right away.
</p>
<h2 id="_Snap_store_GUI">2.5. Snap store GUI <a href="#_Snap_store_GUI">§</a></h2>
<p>If you want to browse and install snap programs using a nice interface, you can install the snap store.
</p>
<pre><code>snap install snap-store
</code></pre>
<p>You can run the store with <code>snap run snap-store</code> or configure your template settings to add the snap store into the applications list, and run it from your Qubes OS menu.
</p>
<h1 id="_Debian">3. Debian <a href="#_Debian">§</a></h1>
<p>The setup on Debian is pretty similar; you can reuse the Fedora guide above, replacing <code>dnf</code> with <code>apt</code>.
</p>
<p><a href='https://snapcraft.io/docs/installing-snap-on-debian'>Snapcraft official documentation: Installing snap on Debian</a></p>
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>More options to install programs are always welcome, especially when they come with features like quotas or sandboxing.  Qubes OS gives you the flexibility to use multiple templates in parallel, and a new source of packages can be useful to some users.
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-10-16-snap-on-qubesos.html</guid>
  <link>https://dataswamp.org/~solene/2024-10-16-snap-on-qubesos.html</link>
  <pubDate>Sat, 19 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Asynchronous secure file transfer with nncp</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Explanations"> Explanations</a>
</li>
    <li>3. <a href="#_Real_world_example:_Syncthing_gateway"> Real world example: Syncthing gateway</a>
</li>
    <li>4. <a href="#_Setup"> Setup</a>
      <ul>
      <li>4.1. <a href="#_Configuration_file_and_private_keys"> Configuration file and private keys</a>
</li>
      <li>4.2. <a href="#_Public_keys"> Public keys</a>
</li>
      <li>4.3. <a href="#_Import_public_keys"> Import public keys</a>
    </li></ul></li>
    <li>5. <a href="#_Usage"> Usage</a>
</li>
    <li>6. <a href="#_Workflow_(how_to_use)"> Workflow (how to use)</a>
      <ul>
      <li>6.1. <a href="#_Sending_files"> Sending files</a>
</li>
      <li>6.2. <a href="#_Sync_and_file_unpacking"> Sync and file unpacking</a>
    </li></ul></li>
    <li>7. <a href="#_Explanations_about_ACK"> Explanations about ACK</a>
</li>
    <li>8. <a href="#_Conclusion"> Conclusion</a>
</li>
    <li>9. <a href="#_Going_further"> Going further</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>nncp (node to node copy) is a software suite to securely exchange data between peers.  It is command line only, written in Go, and compiles on Linux and BSD systems (although among the BSDs it is only packaged for FreeBSD).
</p>
<p>The website does a better job than I can at presenting the numerous features, but I will do my best to explain what you can do with it and how to use it.
</p>
<p><a href='http://www.nncpgo.org/'>nncp official project website</a></p>
<h1 id="_Explanations">2. Explanations <a href="#_Explanations">§</a></h1>
<p>nncp is a suite of tools to asynchronously exchange data between peers, using zero knowledge encryption.  Once peers have exchanged their public keys, they are able to encrypt data to send to each other.  This is nothing really new, to be honest, but there is a twist.
</p>
<ul>

  <li>a peer can directly connect to another using TCP, you can even configure different addresses like a tor onion or I2P host and use the one you want</li>
  <li>a peer can connect to another using ssh</li>
  <li>a peer can generate plain files that will be carried over USB, network storage, synchronization software, whatever, to be consumed by a peer.  Files can be split into chunks of arbitrary size in order to prevent anyone snooping from figuring out how many files are exchanged or their names (hence zero knowledge).</li>
  <li>a peer can generate data to burn on a CD or tape (it is working as a stream of data instead of plain files)</li>
  <li>a peer can be reachable through another relay peer</li>
  <li>when a peer receives files, nncp generates ACK files (acknowledgement) that will tell you they correctly received it</li>
  <li>a peer can request files and/or trigger pre-configured commands you expose to this peer</li>
  <li>a peer can send emails with nncp (requires a specific setup on the email server)</li>
  <li>data transfer can be interrupted and resumed</li>
</ul>

<p>What is cool with nncp is that files you receive are unpacked in a given directory and their integrity is verified.  This is sometimes more practical than a network share, in which you are never sure when you can move / rename / modify / delete the file that was transferred to you.
</p>
<p>I identified a few "realistic" use cases with nncp:
</p>
<ul>

  <li>exchange files between air gap environments (I tried to exchange files over sound or QR codes, I found no reliable open source solution)</li>
  <li>secure file exchange over physical medium with delivery notification (the medium needs to do a round-trip for the notification)</li>
  <li>start a torrent download remotely, prepare the file to send back once downloaded, retrieve the file at your own pace</li>
  <li>reliable data transfer over poor connections (although I am not sure if it beats kermit at this task :D )</li>
  <li>"simple" file exchange between computers / people over network</li>
</ul>

<p>This leaves a lot of room for other imaginative use cases.
</p>
<h1 id="_Real_world_example:_Syncthing_gateway">3. Real world example: Syncthing gateway <a href="#_Real_world_example:_Syncthing_gateway">§</a></h1>
<p>My preferred workflow with nncp that I am currently using is a group of three syncthing servers.
</p>
<p>Each syncthing server is running on a different computer, the location does not really matter.  There is a single share between these syncthing instances.
</p>
<p>The servers where syncthing is running expose incoming and outgoing directories over an NFS / SMB share, with a directory named after each peer in both.  Depositing a file in the "outgoing" directory of a peer makes nncp prepare the file for this peer and put it into the syncthing share for synchronization; the file is consumed in the process.
</p>
<p>In the same vein, new files are unpacked into the incoming directory of the emitting peer on the receiving server running syncthing.
</p>
<p>Why is it cool?  You can just drop a file in the directory of the peer you want to send it to: it disappears locally and magically appears on the remote side.  If something goes wrong, thanks to the ACKs, you can verify whether the file was delivered and unpacked.  With three shares, you can almost always have two of them connected at the same time.
</p>
<p>It is a pretty good file deposit that requires no knowledge to use.
</p>
<p>This could be implemented with pure syncthing, however you would have to:
</p>
<ul>

  <li>for each peer, configure a one-way directory share in syncthing for each other peer to upload data to</li>
  <li>for each peer, configure a one-way directory share in syncthing for each other peer to receive data from</li>
  <li>for each peer, configure an encrypted share to relay all one way share from other peers</li>
</ul>

<p>This does not scale well.
</p>
<p>Side note: I am using syncthing because it is fun and requires no infrastructure.  But actually, a WebDAV filesystem, a Nextcloud drive or anything that shares data over the network would work just fine.
</p>
<h1 id="_Setup">4. Setup <a href="#_Setup">§</a></h1>
<h2 id="_Configuration_file_and_private_keys">4.1. Configuration file and private keys <a href="#_Configuration_file_and_private_keys">§</a></h2>
<p>On each peer, you have to generate a configuration file with its private keys.  The default path for the configuration file is <code>/etc/nncp.hjson</code>, but nothing prevents you from storing this file anywhere; in that case, you will have to use the parameter <code>-cfg /path/to/config</code>.
</p>
<p>Generate the file like this:
</p>
<pre><code>nncp-cfgnew &gt; /etc/nncp.hjson
</code></pre>
<p>The file contains comments; this is helpful if you want to see how the file is structured and which options exist.  Never share the private keys in this file!
</p>
<p>I recommend checking the spool and log paths, and deciding which user should use nncp.  For instance, you can use <code>/var/spool/nncp</code> to store nncp data (waiting to be delivered or unpacked) and the log file, and make your user the owner of this directory.
</p>
<h2 id="_Public_keys">4.2. Public keys <a href="#_Public_keys">§</a></h2>
<p>Now, generate the public keys (they are just derived from the private keys generated earlier) to share with your peers.  There is a command for this: it reads the private keys and outputs the public keys in a format ready to paste into the nncp.hjson file of your recipients.
</p>
<pre><code>nncp-cfgmin &gt; my-peer-name.pub
</code></pre>
<p>You can share the generated file with anyone: this will allow them to send you files.  The peer name of your system is "self"; you can rename it, it is just an identifier.
</p>
<h2 id="_Import_public_keys">4.3. Import public keys <a href="#_Import_public_keys">§</a></h2>
<p>To import public keys, you just need to add the content generated by the command <code>nncp-cfgmin</code> of a peer to your nncp configuration file.
</p>
<p>Just copy / paste the content into the <code>neigh</code> structure of the configuration file, and make sure to rename &quot;self&quot; to the identifier you want to give to this peer.
</p>
<p>If you want to receive data from this peer, make sure to add an attribute line <code>incoming: &quot;/path/to/incoming/data&quot;</code> for that peer, otherwise you will not be able to unpack received files.
</p>
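<p>As a sketch, an imported peer entry in the <code>neigh</code> structure could look like this (the identifier, keys and paths below are placeholders, pasted from the peer's <code>nncp-cfgmin</code> output):
</p>
<pre><code>neigh: {
  alice: {
    id: "ALICE_ID_FROM_CFGMIN"
    exchpub: "..."
    signpub: "..."
    noisepub: "..."
    incoming: "/var/spool/nncp/incoming/alice"
  }
}
</code></pre>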
<h1 id="_Usage">5. Usage <a href="#_Usage">§</a></h1>
<p>Now that your peers have exchanged keys, they are able to send data to each other.  nncp is a collection of tools, let's see the most common ones and what they do:
</p>
<ul>

  <li>nncp-file: add a file in the spool to deliver to a peer</li>
  <li>nncp-toss: unpack incoming data (files, commands, file request, emails) and generate ack</li>
  <li>nncp-reass: reassemble files that were split in smaller parts</li>
  <li>nncp-exec: trigger a pre-configured command on the remote peer, stdin data will be passed as the command parameters.  Let&#x27;s say a peer offers a &quot;wget&quot; service, you can use <code>echo &quot;https://some-domain/uri/&quot; | nncp-exec peername wget</code> to trigger a remote wget.</li>
</ul>

<p>If you use the client / server model over TCP, you will also use:
</p>
<ul>

  <li>nncp-daemon: the daemon waiting for connections</li>
  <li>nncp-caller: a daemon occasionally triggering client connections (it works like a crontab)</li>
  <li>nncp-call: trigger a client connection to a peer</li>
</ul>
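<p>As a sketch of the TCP model (the peer name is hypothetical, 5400 is the port nncp usually uses):
</p>

```
# on the reachable peer, listen for incoming connections
nncp-daemon -bind "[::]:5400"

# on the other peer, trigger a one-shot connection
nncp-call mybuddy
```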

<p>If you use asynchronous file transfers, you will use:
</p>
<ul>

  <li>nncp-xfer: writes packets to / consumes packets from a plain directory for asynchronous transfer</li>
</ul>

<h1 id="_Workflow_(how_to_use)">6. Workflow (how to use) <a href="#_Workflow_(how_to_use)">§</a></h1>
<h2 id="_Sending_files">6.1. Sending files <a href="#_Sending_files">§</a></h2>
<p>For sending files, just use <code>nncp-file file-path peername:</code>.  The original file name will be used when unpacked, but you can also specify the name the file should get once unpacked.
</p>
<p>A directory can be passed as a parameter instead of a file; it will automatically be packed into a .tar file for delivery.
</p>
<p>Finally, you can send a stream of data by feeding stdin to nncp-file, but you have to give a name to the resulting file.
</p>
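<p>The different ways of sending data described above look like this (peer and file names are hypothetical):
</p>

```
# send a file, keeping its name once unpacked
nncp-file notes.txt mybuddy:

# send a file under a different name once unpacked
nncp-file notes.txt mybuddy:todo.txt

# send a directory, automatically packed into a .tar file
nncp-file Documents/ mybuddy:

# send a stream from stdin, the destination name is mandatory
tar cz Documents | nncp-file - mybuddy:documents.tar.gz
```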
<h2 id="_Sync_and_file_unpacking">6.2. Sync and file unpacking <a href="#_Sync_and_file_unpacking">§</a></h2>
<p>This was not really clear from the documentation, so here is how to best use nncp when exchanging data using plain files; the destination is <code>/mnt/nncp</code> in my examples (it can be an external drive, a syncthing share, a NFS mount...):
</p>
<p>When you want to sync, always use this scheme:
</p>
<ol>

  <li><code>nncp-xfer -rx /mnt/nncp</code></li>
  <li><code>nncp-toss -gen-ack</code></li>
  <li><code>nncp-xfer -keep -tx -mkdir /mnt/nncp</code></li>
  <li><code>nncp-rm -all -ack</code></li>
</ol>

<p>This receives files using <code>nncp-xfer -rx</code>; the files are stored in the nncp spool directory.  Then, with <code>nncp-toss -gen-ack</code>, the files are unpacked into the &quot;incoming&quot; directory of each peer who sent files, and ACKs are generated (older versions of <code>nncp-toss</code> do not handle ACKs: you need to generate them before and remove them after tx, with <code>nncp-ack -all 4&gt;acks</code> and <code>nncp-rm -all -pkt &lt; acks</code>).
</p>
<p><code>nncp-xfer -tx</code> will put in the directory the data you want to send to peers, and also the ACK files generated by the rx which happened before.  The <code>-keep</code> flag is crucial here if you want to make use of ACKs: with <code>-keep</code>, the sent data are kept in the spool until you receive the ACKs for them, otherwise the data are removed from the spool and cannot be retransmitted if the files were not received.  Finally, <code>nncp-rm</code> will delete all ACK files so you will not transmit them again.
</p>
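<p>The four steps above can be wrapped in a small script (the mount point is the same <code>/mnt/nncp</code> as in the examples):
</p>

```
#!/bin/sh
# exchange nncp packets through a shared drive, abort on any failure
set -e

nncp-xfer -rx /mnt/nncp               # receive packets into the spool
nncp-toss -gen-ack                    # unpack them and generate ACKs
nncp-xfer -keep -tx -mkdir /mnt/nncp  # send outgoing packets and ACKs
nncp-rm -all -ack                     # delete ACKs so they are not sent again
```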
<h1 id="_Explanations_about_ACK">7. Explanations about ACK <a href="#_Explanations_about_ACK">§</a></h1>
<p>From my experience and documentation reading, there are three cases with the spool and ACK:
</p>
<ul>

  <li>the shared drive is missing the files you sent (which are still in the spool) and you received no ACK: the next time you run <code>nncp-xfer</code>, the files will be transmitted again</li>
  <li>when you receive ACK files for files in spool, they are deleted from the spool</li>
  <li>when you do not use <code>-keep</code> when sending files with <code>nncp-xfer</code>, the files are not kept in the spool, so you will not be able to know what to retransmit if ACKs are missing</li>
</ul>

<p>ACKs do not clean up by themselves, you need to use <code>nncp-rm</code>.  It took me a while to figure this out, my nodes were sending ACKs to each other repeatedly.
</p>
<h1 id="_Conclusion">8. Conclusion <a href="#_Conclusion">§</a></h1>
<p>I really like nncp as it allows me to securely transfer files between my computers without having to care if they are online.  Rsync is not always possible because both the sender and receiver need to be up at the same time (and reachable correctly).
</p>
<p>The way files are delivered is also practical for me: as I shared above, files are unpacked into a directory defined per peer, instead of me having to remember why I moved something into a shared drive.  This removes the doubt about files sitting in a shared drive: why is it there? Why did I put it there? What was its destination?
</p>
<p>I played with various S3 storage to exchange nncp data, but this is for another blog post :-)
</p>
<h1 id="_Going_further">9. Going further <a href="#_Going_further">§</a></h1>
<p>There are more features in nncp, I did not play with all of them.
</p>
<p>You can define &quot;areas&quot; in parallel to peers, use email notifications to get a confirmation when a remote receives data from you, request remote files, etc.  It is all in the documentation.
</p>
<p>I have the idea of using nncp on an SMTP server to store encrypted incoming emails until I retrieve them (I am still working on improving the security of email storage), stay tuned :)
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-10-04-secure-file-transfer-with-nncp.html</guid>
  <link>https://dataswamp.org/~solene/2024-10-04-secure-file-transfer-with-nncp.html</link>
  <pubDate>Sun, 06 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>I moved my emails to Proton Mail</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_My_needs"> My needs</a>
</li>
    <li>3. <a href="#_Proton_Mail"> Proton Mail</a>
      <ul>
      <li>3. 1. <a href="#_Benefits"> Benefits</a>
</li>
      <li>3. 2. <a href="#_Interesting_features"> Interesting features</a>
</li>
      <li>3. 3. <a href="#_Shortcomings"> Shortcomings</a>
</li>
      <li>3. 4. <a href="#_Alternatives"> Alternatives</a>
    </li></ul></li>
    <li>4. <a href="#_My_ideal_email_setup"> My ideal email setup</a>
</li>
    <li>5. <a href="#_Conclusion"> Conclusion</a>
      <ul>
      <li>5. 1. <a href="#_Update_2024-09-14"> Update 2024-09-14</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>I recently took a very hard decision: I moved my emails to Proton Mail.
</p>
<p>This is certainly a shock for people who have followed this blog for a long time; it was a shock for me as well!  It was actually pretty difficult to think about this topic objectively, so I would like to explain how I came to this decision.
</p>
<p>I have been self-hosting my own email server since I bought my first domain name, back in 2009.  The server has been migrated multiple times, from one hosting company to another, regularly changing the underlying operating system for fun.  It has been running on: Slackware, NetBSD, FreeBSD, NixOS and Guix.
</p>
<h1 id="_My_needs">2. My needs <a href="#_My_needs">§</a></h1>
<p>First, I need to explain my previous self-hosted setup, and what I do with my emails.
</p>
<p>I have two accounts:
</p>
<ul>

  <li>one for my regular emails, mailing lists, friends, family</li>
  <li>one for my company, to reach clients, send quotes and invoices</li>
</ul>

<p>Having all the emails retrieved locally and not stored on my server would be ideal.  But I am using a lot of devices (most are disposable), and having everything on a single computer would not work for me.
</p>
<p>Due to my emails being stored remotely and containing a lot of private information, I have never been really happy with how emails work at all.  My dovecot server has access to all my emails, unencrypted, and a single password is enough to connect to the server.  Adding a VPN helps to protect dovecot if it is not exposed publicly, but the server could still be compromised by other means.  The OpenBSD smtpd server got critical vulnerabilities patched a few years ago, basically allowing an attacker to get root access; since then I have never been really comfortable with my email setup.
</p>
<p>I have been looking for ways to secure my emails, this is how I came to the setup encrypting incoming emails with GPG.  This is far from ideal, and I stopped using it quickly: it breaks searches, requires a lot of CPU on the server and does not even encrypt all information.
</p>
<p><a href='https://dataswamp.org/~solene/2024-08-14-automatic-emails-gpg-encryption-at-rest.html'>Emails encryption at rest on OpenBSD using dovecot and GPG</a></p>
<p>Someone showed me a dovecot plugin that encrypts emails completely; however, my understanding of this plugin's encryption is that the IMAP client must authenticate the user with a plain text password, which dovecot uses to unlock an asymmetric encryption key.  The security model is questionable: if the dovecot server is compromised, users' passwords are available to the attacker and they can decrypt all the emails.  It would still be better than nothing though, except if the attacker has root access.
</p>
<p><a href='https://0xacab.org/liberate/trees'>Dovecot encryption plugin: TREES</a></p>
<p>One thing I need from my emails is for them to reach the recipients.  My emails were almost always considered as spam by big email providers (GMail, Microsoft); this has been an issue for me for years, but recently it became a real problem for my business.  My email servers were always perfectly configured with everything required to be considered as legit as possible, but it never fully worked.
</p>
<h1 id="_Proton_Mail">3. Proton Mail <a href="#_Proton_Mail">§</a></h1>
<p>Why did I choose Proton Mail over another email provider?  There are a few reasons for it, I evaluated a few providers before deciding.
</p>
<p>Proton Mail is a paid service, and actually this is an argument in itself: I would not trust a good service working for free, this would be too good to be true, so it would be a scam (or making money on my data, who knows).
</p>
<p>They offer zero-knowledge encryption and MFA, which is exactly what I wanted.  Only I should be able to read my email, even if the provider is compromised; adding MFA on top is just perfect because it requires two secrets to access the data.  Their zero-knowledge security could be criticized for a few things, ultimately there is no guarantee they do it as advertised.
</p>
<p>Long story short, when you create your account, Proton Mail generates an encryption key on their server that is password protected with your account password.  When you use the service and log in, the encrypted key is sent to you so all crypto operations happen locally, but there is no way to verify whether they kept your private key unencrypted at the beginning, or whether they modified their web apps to keylog the typed password.  The applications are less vulnerable to the second problem, as it would impact many users and leave evidence.  I do trust them to do things right, although I have no proof.
</p>
<p>I did not choose Proton Mail for end-to-end encryption, I only use GPG occasionally and I could use it before.
</p>
<p>IMAP is possible with Proton Mail when you have a paid account, but you need to use a &quot;connect bridge&quot;: a client that connects to Proton with your credentials and downloads all encrypted emails locally, then exposes an IMAP and SMTP server on localhost with dedicated credentials.  All emails are saved locally and it syncs continuously; it works great, but it is not lightweight.  There is a custom implementation named hydroxide, but it did not work for me.  The bridge does not support CalDAV and CardDAV, which is not great, but not really an issue for me anyway.
</p>
<p><a href='https://github.com/emersion/hydroxide'>GitHub project page: hydroxide</a></p>
<p>Before migrating, I verified that reversibility was possible, aka being able to migrate my emails away from Proton Mail.  In case they stop providing their export tool, I would still have a local copy of all my IMAP emails, which is exactly what I would need to move it somewhere else.
</p>
<p>There are certainly better alternatives than Proton with regard to privacy, but Proton is not _that_ bad on this topic, it is acceptable enough for me.
</p>
<h2 id="_Benefits">3.1. Benefits <a href="#_Benefits">§</a></h2>
<p>Since I moved my emails, I do not have deliverability issues.  Even people on Microsoft received my emails at first try!  Great success for me here.
</p>
<p>The anti-spam is more efficient than my spamd trained with years of spam.
</p>
<p>Multiple factor authentication is required to access my account.
</p>
<h2 id="_Interesting_features">3.2. Interesting features <a href="#_Interesting_features">§</a></h2>
<p>I did not know I would appreciate scheduled email sending, but it's a thing, and I do not need to keep the computer on.
</p>
<p>It is possible to generate aliases (10 or unlimited depending on the subscription).  What's great is that it takes a couple of seconds to generate a unique alias, and replying to an email received on an alias automatically uses this alias as the From address (a webmail feature).  On my server, I had been using a lot of different addresses with a &quot;+&quot; local prefix; it was rarely recognized, so I switched to a dot, but these are not real aliases.  So I started managing smtpd aliases through ansible, and it was really painful to add a new alias every time I needed one.  Did I mention I like this alias feature? :D
</p>
<p>If I want to send an end-to-end encrypted email without GPG, there is an option to protect the content with a password: the email actually sends a link to the recipient, leading to a Proton Mail interface asking for the password to decrypt the content, and allowing that person to reply.  I have no idea if I will ever use it, but at least it is a more user-friendly end-to-end encryption method.  Tuta offers the same feature, but it is their only e2e method.
</p>
<p>Proton offers logs of login attempts on my account, this was surprising.
</p>
<p>There is an onion access to their web services in case you prefer to connect using tor.
</p>
<p>The web interface is open source, one should be able to build it locally to connect to Proton servers, I guess it should work?
</p>
<p><a href='https://github.com/ProtonMail/WebClients'>GitHub project page: ProtonMail webclients</a></p>
<h2 id="_Shortcomings">3.3. Shortcomings <a href="#_Shortcomings">§</a></h2>
<p>Proton Mail cannot be used as an SMTP relay by my servers, except through the open source bridge hydroxide.
</p>
<p>The calendar only works on the website and the smartphone app, and it does not integrate with the phone calendar, although in practice I did not find this to be an issue, everything works fine.  Contact support is weaker on Android: contacts are restrained to the Mail app, and I still have my CardDAV server.
</p>
<p>The web app is the first class citizen, but at least it is good.
</p>
<p>Nothing prevents Proton Mail from catching your incoming and outgoing emails, you need to use end-to-end encryption if you REALLY need to protect your emails from that.
</p>
<p>I was using two accounts, this would require a "duo" subscription on Proton Mail which is more expensive.  I solved this by creating two identities, label and filter rules to separate my two "accounts" (personal and professional) emails.  I actually do not really like that, although it is not really an issue at the moment as one of them is relatively low traffic.
</p>
<p>The price is certainly high, the "Mail plus" plan is 4€ / month (48€ / year) if you subscribe for 12 months, but is limited to 1 domain, 10 aliases and 15 GB of storage.  The "Proton Unlimited" plan is 10€ / month (120€ / year) but comes with the kitchen sink: infinite aliases, 3 domains, 500 GB storage, and access to all Proton services (that you may not need...) like VPN, Drive and Pass.  In comparison, hosting your email service on a cheap server should not cost you more than 70€ / year, and you can self-host a nextcloud / seafile (equivalent to Drive, although it is stored encrypted there), a VPN and a vaultwarden instance (equivalent to Pass) in addition to the emails.
</p>
<p>Emails are limited to 25MB, which is low given that I always configured my own server to allow 100 MB attachments; but that created delivery issues on most recipient servers, so it is not a _real_ issue, although I prefer to decide on this kind of limitation myself.
</p>
<h2 id="_Alternatives">3.4. Alternatives <a href="#_Alternatives">§</a></h2>
<p>I evaluated Tuta too, but for the following reasons I dropped the idea quickly:
</p>
<ul>

  <li>they don't support email import (it has been &quot;coming soon&quot; for years on their website)</li>
  <li>you can only use their app or website</li>
  <li>there is no way to use IMAP</li>
  <li>there is no way to use GPG because their client does not support it, and you cannot connect using SMTP with your own client</li>
</ul>

<p>Their service is cool, but not for me.
</p>
<h1 id="_My_ideal_email_setup">4. My ideal email setup <a href="#_My_ideal_email_setup">§</a></h1>
<p>If I was to self-host again (which may be soon! Who knows), I would do it differently to improve the security:
</p>
<ul>

  <li>one front server with the SMTP server, cheap and disposable</li>
  <li>one server for IMAP</li>
  <li>one server to receive and analyze the logs</li>
</ul>

<p>Only the SMTP server would be publicly available; all ports would be closed on all servers, servers would communicate with each other through a VPN and export their logs to a server that would only be used for forensics and detecting security breaches.
</p>
<p>Such a setup would be an improvement if I were self-hosting my emails again, but the cost and time to operate it are non-negligible.  It is also an ecological nonsense to need 3 servers for a single person's emails.
</p>
<h1 id="_Conclusion">5. Conclusion <a href="#_Conclusion">§</a></h1>
<p>I started this blog post with the fact that the decision was hard, so hard that I was not able to decide up to a day before renewing my email server for one year.  I wanted to give Proton a chance for a month to evaluate it completely, and I have to admit I like the service much more than I expected...
</p>
<p>My Unix hacker heart hurts terribly on this one.  I would like to go back to self-hosting, but I know I cannot reach the level of security I was looking for, simply because email sucks in the first place.  A solution would be to get rid of this huge archive burden I am carrying, but I regularly search information into this archive and I have not found any usable "mail archive system" that could digest everything and serve it locally.
</p>
<h2 id="_Update_2024-09-14">5.1. Update 2024-09-14 <a href="#_Update_2024-09-14">§</a></h2>
<p>I wrote this blog post two days ago, and I cannot stop thinking about this topic since the migration.
</p>
<p>The real problem certainly lies in my use case, not having my emails on the remote server would solve my problems.  I need to figure how to handle it.  Stay tuned :-)
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-09-12-email-selfhost-to-protonmail.html</guid>
  <link>https://dataswamp.org/~solene/2024-09-12-email-selfhost-to-protonmail.html</link>
  <pubDate>Sun, 15 Sep 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Self-hosting at home and privacy</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Public_information"> Public information</a>
      <ul>
      <li>2. 1. <a href="#_Domain_WHOIS"> Domain WHOIS</a>
</li>
      <li>2. 2. <a href="#_TLS_certificates_using_ACME"> TLS certificates using ACME</a>
</li>
      <li>2. 3. <a href="#_Domain_name"> Domain name</a>
</li>
      <li>2. 4. <a href="#_Public_IP"> Public IP</a>
    </li></ul></li>
    <li>3. <a href="#_Mitigations"> Mitigations</a>
</li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>You may self-host services at home, but you need to think about the potential drawbacks for your privacy.
</p>
<p>Let's explore what kind of information could be extracted from self-hosting, especially when you use a domain name.
</p>
<h1 id="_Public_information">2. Public information <a href="#_Public_information">§</a></h1>
<h2 id="_Domain_WHOIS">2.1. Domain WHOIS <a href="#_Domain_WHOIS">§</a></h2>
<p>A domain name must expose some information through WHOIS queries, basically who is the registrar responsible for it, and who could be contacted for technical or administration matters.
</p>
<p>Almost every registrar offers a feature to hide your personal information; you certainly do not want your full name, full address and phone number exposed by a single WHOIS request.
</p>
<p>You can perform a WHOIS request using the link below, a lookup service directly managed by ICANN.
</p>
<p><a href='https://lookup.icann.org/en'>ICANN Lookup</a></p>
<h2 id="_TLS_certificates_using_ACME">2.2. TLS certificates using ACME <a href="#_TLS_certificates_using_ACME">§</a></h2>
<p>If you use TLS certificates for your services with ACME (Let's Encrypt or alternatives), all the domains for which a certificate was issued can easily be queried.
</p>
<p>You can visit the following website, type a domain name, and you will immediately get a list of the existing certificates and the domain names they cover.
</p>
<p><a href='https://crt.sh/'>crt.sh Certificate Search</a></p>
<p>In such a situation, if you planned to keep a domain hidden by not sharing it with anyone, you got it wrong.
</p>
<h2 id="_Domain_name">2.3. Domain name <a href="#_Domain_name">§</a></h2>
<p>If you use a custom domain in your email, it is highly likely that you have some IT knowledge and that you are the only user of your email server.
</p>
<p>Using this statement (IT person + only domain user), someone having access to your email address can quickly search for anything related to your domain and figure it is related to you.
</p>
<h2 id="_Public_IP">2.4. Public IP <a href="#_Public_IP">§</a></h2>
<p>Anywhere you connect, your public IP is known to the remote servers.
</p>
<p>Some bored sysadmin could take a look at the IPs in their logs and check if some public service is running on them; polling for secure services (HTTPS, IMAPS, SMTPS) will immediately give the domain names associated with that IP, then they could search even further.
</p>
<h1 id="_Mitigations">3. Mitigations <a href="#_Mitigations">§</a></h1>
<p>There are not many solutions to prevent this, unfortunately.
</p>
<p>The public IP situation could be mitigated either by keeping the hosting at home while renting a cheap server with a public IP, establishing a VPN between the two and using the server's public IP for your services, or by moving your services to such a remote server.  This is an extra cost, of course.  When possible, you could expose the service as a Tor hidden service or over I2P if it works for your use case; you would not need to rent a server for this.
</p>
<p>The TLS certificate names being public can easily be solved by generating self-signed certificates locally, and dealing with it.  Depending on your services, it may be just fine, but if you have strangers using the services, having to trust the certificate on first use (TOFU) may appear dangerous.  Some software fails to connect to self-signed certificates and does not offer a bypass...
</p>
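<p>As a sketch, generating such a self-signed certificate only takes one openssl command (the host name and validity period here are arbitrary):
</p>

```shell
# generate a key and a self-signed certificate valid two years
openssl req -x509 -newkey rsa:4096 -sha256 -days 730 -nodes \
  -keyout internal.key -out internal.crt \
  -subj "/CN=internal.example.org" \
  -addext "subjectAltName=DNS:internal.example.org"

# display the certificate subject to check the result
openssl x509 -noout -subject -in internal.crt
```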
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>Self-hosting at home can be practical for various reasons: reusing old hardware, better local throughput, high performance for cheap... but you need to be aware of potential privacy issues that could come with it.
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-09-10-self-hosting-at-home-privacy-issues.html</guid>
  <link>https://dataswamp.org/~solene/2024-09-10-self-hosting-at-home-privacy-issues.html</link>
  <pubDate>Thu, 12 Sep 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to use Proton VPN port forwarding</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Feature_explanation"> Feature explanation</a>
</li>
    <li>3. <a href="#_Setup"> Setup</a>
      <ul>
      <li>3. 1. <a href="#_OpenBSD"> OpenBSD</a>
        <ul>
        <li>3. 1. 1. <a href="#_Using_supervisord"> Using supervisord</a>
</li>
        <li>3. 1. 2. <a href="#_Without_supervisord"> Without supervisord</a>
      </li></ul></li>
      <li>3. 2. <a href="#_Linux"> Linux</a>
    </li></ul></li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>If you use Proton VPN with the paid plan, you have access to their port forwarding feature.  It allows you to expose a TCP and/or UDP port of your machine on the public IP of your current VPN connection.
</p>
<p>This can be useful for multiple use cases, let's see how to use it on Linux and OpenBSD.
</p>
<p><a href='https://protonvpn.com/support/port-forwarding-manual-setup/'>Proton VPN documentation: port forwarding setup</a></p>
<p>If you do not have a privacy need with regard to the service you need to expose to the Internet, renting a cheap VPS is a better solution: cheaper price, stable public IP, no weird script for port forwarding, use of standard ports allowed, reverse DNS, etc...
</p>
<h1 id="_Feature_explanation">2. Feature explanation <a href="#_Feature_explanation">§</a></h1>
<p>Proton VPN's port forwarding feature is not really practical, at least not as practical as port forwarding on your local router.  The NAT is done using the NAT-PMP protocol (an alternative to UPnP): you are given a random port number for 60 seconds.  The random port number is the same for TCP and UDP.
</p>
<p><a href='https://en.wikipedia.org/wiki/NAT_Port_Mapping_Protocol'>Wikipedia page about NAT Port Mapping Protocol</a></p>
<p>There is a NAT-PMP client named <code>natpmpc</code> (available almost everywhere as a package) that needs to run in an infinite loop to renew the port lease before it expires.
</p>
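<p>To give an idea, here is how a script can grab the assigned port from the client output with awk (the sample line mimics what my version of <code>natpmpc</code> prints, so it is an assumption):
</p>

```shell
# sample output line from "natpmpc -a 1 0 udp 60"; the real text may differ
sample="Mapped public port 61543 protocol UDP to local port 0 lifetime 60"

# the port number is the 4th field of the "Mapped public" line
PORT=$(printf '%s\n' "$sample" | awk '/Mapped public/ { print $4 }')
echo "$PORT"
```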
<p>This is rather impractical for multiple reasons:
</p>
<ul>

  <li>you get a random port assigned, so you must configure your daemon every time</li>
  <li>the lease renewal script must run continuously</li>
  <li>if something wrong happens (script failure, short network outage) that prevents renewing the lease, you will get a new random port</li>
</ul>

<p>Although it has shortcomings, it is a useful feature that was dropped by other VPN providers because of abuses.
</p>
<h1 id="_Setup">3. Setup <a href="#_Setup">§</a></h1>
<p>Let me share a script I am using on Linux and OpenBSD that does the following:
</p>
<ul>

  <li>get the port number</li>
  <li>reconfigure the daemon using the port forwarding feature</li>
  <li>infinite loop renewing the lease</li>
</ul>

<p>You can run the script from supervisord (a process manager) to restart it upon failure.
</p>
<p><a href='http://supervisord.org/'>Supervisor official project website</a></p>
<p>In the example, the Java daemon I2P will be used to demonstrate the configuration update using sed after being assigned the port number.
</p>
<h2 id="_OpenBSD">3.1. OpenBSD <a href="#_OpenBSD">§</a></h2>
<p>Install the package <code>natpmpd</code> to get the NAT-PMP client.
</p>
<p>Create a script with the following content, and make it executable:
</p>
<pre><code>#!/bin/sh

PORT=$(natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk &#x27;/Mapped public/ { print $4 }&#x27;)

# check if the current port is correct
grep &quot;$PORT&quot; /var/i2p/router.config || /etc/rc.d/i2p stop

# update the port in I2P config
sed -i -E &quot;s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT,&quot; /var/i2p/router.config

# make sure i2p is started (in case it was stopped just before)
/etc/rc.d/i2p start

while true
do
    date # use for debug only
    natpmpc -a 1 0 udp 60 -g 10.2.0.1 &amp;&amp; natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo &quot;error Failure natpmpc $(date)&quot;; break ; }
    sleep 45
done
</code></pre>
<p>The script searches for the port number in the I2P configuration and stops the service if the port is not found.  Then the port lines are modified with sed (in all cases, it does not matter much).  Finally, i2p is started; this only does something in case i2p was stopped just before, otherwise nothing happens.
</p>
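<p>To illustrate what the sed command does, here is a standalone run on a two-line sample (a real <code>router.config</code> contains many more lines):
</p>

```shell
# create a sample excerpt of router.config
cat > router.config.sample <<'EOF'
i2np.udp.port=12345
i2np.udp.internalPort=12345
EOF

# rewrite both port lines with the newly assigned port
PORT=61543
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," router.config.sample

# router.config.sample now contains:
# i2np.udp.port=61543
# i2np.udp.internalPort=61543
cat router.config.sample
```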
<p>Then, in an infinite loop running every 45 seconds, the TCP and UDP port forwardings are renewed.  If something wrong happens, the script exits.
</p>
<h3 id="_Using_supervisord">3.1.1. Using supervisord <a href="#_Using_supervisord">§</a></h3>
<p>If you want to use supervisord to start the script at boot and maintain it running, install the package <code>supervisor</code> and create the file <code>/etc/supervisord.d/nat.ini</code> with the following content:
</p>
<pre><code>[program:natvpn]
command=/etc/supervisord.d/continue_nat.sh ; choose the path of your script
autorestart=unexpected ; when to restart if exited after running (def: unexpected)
</code></pre>
<p>Enable supervisord at boot, start it and verify it started (a configuration error prevents it from starting):
</p>
<pre><code>rcctl enable supervisord
rcctl start supervisord
rcctl check supervisord
</code></pre>
<h3 id="_Without_supervisord">3.1.2. Without supervisord <a href="#_Without_supervisord">§</a></h3>
<p>Open a shell as root and execute the script and keep the terminal opened, or run it in a tmux session.
</p>
<h2 id="_Linux">3.2. Linux <a href="#_Linux">§</a></h2>
<p>The setup is exactly the same as for OpenBSD, just make sure the package providing <code>natpmpc</code> is installed.
</p>
<p>Depending on your distribution, if you want to automate the script running / restart, you can run it from a systemd service with auto restart on failure, or use supervisord as explained above.
</p>
<p>If you use a different network namespace, just make sure to prefix the commands using the VPN with <code>ip netns exec vpn</code>.
</p>
<p>Here is the same example as above but using a network namespace named "vpn" to start i2p service and do the NAT query.
</p>
<pre><code>#!/bin/sh

PORT=$(ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk &#x27;/Mapped public/ { print $4 }&#x27;)

FILE=/var/i2p/.i2p/router.config

grep &quot;$PORT&quot; $FILE || sudo -u i2p /var/i2p/i2prouter stop
sed -i -E &quot;s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT,&quot; $FILE

ip netns exec vpn sudo -u i2p /var/i2p/i2prouter start

while true
do
    date
    ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 &amp;&amp; ip netns exec vpn natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo &quot;error Failure natpmpc $(date)&quot;; break ; }
    sleep 45
done
</code></pre>
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>Proton VPN's port forwarding feature is useful when you need to expose a local network service on a public IP.  Automating it is required to make it work efficiently due to the unusual implementation.
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-08-31-protonvpn-port-forwarding.html</guid>
  <link>https://dataswamp.org/~solene/2024-08-31-protonvpn-port-forwarding.html</link>
  <pubDate>Tue, 03 Sep 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Emails encryption at rest on OpenBSD using dovecot and GPG</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Threat_model"> Threat model</a>
</li>
    <li>3. <a href="#_Setup"> Setup</a>
      <ul>
      <li>3. 1. <a href="#_GPGit"> GPGit</a>
</li>
      <li>3. 2. <a href="#_Sieve"> Sieve</a>
</li>
      <li>3. 3. <a href="#_Dovecot"> Dovecot</a>
</li>
      <li>3. 4. <a href="#_User_GPG_setup"> User GPG setup</a>
</li>
      <li>3. 5. <a href="#_Anti-spam_service"> Anti-spam service</a>
    </li></ul></li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>In this blog post, you will learn how to configure your email server to encrypt all incoming emails using users' GPG public keys (when they exist).  This will prevent anyone from reading the emails, except the owner of the corresponding GPG private key.  This is known as "encryption at rest".
</p>
<p>This setup, while effective, has limitations.  Headers will not be encrypted, search in emails will break as the content is encrypted, and you obviously need to have the GPG private key available when you want to read your emails (if you read emails on your smartphone, you need to decide if you really want your GPG private key there).
</p>
<p>Encryption is CPU intensive (and memory intensive too for large emails).  I tried it on an openbsd.amsterdam virtual machine, and it was working fine until someone sent me emails with 20MB attachments.  On a bare-metal server, there is absolutely no issue.  Maybe GPG makes use of hardware-accelerated cryptography, which is not available in virtual machines hosted under the OpenBSD hypervisor vmm.
</p>
<p>This is not an original idea: Etienne Perot wrote about a similar setup in 2012 and enhanced the <code>gpgit</code> script we will use in this setup.  While his blog post is obsolete by now because of all the changes that happened in Dovecot, the core idea remains the same.  Thank you very much Etienne for your work!
</p>
<p><a href='https://perot.me/encrypt-specific-incoming-emails-using-dovecot-and-sieve'>Etienne Perot: Encrypt specific incoming emails using Dovecot and Sieve</a></p>
<p><a href='https://github.com/EtiennePerot/gpgit'>gpgit GitHub project page</a></p>
<p><a href='https://tildegit.org/solene/gpgit'>gpgit mirror on tildegit.org</a></p>
<p>This guide is an extension of my recent email server setup guide:
</p>
<p><a href='https://dataswamp.org/~solene/2024-07-24-openbsd-email-server-setup.html'>2024-07-24 Full-featured email server running OpenBSD</a></p>
<h1 id="_Threat_model">2. Threat model <a href="#_Threat_model">§</a></h1>
<p>This setup is useful to protect your emails stored on the IMAP server. If the server or your IMAP account are compromised, the content of your emails will be encrypted and unusable.
</p>
<p>You must be aware that email headers are not encrypted: recipients / senders / date / subject will remain in clear text even after encryption.  If you already use end-to-end encryption with your recipients, there is no benefit to using this setup.
</p>
<p>An alternative is to not leave any emails on the IMAP server, although they could still be recovered, as they are written to disk until you retrieve them.
</p>
<p>Personally, I keep many emails on my server, and I am afraid that a 0-day vulnerability could be exploited on my email server, allowing an attacker to retrieve the content of all my emails.  OpenSMTPD had critical vulnerabilities a few years ago, including a remote code execution, so it is a realistic threat.
</p>
<p>I wrote a privacy guide (for a client) explaining all the information shared through emails, with possible mitigations and their limitations.
</p>
<p><a href='https://www.ivpn.net/privacy-guides/email-and-privacy/'>IVPN: The Technical Realities of Email Privacy</a></p>
<h1 id="_Setup">3. Setup <a href="#_Setup">§</a></h1>
<p>This setup makes use of the program <code>gpgit</code>, a Perl script that encrypts emails received on its standard input using GPG; this is a complicated task because the email structure can be very intricate.  I have not been able to find any alternative to this script.  The gpgit repository also contains a script to encrypt an existing mailbox (maildir format); it must be run on the server, and I have not tested it yet.
</p>
<p>You will configure a specific sieve rule which is &quot;global&quot; (not user-defined) and processes all emails before any other sieve filter.  This sieve script will trigger a <code>filter</code> (a program allowed to modify the email) and pass the email on the standard input of the shell script <code>encrypt.sh</code>, which in turn runs <code>gpgit</code> with the matching username after verifying that a gnupg directory exists for them.  If there is no gnupg directory, the email is not encrypted; this allows multiple users on the email server without enforcing encryption for everyone.
</p>
<p>If a user has multiple addresses, the system account name is used as the local part of the GPG key address.
</p>
<h2 id="_GPGit">3.1. GPGit <a href="#_GPGit">§</a></h2>
<p>Some packages are required for gpgit to work, they are all available on OpenBSD:
</p>
<pre><code>pkg_add p5-Mail-GnuPG p5-List-MoreUtils
</code></pre>
<p>Download the gpgit git repository and copy its <code>gpgit</code> script into <code>/usr/local/bin/</code> as an executable:
</p>
<pre><code>cd /tmp/
git clone https://github.com/EtiennePerot/gpgit
cd gpgit
install -o root -g wheel -m 555 gpgit /usr/local/bin/
</code></pre>
<h2 id="_Sieve">3.2. Sieve <a href="#_Sieve">§</a></h2>
<p>All the following paths will be relative to the directory <code>/usr/local/lib/dovecot/sieve/</code>, you can <code>cd</code> into it now.
</p>
<p>Create the file <code>encrypt.sh</code> with this content, replace the variable <code>DOMAIN</code> with the domain configured in the GPG key:
</p>
<pre><code>#!/bin/sh

DOMAIN=&quot;puffy.cafe&quot;

NOW=$(date +%s)
DATA=&quot;$(cat)&quot;

if test -d ~/.gnupg
then
    echo &quot;$DATA&quot; | /usr/local/bin/gpgit &quot;${USER}@${DOMAIN}&quot;
    NOW2=$(date +%s)
    echo &quot;Email encryption for user ${USER}: $(( NOW2 - NOW )) seconds&quot; | logger -p mail.info
else
    echo &quot;$DATA&quot;
    echo &quot;Email encryption for user for ${USER} none&quot; | logger -p mail.info
fi
</code></pre>
<p>Make the script executable with <code>chmod +x encrypt.sh</code>.  This script will add a new line to your mail logs every time an email is processed, including the username and, in case of encryption, the time it took.  You could extend the script to discard the <code>Subject</code> header from the email if you want to hide it; I do not provide the implementation, as I expect this task to be trickier than it looks if you want to handle all corner cases.
</p>
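<p>Since the script logs one line per processed email, the mail log can be used to keep an eye on encryption times.  The following sketch parses a sample log (made up for illustration; on OpenBSD the real file is <code>/var/log/maillog</code>) and prints the slowest encryption observed:
</p>

```shell
# Sample log lines mimicking the output of encrypt.sh above (illustrative)
cat > /tmp/maillog.sample <<'EOF'
Aug 19 10:00:01 mail solene: Email encryption for user solene: 2 seconds
Aug 19 10:05:42 mail solene: Email encryption for user solene: 187 seconds
Aug 19 10:06:03 mail solene: Email encryption for user for john none
EOF

# Keep only the lines ending in "seconds", extract the duration
# (second-to-last field) and print the largest value
awk '/seconds$/ { print $(NF-1) }' /tmp/maillog.sample | sort -n | tail -1
```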
<p>Create the file <code>global.sieve</code> with the content:
</p>
<pre><code>require [&quot;vnd.dovecot.filter&quot;];
filter &quot;encrypt.sh&quot;;
</code></pre>
<p>Compile the sieve rules with <code>sievec global.sieve</code>.
</p>
<h2 id="_Dovecot">3.3. Dovecot <a href="#_Dovecot">§</a></h2>
<p>Edit the file <code>/etc/dovecot/conf.d/90-plugin.conf</code> to add the following code within the <code>plugin</code> block:
</p>
<pre><code>  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment +vnd.dovecot.filter
  sieve_before = /usr/local/lib/dovecot/sieve/global.sieve
  sieve_filter_exec_timeout = 200s
</code></pre>
<p>You may have <code>sieve_global_extensions</code> already set, in that case update its value.
</p>
<p>The variable <code>sieve_filter_exec_timeout</code> allows the script <code>encrypt.sh</code> to run for up to 200 seconds before being stopped; you should adapt the value to your system.  I came up with 200 seconds to be able to encrypt emails with 20MB attachments on an openbsd.amsterdam virtual machine.  On a bare-metal server with a Ryzen 5 CPU, the same email takes less than one second.
</p>
<p>The full file should look like the following (in case you followed my previous email guide):
</p>
<pre><code>##
## Plugin settings
##

# All wanted plugins must be listed in mail_plugins setting before any of the
# settings take effect. See &lt;doc/wiki/Plugins.txt&gt; for list of plugins and
# their configuration. Note that %variable expansion is done for all values.

plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms

  # From elsewhere to Spam folder
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve

  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve

  # for GPG encryption
  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment +vnd.dovecot.filter
  sieve_before = /usr/local/lib/dovecot/sieve/global.sieve
  sieve_filter_exec_timeout = 200s
}
</code></pre>
<p>Open the file <code>/etc/dovecot/conf.d/10-master.conf</code>, uncomment the variable <code>default_vsz_limit</code> and set its value to <code>1024M</code>. This is required because GPG uses a lot of memory; without it, the process will be killed and the email lost.  I found 1024M to work with attachments up to 45 MB, however you should raise this value if you plan to receive bigger attachments.
</p>
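<p>The relevant line in <code>10-master.conf</code> should end up looking like this:
</p>
<pre><code>default_vsz_limit = 1024M
</code></pre>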
<p>Restart dovecot to apply the changes: <code>rcctl restart dovecot</code>.
</p>
<h2 id="_User_GPG_setup">3.4. User GPG setup <a href="#_User_GPG_setup">§</a></h2>
<p>You need to create a GPG keyring for each user you want to use encryption for; the simplest method is to set up a passwordless keyring and import your public key:
</p>
<pre><code>$ gpg --quick-generate-key --passphrase &#x27;&#x27; --batch &quot;$USER&quot;
$ gpg --import public-key-file.asc
$ gpg --edit-key FINGERPRINT_HERE
gpg&gt; sign
[....]
gpg&gt; save
</code></pre>
<p>If you want to disable GPG encryption for the user, remove the directory <code>~/.gnupg</code>.
</p>
<h2 id="_Anti-spam_service">3.5. Anti-spam service <a href="#_Anti-spam_service">§</a></h2>
<p>If you use a spam filter such as rspamd or spamassassin relying on a Bayes filter, it will only work if it processes the emails before they arrive at dovecot.  This is the case in my email setup, as rspamd is an opensmtpd filter and processes the email before it is delivered to Dovecot.
</p>
<p>Such a service can have privacy issues, especially if you use encryption.  A Bayes filter works by splitting an email's content into tokens (not exactly words, but almost) and looking for patterns using these tokens; basically, each email is split and stored in the anti-spam local database in small parts.  I am not sure one could recreate the emails based on the tokens, but if an attacker is able to access the token list, they may get some insight into your email content.  If this is part of your threat model, disable your anti-spam Bayes filter.
</p>
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>This setup is quite helpful if you want to protect all your emails at rest.  Full disk encryption on the server does not prevent anyone able to connect over SSH (as root or the email user) from reading the emails; even file recovery is possible while the volume is unlocked (not on the raw disk, but on the software encrypted volume).  This is where encryption at rest is beneficial.
</p>
<p>I know from experience that it is complicated to use end-to-end encryption with tech-savvy users, and that it is even unthinkable with regular users.  This is a first step if you need this kind of security (see the threat model section), but you need to remember that a copy of all your emails almost certainly exists on the servers used by the people you exchange emails with.
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-08-14-automatic-emails-gpg-encryption-at-rest.html</guid>
  <link>https://dataswamp.org/~solene/2024-08-14-automatic-emails-gpg-encryption-at-rest.html</link>
  <pubDate>Mon, 19 Aug 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Using Firefox remote debugging feature</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Setup"> Setup</a>
</li>
    <li>3. <a href="#_Remote_connection"> Remote connection</a>
</li>
    <li>4. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>Firefox has an interesting feature for developers: the ability to connect its developer tools to a remote Firefox instance.  This can be really interesting in the case of a remote kiosk display, for instance.
</p>
<p>The remote debugging does not provide a display of the remote instance, but it gives you access to the developer tools for the tabs opened on it.
</p>
<h1 id="_Setup">2. Setup <a href="#_Setup">§</a></h1>
<p>The remote Firefox you want to connect to must be started with the command line parameter <code>--start-debugger-server</code>.  This will make it listen on TCP port 6000 on 127.0.0.1.  Be careful: there is another option named <code>--remote-debugging-port</code> which is not what you want here, but the names are confusing (trust me, I wasted too much time because of this).
</p>
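<p>In practice, the whole command is just:
</p>
<pre><code>firefox --start-debugger-server
</code></pre>
<p>The option also accepts an optional port number if you prefer something other than the default 6000.
</p>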
<p>Before starting Firefox, a few knobs must be modified in its configuration.  Either search for the options in <code>about:config</code> or create a <code>user.js</code> file in the Firefox profile directory with the following content:
</p>
<pre><code>user_pref(&quot;devtools.chrome.enabled&quot;, true);
user_pref(&quot;devtools.debugger.remote-enabled&quot;, true);
user_pref(&quot;devtools.debugger.prompt-connection&quot;, false);
</code></pre>
<p>This enables remote management and removes the prompt shown upon each connection; while that prompt is a good safety measure, it is not practical for remote debugging.
</p>
<p>When you start Firefox, the URL input bar should have a red background.
</p>
<h1 id="_Remote_connection">3. Remote connection <a href="#_Remote_connection">§</a></h1>
<p>Now, you need to make a SSH tunnel to that remote host where Firefox is running in order to connect to the port.  Depending on your use case, a local NAT could be done to expose the port to a network interface or VPN interface, but pay attention to security as this would allow anyone on the network to control the Firefox instance.
</p>
<p>The SSH tunnel is quite standard: <code>ssh -L 6001:127.0.0.1:6000 remote-host</code>.  The remote port 6000 is exposed locally as 6001; this is important because your own Firefox may already be using port 6000 for some reason.
</p>
<p>In your own local Firefox instance, visit the page <code>about:debugging</code>, add the remote instance <code>localhost:6001</code> and then click on Connect on its name on the left panel.  Congratulations, you have access to the remote instance for debugging or profiling websites.
</p>
<figure><a href='static/firefox-debug-add-remote-fs8.png'><picture><img src='static/firefox-debug-add-remote-fs8.png' alt='Input the remote address localhost:6001 and click on Add' width='60%' /></picture><figcaption>Input the remote address localhost:6001 and click on Add</figcaption></a></figure>
<figure><a href='static/firefox-debug-left-panel-fs8.png'><picture><img src='static/firefox-debug-left-panel-fs8.png' alt='Click on connect on the left' width='60%' /></picture><figcaption>Click on connect on the left</figcaption></a></figure>
<figure><a href='static/firefox-debug-access-fs8.png'><picture><img src='static/firefox-debug-access-fs8.png' alt='Enjoy your remote debugging session' width='60%' /></picture><figcaption>Enjoy your remote debugging session</figcaption></a></figure>
<h1 id="_Conclusion">4. Conclusion <a href="#_Conclusion">§</a></h1>
<p>It can be tricky to debug a system you cannot directly see, especially a kiosk in production that you cannot easily access when a problem occurs; remote debugging makes this a lot easier.
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-08-06-remote-firefox-debug.html</guid>
  <link>https://dataswamp.org/~solene/2024-08-06-remote-firefox-debug.html</link>
  <pubDate>Thu, 08 Aug 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Full-featured email server running OpenBSD</title>
  <description>
    <![CDATA[

    <div id="toc">
    <h1 id="toctitle">Table of contents</h1>
    
      <ul>
    <li>1. <a href="#_Introduction"> Introduction</a>
</li>
    <li>2. <a href="#_Quick_reminder"> Quick reminder</a>
</li>
    <li>3. <a href="#_Packet_Filter_(PF)"> Packet Filter (PF)</a>
</li>
    <li>4. <a href="#_DNS"> DNS</a>
      <ul>
      <li>4. 1. <a href="#_MX_records"> MX records</a>
</li>
      <li>4. 2. <a href="#_SPF"> SPF</a>
</li>
      <li>4. 3. <a href="#_DKIM"> DKIM</a>
</li>
      <li>4. 4. <a href="#_DMARC"> DMARC</a>
</li>
      <li>4. 5. <a href="#_PTR_(Reverse_DNS)"> PTR (Reverse DNS)</a>
    </li></ul></li>
    <li>5. <a href="#_System_configuration"> System configuration</a>
      <ul>
      <li>5. 1. <a href="#_Acme-client"> Acme-client</a>
</li>
      <li>5. 2. <a href="#_Rspamd"> Rspamd</a>
        <ul>
        <li>5. 2. 1. <a href="#_Alternatives"> Alternatives</a>
      </li></ul></li>
      <li>5. 3. <a href="#_OpenSMTPD"> OpenSMTPD</a>
        <ul>
        <li>5. 3. 1. <a href="#_TLS"> TLS</a>
</li>
        <li>5. 3. 2. <a href="#_User_management"> User management</a>
</li>
        <li>5. 3. 3. <a href="#_Handling_extra_domains"> Handling extra domains</a>
</li>
        <li>5. 3. 4. <a href="#_Without_Dovecot"> Without Dovecot</a>
      </li></ul></li>
      <li>5. 4. <a href="#_Dovecot"> Dovecot</a>
        <ul>
        <li>5. 4. 1. <a href="#_IMAP"> IMAP</a>
</li>
        <li>5. 4. 2. <a href="#_POP"> POP</a>
</li>
        <li>5. 4. 3. <a href="#_JMAP"> JMAP</a>
</li>
        <li>5. 4. 4. <a href="#_Sieve_(filtering_rules)"> Sieve (filtering rules)</a>
</li>
        <li>5. 4. 5. <a href="#_Manage_Sieve"> Manage Sieve</a>
</li>
        <li>5. 4. 6. <a href="#_Start_the_service"> Start the service</a>
      </li></ul>
    </li></ul></li>
    <li>6. <a href="#_Webmail"> Webmail</a>
      <ul>
      <li>6. 1. <a href="#_Roundcube_mail_setup"> Roundcube mail setup</a>
    </li></ul></li>
    <li>7. <a href="#_Hardening"> Hardening</a>
      <ul>
      <li>7. 1. <a href="#_Always_allow_the_sender_per_email_or_domain"> Always allow the sender per email or domain</a>
</li>
      <li>7. 2. <a href="#_Block_bots"> Block bots</a>
</li>
      <li>7. 3. <a href="#_Split_the_stack"> Split the stack</a>
</li>
      <li>7. 4. <a href="#_Network_attack_surface_reduction"> Network attack surface reduction</a>
    </li></ul></li>
    <li>8. <a href="#_Email_client_configuration"> Email client configuration</a>
</li>
    <li>9. <a href="#_Verify_the_setup"> Verify the setup</a>
</li>
    <li>10. <a href="#_Maintenance"> Maintenance</a>
      <ul>
      <li>10. 1. <a href="#_Running_processes"> Running processes</a>
</li>
      <li>10. 2. <a href="#_Certificates_renewal"> Certificates renewal</a>
</li>
      <li>10. 3. <a href="#_All_about_logs"> All about logs</a>
</li>
      <li>10. 4. <a href="#_Disk_space"> Disk space</a>
    </li></ul></li>
    <li>11. <a href="#_Conclusion"> Conclusion</a>

    </li></ul></div>
    
<h1 id="_Introduction">1. Introduction <a href="#_Introduction">§</a></h1>
<p>This blog post is a guide explaining how to setup a full-featured email server on OpenBSD 7.5.  It was commissioned by a customer of my consultancy who wanted it to be published on my blog.
</p>
<p>Setting up a modern email stack that does not appear as a spam platform to the world can be a daunting task; this guide will cover what you need for a secure, functional and low-maintenance email system.
</p>
<p>The features list can be found below:
</p>
<ul>

  <li>email access through IMAP, POP or Webmail</li>
  <li>secure SMTP server (mandatory server to server encryption, personal information hiding)</li>
  <li>state-of-the-art setup to be considered as legitimate as possible</li>
  <li>firewall filtering (bot blocking, all ports closed but the required ones)</li>
  <li>anti-spam</li>
</ul>

<p>In the example, I will set up a temporary server for the domain <code>puffy.cafe</code> with a server using the subdomain <code>mail.puffy.cafe</code>.  From there, you can adapt with your own domain.
</p>
<h1 id="_Quick_reminder">2. Quick reminder <a href="#_Quick_reminder">§</a></h1>
<p>I prepared a few diagrams explaining how all the components work together, in three cases: when sending an email, when the SMTP server receives an email from the outside, and when you retrieve your emails locally.
</p>
<figure><a href='static/img/email-setup-authenticated-mail-delivery.dot.png'><picture><img src='static/img/email-setup-authenticated-mail-delivery.dot.png' alt='Authenticated user sending an email to the outside' width='60%' /></picture><figcaption>Authenticated user sending an email to the outside</figcaption></a></figure>
<figure><a href='static/img/email-setup-receiving-email.dot.png'><picture><img src='static/img/email-setup-receiving-email.dot.png' alt='Outside sending an email to one of our users' width='60%' /></picture><figcaption>Outside sending an email to one of our users</figcaption></a></figure>
<figure><a href='static/img/email-setup-retrieving-emails.dot.png'><picture><img src='static/img/email-setup-retrieving-emails.dot.png' alt='User retrieving emails for reading' width='60%' /></picture><figcaption>User retrieving emails for reading</figcaption></a></figure>
<h1 id="_Packet_Filter_(PF)">3. Packet Filter (PF) <a href="#_Packet_Filter_(PF)">§</a></h1>
<p>Packet Filter is OpenBSD's firewall.  In our setup, we want all ports to be blocked except the few ones required for the email stack.
</p>
<p>The following ports will be required:
</p>
<ul>

  <li>opensmtpd 25/tcp (smtp): used for email delivery from other servers, supports STARTTLS</li>
  <li>opensmtpd 465/tcp (smtps): used to establish a TLS connection to the SMTP server to receive or send emails</li>
  <li>opensmtpd 587/tcp (submission): used to send emails to external servers, supports STARTTLS</li>
  <li>httpd 80/tcp (http): used to generate TLS certificates using ACME</li>
  <li>dovecot 993/tcp (imaps): used to connect to the IMAPS server to read emails</li>
  <li>dovecot 995/tcp (pop3s): used to connect to the POP3S server to download emails</li>
  <li>dovecot 4190/tcp (sieve): used to allow remote management of a user's Sieve rules</li>
</ul>

<p>Depending on what services you will use, only the opensmtpd ports are mandatory.  In addition, we will open the port 22/tcp for SSH.
</p>
<pre><code>set block-policy drop
set loginterface egress
set skip on lo0

# packet normalization
match in all scrub (no-df random-id max-mss 1440)
antispoof quick for { egress }

tcp_ports = &quot;{ smtps smtp submission imaps pop3s sieve ssh http }&quot;

block all
pass out inet
pass out inet6

# allow ICMP (ping)
pass in proto icmp

# allow IPv6 to work
pass in on egress inet6 proto icmp6 all icmp6-type { routeradv neighbrsol neighbradv }
pass in on egress inet6 proto udp from fe80::/10 port dhcpv6-server to fe80::/10 port dhcpv6-client no state

# allow our services
pass in on egress proto tcp from any to any port $tcp_ports

# default OpenBSD rules
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild
</code></pre>
<h1 id="_DNS">4. DNS <a href="#_DNS">§</a></h1>
<p>If you want to run your own email server, you need a domain name configured with a couple of DNS records about the email server.
</p>
<h2 id="_MX_records">4.1. MX records <a href="#_MX_records">§</a></h2>
<p><a href='https://en.wikipedia.org/wiki/MX_record'>Wikipedia page: MX record</a></p>
<p>The MX records list the servers that outside SMTP servers should use to send us emails; this is the public list of our servers accepting emails for a given domain.  Each of them has an associated weight: the server with the lowest weight should be used first, and if it does not respond, the next server used will be the one with a slightly higher weight.  This is a simple mechanism that allows setting up a hierarchy.
</p>
<p>I highly recommend setting up at least two servers, so if your main server is unreachable (host outage, hardware failure, ongoing upgrade) the emails will be sent to the backup server.  Dovecot bundles a program to synchronize mailboxes between servers, one-way or two-way, one shot or continuously.
</p>
<p>If you have no MX records in your domain name, it is not possible to send you emails. It is like asking someone to send you a post card without giving them any clue about your real address.
</p>
<p>Your server hostname can be different from the domain apex (raw domain name without a subdomain), a simple example would be to use <code>mail.domain.example</code> for the server name, this will not prevent it from receiving/sending emails using <code>@domain.example</code> in email addresses.
</p>
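<p>The mail server name itself must of course resolve, so it needs its own A (and/or AAAA) record in the zone; the IP below is purely illustrative:
</p>
<pre><code>mail    IN A      203.0.113.10
</code></pre>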
<p>In my example, the domain puffy.cafe mail server will be mail.puffy.cafe, giving this MX record in my DNS zone:
</p>
<pre><code>        IN MX     10 mail.puffy.cafe.
</code></pre>
<h2 id="_SPF">4.2. SPF <a href="#_SPF">§</a></h2>
<p><a href='https://en.wikipedia.org/wiki/Sender_Policy_Framework'>Wikipedia page: SPF record</a></p>
<p>The SPF record is certainly the most important piece of the email puzzle to detect spam.  With SPF, the domain name owner can define which servers are allowed to send emails from that domain.  A properly configured spam filter will give a high spam score to incoming emails sent by servers not listed in the sender domain's SPF.
</p>
<p>To ease the configuration, that record can automatically include all MX defined for a domain, but also A/AAAA records, so if you only use your MX servers for sending, a simple configuration allowing MX servers to send is enough.
</p>
<p>In my example, only mail.puffy.cafe should be legitimate for sending emails, any future MX server should also be allowed to send emails, so we configure the SPF to allow all MX defined servers to be senders.
</p>
<pre><code>    600 IN TXT     &quot;v=spf1 mx -all&quot;
</code></pre>
<h2 id="_DKIM">4.3. DKIM <a href="#_DKIM">§</a></h2>
<p><a href='https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail'>Wikipedia page: DKIM signature</a></p>
<p>When used, DKIM is a system allowing a receiver to authenticate a sender, based on asymmetric cryptographic keys.  The sender publishes its public key in a TXT DNS record, then signs all outgoing emails using the private key.  By doing so, receivers can validate the email integrity and make sure it was sent from a server of the domain claimed in the From header.
</p>
<p>DKIM is mandatory to not be classified as a spamming server.
</p>
<p>The following set of commands will create a 2048-bit RSA key in <code>/etc/mail/dkim/private/puffy.cafe.key</code> with its public key in <code>/etc/mail/dkim/puffy.cafe.pub</code>; the <code>umask 077</code> command makes sure any file created during the process is only readable by root.  Finally, you need to make the private key readable to the group <code>_rspamd</code>.
</p>
<p>Note: the umask command will persist in your shell session, if you do not want to create files/directory only readable by root after this, either spawn a new shell, or run the set of commands in a new shell and then exit from it once you are done.
</p>
<pre><code>umask 077
install -d -o root -g wheel -m 755 /etc/mail/dkim
install -d -o root -g _dkim -m 775 /etc/mail/dkim/private
openssl genrsa -out /etc/mail/dkim/private/puffy.cafe.key 2048
openssl rsa -in /etc/mail/dkim/private/puffy.cafe.key -pubout -out /etc/mail/dkim/puffy.cafe.pub
chgrp _rspamd /etc/mail/dkim/private/puffy.cafe.key /etc/mail/dkim/private/
chmod 440 /etc/mail/dkim/private/puffy.cafe.key
chmod 775 /etc/mail/dkim/private/
</code></pre>
<p>In this example, we will name the DKIM selector <code>dkim</code> to keep it simple.  The selector is the name of the key, this allows having multiple DKIM keys for a single domain.
</p>
<p>Add the DNS record like the following, the value in <code>p</code> is the public key in the file <code>/etc/mail/dkim/puffy.cafe.pub</code>, you can get it as a single line with the command <code>awk &#x27;/PUBLIC/ { $0=&quot;&quot; } { printf (&quot;%s&quot;,$0) } END { print }&#x27; /etc/mail/dkim/puffy.cafe.pub</code>:
</p>
<p>Your registrar may offer to add the entry using a DKIM specific form.  There is nothing wrong doing so, just make sure the produced entry looks like the entry below.
</p>
<pre><code>dkim._domainkey IN TXT &quot;v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAo3tIFelMk74wm+cJe20qAUVejD0/X+IdU+A2GhAnLDpgiA5zMGiPfYfmawlLy07tJdLfMLObl8aZDt5Ij4ojGN5SE1SsbGC2MTQGq9L2sLw2DXq+D8YKfFAe0KdYGczd9IAQ9mkYooRfhF8yMc2sMoM75bLxGjRM1Fs1OZLmyPYzy83UhFYq4gqzwaXuTvxvOKKyOwpWzrXzP6oVM7vTFCdbr8E0nWPXWKPJhcd10CF33ydtVVwDFp9nDdgek3yY+UYRuo/iJvdcn2adFoDxlE6eXmhGnyG4+nWLNZrxIgokhom5t5E84O2N31YJLmqdTF+nH5hTON7//5Kf/l/ubwIDAQAB&quot;
</code></pre>
<h2 id="_DMARC">4.4. DMARC <a href="#_DMARC">§</a></h2>
<p><a href='https://en.wikipedia.org/wiki/DMARC'>Wikipedia page: DMARC record</a></p>
<p>The DMARC record is an extra mechanism that comes on top of SPF/DKIM, while it does not do much by itself, it is important to configure it.
</p>
<p>DMARC can be seen as a public notice telling receiving servers what to do with an email whose sender looks like your domain name (legitimate or not) when SPF/DKIM does not validate.
</p>
<p>As of 2024, DMARC offers three actions for receivers:
</p>
<ul>

  <li>do nothing but make a report to the domain owner</li>
  <li>"quarantine" mode: tell the receiver to be suspicious without rejecting it, the result will depend on the receiver (most of the time it will be flagged as spam) and make a report</li>
  <li>"reject" mode: tell the receiver to not accept the email and make a report</li>
</ul>

<p>In my example, I want invalid SPF/DKIM emails to be rejected.  It is quite arbitrary, but I prefer all invalid emails from my domain to be discarded rather than ending up in a spam directory, so <code>p</code> and <code>sp</code> are set to <code>reject</code>.  In addition, if my own server is misconfigured I will be notified about delivery issues sooner than if emails were silently put into quarantine.
</p>
<p>An email address should be provided to receive DMARC reports; they are barely readable and I never made use of them, but the address should exist, and this is what the <code>rua</code> field is for.
</p>
<p>The field <code>aspf</code> is set to <code>r</code> (relax), basically this allows any servers with a hostname being a subdomain of <code>.puffy.cafe</code> to send emails for <code>@puffy.cafe</code>, while if this field is set to <code>s</code> (strict), the domain of the sender should match the domain of the email server (<code>mail.puffy.cafe</code> would only be allowed to send for <code>@mail.puffy.cafe</code>).
</p>
<p><a href='https://mxtoolbox.com/dmarc/details/dmarc-tags'>Mx Toolbox website: DMARC tags list</a></p>
<pre><code>_dmarc        IN TXT     &quot;v=DMARC1;p=reject;rua=mailto:dmarc@puffy.cafe;sp=reject;aspf=r;&quot;
</code></pre>
<h2 id="_PTR_(Reverse_DNS)">4.5. PTR (Reverse DNS) <a href="#_PTR_(Reverse_DNS)">§</a></h2>
<p><a href='https://en.wikipedia.org/wiki/Reverse_DNS_lookup'>Wikipedia page: PTR record</a></p>
<p>An older mechanism used to prevent spam was to block, or consider as spam, any SMTP server whose advertised hostname did not match the result of the reverse lookup of its IP.
</p>
<p>Let's say "mail.foobar.example" (IP: A.B.C.D) is sending an email to my server: if the PTR record of A.B.C.D does not resolve to "mail.foobar.example", the email would be considered spam or rejected.  While this check is superseded by SPF/DKIM, and annoying because it is not always possible to define a PTR for a public IP, a correct reverse DNS setup is still a strong requirement to not be considered a spamming platform.
</p>
<p>Make sure the PTR matches the system hostname and not the domain name itself, in the example above the PTR should be <code>mail.foobar.example</code> and not <code>foobar.example</code>.
</p>
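<p>To sketch the check described above: the forward and reverse lookups must agree.  The <code>dig</code> commands are shown as comments because they need network access, and the hostname and IP are the hypothetical values from the example:</p>

```shell
# forward lookup of the advertised hostname:  dig +short A mail.foobar.example
# reverse lookup of the connecting IP:        dig +short -x A.B.C.D
# the check passes when the PTR answer matches the advertised hostname
advertised="mail.foobar.example"
ptr="mail.foobar.example."   # PTR answers carry a trailing dot
[ "${ptr%.}" = "$advertised" ] && echo "reverse DNS matches"
```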
<h1 id="_System_configuration">5. System configuration <a href="#_System_configuration">§</a></h1>
<h2 id="_Acme-client">5.1. Acme-client <a href="#_Acme-client">§</a></h2>
<p>The first step is to obtain a valid TLS certificate: this requires configuring acme-client and httpd, then starting the httpd daemon.
</p>
<p>Copy the acme-client example configuration: <code>cp /etc/examples/acme-client.conf /etc/</code>
</p>
<p>Modify <code>/etc/acme-client.conf</code> and edit only the last entry to configure your own domain; mine looks like this:
</p>
<pre><code>#
# $OpenBSD: acme-client.conf,v 1.5 2023/05/10 07:34:57 tb Exp $
#
authority letsencrypt {
	api url &quot;https://acme-v02.api.letsencrypt.org/directory&quot;
	account key &quot;/etc/acme/letsencrypt-privkey.pem&quot;
}

authority letsencrypt-staging {
	api url &quot;https://acme-staging-v02.api.letsencrypt.org/directory&quot;
	account key &quot;/etc/acme/letsencrypt-staging-privkey.pem&quot;
}

authority buypass {
	api url &quot;https://api.buypass.com/acme/directory&quot;
	account key &quot;/etc/acme/buypass-privkey.pem&quot;
	contact &quot;mailto:me@example.com&quot;
}

authority buypass-test {
	api url &quot;https://api.test4.buypass.no/acme/directory&quot;
	account key &quot;/etc/acme/buypass-test-privkey.pem&quot;
	contact &quot;mailto:me@example.com&quot;
}

domain mail.puffy.cafe {
    # you can remove the line &quot;alternative names&quot; if you do not need extra subdomains
    # associated to this certificate
    # imap.puffy.cafe is purely an example, I do not need it
	alternative names { imap.puffy.cafe pop.puffy.cafe }
	domain key &quot;/etc/ssl/private/mail.puffy.cafe.key&quot;
	domain full chain certificate &quot;/etc/ssl/mail.puffy.cafe.fullchain.pem&quot;
	sign with letsencrypt
}
</code></pre>
<p>Now, configure httpd, starting from the OpenBSD example: <code>cp /etc/examples/httpd.conf /etc/</code>
</p>
<p>Edit <code>/etc/httpd.conf</code>: we want the first block to match all domains instead of only &quot;example.com&quot;, and we do not need the second block listening on 443/tcp (unless you want to run an HTTPS server with some content, but you are on your own then).  The resulting file should look like the following:
</p>
<pre><code># $OpenBSD: httpd.conf,v 1.22 2020/11/04 10:34:18 denis Exp $

server &quot;*&quot; {
	listen on * port 80
	location &quot;/.well-known/acme-challenge/*&quot; {
		root &quot;/acme&quot;
		request strip 2
	}
	location * {
		block return 302 &quot;https://$HTTP_HOST$REQUEST_URI&quot;
	}
}
</code></pre>
<p>Enable and start httpd with <code>rcctl enable httpd &amp;&amp; rcctl start httpd</code>.
</p>
<p>Run <code>acme-client -v mail.puffy.cafe</code> to generate the certificate with some verbose output (if something goes wrong, you will have a clue).
</p>
<p>If everything went fine, you should have the full chain certificate in <code>/etc/ssl/mail.puffy.cafe.fullchain.pem</code> and the private key in <code>/etc/ssl/private/mail.puffy.cafe.key</code>.
</p>
<h2 id="_Rspamd">5.2. Rspamd <a href="#_Rspamd">§</a></h2>
<p>You will use rspamd to filter spam and sign outgoing emails for DKIM.
</p>
<p>Install rspamd and the filter that plugs it into OpenSMTPD:
</p>
<pre><code>pkg_add rspamd-- opensmtpd-filter-rspamd
</code></pre>
<p>You need to configure rspamd to sign outgoing emails with your DKIM private key.  To proceed, create the file <code>/etc/rspamd/local.d/dkim_signing.conf</code> (the filename is important):
</p>
<pre><code># our usernames do not contain the domain part
# so we need to enable this option
allow_username_mismatch = true;

# this configures the domain puffy.cafe to use the selector &quot;dkim&quot;
# and where to find the private key
domain {
    puffy.cafe {
        path = &quot;/etc/mail/dkim/private/puffy.cafe.key&quot;;
        selector = &quot;dkim&quot;;
    }
}
</code></pre>
<p>For better performance, use Redis as a cache backend for rspamd:
</p>
<pre><code>rcctl enable redis
rcctl start redis
</code></pre>
<p>Now you can start rspamd:
</p>
<pre><code>rcctl enable rspamd
rcctl start rspamd
</code></pre>
<p>For extra information about rspamd (like statistics or its web UI), I wrote about it in 2021:
</p>
<p><a href='https://dataswamp.org/~solene/2021-07-13-smtpd-rspamd.html'>Older blog post: 2021-07-13 Filtering spam using Rspamd and OpenSMTPD on OpenBSD</a></p>
<h3 id="_Alternatives">5.2.1. Alternatives <a href="#_Alternatives">§</a></h3>
<p>If you do not want to use rspamd, it is possible to handle the DKIM signing part with <code>opendkim</code>, <code>dkimproxy</code> or <code>opensmtpd-filter-dkimsign</code>.  The spam filtering could either be replaced by the featureful <code>spamassassin</code>, available as a package, or partially by the base system program <code>spamd</code> (it does not analyze emails).
</p>
<p>This guide only focuses on rspamd, but it is important to know alternatives exist.
</p>
<h2 id="_OpenSMTPD">5.3. OpenSMTPD <a href="#_OpenSMTPD">§</a></h2>
<p>The OpenSMTPD configuration file on OpenBSD is <code>/etc/mail/smtpd.conf</code>; here is a working configuration with a lot of comments:
</p>
<pre><code>## this defines the paths for the X509 certificate
pki puffy.cafe cert &quot;/etc/ssl/mail.puffy.cafe.fullchain.pem&quot;
pki puffy.cafe key &quot;/etc/ssl/private/mail.puffy.cafe.key&quot;
pki puffy.cafe dhe auto

## this defines how the local part of email addresses can be split
# defaults to &#x27;+&#x27;, so solene+foobar@domain matches user
# solene@domain. Due to the &#x27;+&#x27; character being a regular source of issues
# with many online forms, I recommend using a character such as &#x27;_&#x27;,
# &#x27;.&#x27; or &#x27;-&#x27;. This feature is very handy to generate infinite unique email
# addresses without pre-defining aliases.
# Using &#x27;_&#x27;, solene_openbsd@domain and solene_buystuff@domain lead to the
# same address
smtp sub-addr-delim &#x27;_&#x27;

## this defines an external filter
# rspamd does dkim signing and spam filter
filter rspamd proc-exec &quot;filter-rspamd&quot;

## this defines which file will contain aliases
# this can be used to define groups or redirect emails to users
table aliases file:/etc/mail/aliases

## this defines all the ports to use
# mask-src hides system hostname, username and public IP when sending an email
listen on all port 25  tls         pki &quot;puffy.cafe&quot; filter &quot;rspamd&quot;
listen on all port 465 smtps       pki &quot;puffy.cafe&quot; auth mask-src filter &quot;rspamd&quot;
listen on all port 587 tls-require pki &quot;puffy.cafe&quot; auth mask-src filter &quot;rspamd&quot;

## this defines actions
# either deliver to lmtp or to an external server
action &quot;local&quot; lmtp &quot;/var/dovecot/lmtp&quot; alias &lt;aliases&gt;
action &quot;outbound&quot; relay

## this defines what should be done depending on some conditions
# receive emails (local or from external server for &quot;puffy.cafe&quot;)
match from any for domain &quot;puffy.cafe&quot; action &quot;local&quot;
match from local for local action &quot;local&quot;

# send email (from local or authenticated user)
match from any auth for any action &quot;outbound&quot;
match from local for any action &quot;outbound&quot;
</code></pre>
<p>In addition, you can configure the advertised hostname by editing the file <code>/etc/mail/mailname</code>: for instance my machine&#x27;s hostname is <code>ryzen</code> so I need this file to advertise it as <code>mail.puffy.cafe</code>.
</p>
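<p>For the example above, <code>/etc/mail/mailname</code> would contain a single line:</p>

```
mail.puffy.cafe
```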
<p>Restart OpenSMTPD with <code>rcctl restart smtpd</code>.
</p>
<h3 id="_TLS">5.3.1. TLS <a href="#_TLS">§</a></h3>
<p>For ports using STARTTLS (25 and 587), there are different options with regard to TLS encryption.
</p>
<ul>

  <li>do not allow STARTTLS</li>
  <li>offer STARTTLS but allow not using it (option <code>tls</code>)</li>
  <li>require STARTTLS: drop the connection when the remote peer does not ask for STARTTLS (option <code>tls-require</code>)</li>
  <li>require STARTTLS: drop connection when no STARTTLS, and verify the remote certificate (option <code>tls-require verify</code>)</li>
</ul>

<p>It is recommended to enforce STARTTLS on port 587 as it is used by authenticated users to send emails, preventing them from sending emails without network encryption.
</p>
<p>On port 25, used by external servers to reach yours, it is important to allow STARTTLS because most servers will deliver emails over an encrypted TLS session; however, it is your choice to enforce it or not.
</p>
<p>Enforcing STARTTLS might break email delivery from some external servers that are outdated or misconfigured (or bad actors).
</p>
<h3 id="_User_management">5.3.2. User management <a href="#_User_management">§</a></h3>
<p>By default, OpenSMTPD is configured to deliver email to valid users in the system.  In my example, if user <code>solene</code> exists, then email address <code>solene@puffy.cafe</code> will deliver emails to <code>solene</code> user mailbox.
</p>
<p>Of course, as you do not want the system daemons to receive emails, a file contains aliases to redirect emails from one user to another, or simply discard them.
</p>
<p>In <code>/etc/mail/aliases</code>, you can redirect emails to your username by adding a new line, in the example below I will redirect root emails to my user.
</p>
<pre><code>root: solene
</code></pre>
<p>It is possible to redirect to multiple users by separating them with a comma; this is handy if you want to create a local group delivering emails to multiple users.
</p>
<p>Instead of a user, it is possible to append the incoming emails to a file, pipe them to a command or return an SMTP code.  The aliases(5) man page contains all you need to know.
</p>
<p><a href='https://man.openbsd.org/aliases.5'>OpenBSD manual pages: aliases(5)</a></p>
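<p>As a sketch, an aliases table mixing these forms could look like the following; all names are hypothetical, and the exact syntax is documented in aliases(5):</p>

```
# redirect to a user
root: solene
# a local group: deliver to several users
staff: solene, another_user
# append incoming emails to a file
archive: /home/archive/mailbox
# pipe incoming emails to a command
ticket: "|/usr/local/bin/ticket-intake"
# reject with an SMTP error code
noreply: error:550 Address does not accept email
```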
<p>Every time you modify this file, you need to run the command <code>smtpctl update table aliases</code> to reload the aliases table in OpenSMTPD memory.
</p>
<p>You can add a new email account by creating a new user with a shell preventing login:
</p>
<pre><code>useradd -m -s /sbin/nologin username_here
passwd username_here
</code></pre>
<p>This user will not be able to do anything on the server but connect to SMTP/IMAP/POP.  They will not be able to change their password either!
</p>
<h3 id="_Handling_extra_domains">5.3.3. Handling extra domains <a href="#_Handling_extra_domains">§</a></h3>
<p>If you need to handle emails for multiple domains, this is rather simple:
</p>
<ul>

  <li>Add this line to the file <code>/etc/mail/smtpd.conf</code> by changing <code>puffy.cafe</code> to the other domain name: <code>match from any for domain &quot;puffy.cafe&quot; action &quot;local&quot;</code></li>
  <li>Configure the other domain DNS MX/SPF/DKIM/DMARC</li>
  <li>Configure <code>/etc/rspamd/local.d/dkim_signing.conf</code> to add a new block with the other domain, the dkim selector and the dkim key path</li>
  <li>The PTR does not need to be modified as it should match the machine hostname advertised over SMTP, and it is a unique value anyway</li>
</ul>

<p>If you want to use a different aliases table for the other domain, you need to create a new aliases file and configure <code>/etc/mail/smtpd.conf</code> accordingly by adding the following lines:
</p>
<pre><code>table lambda file:/etc/mail/aliases-lambda

action &quot;local_mail_lambda&quot; lmtp &quot;/var/dovecot/lmtp&quot; alias &lt;lambda&gt;

match from any for domain &quot;lambda-puffy.eu&quot; action &quot;local_mail_lambda&quot;
</code></pre>
<p>Note that the users will be the same for all the domains configured on the server.  If you want to have separate users per domain, or if "user a" on domain A and "user a" on domain B should be different persons / logins, you would need to set up virtual users instead of using system users.  Such a setup is beyond the scope of this guide.
</p>
<h3 id="_Without_Dovecot">5.3.4. Without Dovecot <a href="#_Without_Dovecot">§</a></h3>
<p>It is possible to not use Dovecot.  Such a setup can suit users who would like to download the maildir directory to their local computer using rsync; this is a one-way process and does not allow sharing a mailbox across multiple devices.  This reduces maintenance and attack surface at the cost of convenience.
</p>
<p>This may work as two-way access (untested) when using software such as unison to keep the local and remote directories synchronized, but be prepared to manage file conflicts!
</p>
<p>If you want this setup, replace the following line in smtpd.conf
</p>
<pre><code>action &quot;local&quot; lmtp &quot;/var/dovecot/lmtp&quot; alias &lt;aliases&gt;
</code></pre>
<p>with this line if you want to store the emails in the maildir format (a directory per email folder, a file per email); the emails will be stored in the directory "Maildir" in users' home directories:
</p>
<pre><code>action &quot;local&quot; maildir &quot;~/Maildir/&quot; junk alias &lt;aliases&gt;
</code></pre>
<p>or with this line if you want to keep the mbox format (a single file with emails appended to it, not practical); the emails will be stored in /var/mail/$user.
</p>
<pre><code>action &quot;local&quot; mbox alias &lt;aliases&gt;
</code></pre>
<p><a href='https://en.wikipedia.org/wiki/Maildir'>Wikipedia page: Maildir format</a></p>
<p><a href='https://en.wikipedia.org/wiki/Mbox'>Wikipedia page: Mbox format</a></p>
<h2 id="_Dovecot">5.4. Dovecot <a href="#_Dovecot">§</a></h2>
<p>Dovecot is an important piece of software for the domain's end users: it provides protocols like IMAP and POP3 to read emails from a client.  It is the most popular open source IMAP/POP server available (the other being Cyrus IMAP).
</p>
<p>Install dovecot with the following command line:
</p>
<pre><code>pkg_add dovecot-- dovecot-pigeonhole--
</code></pre>
<p>Dovecot has a lot of configuration files in <code>/etc/dovecot/conf.d/</code>; most of them are commented and ready to be modified, and you will only have to edit a few of them.  This guide provides the content of the files with empty lines and comments stripped so you can quickly check if your file is correct; you can use the command <code>awk &#x27;$1 !~ /^#/ &amp;&amp; $1 ~ /./&#x27;</code> on a file to display only its &quot;useful&quot; content (awk will not modify the file).
</p>
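<p>To illustrate the awk one-liner, here is a quick sketch run on a throwaway file (the file name and content are made up):</p>

```shell
# create a sample file containing a comment and an empty line
printf '# a comment\n\nmail_location = maildir:~/Maildir\n' > /tmp/sample.conf
# print only the lines that are neither comments nor empty
awk '$1 !~ /^#/ && $1 ~ /./' /tmp/sample.conf
# prints: mail_location = maildir:~/Maildir
```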
<p>Modify <code>/etc/dovecot/conf.d/10-ssl.conf</code> and search the lines <code>ssl_cert</code> and <code>ssl_key</code>, change their values to your certificate full chain and private key.
</p>
<p>Generate a Diffie-Hellman file for perfect forward secrecy: this makes each TLS negotiation unique, so if the private key ever leaks, past TLS communications will remain safe.
</p>
<pre><code>openssl dhparam -out /etc/dovecot/dh.pem 4096
chown _dovecot:_dovecot /etc/dovecot/dh.pem
chmod 400 /etc/dovecot/dh.pem
</code></pre>
<p>The file (filtered of all comments/empty lines) should look like the following:
</p>
<pre><code>ssl_cert = &lt;/etc/ssl/mail.puffy.cafe.fullchain.pem
ssl_key = &lt;/etc/ssl/private/mail.puffy.cafe.key
ssl_dh = &lt;/etc/dovecot/dh.pem
</code></pre>
<p>Modify <code>/etc/dovecot/conf.d/10-mail.conf</code>: search for the commented line <code>mail_location</code>, uncomment it and set the value to <code>maildir:~/Maildir</code>.  This tells Dovecot where users' mailboxes are stored and in which format; we want to use the maildir format.
</p>
<p>The resulting file should look like:
</p>
<pre><code>mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
}
mmap_disable = yes
first_valid_uid = 1000
mail_plugin_dir = /usr/local/lib/dovecot
protocol !indexer-worker {
}
mbox_write_locks = fcntl
</code></pre>
<p>Modify the file <code>/etc/dovecot/conf.d/20-lmtp.conf</code>; LMTP is the protocol used by OpenSMTPD to transmit incoming emails to Dovecot.  Search for the commented variable <code>mail_plugins</code> and uncomment it with the value <code>mail_plugins = $mail_plugins sieve</code>.
</p>
<p>The resulting file should look like:
</p>
<pre><code>protocol lmtp {
  mail_plugins = $mail_plugins sieve
}
</code></pre>
<p>If you do not want to use IMAP or POP3, you do not need Dovecot.  The section above explains how to proceed without it.
</p>
<h3 id="_IMAP">5.4.1. IMAP <a href="#_IMAP">§</a></h3>
<p><a href='https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol'>Wikipedia page: IMAP protocol</a></p>
<p>IMAP is an efficient protocol that returns the email headers per directory, so you do not have to download all your emails to view a folder's message list; emails are downloaded upon read (by default in most email clients).  It allows some cool features like server-side search, incoming email sorting with Sieve filters and multi-device access.
</p>
<p>Edit <code>/etc/dovecot/conf.d/20-imap.conf</code> and configure the last lines so that the resulting file looks like this:
</p>
<pre><code>protocol imap {
  mail_plugins = $mail_plugins imap_sieve
  mail_max_userip_connections = 25
}
</code></pre>
<p>The number of connections per user/IP should be high if you have an email client tracking many folders: in IMAP, a connection is required for each folder, so the number of connections can quickly increase.  On top of that, if you have multiple devices behind the same public IP, you could quickly reach the limit.  I found 25 worked fine for me with 3 devices.
</p>
<h3 id="_POP">5.4.2. POP <a href="#_POP">§</a></h3>
<p><a href='https://en.wikipedia.org/wiki/Post_Office_Protocol'>Wikipedia page: POP protocol</a></p>
<p>POP3 is a pretty old protocol that is rarely considered by users, but I still consider it a viable alternative to IMAP depending on your needs.
</p>
<p>A major incentive for using POP is that it downloads all emails locally before removing them from the server.  As we have no tooling to encrypt emails stored on remote email servers, POP3 is a must if you do not want to leave any email on the server.  POP3 does not support remote folders, so you can not use Sieve filters on the server to sort your emails before downloading them: a POP3 client downloads the Inbox and then sorts the emails locally.
</p>
<p>It can support multiple devices under some conditions: if you delete the emails after X days, your devices should synchronize before the emails are removed.  In such a case they will all have the emails stored locally, but they will not be synced together: if both computers A and B are up to date and you delete an email on A, it will still be on B.
</p>
<p>There are no changes required for POP3 in Dovecot as the defaults are good enough.
</p>
<h3 id="_JMAP">5.4.3. JMAP <a href="#_JMAP">§</a></h3>
<p>For information, a replacement for IMAP called JMAP is in development; it is meant to be better than IMAP in every way and also includes calendar and address book management.
</p>
<p>JMAP implementations exist but are young, and support in email clients is almost non-existent.  For instance, it seems Mozilla Thunderbird is not interested in it: an issue in their bug tracker about JMAP, opened in December 2016, only has a couple of comments from people who would like to see it happen, nothing more.
</p>
<p><a href='https://bugzilla.mozilla.org/show_bug.cgi?id=1322991'>Issue 1322991: Add support for new JMAP protocol</a></p>
<p>From the JMAP website page listing compatible clients, I only recognized the name "aerc" which is a modern console email client.
</p>
<p><a href='https://jmap.io/software.html#clients'>JMAP project website: clients list</a></p>
<h3 id="_Sieve_(filtering_rules)">5.4.4. Sieve (filtering rules) <a href="#_Sieve_(filtering_rules)">§</a></h3>
<p><a href='https://en.wikipedia.org/wiki/Sieve_(mail_filtering_language)'>Wikipedia page: Sieve</a></p>
<p>Dovecot has a plugin offering Sieve filters: rules applied to received emails going into your mailbox, whether you want to sort them into dedicated directories, mark them as read or block some addresses.  That plugin is called pigeonhole.
</p>
<p>You will need Sieve to enable the spam filter learning system when moving emails from/to the Junk folder, as the learning is triggered by a Sieve rule.  This improves the ability of rspamd's Bayesian filter (a classification method using tokens; the story of the person behind it is interesting) to detect spam accurately.
</p>
<p>Edit <code>/etc/dovecot/conf.d/90-plugin.conf</code> with the following content:
</p>
<pre><code>plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms

  # From elsewhere to Spam folder
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve

  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve

  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment
}
</code></pre>
<p>This piece of configuration was taken from the official Dovecot documentation: https://doc.dovecot.org/configuration_manual/howto/antispam_with_sieve/ .  It will trigger shell scripts calling rspamd to teach it what spam looks like, and what is legitimate (ham).  One script will run when an email is moved out of the spam directory (ham), another one when an email is moved into the spam directory (spam).
</p>
<p>Modify <code>/etc/dovecot/conf.d/15-mailboxes.conf</code> to add the following snippet inside the block <code>namespace inbox { ... }</code>, it will associate the Junk directory as the folder containing spam and automatically create it if it does not exist:
</p>
<pre><code>  mailbox Spam {
    auto = create
    special_use = \Junk
  }
</code></pre>
<p>To make this work completely, you need to write the two extra Sieve filters that will trigger the scripts:
</p>
<p>Create <code>/usr/local/lib/dovecot/sieve/report-spam.sieve</code>
</p>
<pre><code>require [&quot;vnd.dovecot.pipe&quot;, &quot;copy&quot;, &quot;imapsieve&quot;, &quot;environment&quot;, &quot;variables&quot;];

if environment :matches &quot;imap.user&quot; &quot;*&quot; {
  set &quot;username&quot; &quot;${1}&quot;;
}

pipe :copy &quot;sa-learn-spam.sh&quot; [ &quot;${username}&quot; ];
</code></pre>
<p>Create <code>/usr/local/lib/dovecot/sieve/report-ham.sieve</code>
</p>
<pre><code>require [&quot;vnd.dovecot.pipe&quot;, &quot;copy&quot;, &quot;imapsieve&quot;, &quot;environment&quot;, &quot;variables&quot;];

if environment :matches &quot;imap.mailbox&quot; &quot;*&quot; {
  set &quot;mailbox&quot; &quot;${1}&quot;;
}

if string &quot;${mailbox}&quot; &quot;Trash&quot; {
  stop;
}

if environment :matches &quot;imap.user&quot; &quot;*&quot; {
  set &quot;username&quot; &quot;${1}&quot;;
}

pipe :copy &quot;sa-learn-ham.sh&quot; [ &quot;${username}&quot; ];
</code></pre>
<p>Create <code>/usr/local/lib/dovecot/sieve/sa-learn-ham.sh</code>
</p>
<pre><code>#!/bin/sh
exec /usr/local/bin/rspamc -d &quot;${1}&quot; learn_ham
</code></pre>
<p>Create <code>/usr/local/lib/dovecot/sieve/sa-learn-spam.sh</code>
</p>
<pre><code>#!/bin/sh
exec /usr/local/bin/rspamc -d &quot;${1}&quot; learn_spam
</code></pre>
<p>Make the two scripts executable with <code>chmod +x /usr/local/lib/dovecot/sieve/sa-learn-spam.sh /usr/local/lib/dovecot/sieve/sa-learn-ham.sh</code>.
</p>
<p>Run the following command to compile the sieve filters:
</p>
<pre><code>sievec /usr/local/lib/dovecot/sieve/report-spam.sieve
sievec /usr/local/lib/dovecot/sieve/report-ham.sieve
</code></pre>
<h3 id="_Manage_Sieve">5.4.5. Manage Sieve <a href="#_Manage_Sieve">§</a></h3>
<p>By default, Sieve rules are stored in a file in the user's home directory; however, there is a standard protocol named "managesieve" to manage Sieve filters remotely from an email client.
</p>
<p>It is enabled out of the box in the Dovecot configuration, although you need to make sure to open port 4190/tcp in the firewall if you want to allow users to use it.
</p>
<h3 id="_Start_the_service">5.4.6. Start the service <a href="#_Start_the_service">§</a></h3>
<p>Once you have configured everything, make sure the dovecot service is enabled, then start / restart it:
</p>
<pre><code>rcctl enable dovecot
rcctl start dovecot
</code></pre>
<h1 id="_Webmail">6. Webmail <a href="#_Webmail">§</a></h1>
<p>A webmail allows your users to read / send emails from a web interface instead of having to configure a local email client.  While webmails can be convenient, they increase the attack surface and are regularly affected by vulnerabilities; you may prefer to avoid running one on your server.
</p>
<p>The two most popular open source webmails are Roundcube mail and SnappyMail (a fork of the abandoned Rainloop); they both have pros and cons.
</p>
<h2 id="_Roundcube_mail_setup">6.1. Roundcube mail setup <a href="#_Roundcube_mail_setup">§</a></h2>
<p>Roundcube is packaged in OpenBSD, it will pull in all required dependencies and occasionally receive backported security updates.
</p>
<p>Install the package:
</p>
<pre><code>pkg_add roundcubemail
</code></pre>
<p>When installing the package, you will be prompted for a database backend for PHP.  If you have one or two users, I highly recommend choosing SQLite as it will work fine without requiring a running daemon, meaning less maintenance and fewer locked server resources.  If you plan to have a lot of users, there is no wrong pick between MySQL and PostgreSQL, but if you already have one of them running it would be better to reuse it for Roundcube.
</p>
<p>Specific instructions for installing Roundcube are provided by the package README in <code>/usr/local/share/doc/pkg-readmes/roundcubemail</code>.
</p>
<p>We need to enable a few PHP modules to make Roundcube mail work:
</p>
<pre><code>ln -s /etc/php-8.2.sample/zip.ini /etc/php-8.2/
ln -s /etc/php-8.2.sample/intl.ini /etc/php-8.2/
ln -s /etc/php-8.2.sample/opcache.ini /etc/php-8.2/
ln -s /etc/php-8.2.sample/pdo_sqlite.ini /etc/php-8.2/
</code></pre>
<p>Note that more PHP modules may be required if you enable extra features and plugins in Roundcube.
</p>
<p>PHP is ready to be started:
</p>
<pre><code>rcctl enable php82_fpm
rcctl start php82_fpm
</code></pre>
<p>Add the following blocks to <code>/etc/httpd.conf</code>, make sure you opened the port 443/tcp in your <code>pf.conf</code> and that you reloaded it with <code>pfctl -f /etc/pf.conf</code>:
</p>
<pre><code>server &quot;mail.puffy.cafe&quot; {

    listen on egress tls

    tls key &quot;/etc/ssl/private/mail.puffy.cafe.key&quot;
    tls certificate &quot;/etc/ssl/mail.puffy.cafe.fullchain.pem&quot;

    root &quot;/roundcubemail&quot;

    directory index index.php

    location &quot;*.php&quot; {
        fastcgi socket &quot;/run/php-fpm.sock&quot;
    }
}

types {
    include &quot;/usr/share/misc/mime.types&quot;
}
</code></pre>
<p>Restart httpd with <code>rcctl restart httpd</code>.
</p>
<p>You need to configure Roundcube to use a 24-byte security key and configure the database: edit the file <code>/var/www/roundcubemail/config/config.inc.php</code>:
</p>
<p>Search for the variable <code>des_key</code> and replace its value with the output of the command <code>tr -dc &#x27;[:print:]&#x27; &lt; /dev/urandom | fold -w 24 | head -n 1</code>, which will generate a 24-byte random string.  If the string contains a quote character, either escape it by prefixing it with a <code>\</code> or generate a new string.
</p>
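<p>As a sketch, a variant of that command restricted to alphanumeric characters sidesteps the quote-escaping issue entirely (this is not the exact command from the package instructions):</p>

```shell
# 24 random characters, none of which need escaping in a PHP string
key=$(tr -dc 'A-Za-z0-9' < /dev/urandom | fold -w 24 | head -n 1)
echo "$key"
```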
<p>For the database, you need to search the variable <code>db_dsnw</code>.
</p>
<p>If you use SQLite, change this line
</p>
<pre><code>$config[&#x27;db_dsnw&#x27;] = &#x27;sqlite:///roundcubemail/db/sqlite.db?mode=0660&#x27;;
</code></pre>
<p>to this line:
</p>
<pre><code>$config[&#x27;db_dsnw&#x27;] = &#x27;sqlite:///db/sqlite.db?mode=0660&#x27;;
</code></pre>
<p>If you chose MySQL/MariaDB or PostgreSQL, modify this line:
</p>
<pre><code>$config[&#x27;db_dsnw&#x27;] = &#x27;mysql://roundcube:pass@localhost/roundcubemail&#x27;;
</code></pre>
<p>to
</p>
<pre><code>$config[&#x27;db_dsnw&#x27;] = &#x27;mysql://USER:PASSWORD@localhost/DATABASE_NAME&#x27;;
</code></pre>
<p>Where <code>USER</code>, <code>PASSWORD</code> and <code>DATABASE_NAME</code> must match a user and database created in the backend.
</p>
<p>Because PHP is chrooted on OpenBSD and the OpenSMTPD configuration enforces TLS on port 587, the files needed for TLS to work must be copied into the chroot:
</p>
<pre><code>mkdir -p /var/www/etc/ssl
cp -p /etc/ssl/cert.pem /etc/ssl/openssl.cnf /var/www/etc/ssl/
</code></pre>
<p>To make sure the files <code>cert.pem</code> and <code>openssl.cnf</code> stay in sync after upgrades, add the copy command to the file <code>/etc/rc.local</code> and make this file executable.  This script runs at every boot and is a good place for this kind of file copy.
</p>
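<p>As a sketch, the resulting <code>/etc/rc.local</code> could contain nothing more than this (paths taken from the commands above):</p>

```shell
#!/bin/sh
# refresh the TLS files inside the PHP chroot at every boot
cp -p /etc/ssl/cert.pem /etc/ssl/openssl.cnf /var/www/etc/ssl/
```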
<p>If your IMAP and SMTP hosts are not on the same server where Roundcube is installed, adapt the variables <code>imap_host</code> and <code>smtp_host</code> to the server name.
</p>
<p>If Roundcube mail is running on the same server where OpenSMTPD is running, you need to disable certificate validation because <code>localhost</code> will not match the certificate and authentication will fail.  Change <code>smtp_host</code> line to <code>$config[&#x27;smtp_host&#x27;] = &#x27;tls://127.0.0.1:587&#x27;;</code> and add this snippet to the configuration file:
</p>
<pre><code>$config[&#x27;smtp_conn_options&#x27;] = array(
&#x27;ssl&#x27; =&gt; array(&#x27;verify_peer&#x27; =&gt; false, &#x27;verify_peer_name&#x27; =&gt; false),
&#x27;tls&#x27; =&gt; array(&#x27;verify_peer&#x27; =&gt; false, &#x27;verify_peer_name&#x27; =&gt; false));
</code></pre>
<p>From here, Roundcube mail should work when you load the domain configured in <code>httpd.conf</code>.
</p>
<p>For a more in-depth guide to installing and configuring Roundcube mail, there is an excellent guide written by Bruno Flückiger:
</p>
<p><a href='https://www.bsdhowto.ch/roundcube.html'>Install Roundcube on OpenBSD</a></p>
<h1 id="_Hardening">7. Hardening <a href="#_Hardening">§</a></h1>
<p>It is always possible to improve the security of this stack.  None of the following settings are mandatory, but they can be interesting depending on your needs.
</p>
<h2 id="_Always_allow_the_sender_per_email_or_domain">7.1. Always allow the sender per email or domain <a href="#_Always_allow_the_sender_per_email_or_domain">§</a></h2>
<p>It is possible to configure rspamd to force it to accept emails from a given email address or domain, bypassing the anti-spam.
</p>
<p>To proceed, edit the file <code>/etc/rspamd/local.d/multimap.conf</code> to add this content:
</p>
<pre><code>local_wl_domain {
        type = &quot;from&quot;;
        filter = &quot;email:domain&quot;;
        map = &quot;$CONFDIR/local.d/whitelist_domain.map&quot;;
        symbol = &quot;LOCAL_WL_DOMAIN&quot;;
        score = -10.0;
        description = &quot;domains that are always accepted&quot;;
}

local_wl_from {
        type = &quot;from&quot;;
        map = &quot;$CONFDIR/local.d/whitelist_email.map&quot;;
        symbol = &quot;LOCAL_WL_FROM&quot;;
        score = -10.0;
        description = &quot;email addresses that are always accepted&quot;;
}
</code></pre>
<p>Create the files <code>/etc/rspamd/local.d/whitelist_domain.map</code> and <code>/etc/rspamd/local.d/whitelist_email.map</code> using the command <code>touch</code>.
</p>
<p>Restart the service rspamd with <code>rcctl restart rspamd</code>.
</p>
<p>The created files use a simple syntax, add a line for each entry you want to allow:
</p>
<ul>

  <li>a domain name in <code>/etc/rspamd/local.d/whitelist_domain.map</code> to allow the domain</li>
  <li>an email address in <code>/etc/rspamd/local.d/whitelist_email.map</code> to allow this address</li>
</ul>
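
<p>For instance, assuming you want to always accept emails from the hypothetical domain <code>example.com</code> and the hypothetical address <code>alice@example.net</code>, the map files would contain:
</p>
<pre><code># /etc/rspamd/local.d/whitelist_domain.map
example.com

# /etc/rspamd/local.d/whitelist_email.map
alice@example.net
</code></pre>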

<p>There is no need to restart or reload rspamd after changing the files.
</p>
<p>The same technique can be reused to block domains/addresses directly in rspamd by assigning them a high positive score.
</p>
<h2 id="_Block_bots">7.2. Block bots <a href="#_Block_bots">§</a></h2>
<p>I published on my blog a script and related configuration to parse OpenSMTPD logs and block the bad actors with PF.
</p>
<p><a href='https://dataswamp.org/~solene/2023-06-22-opensmtpd-block-attempts.html'>2023-06-22 Ban scanners IPs from OpenSMTP logs</a></p>
<p>This includes an ignore file if you do not want some IPs to be blocked.
</p>
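<p>The idea can be sketched in <code>pf.conf</code> as a persistent table fed by the ban script; the table and file names below are hypothetical, adapt them to the script configuration:
</p>
<pre><code>table &lt;smtpd_bad_actors&gt; persist file &quot;/etc/smtpd-bad-actors.txt&quot;
block in quick on egress from &lt;smtpd_bad_actors&gt;
</code></pre>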
<h2 id="_Split_the_stack">7.3. Split the stack <a href="#_Split_the_stack">§</a></h2>
<p>If you want to improve your email setup security further, the best method is to split each part into dedicated systems.
</p>
<p>As dovecot is responsible for storing and exposing emails to users, this component would be safer in a dedicated system, so if a component of the email stack (other than dovecot) is compromised, the mailboxes will not be exposed.
</p>
<h2 id="_Network_attack_surface_reduction">7.4. Network attack surface reduction <a href="#_Network_attack_surface_reduction">§</a></h2>
<p>If this does not go against the usability of the email server for its users, I strongly recommend limiting the publicly open ports in the firewall to the minimum: 25, 80, 465 and 587.  This prevents attackers from exploiting any network-related 0-day or unpatched vulnerability in non-exposed services such as Dovecot.
</p>
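<p>As a sketch, assuming the default <code>egress</code> interface group, the corresponding <code>pf.conf</code> rule could look like this (to be merged into your existing ruleset):
</p>
<pre><code>pass in on egress proto tcp to port { 25 80 465 587 }
</code></pre>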
<p>A VPN should be deployed to allow users to reach Dovecot services (IMAP, POP) and other services if any.
</p>
<p>The SSH port could be removed from the public ports as well; however, first make sure your hosting provider offers serial access / VNC / remote access to the system, because if the VPN stops working you will not be able to log into the system over SSH to debug it.
</p>
<h1 id="_Email_client_configuration">8. Email client configuration <a href="#_Email_client_configuration">§</a></h1>
<p>If everything was done correctly so far, you should have a complete email stack fully functional.
</p>
<p>Here are the connection information to use your service:
</p>
<ul>

  <li>IMAP/POP3/SMTP login: username on the remote system (the username does not include the <code>@</code> part)</li>
  <li>IMAP/POP3/SMTP password: password of the remote system user</li>
  <li>IMAP/POP3 server: dovecot server hostname</li>
  <li>IMAP/POP3 port: 993 for IMAPS and 995 for POP3S (TLS is enabled)</li>
  <li>SMTP server: opensmtpd server hostname</li>
  <li>SMTP port: either 465 in SSL/TLS mode (implicit TLS, encryption always on), or 587 in STARTTLS mode (whether encryption is enforced depends on the OpenSMTPD configuration)</li>
</ul>
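
<p>You can quickly check that each service answers with a valid certificate using <code>openssl s_client</code>, replacing <code>mail.puffy.cafe</code> with your own hostnames:
</p>
<pre><code>openssl s_client -connect mail.puffy.cafe:993 -quiet
openssl s_client -connect mail.puffy.cafe:465 -quiet
openssl s_client -connect mail.puffy.cafe:587 -starttls smtp -quiet
</code></pre>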

<p>The webmail, if any, will be available at the address configured in <code>httpd.conf</code>, using the same credentials as above.
</p>
<h1 id="_Verify_the_setup">9. Verify the setup <a href="#_Verify_the_setup">§</a></h1>
<p>There is an online service providing a random email address to send a test email to; you can then check on their website whether the SPF, DKIM, DMARC and PTR records are correctly configured.
</p>
<p><a href='https://www.mail-tester.com'>www.mail-tester.com</a></p>
<p>The score you want displayed on their website is no less than 10/10.  The service can report meaningless issues like "the email was poorly formatted" or "you did not include an unsubscribe link"; they are not relevant for the current test.
</p>
<p>While it used to be completely free, the service now asks you to pay after three checks if you do not want to wait 24 hours.  The limit is tracked per public IP address.
</p>
<h1 id="_Maintenance">10. Maintenance <a href="#_Maintenance">§</a></h1>
<h2 id="_Running_processes">10.1. Running processes <a href="#_Running_processes">§</a></h2>
<p>The following processes should always be running; using a program like monit, zabbix or reed-alert to notify you when they stop working could be a good idea.
</p>
<ul>

  <li>dovecot</li>
  <li>httpd</li>
  <li>redis</li>
  <li>rspamd</li>
  <li>smtpd</li>
</ul>
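
<p>For a quick manual check, <code>rcctl</code> can report the status of all of them at once, printing <code>(ok)</code> or <code>(failed)</code> for each daemon:
</p>
<pre><code>rcctl check dovecot httpd redis rspamd smtpd
</code></pre>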

<h2 id="_Certificates_renewal">10.2. Certificates renewal <a href="#_Certificates_renewal">§</a></h2>
<p>In addition, the TLS certificate should be renewed regularly as ACME generated certificates are valid for a few months.  Edit root crontab with <code>crontab -e</code> as root to add this line:
</p>
<pre><code>10 4 * * 0 -s acme-client mail.puffy.cafe &amp;&amp; rcctl restart dovecot httpd smtpd
</code></pre>
<p>This will try to renew the certificate for <code>mail.puffy.cafe</code> every Sunday at 04h10 and upon renewal restart the services using the certificate: dovecot, httpd and smtpd.
</p>
<h2 id="_All_about_logs">10.3. All about logs <a href="#_All_about_logs">§</a></h2>
<p>If you need to find some logs, here is a list of paths where to find information:
</p>
<ul>

  <li>dovecot: <code>/var/log/maillog</code></li>
  <li>httpd: <code>/var/log/daemon</code> for the daemon, access logs in <code>/var/www/logs/access.log</code> and error logs in <code>/var/www/logs/error.log</code></li>
  <li>redis: <code>/var/log/daemon</code></li>
  <li>rspamd: <code>/var/log/rspamd/rspamd.log</code> and its web UI on port 11334 (listening only on localhost by default, an SSH tunnel can be handy)</li>
  <li>smtpd: <code>/var/log/maillog</code></li>
  <li>roundcube: <code>/var/www/roundcubemail/logs/errors.log</code> and <code>/var/www/roundcubemail/logs/sendmail.log</code></li>
</ul>

<p>A log rotation of the new logs can be configured in <code>/etc/newsyslog.conf</code> with these lines (take only what you need):
</p>
<pre><code>/var/log/rspamd/rspamd.log		600  7     500  *     Z &quot;pkill -USR1 -u root -U root -x rspamd&quot;
/var/www/roundcubemail/logs/errors.log	600  7     500  *     Z
/var/www/roundcubemail/logs/sendmail.log 600 7     500  *     Z
</code></pre>
<h2 id="_Disk_space">10.4. Disk space <a href="#_Disk_space">§</a></h2>
<p>Finally, OpenSMTPD stops delivering emails locally when the <code>/var</code> partition has less than 4% of free disk space.  Be sure to monitor the disk space of this partition, otherwise you may stop receiving emails for a while before noticing something is wrong.
</p>
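<p>A minimal sketch of such a check, suitable for a daily cron job; the 90% threshold is an arbitrary safety margin, pick one that suits your setup:
</p>

```shell
#!/bin/sh
# Warn when /var is filling up, before OpenSMTPD stops local
# delivery (which happens below 4% of free space).
# The 90% threshold is an assumption, not an OpenSMTPD limit.
usage=$(df -P /var | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$usage" -gt 90 ]; then
    echo "warning: /var is ${usage}% full" >&2
fi
```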
<h1 id="_Conclusion">11. Conclusion <a href="#_Conclusion">§</a></h1>
<p>Congratulations, you configured a whole email stack that will allow you to send emails to the world, using your own domain and hardware.  Keeping your system up to date is important as you have network services exposed to the wild Internet.
</p>
<p>Even with a properly configured setup featuring SPF/DKIM/DMARC/PTR, your emails are not guaranteed to stay out of the spam directory of your recipients.  The IP reputation of your SMTP server also counts, and so does the domain name extension (I have a <code>.pw</code> domain and learned too late that it is almost always considered spam because it is not mainstream).
</p>

    ]]>
  </description>
  <guid>https://dataswamp.org/~solene/2024-07-24-openbsd-email-server-setup.html</guid>
  <link>https://dataswamp.org/~solene/2024-07-24-openbsd-email-server-setup.html</link>
  <pubDate>Thu, 25 Jul 2024 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>
