About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(BSD OpenBSD h+ Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on Freenode, solene+www at dataswamp dot org or solene@bsd.network (mastodon). If for some reason you want to give me some money, I accept paypal at the address donate@perso.pw.

Securely share a secret using Shamir's secret sharing

Written by Solène, on 21 March 2021.
Tags: #openbsd #security

Comments on Mastodon

Introduction

I will present you the program ssss (for Shamir's Secret Sharing Scheme), a cryptography program to split a secret into n parts, requiring at least t parts to be recovered (with t <= n).

Shamir Secret Sharing (method is mathematically proven to be secure)

Use case

The project website lists a few real-life use cases which I like, but I will share another one.

ssss project website

I used to run a community but there was no person in charge apart from me, which made me a single point of failure. I decided to make the encrypted backup of the database available to a few reasonably trustworthy community members, and I gave each of them a secret. There were four members, and I made the backup passphrase recoverable only if all four members agreed to combine their secrets. For privacy reasons, I didn't want any of these people to be able to lurk into the backup on their own; at least, if something had happened to me, they could recover the database, but only if the four of them agreed on it.

How to use

ssss-split is easy to use, but you can only share text with it. So you can use a very long passphrase to encrypt files and then split this passphrase into several secrets that you distribute.

You can install it on OpenBSD using pkg_add ssss.

In the following examples, I will create a simple passphrase and then use the generated secrets to get the original passphrase back.

$ ssss-split -t 3 -n 3
Generating shares using a (3,3) scheme with dynamic security level.
Enter the secret, at most 128 ASCII characters: [note: hidden input where I typed "this is a very very long password"] Using a 264 bit security level.
1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353

When you want to recover the secret, you will have to run ssss-combine and tell it how many shares you have; they can be provided in any order.

$ ssss-combine -t 3
Enter 3 shares separated by newlines:
Share [1/3]: 2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
Share [2/3]: 3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353
Share [3/3]: 1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
Resulting secret: this is a very very long password

Tips

If you want to easily store a secret or share it with a non-IT person (or keep it in a vault), you can create a QR code and print it. QR codes have error correction, so if the paper is damaged you may still recover the data; they are also quite large on paper, so a fading print is less likely to lose data, and decoding verifies the integrity of the content.
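As an illustration, and assuming the qrencode and zbar packages are available on your system (package names may differ), turning a share into a printable QR code and reading it back could look like this:

$ qrencode -o share1.png "1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65"
$ zbarimg --quiet --raw share1.png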

Conclusion

ssss is a wonderful program to share a secret among a few people or to put a few secrets here and there for a recovery situation. The program can receive the passphrase on its standard input, which allows it to be scripted.
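As a sketch of such scripting, and assuming the -q flag of ssss-split quiets the interactive prompts as described in its man page, splitting a passphrase non-interactively could look like this:

$ printf '%s' "this is a very very long password" | ssss-split -t 3 -n 3 -q > shares.txt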

Interesting fact: if you run ssss-split multiple times on the same text, you always get different shares, so given one share, no brute force can be used to find which input produced it.

What security does a default OpenBSD installation offer?

Written by Solène, on 14 February 2021.
Tags: #openbsd69 #openbsd #security

Comments on Mastodon

Introduction

In this text I will explain what makes OpenBSD secure by default when you install it. Do not take this for a security analysis, but more like a guide to help you understand what is done by OpenBSD to have a secure environment. The purpose of this text is not to compare OpenBSD to other OSes but to say what you can honestly expect from OpenBSD.

There is no security without a threat model. I always consider the following cases: a computer stolen at home by a thief, remote attacks trying to exploit running services, and the exploitation of the user's network clients.

Security matters

Here is a list of features that I consider important for operating system security. While not every item in the following list is strictly a security feature, they all help in having a strict system that prevents software from misbehaving and leading you into unknown lands.

In my opinion, security is not only about preventing remote attackers from penetrating the system, but also about preventing programs or users from making the system unusable.

Pledge / unveil on userland

Pledge and unveil are often mentioned together although they can be used independently. Pledge is a system call to restrict the permissions of a program at some point in its source code; permissions can't be regained once pledge has been called. Unveil is a system call that hides the whole file system from the process except the paths that are unveiled; it is possible to choose which permissions are allowed for each path.

Both are very effective and powerful surgical security tools, but they require some modification of the software's source code, and adding them requires a deep understanding of what the software is doing. It is not always possible to forbid some system calls to a software that needs to do almost anything; software designed with privilege separation is a better candidate for a proper pledge addition because each part has its own job.

Some software in packages has received pledge and/or unveil support, Chromium and Firefox being the most well known.

OpenBSD presentation about Unveil (BSDCan2019)

OpenBSD presentation of Pledge and Unveil (BSDCan2018)

Privilege separation

Most of the base system services used within OpenBSD run using a privilege separation pattern. Each part of a daemon is restricted to the minimum required. A monolithic daemon would have to read/write files, accept network connections and send messages to the log, which offers a huge attack surface in case of a security breach. By separating a daemon into multiple parts, a more fine-grained control of each worker is possible, and by using the pledge and unveil system calls, it's possible to set limits and highly reduce damage in case a worker is hacked.

Clock synchronization

The ntpd daemon is started by default to keep the clock synchronized with time servers. A reference TLS server is used to challenge the time servers. Keeping a computer's clock synchronized is very important. This is not strictly a security feature, but you can't be serious about using a networked computer without its time synchronized.

X display not as root

If you use X, it drops privileges to the _x11 user; it runs as an unprivileged user instead of root, so in case of a security issue this prevents an attacker from gaining more access than they should through an X11 bug.

Resource limits

Default resource limits prevent a program from using too much memory, too many open files or too many processes. While this can prevent some huge programs from running with the default settings, it also helps finding file descriptor leaks, and prevents a fork bomb or a simple daemon from eating all the memory and crashing the system.

Genuine full disk encryption

When you install OpenBSD using the full disk encryption setup, everything is locked behind the passphrase at the bootloader step; you can't access the kernel or anything of the system without the passphrase.

W^X

Most programs on OpenBSD aren't allowed to map memory with the Write AND Execute bits at the same time (W^X means Write XOR Exec); this prevents an interpreter from having its memory modified and executed. Some packages aren't compliant with this and must be linked with a specific library to bypass this restriction AND must be run from a partition with the "wxallowed" mount option.

OpenBSD presentation « Kernel W^X Improvements In OpenBSD »
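As a quick check (not from the presentation above), you can list which partitions carry this mount option; on a default install /usr/local should be the one:

$ mount | grep wxallowed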

Only one reliable randomness source

When your system requires a random number (and it does very often), OpenBSD provides only one API to get random numbers; they are really random and can't be exhausted. A good random number generator (RNG) is important for many cryptography requirements.

OpenBSD presentation about arc4random

Accurate documentation

OpenBSD comes with full documentation in its man pages. One should be able to fully configure their system using only the man pages. Man pages sometimes come with CAVEATS or BUGS sections, and it's important to pay attention to them. It is better to read the documentation and understand what has to be done in order to configure a system than to follow an outdated and anonymous text found on the Internet.

OpenBSD man pages online

EuroBSDcon 2018 about « Better documentation »

IPSec and Wireguard out of the box

If you need to set up a VPN, you can use the IPsec or WireGuard protocols using only the base system; no package is required.

Memory safety

OpenBSD has many safeties regarding memory allocation and will very aggressively prevent use-after-free or other unsafe memory usage. This is often a source of crashes for some software from packages because OpenBSD is very strict when you want to use memory. This helps finding memory misuse and will kill misbehaving software.

Dedicated root account

When you install the system, a root account is created and its password is asked; then you create a user that will be a member of the "wheel" group, allowing it to switch to root with root's password. doas (the OpenBSD base system equivalent of sudo) isn't configured by default. With the default installation, the root password is required to do any root action. I think a dedicated root account that can be logged into without the use of doas/sudo is better than a misconfigured doas/sudo allowing everything if you only know the user password.
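If you decide to configure doas later anyway, a minimal /etc/doas.conf granting wheel members root access with their own password could look like the following sketch (see doas.conf(5) for details):

permit persist :wheel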

Small network attack surface

The only services that may be enabled at installation time and listen on the network are OpenSSH (asked at install time, default = yes), dhclient (if you choose DHCP) and slaacd (if you use IPv6 autoconfiguration).
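To verify what actually listens on the network after installation, a command such as the following should do (flags shown for the OpenBSD netstat):

$ netstat -an -f inet | grep LISTEN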

Encrypted swap

By default the OpenBSD swap is encrypted, meaning that if program memory is sent to the swap, nobody can recover it later.

SMT disabled

Due to the high number of security breaches related to SMT (like Hyper-Threading), the default installation disables the logical cores to prevent any data leak.

Meltdown: one of the first security issues related to speculative execution in the CPU
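If you measured that your workload really benefits from SMT and you accept the risk, the logical cores can be re-enabled at runtime with the hw.smt sysctl (add the line to /etc/sysctl.conf to make it permanent):

# sysctl hw.smt=1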

Microphone and webcam disabled

With the default installation, the microphone and the webcam won't actually record anything; they return blank sound/video until you set a sysctl to allow recording.
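The sysctls in question are kern.audio.record and kern.video.record; setting them to 1 allows actual recording (add the lines to /etc/sysctl.conf to keep them across reboots):

# sysctl kern.audio.record=1
# sysctl kern.video.record=1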

Maintainability, release often, update often

The OpenBSD team publishes a new release every six months and only the last two releases receive security updates. This allows upgrading often and without pain; the upgrade process consists of small steps twice a year that help keep the whole system up to date. This avoids the fear of a huge upgrade that never gets done, and I consider it a huge security bonus. Most OpenBSD systems around are running the latest versions.

Signify chain of trust

The installer, file sets and packages are signed using signify public/private keys. OpenBSD installations come with the keys for the current release and the next one to check the authenticity of packages. A key is used for only six months, and new keys are shipped with each new release, building a chain of trust. Signify keys are very small and are published on many media so they can be double-checked when you need to bootstrap this chain of trust.

Signify at BSDCan 2015
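As an example of this chain of trust in use, verifying a downloaded file against its signed checksum list could look like this (the key and file names are examples for a 6.9 release):

$ signify -C -p /etc/signify/openbsd-69-base.pub -x SHA256.sig install69.img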

Packages

While most of the previous items were about the base system or the kernel, the packages also have a few tricks to offer.

Chroot by default when available

Most packaged daemons offering a chroot feature have it enabled by default. In some cases, such as the Nginx web server, the software is patched by the OpenBSD team to enable chroot, which is not an official feature.

Dedicated users for services

Most packages that provide a server also create a new dedicated user for this exact service, allowing more privilege separation in case of security issue in one service.

Installing a service doesn't enable it

When you install a service, it doesn't get enabled by default. You will have to configure the system to enable it at boot. There is a single /etc/rc.conf.local file that can be used to see what is enabled at boot, and it can be managed using the rcctl command. Forcing the user to enable services makes the system administrator fully aware of what is running on the system, which is a good point for security.

rcctl man page
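For example, after installing a daemon from packages, you enable and start it explicitly; nginx is used here as an arbitrary example, and "rcctl ls on" lists every service enabled at boot:

# rcctl enable nginx
# rcctl start nginx
# rcctl ls on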

Conclusion

Most of the previous "security features" should be considered good practices and not features. Many good practices such as the following could easily be implemented in most systems: limiting user resources, reducing daemon privileges, memory usage strictness, providing good documentation, starting only the least required services and providing the user with a clean default installation.

There are also many other features that have been added which I don't fully understand, and that I prefer to let the reader discover.

« Mitigations and other real security features » by Theo De Raadt

OpenBSD innovations

OpenBSD events, often including slides or videos

Firejail on Linux to sandbox all the things

Written by Solène, on 14 February 2021.
Tags: #linux #security #sandbox

Comments on Mastodon

Introduction

Firejail is a program that can prepare sandboxes to run other programs. This is an efficient way to keep a software isolated from the rest of the system without needing to change its source code; it works for network, graphical or daemon programs.

You may want to sandbox the programs you run in order to protect your system from any issue that could happen within them (security breach, code mistake, unknown errors). For example, Steam once had a "rm -rf /" issue; using a sandbox would have saved at least a part of the user directory. Web browsers are major tools nowadays, yet they have access to the whole system and have many security issues discovered and exploited in the wild; running one in a sandbox can reduce the data an attacker could exfiltrate from the computer. Of course, sandboxing comes with a usability tradeoff: if you only allow access to the ~/Downloads/ directory, you need to put files in this directory if you want to upload them, and you can only download files into this directory and then move them later to where you really want to keep them.

Installation

On most Linux systems you will find a Firejail package that you can install. If your distribution doesn't provide a Firejail package, installing from sources seems quite easy, and as the project is written in C with few dependencies, the build process should be straightforward.

There is no service to enable and no kernel parameters to add. AppArmor or SELinux kernel features can be integrated into Firejail profiles if you want to.

Usage

Start a program

The simplest usage is to run a command by prefixing it with firejail.

$ firejail firefox

Use a symlink

Firejail has a neat feature that allows starting software by its name without calling Firejail explicitly: if you create a symbolic link in your $PATH named after a program but targeting Firejail, when you call that name Firejail will automatically know what you want to start. The following example will run firefox when you call the symbolic link.

export PATH=~/bin/:$PATH
$ ln -s /usr/bin/firejail ~/bin/firefox
$ firefox

Listing sandboxes

There is a firejail --list command that will tell you about all the running sandboxes and their parameters. The first column is an identifier used by other Firejail features.

$ firejail --list
6108:solene::/usr/bin/firejail /usr/bin/firefox 

Limit bandwidth per program

Firejail also has a neat feature that allows limiting the bandwidth available to a single sandbox environment. Reusing the previous list output, I will reduce firefox's bandwidth; the numbers are in kB/s.

$ firejail --bandwidth=6108 set wlan0 1000 40

You can find more information about this feature in the "TRAFFIC SHAPING" section of the Firejail man page.

Restrict network access

If for some reason you want to start a program with absolutely no network access, you can run a program and deny it any network.

$ firejail --net=none libreoffice
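To illustrate the file access tradeoff mentioned in the introduction, you can also restrict which parts of your home directory a program sees. This is a hedged sketch using the whitelist option; check the options supported by your Firejail version:

$ firejail --whitelist=~/Downloads firefox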

Conclusion

Firejail is a neat way to start software in sandboxes without requiring any particular setup. It may be more limited and maybe less reliable than OpenBSD programs that received unveil() support, but it's a nice tradeoff between safety and the work required in the source code (literally none). It is a very interesting project that proves to work easily on any Linux system, with simple C source code and few dependencies. I am not really familiar with the Linux kernel and its features, but Firejail seems to use seccomp-bpf and namespaces; I guess they are complicated to use but powerful, and Firejail comes here as a wrapper to automate all of this.

Firejail has proven to be USABLE and RELIABLE for me, while my attempts at sandboxing Firefox with AppArmor were tedious and not optimal. I really recommend it.

More resources

Official project website with releases and security information

Firejail sources and documentation

Community profiles 1

Community profiles 2

Filtering TCP connections by operating system on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Mastodon

Introduction

In this text I will explain how to filter TCP connections by operating system using OpenBSD Packet filter.

OpenBSD pf.conf man page about OS Fingerprinting

Explanations

Every operating system has its own way of constructing some SYN packets; this is called fingerprinting because it permits identifying which OS sent which packet. To be clear, this is not a perfect filter and it can easily be bypassed if you want to.

Because some packets are required to identify the operating system, only TCP connections can be filtered by OS. The OS list and SYN values can be found in the file /etc/pf.os.

How to setup

The "os $value" keyword must be used within the "from $address" part of a rule. I use it to restrict ssh connections to my server to OpenBSD systems only (in addition to key authentication).

# only allow OpenBSD hosts to connect
pass in on egress inet proto tcp from any os OpenBSD to (egress) port 22

# allow connections from $home IP whatever the OS is
pass in on egress inet proto tcp from $home to (egress) port 22

This can be a very good way to stop unwanted traffic from spamming your logs, but it should be used with caution because you may incidentally block legitimate traffic.

Enable multi-factor authentication on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Mastodon

Introduction

In this article I will explain how to add a bit more security to your OpenBSD system by adding a requirement for users logging into the system, locally or by ssh. I will explain how to set up two-factor authentication (2FA) using TOTP on OpenBSD.

What is TOTP (Time-based One time Password)

When do you want or need this? It adds a burden in terms of usability: in addition to your password, you will require a device pre-configured to generate the one-time passwords, and if you don't have it you won't be able to log in (that's the whole point). Let's say you activated 2FA for ssh connections on an important server; if your private ssh key gets stolen (and it has no passphrase, ouch!), the attacker will still not be able to connect to the SSH server without access to your TOTP generator.

TOTP software

Here is a quick list of TOTP software

- command line: oathtool from package oath-toolkit

- GUI and multiplatform: KeepassXC

- Android: FreeOTP+, andOTP, OneTimePass, etc. (found on F-Droid)

Setup

A package is required in order to provide the various programs needed. The package comes with a README file available at /usr/local/share/doc/pkg-readmes/login_oath with many explanations about how to use it. I will take a lot of information from there for the local login setup.

# pkg_add login_oath

You will have to add a new login class, depending on the kind of authentication you want. You can either allow password OR TOTP, or require password AND TOTP (in the form of TOTP_CODE/password typed as the password). From the README file, add what you want to use to /etc/login.conf:

# totp OR password
totp:\
        :auth=-totp,passwd:\
        :tc=default:

# totp AND password
totppw:\
        :auth=-totp-and-pwd:\
        :tc=default:

If you have an /etc/login.conf.db file, you have to run cap_mkdb on /etc/login.conf to update it. Most people don't need this; it only helps a bit with performance when you have many, many rules in /etc/login.conf.

Local login

Local login means logging in on a TTY or in your X session or anything requiring your system password. You can then switch the users you want to use TOTP to the corresponding login class with this command.

# usermod -L totp some_user

In the user's home directory, you have to generate a key and give it the correct permissions.

$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 .totp-key

The .totp-key file contains the secret that will be used by the TOTP generator, but most generators will only accept it encoded as base32. You can use the following python3 command to convert the secret into base32.

python3 -c "import base64; print(base64.b32encode(bytes.fromhex('YOUR SECRET HERE')).decode('utf-8'))"
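To check that everything is consistent, you can generate the current code on the command line with oathtool (from the oath-toolkit package mentioned earlier) by passing the base32 value obtained above; the secret below is a placeholder:

$ oathtool --totp -b "BASE32_SECRET_HERE"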

SSH login

It is possible to require your users to use TOTP or a public key + TOTP. When ssh refers to "password", it is the same password as for login: the plain password for regular users, the TOTP code for users in the totp class, and TOTP/password for users in the totppw class.

This allows fine-grained tuning of the login options. The password requirement in SSH can be enabled per user or globally by modifying the file /etc/ssh/sshd_config.

sshd_config man page about AuthenticationMethods

# enable for everyone
AuthenticationMethods publickey,password

# for one user
Match User solene
	AuthenticationMethods publickey,password

Let's say you enabled the totppw class for your user and use "publickey,password" in AuthenticationMethods in ssh. You will then need your ssh private key AND your password AND your TOTP generator.

Even without TOTP, this SSH setting can be used to require users to present their key and their system password in order to log in. TOTP only adds more strength to the connection requirements, but also more complexity for people who may not be comfortable with such security levels.

Conclusion

In this text we have seen how to enable 2FA for local login and for login over ssh. Be careful not to lock yourself out of your system by losing the 2FA generator.

Vger security analysis

Written by Solène, on 14 January 2021.
Tags: #vger #gemini #security

Comments on Mastodon

I would like to share how Vger internals were designed with security in mind, to protect vger users and host systems.

Vger code repository

Thinking about security first

I claim security is Vger's main feature; I even wrote Vger to have a secure gemini server that I can trust. Why so? It's written in C and I'm a beginner developer in this language, which looks like a scam.

I chose to follow the best practices I'm aware of from the very first line. My goal is to be sure Vger can't be used to exfiltrate data from the host on which it runs, or to run arbitrary commands. While I may have missed corner cases in which it could crash, I think a crash is the worst that can happen with Vger.

Smallest code possible

Vger doesn't have to manage connections or TLS; a lot of code was removed by this design choice alone. There are better tools made exactly for this purpose, so it's time to reuse other people's good work.

Inetd and user

Vger is run by the inetd daemon, which allows choosing the user running vger. Using a dedicated user is always a good idea to limit harm in case of an issue, but on its own it's really not sufficient to keep vger from behaving badly.

Another kind of security benefit is that vger doesn't loop like a daemon awaiting new connections. Vger accepts a request, reads a file if it exists, gives its result and terminates. This is less error prone because no variable can be reused or tricked after a loop that could leave the code in an inconsistent or vulnerable state.

Chroot

A critical vger feature is the ability to chroot into a directory, meaning the directory is then seen as the root of the file system (/var/gemini would be seen as /), which prevents vger from escaping it. In addition to the chroot itself, this feature allows vger to drop to an unprivileged user.

     /* 
      * use chroot() if an user is specified requires root user to be 
      * running the program to run chroot() and then drop privileges 
      */
     if (strlen(user) > 0) {

             /* is root? */
             if (getuid() != 0) {
                     syslog(LOG_DAEMON, "chroot requires program to be run as root");
                     errx(1, "chroot requires root user");
             }
             /* search user uid from name */
             if ((pw = getpwnam(user)) == NULL) {
                     syslog(LOG_DAEMON, "the user %s can't be found on the system", user);
                     err(1, "finding user");
             }
             /* chroot worked? */
             if (chroot(path) != 0) {
                     syslog(LOG_DAEMON, "the chroot_dir %s can't be used for chroot", path);
                     err(1, "chroot");
             }
             chrooted = 1;
             if (chdir("/") == -1) {
                     syslog(LOG_DAEMON, "failed to chdir(\"/\")");
                     err(1, "chdir");
             }
             /* drop privileges */
             if (setgroups(1, &pw->pw_gid) ||
                 setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) ||
                 setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid)) {
                     syslog(LOG_DAEMON, "dropping privileges to user %s (uid=%i) failed",
                            user, pw->pw_uid);
                     err(1, "Can't drop privileges");
             }
     }

No use of third party libs

Vger only requires standard C includes; this avoids putting trust in dozens of developers using fragile or barely tested code.

OpenBSD specific code

In addition to all the previous security practices, OpenBSD offers a few functions that help restrict a lot of what Vger can do.

The first function is pledge, which allows restricting the system calls that can happen within the code itself. The syscalls currently allowed in vger are related to the categories "rpath" and "stdio", basically standard input/output and reading files/directories only. This means that after pledge() is called, if any syscall outside those two categories is used, vger will be killed and a pledge error will be reported in the logs.

The second function is unveil, which basically restricts access to the filesystem to nothing but the paths you list, with the permissions you give. Currently, vger only allows read-only file access in the base directory used to serve files.

Here is an extract of the OpenBSD specific code. If unveil were available everywhere, chroot wouldn't be required.

 #ifdef __OpenBSD__
         /* 
          * prevent access to files other than the one in path 
          */
         if (chrooted) {
                 eunveil("/", "r");
         } else {
                 eunveil(path, "r");
         }
         /* 
          * prevent system calls other than parsing the query, reading 
          * the file and writing to stdio 
          */
         if (pledge("stdio rpath", NULL) == -1) {
                 syslog(LOG_DAEMON, "pledge call failed");
                 err(1, "pledge");
         }
 #endif

The least code before dropping privileges

I did my best to use the least code possible before reducing Vger's capabilities. Only the code managing the parameters runs before chroot and/or unveil/pledge are activated.

int
main(int argc, char **argv)
{
     char            request  [GEMINI_REQUEST_MAX] = {'\0'};
     char            hostname [GEMINI_REQUEST_MAX] = {'\0'};
     char            uri      [PATH_MAX]           = {'\0'};
     char            user     [_SC_LOGIN_NAME_MAX] = "";
     int             virtualhost = 0;
     int             option = 0;
     char           *pos = NULL;

     while ((option = getopt(argc, argv, ":d:l:m:u:vi")) != -1) {
             switch (option) {
             case 'd':
                     estrlcpy(chroot_dir, optarg, sizeof(chroot_dir));
                     break;
             case 'l':
                     estrlcpy(lang, "lang=", sizeof(lang));
                     estrlcat(lang, optarg, sizeof(lang));
                     break;
             case 'm':
                     estrlcpy(default_mime, optarg, sizeof(default_mime));
                     break;
             case 'u':
                     estrlcpy(user, optarg, sizeof(user));
                     break;
             case 'v':
                     virtualhost = 1;
                     break;
             case 'i':
                     doautoidx = 1;
                     break;
             }
     }

     /* 
      * do chroot if an user is supplied run pledge/unveil if OpenBSD 
      */
     drop_privileges(user, chroot_dir); 

The Unix way

Unix is made of small components that can work together like bricks to build something more complex. Vger is based on this idea by delegating the listening daemon handling incoming requests to another software (say relayd or haproxy). Then, what's left from the gemini specs once you delegate TLS is to take a request and return some content, which is well suited for a program accepting a request on its standard input and giving the result on standard output. Inetd is the key here to make such a program compatible with a daemon like relayd or haproxy. When a connection is made to the TLS listening daemon, a local port triggers inetd, which runs the command and passes the network content to the binary on its stdin.

Fine grained CGI

CGI support was added in order to allow Vger to produce dynamic content instead of serving only static files. It offers fine-grained control: you can allow a single file to be executable as a CGI, or a whole directory of files. When serving a CGI, vger forks, a pipe is opened between the two processes, and the child process uses execlp to run the CGI and transmit its output to vger.

Using tests

From the beginning, I wrote a set of tests to be sure that once a kind of request or a use case works, I can easily check that I won't break it. This isn't about security but about reliability. When I push a new version to the git repository, I am absolutely confident it will work for the users. It was also an invaluable help for writing Vger.

As vger is a simple binary that accepts data on stdin and outputs data on stdout, it is simple to write tests like this. The following example will run vger with a request; as the content is local and within the git repository, the output is predictable and known.

printf "gemini://host.name/autoidx/\r\n" | vger -d var/gemini/

From here, it's possible to build an automatic test by comparing the checksum of the output to the checksum of the known correct output. Of course, when you add a new use case, this requires manually generating the checksum to use as a comparison later.

OUT=$(printf "gemini://host.name/autoidx/\r\n" | ../vger -d var/gemini/ -i | md5)
if ! [ $OUT = "770a987b8f5cf7169e6bc3c6563e1570" ]
then
	echo "error"
	exit 1
fi

At this time, vger has 19 use cases in its test suite.

By using the program `entr` and a Makefile to manage the build process, it was very easy to trigger the testing process while working on the source code, allowing me to run the test suite just by saving my current changes. Anytime a .c file is modified, entr triggers a make test command that is displayed in a dedicated terminal.

ls *.c | entr make test

Realtime integration tests? :)

Conclusion

By using best practices, reducing the amount of code and using only system libraries, I am quite confident about Vger's security. The only real issue could be too many connections leading to a high load due to inetd spawning new processes, resulting in a denial of service. This could be avoided by throttling simultaneous connections in the TLS daemon.

If you want to contribute, please do, and if you find a security issue please contact me, I'll be glad to examine the issue.

GPG2 cheatsheet

Written by Solène, on 06 September 2019.
Tags: #security

Comments on Mastodon

Introduction

I don't use gpg a lot, but it seems to be the only tool out there for encrypting data that "works" and is widely used.

So this is my personal cheatsheet for everyday use of gpg.

In this post, I use the command gpg2, which is the binary of GPG version 2. On your system, the "gpg" command could be gpg2 or gpg1. You can use gpg --version if you want to check the real version behind the gpg binary.

In your ~/.profile file you may need the following line:

export GPG_TTY=$(tty)

Install GPG

The real name of GPG is GnuPG, so depending on your system the package can be either gpg2, gpg, gnupg, gnupg2, etc.

On OpenBSD, you can install it with: pkg_add gnupg--%gnupg2

GPG Principle using private/public keys

  • YOU make a private and a public key (associated with a mail)
  • YOU give the public key to people
  • PEOPLE import your public key into their keyring
  • PEOPLE use your public key from the keyring
  • YOU will need your passphrase every time

I think gpg can do much more, but read the manual for that :)

Initialization

We need to create a public and a private key.

solene$ gpg2 --gen-key
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg2 --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

In this part, you should put your real name and your email address and validate with "O" if you are okay with the input. You will be asked for a passphrase after.

Real name: Solene
Email address: solene@domain.example
You selected this USER-ID:
    "Solene <solene@domain.example>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 368E580748D5CA75 marked as ultimately trusted
gpg: revocation certificate stored as '/home/solene/.gnupg/openpgp-revocs.d/7914C6A7439EADA52643933B368E580748D5CA75.rev'
public and secret key created and signed.

pub   rsa2048 2019-09-06 [SC] [expires: 2021-09-05]
      7914C6A7439EADA52643933B368E580748D5CA75
uid                    Solene <solene@domain.example>
sub   rsa2048 2019-09-06 [E] [expires: 2021-09-05]

The key will expire after 2 years, but this is okay. This is a good thing: if you stop using the key, it will die silently at its expiration time. If you still use it, you will be able to extend the expiration date, and people will be able to notice you still use that key.

Export the public key

If someone asks your GPG key, this is what they want:

gpg2 --armor --export solene@domain.example > solene.asc

Import a public key

Import the public key:

gpg2 --import solene.asc

Delete a public key

In case someone changes their public key, you will want to delete the old one before importing the new one; replace $FINGERPRINT by the actual fingerprint of the public key.

gpg2 --delete-keys $FINGERPRINT

Encrypt a file for someone

If you want to send the file picture.jpg to remote@domain.example, then use the command:

gpg2 --encrypt --recipient remote@domain.example picture.jpg > picture.jpg.gpg

You can now send picture.jpg.gpg to remote@domain.example, who will be able to read the file with their private key.

You can use the `--armor` parameter to make the output plain text, so you can put it into a mail or a text file.
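For example, the armored equivalent of the previous command would be:

gpg2 --encrypt --armor --recipient remote@domain.example picture.jpg > picture.jpg.asc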

Decrypt a file

Easy!

gpg2 --decrypt image.jpg.gpg > image.jpg

Get public key fingerprint

The fingerprint is a short string made out of your public key and can be embedded in a mail (often as a signature) or anywhere.

It allows comparing a public key you received from someone with the fingerprint that you may find in mailing list archives, twitter, an html page, etc., if the person spread it somewhere. This allows multiple checks of the authenticity of the public key you received.

it looks like:

4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909

This is my real key fingerprint, so if I send you my public key, you can use the fingerprint from this page to check it matches the key you received!

You can obtain your fingerprint using the following command:

solene@t480 ~ $ gpg2 --fingerprint
pub   rsa4096 2018-06-08 [SC]
      4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909
uid          [  ultime ] XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sub   rsa4096 2018-06-08 [E]

Add a new mail / identity

If for some reason, you need to add another mail to your GPG key (like personal/work keys) you can create a new identity with the new mail.

Type gpg2 --edit-key solene@domain.example and then in the prompt, type adduid and answer questions.

You can now export the public key with a different identity.

List known keys

If you want to get the list of keys you imported, you can use

gpg2 -k

Testing

If you want to do some tests, I'd recommend creating new users on your system, exchanging their keys and trying to encrypt a message from one user to another.

I have a few spare users on my system on which I can ssh locally for various tests, it is always useful.

Safely restrict commands through SSH

Written by Solène, on 08 November 2018.
Tags: #ssh #security #openbsd68 #highlight

Comments on Mastodon

sshd(8) has a very nice feature that is often overlooked. That feature is the ability to allow a ssh user to run a specified command and nothing else, not even a login shell.

This is really easy to use and the magic happens in the file authorized_keys which can be used to restrict commands per public key.

For example, if you want to allow someone to run the "uptime" command on your server, you can create a user account for that person with no password, so password login is disabled, and add their ssh public key in ~/.ssh/authorized_keys of that new user, with the following content.

restrict,command="/usr/bin/uptime" ssh-rsa the_key_content_here

The user will not be able to log-in, and doing the command ssh remoteserver will return the output of uptime. There is no way to escape this.

While running uptime is not really helpful, this can be used for a much more interesting use case, like allowing remote users to use vmctl without giving them a shell account. The vmctl command requires parameters, so the configuration will be slightly different.

restrict,pty,command="/usr/sbin/vmctl $SSH_ORIGINAL_COMMAND" ssh-rsa the_key_content_here

The variable SSH_ORIGINAL_COMMAND contains the value of what is passed as parameters to ssh. The pty keyword also makes an appearance; it will be explained later.

If the user connects to ssh without parameters, the vmctl usage will be displayed.

$ ssh remotehost
usage:  vmctl [-v] command [arg ...]
    vmctl console id
    vmctl create "path" [-b base] [-i disk] [-s size]
    vmctl load "path"
    vmctl log [verbose|brief]
    vmctl reload
    vmctl reset [all|vms|switches]
    vmctl show [id]
    vmctl start "name" [-Lc] [-b image] [-r image] [-m size]
            [-n switch] [-i count] [-d disk]* [-t name]
    vmctl status [id]
    vmctl stop [id|-a] [-fw]
    vmctl pause id
    vmctl unpause id
    vmctl send id
    vmctl receive id

If you pass parameters to ssh, it will be passed to vmctl.

$ ssh remotehost show
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
1     -     1    1.0G       -       -       solene test
$ ssh remotehost start test
vmctl: started vm 1 successfully, tty /dev/ttyp9
$ ssh -t remotehost console test
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?

The ssh connections become a call to vmctl and ssh parameters become vmctl parameters.

Note that in the last example I use "ssh -t"; this forces the allocation of a pseudo tty device, which is required for vmctl console to get a fully working console. The restrict keyword does not allow pty allocation, which is why we have to add pty after restrict, to allow it.

Tor part 2: hidden service

Written by Solène, on 11 October 2018.
Tags: #openbsd68 #openbsd #unix #tor #security

Comments on Mastodon

In this second Tor article, I will present an interesting Tor feature named hidden services. The principle of a hidden service is to make a network service reachable from anywhere, the only prerequisites being that the computer is powered on, tor is not blocked and it has network access.

This service will be available through an address that discloses nothing about the server's internet provider or its IP; instead, a hostname ending in .onion will be provided by tor for connecting. This hidden service will only be accessible through Tor.

There are a few advantages of using hidden services:

  • privacy, hostname doesn’t contain any hint
  • security, secure access to a remote service not using SSL/TLS
  • no need for running some kind of dynamic dns updater

The drawbacks are that it's quite slow and it only works for TCP services.

From here, we assume that Tor is installed and working.

Running a hidden service requires modifying the Tor daemon configuration file, located at /etc/tor/torrc on OpenBSD.

Add the following lines in the configuration file to enable a hidden service for SSH:

HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22

The directory /var/tor/ssh_service will be created. The directory /var/tor is owned by user _tor and not readable by other users. The hidden service directory can be named as you want, but it should be owned by user _tor with restricted permissions. The tor daemon will take care of creating the directory with the correct permissions once you reload it.

Now you can reload the tor daemon to make the hidden service available.

$ doas rcctl reload tor

In the /var/tor/ssh_service directory, two files are created. What we want is the content of the file hostname which contains the hostname to reach our hidden service.

$ doas cat /var/tor/ssh_service/hostname
piosdnzecmbijclc.onion

Now, we can use the following command to connect to the hidden service from anywhere.

$ torsocks ssh piosdnzecmbijclc.onion

In Tor network, this feature doesn’t use an exit node. Hidden services can be used for various services like http, imap, ssh, gopher etc…

Using hidden service isn’t illegal nor it makes the computer to relay tor network, as previously, just check if you can use Tor on your network.

Note: it is possible to have a version 3 .onion address, which will prevent hostname collisions, but this produces very long hostnames. This can be done like in the following example:

HiddenServiceDir /var/tor/ssh_service
HiddenServicePort 22 127.0.0.1:22
HiddenServiceVersion 3

This will produce a really long hostname like tgoyfyp023zikceql5njds65ryzvwei5xvzyeubu2i6am5r5uzxfscad.onion

If you want to have both the short and the long hostnames, you need to specify the hidden service twice, with different folders.

Take care: if you run an ssh service on your public server and use this same ssh daemon for the hidden service, the host keys will be the same, implying that someone could theoretically associate both and know that this public IP runs this hidden service, breaking anonymity.

Tor part 1: how-to use Tor

Written by Solène, on 10 October 2018.
Tags: #openbsd68 #openbsd #unix #tor #security

Comments on Mastodon

Tor is a network service allowing you to hide your traffic. People sniffing your network will not be able to know what server you reach, and people on the remote side (like the administrator of a web service) will not know where you are from. Tor helps keep your anonymity and privacy.

To make it quick, tor makes use of an entry point that you reach directly, then servers acting as relays that are unable to decrypt the data they relay, up to an exit node which makes the real request for you; the network response takes the opposite way.

You can find more details on the Tor project homepage.

Installing tor is really easy on OpenBSD. We need to install it and start its daemon. The daemon will listen by default on localhost on port 9050. On other systems it may be quite similar: install the tor package and enable the daemon if it is not enabled by default.

# pkg_add tor
# rcctl enable tor
# rcctl start tor

Now, you can take your favorite program, look at the proxy settings, choose "SOCKS" proxy, v5 if possible (it manages the DNS queries), and use the default address: 127.0.0.1 with port 9050.
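If you want to quickly check that the proxy works, and assuming curl is installed, you can request the Tor check page through the SOCKS proxy; the page contains the word "Congratulations" when the request really went through Tor:

$ curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/ | grep -i congratulations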

If you need to use tor with a program that doesn't support setting a SOCKS proxy, it's still possible to wrap it with torsocks, which works with most programs. It is very easy to use.

# pkg_add torsocks
$ torsocks ssh remoteserver

This will make ssh go through the tor network.

Using tor won't make you relay anything and is legal in most countries. Tor is like a VPN; some countries have laws about VPNs, so check your country's laws if you plan to use tor. Also, note that using tor may be forbidden on some networks (companies, schools, etc.) because it allows escaping filtering, which may be against some kind of "usage agreement" of the network.

I will cover later the relaying part, which can lead to legal uncertainty.

Note: as torsocks is a bit of a hack, because it uses LD_PRELOAD to wrap network system calls, there is a cleaner way to do it with ssh (or any program supporting a custom command to initialize the connection) using netcat.

ssh -o ProxyCommand='/usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p' address.onion

This can be simplified by adding the following lines to your ~/.ssh/config file, in order to automatically use the proxy command when you connect to a .onion hostname:

Host *.onion
ProxyCommand /usr/bin/nc -X 5 -x 127.0.0.1:9050 %h %p

This netcat command is tested under OpenBSD; there are different netcat implementations, so the flags may differ or may not even exist.

How to check your data integrity?

Written by Solène, on 17 March 2017.
Tags: #unix #security

Comments on Mastodon

Today, the topic is data degradation, bit rot, bitrotting, damaged files or whatever you call it. It's when your data get corrupted over time, due to a disk fault or some unknown reason.

What is data degradation ?

I shamelessly paste one line from wikipedia: “Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. The phenomenon is also known as data decay or data rot.”.

Data degradation on Wikipedia

So, how do we know we encounter a bit rot ?

bit rot = (checksum changed) && NOT (modification time changed)

While updating a file could be mistaken as bit rot, there is a difference

update = (checksum changed) && (modification time changed)

How to check if we encounter bitrot ?

There is no way you can prevent bitrot. But there are some ways to detect it, so you can restore a corrupted file from a backup, or repair it with the right tool (you can’t repair a file with a hammer, except if it’s some kind of HammerFS ! :D )

In the following I will describe software I found to check (or even repair) bitrot. If you know other tools which are not in this list, I would be happy to hear about them, please mail me.

In the following examples, I will use this method to generate bitrot on a file:

% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% generate_checksum_database_with_tool
% echo "a" >> my_data/some_file_that_will_be_corrupted
% touch -d "2017-03-16T21:04:00" my_data/some_file_that_will_be_corrupted
% start_tool_for_checking

We generate the checksum database, then we alter a file by adding an “a” at the end of the file and we restore the modification and access time of the file. Then, we start the tool to check for data corruption.

The first touch is only for convenience, we could get the modification time with stat command and pass the same value to touch after modification of the file.

bitrot

This is a python script and it's very easy to use. It will scan a directory and create a database with the checksum of the files and their modification date.

Initialization usage:

% cd /home/my_data/
% bitrot
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 189 new, 0 updated, 0 renamed, 0 missing.
Updating bitrot.sha512... done.
% echo $?
0

Verify usage (case OK):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
Finished. 199.41 MiB of data read. 0 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
% echo $?
0

Exit status is 0, so our data are not damaged.

Verify usage (case Error):

% cd /home/my_data/
% bitrot
Checking bitrot.db integrity... ok.
error: SHA1 mismatch for ./sometextfile.txt: expected 17b4d7bf382057dc3344ea230a595064b579396f, got db4a8d7e27bb9ad02982c0686cab327b146ba80d. Last good hash checked on 2017-03-16 21:04:39.
Finished. 199.41 MiB of data read. 1 errors found.
189 entries in the database, 0 new, 0 updated, 0 renamed, 0 missing.
error: There were 1 errors found.
% echo $?
1

When something is wrong. As the exit status of bitrot isn’t 0 when it fails, it’s easy to write a script running every day/week/month.

Github page

bitrot is available in OpenBSD ports in sysutils/bitrot since 6.1 release.

par2cmdline

This tool works with PAR2 archives (see below for more information about what PAR is) and from them, it will be able to check your data integrity AND repair it.

While it has some pros, like being able to repair data, the con is that it's not very easy to use. I would use this one for checking the integrity of long-term archives that won't change. The main drawback comes from the PAR specifications: the archives are created from a file list, so if you have a directory with your files and you add new files, you will need to recompute ALL the PAR archives because the file list changed, or create new PAR archives only for the new files, but that will make the verify process more complicated. It doesn't seem suitable to create new archives for every bunch of files added to the directory.

PAR2 lets you choose the percentage of a file you will be able to repair; by default it will create the archives to be able to repair up to 5% of each file. That means you don't need a whole backup of the files (while not having one would be a bad idea) and only approximately an extra 5% of your data to store.

Create usage:

% cd /home/
% par2 create -a integrity_archive -R my_data
Skipping 0 byte file: /home/my_data/empty_file

Block size: 3812
Source file count: 17
Source block count: 2000
Redundancy: 5%
Recovery block count: 100
Recovery file count: 7

Opening: my_data/[....]
[text cut here]
Opening: my_data/[....]

Computing Reed Solomon matrix.
Constructing: done.
Wrote 381200 bytes to disk
Writing recovery packets
Writing verification packets
Done

% echo $?
0

% ls -1
integrity_archive.par2
integrity_archive.vol000+01.par2
integrity_archive.vol001+02.par2
integrity_archive.vol003+04.par2
integrity_archive.vol007+08.par2
integrity_archive.vol015+16.par2
integrity_archive.vol031+32.par2
integrity_archive.vol063+37.par2
my_data

Verify usage (OK):

% par2 verify integrity_archive.par2 
Loading "integrity_archive.par2".
Loaded 36 new packets
Loading "integrity_archive.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.


All files are correct, repair is not required.
% echo $?
0

Verify usage (with error):

par2 verify integrity_archive.par.par2                                                 
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:


Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:


Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.

% echo $?
1

Repair usage:

% par2 repair integrity_archive.par.par2      
Loading "integrity_archive.par.par2".
Loaded 36 new packets
Loading "integrity_archive.par.vol000+01.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "integrity_archive.par.vol001+02.par2".
Loaded 2 new packets including 2 recovery blocks
Loading "integrity_archive.par.vol003+04.par2".
Loaded 4 new packets including 4 recovery blocks
Loading "integrity_archive.par.vol007+08.par2".
Loaded 8 new packets including 8 recovery blocks
Loading "integrity_archive.par.vol015+16.par2".
Loaded 16 new packets including 16 recovery blocks
Loading "integrity_archive.par.vol031+32.par2".
Loaded 32 new packets including 32 recovery blocks
Loading "integrity_archive.par.vol063+37.par2".
Loaded 37 new packets including 37 recovery blocks
Loading "integrity_archive.par.par2".
No new packets found

There are 17 recoverable files and 0 other files.
The block size used was 3812 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 7595275 bytes.

Verifying source files:

Target: "my_data/....." - found.
[...cut here...]
Target: "my_data/....." - found.
Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - damaged. Found 95 of 95 data blocks.

Scanning extra files:


Repair is required.
1 file(s) exist but are damaged.
16 file(s) are ok.
You have 2000 out of 2000 data blocks available.
You have 100 recovery blocks available.
Repair is possible.
You have an excess of 100 recovery blocks.
None of the recovery blocks will be used for the repair.


Wrote 361069 bytes to disk

Verifying repaired files:

Target: "my_data/Ebooks/Lovecraft/Quete Onirique de Kadath l'Inconnue.epub" - found.

Repair complete.

% echo $?
0
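
Since the verify command returns a non-zero exit code when a repair is needed, it is easy to script periodic checks. Here is a minimal sketch, assuming the archive lives in /home/my_data and a local mail command is available (the paths and the address are placeholders):

#!/bin/sh
cd /home/my_data || exit 1
if ! par2 verify integrity_archive.par.par2 > /tmp/par2_report 2>&1
then
    mail -s "par2: repair required on $(hostname)" me@example.com < /tmp/par2_report
fi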

par2cmdline is only one implementation doing the job, other tools working with PAR archives exist. They should all be able to work with the same PAR files.

Parchive on Wikipedia

Github page

par2cmdline is available in OpenBSD ports in archivers/par2cmdline.

If you find a way to add new files to existing archives, please mail me.

mtree

One can write a little script using mtree (in the base system on OpenBSD and FreeBSD) which will create a file with the checksum of every file in the specified directories. If the mtree output differs from the last run, we can send a mail with the difference. This is done in the base install of OpenBSD for /etc and a few other files, to warn you if they changed.
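
A minimal sketch of this idea, assuming the data lives in /home/my_data and the specification file is kept in the home directory (both paths are arbitrary): the first command creates the reference specification with sha256 checksums, the second compares the current state against it.

% mtree -c -K sha256digest -p /home/my_data > ~/my_data.mtree
% mtree -f ~/my_data.mtree -p /home/my_data || echo "something changed"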

While it’s suited for directories like /etc, in my opinion, this is not the best tool for doing integrity checks.

ZFS

I would like to talk about ZFS and data integrity because this is where ZFS is very good. If you are using ZFS, you may not need any other software to take care of your data. When you write a file, ZFS will also store its checksum as metadata. By default, the “checksum” option is activated on datasets, but you may want to disable it for better performance.

There is a command to ask ZFS to check the integrity of the files. Warning: scrub is very I/O intensive and can take from hours to days or even weeks to complete, depending on your CPU, disks and the amount of data to scrub:

# zpool scrub zpool

The scrub command will recompute the checksum of every file on the ZFS pool; if something is wrong, it will try to repair it. A repair is possible in the following cases:

If you have multiple disks, like raid-Z or raid-1 (mirror), ZFS will look on the different disks for a non-corrupted version of the file; if it finds one, it will restore it on the disk(s) where it’s corrupted.

If you have set the ZFS option “copies” to 2 or 3 (1 is the default), the file is written 2 or 3 times on the disk. Each file of the dataset will be allocated 2 or 3 times on the disk, so be careful if you want to use it on a dataset containing heavy files! If ZFS finds that a version of a file is corrupted, it will check the other copies of it and try to restore the corrupted file if possible.
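
For example, to keep two copies of every file on a given dataset (the pool and dataset names below are placeholders):

# zfs set copies=2 zpool/important_data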

You can see the percentage of the pool already scrubbed with

# zpool status zpool

and the scrub can be stopped with

# zpool scrub -s zpool

AIDE

Its name is an acronym for “Advanced Intrusion Detection Environment”; it’s a complicated piece of software which can be used to check for bitrot. I would not recommend using it if you only need bitrot detection.

Here are a few hints if you want to use it for checking your file integrity:

/etc/aide.conf

# Rule definition
All=m+s+i+sha256
summarize_changes=yes

/home/my_data/ All

The config file will create a database of all files in /home/my_data/ (directories are scanned recursively). The “All” rule lists the checks done on each file: for bitrot checking, we want to check the modification time, size, checksum and inode of the files. The summarize_changes option gives a summary of the changes when something is wrong.

This is the most basic config file you can have. Then you will have to run aide once to create the initial database, and run it again later to build a new database and compare the two. It doesn’t update its database itself; you will have to move the new database over the old one between runs and tell aide where to find it.
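
A minimal sketch of this cycle, assuming the database paths below match the database and database_out settings of your aide.conf (they depend on how the package was built):

# aide --init
# mv /var/db/aide.db.new /var/db/aide.db
# aide --check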

My use case

I have different kinds of data. On one side, I have static data like pictures, clips, music or things that won’t change over time, and on the other side I have my mails, documents and folders whose content changes regularly (creation, deletion, modification). I can afford a backup of 100% of my data with a few days of backup history, so I’m not interested in file repair.

I want to be warned quickly if a file gets corrupted, so I can still find a good copy in my backup history, since I don’t keep every version of my files for very long. I chose to go with the python tool bitrot; it’s very easy to use and it doesn’t become a mess with my folders getting updated often.

I would go with par2cmdline if I were not able to back up all my data. Having 5% or 10% of redundancy for my files should be enough to restore them in case of corruption without taking too much space.
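
For reference, the redundancy level is chosen when the archive is created, with the -r flag; here is a sketch with a placeholder archive name and 10% redundancy (-R, if your par2cmdline version supports it, recurses into subdirectories):

% par2 create -r10 -R integrity_archive.par my_data/*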

Let's encrypt on OpenBSD in 5 minutes

Written by Solène, on 20 January 2017.
Tags: #security #openbsd66 #openbsd

Comments on Mastodon

Let’s Encrypt is a service which provides free SSL certificates. It is fully automated and there are a few tools to generate your certificates with it. In the following lines, I will just explain how to get a certificate in a few minutes. You can find more information on the Let’s Encrypt website.

To make it simple, the tool we will use will generate some keys on the computer and send a request to the Let’s Encrypt service, which will use an http challenge (there are also dns and other kinds of challenges) to check that you really own the domain for which you want the certificate. If the challenge process succeeds, you get the certificate.

Please, if you don’t understand the following commands, don’t type them.

While the following is right for OpenBSD, it may change slightly for other systems. acme-client is part of the base system; you can read the man page acme-client(1).

Prepare your http server

For each domain you request a certificate for, you will be challenged on port 80: a file must be made available under the path “/.well-known/acme-challenge/”.

You must have this in your httpd config file. If you use another web server, you need to adapt.

server "mydomain.com" {
    root "/empty"
    listen on * port 80
    location "/.well-known/acme-challenge/*" {
        root { "/acme/" , request strip 2 }
    }
}

The request strip 2 part is IMPORTANT. (I’ve lost 45 minutes figuring out why root “/acme/” wasn’t working.)

Prepare the folders

As stated in the acme-client man page, if you don’t need to change the paths, you can run the following commands with root privileges:

# mkdir /var/www/acme
# mkdir -p /etc/ssl/acme/private /etc/acme
# chmod 0700 /etc/ssl/acme/private /etc/acme

Request the certificates

As root, in the acme-client sources folder, type the following to generate the certificates. The verbose flag is interesting: you will see whether the challenge step works. If it doesn’t work, you should try to fetch manually a file at the same path Let’s Encrypt tried, and run the command again once you succeed.

$ acme-client -vNn mydomain.com www.mydomain.com mail.mydomain.com

Use the certificates

Now, you can use your SSL certificates for your mail server, imap server, ftp server, http server… There is a little drawback: if you generate one certificate for a lot of domains, they are all written in the certificate. This implies that if someone visits one page and looks at the certificate, this person will know every domain you have under SSL. I think it’s possible to request every certificate independently, but you will have to play with acme-client flags and write some kind of script to automate this.

The certificate file is located at /etc/ssl/acme/fullchain.pem and contains the full certification chain (as its name suggests). The private key is located at /etc/ssl/acme/private/privkey.pem.
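
For example, a minimal httpd TLS block using these paths could look like this (the domain and the root directory are placeholders, adapt them to your setup):

server "mydomain.com" {
    listen on * tls port 443
    tls certificate "/etc/ssl/acme/fullchain.pem"
    tls key "/etc/ssl/acme/private/privkey.pem"
    root "/htdocs/mydomain"
}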

Restart the service with the certificate.

Renew certificates

Certificates are valid for 3 months. Just type

./acme-client mydomain.com www.mydomain.com mail.mydomain.com

Restart your ssl services
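
This is easy to automate; a weekly entry in root’s crontab along these lines (the domains and the reloaded service are just examples) keeps the certificates fresh:

0 3 * * 1 acme-client mydomain.com www.mydomain.com mail.mydomain.com && rcctl reload httpd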

EASY !

Port of the week: dnscrypt-proxy

Written by Solène, on 19 October 2016.
Tags: #unix #security #portoftheweek

Comments on Mastodon

2020 Update

Now, unwind on OpenBSD and unbound can support DNS over TLS or DNS over HTTPS, so dnscrypt has lost a bit of relevance, but it’s still usable and a good alternative.

Dnscrypt

Today I will talk about net/dnscrypt-proxy. It lets you encrypt your DNS traffic between your resolver and the remote recursive DNS server. More and more countries and internet providers use DNS to block some websites, and now they tend to do “man in the middle” on DNS answers, so you can’t just use a remote DNS server you find on the internet. While a remote dnscrypt DNS server can still be affected by such a “man in the middle” hijack, there is very little chance DNS traffic is altered in datacenters / dedicated server hosting.

The article also deals with unbound as a DNS cache, because dnscrypt is a bit slow and asking for the same domain multiple times in a few minutes is a waste of cpu/network/time for everyone. So I recommend setting up a DNS cache on your side (which can also serve a LAN).

At the time I write this article, there is a very good explanation about how to install it, in the file named dnscrypt-proxy-1.9.5p3 in the folder /usr/local/share/doc/pkg-readmes/. The following article is made from this file. (Article updated at the time of OpenBSD 6.3)

While I write for OpenBSD, this can easily be adapted to any other Unix-like system.

Install dnscrypt

# pkg_add dnscrypt-proxy

Resolv.conf

Modify your resolv.conf file to this

/etc/resolv.conf :

nameserver 127.0.0.1
lookup file bind
options edns0

When using dhcp client

If you use dhcp to get an address, you can force 127.0.0.1 as the nameserver by adding the following line to the dhclient config file. Beware: if you use it, when upgrading the system from bsd.rd, you will get 127.0.0.1 as your DNS server but no service running.

/etc/dhclient.conf :

supersede domain-name-servers 127.0.0.1;

Unbound

Now, we need to modify the unbound config to tell it to forward DNS queries to 127.0.0.1 on port 40. Please adapt your config, I will just add what is mandatory. The unbound configuration file isn’t in /etc because unbound is chrooted.

/var/unbound/etc/unbound.conf:

server:
    # this line is MANDATORY
    do-not-query-localhost: no

forward-zone:
    name: "."
    forward-addr: 127.0.0.1@40
    # address dnscrypt listen on

If you want to allow others to resolve through your unbound daemon, please see the parameters interface and access-control. You will need to tell unbound to bind on external interfaces and allow requests on them.
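
A sketch of such a setup, with an example LAN address and subnet (adapt them to your network):

server:
    interface: 127.0.0.1
    interface: 192.168.1.1
    access-control: 127.0.0.0/8 allow
    access-control: 192.168.1.0/24 allow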

Dnscrypt-proxy

Now we need to configure dnscrypt: pick a server in the list /usr/local/share/dnscrypt-proxy/dnscrypt-resolvers.csv, the name is the first column.

As root, type the following (or use doas/sudo); in the example we choose dnscrypt.eu-nl as the DNS provider:

# rcctl enable dnscrypt_proxy
# rcctl set dnscrypt_proxy flags -E -m1 -R dnscrypt.eu-nl -a 127.0.0.1:40
# rcctl start dnscrypt_proxy

Conclusion

You should be able to resolve addresses through dnscrypt now. You can use tcpdump on your external interface to check for traffic on udp port 53: you should not see any.
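
For example, assuming your external interface is em0 (replace it with yours):

# tcpdump -n -i em0 udp port 53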

If you want to use dig hostname -p 40 @127.0.0.1 to make DNS requests to dnscrypt without unbound, you will need net/isc-bind which provides /usr/local/bin/dig. OpenBSD’s base dig can’t use a port other than 53.

Port of the week: pwgen

Written by Solène, on 12 August 2016.
Tags: #security #portoftheweek

Comments on Mastodon

I will talk about security/pwgen for the current port of the week. It’s a very light executable to generate passwords. But it’s not just a dumb password generator: it has options to choose what kind of password you want.

Here is a list of options with their flags; you will find a lot more in pwgen’s nice man page:

  • -A : don’t use capital letters
  • -B : don’t use characters which could be misread (O/0, I/l/1 …)
  • -v : don’t use vowels
  • etc…

You can also use a seed to generate your “random” passwords (which aren’t very random in this case); you may need this, for example, to be able to reproduce a password you lost for an ftp/http access.
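
For reference, pwgen can derive such reproducible passwords from the sha1 hash of a file plus an optional seed string, via the -H flag; a sketch with a placeholder file and seed:

$ pwgen -1 -H ~/.pwgen_seed#ftp-account 12 3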

Example of pwgen output generating 5 passwords of 10 characters. The -1 parameter makes it display only one password per line, otherwise it displays a grid (columns and multiple lines) of passwords.

$ pwgen -1 10 5
fohchah9oP
haNgeik0ee
meiceeW8ae
OReejoi5oo
ohdae2Eisu

Stop being tracked by Google search with Firefox

Written by Solène, on 04 July 2016.
Tags: #security #web

Comments on Mastodon

When you use Google search and you click on a link, you are redirected to a Google server that records which result you chose into their database.

  1. This is bad for your privacy
  2. This slows down the search engine because you go through a redirection (that you don’t see) when you want to visit a link

There is a firefox extension that fixes the links in the search results so that when you click, you just go to the website without saying “hello Google, I clicked there”: Google Search Link Fix

You can also use another search engine if you don’t like Google. I keep it because I get the best results when searching for technical topics. I tried Yahoo, Bing, Exalead, Qwant and DuckDuckGo, each one for a few days, and Google has the best results so far.