About me: My name is Solène Rapenne. I like learning and sharing my knowledge related to IT stuff. Hobbies: '(BSD OpenBSD h+ Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on Freenode, solene+www at dataswamp dot org or solene@bsd.network (mastodon). If for some reason you want to give me some money, I accept paypal at the address donate@perso.pw.

About pipelining OpenBSD ports contributions

Written by Solène, on 27 September 2020.
Tags: #openbsd #automation

Comments on Mastodon

After a few modest contributions to the NixOS operating system, which taught me about its contribution process, I found it enjoyable to receive automatic reports and feedback about the quality of submitted work. While on NixOS this requires GitHub, I think the same could be applied to OpenBSD and its mailing-list contribution system.

I made a prototype before starting the real work and I’m happy with the result.

This is what I get after feeding the script with a mail containing a patch:

Determining package path         ✓    
Verifying patch isn't committed  ✓    
Applying the patch               ✓    
Fetching distfiles               ✓    
Distfile checksum                ✓    
Applying ports patches           ✓    
Extracting sources               ✓    
Building result                  ✓

It requires a lot of checks to find a patch in the file, because patches generated from cvs and from git have slightly different formats. And then, we need to figure out from where to apply this patch.

The idea would be to retrieve mails sent to ports@openbsd.org by subscribing, then store metadata about that submission into a database:

Sender
Date
Diff (raw text)
Status (already committed, doesn't apply, apply, compile)
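
As a sketch, the submission metadata above could map to a small SQLite table (a hypothetical schema; the table and column names are mine, not part of the prototype):

```shell
# hypothetical schema for the submissions database (SQLite assumed);
# fields mirror the metadata list above
cat > schema.sql <<'EOF'
CREATE TABLE submission (
    id     INTEGER PRIMARY KEY,
    sender TEXT,
    date   TEXT,
    diff   TEXT,   -- raw patch text
    status TEXT    -- committed / doesn't apply / applies / compiles
);
EOF
```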

Then, another program would pick a diff from the database, prepare a VM using a qcow2 disk derived from a base image (so it always starts fresh, clean and ready), and run the checks inside the VM.
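
The throwaway disk could be created as a copy-on-write overlay of the base image, something like this (image names are hypothetical):

```shell
# create a fresh overlay disk backed by a clean base image;
# deleting work.qcow2 after the run leaves the base image untouched
qemu-img create -f qcow2 -b openbsd-base.qcow2 -F qcow2 work.qcow2
```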

Once it is finished, a mail could be sent as a reply to the original mail to give the status of each step until error or last check. The database could be reused to make a web page to track what compiles but is not yet committed. As it’s possible to verify if a patch is committed in the tree, this can automatically prune committed patches over time.

I really think this can improve tracking patches sent to ports@ and ease the contribution process.

DISCLAIMER

  • This would not be an official part of the project, I do it on my own
  • This may be cancelled
  • This may be a bad idea
  • This could be used “as a service” instead of pulling automatically from ports, meaning people could send mails to it to receive an automatic review. Ideally this should be done in portcheck(1) but I’m not sure how to verify a diff apply on the ports tree without enforcing requirements
  • Human work will still be required to check the content and verify the port works correctly!

Birthday dates management using calendar

Written by Solène, on 15 June 2020.
Tags: #openbsd #plaintext #automation

Comments on Mastodon

I manage my birthday list in a calendar file, so I don’t forget about them and so I can use the list in scripts.

The calendar file format is easy but sadly it only works using English month names.

This is an example file with different spacings:

7  August   This is 7 august birthday!
 8 August   This is 8 august birthday!
16 August   This is 16 august birthday!

Now that you have a calendar file, you can run the calendar binary on it and show events coming in the next n days using the -A flag.

calendar -A 20

Note that the default file is ~/.calendar/calendar, so if you use that path you don’t need the -f flag of calendar.

Now, I also use it in crontab with xmessage to show a popup once a day with incoming birthdays.

30 13 * * *  calendar -A 7 -f ~/.calendar/birthdays | grep . && calendar -A 7 -f ~/.calendar/birthdays | env DISPLAY=:0 xmessage -file -

You have to set the DISPLAY variable so the popup appears on the screen.

It’s important to check whether calendar produces any output before calling xmessage, to avoid opening an empty window.
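
The same check-then-notify logic reads more clearly as a small script (a sketch; the birthdays file path is the one used above):

```shell
#!/bin/sh
# show a popup only when calendar actually reports something,
# avoiding an empty xmessage window
OUT=$(calendar -A 7 -f ~/.calendar/birthdays)
[ -n "$OUT" ] && echo "$OUT" | env DISPLAY=:0 xmessage -file -
```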

New blog feature: Fediverse comments

Written by Solène, on 16 May 2020.
Tags: #fediverse #automation

Comments on Mastodon

I added a new feature to my blog today: when I publish a new article, my dedicated Mastodon user https://bsd.network/@solenepercent publishes a toot, so people can discuss the content there.

Every article now contains a link to the corresponding toot if you want to discuss an article.

This is not perfect but a good trade-off I think:

  1. the website remains static and light (nothing is embedded, only one more link per blog post)
  2. people who would like to discuss an article can do so in a known place, instead of writing reactions on reddit or other places without a chance for me to answer
  3. it does not rely on proprietary services

Of course, if you want to give me feedback, I’m still happy to reply to emails or on IRC.

BitreichCON 2019 talks available

Written by Solène, on 27 August 2019.
Tags: #unix #drist #automation #awk

Comments on Mastodon

Earlier in August 2019, BitreichCON 2019 took place. There were awesome talks there during two days, and there are two I would like to share. You can find all the information about this event at the following address, using the Gopher protocol: gopher://bitreich.org/1/con/2019

BrCON talks happen through an audio stream, an ssh session for viewing the current slide, and IRC for questions. I have the markdown files producing the slides (1 title = 1 slide) and the audio recordings.

Simple solutions

This is a talk I gave at this conference. It is about using simple solutions for most problems. Simple solutions come with simple tools: unix tools. I explain with real-life examples, like how to retrieve my blog article titles from the website using curl, grep, tr or awk.

Link to the audio

Link to the slides

Experiences with drist

Another talk, this one from Parazyd about my deployment tool drist, so I feel obligated to share it with you.

In his talk he makes a comparison with slack (debian package, not the online community), explains his workflow with Drist and how it saves his precious time.

Link to the audio

Link to the slides

About the bitreich community

If you want to know more about the bitreich community, check gopher://bitreich.org or IRC #bitreich-en on Freenode servers.

There is also the bitreich website, a parody of the worst things you can see on the web daily.

Nginx and acme-client on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd68 #openbsd #nginx #automation

Comments on Mastodon

I write this blog post as I spent too much time setting up nginx and SSL on OpenBSD with acme-client, due to nginx being chrooted and not stripping the acme-challenge path, which does not make it easy.

First, you need to set up /etc/acme-client.conf correctly. Here is mine for the domain ports.perso.pw:

authority letsencrypt {
        api url "https://acme-v02.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
}

domain ports.perso.pw {
        domain key "/etc/ssl/private/ports.key"
        domain full chain certificate "/etc/ssl/ports.fullchain.pem"
        sign with letsencrypt
}

This example is for OpenBSD 6.6 (which is -current as I write this) because of the Let’s Encrypt API URL. If you are running 6.5 or 6.4, replace v02 with v01 in the api url line.

Then, you have to configure nginx. The most important part of the following configuration file is the location block handling acme-challenge requests. Remember that nginx is chrooted in /var/www, so the path to the acme directory is /acme.

http {
    include       mime.types;
    default_type  application/octet-stream;
    index         index.html index.htm;
    keepalive_timeout  65;
    server_tokens off;

    upstream backendurl {
        server unix:tmp/plackup.sock;
    }

    server {
      listen       80;
      server_name ports.perso.pw;

      access_log logs/access.log;
      error_log  logs/error.log info;

      root /htdocs/;

      location /.well-known/acme-challenge/ {
          rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
          root /acme;
      } 

      location / {
          return 301 https://$server_name$request_uri;
      }
    }

    server {
      listen 443 ssl;
      server_name ports.perso.pw;
      access_log logs/access.log;
      error_log logs/error.log info;
      root /htdocs/;

      ssl_certificate /etc/ssl/ports.fullchain.pem;
      ssl_certificate_key /etc/ssl/private/ports.key;
      ssl_protocols TLSv1.1 TLSv1.2;
      ssl_prefer_server_ciphers on;
      ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

      [... stuff removed ...]
    }

}

That’s all! I wish I could have found that on the Internet, so I share it here.
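
With both configurations in place, the certificate can be requested, and later renewed from cron. A sketch:

```shell
# first issuance, verbose to see what happens
acme-client -v ports.perso.pw

# crontab entry: try renewing daily, reload nginx on success
# (acme-client may exit non-zero when nothing needed renewing,
# which simply skips the reload)
0 3 * * * acme-client ports.perso.pw && rcctl reload nginx
```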

RSS feed for OpenBSD stable packages repository (made with XSLT)

Written by Solène, on 05 June 2019.
Tags: #openbsd #automation

Comments on Mastodon

I am happy to announce there is now an RSS feed to get notified when new packages are available on my repository https://stable.perso.pw/

The file is available at https://stable.perso.pw/rss.xml.

I take the occasion of this blog post to explain how the file is generated, as I did not find an easy tool for this task, so I ended up doing it myself.

I chose to use XSLT, which is not that common. Briefly, XSLT applies some kind of XML template to an XML data file; this allows loops, filtering, etc… It requires only two parts: the template and the data.

Simple RSS template

The following file is a template for my RSS file; we can see a few tags starting with xsl, like xsl:for-each or xsl:value-of.

It’s interesting to note that xsl:for-each can use a condition like position() &lt; 10 in order to limit the loop to the first 10 items.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<xsl:template match="/">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
        <channel>
            <description></description>

            <!-- BEGIN CONFIGURATION -->
            <title>OpenBSD unofficial stable packages repository</title>
            <link>https://stable.perso.pw/</link>
            <atom:link href="https://stable.perso.pw/rss.xml" rel="self" type="application/rss+xml" />
            <!-- END CONFIGURATION -->

            <!-- Generating items -->
            <xsl:for-each select="feed/news[position()&lt;10]">
            <item>
                <title>
                    <xsl:value-of select="title"/>
                </title>
                <description>
                    <xsl:value-of select="description"/>
                </description>
                <pubDate>
                    <xsl:value-of select="date"/>
                </pubDate>
            </item>
            </xsl:for-each>

        </channel>
    </rss>
</xsl:template>
</xsl:stylesheet>

Simple data file

Now, we need some data to use with the template. I’ve added a comment block so I can copy / paste it to easily add a new entry into the RSS. As the date format is painful for a human to write, I added to my Makefile a call to a script replacing the string DATE with the current date in the correct format.

<feed>
<news>
    <title>www/mozilla-firefox</title>
    <description>Firefox 67.0.1</description>
    <date>Wed, 05 Jun 2019 06:00:00 GMT</date>
</news>

<!-- copy paste for a new item
<news>
    <title></title>
    <description></description>
    <date></date>
</news>
-->
</feed>
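
The date-replacement script mentioned above could look like this (a hypothetical reconstruction; the script is not shown in the original):

```shell
#!/bin/sh
# replace_date.sh (sketch): substitute the literal string DATE in
# news.xml with the current date in RFC 822 format, as RSS expects
# in <pubDate>

# demo input (the real news.xml is the data file shown above)
printf '<news><date>DATE</date></news>\n' > news.xml

NOW=$(LC_ALL=C date -u "+%a, %d %b %Y %H:%M:%S GMT")
sed "s/DATE/$NOW/" news.xml > news.xml.tmp && mv news.xml.tmp news.xml
```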

Makefile

I love makefiles, so I share it even if this one is really short.

all:
    sh replace_date.sh
    xsltproc template.xml news.xml | xmllint -format - | tee rss.xml
    scp rss.xml perso.pw:/home/stable/

clean:
    rm rss.xml

When I want to add an entry, I copy / paste the comment block in news.xml, add DATE, run make and it’s uploaded :)

The command xsltproc is available from the package libxslt on OpenBSD.

And then, after writing this, I realise that manually editing the result file rss.xml is as much work as editing the news.xml file and then process it with xslt… But I keep that blog post as this can be useful for more complicated cases. :)

Simple way to use ssh tunnels in scripts

Written by Solène, on 15 May 2019.
Tags: #ssh #automation

Comments on Mastodon

While writing a script to backup a remote database, I did not know how to correctly/easily handle an ssh tunnel inside a script. A quick internet search pointed me to this link: https://gist.github.com/scy/6781836

I’m not a huge fan of the ControlMaster solution, which consists of starting an ssh connection with ControlMaster activated, telling ssh to close it afterwards, and not forgetting to put a timeout on the socket, otherwise it won’t close if you interrupt the script.

But I really enjoyed a neat solution which is valid in most cases:

$ ssh -f -L 5432:localhost:5432 user@host "sleep 5" && pg_dumpall -p 5432 -h localhost > file.sql

This creates an ssh connection sent to the background by the -f flag, which will close itself once the given command, sleep 5 here, has run. As we immediately chain it to a command using the tunnel, ssh only stops when the tunnel is no longer in use, keeping it alive exactly as long as pg_dumpall requires, and no more. If we interrupt the script, I’m not sure whether ssh stops immediately or only after the sleep command has run; in both cases ssh terminates correctly. There is no need for a long sleep value because, as said previously, the tunnel stays up until nothing uses it.

You should note that the ControlMaster way is the only reliable one if you need the ssh tunnel for multiple commands inside the script.
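
For that case, a sketch of the ControlMaster approach could look like this (the socket path and the commands sharing the tunnel are illustrative):

```shell
# open a master connection with a tunnel: no remote command (-N),
# backgrounded (-f), controllable through a socket (-M -S)
ssh -f -N -M -S /tmp/db-tunnel.sock -L 5432:localhost:5432 user@host

# several commands can now share the same tunnel
pg_dumpall -p 5432 -h localhost > file.sql
psql -p 5432 -h localhost -c 'SELECT 1'

# close the master connection explicitly when done
ssh -S /tmp/db-tunnel.sock -O exit user@host
```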

Deploying munin-node with drist

Written by Solène, on 17 April 2019.
Tags: #drist #automation #openbsd

Comments on Mastodon

The following guide is a real world example of drist usage. We will create a script to deploy munin-node on OpenBSD systems.

We need to create a script that will install munin-node package but also configure it using the default proposal. This is done easily using the script file.

#!/bin/sh

# checking munin not installed
pkg_info | grep munin-node
if [ $? -ne 0 ]; then
    pkg_add munin-node
    munin-node-configure --suggest --shell | sh
    rcctl enable munin_node
fi

rcctl restart munin_node

The script contains some simple logic to avoid reinstalling munin-node each time we run it, and also to avoid re-configuring it automatically every time. This is done by checking whether the pkg_info output contains munin-node.

We also need to provide a munin-node.conf file to allow our munin server to reach the nodes. For this how-to, I’ll dump the configuration in the commands using cat, but of course, you can use your favorite editor to create the file, or copy an original munin-node.conf file and edit it to suit your needs.

mkdir -p files/etc/munin/

cat <<EOF > files/etc/munin/munin-node.conf
log_level 4
log_file /var/log/munin/munin-node.log
pid_file /var/run/munin/munin-node.pid
background 1
setsid 1
user root
group wheel
ignore_file [\#~]$
ignore_file DEADJOE$
ignore_file \.bak$
ignore_file %$
ignore_file \.dpkg-(tmp|new|old|dist)$
ignore_file \.rpm(save|new)$
ignore_file \.pod$
allow ^127\.0\.0\.1$
allow ^192\.168\.1\.100$
allow ^::1$
host *
port 4949
EOF

Now, we only need to use drist on the remote host:

drist root@myserver

The latest version of drist now also supports privilege escalation using doas instead of connecting to root over ssh:

drist -s -e doas user@myserver

Drist release with persistent ssh

Written by Solène, on 18 February 2019.
Tags: #unix #automation #drist

Comments on Mastodon

Drist release 1.04 is now available. It adds support for the -p flag, which makes the ssh connection persistent across the whole run using the ssh ControlMaster feature. This fixes one use case where you modify ssh keys in two operations (copy file + script to change permissions) and makes drist a lot faster for short tasks.

Drist makes a first ssh connection to get the real hostname of the remote machine, and then runs ssh for each step (copy, copy-hostname, absent, absent-hostname, script, script-hostname); this means that in the use case where you copy one file and reload a service, it was making 3 connections. Now with the persistent flag, drist keeps the first connection and reuses it, closing the control socket at the end of the run.

Drist is now 121 lines long.

Download v1.04

SHA512 checksum, split in two lines to not break the display:

525a7dc1362877021ad2db8025832048d4a469b72e6e534ae4c92cc551b031cd
1fd63c6fa3b74a0fdae86c4311de75dce10601d178fd5f4e213132e07cf77caa

How to parallelize Drist

Written by Solène, on 06 February 2019.
Tags: #drist #automation #unix

Comments on Mastodon

This article will show you how to make drist faster by using it on multiple servers at the same time, in a correct way.

What is drist?

It is easy to parallelize drist (this works for everything, though) using a Makefile. I use this to deploy a configuration on several of my servers at the same time, which is way faster.

A simple BSD Make compatible Makefile looks like this:

SERVERS=tor-relay.local srvmail.tld srvmail2.tld
${SERVERS}:
        drist $*
install: ${SERVERS}
.PHONY: all install ${SERVERS}

This creates a target for each server in my list, each calling drist. Typing make install will iterate over the $SERVERS list, but it is also possible to use make -j 3 to tell make to use 3 parallel jobs. The output may be interleaved, though.

You can also use make tor-relay.local if you don’t want make to iterate over all servers. This does no more than typing drist tor-relay.local in this example, but your Makefile may run other logic before/after.

If you want to type make to deploy everything instead of make install you can add the line all: install in the Makefile.

If you use GNU Make (gmake), the file requires a small change:

The part ${SERVERS}: must be changed to ${SERVERS}: %:. I think gmake will print a warning, but I did not achieve a better result. If you have a solution to remove the warning, please tell me.

If you are not comfortable with Makefiles, the .PHONY line tells make that the targets are not valid files.

Make is awesome!

Configuration deployment made easy with drist

Written by Solène, on 29 November 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Hello, in this article I will present my deployment tool drist (if you speak Russian, I am already aware of what you think). It reached feature-complete status today, so now I can write about it.

As a system administrator, I started using salt a few years ago. And honestly, I cannot cope with it anymore. It is slow, it can get very complicated for some tasks like correctly ordering commands, and a configuration file can become a nightmare once you start using conditions in it.

You may already have read and heard a bit about drist as I wrote an article about my presentation of it at bitreichcon 2018.

History

I also tried alternatives like ansible, puppet, Rex, etc… One day, while lurking in the ports tree, I found sysutils/radmind, which caught my interest even though it is really poorly documented. It is a project from 1995 if I remember correctly, but I liked the base idea. Radmind works with files: you create a known working set of files for your system, and you can propagate that whole set to other machines, or see the differences between the reference and the current system. Sets can be negative, meaning that the listed files should not be present on the system, and it is also possible to add extra sets for specific hosts. The whole thing is really cumbersome, requires a lot of work, and I found little documentation, so I did not use it; but it led me to write my own deployment tool, using ideas from radmind (working with files) and from Rex (using a script for making changes).

Concept

drist aims at being simple to understand and pluggable with standard tools. There is no special syntax to learn, no daemon to run, no agent, and it relies on base tools like awk, sed, ssh and rsync.

drist is cross-platform as it has few requirements, but it is not well suited for deploying on too many different operating systems.

When executed, drist runs six steps in a specific order; you can use only the steps you need.

Shamelessly copied from the man page, explanations after:

  1. If the folder files exists, its content is copied to the server using rsync(1).
  2. If the folder files-HOSTNAME exists, its content is copied to the server using rsync(1).
  3. If the folder absent exists, the filenames in it are deleted on the server.
  4. If the folder absent-HOSTNAME exists, the filenames in it are deleted on the server.
  5. If the file script exists, it is copied to the server and executed there.
  6. If the file script-HOSTNAME exists, it is copied to the server and executed there.

In the previous list, all the existence checks are done from the current working directory where drist is started. The text HOSTNAME is replaced by the output of uname -n on the remote server, and files are copied starting from the root directory.

drist does not do anything more. In a more literal manner: it copies files to the remote server using a local filesystem tree (folder files), it deletes on the remote server all files listed in another local tree (folder absent), and it runs on the remote server a script named script.

Each of these steps can be customized per host by adding a “-HOSTNAME” suffix to the folder or file name, because experience taught me that some hosts do require specific configuration.

If a folder or a file does not exist, drist will skip it. So it is possible to only copy files, or only execute a script, or delete files and execute a script after.
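
An empty module skeleton covering all six steps could be created like this (a sketch; the host name web1.local is hypothetical):

```shell
# skeleton of a drist module, one entry per optional step;
# drist simply skips whatever is missing
mkdir -p mymodule/files/etc               # step 1: files for every host
mkdir -p mymodule/files-web1.local/etc    # step 2: files for web1.local only
mkdir -p mymodule/absent/etc              # step 3: files to delete everywhere
mkdir -p mymodule/absent-web1.local/etc   # step 4: deletions for web1.local
touch mymodule/script                     # step 5: script run on every host
touch mymodule/script-web1.local          # step 6: script for web1.local only
```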

Drist usage

The usage is pretty simple. drist has 3 flags, all optional.

  • -n flag will show what would happen (simulation mode)
  • -s flag tells drist to use sudo on the remote host
  • -e flag with a parameter will tell drist to use a specific path for the sudo program

The remote server address (ssh format like user@host) is mandatory.

$ drist my_user@my_remote_host

drist will look at the files and folders in the current directory when executed; this allows you to organize things as you want, using your filesystem and a revision control system.

Simple examples

Here are two examples to illustrate its usage. The examples are easy, for learning purpose.

Deploying ssh keys

I want to easily copy my users ssh keys to a remote server.

$ mkdir drist_deploy_ssh_keys
$ cd drist_deploy_ssh_keys
$ mkdir -p files/home/my_user1/.ssh
$ mkdir -p files/home/my_user2/.ssh
$ cp -fr /path/to/key1/id_rsa files/home/my_user1/.ssh/
$ cp -fr /path/to/key2/id_rsa files/home/my_user2/.ssh/
$ drist user@remote-host
Copying files from folder "files":
    /home/my_user1/.ssh/id_rsa
    /home/my_user2/.ssh/id_rsa

Deploying authorized_keys file

We can easily create the authorized_keys file using cat.

$ mkdir drist_deploy_ssh_authorized
$ cd drist_deploy_ssh_authorized
$ mkdir -p files/home/user/.ssh/
$ cat /path/to/user/keys/*.pub > files/home/user/.ssh/authorized_keys
$ drist user@remote-host
Copying files from folder "files":
    /home/user/.ssh/authorized_keys

This can be automated using a makefile running the cat command and then running drist.

all:
    cat /path/to/keys/*.pub > files/home/user/.ssh/authorized_keys
    drist user@remote-host

Installing nginx on FreeBSD

This module (i.e. a folder containing material for drist) will install nginx on FreeBSD and start it.

$ mkdir deploy_nginx
$ cd deploy_nginx
$ cat >script <<EOF
#!/bin/sh
test -f /usr/local/bin/nginx
if [ $? -ne 0 ]; then
    pkg install -y nginx
fi
sysrc nginx_enable=yes
service nginx restart
EOF
$ drist user@remote-host
Executing file "script":
    Updating FreeBSD repository catalogue...
    FreeBSD repository is up to date.
    All repositories are up to date.
    The following 1 package(s) will be affected (of 0 checked):

    New packages to be INSTALLED:
            nginx: 1.14.1,2

    Number of packages to be installed: 1

    The process will require 1 MiB more space.
    421 KiB to be downloaded.
    [1/1] Fetching nginx-1.14.1,2.txz: 100%  421 KiB 430.7kB/s    00:01
    Checking integrity... done (0 conflicting)
    [1/1] Installing nginx-1.14.1,2...
    ===> Creating groups.
    Using existing group 'www'.
    ===> Creating users
    Using existing user 'www'.
    [1/1] Extracting nginx-1.14.1,2: 100%
    Message from nginx-1.14.1,2:

    ===================================================================
    Recent version of the NGINX introduces dynamic modules support.  In
    FreeBSD ports tree this feature was enabled by default with the DSO
    knob.  Several vendor's and third-party modules have been converted
    to dynamic modules.  Unset the DSO knob builds an NGINX without
    dynamic modules support.

    To load a module at runtime, include the new `load_module'
    directive in the main context, specifying the path to the shared
    object file for the module, enclosed in quotation marks.  When you
    reload the configuration or restart NGINX, the module is loaded in.
    It is possible to specify a path relative to the source directory,
    or a full path, please see
    https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/ and
    http://nginx.org/en/docs/ngx_core_module.html#load_module for
    details.

    Default path for the NGINX dynamic modules is

    /usr/local/libexec/nginx.
    ===================================================================
    nginx_enable:  -> yes
    Performing sanity check on nginx configuration:
    nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
    nginx not running? (check /var/run/nginx.pid).
    Performing sanity check on nginx configuration:
    nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
    Starting nginx.

More complex example

Now I will show more complex examples, with host-specific steps. I will not display the output because the previous outputs were sufficient to give a rough idea of what drist does.

Removing someone’s ssh access

We will reuse an existing module here: a user should no longer be able to log into their account on the servers using their ssh key.

$ cd ssh
$ mkdir -p absent/home/user/.ssh/
$ touch absent/home/user/.ssh/authorized_keys
$ drist user@server

Installing php on FreeBSD

The following module will install php and remove the opcache.ini file, and will install php72-pdo_pgsql if it is run on server production.domain.private.

$ mkdir deploy_php && cd deploy_php
$ mkdir -p files/usr/local/etc
$ cp /some/correct/config.ini files/usr/local/etc/php.ini
$ cat > script <<EOF
#!/bin/sh
test -f /usr/local/etc/php-fpm.conf || pkg install -f php-extensions
sysrc php_fpm_enable=yes
service php-fpm restart
test -f /usr/local/etc/php/opcache.ini && rm /usr/local/etc/php/opcache.ini
EOF
$ cat > script-production.domain.private <<EOF
#!/bin/sh
test -f /usr/local/etc/php/pdo_pgsql.ini || pkg install -f php72-pdo_pgsql
service php-fpm restart
EOF

The monitoring machine

This one is unique and I would like to avoid applying its configuration to another server (that happened to me once with salt, and it was really, really bad). So I will do the whole job using the hostname-specific cases.

$ mkdir my_unique_machine && cd my_unique_machine
$ mkdir -p files-unique-machine.private/usr/local/etc/{smokeping,munin}
$ cp /good/config files-unique-machine.private/usr/local/etc/smokeping/config
$ cp /correct/conf files-unique-machine.private/usr/local/etc/munin/munin.conf
$ cat > script-unique-machine.private <<EOF
#!/bin/sh
pkg install -y smokeping munin-master munin-node
munin-node-configure --shell --suggest | sh
sysrc munin_node_enable=yes
sysrc smokeping_enable=yes
service munin-node restart
service smokeping restart
EOF
$ drist user@incorrect-host
$ drist user@unique-machine.private
Copying files from folder "files-unique-machine.private":
    /usr/local/etc/smokeping/config
    /usr/local/etc/munin/munin.conf
Executing file "script-unique-machine.private":
    [...]

Nothing happened on the wrong system.

Be creative

Everything can be automated easily. I have a makefile in many of my drist modules, because then I just need to type “make” to run them correctly. Sometimes it requires concatenating files before the run; sometimes I do not want to make a mistake, or have to remember which module applies to which server (if it’s specific), so the makefile does the job for me.

One of my drist modules looks at all my SSL certificates from another module and builds a reed-alert configuration file using awk, deploying it on the monitoring server. All I do is type “make” and enjoy my free time.

How to get it and install it

  • Drist can be downloaded at this address.
  • Sources can be cloned using git clone git://bitreich.org/drist

In the sources folder, type “make install” as root; that will copy the drist binary to /usr/bin/drist and its man page to /usr/share/man/man1/drist.1

For copying files, drist requires rsync on both local and remote hosts.

For running the script file, a sh compatible shell is required (csh is not working).

Presenting drist at BitreichCON 2018

Written by Solène, on 21 August 2018.
Tags: #unix #drist #automation

Comments on Mastodon

Still about bitreich conference 2018: I presented drist, a utility for server deployment (like salt/puppet/ansible…) that I wrote.

drist makes deployments easy to understand and easy to extend. Basically, it has 3 steps:

  1. copying a local file tree on the remote server (for deploying files)
  2. delete files on the remote server if they are present in a local tree
  3. execute a script on the remote server

Each step runs only if the corresponding file/folder exists, and each step can have a general and a per-host variant.

How to fetch drist

git clone git://bitreich.org/drist

It was my very first talk in English, please be indulgent.

Plain text slides (tgz)

MP3 of the talk

MP3 of questions/answers

Bitreich community is reachable on gopher at gopher://bitreich.org