About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(BSD OpenBSD Qubes OS Lisp cmdline gaming security QubesOS internet-stuff). I love percent and lambda characters. OpenBSD developer solene@. No AI is involved in this blog.

Contact me: solene at dataswamp dot org or @solene@bsd.network (mastodon).

I'm a freelance OpenBSD, FreeBSD, Linux and Qubes OS consultant; this includes DevOps, DevSecOps, technical writing and documentation work. If you enjoy this blog, you can sponsor my open source work financially so I can write this blog and contribute to Free Software as my daily job.

Asynchronous secure file transfer with nncp

Written by Solène, on 04 October 2024.
Tags: #privacy #security #network #unix

Comments on Fediverse/Mastodon

1. Introduction §

nncp (node to node copy) is software to securely exchange data between peers. It is command line only, written in Go, and compiles on Linux and BSD systems (although among the BSDs it is only packaged for FreeBSD).

The project website will do a better job than me at presenting the numerous features, but I will do my best to explain what you can do with it and how to use it.

nncp official project website

2. Explanations §

nncp is a suite of tools to asynchronously exchange data between peers, using zero knowledge encryption. Once peers have exchanged their public keys, they are able to encrypt data to send to each other. This is nothing really new to be honest, but there is a twist.

  • a peer can connect directly to another over TCP, you can even configure multiple addresses (such as a Tor onion or an I2P host) and use the one you want
  • a peer can connect to another using ssh
  • a peer can generate plain files that will be carried over USB, network storage, synchronization software, whatever, to be consumed by a peer. Files can be split into chunks of arbitrary size in order to prevent anyone snooping from figuring out how many files are exchanged or their names (hence zero knowledge).
  • a peer can generate data to burn on a CD or tape (it works as a stream of data instead of plain files)
  • a peer can be reachable through another relay peer
  • when a peer receives files, nncp generates ACK files (acknowledgements) that will tell you the files were correctly received
  • a peer can request files and/or trigger pre-configured commands you expose to this peer
  • a peer can send emails with nncp (requires a specific setup on the email server)
  • data transfer can be interrupted and resumed

What is cool with nncp is that files you receive are unpacked into a given directory and their integrity is verified. This is sometimes more practical than a network share, in which you are never sure when you can move / rename / modify / delete the file that was transferred to you.

I identified a few "realistic" use cases with nncp:

  • exchange files between air gapped environments (I tried to exchange files over sound or QR codes, but I found no reliable open source solution)
  • secure file exchange over physical medium with delivery notification (the medium needs to do a round-trip for the notification)
  • start a torrent download remotely, prepare the file to send back once downloaded, retrieve the file at your own pace
  • reliable data transfer over poor connections (although I am not sure if it beats kermit at this task :D )
  • "simple" file exchange between computers / people over network

This leaves a lot of room for other imaginative use cases.

3. Real world example: Syncthing gateway §

My preferred workflow with nncp, and the one I am currently using, is a group of three syncthing servers.

Each syncthing server runs on a different computer; the location does not really matter. There is a single share between these syncthing instances.

The syncthing servers have incoming and outgoing directories on an NFS / SMB share, with a directory named after each peer in both of them. Putting a file in the "outgoing" directory of a peer makes nncp prepare the file for this peer and put it into the syncthing share to be synchronized; the file is consumed in the process. In the same vein, in the incoming directory, new files are unpacked into the directory of the emitting peer.

Why is it cool? You just drop a file in the directory of the peer you want to send it to, it disappears locally and magically appears on the remote side. If something goes wrong, thanks to the ACKs, you can verify whether the file was delivered and unpacked. With three servers, you can almost always have two of them connected at the same time.

It is a pretty good file deposit that requires no knowledge to use.
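
To give an idea of the moving parts, here is a minimal sketch of the queueing half of such a gateway. The peer names "peer1" and "peer2" and the /data paths are placeholders, not my actual setup; the exchange with the syncthing share itself follows the nncp-xfer workflow described in section 6.2.

#!/bin/sh
# queue every file dropped into a peer's outgoing directory, then consume it
for peer in peer1 peer2; do
    for f in /data/outgoing/"$peer"/*; do
        [ -e "$f" ] || continue
        nncp-file "$f" "$peer": && rm -- "$f"
    done
done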

This could be implemented with pure syncthing, however you would have to:

  • for each peer, configure a one-way directory share in syncthing for each other peer to upload data to
  • for each peer, configure a one-way directory share in syncthing for each other peer to receive data from
  • for each peer, configure an encrypted share to relay all the one-way shares from other peers

This does not scale well.

Side note: I am using syncthing because it is fun and requires no infrastructure. But actually, a WebDAV filesystem, a Nextcloud drive or anything that shares data over the network would work just fine.

4. Setup §

4.1. Configuration file and private keys §

On each peer, you have to generate a configuration file containing its private keys. The default path for the configuration file is /etc/nncp.hjson, but nothing prevents you from storing this file anywhere else; in that case, you will have to pass the parameter -cfg /path/to/config/file to the nncp commands.
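
For example, to point any nncp command at a configuration file stored elsewhere (the path is just an illustration):

nncp-stat -cfg ~/nncp/nncp.hjson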

Generate the file like this:

nncp-cfgnew > /etc/nncp.hjson

The file contains comments; this is helpful if you want to see how the file is structured and what options exist. Never share the private keys of this file!

I recommend checking the spool and log paths, and deciding which user should use nncp. For instance, you can use /var/spool/nncp to store nncp data (waiting to be delivered or unpacked) and the log file, and make your user the owner of this directory.
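
As a reference, the relevant part of nncp.hjson could look like this (the paths are just the example above):

spool: /var/spool/nncp
log: /var/spool/nncp/log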

4.2. Public keys §

Now, generate the public keys (they are just derived from the private keys generated earlier) to share with your peers. There is a command for this that reads the private keys and outputs the public keys in a format ready to paste into the nncp.hjson file of your recipients.

nncp-cfgmin > my-peer-name.pub

You can share the generated file with anyone; this will allow them to send you files. The peer name of your system in this file is "self", you can rename it, it is just an identifier.

4.3. Import public keys §

When importing public keys, you just need to add the content generated by the nncp-cfgmin command of a peer to your own nncp configuration file.

Just copy / paste the content into the neigh structure within the configuration file, and make sure to rename "self" to the identifier you want to give to this peer.

If you want to receive data from this peer, make sure to add an attribute line incoming: "/path/to/incoming/data" for that peer, otherwise you will not be able to unpack received files.
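
To illustrate, a peer entry in the neigh structure could roughly look like this once imported and renamed; "mypeer" and the incoming path are placeholders, and the key values are the ones produced by nncp-cfgmin:

neigh: {
  self: {
    # your own public keys are already here
  }
  mypeer: {
    id: ...
    exchpub: ...
    signpub: ...
    noisepub: ...
    incoming: "/var/spool/nncp/incoming/mypeer"
  }
}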

5. Usage §

Now that your peers have exchanged keys, they are able to send data to each other. nncp is a collection of tools; let's see the most common ones and what they do:

  • nncp-file: add a file in the spool to deliver to a peer
  • nncp-toss: unpack incoming data (files, commands, file requests, emails) and generate ACKs
  • nncp-reass: reassemble files that were split in smaller parts
  • nncp-exec: trigger a pre-configured command on the remote peer, data provided on stdin is transmitted and fed to the remote command. Let's say a peer offers a "wget" service, you can use echo "https://some-domain/uri/" | nncp-exec peername wget to trigger a remote wget (see the sketch right after this list).
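
For the wget example to work, the executing peer has to declare the handle in its own configuration, inside the neigh entry corresponding to the caller. This is only a sketch with made-up paths; the -i - flags make wget read the URL list from its standard input, which is how the piped data reaches it:

exec: {
  wget: ["/usr/bin/wget", "-i", "-", "-P", "/var/nncp/downloads"]
}

Then, from the calling side:

echo "https://some-domain/uri/" | nncp-exec peername wget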

If you use the client / server model over TCP, you will also use:

  • nncp-daemon: the daemon waiting for connections
  • nncp-caller: a daemon periodically triggering client connections (it works like a crontab)
  • nncp-call: trigger a client connection to a peer
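
As an illustration of the TCP mode, with made-up host and peer names: the listening side runs the daemon on the default nncp port, and the calling side declares the address in the remote's neigh entry before calling it.

On the listening peer:

nncp-daemon -bind "[::]:5400"

On the calling peer, in the neigh entry of the remote (the address name "internet" is arbitrary):

addrs: {
  internet: "mypeer.example.com:5400"
}

Then:

nncp-call mypeer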

If you use asynchronous file transfers, you will use:

  • nncp-xfer: exports packets to / imports packets from a directory for async transfer

6. Workflow (how to use) §

6.1. Sending files §

To send a file, just use nncp-file file-path peername: and the original file name will be used when unpacked; you can also append, after the colon, the name you want the file to have once unpacked.

A directory can be used as a parameter instead of a file; it will automatically be stored in a .tar file for delivery.

Finally, you can send a stream of data by using - as the file path (stdin is read), but you have to give a name to the resulting file.
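
A few illustrative invocations, with "mypeer" and the file names as placeholders:

nncp-file report.pdf mypeer:
nncp-file report.pdf mypeer:report-2024.pdf
nncp-file ~/photos mypeer:
echo "hello" | nncp-file - mypeer:hello.txt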

6.2. Sync and file unpacking §

This was not really clear from the documentation, so here is how to best use nncp when exchanging data as plain files; the destination is /mnt/xfer in my examples (it can be an external drive, a syncthing share, an NFS mount...):

When you want to sync, always use this scheme:

  1. nncp-xfer -rx /mnt/xfer
  2. nncp-toss -gen-ack
  3. nncp-xfer -keep -tx -mkdir /mnt/xfer
  4. nncp-rm -all -ack
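
Wrapped into a small script (assuming /mnt/xfer is already mounted), the same steps with a comment on each:

#!/bin/sh
set -e
MEDIUM=/mnt/xfer
# 1. import packets found on the medium into the local spool
nncp-xfer -rx "$MEDIUM"
# 2. unpack what was received and generate ACK packets
nncp-toss -gen-ack
# 3. export outgoing packets (including the ACKs); -keep retains them until acknowledged
nncp-xfer -keep -tx -mkdir "$MEDIUM"
# 4. drop the local ACK packets so they are not transmitted again next time
nncp-rm -all -ack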

This receives files using nncp-xfer -rx; the files are stored in the nncp spool directory. Then, with nncp-toss -gen-ack, the files are unpacked into the "incoming" directory of each peer who sent files, and ACKs are generated (older versions of nncp-toss do not handle ACKs, you need to generate the ACKs before and remove them after tx, with nncp-ack -all 4>acks and nncp-rm -all -pkt < acks).

nncp-xfer -tx will put in the directory the data you want to send to peers, and also the ACK files generated by the rx step which happened before. The -keep flag is crucial here if you want to make use of ACKs: with -keep, the sent data are kept in the spool until you receive the ACK for them, otherwise the data are removed from the spool and cannot be retransmitted if the files were not received. Finally, nncp-rm will delete all ACK files so you will not transmit them again.

7. Explanations about ACK §

From my experience and documentation reading, there are three cases with the spool and ACK:

  • the shared drive is missing the files you sent (which are still in the spool) and you received no ACK: the next time you run nncp-xfer, the files will be transmitted again
  • when you receive ACK files for files in spool, they are deleted from the spool
  • when you do not use -keep when sending files with nncp-xfer, the files are not kept in the spool after the transfer, so you will not be able to know what to retransmit if ACKs are missing

ACKs do not clean up by themselves, you need to use nncp-rm. It took me a while to figure this out; my nodes kept sending ACKs to each other repeatedly.
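
If you want to check what is still waiting in the spool (and therefore not acknowledged yet), nncp-stat lists the queued packets per peer:

nncp-stat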

8. Conclusion §

I really like nncp as it allows me to securely transfer files between my computers without having to care whether they are online. Rsync is not always an option because both the sender and receiver need to be up at the same time (and properly reachable).

The way files are delivered is also practical for me: as I already explained above, files are unpacked into a per-peer directory, instead of me having to remember that I moved something onto a shared drive. This removes the doubt about files sitting in a shared drive: why is it there? Why did I put it there? What was its destination??

I played with various S3 storage to exchange nncp data, but this is for another blog post :-)

9. Going further §

There are more features in nncp; I did not play with all of them.

You can define "areas" in addition to peers, you can get email notifications as a confirmation when a remote peer receives data from you, you can request remote files, etc. It is all in the documentation.

I have the idea of using nncp on an SMTP server to store incoming emails encrypted until I retrieve them (I am still working on improving the security of email storage), stay tuned :)