bitcoin++23 Workshop: Writing a NixOS Module for your_app

Saturday, October 7, 2023

Instructions for my Writing a NixOS Module for your_app workshop. Slides can be found here.

The workshop tries to convey the basics of writing a NixOS module. In the end, participants should be able to write a basic NixOS module including a systemd service for their project, be able to define and declare NixOS options, and be familiar with basic systemd hardening and running containers on NixOS.

A 360p recording of me giving the workshop can be found here.

Task 0 - Setup and starting the VM

This workshop makes use of GitHub Codespaces to spin up a personal environment to work in. GitHub currently offers 120 hours of free CPU time per month for Codespaces. All you need is a GitHub account. You can find the code for this workshop in 0xb10c/btcpp23-nixos-modules-workshop.

Open in GitHub Codespaces

This will open a new tab with a VSCode interface. The codespace automatically starts a NixOS virtual machine. This might take a couple of minutes. It will let you know when the VM is ready.

A downside of using GitHub Codespaces is that we don’t have KVM support for our NixOS VM and have to fall back to emulation, which is considerably slower than a KVM VM would be. Running nixos-rebuild switch inside the NixOS VM is quite slow and not recommended for this workshop. Rather, run sh on the host, which deploys the configuration to the VM.

I’ve marked commands with host$ <command> when you should run them from the GitHub Codespaces shell. Commands marked with vm$ <command> should be run from the VM shell. You can use ssh vm to log in to the VM.


For the workshop, I’ve written a very basic Rust program called your_app. Imagine this is a project you’ve been working on, and you now want to write a NixOS module for it. your_app starts a web server on a user-defined port and responds to requests. To show how easy it is to interact with other NixOS modules and services, your_app communicates with the RPC interface of a Bitcoin Core node.

$ your_app --help
your_app 0.1.0

USAGE:
    your_app --rpc-host <RPC_HOST> --rpc-port <RPC_PORT> --rpc-user <RPC_USER>
        --rpc-password <RPC_PASSWORD> <SUBCOMMAND>

OPTIONS:
    -h, --help
            Print help information

        --rpc-host <RPC_HOST>
            The host of the Bitcoin Core RPC server

        --rpc-password <RPC_PASSWORD>
            A password for authentication with the Bitcoin Core RPC server

        --rpc-port <RPC_PORT>
            The port of the Bitcoin Core RPC server

        --rpc-user <RPC_USER>
            A user for authentication with the Bitcoin Core RPC server

    -V, --version
            Print version information

SUBCOMMANDS:
    help      Print this message or the help of the given subcommand(s)
    server    Run the app with a web server

Task 1 - First steps

In this task, we enable the Bitcoin Core service that ships with NixOS and learn how to inspect a systemd service.

1.1: Enable the regtest Bitcoin Core node

In the configuration.nix file you’ll find a bitcoind service called regtest. This service is defined in the services/networking/bitcoind.nix module1. Searching for services.bitcoind in the NixOS options search shows the options that can be set. For this workshop, I’ve configured a Bitcoin Core node on a local regtest test network with an RPC server listening on port 18444.

To enable the Bitcoin Core node, change the enable = false; into enable = true;. For the changes to take effect, run sh from the host to rebuild the VM. NixOS will automatically generate, enable, and start the systemd service defined for this node in the NixOS bitcoind.nix module.
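After the change, the relevant part of configuration.nix could look roughly like this (a sketch; the surrounding options in the file may differ):

```nix
services.bitcoind."regtest" = {
  enable = true; # changed from false
  # regtest network with the RPC server on port 18444,
  # as prepared for this workshop
};
```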

host$ sh

1.2: Using systemd tools

Once the system is rebuilt, you can inspect the service with the systemctl status command. The status command only shows the last few log lines. If you want to see more lines, use the journalctl tool.

vm$ systemctl status bitcoind-regtest.service
vm$ journalctl --pager-end --follow --unit bitcoind-regtest
vm$ journalctl -efu bitcoind-regtest


  1. How many log lines does systemctl status bitcoind-regtest show?
  2. Using systemctl status bitcoind-regtest also shows the generated *.service file. Can you find the -datadir parameter passed to Bitcoin Core? Where is the datadir?

Task 2 - defining and declaring options

The NixOS module for your_app is located in modules/your_app/default.nix. I’ve already defined a your_app_server and a your_app_backup systemd service in the config section of the NixOS module. I’ve left a few comments on the options that already exist. The places where you’ll need to fill in something are marked with # FIXME: Task X.X. You’ve successfully completed the task when you can reach the your_app web server from the host machine (from outside the VM).

2.1: Declare options for your_app_server

When running your_app server, it expects the following command line arguments from us:

  • --rpc-host and --rpc-port for the location of the Bitcoin Core RPC server to connect to
  • --rpc-user and --rpc-password for authenticating with the Bitcoin Core RPC server
  • and a port on which the web server will start to listen on

Your task is to declare options for these command line arguments in modules/your_app/default.nix:

  • use the mkOption function - documentation can be found in the NixOS manual on mkOption
  • there’s a list of types you can use in the NixOS manual Option Types section
  • think about reasonable defaults for the options. Defaulting to null helps NixOS complain when an option is not set by the user.
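A declaration for two of these options might look like the following sketch (assuming lib is in scope; the option names rpcHost and rpcPort are illustrative, use whatever names the FIXME comments in the module suggest):

```nix
options.services.your_app = {
  rpcHost = lib.mkOption {
    type = lib.types.str;
    default = "127.0.0.1";
    description = "Host of the Bitcoin Core RPC server.";
  };
  rpcPort = lib.mkOption {
    # nullOr lets us default to null, so NixOS complains
    # if the user forgets to set the port
    type = lib.types.nullOr lib.types.port;
    default = null;
    description = "Port of the Bitcoin Core RPC server.";
  };
};
```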

2.2: Using the declared options

We can use the values from the options declared in 2.1 to define options from other NixOS modules such as, for example, the systemd services module. I’ve prepared a systemd service and have already defined a few options. It’s your task to fill in the command line arguments (marked with FIXME: 2.2) in serviceConfig.ExecStart using the options you declared in 2.1.

Hint 1: Help, where do I start?
ExecStart is defined with a multi-line string. Each line contains a new argument. You can insert Nix expressions into strings with ${ <nix expression> }. If you have defined an option called username in 2.1, you could access it with ${ cfg.username }. Here, cfg is a shorthand defined in the let .. in at the top of the file.
Hint 2: error: cannot coerce an integer to a string
Integers and strings don’t mix well in Nix. You can, however, convert an integer to a string with the built-in toString function.
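Putting both hints together, a filled-in ExecStart could look roughly like this sketch. The option names (cfg.rpcHost, cfg.port, …), the pkgs.your_app package attribute, and the exact server subcommand arguments are assumptions; match them to your module:

```nix
serviceConfig.ExecStart = ''
  ${pkgs.your_app}/bin/your_app \
    --rpc-host ${cfg.rpcHost} \
    --rpc-port ${toString cfg.rpcPort} \
    --rpc-user ${cfg.rpcUser} \
    --rpc-password ${cfg.rpcPassword} \
    server --port ${toString cfg.port}
'';
```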

2.3 Enable the your_app service in configuration.nix

The your_app module is imported in configuration.nix. We can now enable the services.your_app.enable option by setting it to true. We also need to define the options we declared in 2.1:

  • set the port for the web server to 4242 (this is important, otherwise the port forwarding to the VM won’t work)
  • the Bitcoin Core RPC server listens on localhost
  • you can set the Bitcoin Core RPC server port to config.services.bitcoind."regtest".rpc.port
  • use the RPC user workshop and the password btcpp23berlin
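Taken together, the your_app block in configuration.nix might end up looking like this (again, the option names depend on what you declared in 2.1):

```nix
services.your_app = {
  enable = true;
  port = 4242; # must match the forwarded port
  rpcHost = "localhost";
  rpcPort = config.services.bitcoind."regtest".rpc.port;
  rpcUser = "workshop";
  rpcPassword = "btcpp23berlin";
};
```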

2.4: Open the firewall

By default, NixOS has a firewall that blocks incoming packets. To be able to reach the web server from the host, you’ll need to open its port in the firewall. See the allowedTCPPorts option of the NixOS firewall for more information. Similar to how you referenced the Bitcoin Core RPC port in 2.3, you can reference the port option of the your_app service here.
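A sketch of such a firewall rule, assuming the option from 2.1 is called services.your_app.port:

```nix
networking.firewall.allowedTCPPorts = [ config.services.your_app.port ];
```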

2.5: sh

To rebuild and apply the configuration, run sh. NixOS might complain about errors in your configuration or module. Try to fix them, or ask someone next to you for help. Looking at the logs of the your_app_server.service systemd service might help.

Once you’ve managed to switch to your new configuration, try accessing the your_app web server from your host system.

vm$ journalctl -efu your_app_server
host$ curl localhost:4242

Task 3 - Secrets, security, and hardening

This task covers basic security and systemd hardening. However, this is likely not enough for a production setup.

3.1: RPC password is world-readable!

Your NixOS system configuration is readable by everyone with access to /nix/store/. Additionally, the generated systemd service configurations are world-readable, too. This means the RPC password set in 2.3 is now world-readable. Can you find it?


  1. Where did you find the RPC password?
  2. How can this be avoided?
Hint for question 2
Search for passwordFile in the NixOS options search.

3.2: systemd-analyze security

Under the “Principle of Least Privilege”, our newly set up systemd service should only have the minimum needed privileges. Systemd offers a bunch of sandboxing and hardening features that we can use to reduce the privileges of the your_app service.

The default systemd service options are quite lax. You can use

vm$ systemd-analyze security

to let systemd list an “exposure” score (0 = good, 10 = bad) for all loaded services. A high “exposure” score does not mean that the service isn’t sandboxed at all, nor that it is vulnerable to attacks; it indicates that there is likely room for improvement by applying additional hardening settings to the service. Likewise, a perfect score doesn’t mean the service is completely secure. A lower score simply indicates that the service has fewer privileges.


  1. What exposure score is shown for bitcoind-regtest.service and your_app_server.service?
  2. Which service has the lowest score?

3.3: your_app hardening

There is room for improvement in the “exposure” score of your_app. You can use

vm$ systemd-analyze security your_app_server.service

to list hardening options that can be enabled to improve the score. For inspiration, take a look at the nix-bitcoin defaultHardening options.
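As a starting point, a few common systemd hardening settings could be merged into the service’s serviceConfig. These are real systemd options, but which ones your_app tolerates is something you’ll need to test:

```nix
systemd.services.your_app_server.serviceConfig = {
  # isolate the service from the rest of the system
  PrivateTmp = true;
  PrivateDevices = true;
  ProtectSystem = "strict";
  ProtectHome = true;
  # prevent privilege escalation and W+X memory
  NoNewPrivileges = true;
  MemoryDenyWriteExecute = true;
  # only allow IPv4/IPv6 sockets
  RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
};
```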

Set the hardening options in modules/your_app/default.nix. You need to run sh for the changes to be applied. This will also restart your_app_server.service. Check that it still starts and that the web server is still reachable from the host. If not, you might have removed too many privileges.

host$ curl localhost:4242

Task 4 - Containers

Running software in containers is also possible on NixOS. NixOS supports declarative oci-containers (i.e., Docker containers) but also allows running imperative and declarative NixOS containers. OCI-containers can be useful if there’s software not (yet) packaged for Nix. NixOS containers might be useful if you want to run multiple instances of the same service on the same machine or need a place for a quick experiment.

4.1: OCI-Containers

To demonstrate running an OCI-container, we can use the nginxdemos/hello:plain-text image. In configuration.nix you’ll find a commented plaintext-hello definition under virtualisation.oci-containers.containers. Uncomment it and set the image (just use the image name above) and ports values (use "8000:80"). More options can be found here. You will need to do a sh to start the OCI container.
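With the values from above filled in, the uncommented definition should look something like:

```nix
virtualisation.oci-containers.containers.plaintext-hello = {
  image = "nginxdemos/hello:plain-text";
  ports = [ "8000:80" ];
};
```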

Test that the web server in the container is reachable with:

vm$ curl localhost:8000


  1. Which backend is being used by default to run oci-containers? Docker or podman?

4.2: NixOS containers

NixOS containers are lightweight systemd-nspawn containers running NixOS. These can be defined imperatively and declaratively. Imperative containers are great for short-to-medium term experimental setups, while declarative containers can be used for long-running container setups.

Imperative NixOS container

To imperatively create and start a NixOS container named btcpp23 use:

vm$ nixos-container create btcpp23
vm$ nixos-container start btcpp23

You can see the container logs and log in as root with the following commands. See Imperative NixOS Containers for more.

vm$ systemctl status container@btcpp23
vm$ nixos-container root-login btcpp23

Declarative NixOS container

An empty, auto-starting, declarative NixOS container might look like:

containers.empty = {
  autoStart = true;
  privateNetwork = true;
  config = { config, pkgs, ... }: {
    # An empty NixOS container.
    system.stateVersion = "23.05";
  };
};
Feel free to copy and paste this container into the configuration.nix file and rebuild the VM. You should be able to log in with nixos-container root-login.

local install (not recommended)

0.1: Install nixos-shell

To use NixOS modules, we need a running NixOS system. In this workshop, we’ll start a NixOS qemu virtual machine with the nixos-shell tool. Don’t confuse this with the nix-shell command, which allows us to temporarily bring Nix packages into our environment. We can, however, use nix-shell to install nixos-shell, as it’s packaged in nixpkgs. This assumes you have Nix installed.

$ nix-shell -p nixos-shell

0.2: Clone the workshop repository

$ git clone https://github.com/0xb10c/btcpp23-nixos-modules-workshop

You can find the configuration for the nixos-shell VM in this repository. The vm.nix file defines qemu VM parameters such as the number of CPU cores to use, the amount of RAM to reserve, and the size of the VM’s disk. Feel free to leave all files as they are. You’ll only need to modify configuration.nix and the modules/your_app/default.nix module during the workshop.

0.3: Starting the VM and logging in

Inside the btcpp23-nixos-modules-workshop folder, start the VM by running nixos-shell. While initial VM setup might take a minute or two, all following starts should be faster.

You’ll be greeted with a message explaining how to log in and how to quit the VM. Use the root user without a password to log in. To exit the VM, either use shutdown now to shut it down, or press Ctrl+a c and type quit.

0.4: Rebuilding the system with sh (optional)

Skip this step if you plan to directly continue with Task 1.

Once logged in, you can rebuild the NixOS system from the configuration. Changes can be made to configuration.nix with your favorite editor on the host. If you are setting up the VM before the workshop, feel free to run sh once to rebuild the system.

  1. This module allows running multiple Bitcoin Core instances at the same time, which makes it a bit harder to reason about as a NixOS module beginner. ↩︎

All text and images in this work are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

