Ryan Rueger

ryan@rueg.re / me.jpg
$ curl -L rueg.re/pgp | gpg --import -

Installing and benchmarking Scott's ad-hoc finite field arithmetic

TLDR: Copy-pastable command at end

Typically, the practical speed of an implementation of an isogeny-based cryptosystem is governed by the underlying finite field arithmetic used. Whilst there are general-purpose libraries for this (like gmp), there may be something to be gained by switching to ad-hoc implementations generated for the specific prime used in the given scheme.

A recent paper by Michael Scott describes a methodology to generate such ad-hoc implementations for arbitrary primes[1]. This is used by the latest iteration of the isogeny-based signature scheme SQIsign, a candidate in the second round of NIST’s competition for additional post-quantum digital signatures.

Because it is very new, the software accompanying Scott’s paper has likely not been packaged for the operating system you are using, and because it relies on a few other tools, it is perhaps a little messy to install.

Since the procedure generates a single portable field.c library, the “compilation” step can be done on any other operating system or even architecture. This makes docker a natural candidate for synthesising the code and then performing some timings.

Step 1 Installing the docker package for your distribution.

Because docker has an enterprise and a non-enterprise offering, there are differently named packages available, and from different sources.

On ubuntu, for example, there is a community-maintained package called docker.io in the universe repository. However, one can also install docker from sources provided directly by docker themselves, by adding a separate docker-specific repository.

I would recommend installing the docker package that is natively supplied by your distribution’s repositories. Check out this repology link to find the name (it is docker, docker.io or docker-ce for “community edition”, depending on the distribution and source).

On ubuntu you would need to execute

sudo apt install docker.io

On archlinux

sudo pacman -S docker
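
Regardless of which package you pick, a quick sanity check that the client is installed is to ask it for its version (this only tests the docker client binary, not the daemon):

docker --version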

Step 2 Enabling the docker daemon

This is rather straightforward. Some distributions might already do this by default when installing the package.

To test whether the docker daemon is running you can execute

sudo docker container ls

If the response is

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Then the service is running. Else it will return

docker: Cannot connect to the Docker daemon at unix:///XYZ/docker.sock. Is the docker daemon running?

To permanently enable the daemon, call

sudo systemctl enable --now docker.service

To enable the daemon just for this boot, call

sudo systemctl start docker.service
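
If you want to query the daemon’s state directly rather than inferring it from the error message above, systemd can report it (assuming your distribution uses systemd):

systemctl is-active docker.service

This prints active if the daemon is running, and inactive or failed otherwise.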

After having installed docker and enabled the docker daemon, running

sudo docker run -it --rm -v /tmp/host:/tmp/container archlinux

will drop you into a root shell of a completely ephemeral docker container using the base archlinux image.

The flag -v /tmp/host:/tmp/container tells docker to mount /tmp/host on the host machine to /tmp/container inside the container. Because --rm was passed, the container will delete itself after you exit: only the files that were saved to /tmp/container inside the container will remain at /tmp/host on the host machine.

Docker’s philosophy is that containers should be ephemeral instances of an image. An image defines the “starting point” (e.g. all of the files in the root filesystem, the packages installed), and when you run an image with a command you create an instance called a container. After the command has exited in the container, the container can now be deleted again. It is ephemeral.
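
To make the distinction concrete, docker has separate subcommands for listing the two:

sudo docker image ls         # the images (templates) available locally
sudo docker container ls -a  # the containers (instances), including stopped ones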

Using containers in a persistent way If we start a container without --rm, the container will remain after exiting. For example, let’s start a container that creates a file and then immediately exits. We then see that the container still exists.

$ sudo docker run -it --name mycontainer archlinux touch /hello_world
$ sudo docker container ls -a --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
CONTAINER ID   IMAGE          NAMES
ddc44d7d7300   archlinux      mycontainer

It is not running, but it also hasn’t been deleted. If we were to start the container again, we would see that our file hello_world still exists. That is, the stopped container still contains the data it had when it was running.

Strictly speaking this stopped container could be started again with docker start, but in docker’s philosophy it is now a dead container; instead, its changes can be committed to create a new image that layers on top of the initial starting image.

So, we first commit the changes to create a new image, and then we run that new image.

$ sudo docker commit mycontainer local:myimage
$ sudo docker run -it --name mycontainer2 local:myimage ls /hello_world
/hello_world # No error, the file exists!
$ # Optionally delete the first container
$ sudo docker container rm mycontainer
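
The committed image itself also sticks around; once it is no longer needed it can be removed as well

sudo docker image rm local:myimage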

We don’t really want to be committing containers to new images, though. We want to start a container, do everything that we need to do, store the output somewhere, exit the container and then delete it.

Using ephemeral containers To ensure a container deletes itself after execution, we simply pass --rm

$ sudo docker run -it --name mycontainer --rm archlinux touch /hello_world
$ sudo docker container ls -a --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
# Nothing

However, we see that since the container has been deleted, our file hello_world is also gone. We fix this problem by mounting a directory inside the container to the outside

$ sudo docker run -it --name mycontainer -v /tmp/host:/tmp/container --rm archlinux touch /tmp/container/hello_world
$ sudo docker container ls -a --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
# Nothing
$ sudo ls /tmp/host/hello_world
/tmp/host/hello_world # No error, the file exists!

Step 3 Installing modarith and its dependencies

Now we just need to install modarith and its dependencies.

First start the container. This will drop you into an interactive shell

sudo docker run -it --rm -v /tmp/host:/tmp/container archlinux bash

Now inside the container, install the dependencies we need

pacman -Sy --noconfirm git gcc python clang fakeroot binutils sudo make debugedit

We will need to install one package from the Arch User Repository (AUR). However, for security reasons, makepkg refuses to run as the root user, so we must create a new user with sudo privileges. This is a bit of a chicane.

# Still inside the same container
useradd -m aur
echo "aur ALL=(ALL) ALL, NOPASSWD: ALL" >> /etc/sudoers
# Install the `libcpucycles` package from the AUR
runuser -l aur bash -c "cd; git clone https://aur.archlinux.org/libcpucycles.git; cd libcpucycles; makepkg -si --noconfirm"
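
To check that the AUR build actually installed something, you can ask pacman which files the package provides (we expect a cpucycles header and a libcpucycles shared library somewhere under /usr):

pacman -Ql libcpucycles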

The other dependency of modarith is addchain, but we can install this as root.

cd ~
git clone https://github.com/mmcloughlin/addchain
cd addchain
bash install.sh
# Put the addchain executable in the PATH
PATH=$PATH:/root/addchain/bin
cd -
# Download `modarith`
git clone https://github.com/mcarrickscott/modarith
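
Before moving on, it is worth a quick sanity check that the addchain binary is really visible on the PATH:

command -v addchain
# Should print /root/addchain/bin/addchain; if it prints nothing, re-check the PATH line above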

Now we are finally ready to use modarith. The following example uses the algorithm for Montgomery-friendly primes (of the form $c \cdot 2^f - 1$) and for architectures with word size 64.

cd modarith
python monty.py 64 "33*2**503-1"

Scott also provides pseudo.py for generating arithmetic modulo pseudo-Mersenne primes (of the form $2^f - c$).
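
For example, assuming pseudo.py takes the same word-size and prime arguments as monty.py, generating code for a pseudo-Mersenne prime such as $2^{255} - 19$ (an illustrative choice, not one taken from the paper) would look like

python pseudo.py 64 "2**255-19"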

After doing a lot of work, the script will produce two files, field.c and time.c. The first, field.c, contains the library which we can use in our application; the second, time.c, is a benchmarking program.

Recall, everything inside the container will be deleted at the end, so we must copy these files into /tmp/container because this directory is mounted to /tmp/host on the host machine.

cp field.c time.c /tmp/container

We can now exit the docker container. Because we executed the docker commands with sudo, these files on the host will belong to root. Reclaim them as yours by calling

# On the host machine again
sudo chown -R $(whoami) /tmp/host

We can now benchmark the arithmetic on the host (note that this requires libcpucycles to be installed on the host as well) with

$ cd /tmp/host
$ gcc -march=native -mtune=native -O3 time.c -lcpucycles -o time
$ ./time

Of course, we can also perform the timing inside the docker container. Since docker processes run natively on the host hardware, the numbers obtained when benchmarking inside the docker container should reflect real-world speeds.
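
Concretely, since libcpucycles was installed inside the container, the simplest option is to run the same two commands there before exiting, from the modarith directory where field.c and time.c were generated:

# Still inside the container
gcc -march=native -mtune=native -O3 time.c -lcpucycles -o time
./time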

Summary

Summarising as one very long copy-and-pastable command. Replace PRIME with the prime number you are interested in (e.g. 33*2**503-1)

sudo docker run --rm -v /tmp/host/:/tmp/container/ archlinux bash -exc '
pacman -Sy --noconfirm git gcc python clang fakeroot binutils sudo make debugedit
useradd -m aur; echo "aur ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
runuser -l aur bash -c "cd; git clone https://aur.archlinux.org/libcpucycles.git; cd libcpucycles; makepkg -si --noconfirm"
cd; git clone https://github.com/mmcloughlin/addchain; cd addchain; bash install.sh; PATH=$PATH:/root/addchain/bin
cd; git clone https://github.com/mcarrickscott/modarith; cd modarith
python monty.py 64 "PRIME" || echo
cp -v field.c time.c /tmp/container
'

We passed -e to bash, meaning that the container will halt at any failure (that is, any command returning a non-zero exit status). However, because monty.py produces a non-zero exit status even in cases where everything is successful, we must mask the return value. Since echo always returns 0, calling || echo makes the full command return true, and so the container does not halt.

Recall, || is the or operator of bash, and so cmd || echo will always have a return value of zero (which marks success in bash).
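
As a tiny illustration of this masking behaviour:

$ false || echo "masked"
masked
$ echo $?
0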

Footnotes

1. To be more precise, Montgomery-friendly primes or pseudo-Mersenne primes. These are primes of the form $c \cdot 2^f - 1$ or $2^f - c$ respectively. ↩︎