In the #atomic channel, I’ve seen a few complaints about slow speeds when trying to fetch the Fedora Atomic Workstation (FAW) content from the official sources, especially when connecting from Europe. This is a twofold problem. First, the official FAW content is hosted in a datacenter in Phoenix, Arizona in the United States. Second, the FAW content is not mirrored as part of the official Fedora mirror network.

It is discouraging to see users who want to participate in the Project Atomic community being frustrated with slow speeds, so I decided I would investigate how to mirror the content in the European region.

Building A Host and Retrieving Content

Since I already had a Digital Ocean account and they offer Fedora 27 Atomic Host as a VM option, I decided to explore setting up a mirror using one of their droplets. I booted an F27AH droplet and immediately ran rpm-ostree upgrade to get the OS up to date. With that done, I could start to think about how to retrieve and host the FAW content.

Thankfully, the ostree model for distributing content allows for easy mirroring of a repo using native functionality and this is covered nicely in the documentation. I initialized a repo and began the mirroring process.

# mkdir -p /var/srv/workstation
# cd /var/srv/workstation
# ostree --repo=repo init --mode=archive
# ostree --repo=repo remote add --set gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary onerepo
# ostree --repo=repo pull --mirror --depth=1 onerepo:fedora/27/x86_64/workstation

I’ll point out here that it is important to use the --mirror flag when setting up the mirror for this content, as this will properly set up the refs in the heads/ directory rather than the remotes/ directory. Quoting the ostree pull manpage:

This makes the target repo suitable to be exported for other clients to pull from as an ostree remote.

Additionally, I specified a depth of ‘1’ for the pull operation to fetch only the very latest FAW content. I chose to do this 1) to reduce the amount of time required to pull/push the content around and 2) because it is not necessary to mirror every commit in the repo for it to operate as an upgrade target. This means that anyone using this ‘pirate’ repo will only be able to upgrade to the latest available commit in the mirror, rather than being able to deploy any commit in the history.

The process of mirroring the content from the official sources took a good amount of time, but I ended up with about 2 GB of content spread across 112828 objects, all contained in two commits:

# ostree --repo=repo log fedora/27/x86_64/workstation
commit 88ef5feb77aebc7fec3e4fe6c17c490d1b5dc076927f07aa964a6da6fd336970
Date:  2018-03-01 16:15:51 +0000
Version: 27.86
(no subject)

commit 0feaa33a9e102a24cdc4a18e6a77da218f2d64cec6113ac173196310d1e5ebfc
Date:  2018-02-27 17:03:09 +0000
Version: 27.85
(no subject)

<< History beyond this commit not fetched >>

Serving Up the Mirror via HTTP

With the content pulled to my VM, I needed to make it available via HTTP and since I’m on F27AH, I needed a container to run a web server. Despite having no experience using it, I decided I would work with the nginx container to serve up the mirror content.

After pulling the container, I spent some time learning how to configure nginx and the right way to invoke the container with the content mounted into it.

My default.conf file looked like this:

$ cat /etc/nginx/conf.d/default.conf
server {
    listen       80;

    root   /usr/share/nginx/html;
    location /repo {
        root /usr/share/nginx/html;
        autoindex on;

Additionally, I had to setup the SELinux labeling for the mounts to be passed into the container, so I did:

$ sudo chcon -R -h -t container_file_t /etc/nginx/conf.d/
$ sudo chcon -R -h -t container_file_t /var/srv/workstation/repo/

Then I was finally able to invoke the container like this:

$ sudo docker run \
       -v /etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf:ro \
       -v /var/srv/workstation/repo/:/usr/share/nginx/html/repo:ro \
       -d -p 80:80 \

This allowed me to successfully access the ostree content over HTTP!

Securing the Transport Layer with Let’s Encrypt

I was encouraged with my success thus far and wanted to take the next step of securing the transport layer via HTTPS. Of course, I was going to use Let’s Encrypt to get my free SSL certificate. The question was how to do it on an Atomic Host using containers.

The great folks at the EFF have created a project called certbot that automates the process of requesting an SSL cert from Let’s Encrypt. And they even have a container that we can use!

The certbot container directions are pretty clear, but I still tried them a few times using the staging environment to make sure I understood how the process would work.

The resulting docker run command looked like this:

$ sudo docker run -it --rm \
       -p 443:443 -p 80:80 \
       --name certbot \
       -v /etc/letsencrypt:/etc/letsencrypt \
       -v /var/lib/letsencrypt-lib/:/var/lib/letsencrypt \
       certonly

As before, I also had to set the SELinux label on my mounts to container_file_t.

The process was successful and I ended up with the necessary certificates in /etc/letsencrypt.

Now I needed to take those certificates and modify the nginx config file to use them. I studied the certbot documentation to understand where the certificates were located and how to configure nginx to use them.
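For reference, certbot keeps the live certificate material for each domain under /etc/letsencrypt/live/<domain>/, and nginx is pointed at the fullchain.pem and privkey.pem files there. A sketch of the two directives, using example.com as a placeholder for the real domain:

```nginx
# certbot's standard layout; 'example.com' stands in for the actual domain
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```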

The result was a config file that looked like this:

$ cat /etc/nginx/conf.d/default.conf
server {
    listen       80;
    return 301 https://$host$request_uri;
}

server {
    listen      443 ssl;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    ssl_prefer_server_ciphers on;

    ssl_dhparam /usr/share/nginx/dhparams.pem;

    root /usr/share/nginx/html/;
    location /repo {
        autoindex on;
    }
}

The first server stanza tells nginx to listen on port 80 but redirect the client to the HTTPS version of the URI. The second server stanza tells nginx to listen on port 443 and which certificates to use. I also found some documentation about how to configure nginx to deploy Diffie-Hellman for TLS that seemed like good advice.

The advice instructed me to generate a strong DH group via openssl and configure nginx to disable export grade cipher suites. These are reflected in the server stanza via the parameters ssl_ciphers and ssl_dhparam.
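The generation step itself isn’t shown above; a minimal sketch (the filename matches the one mounted into the container later in this post):

```shell
# Generate the DH group referenced by the ssl_dhparam directive.
# -dsaparam makes generation fast; drop it if you prefer a traditional
# safe-prime group and don't mind a much longer wait at 2048 bits.
openssl dhparam -dsaparam -out dhparams.pem 2048
```

For the ssl_ciphers side of the advice, any cipher list that excludes export-grade and anonymous suites will do; consult current guidance for a concrete value rather than copying one blindly.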

The last thing to do is put it all of this together to run the nginx container:

$ sudo docker run --restart always \
       -v /etc/nginx/dhparams.pem:/usr/share/nginx/dhparams.pem:ro \
       -v /etc/letsencrypt:/etc/letsencrypt:ro \
       -v /etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf:ro \
       -v /var/srv/workstation/repo/:/usr/share/nginx/html/repo:ro \
       -d -p 443:443 -p 80:80 \

I checked that accessing the mirror over both HTTP and HTTPS worked (and that HTTP always redirected to HTTPS).

Automating the Mirror

The last thing I wanted to do was to automate as much of this as possible…using containers, of course!

After a few experiments, I was able to create a solution using a container, a bash script, and some systemd functionality.

I started with a bash script that would handle the mirroring of the content. This was just a wrapper around some of the ostree commands I had used earlier.

$ cat
set -xeuo pipefail

# You need to define a 'prod' and 'stage' directory for the script to run
# properly.  If you don't pass in those arguments to the script, it assumes
# you have your directories at '/host/{prod,stage}'.  This is because the
# script is normally executed in a container with directories bind mounted
# into the container.
prod=${1:-/host/prod}
stage=${2:-/host/stage}
ref="fedora/27/x86_64/workstation"

if [[ ! -d "$prod" ]] || [[ ! -d "$stage" ]]; then
    echo "Must have 'stage' and 'prod' directories present"
    exit 1
fi
# Add the source of truth, mirror the latest commit, prune anything older
# than 7 days, generate the summary and then rsync to prod.
# NOTE: because this is typically run from a container, we assume the
# location of the 'rsync-repos' script
ostree --repo=$stage remote add --if-not-exists --set gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary onerepo
ostree --repo=$stage pull --mirror --depth=1 onerepo:$ref
ostree --repo=$stage prune --keep-younger-than="7 days ago" $ref
ostree --repo=$stage summary -u
/root/rsync-repos --src $stage --dest $prod

We define ‘stage’ and ‘prod’ locations for the mirroring. This is done to avoid any potential race conditions where a client may be updating from the mirror at the same time new content is being pulled in (or pruned). So we mirror content to the ‘stage’ location, then use the rsync-repos script to intelligently sync the data from ‘stage’ to ‘prod’.

Next up was to create a container to run this script for us. (I mean, I could have just stuck the script in /usr/local/bin, but where is the excitement in that?)

$ cat Dockerfile
FROM fedora:27
LABEL maintainer="Micah Abbott <>"
RUN dnf -y install ostree python2 rsync && \
    dnf clean all && \
    curl -L -o /root/rsync-repos && \
    chmod +x /root/rsync-repos
COPY /root/
ENTRYPOINT ["/root/"]

Really simple, no? Just installing some packages, copying in the rsync-repos script, and setting an entrypoint.

When we invoke the container, we’ll mount in our ‘stage’ and ‘prod’ locations so that the script knows how to find them, like this:

$ sudo docker run \
       -v /var/srv/workstation/stage:/host/stage \
       -v /var/srv/workstation/prod:/host/prod \

The last part of the solution is the systemd portion. I knew I could configure a systemd.timer to kick off a systemd.service, so I went looking for examples of both. It was a bit harder to find an example of a systemd.service running a container with mounts, but I was able to sort that all out. And I used the rpm-ostreed-automatic.timer as reference for my systemd.timer.

$ cat piratemirror.service
[Unit]
Description=FAW Pirate Mirror

[Service]
Type=oneshot
EnvironmentFile=/etc/sysconfig/piratemirror
ExecStartPre=-/usr/bin/docker \
              pull \
ExecStart=/usr/bin/docker \
          run \
          $STAGE_MNT \
          $PROD_MNT \

$ cat piratemirror.sysconfig
STAGE_MNT="-v /path/to/stage/directory:/host/stage "
PROD_MNT="-v /path/to/prod/directory:/host/prod "

$ cat piratemirror.timer
[Unit]
Description=FAW Pirate Mirror Timer

[Timer]
OnBootSec=1h
OnUnitInactiveSec=12h

[Install]
WantedBy=timers.target

The piratemirror.service is a oneshot service that reads the config file at /etc/sysconfig/piratemirror to populate the values of STAGE_MNT and PROD_MNT used to run the container. These values provide the volume mounts for the container (including the actual -v flag).
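The reason one variable can carry the -v flag plus its argument is that systemd expands an unquoted $VAR with word splitting, just like an unquoted shell expansion, so docker sees two separate arguments. A quick shell demonstration of the same splitting:

```shell
# STAGE_MNT holds flag and argument in one string (value from the
# sysconfig example above); unquoted expansion splits it into words
STAGE_MNT="-v /path/to/stage/directory:/host/stage "
set -- $STAGE_MNT            # unquoted: word-split on whitespace
echo "argc=$# first=$1"      # prints: argc=2 first=-v
```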

As you might have guessed, piratemirror.sysconfig gets copied to /etc/sysconfig/piratemirror.

And finally, the piratemirror.timer defines a timer that will start 1h after boot and will run again every 12h.

With all those in place, the mirror is basically running itself! Here’s what the piratemirror.service looks like in action:

$ sudo systemctl status piratemirror.service
● piratemirror.service - FAW Pirate Mirror
   Loaded: loaded (/etc/systemd/system/piratemirror.service; static; vendor preset: disabled)
   Active: inactive (dead) since Thu 2018-03-15 12:48:13 UTC; 2h 32min ago
  Process: 15460 ExecStart=/usr/bin/docker run $STAGE_MNT $PROD_MNT (code=exited, status=0/SUCCESS)
  Process: 15448 ExecStartPre=/usr/bin/docker pull (code=exited, status=0/SUCCESS)
 Main PID: 15460 (code=exited, status=0/SUCCESS)
      CPU: 73ms

Mar 15 12:47:59 f27ah-ams3-01.localdomain docker[15460]: 2 metadata, 0 content objects fetched; 1 KiB transferred in 7 seconds
Mar 15 12:47:59 f27ah-ams3-01.localdomain docker[15460]: + ostree --repo=/host/stage/ prune '--keep-younger-than=7 days ago' fedora/27/x86_64/workstation
Mar 15 12:48:02 f27ah-ams3-01.localdomain docker[15460]: Total objects: 120474
Mar 15 12:48:02 f27ah-ams3-01.localdomain docker[15460]: No unreachable objects
Mar 15 12:48:02 f27ah-ams3-01.localdomain docker[15460]: + ostree --repo=/host/stage/ summary -u
Mar 15 12:48:02 f27ah-ams3-01.localdomain docker[15460]: + /root/rsync-repos --src /host/stage/ --dest /host/prod/
Mar 15 12:48:13 f27ah-ams3-01.localdomain docker[15460]: Executing: rsync -rlpt --include=/objects --include=/objects/** --include=/deltas --include=/deltas/** --exclude=* /host/stage/ /host/prod/ --ignore-exist
Mar 15 12:48:13 f27ah-ams3-01.localdomain docker[15460]: Executing: rsync -rlpt --include=/refs --include=/refs/** --include=/summary --include=/summary.sig --exclude=* /host/stage/ /host/prod/ --delete --ignore
Mar 15 12:48:13 f27ah-ams3-01.localdomain docker[15460]: Executing: rsync -rlpt --include=/objects --include=/objects/** --include=/deltas --include=/deltas/** --exclude=* /host/stage/ /host/prod/ --ignore-exist
Mar 15 12:48:13 f27ah-ams3-01.localdomain systemd[1]: Started FAW Pirate Mirror.


After all that, I’ve managed to set up an automated mirror of the Fedora 27 Atomic Workstation ostree content in the European region! The last thing to do is to start using it! Assuming you are already running Fedora 27 Atomic Workstation, you can use the following commands to start using the mirror:

# ostree remote add --set gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary pirate
# rpm-ostree rebase pirate:fedora/27/x86_64/workstation

(Well, I still need to automate the Let’s Encrypt renewal process, but maybe that will be another post!)

I’ve made most of the code available at

In the future, I may describe some of the failures I ran into when exploring this project, but for now this project is complete!

Let me know what you think!

DISCLAIMER: The resulting mirror at has no official affiliation with the Fedora Project or Project Atomic. There is no official support for the mirror. By using this mirror, you accept all the risks that come with its use.