There are two components to builds.sr.ht: the job runner and the master server. Typically installations will have one master and many runners distributed on many servers, but both can be installed on the same server for small installations (though not without risk). We'll start by setting up the master server.
The master server is a standard sr.ht web service and can be installed as such. However, it is important that you configure two Redis servers: one that the runners should have access to, and one that they should not. Insert connection details for the former into builds.sr.ht's configuration file under the `redis` key. Each build runner will also need a local Redis instance running. In an insecure deployment (all services on the same server) you can get away with a single Redis instance.
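For example, the master's configuration might contain something like this (the hostname is a placeholder, and we're assuming the key lives in the `[builds.sr.ht]` section described later):

```
[builds.sr.ht]
# Connection details for the Redis instance shared with the runners
redis=redis://shared-redis.example.org:6379
```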
We suggest using an SSH tunnel to share the slave Redis instance between job runners and the master server, but you can use any method you prefer. If you use an SSH tunnel, you will likely want to use a reverse tunnel initiated from the master server, so the slaves are unable to SSH into the master server.
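For example, a reverse tunnel opened from the master might look like this (the hostname, user, and runner-side port 16379 are placeholders; the runner's own local Redis keeps the default port 6379):

```
$ ssh -f -N -R 16379:localhost:6379 tunnel@runner1.example.org
```

This makes the shared Redis instance reachable at `localhost:16379` on the runner without giving the runner any SSH access to the master.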
Let's start with a brief overview of the security model of builds.sr.ht. Because builds.sr.ht runs arbitrary user code (and allows users to utilize root), it's important to carefully secure the build environments. To this end, builds run in a sandbox which consists of:

- an unprivileged build user, inside
- a KVM-backed qemu virtual machine, inside
- a docker container holding a statically linked qemu, on
- a dedicated build runner with access to as little of the remaining infrastructure as possible.
We suggest you take similar precautions if your servers could be running untrusted builds. Remember that if you build only your own software, integration with other services could end up running untrusted builds (for example, automatic testing of patches via lists.sr.ht).
On each runner, install the `builds.sr.ht-images` and `builds.sr.ht-worker` packages.
Create two users, one for the master and one for the runners (or one for each runner if you prefer). They need the following permissions:

- The master needs read/write access to the database and to both Redis instances.
- The runners need access to the shared Redis instance and, for now, read/write access to the database.

If you are running the master and runners on the same server, you will only be able to use one user - the master user. Configure both the web service and build runner with this account. Otherwise, two separate accounts are recommended.

Note: in the future runners will not have database access.
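The database side of this might look like the following sketch (run as the PostgreSQL superuser; the user names and database name are assumptions, adjust for your deployment):

```
$ createuser --pwprompt builds_master
$ createuser --pwprompt builds_runner
$ psql -c 'GRANT ALL PRIVILEGES ON DATABASE "builds.sr.ht" TO builds_master;'
$ psql -c 'GRANT ALL PRIVILEGES ON DATABASE "builds.sr.ht" TO builds_runner;'
```

Depending on your schema, table-level grants may also be needed.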
On the runner, install the `builds.sr.ht-images` package (if building from source, this package is simply the `images` directory copied to `/var/lib/images`), as well as docker. Build the docker image like so:

```
$ cd /var/lib/images
$ docker build -t qemu -f qemu/Dockerfile .
```
This will build a docker image named `qemu` which contains a statically linked build of qemu and nothing else.
A `genimg` script is provided for each image which can be run from a working image of that guest to produce a new image. You need to manually prepare a working guest of each image type (that is, to build the Arch Linux image you need a working Arch Linux installation to bootstrap from). Then you can run the provided `genimg` to produce the disk image. A `build.yml` file is also provided for each image to build itself on your build infrastructure once you have it set up. It's recommended that you set up cron jobs to build fresh images frequently.
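As a sketch, you could resubmit each image's own manifest on a schedule with a small script like the one below. The `/api/jobs` endpoint and token-based `Authorization` header follow builds.sr.ht's REST API, but the hostname, token variable, and image path are placeholders:

```
#!/bin/sh
# Resubmit the Arch Linux image's manifest to the master server.
manifest="$(jq -Rs '{manifest: .}' < /var/lib/images/archlinux/build.yml)"
curl -H "Authorization: token $BUILDS_TOKEN" \
     -H "Content-Type: application/json" \
     -d "$manifest" \
     https://builds.example.org/api/jobs
```

A cron entry such as `0 2 * * 0 /usr/local/bin/rebuild-archlinux-image` would then refresh the image weekly.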
If you require additional images, study the `control` script to understand how the top-level boot process works. You should then prepare a disk image for your new system (name it `root.img.qcow2`) and write a `functions` file. The only required function is `boot`, which should call `_boot` with any additional arguments you want to pass to qemu. If your image will boot up with no additional qemu arguments, this function will likely just call `_boot`. You can optionally provide a number of other functions in your `functions` file to enable various features (a sketch of such a file follows the lists below):
- Write an `install` function with the following usage: `install [ssh port] [packages...]`
- Write an `add_repository` function: `add_repository [ssh port] [name] [source]`. The source is usually vendor-specific; you can make this any format you want to encode repo URLs, package signing keys, etc.

In order to run builds, we require the following:
- The image must boot with qemu's user networking stack, with the static IP `10.0.2.15/25` and gateway `10.0.2.2`. Don't forget to configure DNS, too.
- A user named `build` to log into SSH with, preferably with uid 1000.

Not strictly necessary, but recommended:
- Passwordless sudo for the `build` user: `build ALL=(ALL) NOPASSWD: ALL`
- In your `functions` file, set `poweroff_cmd` to a command we can SSH into the box and use to shut off the machine. If you don't, we'll just kill the qemu process.
- Write a `sanity_check` function which takes no arguments, but boots up the image and runs any tests necessary to verify everything is working, returning a nonzero status code if not.
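As promised above, here is a minimal sketch of a `functions` file. The package-manager invocation and the `poweroff_cmd` value are hypothetical; substitute whatever your guest actually provides:

```
#!/bin/sh
# Sketch of a functions file for a hypothetical image.

# Recommended: lets the runner shut the VM down cleanly over SSH.
poweroff_cmd="sudo poweroff"

boot() {
    # This image needs no extra qemu arguments, so just call _boot.
    _boot
}

install() {
    port="$1"
    shift
    # Hypothetical package manager; replace with your vendor's tool.
    ssh -p "$port" build@localhost sudo pkg install -y "$@"
}
```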
You will likely find it useful to read the scripts for existing build images as a reference. Once you have a new image, email the scripts to ~sircmpwn/sr.ht-dev@lists.sr.ht so we can integrate them upstream!
Write an `/etc/sr.ht/builds.ini` configuration file similar to the one you wrote on the master server. Only the `[sr.ht]` and `[builds.sr.ht]` sections are required for the runners. `images` should be set to the installation path of your images (`/var/lib/images`) and `buildlogs` should be set to the path where the runner should write its build logs (the runner user should be able to create files and directories here). Set `runner` to the hostname of the build runner. You will need to configure nginx to serve the build logs directory in order for build logs to appear correctly on the website.
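A runner configuration might therefore look like this (the hostname and log path are placeholders):

```
[sr.ht]
# ... as on the master ...

[builds.sr.ht]
# Redis connection details, per the discussion of Redis instances above
redis=redis://localhost:6379
images=/var/lib/images
buildlogs=/var/log/builds
runner=runner1.example.org
```

And a minimal nginx sketch for serving those logs (server name and root are placeholders):

```
server {
    listen 80;
    server_name runner1.example.org;
    root /var/log/builds;
}
```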
Once all of this is done, start the `builds.sr.ht-worker` service and it's off to the races. Submit builds on the master server and they should run correctly at this point.
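On a systemd-based runner, that might be (assuming the service unit is named after the package):

```
$ systemctl enable --now builds.sr.ht-worker
```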