There are two components to builds.sr.ht: the job runner and the master server.
Typically installations will have one master and many runners distributed on
many servers, but both can be installed on the same server for small
installations (though [not without risk](#security-model)). We'll start by
setting up the master server.
# Prerequisites
- [meta.sr.ht](/meta.sr.ht/installation.md)
- [An OAuth client ID and secret for meta.sr.ht](https://meta.sr.ht/oauth)
- PostgreSQL server
- Two redis servers
## Security model
Let's start with a brief overview of the security model of builds.sr.ht.
Because builds.sr.ht runs arbitrary user code (and allows users to utilize
root), it's important to carefully secure the build environments. To this end,
builds run in a sandbox which consists of:
- A KVM virtual machine via qemu
- Inside of an otherwise empty docker image
- Running as an unprivileged user
- On a server which is isolated by:
  - Being on a physically separate server from anything important
  - Using its own isolated redis instance
  - Having minimal database access
We suggest you take similar precautions if your servers could be running
untrusted builds. Remember that if you build only your own software, integration
with other services could end up running untrusted builds (for example,
automatic testing of patches via lists.sr.ht).
## PostgreSQL configuration
Start by setting the connection string in your config.ini to a superuser, then
run the following commands to create the initial schema:

```
$ python3
>>> from buildsrht.app import db
>>> db.create()
```
Then create two users, one for the master and one for the runners (or one for
each runner if you prefer). They need the following permissions:
- **master** should have ownership over the database and full read/write/alter
  access to all tables
- **runner** should have read/write access to the jobs table
If you are running the master and runners on the same server, you will only be
able to use one user.
Note: in the future runners will not have database access.
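A sketch of what this might look like with `createuser` and `psql`, assuming the
database is named `builds.sr.ht` and using example user names (the name of the
jobs table is also an assumption - check the generated schema for the actual
name):

```
# Run as the PostgreSQL superuser; user and database names are examples.
createuser buildsrht
createuser buildsrht_runner
psql -d "builds.sr.ht" <<'SQL'
ALTER DATABASE "builds.sr.ht" OWNER TO buildsrht;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO buildsrht;
-- The runner only needs read/write access to the jobs table ("job" is an
-- assumption here).
GRANT SELECT, INSERT, UPDATE ON job TO buildsrht_runner;
SQL
```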
## Redis configuration
The master server should have two redis servers, one for its own use (as part of
the standard sr.ht configuration) and one for communication with the runners.
Runners should be provided access to only their redis instance - if distributed,
we recommend setting up an SSH tunnel from the master to the runner to offer
access to the appropriate Redis instance.
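For example, a remote forward run from the master can expose its runner-facing
redis on the runner host (the hostname and ports below are placeholders,
assuming the runner redis listens on port 6380 on the master):

```
# Run on the master: makes localhost:6380 on the master reachable as
# localhost:6379 on the runner. Hostname and ports are examples.
ssh -N -R 6379:localhost:6380 builds-runner.example.org
```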
## Package installation
On the master server, install builds.sr.ht either [from
source](https://git.sr.ht/~sircmpwn/builds.sr.ht) or [from
packages](/packages.md). On the runners, install builds.sr.ht and
builds.sr.ht-images, the latter containing templates for our upstream build
images and glue code for running them.
Place a config file in `/etc/sr.ht/builds.ini` based on
[config.ini.example](https://git.sr.ht/~sircmpwn/builds.sr.ht/tree/config.ini.example).
[Register an OAuth client](https://meta.sr.ht/oauth) and include its client ID
and secret in the config file. Under the `[builds.sr.ht]` section, configure
`redis` on the master to point to the *runner* redis - it will use this to
submit jobs to the runners. The other options are relevant to the build runner,
which we will set up later.
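For reference, a minimal sketch of that section on the master might look like
the following - the address is a placeholder, and the remaining keys from
config.ini.example are omitted:

```
[builds.sr.ht]
# Point this at the runner-facing redis instance, not the redis used for
# the standard sr.ht configuration (address is an example)
redis=redis://localhost:6380
```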
Start builds.sr.ht (`systemctl start builds.sr.ht` or `service builds.sr.ht
start`) and configure a reverse proxy - here's our nginx configuration for
builds.sr.ht upstream:
```
server {
    listen 80;
    server_name builds.sr.ht;

    location / {
        return 302 https://$server_name$request_uri;
    }

    location ^~ /.well-known {
        root /var/www;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name builds.sr.ht;
    client_max_body_size 100M;

    ssl_certificate /var/lib/acme/live/builds.sr.ht/fullchain;
    ssl_certificate_key /var/lib/acme/live/builds.sr.ht/privkey;
    ssl_trusted_certificate /var/lib/acme/live/builds.sr.ht/fullchain;

    location / {
        proxy_pass http://127.0.0.1:5002;
    }

    location /static {
        root /usr/lib/python3.6/site-packages/buildsrht;
    }

    location ^~ /.well-known {
        root /var/www;
    }
}
```
## Runner setup
On the runner, install the `builds.sr.ht-images` package (if building from
source, this package is simply the `images` directory copied to
`/var/lib/images`), as well as docker. Build the docker image like so:
```
$ cd /var/lib/images
$ docker build -t qemu -f qemu/Dockerfile .
```
This will build a docker image named `qemu` which contains a statically linked
build of qemu and nothing else. Next, build any of the images you need:
```
$ cd archlinux
$ sudo ./genimg
$ ../control archlinux sanity-check
$ cd ../debian/jessie
$ sudo ./genimg
$ ../../control debian/jessie sanity-check
```

... and so on ...
Note: some images may not build on your host system without the installation of
extra tools, which may not be available. However, all images can be built on
themselves, and a build.yml file is typically provided which can do the job if
run in an environment similar to the target image.
## Creating new images
If you require any additional images, you should write a `genimg` script which
produces `root.img.qcow2`, `initrd`, and `kernel` files. You should also write a
`functions` script, which is sourced by the control script for booting your
image, installing packages, and so on. Review existing images for guidance; with
enough effort it should be possible to make virtually any kind of image
(including non-Linux operating systems - the only hard requirement is an ssh
daemon).
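The details vary entirely by target OS, but as a rough sketch of the contract a
`genimg` script fulfills (every command and path below is an example, not the
procedure used by the upstream images):

```
#!/bin/sh -e
# Sketch only: the real work of bootstrapping the target OS is omitted.
# The contract is simply that root.img.qcow2, kernel, and initrd exist in
# the current directory when the script exits.
qemu-img create -f qcow2 root.img.qcow2 16G
# ... attach the image (e.g. with qemu-nbd), create a filesystem,
# bootstrap the target OS into it, and enable an ssh daemon on boot ...
# ... then copy the guest's kernel and initramfs out of the image so the
# control script can boot the VM with them, e.g.:
#   cp "$mnt"/boot/vmlinuz kernel
#   cp "$mnt"/boot/initramfs initrd
```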
## Configuring the runner
Write an `/etc/sr.ht/builds.ini` configuration file similar to the one you wrote
on the master server. Only the `[sr.ht]` and `[builds.sr.ht]` sections are
required for the runners. `images` should be set to the installation path of
your images (`/var/lib/images`) and `buildlogs` should be set to the path where
the runner should write its build logs (the runner user should be able to create
files and directories here). Set `runner` to the hostname of the build runner.
You will need to configure nginx to serve the build logs directory in order for
build logs to appear correctly on the website.
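As a sketch, the runner's configuration might include something like the
following (values are placeholders; the remaining keys from config.ini.example
are omitted):

```
[builds.sr.ht]
images=/var/lib/images
# Directory the runner user can write build logs to (example path)
buildlogs=/var/lib/buildsrht/logs
# Hostname of this build runner (example)
runner=runner1.example.org
```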
Once all of this is done, start the `builds.sr.ht-runner` service and it's off
to the races. Submit builds on the master server and they should run correctly
at this point.
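For example, on a systemd-based runner:

```
$ systemctl start builds.sr.ht-runner
```

Or use `service builds.sr.ht-runner start` on other init systems, mirroring the
master setup above.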