---
title: builds.sr.ht installation
---

There are two components to builds.sr.ht: the job runner and the master server.
Typically installations will have one master and many runners distributed on
many servers, but both can be installed on the same server for small
installations (though [not without risk](#security-model)). We'll start by
setting up the master server.

# Web service

The master server is a standard sr.ht web service and can be [installed as
such](/installation.md). However, it is important that you configure *two*
Redis servers - one that the runners should have access to, and one that they
should not. Insert connection details for the former into builds.sr.ht's
configuration file under the `redis` key. Each build runner will also need a
local Redis instance running. In an insecure deployment (all services on the
same server), you can get away with a single Redis instance.
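
For example, the relevant part of the master's configuration might look like
the following sketch; that the key lives in the `[builds.sr.ht]` section is an
assumption, and the connection URL is a placeholder for your own deployment:

    [builds.sr.ht]
    # Connection details for the Redis instance that the build runners can
    # reach; host, port, and database number are placeholders.
    redis = redis://10.0.10.1:6379/0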

We suggest using an SSH tunnel to share the runner-accessible Redis instance
between the job runners and the master server, but you can use any method you
prefer. If you use an SSH tunnel, you will likely want to use a reverse tunnel
initiated from the master server, so that the runners are unable to SSH into
the master server.
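
As an illustration, assuming the shared Redis instance runs on the master and
listens on its default port, a reverse tunnel initiated from the master might
look like this (the hostname, remote port, and user account are placeholders):

    # Run on the master server: exposes the master's Redis (localhost:6379) as
    # localhost:16379 on the runner, without giving the runner SSH access back.
    $ ssh -fN -R 16379:localhost:6379 deploy@runner.example.org

The runner would then reach the shared instance at `redis://localhost:16379`.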

# Security model

Let's start with a brief overview of the security model of builds.sr.ht.
Because builds.sr.ht runs arbitrary user code (and allows users to utilize
root), it's important to carefully secure the build environments. To this end,
builds run in a sandbox which consists of:

- A KVM virtual machine via qemu
- Inside of an otherwise empty Docker image
- Running as an unprivileged user
- On a server which is isolated by:
    - Being on a physically separate server from anything important
    - Using its own isolated Redis instance
    - Having minimal database access

We suggest you take similar precautions if your servers could be running
untrusted builds. Remember that even if you only build your own software,
integration with other services could end up running untrusted builds (for
example, automatic testing of patches submitted via lists.sr.ht).

# Package installation

On each runner, install the builds.sr.ht-images and builds.sr.ht-worker
packages.

# Database configuration

Create two database users, one for the master and one for the runners (or one
for each runner, if you prefer). They need the following permissions:

- **master** should have ownership of the database and full read/write/alter
  access to all tables
- **runner** should have read/write access to the job and artifact tables.
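
As a rough sketch of these grants, assuming PostgreSQL and using placeholder
role names, database name, and passwords:

    -- Run as the PostgreSQL superuser; all names and passwords are placeholders.
    CREATE USER buildsrht_master WITH PASSWORD 'changeme';
    CREATE USER buildsrht_runner WITH PASSWORD 'changeme';

    -- The master owns the database, and therefore the tables it creates.
    CREATE DATABASE "builds.sr.ht" OWNER buildsrht_master;

    -- Connect to the new database (e.g. \c "builds.sr.ht" in psql), then give
    -- the runner read/write access to the job and artifact tables only.
    GRANT SELECT, INSERT, UPDATE ON job, artifact TO buildsrht_runner;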

If you are running the master and runners on the same server, you will only be
able to use one user - the master user. Configure both the web service and the
build runner with this account. Otherwise, two separate accounts are
recommended.

Note: in the future, runners will not have database access.

# Install images

On the runner, install the `builds.sr.ht-images` package (if building from
source, this package is simply the `images` directory copied to
`/var/lib/images`), as well as Docker. Build the Docker image like so:

    $ cd /var/lib/images
    $ docker build -t qemu -f qemu/Dockerfile .

This will build a Docker image named `qemu` which contains a statically linked
build of qemu and nothing else.

## Bootstrapping our images

A `genimg` script is provided for each image; it can be run from a working
installation of that guest to produce a new image. You need to manually
prepare a working guest of each image type (that is, to build the Arch Linux
image you need a working Arch Linux installation to bootstrap from). Then you
can run the provided `genimg` to produce the disk image. You should read the
`genimg` script to determine what dependencies need to be installed before it
can be run to completion.

The directory structure for bootable images should have the format
`images/$distro/$release/$arch/`, with the `root.img.qcow2` file inside the
`$arch` directory.
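
For example, an Alpine image might be laid out like this (the distribution,
release, and architecture names are illustrative):

    images/
    └── alpine/
        └── edge/
            └── x86_64/
                └── root.img.qcow2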

A `build.yml` file is also provided for each image so that it can rebuild
itself on your build infrastructure once it is up and running; customize it as
necessary. It's recommended that you set up cron jobs to build fresh images
frequently - a script at `contrib/submit_image_build` is provided for this
purpose.

Note: You will need nested virtualization enabled in order to build images
from within a pre-existing build image (i.e. via the `build.yml` file). If you
run into issues with `modprobe kvm_intel` within the genimg script, you can
fix this by removing the module and then re-inserting it with `insmod
kvm_intel.ko nested=1` in the directory containing the kernel module.
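
As a sketch of that fix using modprobe instead (on an Intel host; use `kvm_amd`
on AMD hardware), run the following on the machine that builds the images:

    # Unload the module, then reload it with nested virtualization enabled
    $ sudo modprobe -r kvm_intel
    $ sudo modprobe kvm_intel nested=1
    # Should print Y (or 1 on older kernels) once nesting is enabled
    $ cat /sys/module/kvm_intel/parameters/nested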

### Image-specific notes

* NixOS can be bootstrapped from any distribution. The provided build.yml does
  it from Alpine, but it can easily be switched to e.g. Arch Linux just by
  changing the host image and adjusting the packages.

## Creating new images

If you require additional images, study the `control` script to understand how
the top-level boot process works. You should then prepare a disk image for your
new system (name it `root.img.qcow2`) and write a `functions` file. The only
required function is `boot`, which should call `_boot` with any additional
arguments you want to pass to qemu; if your image boots up with no additional
qemu arguments, this function will likely just call `_boot`. You can optionally
provide a number of other functions in your `functions` file to enable various
features (a minimal sketch of a `functions` file follows this list):

- To enable installing packages specified in the build manifest, write an
  `install` function with the following usage:
  `install [ssh port] [packages...]`
- To enable adding third-party package repositories, write an `add_repository`
  function: `add_repository [ssh port] [name] [source]`. The `source` is
  usually vendor-specific; you can use any format you want to encode repository
  URLs, package signing keys, and so on.
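
Here is a minimal sketch of such a `functions` file, assuming an Alpine-style
guest. The SSH invocation and the package manager command are illustrative;
adjust them to your distribution and to how `control` exposes the guest's SSH
port on your setup:

    #!/bin/sh
    # Sourced by the top-level control script.

    # Required: boot the guest. _boot accepts any extra qemu arguments.
    boot() {
        _boot
    }

    # Optional: install packages specified in the build manifest.
    # Usage: install [ssh port] [packages...]
    install() {
        port="$1"
        shift
        ssh -p "$port" build@localhost sudo apk add "$@"
    }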

In order to run builds, we require the following:

- The disk should be able to boot itself up; make sure to install a bootloader
  and set up partitions however you like.
- Networking configured with IPv4 address `10.0.2.15/25` and gateway `10.0.2.2`
  (a sketch follows this list). Don't forget to configure DNS, too.
- SSH listening on port 22 (the standard port) with passwordless login *enabled*
- A user named `build` to log into SSH with, preferably with uid 1000
- Bash (temporary - we'll make this more generic at some point)
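
As an example of the networking requirement, a Debian/Alpine-style
`/etc/network/interfaces` inside the guest might look like this (the interface
name depends on your distribution, and any other network configuration system
works just as well):

    # Static configuration matching what the build environment expects
    auto eth0
    iface eth0 inet static
        address 10.0.2.15
        netmask 255.255.255.128
        gateway 10.0.2.2

Remember to also point `/etc/resolv.conf` at a working DNS server.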

Not strictly necessary, but recommended:

- Set the hostname to `build`
- Configure NTP and set the timezone to UTC
- Add the build user to the sudoers file with `NOPASSWD: ALL`
- In your `functions` file, set `poweroff_cmd` to a command that we can run
  over SSH to shut the machine off. If you don't, we'll just kill the qemu
  process.
- It is also recommended to write a `sanity_check` function which takes no
  arguments; it should boot up the image, run any tests necessary to verify
  that everything is working, and return a nonzero status code if not.

You will likely find it useful to read the scripts for existing build images as
a reference. Once you have a new image, email the scripts to
[`~sircmpwn/sr.ht-dev@lists.sr.ht`](https://lists.sr.ht/~sircmpwn/sr.ht-dev) so
we can integrate them upstream!

# Additional configuration

Write a `/etc/sr.ht/builds.ini` configuration file similar to the one you wrote
on the master server. Only the `[sr.ht]` and `[builds.sr.ht]` sections are
required for the runners. `images` should be set to the installation path of
your images (`/var/lib/images`) and `buildlogs` should be set to the path where
the runner should write its build logs (the runner user should be able to
create files and directories here). Set `runner` to the hostname of the build
runner. You will need to configure nginx to serve the build logs directory at
`http://RUNNER-HOSTNAME/logs/` in order for build logs to appear correctly on
the website.
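
As a rough sketch of such a file (assuming these keys live in the
`[builds.sr.ht]` section, and reusing the tunneled Redis port from the earlier
example), with every value a placeholder for your own deployment:

    [sr.ht]
    # Same global settings as on the master server.

    [builds.sr.ht]
    # The shared Redis instance (here, reached through the SSH tunnel).
    redis = redis://localhost:16379/0
    # Installation path of the build images.
    images = /var/lib/images
    # Where the runner writes build logs; served by nginx at /logs/.
    buildlogs = /var/log/buildsrht
    # Hostname of this build runner.
    runner = runner1.example.org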

Once all of this is done, make sure the worker is compiled (with Go 1.11 or
later) by running `go build` in the `worker/` directory, then start the
`builds.sr.ht-worker` service and it's off to the races. Submit builds on the
master server and they should run correctly at this point.
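
For example, on a runner managed by systemd this might look like the following
(the path to the worker sources is a placeholder, and non-systemd hosts will
use their own service manager):

    $ cd /path/to/builds.sr.ht/worker
    $ go build
    $ sudo systemctl enable --now builds.sr.ht-worker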

For SSH access to (failed) builds you will need to install `git.sr.ht` and
configure `[git.sr.ht::dispatch]` for `buildsrht-keys`.
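
One plausible dispatch entry, assuming the format used by other sr.ht key
handlers and a `builds` user and group on the runner (both assumptions; adjust
to your deployment):

    [git.sr.ht::dispatch]
    # Map the buildsrht-keys binary to the user:group it should run as.
    /usr/bin/buildsrht-keys=builds:builds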