-rw-r--r--   builds.sr.ht/configuration.md             171
-rw-r--r--   builds.sr.ht/configuration_reference.md     5
-rw-r--r--   builds.sr.ht/installation.md              178
3 files changed, 195 insertions, 159 deletions
diff --git a/builds.sr.ht/configuration.md b/builds.sr.ht/configuration.md
new file mode 100644
index 0000000..aa14291
--- /dev/null
+++ b/builds.sr.ht/configuration.md
@@ -0,0 +1,171 @@
+---
+title: builds.sr.ht Configuration
+---
+
+This document covers the configuration process for builds.sr.ht.
+
+# Security Model
+
+Let's start with a brief overview of the security model of builds.sr.ht.
+Since builds.sr.ht runs arbitrary user code (and allows users to utilize
+root), it's important to carefully secure the build environments.
+
+To that end, our build jobs run in a sandbox that consists of:
+
+- A [KVM](https://www.linux-kvm.org) virtual machine (via
+ [QEMU](https://www.qemu.org)), which
+- runs inside of an otherwise empty Docker image, which
+- runs as an unprivileged user on
+- a server that is physically separate from anything important, uses its own
+ isolated Redis instance, and has minimal database access.
+
+We suggest you take similar precautions if your servers may run untrusted
+builds.
+
+<div class="alert alert-warning">
+ <strong>Warning:</strong> Even if you only build your own software,
+ integration with other services may cause you to run untrusted builds (e.g.
+ automatic testing of patches via lists.sr.ht).
+</div>
+
+# Master Server
+
+## Web Service
+
+The master server requires *two* Redis servers – one that runners should have
+access to, and one that they should not. For the former, insert connection
+details into builds.sr.ht's configuration file under the `redis` key.
+
+Each runner also requires a local Redis instance running.
+
+<div class="alert alert-info">
+ <strong>Note:</strong> In a deployment where all services are on the same
+ server, running only trusted builds, you can get away with a single Redis
+ instance.
+</div>
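+
+For reference, a minimal sketch of what this might look like in the
+configuration file (the section placement and hostname are assumptions):
+
+    [builds.sr.ht]
+    redis=redis://shared-redis.example.org:6379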
+
+# Database
+
+Create two users, one for the master and one for the runners (or one for each
+runner if you prefer). They need the following permissions:
+
+- **master** should have ownership over the database and full read/write/alter
+  access to all tables
+- **runner** should have read/write access to the job table and artifact table.
+
+If you are running the master and runners on the same server, you will only be
+able to use one user - the master user. Configure both the web service and the
+build runner with this account. Otherwise, two separate accounts are
+recommended.
+
+**Note**: in the future, runners will not have database access.
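+
+As a sketch, assuming PostgreSQL and hypothetical role and database names (the
+`job` and `artifact` tables are the ones named above):
+
+    CREATE ROLE builds_master LOGIN;
+    CREATE ROLE builds_runner LOGIN;
+    ALTER DATABASE "builds.sr.ht" OWNER TO builds_master;
+    GRANT ALL ON ALL TABLES IN SCHEMA public TO builds_master;
+    GRANT SELECT, INSERT, UPDATE ON job, artifact TO builds_runner;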
+
+# Install images
+
+On the runner, install the `builds.sr.ht-images` package (if building from
+source, this package is simply the `images` directory copied to
+`/var/lib/images`), as well as Docker. Build the Docker image like so:
+
+ $ cd /var/lib/images
+ $ docker build -t qemu -f qemu/Dockerfile .
+
+This will build a Docker image named `qemu` which contains a statically linked
+build of QEMU and nothing else.
+
+## Bootstrapping our images
+
+A `genimg` script is provided for each image which can be run from a working
+image of that guest to produce a new image. You need to manually prepare a
+working guest of each image type (that is, to build the Arch Linux image you
+need a working Arch Linux installation to bootstrap from). Then you can run
+the provided `genimg` to produce the disk image. You should read the `genimg`
+script to determine what dependencies need to be installed before it can be
+run to completion.
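+
+For illustration, the invocation is typically as simple as the following (an
+assumption - details vary per image, so read the script first):
+
+    $ ./genimg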
+
+The directory structure for bootable images should have the format
+`images/$distro/$release/$arch/`, with the `root.img.qcow2` file within the
+`$arch` directory.
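+
+For example, a hypothetical Alpine image would live at:
+
+    /var/lib/images/alpine/edge/x86_64/root.img.qcow2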
+
+A `build.yml` file is also provided for each image, which rebuilds that image
+on your build infrastructure once you have it set up; customize it as
+necessary. It's recommended that you set up cron jobs to build fresh images
+frequently - a script at `contrib/submit_image_build` is provided for this
+purpose.
+
+**Note**: it is recommended that you modify our `build.yml` files to suit your
+instance's needs, then run them on *our* hosted builds.sr.ht instance to
+bootstrap your images. This is the fastest and most convenient way to bootstrap
+the images you need.
+
+**Note**: You will need nested virtualization enabled in order to build images
+from within a pre-existing build image (i.e. via the `build.yml` file). If you
+run into issues with `modprobe kvm_intel` within the genimg script, you can fix
+this by removing the module and then re-inserting it with `insmod kvm_intel.ko
+nested=1` in the directory containing the kernel module.
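+
+Equivalently, `modprobe` can remove and re-insert the module with the option
+set, without locating the `.ko` file by hand (assuming the module is not built
+into your kernel; on AMD hosts, substitute `kvm_amd`):
+
+    $ modprobe -r kvm_intel
+    $ modprobe kvm_intel nested=1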
+
+## Creating new images
+
+If you require additional images, study the `control` script to understand how
+the top-level boot process works. You should then prepare a disk image for your
+new system (name it `root.img.qcow2`) and write a `functions` file. The only
+required function is `boot`, which should call `_boot` with any additional
+arguments you want to pass to qemu. If your image will boot up with no
+additional qemu arguments, this function will likely just call `_boot`. You can
+optionally provide a number of other functions in your `functions` file to
+enable various features:
+
+- To enable installing packages specified in the build manifest, write an
+  `install` function with the following usage:
+  `install [ssh port] [packages...]` (see the sketch after this list)
+- To enable adding third-party package repositories, write an `add_repository`
+  function: `add_repository [ssh port] [name] [source]`. The `source` is usually
+  vendor-specific; you can use any format you want to encode repo URLs,
+  package signing keys, etc.
+
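+For illustration, a minimal sketch of a hypothetical `functions` file,
+assuming a Debian-like guest - the ssh invocation and package manager are
+assumptions; only `_boot` and the usage shapes above come from the real
+scripts:
+
+    # boot is the only required function; this image needs no extra qemu
+    # arguments, so it just calls _boot.
+    boot() {
+        _boot
+    }
+
+    # install [ssh port] [packages...]
+    install() {
+        port="$1"
+        shift
+        ssh -p "$port" build@localhost sudo apt-get install -y "$@"
+    }
+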
+In order to run builds, we require the following:
+
+- The disk image should be able to boot on its own; make sure to install a
+  bootloader and set up partitions however you like.
+- Networking configured with IPv4 address `10.0.2.15/25` and gateway `10.0.2.2`.
+  Don't forget to configure DNS, too.
+- SSH listening on port 22 (the standard port) with passwordless login *enabled*
+- A user named `build` to log into SSH with, preferably with uid 1000
+- git configured with `user.name` set to builds.sr.ht and `user.email` set to
+  builds@sr.ht
+- Bash (temporary - we'll make this more generic at some point)
+
+Not strictly necessary, but recommended (a setup sketch follows this list):
+
+- Set the hostname to `build`
+- Configure NTP and set the timezone to UTC
+- Add the build user to the sudoers file with `NOPASSWD: ALL`
+- In your `functions` file, set `poweroff_cmd` to a command we can SSH into the
+ box and use to shut off the machine. If you don't, we'll just kill the qemu
+ process.
+- It is also recommended to write a `sanity_check` function which takes no
+  arguments, boots up the image, runs any tests necessary to verify that
+  everything is working, and returns a nonzero status code if not.
+
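+Taken together, a hypothetical first-boot setup inside the guest might look
+like this (every command is an assumption derived from the lists above; adapt
+to your distribution):
+
+    # create the build user with uid 1000 and passwordless sudo
+    useradd -m -u 1000 build
+    echo 'build ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+    # git identity used by the job scripts
+    sudo -u build git config --global user.name 'builds.sr.ht'
+    sudo -u build git config --global user.email 'builds@sr.ht'
+    # recommended hostname
+    echo build > /etc/hostname
+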
+You will likely find it useful to read the scripts for existing build images as
+a reference. Once you have a new image, email the scripts to
+[`~sircmpwn/sr.ht-dev@lists.sr.ht`](https://lists.sr.ht/~sircmpwn/sr.ht-dev) so
+we can integrate them upstream!
+
+# Additional configuration
+
+Write an `/etc/sr.ht/builds.ini` configuration file similar to the one you wrote
+on the master server. Only the `[sr.ht]` and `[builds.sr.ht]` sections are
+required for the runners. `images` should be set to the installation path of
+your images (`/var/lib/images`) and `buildlogs` should be set to the path where
+the runner should write its build logs (the runner user should be able to create
+files and directories here). Set `runner` to the hostname of the build runner.
+You will need to configure nginx to serve the build logs directory at
+http://RUNNER-HOSTNAME/logs/ in order for build logs to appear correctly on the
+website.
+
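+For reference, a hedged sketch of a runner's `/etc/sr.ht/builds.ini` - all
+values are placeholders:
+
+    [sr.ht]
+    ; same as on the master server
+
+    [builds.sr.ht]
+    runner=runner1.example.org
+    images=/var/lib/images
+    buildlogs=/var/log/builds
+
+And a hypothetical nginx server block serving the logs (paths are
+assumptions):
+
+    server {
+        listen 80;
+        server_name runner1.example.org;
+        location /logs/ {
+            alias /var/log/builds/;
+        }
+    }
+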
+Once all of this is done, compile the worker (with Go 1.11 or later) by
+running `go build` in the `worker/` directory, then start the
+`builds.sr.ht-worker` service - and it's off to the races. Submit builds on
+the master server and they should run correctly at this point.
+
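+For example, from the builds.sr.ht source checkout:
+
+    $ cd worker/
+    $ go build
+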
+For SSH access to (failed) builds, you will need to install `git.sr.ht` and
+configure `[git.sr.ht::dispatch]` for `buildsrht-keys`.
diff --git a/builds.sr.ht/configuration_reference.md b/builds.sr.ht/configuration_reference.md
new file mode 100644
index 0000000..4b5b5e1
--- /dev/null
+++ b/builds.sr.ht/configuration_reference.md
@@ -0,0 +1,5 @@
+---
+title: builds.sr.ht Configuration Reference
+---
+
+This document covers the configuration options for the builds.sr.ht service.
diff --git a/builds.sr.ht/installation.md b/builds.sr.ht/installation.md
index 2ed6cd1..111bc60 100644
--- a/builds.sr.ht/installation.md
+++ b/builds.sr.ht/installation.md
@@ -1,169 +1,29 @@
---
-title: builds.sr.ht installation
+title: builds.sr.ht Installation
---
-There are two components to builds.sr.ht: the job runner and the master server.
-Typically installations will have one master and many runners distributed on
-many servers, but both can be installed on the same server for small
-installations (though [not without risk](#security-model)). We'll start by
-setting up the master server.
+This document covers the installation steps for builds.sr.ht, a continuous
+integration service.
-# Web service
+builds.sr.ht comprises two components: a **master server** and **job
+runners**. Typically, deployments have one master and many runners, which are
+distributed across multiple servers.
-The master server is a standard sr.ht web service and can be [installed as
-such](/installation.md). However, it is important that you configure *two* Redis
-servers - one that the runners should have access to, and one that they should
-not. Insert connection details for the former into build.sr.ht's configuration
-file under the `redis` key. Each build runner will also need a local redis
-instance running. In an insecure deployment (all services on the same server,
-running only trusted builds), you can get away with a single Redis instance.
+<div class="alert alert-info">
+ <strong>Note:</strong> For smaller deployments, job runners can be installed
+ alongside the master server, but
+ <a href="/builds.sr.ht/configuration.md#security-model"
+ class="alert-link">not without risk</a>.
+</div>
-# Security model
+# Installation
-Let's start with a brief overview of the security model of builds.sr.ht.
-Because builds.sr.ht runs arbitrary user code (and allows users to utilize
-root), it's important to carefully secure the build environments. To this end,
-builds run in a sandbox which consists of:
+On the master server, install the `builds.sr.ht` package.
-- A KVM virtual machine via qemu
-- Inside of an otherwise empty docker image
-- Running as an unprivledged user
-- On a server which is isolated by:
- - Being on physically separate server from anything important
- - Using its own isolated redis instance
- - Having minimal database access
+On each server hosting a job runner, install the `builds.sr.ht-worker` and
+`builds.sr.ht-images` packages.
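+
+For example, assuming Alpine and the sr.ht package repository (the package
+manager is an assumption; use your distribution's equivalent):
+
+    $ sudo apk add builds.sr.ht-worker builds.sr.ht-images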
-We suggest you take similar precautions if your servers could be running
-untrusted builds. Remember that if you build only your own software, integration
-with other services could end up running untrusted builds (for example,
-automatic testing of patches via lists.sr.ht).
+## Daemons
-# Package installation
-
-On each runner, install the builds.sr.ht-images and builds.sr.ht-worker
-packages.
-
-# Database configuration
-
-Create two users, one for the master and one for the runners (or one for each
-runner if you prefer). They need the following permissions:
-
-- **master** should have ownership over the database and full read/write/alter
- access to all table
-- **runner** should have read/write access to the job table and artifact table.
-
-If you are running the master and runners on the same server, you will only be
-able to use one user - the master user. Configure both the web service and build
-runner with this account. Otherwise, two separate accounts is recommended.
-
-Note: in the future runners will not have database access.
-
-# Install images
-
-On the runner, install the `builds.sr.ht-images` package (if building from
-source, this package is simply the `images` directory copied to
-`/var/lib/images`), as well as docker. Build the docker image like so:
-
- $ cd /var/lib/images
- $ docker build -t qemu -f qemu/Dockerfile .
-
-This will build a docker image named `qemu` which contains a statically linked
-build of qemu and nothing else.
-
-## Bootstrapping our images
-
-A `genimg` script is provided for each image which can be run from a working
-image of that guest to produce a new image. You need to manually prepare a
-working guest of each image type (that is, to build the Arch Linux image you
-need a working Arch Linux installation to bootstrap from). Then you can run
-the provided `genimg` to produce the disk image. You should read the genimg
-script to determine what dependencies need to be installed before it can be
-run to completion.
-
-The directory structure for bootable images should have the format
-images/$distro/$release/$arch/ with the root.img.qcow2 file within the $arch
-directory.
-
-A `build.yml` file is also provided for each image to build itself on your
-build infrastructure once you have it set up, which you should customize as
-necessary. It's recommended that you set up cron jobs to build fresh images
-frequently - a script at `contrib/submit_image_build` is provided for this
-purpose.
-
-**Note**: it is recommended that you modify our `build.yml` files to suit your
-instance's needs, then run it on *our* hosted builds.sr.ht instance to bootstrap
-your images. This is the fastest and most convenient way to bootstrap the images
-you need.
-
-**Note**: You will need nested virtualization enabled in order to build images
-from within a pre-existing build image (i.e. via the `build.yml` file). If you
-run into issues with `modprobe kvm_intel` within the genimg script, you can fix
-this by removing the module and then re-inserting it with `insmod kvm_intel.ko
-nested=1` in the directory containing the kernel module.
-
-## Creating new images
-
-If you require additional images, study the `control` script to understand how
-the top-level boot process works. You should then prepare a disk image for your
-new system (name it `root.img.qcow2`) and write a `functions` file. The only
-required function is `boot`, which should call `_boot` with any additional
-arguments you want to pass to qemu. If your image will boot up with no
-additional qemu arguments, this function will likely just call `_boot`. You can
-optionally provide a number of other functions in your `functions` file to
-enable various features:
-
-- To enable installing packages specified in the build manifest, write an
- `install` function with the following usage:
- `install [ssh port] [packages...]`
-- To enable adding third-party package repositories, write an `add_repository`
- function: `add_repository [ssh port] [name] [source]`. The `source` is usually
- vendor-specific, you can make this any format you want to encode repo URLs,
- package signing keys, etc.
-
-In order to run builds, we require the following:
-
-- The disk should be able to boot itself up, make sure to install a bootloader
- and set up partitions however you like.
-- Networking configured with IPv4 address `10.0.2.15/25` and gateway `10.0.2.2`.
- Don't forget to configure DNS, too.
-- SSH listening on port 22 (the standard port) with passwordless login *enabled*
-- A user named `build` to log into SSH with, preferrably with uid 1000
-- git config setting user.name to builds.sr.ht and user.email to builds@sr.ht
-- Bash (temporary - we'll make this more generic at some point)
-
-Not strictly necessary, but recommended:
-
-- Set the hostname to `build`
-- Configure NTP and set the timezone to UTC
-- Add the build user to the sudoers file with `NOPASSWD: ALL`
-- In your `functions` file, set `poweroff_cmd` to a command we can SSH into the
- box and use to shut off the machine. If you don't, we'll just kill the qemu
- process.
-- It is also recommended to write a `sanity_check` function which takes no
- arguments, but boots up the image and runs any tests necessary to verify
- everything is working and return a nonzero status code if not.
-
-You will likely find it useful to read the scripts for existing build images as
-a reference. Once you have a new image, email the scripts to
-[`~sircmpwn/sr.ht-dev@lists.sr.ht`](https://lists.sr.ht/~sircmpwn/sr.ht-dev) so
-we can integrate them upstream!
-
-# Additional configuration
-
-Write an `/etc/sr.ht/builds.ini` configuration file similar to the one you wrote
-on the master server. Only the `[sr.ht]` and `[builds.sr.ht]` sections are
-required for the runners. `images` should be set to the installation path of
-your images (`/var/lib/images`) and `buildlogs` should be set to the path where
-the runner should write its build logs (the runner user should be able to create
-files and directories here). Set `runner` to the hostname of the build runner.
-You will need to configure nginx to serve the build logs directory at
-http://RUNNER-HOSTNAME/logs/ in order for build logs to appear correctly on the
-website.
-
-Once all of this is done, make sure the worker is compiled (with go 1.11 or
-later) by running `go build` in the worker/ directory, start the
-`builds.sr.ht-worker` service and it's off to the races. Submit builds on the
-master server and they should run correctly at this point.
-
-For SSH access to (failed) builds you will need to install `git.sr.ht` and
-configure `[git.sr.ht::dispatch]` for `buildsrht-keys`.
+- `builds.sr.ht` - The web service (master server).
+- `builds.sr.ht-worker` - The job runner.
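+
+For example, with systemd-style service management (an assumption - use your
+init system's equivalent):
+
+    $ sudo systemctl enable --now builds.sr.ht        # on the master
+    $ sudo systemctl enable --now builds.sr.ht-worker # on each runner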