hg.sr.ht migration plan
1. ~~Build & install the new server (sakuya1) as a gen 2 VM host~~
1. ~~Announce planned outage a week in advance~~
1. ~~Spin up an hg.sr.ht stack and restore from the last backup. This is a
good opportunity to test our backups in action.~~
1. Test everything!
1. ~~Set up pgbouncer on hg.sr.ht¹~~
1. ~~Copy over host keys from hg.sr.ht¹ to hg.sr.ht²~~
1. ~~Move root image back to ZFS on sakuya1~~
1. Await planned outage date
1. Set hg.sr.ht¹ to read-only mode (probably via pgbouncer, and disable
the hg SSH login account); a sketch of the freeze follows this list
1. rsync any changes which have occurred between steps 3 and 6 to
hg.sr.ht² (see the rsync example after this list)
1. Cut DNS over to hg.sr.ht², monitor as users get transferred over, and reset
the TTL to its default (a dig-based propagation check is sketched below)
1. Disable cronjobs on hg.sr.ht¹
1. Monitor hg.sr.ht¹ and shut it off once traffic to it has mostly
tapered off
1. Remove backup credentials for hg.sr.ht¹ from konapku.sr.ht
1. Remove hg.sr.ht¹'s SQL access from pg_hba.conf (see the note below)
1. Finish configuring the acme cronjob for SSL certs on hg.sr.ht² (example below)
1. Wait 2 weeks and then decommission hg.sr.ht¹
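
The read-only freeze can be done at the database rather than in pgbouncer
itself; this is only a sketch, and the database name ("hg.sr.ht") and SSH
account name (hg) are assumptions:

```
# Mark the hg.sr.ht database read-only at the PostgreSQL level; pgbouncer keeps
# handing out connections, but new transactions can no longer write. The
# database name is an assumption -- substitute the real one.
psql -U postgres -c 'ALTER DATABASE "hg.sr.ht" SET default_transaction_read_only = on;'

# Kick existing sessions so they reconnect through pgbouncer and pick up the
# new read-only default.
psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'hg.sr.ht';"

# Lock the SSH login account used for hg push/pull (account name assumed;
# requires shadow's usermod).
usermod --lock --shell /sbin/nologin hg
```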
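
For the catch-up rsync, something along these lines should work; the spool
path and the hostname used here for hg.sr.ht² are placeholders:

```
# Preserve hardlinks, ACLs and xattrs, and delete anything removed since the
# backup was restored. Path and destination host are illustrative only.
rsync -aHAX --delete --info=progress2 \
    /var/lib/hg/ hg2.sr.ht:/var/lib/hg/
```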
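
Once DNS is cut over, propagation can be watched with dig against a couple of
public resolvers; when both return the new address, traffic to hg.sr.ht¹
should start tapering off:

```
# Re-check every minute; stop once both resolvers agree on the new record.
watch -n 60 'dig +short hg.sr.ht A @1.1.1.1; dig +short hg.sr.ht A @8.8.8.8'
```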
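
Dropping hg.sr.ht¹'s SQL access only requires deleting its host line from
pg_hba.conf and reloading; the entry shown in the comment is a made-up
example of what to look for:

```
# In pg_hba.conf, remove the entry for the old host, e.g. something like:
#   host  "hg.sr.ht"  hg_srht  203.0.113.10/32  md5
# then reload the configuration without restarting PostgreSQL:
psql -U postgres -c 'SELECT pg_reload_conf();'
```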
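
The renewal cronjob could look roughly like the line below; certbot and nginx
are stand-ins here, since the actual ACME client and web server on hg.sr.ht²
may differ:

```
# Weekly renewal attempt at 03:00 on Mondays, reloading nginx only when a
# certificate was actually renewed (root's crontab).
0 3 * * 1  certbot renew --quiet --deploy-hook "nginx -s reload"
```
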
Things to double check on hg.sr.ht² (spot-check commands follow the list):
- Is monitoring working? Double check node exporter
- Are backups working?
- Are ZFS snapshots being taken correctly?
- Are ZFS scrubs being run? Double check on the 1st
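
A few quick spot checks for the list above; the port assumes a stock
node_exporter, and the ZFS commands assume a single pool:

```
# Monitoring: node_exporter answers on its default port if scraping is wired up.
curl -sf http://localhost:9100/metrics | head -n 5

# ZFS snapshots: the most recent ones should be from today.
zfs list -t snapshot -o name,creation -s creation | tail -n 10

# ZFS scrubs: the "scan:" line shows the last scrub date and any errors found.
zpool status -v
```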