PostgreSQL is a critical part of the SourceHut infrastructure: we depend on it
for availability, performance, and data integrity.

# Streaming replication

**Goals**

1. Real-time off-site backup of the database
1. Ability to access the database as it appeared in the past
1. Ability to fail over to the secondary in the event of an outage on the
   primary, with minimal application downtime
1. Ability to fail over to the secondary during planned maintenance of the
   primary, with minimal application downtime

**Prerequisites**

✓ PostgreSQL 12.x on primary

✓ PostgreSQL 12.x on secondary

**Nice-to-have**

✗ Alpine 3.12 on primary

✓ Alpine 3.12 on secondary

If we have to restart the primary PostgreSQL daemon for any reason, we should
take the opportunity to upgrade the host to Alpine 3.12.

## Preparing the hot standby

First we'll have to prepare a [warm standby][warm standby] (which can be
promoted to primary at short notice, but does not accept queries), then we can
reconfigure it as a [hot standby][hot standby] (which will accept queries).

1. Set up a user on the primary which the secondary will use to connect for
   streaming replication. It must have the REPLICATION and LOGIN privileges,
   and [pg_hba.conf][hba changes] must be updated accordingly. (Steps 1 and 2
   are sketched after this list.)
1. Take a [base backup][base backup] of the primary and ship it to the
   secondary. Be advised: we have full_page_writes disabled as a ZFS
   optimization; this may cause performance degradation during the base backup
   process.
1. Fill out `primary_conninfo` on the secondary with the connection details for
   the replication account created in step 1 (see the configuration sketch at
   the end of this section).
1. Configure a replication slot on the primary for the secondary:
    
        SELECT * FROM pg_create_physical_replication_slot('konpaku');

    Then set `primary_slot_name` to `konpaku` on the secondary.
1. Set `hot_standby` to true on the secondary.
1. [Restore the base backup][restore base backup] on the secondary. It will
   automatically sync with the primary when complete; monitor the logs. TODO:
   What should our `restore_command` be? We want to figure this out in advance.
1. Sanity check the secondary database.
1. Roll back the `restore_command` change so that we aren't surprised by it
   later. Drop the base backup files as well.
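
A rough sketch of steps 1 and 2 follows. The role name, addresses, and paths are
placeholders rather than our actual values, and authentication details (e.g. a
`.pgpass` file for the replication role) are omitted:

    # On the primary: create a role with the REPLICATION and LOGIN privileges
    psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';"

    # pg_hba.conf on the primary must allow that role to connect to the special
    # "replication" database from the secondary's address, e.g.:
    #   host  replication  replicator  192.0.2.10/32  scram-sha-256
    psql -c "SELECT pg_reload_conf();"

    # On the secondary: one way to take the base backup and ship it in a single
    # step is to run pg_basebackup over the replication connection;
    # -X stream also copies the WAL needed to make the backup consistent
    pg_basebackup -h primary.example.com -U replicator \
        -D /var/lib/postgresql/12/data -X stream -P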

[warm standby]: https://www.postgresql.org/docs/current/warm-standby.html
[hot standby]: https://www.postgresql.org/docs/current/hot-standby.html
[base backup]: https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-BASE-BACKUP
[restore base backup]: https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-PITR-RECOVERY
[hba changes]: https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-AUTHENTICATION
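
A minimal sketch of the secondary's configuration for steps 3 through 5,
assuming PostgreSQL 12's layout (paths and connection details are placeholders;
on 12 there is no recovery.conf, so standby settings go in the main
configuration file and a standby.signal file marks the server as a standby):

    # Append the standby settings to the secondary's configuration
    cat >> /var/lib/postgresql/12/data/postgresql.conf <<'EOF'
    primary_conninfo = 'host=primary.example.com user=replicator'
    primary_slot_name = 'konpaku'
    hot_standby = on
    EOF

    # Tell PostgreSQL to start up in standby (recovery) mode
    touch /var/lib/postgresql/12/data/standby.signal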

# Failover procedure (primary & secondary role swap)

TBD

# Database upgrade procedure

TBD

## Minor version upgrades

TBD

## Major version upgrades

TBD

# Accessing past data

TBD

# Changes to make on the next primary reboot

- Set `wal_level` to replica; we are not going to use logical replication. TODO:
  Will this create WAL files which are incompatible with one another?
- Drop the `archive_command` and set `archive_mode` to off; we're going to use
  streaming replication instead of log shipping. (Both changes are sketched
  below.)
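
As a sketch, both changes could be queued up with `ALTER SYSTEM` so that they
take effect at the next restart (neither setting can change while the server is
running); editing postgresql.conf directly would work just as well:

    # Run as a superuser on the primary; applied when the server next restarts
    psql -c "ALTER SYSTEM SET wal_level = 'replica';"
    psql -c "ALTER SYSTEM SET archive_mode = 'off';"
    psql -c "ALTER SYSTEM RESET archive_command;"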

# Open questions

- Would pgbouncer/pgpool be valuable here? It could be used to quickly redirect
  traffic from some or all nodes to one server or another. If so, we should
  probably put it on a dedicated host so that it's robust to host-level failures
  in the primary or secondary servers.
- What is the plan of execution for replacing the server hardware?
- How often are WALs recycled, and how far into the past can we peek?
- Can we/should we detect failure conditions and automate the failover
  procedure?
- What kinds of monitoring and alarms can we/should we set up?
- How can we maintain an ongoing high level of confidence in our system?

It may be possible to *always* service SELECT statements from either the primary
or secondary. If we eventually require load balancing, these queries could be
eligible for it. However, queries (or transactions?) which mutate the database
may only be executed on the primary (see the sketch following this list).
Therefore:

- If we require more capacity for writes, is vertical scaling our only option?
- In the event of a primary outage, are we able to transparently direct reads
  to the secondary?
- Can application-level changes be implemented which would allow for reduced
  service availability (read-only) in the event of a primary outage?
- If applications cannot write to the database, but can write to other data
  stores (filesystem, redis, etc), can inconsistent states be produced?
- Can we simulate typical and excessive levels of application traffic against
  the database for mock scenarios?
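
One primitive that could support this kind of read/write routing or health
checking is `pg_is_in_recovery()`, which reports whether a server is currently
acting as a standby. A minimal sketch, with placeholder hostnames:

    # Returns "t" on a hot standby and "f" on the primary; a proxy or health
    # check could use this to decide where writes may be sent
    psql -h db-primary.example.com -tAc "SELECT pg_is_in_recovery();"
    psql -h db-secondary.example.com -tAc "SELECT pg_is_in_recovery();"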

# Mock contingency scenarios

We need to set up a test network with a primary and secondary and induce failure
scenarios to learn how our applications behave under those conditions and
practice the failure resolution steps.

This will require at least the following:

- A couple of sr.ht applications (meta.sr.ht and git.sr.ht would be good)
- A primary and secondary PostgreSQL server

At least three separate VMs are required here, or four if we set up a standalone
pgbouncer or pgpool. Then we should play out the following scenarios:

- Promoting the secondary to the primary (sketched after this list)
- Shutting down the primary or secondary without advance preparation
- Accessing the database state at an arbitrary point in the past
- Major and minor version upgrades (pg 11 to 12)
- Host OS upgrades (Alpine 3.11 to 3.12 or edge)
- Causing a ZFS resilver on the primary and monitoring database performance:
  does this trigger a "soft outage" where performance is awful but the database
  is ostensibly online?
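
For the promotion scenario, either of the following should work on PostgreSQL
12 (the data directory path is a placeholder):

    # On the standby being promoted:
    pg_ctl promote -D /var/lib/postgresql/12/data

    # or, from a superuser connection to the standby:
    psql -c "SELECT pg_promote();"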

At each stage of these processes, observe application behavior and record the
observations.