Backup/Recovery
===============
This document describes the backups of the data required for PrivateStorageio to operate.
It describes the situations in which these backups are intended to be useful.
It also explains how to use these backups to recover in those situations.
Tahoe-LAFS Storage Nodes
------------------------
The state associated with a Tahoe-LAFS storage node consists of at least:

1. the "node directory" containing

   * configuration,
   * logs,
   * public and private keys,
   * and service fURLs.

2. the "storage" directory containing

   * user ciphertext,
   * garbage collector state,
   * and corruption advisories.
Node Directories
~~~~~~~~~~~~~~~~
The "node directory" changes gradually over time.
New logs are written (including incident reports).
The announcement sequence number is incremented.
The introducer cache is updated.
The critical state necessary to reproduce an identical storage node does not change.
This state consists of:

* the node id (my_nodeid)
* the node private key (private/node.privkey)
* the node x509v3 certificate (private/node.pem)
A backup of the node directory can be used to create a Tahoe-LAFS storage node with the same identity as the original storage node.
It *cannot* be used to recover the user ciphertext held by the original storage node.
Nor will it recover the state which gradually changes over time.
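
Before taking a backup, it can be worth confirming that this critical identity state is present on each node.
The following is a minimal sketch of such a check, assuming the node directory path used elsewhere in this document ::

   DIR=/var/db/tahoe-lafs/storage
   for f in my_nodeid private/node.privkey private/node.pem; do
       # Report any critical identity file that is missing from the node directory.
       test -e "${DIR}/${f}" && echo "found: ${f}" || echo "MISSING: ${f}"
   done
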
Backup
``````
A one-time backup has been made of these directories in the PrivateStorageio 1Password account.
The "Tahoe-LAFS Storage Node Backups" vault contains backups of staging and production node directories.
The process for creating these backups is as follows:
::

   DOMAIN=private.storage
   FILES="node.pubkey private/ tahoe.cfg my_nodeid tahoe-client.tac node.url permutation-seed"
   DIR=/var/db/tahoe-lafs/storage
   for n in $(seq 1 5); do
       NODE=storage00${n}.${DOMAIN}
       ssh $NODE tar vvjcf - -C $DIR $FILES > ${NODE}.tar.bz2
   done
   tar vvjcf ${DOMAIN}.tar.bz2 *.tar.bz2
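
Before uploading the combined tarball, its contents can be spot-checked.
This is a sketch; it lists the per-node archives and then confirms one of them contains the critical identity files named above ::

   tar tjf ${DOMAIN}.tar.bz2
   tar tjf storage001.${DOMAIN}.tar.bz2 | grep -E 'my_nodeid|node\.privkey|node\.pem'
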
Recovery
````````
#. Prepare a system onto which to recover the node directory.
   The rest of these steps assume that PrivateStorageio is deployed on the node.

#. Download the backup tarball from 1Password.

#. Extract the particular node directory backup to recover from ::

     [LOCAL]$ tar xvf ${DOMAIN}.tar.bz2 ${NODE}.${DOMAIN}.tar.bz2

#. Upload the node directory backup to the system onto which recovery is taking place ::

     [LOCAL]$ scp ${NODE}.${DOMAIN}.tar.bz2 ${NODE}.${DOMAIN}:recovery.tar.bz2

#. Clean up the local copies of the backup files ::

     [LOCAL]$ rm -iv ${DOMAIN}.tar.bz2 ${NODE}.${DOMAIN}.tar.bz2

#. The rest of the steps are executed on the system on which recovery is taking place.
   Log in ::

     [LOCAL]$ ssh ${NODE}.${DOMAIN}

#. On the node make sure there is no storage service running ::

     [REMOTE]$ systemctl status tahoe.storage.service

   If there is then figure out why and stop it if it is safe to do so ::

     [REMOTE]$ systemctl stop tahoe.storage.service
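
   One possible non-interactive check that the service is really stopped, using standard systemd tooling ::

     [REMOTE]$ systemctl is-active tahoe.storage.service || echo "storage service is stopped"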

#. On the node make sure there is no existing node directory ::

     [REMOTE]$ stat /var/db/tahoe-lafs/storage

   If there is then figure out why and remove it if it is safe to do so.

#. Unpack the node directory backup into the correct location ::

     [REMOTE]$ mkdir -p /var/db/tahoe-lafs/storage
     [REMOTE]$ tar xvf recovery.tar.bz2 -C /var/db/tahoe-lafs/storage
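
   A quick listing can confirm that the expected files, including the critical identity state named above, were extracted ::

     [REMOTE]$ ls /var/db/tahoe-lafs/storage
     [REMOTE]$ ls /var/db/tahoe-lafs/storage/private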

#. Mark the node directory as created and consistent ::

     [REMOTE]$ touch /var/db/tahoe-lafs/storage.created

#. Start the storage service ::

     [REMOTE]$ systemctl start tahoe.storage.service
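
   To confirm the service actually came up, its state and recent log entries can be inspected.
   This is a sketch using standard systemd tooling ::

     [REMOTE]$ systemctl is-active tahoe.storage.service
     [REMOTE]$ journalctl -u tahoe.storage.service --since "5 minutes ago"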

#. Clean up the remote copies of the backup file ::

     [REMOTE]$ rm -iv recovery.tar.bz2
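
As a final check, the recovered node should report the same identity as the original.
One possible sketch is to print the recovered node id and compare it against the id of the node the backup was taken from ::

   [REMOTE]$ cat /var/db/tahoe-lafs/storage/my_nodeid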