Commits on Source: 855. Showing 928 additions and 52 deletions.
# Define rules for a job that should run for events related to a merge request
# - merge request is opened, a new commit is pushed to its branch, etc.  This
# definition does nothing by itself but can be referenced by jobs that want to
# run in this condition.
.merge_request_rules: &RUN_ON_MERGE_REQUEST
  rules:
    # If the pipeline is triggered by a merge request event then we should
    # run.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    # If the pipeline is triggered by anything else then we should not run.
    - when: "never"

# As above, but rules for running when the scheduler triggers the pipeline.
.schedule_rules: &RUN_ON_SCHEDULE
  rules:
    # There are multiple schedules so make sure this one is for us.  The
    # `SCHEDULE_TARGET` variable is explicitly, manually set by us in the
    # schedule configuration.
    - if: '$SCHEDULE_TARGET != $CI_JOB_NAME'
      when: "never"
    # Make sure this is actually a scheduled run.
    - if: '$CI_PIPELINE_SOURCE != "schedule"'
      when: "never"
    # Conditions look good: run.
    - when: "always"

stages:
  - "build"
  - "deploy"

default:
  # Guide the choice of an appropriate runner for all these jobs.
  # https://docs.gitlab.com/ee/ci/runners/#runner-runs-only-tagged-jobs
@@ -10,6 +43,7 @@ variables:
  GET_SOURCES_ATTEMPTS: 10

docs:
  <<: *RUN_ON_MERGE_REQUEST
  stage: "build"
  script:
    - "nix-build --attr docs --out-link result-docs"
@@ -22,18 +56,20 @@ docs:
    expose_as: "documentation"
unit-tests:
  <<: *RUN_ON_MERGE_REQUEST
  stage: "build"
  script:
    - "nix-build --attr unit-tests && cat result"
.morph-build: &MORPH_BUILD
  <<: *RUN_ON_MERGE_REQUEST
  timeout: "3 hours"
  stage: "build"
  script:
    - |
      set -x
      # GRID is set in one of the "instantiations" of this job template.
      nix-shell --pure --run "morph build --show-trace morph/grid/${GRID}/grid.nix"
morph-build-localdev:
@@ -47,12 +83,11 @@ morph-build-localdev:
      # just needs this tweak.
      echo '{}' > morph/grid/${GRID}/public-keys/users.nix
morph-build-staging:
  <<: *MORPH_BUILD
  variables:
    GRID: "testing"

morph-build-production:
  <<: *MORPH_BUILD
  variables:
@@ -60,7 +95,8 @@ morph-build-production:
vulnerability-scan:
  <<: *RUN_ON_MERGE_REQUEST
  stage: "build"
  script:
    - "ci-tools/vulnerability-scan security-report.json"
    - "ci-tools/count-vulnerabilities <security-report.json"
@@ -71,10 +107,11 @@ vulnerability-scan:
system-tests:
  <<: *RUN_ON_MERGE_REQUEST
  timeout: "3 hours"
  stage: "build"
  script:
    - "nix-shell --pure --run 'nix-build --attr system-tests'"
# A template for a job that can update one of the grids.
.update-grid: &UPDATE_GRID
@@ -91,6 +128,7 @@ system-tests:
# Update the staging deployment - only on a commit to the develop branch.
update-staging:
  <<: *UPDATE_GRID
  # https://docs.gitlab.com/ee/ci/yaml/#rules
  rules:
    # https://docs.gitlab.com/ee/ci/yaml/index.html#rulesif
@@ -113,6 +151,7 @@ update-staging:
# Update the production deployment - only on a commit to the production branch.
deploy-to-production:
  <<: *UPDATE_GRID
  # https://docs.gitlab.com/ee/ci/yaml/#rules
  rules:
    # https://docs.gitlab.com/ee/ci/yaml/index.html#rulesif
@@ -124,3 +163,27 @@ deploy-to-production:
    # See notes in `update-staging`.
    name: "production"
    url: "https://monitoring.private.storage/"
update-nixpkgs:
  <<: *RUN_ON_SCHEDULE
  stage: "build"
  script:
    - |
      ./ci-tools/with-ssh-agent \
        ./ci-tools/update-nixpkgs \
          "$CI_SERVER_URL" \
          "$CI_SERVER_HOST" \
          "$CI_PROJECT_PATH" \
          "$CI_PROJECT_ID" \
          "$CI_DEFAULT_BRANCH"

update-production:
  <<: *RUN_ON_SCHEDULE
  stage: "build"
  script:
    - |
      ./ci-tools/update-production \
        "$CI_SERVER_URL" \
        "$CI_PROJECT_ID" \
        "develop" \
        "production"
Deployment notes
================
- 2023-06-19
ZKAPAuthorizer's Tahoe-LAFS plugin name changed from "privatestorageio-zkapauthz-v1" to "privatestorageio-zkapauthz-v2".
This causes Tahoe-LAFS to use a different filename to persist the plugin's Foolscap fURL.
To preserve the original fURL value (required) each storage node needs this command run before the deployment::
cp /var/db/tahoe-lafs/storage/private/storage-plugin.privatestorageio-zkapauthz-v{1,2}.furl
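  The ``v{1,2}`` in that command is bash brace expansion, which expands into the source (``…-v1.furl``) and destination (``…-v2.furl``) arguments for ``cp``. Prefixing the command with ``echo`` is a cheap dry run to confirm exactly what will execute before touching a storage node:

  ```shell
  # Dry run: print the cp command the brace expansion produces.
  bash -c 'echo cp /var/db/tahoe-lafs/storage/private/storage-plugin.privatestorageio-zkapauthz-v{1,2}.furl'
  ```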
- 2023-04-19
The team switched from Slack to Zulip.
  For the monitoring notifications to reach Zulip, a webhook bot has to be created in Zulip and a secret URL has to be constructed as described in `https://zulip.com/integrations/doc/grafana`_, then added to the ``private_keys`` directory (see ``grid/local/private-keys/grafana-zulip-url`` for an example).
Find the secret URL for production at `https://my.1password.com/vaults/7flqasy5hhhmlbtp5qozd3j4ga/allitems/rb22ipb6gvokohzq2d2hhv6t6u`_.
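  A quick shape check of the secret before deploying may catch copy/paste mistakes. This is only a sketch: the URL pattern is an assumption based on Zulip's Grafana integration documentation, and the sample value is hypothetical (a real secret comes from the webhook bot created in Zulip):

  ```shell
  # Hypothetical sample secret; expected shape is
  # https://<zulip-host>/api/v1/external/grafana?api_key=...&stream=...
  SAMPLE='https://example.zulipchat.com/api/v1/external/grafana?api_key=KEY&stream=monitoring'
  printf '%s\n' "$SAMPLE" |
    grep -qE '^https://[^/]+/api/v1/external/grafana\?api_key=' && echo ok
  ```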
- 2021-12-20
`https://whetstone.private.storage/privatestorage/privatestorageops/-/issues/399`_ requires moving the PaymentServer database on the ``payments`` host onto a new dedicated filesystem.
Follow these steps *before* deploying this version of PrivateStorageio:
0. Deploy the `PrivateStorageOps change <https://whetstone.private.storage/privatestorage/privatestorageops/-/merge_requests/169>`_ that creates a new dedicated volume.
1. Put a disk label on the new dedicated volume ::
nix-shell -p parted --run 'parted /dev/nvme1n1 mklabel msdos'
2. Put a properly aligned partition in the new disk label ::
nix-shell -p parted --run 'parted /dev/nvme1n1 mkpart primary ext2 4096s 4G'
3. Create a labeled filesystem on the partition ::
mkfs.ext4 -L zkapissuer-data /dev/nvme1n1p1
4. Deploy the PrivateStorageio update.
5. Move the database file to the new location ::
mv -iv /var/lib/zkapissuer/vouchers.sqlite3 /var/lib/zkapissuer-v2
6. Clean up the old state directory ::
rm -ir /var/lib/zkapissuer
7. Start the PaymentServer service (not running because its path assertions were not met earlier) ::
systemctl start zkapissuer
- 2021-10-12 The secret in ``private-keys/grafana-slack-url`` needs to be changed to remove the ``SLACKURL=`` prefix.
- 2021-09-30 `Enable alerting <https://whetstone.private.storage/privatestorage/PrivateStorageio/-/merge_requests/185>`_ needs a secret in ``private-keys/grafana-slack-url`` looking like the template in ``morph/grid/local/private-keys/grafana-slack-url`` and pointing to the secret API endpoint URL saved in `this 1Password entry <https://privatestorage.1password.com/vaults/7flqasy5hhhmlbtp5qozd3j4ga/allitems/cgznskz2oix2tyx5xyntwaos5i>`_ (or create a new secret URL at https://www.slack.com/apps/A0F7XDUAZ).
- 2021-09-07 `Manage access to payment metrics <https://whetstone.private.storage/privatestorage/PrivateStorageio/-/merge_requests/146>`_ requires moving and chown'ing the PaymentServer database on the ``payments`` host::
      mkdir /var/lib/zkapissuer
@@ -15,4 +63,3 @@ Deployment notes
      chmod 750 /var/lib/zkapissuer
      chmod 640 /var/lib/zkapissuer/vouchers.sqlite3
Project Hosting Moved
=====================
This project can now be found at https://whetstone.privatestorage.io/privatestorage/PrivateStorageio
PrivateStorageio
================
@@ -20,5 +15,6 @@ The documentation can be built using this command::
$ nix-build docs.nix
The documentation is also built on and published by CI:
Navigate to the `list of finished jobs <https://whetstone.private.storage/privatestorage/PrivateStorageio/-/jobs>`_ and download the artefact of the latest ``docs`` build.
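For scripted access, GitLab also exposes the latest successful job's artifacts over its API. A sketch of constructing that download URL (the branch name ``develop`` and the URL-encoded project path here are assumptions; substitute your own):

```shell
# Build the "latest job artifacts" download URL (GitLab API v4).
SERVER='https://whetstone.private.storage'
PROJECT='privatestorage%2FPrivateStorageio'   # URL-encoded "group/project"
URL="$SERVER/api/v4/projects/$PROJECT/jobs/artifacts/develop/download?job=docs"
echo "$URL"
# then e.g.: curl --location --output docs.zip "$URL"
```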
#!/usr/bin/env bash
set -euo pipefail
# The first argument is a Stripe secret API key; the second is the
# deployment domain name.
KEY=$1
shift
DOMAIN=$1
shift
PRODUCT_ID=$(
curl https://api.stripe.com/v1/products \
-u "${KEY}:" \
-d "name=30 GB-months" \
-d "description=30 GB-months of Private.Storage storage × time" \
-d "statement_descriptor=PRIVATE STORAGE" \
-d "url=https://${DOMAIN}/" |
jp --unquoted id
)
echo "Product: $PRODUCT_ID"
PRICE_ID=$(
curl https://api.stripe.com/v1/prices \
-u "${KEY}:" \
-d "currency=USD" \
-d "unit_amount=650" \
-d "tax_behavior=exclusive" \
-d "product=${PRODUCT_ID}" |
jp --unquoted id
)
echo "Price: $PRICE_ID"
LINK_URL=$(
curl https://api.stripe.com/v1/payment_links \
-u "${KEY}:" \
-d "line_items[0][price]=${PRICE_ID}" \
-d "line_items[0][quantity]=1" \
-d "after_completion[type]"=redirect \
-d "after_completion[redirect][url]"="https://${DOMAIN}/payment/success" |
jp --unquoted url
)
echo "Payment link: $LINK_URL"
#!/usr/bin/env bash
set -euo pipefail
# The first argument is a Stripe secret API key; the second is the deployment
# domain name (the webhook is registered at payments.<domain>).
KEY=$1
shift
DOMAIN=$1
shift
curl \
https://api.stripe.com/v1/webhook_endpoints \
-u "${KEY}:" \
-d url="https://payments.${DOMAIN}/v1/stripe/webhook" \
-d "enabled_events[]"="checkout.session.completed"
monitoring.deerfield.leastauthority.com,vps-50812a54.vps.ovh.net,51.38.134.175 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIgegzAxXPhxFK8vglBlUAFTzUoCj5TxqcLS57NaL2l
payments.deerfield.leastauthority.com,vps-3cbcf174.vps.ovh.net,217.182.78.151 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFI32csriKoUUD3e813gcEAD5CCuf8rUnary70HfJMSr
storage001.deerfield.leastauthority.com,185.225.209.174 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKX9Ei+WdNVvIncHQZ9CdEXZeSj2zBM/NQEuqmMbep0A
storage002.deerfield.leastauthority.com,38.170.241.34 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK3TAQj5agAv9AOZQhE95vATQKcNbNZj5Y3xMb5cjzGZ
storage003.deerfield.leastauthority.com,ns3728736.ip-151-80-28.eu,151.80.28.108 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsh9No4PT3hHDsY/07kDSRCg1Jse38n7GY0Rk9DnyPe
monitoring.privatestorage-staging.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINI9kvEBaOMvpWqcFH+6nFvRriBECKB4RFShdPiIMkk9
payments.privatestorage-staging.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0eO/01VFwdoZzpclrmu656eaMkE19BaxtDdkkFHMa8
storage001.privatestorage-staging.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFP8L6OHCxq9XFd8ME8ZrCbmO5dGZDPH8I5dm0AwSGiN
storage001.privatestorage-staging.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6iWHO9/4s3h9VIpaxgD+rgj/OQh8+jupxBoOmie3St
@@ -74,7 +74,7 @@ update_grid_nodes() {
# Find the names of all hosts that belong to this grid.  This list includes
# one extra string, "network", which is morph configuration stuff and we need
# to filter out later.
nodes=$(nix --extra-experimental-features nix-command eval --impure --json --expr "(builtins.concatStringsSep \" \" (builtins.attrNames (import $grid_dir/grid.nix)))" | jp --unquoted @)
# Tell every server in the network to update itself.
for node in ${nodes}; do
......
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p git curl python3
# ^^
# we need git to commit and push our changes
# we need curl to create the gitlab MR
# we need python to format the data as json
set -eux -o pipefail
main() {
# This is a base64-encoded OpenSSH-format SSH private key that we can use
# to push and pull with git over ssh.
local SSHKEY=$1
shift
# This is a GitLab authentication token we can use to make API calls onto
# GitLab.
local TOKEN=$1
shift
# This is the URL of the root of the GitLab API.
local SERVER_URL=$1
shift
# This is the hostname of the GitLab server (suitable for use in a Git
# remote).
local SERVER_HOST=$1
shift
# This is the "group/project"-style identifier for the project we're working
# with.
local PROJECT_PATH=$1
shift
# The GitLab id of the project (eg, from CI_PROJECT_ID in the CI
# environment).
local PROJECT_ID=$1
shift
# The name of the branch on which to base changes and which to target with
# the resulting merge request.
local DEFAULT_BRANCH=$1
shift
# Only proceed if we have an ssh-agent.
check_agent
# Pick a branch name into which to push our work.
local SOURCE_BRANCH="nixpkgs-upgrade-$(date +%Y-%m-%d)"
setup_git
checkout_source_branch "$SSHKEY" "$SERVER_HOST" "$PROJECT_PATH" "$DEFAULT_BRANCH" "$SOURCE_BRANCH"
build "result-before"
# If nothing changed, report this and exit without an error.
if ! update_nixpkgs; then
echo "No changes."
exit 0
fi
build "result-after"
local DIFF=$(compute_diff "./result-before" "./result-after")
commit_and_push "$SSHKEY" "$SOURCE_BRANCH" "$DIFF"
create_merge_request "$SERVER_URL" "$TOKEN" "$PROJECT_ID" "$DEFAULT_BRANCH" "$SOURCE_BRANCH" "$DIFF"
}
# Add the ssh key required to push and (maybe) pull to the ssh-agent. This
# may have a limited lifetime in the agent so operations that are going to
# require the key should refresh it immediately before starting.
refresh_ssh_key() {
local KEY_BASE64=$1
shift
# A GitLab CI/CD variable set for us to use.
echo "${KEY_BASE64}" | base64 -d | ssh-add -
}
# Make git usable by setting some global mandatory options.
setup_git() {
# We may not know the git/ssh server's host key yet. In that case, learn
# it and proceed.
export GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=accept-new"
git config --global user.email "update-bot@private.storage"
git config --global user.name "Update Bot"
}
# Return with an error if no ssh-agent is detected.
check_agent() {
# We require an ssh-agent to be available so we can put the ssh private
# key in it. The key is given to us in memory and we don't really want to
# put it on disk anywhere so an agent is the easiest way to make it
# available for git/ssh operations.
if [ ! -v SSH_AUTH_SOCK ]; then
echo "ssh-agent is required but missing, aborting."
exit 1
fi
}
# Make a fresh clone of the repository, make it our working directory, and
# check out the branch we intend to commit to (the "source" of the MR).
checkout_source_branch() {
local SSHKEY=$1
shift
local SERVER_HOST=$1
shift
local PROJECT_PATH=$1
shift
# The branch we'll start from.
local DEFAULT_BRANCH=$1
shift
# The name of our branch.
local BRANCH=$1
shift
# To avoid messing with the checkout we're running from (which GitLab
# tends to like to share across builds) clone it to a new temporary path.
git clone . working-copy
cd working-copy
# Make sure we know the name of a remote that points at the right place.
# Then use it to make sure the base branch is up-to-date. It usually
# should be already but in case it isn't we don't want to start from a
# stale revision.
git remote add upstream gitlab@"$SERVER_HOST":"$PROJECT_PATH".git
refresh_ssh_key "$SSHKEY"
git fetch upstream "$DEFAULT_BRANCH"
# Typically this tool runs infrequently enough that the branch doesn't
# already exist. However, as a convenience for developing on this tool
# itself, if it does already exist, wipe it and start fresh for greater
# predictability.
git branch -D "${BRANCH}" || true
# Then create a new branch starting from the mainline development branch.
git checkout -B "${BRANCH}" upstream/"$DEFAULT_BRANCH"
}
# Build all of the grids (the `morph` attribute of `default.nix`) and link the
# result to the given parameter. This will give us some material to diff.
build() {
# The name of the nix result symlink.
local RESULT=$1
shift
# The local grid can only build if you populate its users.
echo '{}' > morph/grid/local/public-keys/users.nix
nix-build -A morph -o "$RESULT"
}
# Perform the actual dependency update. If there are no changes, exit with an
# error code.
update_nixpkgs() {
# Spawn *another* nix-shell that has the *other* update-nixpkgs tool.
# Should sort out this mess sooner rather than later... Also, tell the
# tool (running from another checkout) to operate on this clone's package
# file instead of the one that's part of its own checkout.
nix-shell ../shell.nix --run 'update-nixpkgs ${PWD}/nixpkgs.json'
# Signal a kind of error if we did nothing (expected in the case where
# nixpkgs hasn't changed since we last ran).
if git diff --exit-code; then
return 1
fi
}
# Return a description of the package changes resulting from the dependency
# update.
compute_diff() {
local LEFT=$1
shift
local RIGHT=$1
shift
nix --extra-experimental-features nix-command store diff-closures "$LEFT" "$RIGHT"
}
# Commit and push all changes in the working tree along with a description of
# the package changes.
commit_and_push() {
local SSHKEY=$1
shift
local BRANCH=$1
shift
local DIFF=$1
shift
git commit -am "bump nixpkgs
```
$DIFF
```
"
refresh_ssh_key "$SSHKEY"
git push --force upstream "${BRANCH}:${BRANCH}"
}
# Create a GitLab MR for the branch we just pushed, including a description of
# the package changes it implies.
create_merge_request() {
local SERVER_URL=$1
shift
local TOKEN=$1
shift
local PROJECT_ID=$1
shift
# The target branch of the MR.
local TARGET_BRANCH=$1
shift
# The source branch of the MR.
local SOURCE_BRANCH=$1
shift
local DIFF=$1
shift
local BODY=$(python3 -c '
import sys, json, re
def rewrite_escapes(s):
# `nix store diff-closures` output is fancy and includes color codes and
# such. That looks a bit less than nice in a markdown-formatted comment so
# strip all of it. If we wanted to be fancy we could rewrite it in a
# markdown friendly way (eg using html).
return re.sub(r"\x1b\[[^m]*m", "", s)
print(json.dumps({
"id": sys.argv[1],
"target_branch": sys.argv[2],
"source_branch": sys.argv[3],
"remove_source_branch": True,
"title": "bump nixpkgs version",
"description": f"```\n{rewrite_escapes(sys.argv[4])}\n```",
}))
' "$PROJECT_ID" "$TARGET_BRANCH" "$SOURCE_BRANCH" "$DIFF")
curl --verbose -X POST --data "${BODY}" --header "Content-Type: application/json" --header "PRIVATE-TOKEN: ${TOKEN}" "${SERVER_URL}/api/v4/projects/${PROJECT_ID}/merge_requests"
}
# Pull the private ssh key and GitLab token from the environment here so we
# can work with them as arguments everywhere else. They're passed to us in
# the environment because *maybe* this is *slightly* safer than passing them
# in argv.
SSHKEY="$UPDATE_NIXPKGS_PRIVATE_SSHKEY_BASE64"
TOKEN="$UPDATE_NIXPKGS_PRIVATE_TOKEN"
# Before proceeding, remove the secrets from our environment so we don't pass
# them to child processes - none of which need them.
unset UPDATE_NIXPKGS_PRIVATE_SSHKEY_BASE64 UPDATE_NIXPKGS_PRIVATE_TOKEN
main "$SSHKEY" "$TOKEN" "$@"
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p git curl python3
set -eux -o pipefail
main() {
local TOKEN=$1
shift
local SERVER_URL=$1
shift
local PROJECT_ID=$1
shift
local SOURCE_BRANCH=$1
shift
local TARGET_BRANCH=$1
shift
# Make sure the things we want to talk about are locally known. GitLab
# seems to prefer to know about as few refs as possible.
checkout_git_ref "$SOURCE_BRANCH"
checkout_git_ref "$TARGET_BRANCH"
# If there have been no changes we'll just abandon this update.
if ! ensure_changes "$SOURCE_BRANCH" "$TARGET_BRANCH"; then
echo "No changes."
exit 0
fi
local NOTES=$(describe_update "$SOURCE_BRANCH" "$TARGET_BRANCH")
create_merge_request "$TOKEN" "$SERVER_URL" "$PROJECT_ID" "$SOURCE_BRANCH" "$TARGET_BRANCH" "$NOTES"
}
checkout_git_ref() {
local REF=$1
shift
git fetch origin "$REF"
}
ensure_changes() {
local SOURCE_BRANCH=$1
shift
local TARGET_BRANCH=$1
shift
if [ "$(git rev-parse origin/"$SOURCE_BRANCH")" = "$(git rev-parse origin/"$TARGET_BRANCH")" ]; then
return 1
fi
}
describe_merge_request() {
local rev=$1
shift
git show "$rev" | grep 'See merge request' | sed -e 's/See merge request //' | tr -d '[:space:]'
}
describe_merge_requests() {
local RANGE=$1
shift
local TARGET=$1
shift
# Find all of the relevant merge revisions
local onelines=$(git log --merges --first-parent -m --oneline "$RANGE" | grep "into '$TARGET'")
# Describe each merge revision
local IFS=$'\n'
for line in $onelines; do
local rev=$(echo "$line" | cut -d ' ' -f 1)
echo -n "* "
describe_merge_request "$rev"
echo
done
}
describe_update() {
local SOURCE_BRANCH=$1
shift
local TARGET_BRANCH=$1
shift
# Since the target (production) should not diverge from the source
# (develop) it is fine to use `..` instead of `...` in the git ranges here.
# `...` encounters problems related to discovering the merge base because
# of the way GitLab manages the git checkout on CI (I think).
local NOTES=$(git diff origin/"$TARGET_BRANCH"..origin/"$SOURCE_BRANCH" -- DEPLOYMENT-NOTES.rst)
# There often are no notes and that makes for boring reading so toss in a
# diffstat as well.
local DIFFSTAT=$(git diff --stat origin/"$TARGET_BRANCH"..origin/"$SOURCE_BRANCH")
local WHEN=$(git log --max-count=1 --format='%cI' origin/"$TARGET_BRANCH")
# Describe all of the MRs that were merged into the source branch that are
# about to be merged into the target branch.
local MR=$(describe_merge_requests origin/"$TARGET_BRANCH"..origin/"$SOURCE_BRANCH" "$SOURCE_BRANCH")
echo "\
Changes from $SOURCE_BRANCH since $WHEN
=======================================
Deployment Notes
----------------
\`\`\`
$NOTES
\`\`\`
Included Merge Requests
-----------------------
$MR
Diff Stat
---------
\`\`\`
$DIFFSTAT
\`\`\`
"
}
create_merge_request() {
local TOKEN=$1
shift
local SERVER_URL=$1
shift
local PROJECT_ID=$1
shift
# The source branch of the MR.
local SOURCE_BRANCH=$1
shift
# The target branch of the MR.
local TARGET_BRANCH=$1
shift
local NOTES=$1
shift
local BODY=$(python3 -c '
import sys, json
print(json.dumps({
"id": sys.argv[1],
"source_branch": sys.argv[2],
"target_branch": sys.argv[3],
"remove_source_branch": True,
"title": f"update {sys.argv[3]}",
"description": sys.argv[4],
}))
' "$PROJECT_ID" "$SOURCE_BRANCH" "$TARGET_BRANCH" "$NOTES")
curl --verbose -X POST --data "${BODY}" --header "Content-Type: application/json" --header "PRIVATE-TOKEN: ${TOKEN}" "${SERVER_URL}/api/v4/projects/${PROJECT_ID}/merge_requests"
}
# Pull the GitLab token from the environment here so we can work with it as
# an argument everywhere else.  It's passed to us in the environment because
# *maybe* this is *slightly* safer than passing it in argv.
#
# The name is slightly weird because it is shared with the update-nixpkgs job.
TOKEN="$UPDATE_NIXPKGS_PRIVATE_TOKEN"
# Before proceeding, remove the secrets from our environment so we don't pass
# them to child processes - none of which need them.
unset UPDATE_NIXPKGS_PRIVATE_TOKEN
main "$TOKEN" "$@"
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p openssh
# This minimal helper just runs another process with an ssh-agent available to
# it.  ssh-agent itself does most of that work for us so the main benefit of
# this script is that it guarantees an ssh-agent is available.
# Just give ssh-agent the command and it will run it and then exit when it
# does.  This is a nice way to do process management so as to avoid leaking
# ssh-agents.  Just in case cleanup fails for some reason, we'll also give
# keys a lifetime with `-t <seconds>` so secrets don't stay in memory
# indefinitely.  Note this means the process run by ssh-agent must finish its
# key-requiring operation within this number of seconds of adding the key.
ssh-agent -t 30 "$@"
{ pkgs ? import ./nixpkgs.nix { } }:
{
  # Render the project documentation source to some presentation format (ie,
  # html) with Sphinx.
@@ -11,4 +11,9 @@
  # Run some unit tests of the Nix that ties all of these things together (ie,
  # PrivateStorageio-internal library functionality).
  unit-tests = pkgs.callPackage ./nixos/unit-tests.nix { };

  # Build all grids into a single derivation.  The derivation also has several
  # attributes that are useful for exploring the configuration in a repl or
  # with eval.
  morph = pkgs.callPackage ./morph {};
}
@@ -61,7 +61,7 @@ master_doc = 'index'
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
......
@@ -24,11 +24,10 @@ The system tests are run using this command::

   $ nix-build --attr system-tests

The system tests boot QEMU VMs, which prevents them from running on CI at this time.
The build requires more than 10 GB of disk space,
and the VMs might time out on slow or busy machines.
If you run into timeouts,
try `raising the number of retries <https://whetstone.private.storage/privatestorage/PrivateStorageio/-/blob/e8233d2/nixos/modules/tests/run-introducer.py#L55-62>`_.
It is also possible to go through the testing script interactively - useful for debugging::
@@ -36,7 +35,7 @@ It is also possible go through the testing script interactively - useful for deb
This will give you a result symlink in the current directory.
Inside that is bin/nixos-test-driver which gives you a kind of REPL for interacting with the VMs.
The kind of `Python in this testScript <https://whetstone.private.storage/privatestorage/PrivateStorageio/-/blob/78881a3/nixos/modules/tests/private-storage.nix#L180>`_ is what you can enter into this REPL.
Consult the `official documentation on NixOS Tests <https://nixos.org/manual/nixos/stable/index.html#sec-nixos-tests>`_ for more information.
Updating Pins
@@ -45,22 +44,27 @@ Updating Pins
Nixpkgs
```````
To update the version of NixOS we deploy with, run::

   nix-shell --run 'update-nixpkgs'

That will update ``nixpkgs.json`` to the latest release on the nixos release channel.
To update the channel, the script will need to be updated,
along with the filenames that have the channel in them.
To create a text summary of what an update changes - to put in Merge Requests, for example - run::

   nix-build -A morph -o result-before
   update-nixpkgs
   nix-build -A morph -o result-after
   nix-shell -p nixUnstable
   nix --extra-experimental-features nix-command store diff-closures ./result-before/ ./result-after/
Gitlab Repositories
```````````````````
To update the version of packages we import from gitlab, run::

   nix-shell --command 'update-gitlab-repo nixos/pkgs/<package>/repo.json'
......
System Designs
--------------
.. toctree::
   :maxdepth: 2

   System Design Template <template>
$HEADLINE
=========
*The goal is to do the least design we can get away with while still making a quality product.*
*Think of this as a tool to help define the problem, analyze solutions, and share results.*
*Feel free to skip sections that you don't think are relevant*
*(but say that you are doing so).*
*Delete the bits in italics*
**Contacts:** *The primary contacts for this design.*
**Date:** *The last time this design was modified. YYYY-MM-DD*
*Short description of the feature.*
*Consider clarifying by also describing what it is not.*
Rationale
---------
*Why are we doing this now?*
*What value does this give to our users?*
*Which users?*
User Stories
------------
**$STORY NAME**
**Category:** *must / nice to have / must not*
As a **$PERSON** I want **$FEATURE** so that **$BENEFIT**.
**Acceptance Criteria:**
* *What concrete conditions must be met for the implementation to be acceptable?*
* *Surface assumptions about the user story that may not be shared across the team.*
* *Describe failure modes and negative scenarios when preconditions for using the feature are not met.*
* *Place the story in a performance/scaling context with real numbers.*
*Have as many as you like.*
*Group user stories together into meaningfully deliverable units.*
*Gather Feedback*
-----------------
*It might be a good idea to stop at this point & get feedback to make sure you're solving the right problem.*
Alternatives Considered
-----------------------
*What we've considered.*
*What trade-offs are involved with each choice.*
*Why we've chosen the one we did.*
Detailed Implementation Design
------------------------------
*Focus on:*
* external and internal interfaces
* how externally-triggered system events (e.g. sudden reboot; network congestion) will affect the system
* scalability and performance
Data Integrity
~~~~~~~~~~~~~~
*If we get this wrong once, we lose forever.*
*What data does the system need to operate on?*
*How will old data be upgraded to meet the requirements of the design?*
*How will data be upgraded to future versions of the implementation?*
Security
~~~~~~~~
*What threat model does this design take into account?*
*What new attack surfaces are added by this design?*
*What defenses are deployed with the implementation to keep those surfaces safe?*
Backwards Compatibility
~~~~~~~~~~~~~~~~~~~~~~~
*What existing systems are impacted by these changes?*
*How does the design ensure they will continue to work?*
Performance and Scalability
~~~~~~~~~~~~~~~~~~~~~~~~~~~
*How will performance of the implementation be measured?*
*After measuring it, record the results here.*
Further Reading
---------------
*Links to related things.*
*Other designs, tickets, epics, mailing list threads, etc.*
@@ -6,13 +6,16 @@
Welcome to PrivateStorageio's documentation!
============================================
Howdy!
We separated the documentation into parts addressing different audiences.
Please enjoy our docs for:
.. toctree::
   :maxdepth: 2

   Administrators <ops/README>
   Developers <dev/README>
   System Designs <dev/designs/index>
Naming
......
@@ -3,11 +3,11 @@ Administrator documentation
This contains documentation regarding running PrivateStorageio.
.. toctree::
   :maxdepth: 2

   morph
   monitoring
   generating-keys
   backup-recovery
   stripe
Backup/Recovery
===============
This document covers the details of backups of the data required for PrivateStorageio to operate.
It describes the situations in which these backups are intended to be useful,
and explains how to use them to recover in those situations.
Tahoe-LAFS Storage Nodes
------------------------
The state associated with a Tahoe-LAFS storage node consists of at least:

1. the "node directory" containing
   configuration,
   logs,
   public and private keys,
   and service fURLs.
2. the "storage" directory containing
   user ciphertext,
   garbage collector state,
   and corruption advisories.
Node Directories
~~~~~~~~~~~~~~~~
The "node directory" changes gradually over time.
New logs are written (including incident reports).
The announcement sequence number is incremented.
The introducer cache is updated.
The critical state necessary to reproduce an identical storage node does not change.
This state consists of
* the node id (my_nodeid)
* the node private key (private/node.privkey)
* the node x509v3 certificate (private/node.pem)
A backup of the node directory can be used to create a Tahoe-LAFS storage node with the same identity as the original storage node.
It *cannot* be used to recover the user ciphertext held by the original storage node.
Nor will it recover the state which gradually changes over time.
Backup
``````
A one-time backup has been made of these directories in the PrivateStorageio 1Password account.
The "Tahoe-LAFS Storage Node Backups" vault contains backups of staging and production node directories.
The process for creating these backups is as follows:
::

   DOMAIN=private.storage
   FILES="node.pubkey private/ tahoe.cfg my_nodeid tahoe-client.tac node.url permutation-seed"
   DIR=/var/db/tahoe-lafs/storage

   for n in $(seq 1 5); do
       NODE=storage00${n}.${DOMAIN}
       ssh $NODE tar vvjcf - -C $DIR $FILES > ${NODE}.tar.bz2
   done

   tar vvjcf ${DOMAIN}.tar.bz2 *.tar.bz2
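Before uploading the archives it is worth confirming that each one actually contains the expected files (a sketch, reusing the variables from the block above)::

   for n in $(seq 1 5); do
       tar tjf storage00${n}.${DOMAIN}.tar.bz2 | grep my_nodeid
   done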
Recovery
````````
#. Prepare a system onto which to recover the node directory.
   The rest of these steps assume that PrivateStorageio is deployed on the node.
#. Download the backup tarball from 1Password.
#. Extract the particular node directory backup to recover from ::

      [LOCAL]$ tar xvf ${DOMAIN}.tar.bz2 ${NODE}.${DOMAIN}.tar.bz2

#. Upload the node directory backup to the system onto which recovery is taking place ::

      [LOCAL]$ scp ${NODE}.${DOMAIN}.tar.bz2 ${NODE}.${DOMAIN}:recovery.tar.bz2

#. Clean up the local copies of the backup files ::

      [LOCAL]$ rm -iv ${DOMAIN}.tar.bz2 ${NODE}.${DOMAIN}.tar.bz2

#. The rest of the steps are executed on the system on which recovery is taking place.
   Log in ::

      [LOCAL]$ ssh ${NODE}.${DOMAIN}

#. On the node make sure there is no storage service running ::

      [REMOTE]$ systemctl status tahoe.storage.service

   If there is then figure out why and stop it if it is safe to do so ::

      [REMOTE]$ systemctl stop tahoe.storage.service

#. On the node make sure there is no existing node directory ::

      [REMOTE]$ stat /var/db/tahoe-lafs/storage

   If there is then figure out why and remove it if it is safe to do so.
#. Unpack the node directory backup into the correct location ::

      [REMOTE]$ mkdir -p /var/db/tahoe-lafs/storage
      [REMOTE]$ tar xvf recovery.tar.bz2 -C /var/db/tahoe-lafs/storage

#. Mark the node directory as created and consistent ::

      [REMOTE]$ touch /var/db/tahoe-lafs/storage.created

#. Start the storage service ::

      [REMOTE]$ systemctl start tahoe.storage.service

#. Clean up the remote copies of the backup file ::

      [REMOTE]$ rm -iv recovery.tar.bz2
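Once the service is started, a brief health check of the recovered node might look like this (a sketch)::

   [REMOTE]$ systemctl is-active tahoe.storage.service
   [REMOTE]$ journalctl --unit tahoe.storage.service --since "-5min"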
Storage Directories
~~~~~~~~~~~~~~~~~~~
The user ciphertext is backed up using `Borg backup <https://borgbackup.readthedocs.io/>`_ to a separate location, currently a SaaS backup storage service (`borgbase.com <https://borgbase.com>`_).
Borg backup uses a *RepoKey* secured by a *passphrase* to encrypt the backup data and an *SSH key* to authenticate against the backup storage service.
Each Borg backup job requires one *backup repository*.
The backups are automatically checked periodically.
SSH keys
````````
Borgbase `recommends creating ed25519 ssh keys with one hundred KDF rounds <https://www.borgbase.com/ssh>`_.
We create one key pair per grid (not per host)::

   $ ssh-keygen -f borgbackup-appendonly-staging -t ed25519 -a 100
   $ ssh-keygen -f borgbackup-appendonly-production -t ed25519 -a 100
Save the key without a passphrase and upload the public part to `Borgbase SSH keys <https://www.borgbase.com/ssh>`_.
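To confirm which key is being uploaded, the fingerprint of the public part can be displayed first (a sketch)::

   $ ssh-keygen -lf borgbackup-appendonly-staging.pub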
Passphrase
``````````
Make up a passphrase to encrypt our repository key with.
Use computer help if you like::

   nix-shell --packages pwgen --command 'pwgen --secure 83 1' # 83 is the year I was born. Very random.
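The passphrase must later be readable by the backup job (``BORG_PASSCOMMAND`` reads it from a file), so one approach is to write it straight into a per-grid file with restrictive permissions (a sketch; the filename matches the init example in the next section)::

   umask 077
   nix-shell --packages pwgen --command 'pwgen --secure 83 1' > borgbackup-passphrase-staging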
Create & initialize the backup repository
`````````````````````````````````````````
Borgbase.com offers a `borgbase.com GraphQL API <https://docs.borgbase.com/api/>`_.
Since our current number of repositories is small, we save time by creating them with a few clicks in the `borgbase.com Web Interface <https://www.borgbase.com/repositories>`_:
* Set up one repository per backup job.
* Set the *Repository Name* to the FQDN of the host to be backed up.
* Add the SSH key created earlier as *Append-Only Access* key.
* Leave the other settings at their defaults.
Then initialize those repositories with our chosen parameters::

   export BORG_PASSCOMMAND="cat borgbackup-passphrase-staging"
   export BORG_RSH="ssh -i borgbackup-appendonly-staging"
   borg init -e repokey-blake2 xyxyx123@xyxyx123.repo.borgbase.com:repo
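A follow-up ``borg info`` confirms the repository is reachable and shows the encryption mode that was configured (a sketch, using the same placeholder repository and environment variables as above)::

   borg info xyxyx123@xyxyx123.repo.borgbase.com:repo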
Reliability checks
``````````````````
Borg handles large amounts of data.
Given enough bits, rare spurious bit flips become a problem.
That is why regular runs of ``borg check`` are recommended
(see the `borgbase FAQ <https://docs.borgbase.com/faq/#how-often-should-i-run-borg-check>`_).
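A manual check can be run with the same credentials used for backups (a sketch; ``--verify-data`` additionally reads and verifies every data chunk, which is slow but thorough)::

   export BORG_PASSCOMMAND="cat borgbackup-passphrase-staging"
   export BORG_RSH="ssh -i borgbackup-appendonly-staging"
   borg check --verify-data xyxyx123@xyxyx123.repo.borgbase.com:repo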
Recovery
````````
Borg offers various methods to restore backups.
A very convenient method is to mount a backup set using FUSE.
Please consult the restore documentation at `Borgbase <https://docs.borgbase.com/restore/>`_ and `Borg <https://borgbackup.readthedocs.io/en/stable/usage/mount.html>`_.
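For example, an archive can be mounted read-only and browsed like a normal filesystem (a sketch; requires FUSE on the host doing the restore)::

   borg mount xyxyx123@xyxyx123.repo.borgbase.com:repo /mnt/borg
   ls /mnt/borg
   borg umount /mnt/borg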
For example::
   echo -n "SILOWzbnkBjxC1hGde9d5Q3Ir/4yLosCLEnEQGAxEQE=" > ristretto.signing-key
ZKAP-Issuer TLS
```````````````
The ZKAPIssuer.service needs a working TLS certificate and expects it in the certbot directory for the domain you configured. For example::

   openssl req -x509 -newkey rsa:4096 -nodes -keyout privkey.pem -out cert.pem -days 3650
   touch chain.pem

Move the three .pem files into the payment server's ``/var/lib/letsencrypt/live/payments.localdev/`` directory and issue a ``sudo systemctl restart zkapissuer.service``.
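Before restarting the service, the certificate's subject and validity window can be double-checked (a sketch, using the filenames from the command above)::

   openssl x509 -in cert.pem -noout -subject -dates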
Monitoring VPN
``````````````