{ "publicStoragePort": 8898
}
# Load the helper function and call it with arguments tailored for the
# production grid. It will make the morph configuration for us. We share this
# function with the testing grid and so have one fewer possible point of divergence.
import ./make-grid.nix {
name = "Production";
nodes = cfg: {
# Here are the hosts that are in this morph network. This is sort of like
# a server manifest. We try to keep as many of the specific details as
# possible out of *this* file so that this file only grows as the server count
# grows. If it grows too much, we can load servers by listing the contents of
# a directory or reading from another JSON file or some such (a sketch of the
# directory-listing idea follows this file). For now, I'm just manually
# maintaining these entries.
#
# The name on the left of the `=` is mostly irrelevant but it does provide
# a default hostname for the server if the configuration on the right side
# doesn't specify one.
#
# The names must be unique!
# Pass the whole grid configuration to the module and let it take what it
# wants.
"storage000" = import ./storage000.nix cfg;
};
}
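The comment above mentions loading server definitions by listing a directory rather than editing this file for every new host. A minimal sketch of that idea, assuming a hypothetical ``nodes/`` subdirectory holding one ``<name>.nix`` file per server (none of these names exist in the current tree)::

  nodes = cfg:
    let
      inherit (builtins) readDir attrNames filter match replaceStrings listToAttrs;
      # Hypothetical directory with one <name>.nix file per server.
      dir = ./nodes;
      nixFiles = filter (f: match ".*\\.nix" f != null) (attrNames (readDir dir));
      mkNode = f: {
        # Strip the ".nix" suffix to get the node name.
        name = replaceStrings [ ".nix" ] [ "" ] f;
        # Each file takes the grid configuration, just like storage000.nix above.
        value = import (dir + "/${f}") cfg;
      };
    in
      listToAttrs (map mkNode nixFiles);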
/.vagrant
/public-keys/users.nix
Set up and use a network of local development VMs
-------------------------------------------------
... using `Vagrant <https://www.vagrantup.com/>`_ to manage VirtualBox VMs.
(The author of this documentation wasted a lot of time trying to get Vagrant to work with KVM/libvirt.
Issues with networking that looked like guest misconfigurations vanished after changing to the better-tested combination of Vagrant and VirtualBox.)
This requires `NixOS <https://nixos.org/>`_.
Nix without the OS will not work.
Use the local development environment
`````````````````````````````````````
0. Add to your NixOS system configuration at ``/etc/nixos/configuration.nix`` (and rebuild)::
# Enable libvirt - likely incompatible with virtualisation.virtualbox!
virtualisation.libvirtd.enable = true;
# Required for LibVirt
security.polkit.enable = true;
# Enable HW acceleration if (nested virtualisation is) available
#boot.kernelModules = [ "kvm-amd" "kvm-intel" ];
1. Enter the morph local grid directory::
cd morph/grid/local
2. Enter the project's nix-shell::
nix-shell ../../../shell.nix
3. Build and start the VMs::
vagrant up --provider=libvirt
Optionally, to switch from QEMU to KVM virtualization, edit the virtual machine definition of each machine and replace the "qemu" on the first line with "kvm"::
sudo virsh list
sudo virsh edit <machine id>   # once for every machine
vagrant halt
vagrant up
4. Then, add the Vagrant SSH configuration to your user's ``~/.ssh/config`` file::
install -d ~/.ssh ; vagrant ssh-config >> ~/.ssh/config
Recent Morph releases honor the ``SSH_CONFIG_FILE`` environment variable (`since 3f90aa88 (March 2020, v1.5.0) <https://github.com/DBCDK/morph/commit/3f90aa885fac1c29fce9242452fa7c0c505744ef#diff-d155ad793bd62e6ea4c44ba985049ecb13a4f4f32f799791b2bce695a16c0101>`_), so in the future this should become a bit more convenient.
5. Create a ``public-keys/users.nix`` file with your SSH key (see ``public-keys/users.nix.example`` for the format; a sketch also appears after these steps) so you'll be able to log in after deploying the new configuration::
$EDITOR public-keys/users.nix
6. Then, build and deploy our software to the Vagrant VMs::
morph build grid.nix
morph push grid.nix
morph deploy grid.nix boot
vagrant halt
vagrant up
morph upload-secrets grid.nix
You should now be able to log in with the users and keys you set in your ``users.nix`` file.
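For reference, here is a sketch of what ``public-keys/users.nix`` might look like. The exact format is defined by ``public-keys/users.nix.example``; the attribute-set shape, the user name, and the key below are only placeholders::

  # public-keys/users.nix -- hypothetical example; copy the real format from
  # users.nix.example and substitute your own user name and SSH public key.
  {
    "alice" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXAMPLEEXAMPLEEXAMPLEEXAMPLEEXAMPLEEXA alice@example.org";
  }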
# -*- mode: ruby -*-
# vi: set ft=ruby :
# This Vagrantfile worked for Florian Sesser using Vagrant 2.2.19 and
# the libvirt provider with the QEMU hypervisor. Earlier Vagrant and VirtualBox versions worked too.
# Get a dedicated libvirt pool name from the environment or fall back to the default one
pool_name = ENV.has_key?('POOL_NAME') ? ENV['POOL_NAME'] : 'default'
# For instance, one could create such a pool beforehand as follows:
# export POOL_NAME=morph_local_$(id -un)
# POOL_PATH="/path/to/your/storage"
# mkdir -p "${POOL_PATH}"
# sudo virsh pool-define-as ${POOL_NAME} --type dir --target "${POOL_PATH}"
# sudo virsh pool-autostart ${POOL_NAME}
# sudo virsh pool-start ${POOL_NAME}
Vagrant.configure("2") do |config|
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Select the base image
config.vm.box = "esselius/nixos"
config.vm.box_version = "20.09"
config.vm.box_check_update = false
# No need to sync the working directory with the guest boxes;
# better to use SFTP to transfer files.
config.vm.synced_folder ".", "/vagrant", disabled: true
# Tune LibVirt/QEmu guests
config.vm.provider :libvirt do |domain|
# The default of one CPU should work
# Increase to speed up boot/push/deploy
# domain.cpus = 1
# To use the self-updating deployment system you need more memory. Giving
# all of the VMs enough memory for this is rather taxing, though, and the
# self-updating deployment system is not particularly useful for local
# dev. But should you want to:
#
# domain.memory = 4096
#
# Meanwhile, 1024 was apparently the default with VirtualBox
domain.memory = 1024
# Using a specific pool may help to manage the disk space
domain.storage_pool_name = pool_name
domain.snapshot_pool_name = pool_name
# No need for graphics - better to use the serial console
domain.graphics_type = "none"
domain.video_type = "none"
end
config.vm.define "payments.localdev" do |config|
config.vm.hostname = "payments"
# Assign a static IP address on the host-only (Vagrant calls it
# "private") network. The address must be in the range VirtualBox
# allows.
# https://www.virtualbox.org/manual/ch06.html#network_hostonly says some
# things about this.
config.vm.network "private_network", ip: "192.168.56.21"
# Add the self-signed SSL key for the zkap-issuer:
config.vm.provision "file", source: "private-keys/payments-localdev-ssl", destination: "/tmp/payments-localdev-ssl"
config.vm.provision "shell", inline: "sudo mkdir -p /var/lib/letsencrypt/live/payments.localdev/"
config.vm.provision "shell", inline: "sudo mv /tmp/payments-localdev-ssl/* /var/lib/letsencrypt/live/payments.localdev/"
end
config.vm.define "storage1.localdev" do |config|
config.vm.hostname = "storage1"
config.vm.network "private_network", ip: "192.168.56.22"
end
config.vm.define "storage2.localdev" do |config|
config.vm.hostname = "storage2"
config.vm.network "private_network", ip: "192.168.56.23"
end
config.vm.define "monitoring.localdev" do |config|
config.vm.hostname = "monitoring"
config.vm.network "private_network", ip: "192.168.56.24"
end
# To make the VMs assign the static IPs to the network interfaces we need a rebuild.
# The NixOS snippet written here is shown reformatted after this Vagrantfile.
## Rename to 'nix.settings.trusted-users' after 20.09 or so:
config.vm.provision "shell",
inline: "echo '{ nix.trustedUsers = [ \"@wheel\" \"root\" \"vagrant\" ]; boot.kernelParams = [ \"console=tty0\" \"console=ttyS0,115200\" ]; }' > /etc/nixos/custom-configuration.nix"
config.vm.provision "shell", inline: "nixos-rebuild switch"
config.vm.provision "shell", inline: "systemctl stop firewall.service"
config.vm.provision "shell", inline: "systemctl start serial-getty@ttyS0.service"
config.trigger.after :up do |trigger|
trigger.info = "Hostname and IP address this host actually uses:"
trigger.run_remote = {inline: "echo `hostname` `ifconfig | egrep -o '192.168.56.[0-9]* '`"}
end
end
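For readability, the one-line NixOS snippet written by the inline provisioner above is equivalent to the following ``/etc/nixos/custom-configuration.nix`` (newer NixOS releases spell the first option ``nix.settings.trusted-users``)::

  # Written by the "shell" provisioner above; reformatted here for readability.
  {
    nix.trustedUsers = [ "@wheel" "root" "vagrant" ];
    boot.kernelParams = [ "console=tty0" "console=ttyS0,115200" ];
  }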
{ "domain": "localdev"
, "publicStoragePort": 8898
, "publicKeyPath": "./public-keys"
, "privateKeyPath": "./private-keys"
, "monitoringvpnEndpoint": "192.168.56.24:51820"
, "passValue": 1000000
, "issuerDomains": ["payments.localdev"]
, "monitoringDomains": ["monitoring.localdev"]
, "letsEncryptAdminEmail": "florian@privatestorage.io"
, "allowedChargeOrigins": [
"http://localhost:5000"
]
, "monitoringGoogleOAuthClientID": ""
}
let
gridlib = import ../../lib;
grid-config = builtins.fromJSON (builtins.readFile ./config.json);
ssh-users = let
ssh-users-file = ./public-keys/users.nix;
in
if builtins.pathExists ssh-users-file then
import ssh-users-file
else
# Use builtins.toString so that nix does not add the file
# to the nix store before including it in the string.
throw ''
ssh-keys for local grid are not configured.
Refusing to build a possibly inaccessible configuration.
Please create ${builtins.toString ssh-users-file} before building.
See ${builtins.toString ./README.rst} for more information.
'';
# Module with per-grid configuration
grid-module = {config, ...}: {
imports = [
gridlib.base
# Allow us to remotely trigger updates to this system.
../../../nixos/modules/deployment.nix
# Give it a good SSH configuration.
../../../nixos/modules/ssh.nix
# Configure things specific to the virtualisation environment.
gridlib.hardware-vagrant
];
services.private-storage.sshUsers = ssh-users;
# Include the ssh-users config in a form that can be read by nix,
# so the self-update deployment system can access it.
# nixos/modules/update-deployment imports the nix file into
# the checkout of this repository it creates.
environment.etc."nixos/ssh-users.json" = {
# Output the loaded value, rather than just copying the file, in case the
# file has external references.
mode = "0666";
text = builtins.toJSON ssh-users;
};
environment.etc."nixos/ssh-users.nix" = {
# This is the file that is imported by update-deployment.
# We don't directly read the JSON so that the script doesn't
# depend on the format we use.
mode = "0666";
text = ''
# Include the ssh-users config
builtins.fromJSON (builtins.readFile ./ssh-users.json)
'';
};
networking.domain = grid-config.domain;
# Convert relative paths to absolute so library code can resolve names
# correctly.
grid = {
publicKeyPath = toString ./. + "/${grid-config.publicKeyPath}";
privateKeyPath = toString ./. + "/${grid-config.privateKeyPath}";
inherit (grid-config) monitoringvpnEndpoint letsEncryptAdminEmail;
};
# Configure deployment management authorization for all systems in the grid.
services.private-storage.deployment = {
authorizedKey = builtins.readFile "${config.grid.publicKeyPath}/deploy_key.pub";
gridName = "local";
};
};
payments = {
imports = [
gridlib.issuer
grid-module
];
config = {
grid.monitoringvpnIPv4 = "172.23.23.11";
grid.publicIPv4 = "192.168.56.21";
grid.issuer = {
inherit (grid-config) issuerDomains allowedChargeOrigins;
};
};
};
storage1 = {
imports = [
gridlib.storage
grid-module
];
config = {
grid.monitoringvpnIPv4 = "172.23.23.12";
grid.publicIPv4 = "192.168.56.22";
grid.storage = {
inherit (grid-config) passValue publicStoragePort;
};
system.stateVersion = "19.09";
};
};
storage2 = {
imports = [
gridlib.storage
grid-module
];
config = {
grid.monitoringvpnIPv4 = "172.23.23.13";
grid.publicIPv4 = "192.168.56.23";
grid.storage = {
inherit (grid-config) passValue publicStoragePort;
};
system.stateVersion = "19.09";
};
};
monitoring = {
imports = [
gridlib.monitoring
grid-module
];
config = {
grid.monitoringvpnIPv4 = "172.23.23.1";
grid.publicIPv4 = "192.168.56.24";
grid.monitoring = {
inherit paymentExporterTargets blackboxExporterHttpsTargets;
inherit (grid-config) monitoringDomains;
googleOAuthClientID = grid-config.monitoringGoogleOAuthClientID;
enableZulipAlert = false;
};
system.stateVersion = "19.09";
};
};
# TBD: derive these automatically (a possible helper is sketched after this file):
paymentExporterTargets = [ "payments.monitoringvpn" ];
blackboxExporterHttpsTargets = [
# "https://private.storage/"
# "https://payments.private.storage/"
];
in {
network = {
description = "PrivateStorage.io LocalDev Grid";
inherit (gridlib) pkgs;
};
inherit payments monitoring storage1 storage2;
}
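The ``TBD`` comment above asks for the exporter targets to be derived rather than listed by hand. One possible sketch; the ``mkVpnTargets`` helper is hypothetical and not part of the current code::

  let
    # Hypothetical helper: turn node names into monitoring VPN addresses.
    mkVpnTargets = names: map (name: "${name}.monitoringvpn") names;
  in {
    # mkVpnTargets [ "payments" ]  ==>  [ "payments.monitoringvpn" ]
    paymentExporterTargets = mkVpnTargets [ "payments" ];
  }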
Deployment Secrets
==================
Deploying PrivateStorageio requires certain secrets.
For the localdev grid these secrets are kept in this (public) directory.
This is intended to help make it as easy as possible to launch a local deployment.
It also serves as an example of what secrets are required for any other deployment.
You can find more information about some of these secrets in ``ops/generating-keys.rst``.
deploy_key
----------
This SSH private key authenticates an account used by the continuous deployment system.
Each node authorizes that account to trigger a deployment update on itself.
The corresponding SSH public key is kept in the ``public-keys`` location.
grafana-admin.password
----------------------
This is the initial password for the Grafana admin user on the monitoring host.
grafana-slack-url
-----------------
This file is read by Grafana's systemd service to set an environment variable with a secret Slack WebHook URL to post alerts to.
The only line in the file should be the secret URL.
Use the URL from `this 1Password entry <https://privatestorage.1password.com/vaults/7flqasy5hhhmlbtp5qozd3j4ga/allitems/cgznskz2oix2tyx5xyntwaos5i>`_ or get a new secret URL for your Slack channel at https://www.slack.com/apps/A0F7XDUAZ.
grafana-zulip-url
-----------------
This file should contain a single line with the secret Zulip alerting Webhook Bot URL.
The URLs for Staging and Production are both stored in 1Password.
See https://zulip.com/integrations/doc/grafana for documentation and ``grid/local/private-keys/grafana-zulip-url`` for an example.
stripe.secret
-------------
This is the Stripe secret key which the payment server uses to finalize payment processing using Stripe.
The corresponding Stripe public key is kept in the ``public-keys`` location.
ristretto.signing-key
---------------------
This is the Ristretto-group private key used by the ZKAP issuer.
monitoringvpn
-------------
This directory holds WireGuard private keys for each of the hosts so they can participate in the deployment VPN.
The corresponding public keys are kept in the ``public-keys`` location.
payments-localdev-ssl
---------------------
This secret is *only* present for the localdev grid.
It contains a self-signed TLS certificate and private key for the payment server.
Other deployments will automatically generate a key and obtain a certificate from Let's Encrypt.
The most interesting passphrase in the world.
-----BEGIN OPENSSH PRIVATE KEY-----
ratatatratatatratatatratatatratatatratatatratatatratatatratatatratatat
ratatatratatatratatatratatatratatatratatatratatatratatatratatatratatat
ratatatratatatratatatratatatratatatratatatratatatratatatratatatratatat
ratatatratatatratatatratatatratatatratatatratatatratatatratatatratatat
ratatatratatatratatatratatatratatatratatatc=
-----END OPENSSH PRIVATE KEY-----
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACADU1IBThyH0blWBG8afIA/h/bmVdUQkFAuAIQgAWE+ewAAAJjsA+8c7APv
HAAAAAtzc2gtZWQyNTUxOQAAACADU1IBThyH0blWBG8afIA/h/bmVdUQkFAuAIQgAWE+ew
AAAED6aLiQi/K2qG8sLsvV8Xar9PjJeFxKfb+GUvmseu8TqQNTUgFOHIfRuVYEbxp8gD+H
9uZV1RCQUC4AhCABYT57AAAADmV4YXJrdW5AYmFyeW9uAQIDBAUGBw==
-----END OPENSSH PRIVATE KEY-----
Naht3Pha
https://hooks.slack.com/services/x/y/z
https://yourZulipDomain.zulipchat.com/api/v1/external/grafana?api_key=abcdefgh&stream=stream%20name&topic=your%20topic
cLP62YAYoA7FY+OhSLR64DIHekOjGGQlfJAWp5cYP00=
qFjBtvJKBchzl2HwFvEDoe3zFzyc10osiRlP8HOk2n0=
8HlKTvxZBAZeww6JaNk9kBPjSfT0pVMbDJbzV67yUGE=
E7KTLVnWMmP/mIEkU8WX2DBZJaeMS2+sYArRZuGT1o4=
iOp2pk2HWyNgRnke7nJgFwodkTWMyHRIKwe8pk+bN3M=
-----BEGIN CERTIFICATE-----
MIIDGTCCAgGgAwIBAgIUM9YnOMe2yYQuCLpYiI2TpZg21AMwDQYJKoZIhvcNAQEL
BQAwHDEaMBgGA1UEAwwRcGF5bWVudHMubG9jYWxkZXYwHhcNMjEwNjA5MTQzNzU4
WhcNMzEwNjA3MTQzNzU4WjAcMRowGAYDVQQDDBFwYXltZW50cy5sb2NhbGRldjCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOBCvkMprdRjD8ECeweMVTFT
pILu5VA8HC8XK15Y4iE8G48cSYE3m/e6w5g4JvfXgED9eY+1/ZazQXVI6sojnfHL
Vj9XqsUww3IVtq2Zn+0B8YTrbNxcDH77mUx0mUziAm1bglmoNbd4+q0Fe8FhK/EN
Jj7UppRjD2ziV9Mw7UT1JjMbfSo7/7ZOV2cWk+hezOywsE+3BM6mju1aHbbYryAd
EzRwfQKcI/9PC84Bck9e+tEiWQImBf8DguQ+ChuSOmGtP1rI+hq5HfCimH0NCCNS
iUfNVcubz920FRBeAx9oM0G/u4feeS8T5a1PThZrdhjIycEmU5wsM7F34RzkuEkC
AwEAAaNTMFEwHQYDVR0OBBYEFAbBfAFOlx4c53CNgsRlobFwUIBbMB8GA1UdIwQY
MBaAFAbBfAFOlx4c53CNgsRlobFwUIBbMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZI
hvcNAQELBQADggEBACD+47l30gGaZBPvuCMc/CENLmuqZqj7WGGmWcmUAzsdEgHt
Q1zhj4un4u4qQ78jeT+cjqNK9qartQIjxzYTpn6lKDBpM9sS2G7RdLxOWWRVAXQj
Cv6l0tEQArIqqMnzmECe0VDCc97kTg6tnMLFeyQL29XSSNIpS0DiQ+NC69fyrF2E
vt2bAi+QTlN1U1ZGDJqWlwqkpI3xTRDJnmRPuVNrt07gkMWeSJHEI98Qcve7Ujf3
hFcAnbshJ8iXxbNjTFIQcCJzHq1UF+eZo8BF1ixYmrk/jmaLNBOSy93tOk1XeJfS
XtV6X9jmAE20V0XV8uCli6cDxQQYthll/1q5JYQ=
-----END CERTIFICATE-----