Virtual Environments: Servers

There are two predefined virtual environments in the Vagrantfile: staging and prod.

This document explains the purpose of, and how to get started working with, each one.

Note

If you plan to alter the configuration of any of these machines, make sure to review the Testing: Configuration Tests documentation.

Note

If you have errors with mounting shared folders in the Vagrant guest machine, you should look at GitHub #1381.

Note

If you see test failures due to Too many levels of symbolic links and you are using VirtualBox, try restarting VirtualBox.

Staging

The staging environment is a compromise between the development and production environments. It can be thought of as identical to the production environment, with a few exceptions:

  • The Debian packages are built from your local copy of the code, instead of installing the current stable release packages from https://apt.freedom.press.
  • The staging environment is configured for direct SSH access so it’s more ergonomic for developers to interact with the system during debugging.
  • The Postfix service is disabled, so OSSEC alerts will not be sent via email.

This is a convenient environment to test how changes work across the full stack.

First, build the app code Debian packages (this uses the build VM) and bring up the staging machines:

make build-debs         # build the app code Debian packages locally
vagrant up /staging/    # bring up the staging VMs (app-staging and mon-staging)
vagrant ssh app-staging
sudo su
cd /var/www/securedrop
./manage.py add-admin   # create an admin account for the Journalist Interface
pytest -v tests/        # run the application test suite

To rebuild the local packages for the app code and update them on the staging servers:

make build-debs
vagrant up /staging/
vagrant provision

The Debian packages will be rebuilt from the current state of your local git repository and then installed on the staging servers.
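
To confirm that the updated packages landed, you can check what is installed on the staging server (a quick sanity check; the grep pattern is an assumption about the package names):

vagrant ssh app-staging -c "dpkg -l | grep securedrop"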

Note

If you are using macOS and you run into errors from Ansible such as OSError: [Errno 24] Too many open files, you may need to increase the maximum number of open files. Some guides online suggest a procedure to do this that involves booting into recovery mode and turning off System Integrity Protection (csrutil disable). However, this is a critical security feature and should not be disabled. Instead, follow this procedure to increase the file limit.

Create /Library/LaunchDaemons/limit.maxfiles.plist with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
    <dict>
      <key>Label</key>
        <string>limit.maxfiles</string>
      <key>ProgramArguments</key>
        <array>
          <string>launchctl</string>
          <string>limit</string>
          <string>maxfiles</string>
          <string>65536</string>
          <string>65536</string>
        </array>
      <key>RunAtLoad</key>
        <true/>
      <key>ServiceIPC</key>
        <false/>
    </dict>
  </plist>

The plist file should be owned by root:wheel:

sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist

This will increase the maximum open file limits system-wide on macOS (last tested on 10.11.6).
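
The new limits take effect after a reboot; you should also be able to apply and verify them immediately (a sketch using standard launchctl commands):

sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
launchctl limit maxfiles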

The web interfaces and SSH are available over Tor. The Onion URLs for the Source and Journalist Interfaces are written to the Vagrant host’s install_files/ansible-base directory, named:

  • app-source-ths
  • app-journalist-aths
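
For example, to print the Source Interface Onion URL from the host:

cat install_files/ansible-base/app-source-ths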

For working on OSSEC monitoring rules with most system hardening active, update the OSSEC-related configuration in install_files/ansible-base/staging-specific.yml so you receive the OSSEC alert emails.
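
The relevant keys live in that file; as an illustrative sketch only (these variable names are assumptions, so check staging-specific.yml for the exact ones):

ossec_alert_email: "admin@example.com"  # where OSSEC alerts are delivered
smtp_relay: "smtp.example.com"          # mail relay used to send the alerts
smtp_relay_port: 587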

Direct SSH access is available via Vagrant for staging hosts, so you can use vagrant ssh app-staging and vagrant ssh mon-staging to start an interactive session on either server.

Production

This is a production installation with all of the system hardening active, but virtualized, rather than running on hardware. You will need to configure prod-like secrets, or export ANSIBLE_ARGS="--skip-tags validate" to skip the tasks that prevent the prod playbook from running with Vagrant-specific info.
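
For example, to skip the validation tasks before provisioning from your host:

export ANSIBLE_ARGS="--skip-tags validate"
vagrant up /prod/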

You can provision production VMs from an Admin Workstation (most realistic), or from your host. Instructions for both follow.

Install from an Admin Workstation VM

In SecureDrop, admin tasks are performed from a Tails Admin Workstation. To install the SecureDrop production VMs, first configure a Tails VM by following the instructions in the Virtualizing Tails guide.

Once you’ve prepared the Admin Workstation, you can start each VM:

vagrant up --no-provision /prod/

At this point you should be able to SSH into both app-prod and mon-prod. From here you can follow the server configuration instructions to test connectivity and prepare the servers. These instructions will have you generate SSH keys and use ssh-copy-id to transfer the key onto the servers.
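
For example, a sketch using the default vagrant user and the app/mon IPs configured later in this document:

# Generate a key pair on the Admin Workstation
ssh-keygen -t rsa -b 4096

# Copy the public key onto each server (the default user is vagrant)
ssh-copy-id vagrant@10.0.1.4   # app-prod
ssh-copy-id vagrant@10.0.1.5   # mon-prod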

Note

If you have trouble SSHing to the servers from Ansible, remember to remove any old ATHS files in install_files/ansible-base.

Now, from your Admin Workstation:

cd ~/Persistent/securedrop
./securedrop-admin setup      # install the admin environment’s dependencies
./securedrop-admin sdconfig   # configure the instance (writes site-specific)
./securedrop-admin install    # run the installation against the servers

Note

The default sudo password for the app-prod and mon-prod servers is vagrant.

After the install completes, you can configure your Admin Workstation to SSH into each VM via:

./securedrop-admin tailsconfig
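
Once tailsconfig completes, the Admin Workstation should be able to reach each server over Tor with short hostnames (the app/mon aliases are an assumption based on the default tailsconfig setup):

ssh app   # app-prod
ssh mon   # mon-prod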

Install from Host OS

If you are not virtualizing Tails, you can manually modify site-specific and then provision the machines. You should set the following options in site-specific:

ssh_users: "vagrant"
monitor_ip: "10.0.1.5"
monitor_hostname: "mon-prod"
app_hostname: "app-prod"
app_ip: "10.0.1.4"

Note that you will also need to generate Submission and OSSEC PGP public keys and provide email credentials for sending OSSEC alerts. Refer to the documentation on configuring prod-like secrets for more details on those steps.
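
For illustration, such settings sit alongside the options above in site-specific (the key names here are assumptions; the prod-like secrets document has the authoritative list):

securedrop_app_gpg_public_key: "SecureDrop.asc"  # Submission public key
ossec_alert_gpg_public_key: "ossec.pub"          # OSSEC alert public key
ossec_alert_email: "admin@example.com"           # recipient for OSSEC alerts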

To create the prod servers, run:

vagrant up /prod/       # bring up app-prod and mon-prod
vagrant ssh app-prod
sudo su
cd /var/www/securedrop/
./manage.py add-admin   # create an admin account for the Journalist Interface

The Onion URLs for the Source and Journalist Interfaces, as well as for SSH access, are written to the Vagrant host’s install_files/ansible-base directory, named:

  • app-source-ths
  • app-journalist-aths
  • app-ssh-aths
  • mon-ssh-aths

SSH Access

By default, direct SSH access is not enabled in the prod environment. You will need to log in over Tor after initial provisioning, or set enable_ssh_over_tor to false during ./securedrop-admin tailsconfig. See Connecting to VMs via SSH over Tor or Configuring SSH for local access for more info.
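
As a sketch of logging in over Tor from the host, assuming app-ssh-aths contains a standard HidServAuth line (Onion address in the second field):

# Let your local Tor client authenticate to the hidden service
cat install_files/ansible-base/app-ssh-aths | sudo tee -a /etc/tor/torrc
sudo service tor reload

# Connect through Tor as the default vagrant user
torify ssh vagrant@$(awk '{print $2}' install_files/ansible-base/app-ssh-aths)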