Infrastructure - Installation

Overview

Infrastructure overview

The administrator's machine is referred to as BASTION in the rest of this installation manual.

Bastion requirements

  • miniforge
  • git
  • jq
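The steps below assume these tools are available on the bastion. A minimal sketch to verify them up front (the `check_tool` helper is hypothetical, not part of the repository; `conda` is provided by miniforge):

```shell
# Hypothetical helper: report whether a required tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

# Required on the bastion (conda comes with miniforge):
for tool in conda git jq; do
  check_tool "$tool"
done
```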

Dependencies

Terraform

This project uses Terraform to deploy the infrastructure on the cloud provider.
Full documentation and configuration options are available on its page: https://www.terraform.io/

Kubespray

This project uses Kubespray to deploy Kubernetes.
Full documentation and configuration options are available on its page: https://kubespray.io/

HashiCorp Vault (optional)

This project can integrate credentials from a custom HashiCorp Vault instance, see the specific documentation: how to/Credentials

Openstack CLI

This project uses the Openstack CLI to manage the state of the infrastructure on the cloud provider.
Full documentation and configuration options are available on its page: https://docs.openstack.org/newton/user-guide/cli.html

Quickstart

1. Get the rs-infrastructure repository

git clone https://github.com/RS-PYTHON/rs-infrastructure.git
cd rs-infrastructure

2. Install requirements

# Install miniforge
mkdir -p ~/miniforge3
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh" -O ~/miniforge3/miniforge.sh
bash ~/miniforge3/miniforge.sh -b -u -p ~/miniforge3
rm -f ~/miniforge3/miniforge.sh

# Init conda depending on your shell
~/miniforge3/bin/conda init bash
~/miniforge3/bin/conda init zsh

# Create conda env with python=3.11 and activate it
conda create -y -n rspy python=3.11
conda activate rspy

# Install Ansible, Terraform, Openstackclient
conda install conda-forge::ansible
conda install conda-forge::terraform
conda install conda-forge::python-openstackclient
conda install conda-forge::passlib

# Init Kubespray collection with remote
git submodule update --init --remote

pip install -U pyOpenSSL ecdsa -r collections/kubespray/requirements.txt

ansible-galaxy collection install \
    kubernetes.core \
    openstack.cloud

3. Copy the sample inventory

cp -rfp inventory/sample inventory/mycluster

4. Review and change the default configuration to match your needs

cp -rfp roles/terraform/create-cluster/tasks/.env.template roles/terraform/create-cluster/tasks/.env

Copy openrc.sh.template to openrc.sh, then review and change the following values to match your configuration:

cp -rfp inventory/mycluster/openrc.sh.template inventory/mycluster/openrc.sh
  • Credentials, domain name, the stash license, S3 endpoints in inventory/mycluster/host_vars/setup/main.yaml
  • Credentials in roles/terraform/create-cluster/tasks/.env
  • Credentials, domain name in inventory/mycluster/openrc.sh
  • Node groups, Network sizing, S3 buckets in inventory/mycluster/cluster.tfvars
  • S3 backend for terraform in inventory/mycluster/backend.tfvars
  • Optimizations for well-known zones and/or internal-only domains (e.g. VPN/object storage for internal networks) in inventory/mycluster/host_vars/setup/kubespray.yaml
  • Values for custom parameters in inventory/mycluster/host_vars/setup/apps.yml
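Before moving on, a quick sketch to confirm the configuration files listed above exist after copying the sample inventory and the templates (the `require_file` helper is hypothetical):

```shell
# Hypothetical pre-flight check: each configuration file from the list above
# should exist after copying the sample inventory and templates.
require_file() {
  if [ -f "$1" ]; then echo "OK: $1"; else echo "MISSING: $1"; fi
}

INVENTORY="inventory/mycluster"
for f in "$INVENTORY/openrc.sh" \
         "$INVENTORY/cluster.tfvars" \
         "$INVENTORY/backend.tfvars" \
         "$INVENTORY/host_vars/setup/main.yaml" \
         "$INVENTORY/host_vars/setup/kubespray.yaml" \
         "$INVENTORY/host_vars/setup/apps.yml" \
         "roles/terraform/create-cluster/tasks/.env"; do
  require_file "$f"
done
```

Any "MISSING" line means a copy step above was skipped or the path differs in your checkout.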

Private container registry

As the RS-PYTHON/rs-infrastructure repository is private for now, the custom Docker image for JupyterHub is also private. You need to generate a token in order to set up the Docker config secrets for the private ghcr.io registry. Follow the how-to: GitHub Container Registry.

ansible-playbook generate_inventory.yaml \
    -i inventory/mycluster/hosts.yaml
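The token from the GitHub Container Registry how-to ends up in a Docker config payload. A sketch of what that payload looks like for ghcr.io (GHCR_USER and GHCR_TOKEN are placeholders for your GitHub username and personal access token):

```shell
# Sketch: build the .dockerconfigjson payload Kubernetes expects for the
# private ghcr.io registry. GHCR_USER and GHCR_TOKEN are placeholders.
GHCR_USER="your-github-username"
GHCR_TOKEN="ghp_your_token_here"
AUTH=$(printf '%s:%s' "$GHCR_USER" "$GHCR_TOKEN" | base64 | tr -d '\n')
printf '{"auths":{"ghcr.io":{"auth":"%s"}}}\n' "$AUTH" > dockerconfig.json
```

The "auth" field is simply `user:token` base64-encoded; the how-to describes where this secret is consumed.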

5. Create and configure machines

ansible-playbook cluster.yaml \
    -i inventory/mycluster/hosts.yaml

Note: DNS configuration

At this point, you should configure your domain name to point to the master node's IP listed in the inventory/mycluster/hosts.yaml file.
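A small sketch to check the record once it is set (DOMAIN and MASTER_IP are placeholders; take MASTER_IP from inventory/mycluster/hosts.yaml):

```shell
# Hypothetical check: confirm the domain resolves to the master node's IP.
resolve_ip() {
  getent hosts "$1" | awk '{print $1; exit}'
}

DOMAIN="rspy.example.com"     # placeholder: your domain name
MASTER_IP="203.0.113.10"      # placeholder: master IP from hosts.yaml
if [ "$(resolve_ip "$DOMAIN")" = "$MASTER_IP" ]; then
  echo "DNS OK"
else
  echo "DNS not (yet) pointing at $MASTER_IP -- records may still be propagating"
fi
```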

6. Deploy Kubernetes with kubespray

ansible-playbook kubernetes.yaml \
    -i inventory/mycluster/hosts.yaml
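Once the playbook completes, it is worth confirming every node joined the cluster and is Ready. A sketch, assuming kubectl and the new cluster's kubeconfig are available on the bastion (`summarize_nodes` is a hypothetical helper, not part of the repository):

```shell
# Reads `kubectl get nodes --no-headers` output on stdin and flags any node
# whose STATUS column is not "Ready".
summarize_nodes() {
  awk '$2 != "Ready" {bad = 1; print "NotReady: " $1}
       END {if (!bad) print "all nodes Ready"}'
}

# Usage (assumes kubectl is configured against the new cluster):
#   kubectl get nodes --no-headers | summarize_nodes
```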

7. Deploy the apps

Disclaimer: For Wazuh Server installation

See "A. Pre-Install 1. Enable Bcrypt encryption for installation" in the Wazuh-Server_Install guide and update the encrypt.py library before deploying the apps.

ansible-playbook apps.yaml \
    -i inventory/mycluster/hosts.yaml

Disclaimer: For Prefect-Worker post-configuration

See "2. Set Concurrency Limit on workpool on-demand-k8s-pool" in the Prefect-Worker guide after deploying the app.

Disclaimer: For Wazuh Server installation

See "B. Post-Install: Apply modifications set during installation process (new credentials and SSO)" in the Wazuh-Server_Install guide and execute the scripts after deploying the app.

Disclaimer: For Neuvector post-configuration

See "Enable SSO" in the Neuvector guide after deploying the app.

8. Deploy the rs-server

ansible-playbook rs-server.yaml \
    -i inventory/mycluster/hosts.yaml

Copyright and license

The Reference System Software as a whole is distributed under the Apache License, version 2.0. A copy of this license is available in the LICENSE file. The Reference System Software depends on third-party components and code snippets released under their own licenses (all compatible with the Apache License, version 2.0). These dependencies are listed in the NOTICE file.



This project is funded by the EU and ESA.