This folder holds files that are deployed in our infrastructure.
Unless stated otherwise, these files are manually deployed.
NGINX config file describing the CI reports web server. Deployed at dev.nunet.io.
Cronjob file to remove reports older than 90 days. Deployed at dev.nunet.io.
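A minimal sketch of such a cron entry (the schedule and the report path are assumptions, not taken from the deployed file):

```
# /etc/cron.d-style entry; path and schedule are assumptions
0 3 * * * root find /var/www/reports -type f -mtime +90 -delete
```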
This file is deployed on the same server that the URL's IP points to. On the server, the file will differ because certbot manages the certificate there.
This project leverages Terraform to provision LXD instances where multiple DMS executions will reside.
The ultimate goal is to have a generic and flexible provisioning standard for setting up DMS clusters wherever there is access to the LXD API.
LXD API enabled on the target hosts. See HOW-TO: expose LXD to the network.
For legacy versions, you can refer to Directly interacting with the LXD API. Note that this authentication method has been removed in recent versions of LXD.
Dependencies:
- LXD CLI, which must provide the `lxc` command-line interface.
- DMS Deb file. To download the latest DMS release, refer to the DMS installation guide.
- `yq`, a `jq` wrapper.
This project can alternatively be run using the provided `Dockerfile`.
If using Docker:
- build the image
- run the image
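A minimal sketch of both steps, assuming the image is tagged `dms-lxd` (the tag is an assumption):

```bash
# build the image from the repository root (tag name is an assumption)
docker build -t dms-lxd .

# run it interactively
docker run -it --rm dms-lxd
```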
You can use the `Dockerfile` as a complete reference for all the dependencies that are expected to be present in order to execute this project.
You can run all commands through Docker after building the image:
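For example (the image tag and the project mount point are assumptions):

```bash
docker run -it --rm \
  -v "$PWD":/project \
  -v ~/.ssh:/root/.ssh:ro \
  dms-lxd bash make.sh
```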
The `-v ~/.ssh:/root/.ssh:ro` mount is optional but useful if you need SSH access to the host machine.
First copy the configuration dist file:
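For example (the dist filename is an assumption; use the one shipped with the repository):

```bash
cp config.dist.yml config.yml
```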
Then modify the values accordingly:
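A sketch of the kind of values to adjust; apart from `dms_instances_count` and the nebula users section mentioned below, the key names here are hypothetical:

```yaml
# hypothetical key: the LXD servers to provision on
lxd_hosts:
  - 10.0.1.10
  - 10.0.1.11
```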
If desired, you can customize the number of DMS deployments by adding `dms_instances_count` to the `config.yml` file:
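```yaml
dms_instances_count: 2  # example value
```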
If omitted, one DMS instance per LXD host is deployed by default.
An SSH key called `lxd-key` (and `lxd-key.pub`) is created and used for the deployment of the instances. If you want to override the key, just add to `terraform.tfvars`:
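A sketch; the exact variable name is not preserved here, so check `variables.tf` for the real one:

```hcl
# hypothetical variable name; see variables.tf for the actual one
ssh_key_name = "my-custom-key"
```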
The default Terraform variables can be seen in `variables.tf`. Customizing their default values is optional.
To customize variables, for instance the DMS Deb file path `dms_deb_filepath`, add a line like the following to `terraform.tfvars` (create the file if it doesn't exist):
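For example (the path is an example value; point it at the Deb you downloaded):

```hcl
dms_deb_filepath = "./dms_amd64.deb"
```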
For a complete list of variables, check the file `variables.tf`.
This project also supports using nunet/nebula, a project based on slackhq/nebula.
Note that the nunet project is private as of the time of writing this document.
To enable the use of nebula, add to the `terraform.tfvars`:
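A sketch; the exact flag name is an assumption, so check `variables.tf` for the real one:

```hcl
# hypothetical flag name; see variables.tf for the actual one
enable_nebula = true
```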
And provide the necessary nebula users with their respective associated IPs, adding them to the `config.yml` file:
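A sketch; the exact layout of the users section is an assumption:

```yaml
# hypothetical layout: one nebula user per expected DMS instance
nebula_users:
  - name: user1
    ip: 192.168.100.1/24
  - name: user2
    ip: 192.168.100.2/24
```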
Note that you must provide at least as many users as the number of DMS instances to be deployed; otherwise the execution will fail.
NOTE: If using Docker, run these commands inside the container.
Spin up the cluster using `bash make.sh`. NOTE: `make` isn't actually used for the deployment.
Use `lxd_vm_addresses.txt` to connect to and execute code on the remote instances:
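For example (the login user is an assumption; check the Terraform config for the one actually provisioned):

```bash
# connect to the first provisioned instance
ssh -i lxd-key root@"$(head -n 1 lxd_vm_addresses.txt)"
```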
When done, destroy the infrastructure using `bash destroy.sh`.
The following files are produced after running this project with `make.sh`.
This script is a helper that adds the LXD remote servers to your local LXD client, making it easier to manage the remote instances.
It looks something like this:
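A sketch of its contents (host names and addresses are placeholders):

```bash
#!/bin/bash
lxc remote add host1 https://10.0.1.10:8443 --accept-certificate
lxc remote add host2 https://10.0.1.11:8443 --accept-certificate
```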
Upon execution, the remotes are added to your local machine. You can then list the virtual machines in each remote:
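For example:

```bash
lxc list host1:
```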
You can then terminate instances at will, for instance if the OpenTofu component enters an inconsistent state while using this project:
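For example (the remote and instance names are placeholders):

```bash
lxc delete --force host1:dms-instance-1
```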
This is a list of hosts that have been tested and are reachable from the machine where this project is being executed:
This is a list of unreachable hosts that failed to respond during testing:
This is a list of the IPv4 addresses available for connection after provisioning the infrastructure. It is a simple file with one IP per line, which can easily be iterated over using Bash, Python, or any other language.
If nebula is enabled, these IPs are replaced with the internal nebula IPs assigned to each VM:
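For example (the addresses are illustrative):

```
192.168.100.1
192.168.100.2
```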
To iterate over the list and connect over SSH using Bash:
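A minimal loop (the login user is an assumption):

```bash
# run a command on every provisioned instance;
# -n stops ssh from consuming the loop's stdin
while read -r ip; do
  ssh -n -i lxd-key root@"$ip" 'hostname'
done < lxd_vm_addresses.txt
```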
For processing the file in a script, for example in Python:
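A minimal sketch:

```python
# collect the provisioned addresses for further processing
with open("lxd_vm_addresses.txt") as f:
    addresses = [line.strip() for line in f if line.strip()]

for ip in addresses:
    print(ip)
```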
There are cases where Docker prevents LXD instances from communicating with the internet consistently. The issue manifests itself in a scenario where the user can upgrade and install packages with APT, but anything else will halt indefinitely.
To overcome this, add the following rules to `iptables` (using `sudo` whenever necessary):
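A sketch based on the commonly documented fix for Docker's `DOCKER-USER` chain dropping bridged traffic; `lxdbr0` is an assumption for the LXD bridge name:

```bash
# allow traffic to and from the LXD bridge past Docker's firewall rules
sudo iptables -I DOCKER-USER -i lxdbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o lxdbr0 -j ACCEPT
```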
NOTE: in the current state, `virtual-machine` and `container` instance types work with the same Terraform file, at the expense of the DMS installation running asynchronously. Beware, therefore, that Terraform will return successfully while code is still running inside the LXD instances. Check either `/var/log/cloud-init-output.log` or `/var/log/init.log` inside each LXD instance to confirm whether the installation finished successfully.
`virtual-machine` wasn't working before because the `file` block in `lxd_instance` resources expects to be able to provision files while the LXD instance is still in a `stopped` state. This works for containers because of the nature of their filesystem (overlayfs or similar), but not for virtual machines, whose filesystem isn't accessible as a direct folder. Using the `lxd_instance_file` resource, which uploads a file once the instance is up and running, solves that issue. However, `exec` blocks in the `lxd_instance` resource, which run synchronously with Terraform, won't work with `lxd_instance_file` if they depend on the file, because their execution can't be deferred until the file is provisioned. Therefore we have to leverage `cloud-init`'s `runcmd` for that, which runs in the background after Terraform returns.
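A sketch of the `runcmd` approach described above (the exact install steps and file paths are assumptions, except `/var/log/init.log`, which is mentioned earlier):

```yaml
#cloud-config
runcmd:
  # runs in the background after boot, after Terraform has already returned
  - dpkg -i /root/dms.deb >> /var/log/init.log 2>&1
```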
This part of the documentation is generated automatically from the Terraform code using terraform-docs.
To update it, run:
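For example (flags may need adjusting to the project's terraform-docs setup):

```bash
terraform-docs markdown table . --output-file README.md
```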
Requirements:

| Name | Version |
|------|---------|
|      | 2.3.0   |
|      | 2.5.2   |

Providers:

| Name | Version |
|------|---------|
|      | 2.3.0   |

Modules:

No modules.

Resources:

| Name | Type |
|------|------|
|      | resource |
|      | resource |
|      | resource |
|      | resource |
|      | resource |
|      | resource |

Inputs:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
|      | Name of the environment to draw configuration from | `string` | `null` | no |
|      | Tells terraform whether lxd has been installed via snap | `string` | `"other"` | no |

Outputs:

| Name | Description |
|------|-------------|
|      | n/a |
|      | n/a |
|      | n/a |