ocap_auth
NuNet NuActor OCAPS Authentication Demo
This folder contains the scripts required to spin up a lab demonstrating how tokens can be generated by nodes in NuNet in order to grant specific capabilities. To run the test lab we recommend a vanilla Ubuntu server, so as not to interfere with any other configuration on a server that is already using LXD. We will be using LXD VMs for the various nodes in the network.
Demo Environment
ansible-server (used to build the environment)
Keymanager (used to create keys and capabilities)
Orchestrator 1 (used to deploy workloads)
Orchestrator 2 (used to deploy workloads)
Compute Provider 1, 2, 3 (used to run workloads)
Prerequisites
An Ubuntu server with LXD installed and enough resources to satisfy the configuration that you specify in the variables (num_vms * ram and num_vms * cores should be less than 90% of your server's capacity), plus enough IP addresses on your network for all the VMs to obtain a DHCP lease.
If LXD is not installed you can install it with:
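For example, using snap (the standard method on Ubuntu):

```bash
# Install LXD from the snap store
sudo snap install lxd
```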
create-testlab-and-profile.sh
This script checks for an LXD environment and configures it if one is not present. Specifically, it checks whether LXD has been initialised and, if not, initialises it and configures storage; if the LXD server has already been initialised it will not modify anything. It then creates a custom profile that will be used to launch VMs with the spec defined in the environment variables.
The VMs created will use MACVLAN adapters so that they are allocated IP addresses from your local network rather than from a bridge on your machine. (This gives the machines direct access to the internet without double NAT, if required for distributed testing.) Be aware that you need enough spare addresses in your DHCP scope if you are spinning up a large number of nodes or running on a cloud host.
These are the default variables; make sure they are suitable for your environment. If you have a vanilla Ubuntu server with at least 64 GB of RAM and at least 7 available DHCP leases you should be fine.
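As a rough illustration (the variable names and defaults below are placeholders; the authoritative list is in create-testlab-and-profile.sh):

```bash
# Illustrative defaults only; check create-testlab-and-profile.sh for
# the real variable names and values.
NUM_VMS=7                  # ansible-server, keymanager, 2 orchestrators, 3 compute providers
VM_RAM="8GiB"              # RAM per VM; NUM_VMS * RAM should stay under 90% of the host
VM_CORES=4                 # cores per VM; NUM_VMS * cores should stay under 90% of the host
NETWORK_INTERFACE="eth0"   # host interface with internet access, used for MACVLAN
```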
If you want to access the servers remotely you can add your public SSH key, and the script will add it to the authorized keys on each machine.
Once you have set the environment variables accordingly you can run the script to bring up the environment. Take care to check that the network interface name is correct (it should be the interface that has internet access), as there is no error checking on this yet.
Note: you will be prompted for a public SSH key, which the script will add to all the machines if you want direct remote access over the network.
Once the environment is deployed do the following
You can check the status of the VMs in the environment with:
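```bash
# Show all LXD instances and their IP addresses
lxc list
```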
You can also manage any of the machines directly from a terminal (just change the machine name in the lxc exec command below):
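```bash
# Open an interactive shell on a VM (swap ansible-server for any machine name)
lxc exec ansible-server -- bash
```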
The commands you need to run to configure the machines in the environment are as follows:
Modify create-inventory-yml.sh to reflect your environment.
There is an ansible folder that contains the playbooks and scripts for the Ansible server; these manage the deployment and configuration, and execute distributed commands across the machines in the lab, enabling the exchange of capability keys between participants. This folder will be copied into the Ansible server when it becomes available.
The folder mainly contains playbooks, but there is one script that creates the inventory.yml based on what happened during the lab deployment. There are a few variables in it that you can change to modify the environment.
Change the organization name as required and the passphrases to something unique. The organization_did variable will be populated as part of the lab setup.
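For example (variable names here are illustrative; check the script itself for the real ones):

```bash
# Illustrative variables in create-inventory-yml.sh
organization_name="testorg"          # change to your organization name
organization_passphrase="change-me"  # use something unique
participant_passphrase="change-me"   # use something unique
organization_did=""                  # left empty; populated during lab setup
```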
Run commands on the Ansible server directly via LXD:
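For example, to view the generated inventory (assuming the ansible folder was copied to /root/ansible):

```bash
# Run a one-off command on the Ansible server without opening a shell
lxc exec ansible-server -- cat /root/ansible/inventory.yml
```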
Check that it contains all the nodes with the correct hierarchy.
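The exact hostnames depend on your variables, but the shape should be roughly as follows (a sketch, not verbatim output):

```yaml
all:
  children:
    orchestrators:
      hosts:
        asi-test-vm-2:
        asi-test-vm-3:
    edge_nodes:
      hosts:
        test-vm-4:
        test-vm-5:
        test-vm-6:
```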
To run the lab and configure the environment, run the following command:
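Assuming the wrapper script in the ansible folder is called run-lab.sh (the name is a placeholder; use the actual script shipped in the folder):

```bash
lxc exec ansible-server -- bash /root/ansible/run-lab.sh
```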
This script runs all the playbooks in sequence to create the environment. The sequence is as follows:
1. Run the inventory creation script to generate inventory.yml
2. Run the playbook to check access to all nodes
3. Run the playbook to install NuNet DMS on all nodes
4. Run the playbook to create organization keys dynamically
5. Run the playbook to create participant keys
6. Run the playbook to delegate capabilities to orchestrator nodes
7. Run the playbook to delegate capabilities to edge nodes
8. Run the playbook to start DMS on all nodes
9. Run the playbook to onboard all nodes
Once the lab environment has been configured you can interact with the various nodes as if you were running commands locally. Follow the steps below.
Lab steps
Look at / check the structure of the organization key file
The key files are stored on disk by the DMS but protected by the passphrase.
You can see the keyfile using this command.
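For example (the machine name and key file path here are assumptions; locate the actual file on your install):

```bash
# Pretty-print the organization key file stored by the DMS
lxc exec keymanager -- cat /home/nunet/.nunet/key/organization.json
```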
Check the organization DID, and capabilities
Run the following command to check the organization DID and capabilities. Note you will be prompted for the passphrase for each command, as specified in the inventory.yml.
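A sketch using the DMS CLI (the subcommand and flag names here are assumptions; check `nunet --help` on your build). Run these on the key manager node, e.g. via `lxc exec`:

```bash
# Show the organization DID (will prompt for the passphrase)
nunet key did organization

# List the organization's capability anchors and tokens
nunet cap list --context organization
```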
Note the organization does not have any capabilities listed, meaning it has all capabilities. As soon as one capability is defined, those are the only capabilities the token implies. (TODO: explain this in more detail.)
Check the Organization Admin DID and capabilities
The organization admin is used to delegate rights to other DIDs; this is its sole purpose, and it should be kept secure. Multiple admin accounts can be delegated rights for scaling, with the ability to revoke a particular admin's chain of delegations should there be a compromise.
You can see the capabilities are just on the provide anchor, and the capabilities it has are `"cap":["/dms","/broadcast","/public"]`. This means it can grant any capability beneath these, e.g. /dms/deployment, but it could not grant a capability of, say, /storage or /data, only a sub-capability of the ones it has been delegated by the organization.
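Schematically, the admin's provide anchor carries something like the following (field names are illustrative, not the exact NuNet token schema):

```json
{
  "iss": "did:key:...organization",
  "aud": "did:key:...orgadmin",
  "cap": ["/dms", "/broadcast", "/public"]
}
```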
Check the orchestrator DMS DID and capabilities
While a DMS node can be both a service provider and a compute provider, in this example of a deployment in an organization we can allocate a role to nodes by provisioning the capabilities accordingly. Let's look at the capabilities.
Note we are running this command on asi-test-vm-2, which is in the orchestrators group in the inventory.yml.
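For example (subcommand syntax is an assumption, as above):

```bash
# Inspect the capability anchors on the first orchestrator
lxc exec asi-test-vm-2 -- nunet cap list --context dms
```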
We can see it has both a require and a provide token.
The require token was granted by the organization and grants capabilities of /dms/deployment. This means it requires a token with /dms/deployment or deeper in order for it to respond; in effect, it can run and respond to any deployment tasks.
The provide token was issued by the orgadmin user with the rights to provide or request any capability of /dms/deployment or deeper. This means the orchestrator can request any deployment action from another node that is trusted by the organization.
While this means the orchestrator can in theory request to deploy or be requested to deploy, the limiting factor is whether the organization or organization admin grants the rights (tokens) to do so.
In this scenario the orchestrator nodes are able to run deployments on each other.
You can also see the chain of delegation of the provide token: the token issued by the orgadmin also carries the delegation issued by the organization, thereby proving it has the correctly delegated rights, and the chain is signed by both delegators to prove authenticity. This chain from the org root on the provide side is what makes the require token accept it, since the require token trusts the organization.
Check the compute provider (edge_nodes) capabilities
Let's look at the other side of the delegation. In this scenario we don't want to allow the compute providers to deploy jobs; we just want them to run jobs that are deployed to them. They do, however, need to be able to respond to deployment requests, so we have a slightly more restrictive delegation for them.
Note we are running this command on the first compute provider / edge node.
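For example (syntax is an assumption, as above):

```bash
# Same inspection on the first edge node
lxc exec test-vm-4 -- nunet cap list --context dms
```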
We can see this node also has both a require and a provide token (any machine that will participate in the network needs both).
The require token allows the capabilities of /dms/deployment, meaning it will receive requests to deploy. The provide token has capabilities of /dms/deployment/bid and /dms/deployment/commit, effectively limiting the deployment-related messages it can send: it can bid on or commit to a deployment, but cannot request a deployment (e.g. /dms/deployment/request), as seen in this diagram:
[ensemble_deployment.png](https://gitlab.com/nunet/device-management-service/-/blob/main/dms/jobs/specs/diagrams/ensemble_deployment.png?ref_type=heads)
Tests we can run in this environment
We can confirm the token permissions granted by the organization are working by running the following tests.
Orchestrator Deploys ensemble to an edge node
The following command will run a deployment from the first orchestrator node. It will return a UUID for the deployment.
Copy that UUID and paste it into the status command to return the status of the deployment.
To see more details of the deployment you can view the manifest.
Finally, review the log file to see what happened.
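A sketch of the whole flow (the nunet subcommands, file names, and log path below are placeholders, not verified syntax):

```bash
# 1. Start a deployment from the first orchestrator; prints a deployment UUID
lxc exec asi-test-vm-2 -- nunet deploy new ensemble.yaml   # hypothetical command

# 2. Check the status of the deployment using the UUID from step 1
lxc exec asi-test-vm-2 -- nunet deploy status <uuid>       # hypothetical command

# 3. View the deployment manifest for more detail
lxc exec asi-test-vm-2 -- nunet deploy manifest <uuid>     # hypothetical command

# 4. Review the DMS log (log location is an assumption)
lxc exec asi-test-vm-2 -- tail -n 50 /home/nunet/.nunet/dms.log
```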
Edge Node tries to deploy to another edge node
We can test the limited permissions given to the edge nodes by seeing if they can make a deployment. Run the same sequence as above from an edge node: copy the returned UUID into the status command to check the deployment, then review the log file to see what happened. This deployment should fail, since the edge node's provide token does not allow /dms/deployment/request.
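For example (placeholder syntax, as above):

```bash
# Attempt a deployment from an edge node; this should be rejected because
# the edge node's provide token lacks /dms/deployment/request
lxc exec test-vm-4 -- nunet deploy new ensemble.yaml   # hypothetical command
```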
Edge node tries to deploy to an Orchestrator
Additional P2P Tests
Now that we have proved the organization-granted permissions restrict the ability of edge nodes to deploy on each other, we can see how actors (in this case the DMS nodes) can grant permissions to each other directly, without using the organization delegation.
Edge node grants another edge node permissions to deploy to it (p2p)
As an edge node we can grant permissions to a specific DID. Let's find the DID of one edge node and grant it permission to deploy. This command will get the DID of the local DMS on test-vm-4.
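For example (subcommand is an assumption; check `nunet --help`):

```bash
# Print the DID of the local DMS on test-vm-4
lxc exec test-vm-4 -- nunet key did dms
```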
The following command creates a token granting that DMS's DID the right to deploy, valid for 1 hour (copy and paste the DID from the command above).
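A sketch (flag names are assumptions):

```bash
# On the granting edge node: issue a token allowing the DID from the
# previous step to deploy here, valid for one hour
nunet cap grant --cap /dms/deployment --audience <did-from-above> --expiry 1h
```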
This command outputs a token.
Copy this token and apply it on the test-vm-4 machine using the following command; note you need to wrap the token in single quotes.
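A sketch (flag names are assumptions; keep the single quotes around the token):

```bash
# On test-vm-4: anchor the received token so it is presented with
# outgoing deployment requests
lxc exec test-vm-4 -- nunet cap anchor --context dms --provide '<token>'
```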
Re-run the deploy command that failed in the previous step.
Copy that UUID and paste it into the status command to return the status of the deployment, which should now succeed.
Review the log file to see what happened.
Additional Organizations
We can add additional organizations and show how they can trust each other.
Create new organization
TBD
OCaps token distribution sequence
Deployment diagram