Device Management Service Test Suite

Introduction

This directory contains the integration test suite for the DMS. The tests verify DMS functionality by creating a network of nodes and exercising their interactions, ensuring that the core features of the DMS work correctly in a multi-node environment.

Prerequisites

Before running the tests, ensure you have the following prerequisites installed:

  • GlusterFS

  • Docker

  • The DMS binary built and available in the test directory (optional when using the make rule)

GlusterFS Setup

Load the FUSE kernel module, install the GlusterFS client, and pull the GlusterFS container image:

sudo modprobe fuse
sudo apt install glusterfs-client
docker pull ghcr.io/gluster/gluster-containers:fedora

How to Run

Using Make:

sudo make itest

Using Go:

go test -tags=integration ./...

To run a specific test:

go test -tags=integration -run TestIntegration/BasicTests

Available test suites:

  • BasicTests: Tests basic node communication

  • DeploymentTests: Tests deployment functionality

  • DeploymentWithVolumesTests: Tests deployment with storage volumes

  • StorageTests: Tests storage functionality
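
For example, to run only the deployment suite with verbose output and a longer timeout (both standard go test flags):

go test -tags=integration -v -timeout 30m -run TestIntegration/DeploymentTests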

Structure

The test suite is organized as follows:

  • integration_test.go: Entry point for all tests

  • suite_test.go: Defines the test suite structure and common functionality

  • client_test.go: Client implementation for interacting with DMS nodes

  • basic_test.go: Basic communication tests

  • deployment_test.go: Tests for deployment functionality

  • glusterfs_test.go: GlusterFS setup for storage tests

  • storage_test.go: Tests for storage functionality

  • volume_test.go: Tests for volume management

  • utils_test.go: Utility functions for tests

  • testdata/: Test deployment ensembles

Key Components

  • TestSuite: The main test suite that sets up a network of nodes

  • Client: A wrapper around the DMS CLI for testing

  • prefixWriter: Used to prefix node logs with node identifiers
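
The prefix-writer idea can be sketched as a small io.Writer decorator (a sketch only; the actual implementation in the suite may differ):

// Tags each output line with a node identifier so that logs from
// concurrently running nodes stay distinguishable.
// Requires "bytes" and "io" from the standard library.
type linePrefixWriter struct {
    prefix string
    out    io.Writer
}

func (w *linePrefixWriter) Write(p []byte) (int, error) {
    for _, line := range bytes.SplitAfter(p, []byte("\n")) {
        if len(line) == 0 {
            continue
        }
        if _, err := w.out.Write(append([]byte(w.prefix), line...)); err != nil {
            return 0, err
        }
    }
    return len(p), nil
}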

Best Practices

1. Parallelism

Tests should be run in parallel to speed up the test suite.

To add a new feature test that runs in parallel, the suggested workflow is:

  1. Create a new test file in the test/integration directory.

  2. Define a runner function that takes a *TestSuite parameter:

    func NewFeatureTest(suite *TestSuite) {
        // New feature test implementation
    }
  3. Add your test to the TestIntegration function in integration_test.go:

    t.Run("NewFeatureTests", func(t *testing.T) {
        t.Parallel()
    
        newFeatureTests := &TestSuite{
            numNodes:      3,  // Adjust as needed
            Name:          "new_feature_tests",
            restPortIndex: 8100,  // Use unique port ranges
            p2pPortIndex:  10700,  // Use unique port ranges
            runner:        NewFeatureTest,
        }
        suite.Run(t, newFeatureTests)
    })

2. Port Allocation

Each test suite must use unique port ranges to avoid conflicts:

  • Allocate unique restPortIndex and p2pPortIndex for each test suite

  • Increment by at least 3 (for a 3-node test) from the previous test suite's ports

  • Document the port ranges used in comments to avoid future conflicts
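
As an illustration (all port values and runner names here are hypothetical), two consecutive 3-node suites might reserve and document non-overlapping ranges like this:

basicTests := &TestSuite{
    numNodes:      3,
    Name:          "basic_tests",
    restPortIndex: 8080,  // rest ports 8080-8082
    p2pPortIndex:  10000, // p2p ports 10000-10002
    runner:        BasicTest,
}
deploymentTests := &TestSuite{
    numNodes:      3,
    Name:          "deployment_tests",
    restPortIndex: 8083,  // previous rest range + 3
    p2pPortIndex:  10003, // previous p2p range + 3
    runner:        DeploymentTest,
}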

3. Resource Management

Tests should properly clean up resources:

  • Use t.Cleanup() for Docker containers and other external resources (see the sketch after this list)

  • Ensure all nodes are properly shut down in the TearDownSuite method

  • Verify resource allocation and deallocation in deployment tests
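
A minimal sketch of the t.Cleanup() pattern for an external Docker container (the container name and image are illustrative; requires "os/exec" and "testing"):

func startContainerForTest(t *testing.T, name string) {
    t.Helper()
    if err := exec.Command("docker", "run", "-d", "--name", name, "alpine", "sleep", "300").Run(); err != nil {
        t.Fatalf("starting container %s: %v", name, err)
    }
    t.Cleanup(func() {
        // Runs even when the test fails, so the container never leaks.
        _ = exec.Command("docker", "rm", "-f", name).Run()
    })
}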

4. Test Data Organization

  • Store test ensembles in testdata/ensembles/

  • Use descriptive names for test files

  • When using dynamic hostnames, use the replaceHostnameInFile utility

5. Error Handling

  • Use suite.Require() instead of package-level assertions to ensure proper test failure tracking

  • Add descriptive failure messages to assertions

  • Use suite.T().Logf() for detailed logging during test execution
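
For instance, a runner might combine Require()-style assertions, descriptive messages, and progress logging like this (a sketch using only fields shown earlier):

func NewFeatureTest(suite *TestSuite) {
    suite.T().Logf("starting feature checks on %d nodes", suite.numNodes)
    // Require() records the failure against the correct subtest and stops
    // immediately, unlike package-level assert functions.
    suite.Require().Equal(3, suite.numNodes, "this suite expects exactly 3 nodes")
}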

6. Test Isolation

  • Each test suite should be completely independent

  • Do not share state between test suites

  • Use unique node directories and configurations
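
For unique directories, the standard library's per-test temporary directories are one option (a sketch; the suite may manage node directories differently):

nodeDir := suite.T().TempDir() // unique per test, removed automatically when the test ends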

7. Timeouts and Retries

  • Use suite.Require().Eventually() for operations that may take time to complete (see the sketch after this list)

  • Set appropriate timeouts based on operation complexity

  • Include descriptive timeout messages
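
A sketch of the Eventually pattern (allocationRunning is a hypothetical helper; requires "time"):

suite.Require().Eventually(func() bool {
    return allocationRunning(suite, allocID) // hypothetical status check
}, 2*time.Minute, 5*time.Second,
    "allocation %s never reached the running state", allocID)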

8. Assertions

What to assert when testing deployments, and when:

Before deploying:

  1. (CP): resources

    1. free resources == onboarded resources

    2. allocated resources == 0

  2. (CP): allocations/list is empty

After deploying:

Note: most of these checks do not apply to transient allocations, since we usually use short-lived executions. To make the following assertions against task allocations, use transient allocations that run for more than 2 minutes.

  1. (CP) assert allocation running (cmd: /dms/node/allocations)

    1. (CP) assert container running if possible

  2. (CP) assert resources

    1. allocated resources increased

    2. free resources decreased

  3. (Orchestrator) assert deployment status depending on allocations type

  4. (Orchestrator) assert manifest

  5. (CP) assert that the connection between containers is working, for tests with multiple allocations

After completed (if only tasks) or shutting it down:

  1. (CP) assert allocation NOT listed (cmd: /dms/node/allocations)

    1. (CP) assert container not running if possible

  2. (CP) assert resources

    1. allocated resources decreased

    2. free resources increased

  3. (Orchestrator) assert deployment status depending on allocations types

  4. (CP) assert the subnet is deleted (including the tunneling interface)
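
A condensed sketch of the resource checks around a deployment (every method and field on the client here is hypothetical; the real wrapper lives in client_test.go):

// Before deploying: free == onboarded, nothing allocated.
res, err := client.Resources() // hypothetical wrapper over the DMS CLI
suite.Require().NoError(err)
suite.Require().Equal(res.Onboarded, res.Free, "free should equal onboarded before deploy")
suite.Require().Zero(res.Allocated, "nothing should be allocated before deploy")

// ... perform the deployment here ...

// After deploying: allocated grew, free shrank.
res, err = client.Resources()
suite.Require().NoError(err)
suite.Require().Positive(res.Allocated, "allocated resources should increase after deploy")
suite.Require().Less(res.Free, res.Onboarded, "free resources should decrease after deploy")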
