
Specification and Documentation

This document explains the current framework or structure for specification of platform components. It also outlines the process through which updates to platform specification and corresponding documentation are managed.


Specification Framework

The specification for each component / package / sub-package is described in the README file situated in the same folder.

The README file consists of the following elements:

  1. Static section: This section contains the package name and some links to information about the project. Note that each README file has the same set of links in this section. The links are followed by a Table of Contents which again is common across all READMEs.

  2. Description: This section should have a brief description of what the package is about and its core functionality.

  3. Structure and organisation: Here we give a high-level overview of the contents of the package. This includes any file or folder created within the directory.

  4. Class Diagram: Each package/sub-package is represented by a class diagram created in PlantUML. This file, named class_diagram.puml, is located in the specs folder of each directory. Note that the class diagram needs to be detailed at the sub-package level. The package-level diagram gets created automatically by integrating the diagrams of each sub-package into a single file. Similarly, the global class diagram at the component level gets created by integrating the diagrams of all packages into a single file.

  5. Functionality: This section typically explains the interfaces and methods that define the functionality of the package. Developers can choose to link documentation auto-generated from the code as long as it clearly explains the package functionality. Alternatively, they can also follow the structures/templates prescribed in the Interfaces & Methods section of this document. It is recommended to specify any additional information (as applicable) to enhance the clarity and understanding of the reader.

  6. Data Types: This section lists the various data models used by the package. By default, Go structs have been used to describe the data types. However, developers may choose to use an equivalent structure as per the language (e.g. Python) used in the component/package. The conventions to be followed for specifying data types are further explained in the Data Models section of this document.

  7. Testing: This section is to be used to explain to the reader how to test the functionality. This may cover unit tests, functional tests or anything else as required.

  8. Proposed Functionality / Requirements: This section allows developers to capture functionality that is not yet built but is in the pipeline. This requirement could be coming from other packages needing a functionality, or it could be features from the roadmap of the package. This section essentially serves two roles:

    a. Give the reader an idea of what modifications are expected to the current package. Refer to the list of issues referenced in this section to access work being done or planned for the package.

    b. This is the place where requests for new functionality can be specified. The process for doing so is outlined in this section.

    Note: All future functionality should have a proposed tag in the heading to make it clear to the user. See below for an illustration.

  9. References: Any additional links or content relevant for the reader can be mentioned here. For example, the NuNet research blog is referenced in several places to give interested readers an idea of the background research done prior to development of the functionality.

Interfaces & Methods

Interfaces can be written directly in the README file using a code block, which mostly applies to proposed interfaces. It is best to explain the purpose of the interface and its methods in plain English.
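
For instance, a proposed interface might be captured in a code block like the hypothetical sketch below (the interface and method names are illustrative only, reusing the naming examples from this document, and do not describe an actual platform package):

// Messenger describes the messaging functionality of a hypothetical package.
type Messenger interface {
	// sendMessage sends a message to the given peer and returns an error if delivery fails
	sendMessage(peerID string, payload []byte) error

	// publishBidRequest propagates a bid request to the network
	publishBidRequest(request BidRequest) error
}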

Template / Structure for method description

A recommended (but not mandatory) structure for describing methods is as follows:

  • signature: <function_signature>

  • input #1: <explanation of first input parameter>

  • input #2: <explanation of second input parameter>

  • output (success): <Expected output data type>

  • output (error): <Output in case of any error>

<Function_name> function <function_description>

See below for an illustration
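
As a hypothetical illustration (reusing the publishBidRequest and BidRequest names used elsewhere in this document, not an actual platform method), a method description could look like this:

  • signature: publishBidRequest(request BidRequest) error

  • input #1: request - the BidRequest containing the details of the bid being made

  • output (success): nil

  • output (error): error explaining why the bid request could not be published

publishBidRequest function propagates a bid request to the network so that compute providers can respond with bids.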

Note: It is recommended to specify only the main methods that describe the core functionality. Helper functions need not be described in the README file.

Naming Convention

It is recommended to use camelCase for function names, with the first word in lower case and each subsequent word starting with a capital letter. For example - sendMessage or publishBidRequest.

Data Models

All data structures required or used in the functionality should be specified. See below for an example data model.

type Bid struct {
	// BidRequest is the request for the bid propagated in the network
	// by the service provider
	BidRequest dms.orchestrator.BidRequest

	// JobID is ID of the job
	JobID int

	// bidder contains the ID details of the DMS which is sending the bid
	bidder dms.node.NodeID

	// PriceBid contains price information of the bid
	PriceBid dms.orchestrator.PriceBid

	// TimeBid contains time information of the bid
	TimeBid dms.orchestrator.TimeBid 

	// Timeout is the timestamp until which the compute provider will wait
	// for the job request
	Timeout int64

	// ValidOffer indicates whether the bid offer is currently valid
	ValidOffer bool

	// ResourcesLockedUntilTimeout indicates whether the resources are locked until timeout
	ResourcesLockedUntilTimeout bool
}

Naming Conventions

The data models are specified using a standard code block. The following naming convention needs to be used:

{package}.{subpackage}.{NameOfDataType}

For example, dms.orchestrator.BidRequest implies that a data type by the name BidRequest is defined in the orchestrator sub-package of the dms package. It is important to adhere to this convention for discoverability and readability of the specifications.

Another applicable convention is to capitalize the first letter of each word in the name of a data model. For example - BidRequest or PriceBid.

Note: Only the data models defined in the current package need to be specified using the code block. For data models from other packages, only the name should be mentioned along with a brief explanation of what role it is playing in the package functionality.

For example, see below an illustrative example from the executor package.

Note that the data types defined in the types package are only explained here in the context of the package functionality; their parameters are not specified again, as they have already been defined in the README of the types package. However, the LogStreamRequest data type is fully described, as it is defined in the executor package itself.
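
The convention could look roughly like the following sketch (the fields shown are illustrative placeholders, not the actual executor specification):

// LogStreamRequest is defined in the executor package itself,
// so it is fully specified here (illustrative fields only).
type LogStreamRequest struct {
	// ExecutionID identifies the execution whose logs are being streamed
	ExecutionID string

	// Follow indicates whether the stream should stay open for new log entries
	Follow bool
}

// A data type defined in the types package (for example types.ExecutionResult,
// a hypothetical name) would only be mentioned here by name, together with a
// short note on the role it plays in the executor functionality, without
// repeating its fields.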

API End Points

Below is the recommended structure for describing API endpoints. However, developers can modify this or use alternate tools like Swagger if that is more beneficial.

  • endpoint: <endpoint url>

  • method: <method being used>

  • input: <expected input data type>

  • output: <expected output data type>

<Explanation of the endpoint functionality>

See below for an illustrative example of the onboard endpoint.
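
A hypothetical sketch of such a description (the URL, input and output types are placeholders, not the actual onboarding API) could be:

  • endpoint: /api/v1/onboarding/onboard

  • method: POST

  • input: types.OnboardingConfig

  • output: types.OnboardingResponse

This endpoint onboards the machine as a compute provider using the supplied configuration and returns the resulting onboarding status.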

Sequence Diagrams

Sequence Diagrams can be a useful tool to describe a functionality in the initial or design phase. Developers are recommended to make use of this wherever they see fit.

We suggest using PlantUML for creating sequence diagrams. A few reasons for this choice are:

  • PlantUML files support the naming convention we follow. See the naming conventions for data types above.

  • It is very easy to insert a .puml file in the README. In fact, the class diagrams of all the packages in DMS (Device Management Service) are made using PlantUML.

  • PlantUML allows constructing both the whole diagram and parts of it. This allows us to divide the specs across packages and store them close to the code.

Alternatively one can also use Mermaid.

See below for an illustrative example of a search and match operation, where the DMS tries to find eligible compute providers on the network.
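
A minimal PlantUML sketch of such a flow (the participants and messages are illustrative assumptions, not the actual implementation) might look like:

@startuml
participant "Service Provider DMS" as SP
participant "Elasticsearch" as ES
participant "Compute Provider DMS" as CP

SP -> ES : search for eligible compute providers
ES --> SP : list of matching providers
SP -> CP : send bid request

loop Compute provider decision to accept/reject
    CP -> CP : assess the job opportunity
    alt decision to bid
        CP --> SP : submit bid
    else decision to not bid
        CP --> SP : decline / ignore the request
    end
end
@enduml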

The important aspects that typically should be covered in the sequence diagram are:

  • It should show the entities involved in the functionality, e.g. User, Compute Provider DMS, Elasticsearch etc.

  • Define routines and subroutines. In the above example, the loop Compute provider decision to accept/reject covers the functionality where the Compute Provider assesses the job opportunity. Within this loop we have two subroutines covering the two possible scenarios - decision to bid or not bid.

  • Add descriptions, comments etc. These are particularly useful for making the sequences clearer to the reader.

Gherkin Feature File

Gherkin feature files are another useful tool that can be used by developers to specify a functionality. Gherkin syntax allows us to describe the functionality of a component using natural language.

This means we develop a granular list of steps that should be executed for each functionality that is being offered by the component or package. This also covers different scenarios of interaction which can lead to different outcomes within the same functionality.

The steps written in Gherkin are saved in a file with an extension .feature. An example of such a feature file with a single scenario is shown below.

Feature: Cleanup Docker Resources

  Scenario: Removing Docker resources associated with the executor
    Given the Device Management Service (DMS) is installed
    When the "executor.docker.Cleanup" method of the Executor is called with a valid context
    Then the Executor removes all Docker containers, networks, and volumes with the specified label
    And the Executor logs a message indicating successful cleanup
    And no error is returned

  Scenario: Unable to remove Docker resources
    Given the Device Management Service (DMS) is installed
    And an error occurs while removing Docker resources
    When the "executor.docker.Cleanup" method of the Executor is called
    Then an error is returned indicating the failure to remove containers
    And no cleanup message is logged

The important aspects that should be covered in the feature file are:

  • The applicable function and data models should be referenced.

  • Define endpoints if applicable.

  • Explain the precondition that should exist using the Given keyword.

  • Define the different scenarios that can occur within this functionality.

  • Note the naming convention, which specifies the package in which the said function/data model will be located - for example, a createBid function in the orchestrator package of the DMS component.

Update Procedure

All documentation at NuNet should be considered Living Documentation, which essentially means that it gets updated along with the code and the evolution of the project.

The steps below describe the typical process followed for making any change or update to the platform functionality. This means that contributions to documentation are considered contributions to the code base and should follow the NuNet contributing guidelines (see document).

Steps to suggest or propose a change

  1. Open the list of issues link in the Proposed Functionality / Requirements section in the README file.

  2. Check if a similar issue has already been created. If the answer is Yes, add your comments on the issue to facilitate discussion, using sequence diagrams and Gherkin files as required, and skip step 3. If the answer is No, continue to step 3.

  3. Create a new issue explaining the proposed functionality. Use sequence diagrams, Gherkin files as required.

  4. Tag developers who are maintaining the code/package on the issue / comment.

  5. Create a new feature branch from the main branch. If a feature branch with related updates already exists, it may be useful to create your branch from there.

  6. Update the Proposed Functionality / Requirements section in the README with proposed interfaces, methods, data types etc.

  7. Create a Merge Request (MR) for review. Assign maintainers of the code as reviewers.

  8. Link the MR created in the previous step on the issue created earlier.

The Initial Review Process

  1. The assignees will either review the merge request themselves or assign a responsible core team member (who is knowledgeable about the functionality).

  2. The issue and linked MR will be reviewed by the assigned person. If it is found acceptable, we move to the discussion stage. If the proposal is simple, the reviewers can accept or reject it immediately.

In-depth Discussion Process

  1. The issue will be discussed further as needed with core team members. The discussion has to be coordinated by the assigned reviewer.

    • The free discussion can happen on the issue as comments broken into topics if needed.

    • The discussion is closed when the reviewer accepts the proposed changes to the specifications and the README file on the feature branch adequately describes the proposed functionality in the relevant section.

  2. Upon acceptance, the feature branch will be merged into the main branch. Note that at this point only the README file has been updated.

  3. A new issue will be created for development of the proposed changes, placed into Open, and described as needed in order to be placed into kb::backlog, considering the flow of team work at the moment.

    • The specification of the feature will be placed in the description of the issue so that developers and contributors can easily pick it up and implement it.

    • The specifications will serve as acceptance criteria for the issue to be accepted for merge into main branch (after implementation is done).

Implementation

  1. Create a feature branch from the main branch.

  2. Implement the changes on the feature branch to incorporate the proposed functionality.

  3. Update the README file so that the specification matches the state of implementation.

  4. Create a merge request to the main branch.

  5. Link the MR on the relevant issue.

Implementation Review and Acceptance

  1. The linked MR will be reviewed by the core team.

  2. The reviewer will check whether the implemented functionality meets the specification and all specified tests pass as defined in the Test Management process, and will make the decision to merge or close the MR.

  3. Upon acceptance, the feature branch will be merged to the main branch.

  4. The issue will be marked as kb::done.
