Last updated: 2024-12-24 01:10:57.741747 File source: link on GitLab
This process aims to encourage and enable the NuNet community to actively engage in discussions, report bugs, ask questions, and suggest new features. GitLab serves as the platform to facilitate this, allowing the community to stay closely connected to the open-source code and documentation driving the NuNet platform.
In the following sections, we outline issue templates designed to guide the community in creating issues on GitLab. Moreover, we emphasize the importance of timely responses by the NuNet team. To support this, new boards, tagging strategies, and automations have been introduced.
To streamline the issue creation process, templates are provided with fields that guide the user. Below are the available templates and links to access them:
Default - Provides an overview of the different templates available.
Bug - For reporting reproducible issues.
Question - For asking questions or reporting non-reproducible issues.
Feature - For suggesting new features for the NuNet platform.
Discussion - For initiating conversations or sharing thoughts on specific topics.
Issue - Reserved for internal use by the NuNet team for issue creation aligned with the software process.
NuNet hosts multiple projects on GitLab, so to maintain a consistent and standardized approach, all templates are stored in the team-process-and-guidelines repository. A script in the pipeline automatically propagates any template change to the relevant repositories, ensuring uniformity across all projects.
The following boards can be used to manage community feedback:
Feedback Board: Displays all open issues submitted by the community or the NuNet team, organized with one column for each tag: ~"type::bug" ~"type::question" ~"type::feature" ~"type::discussion".
Bug Tracking Board: Lists all open bugs, whether reported by community members or the NuNet team, with separate columns for each status: Open, ~"kb::requested" ~"kb::backlog" ~"kb::on hold" ~"kb::doing" ~"kb::review" ~"kb::done".
Tags used in the Bug tracking workflow:
~"kb::requested" – The bug is under review and analysis.
~"kb::backlog" – The bug has been verified.
~"kb::doing" – Developers are actively working on the bug.
~"kb::review" – The fix is currently being reviewed.
~"kb::done" – The bug has been resolved.
Similarly, these are the tags used for managing questions, feature requests, and discussions:
~"kb::requested" – Someone is assigned to respond to the question, analyze the feature request, or engage in the discussion.
~"kb::done":
The question has been answered.
A proposed feature or a discussion has led to the creation of new issues for implementation. These new issues follow our current development process and are linked to the original issue.
Optionally, feature requests or discussion issues can be tagged with ~"kb::backlog" for implementation.
The following automations support the community feedback process:
GitLab-Slack Integration: Connects GitLab and Slack to send messages to the support Slack channel whenever an issue with the following tags is opened or updated: ~"type::bug", ~"type::question", ~"type::feature", or ~"type::discussion".
Time Tracking: Enhances the Slack integration by notifying the support channel when open issues approach their response deadlines, ensuring timely responses.
Stalled Issues: If no response is received from a community member for an extended period, the support channel is notified, allowing for a review and a decision on whether to close the issue manually.
Email-GitLab Integration: Allows users to reply directly via email using GitLab’s email integration, accommodating personal preferences.
Template Management Automation: A pipeline script automatically updates templates across other repositories when changes are made in the team-process-and-guidelines repository.
NuNet aims to establish the following timelines as guidelines, particularly concerning timely responses:
For bugs and questions: within 12 hours, the first response team must acknowledge the open issue and request more information if needed. If all the necessary information is provided, they change the tag to ~"kb::requested" and assign someone for analysis. Currently, the first response team consists of @janaina.senna.
Within 24 hours of assignment, the responsible person must review the issue and add any additional information. If more time or information is required, this should be stated clearly in the issue.
If additional information is requested, the responsible person has up to 12 hours to review the community's response and either request further feedback, close the issue (if the bug isn't confirmed or the question is answered), or change the issue's tag to ~"kb::backlog" if it requires implementation.
At this point, the issue becomes part of the development process.
For feature requests and discussions: within 24 hours, the first response team must acknowledge the open issue, change the tag to ~"kb::requested", and assign someone for analysis. Currently, the first response team consists of @janaina.senna, with support from @vyzo and @dagims as needed.
Within 72 hours of assignment, the responsible person must review the issue and add any additional information. If more time is needed, this should be stated clearly in the issue.
If feedback from a community member is requested, the responsible person has up to 24 hours to review the community's response, request further feedback if necessary, or close the issue (if the discussion has concluded or the feature will not be implemented). If the feature or discussion will lead to implementation, the responsible person can change the tag to ~"kb::backlog" or create new issues (as per our development process) linked to the original community member's issue.
At this point, the issue becomes part of the development process.
Last updated: 2024-12-24 01:10:57.250494 File source: link on GitLab
NuNet's CI/CD pipeline is an automation pipeline that supports the development of NuNet's products. It is designed to reflect the testing processes and to support testing programs, campaigns, and environments. It is invaluable for both the internal development team and the public development community, as it provides the feedback and structure needed to increase confidence in the reliability of changes made to the codebase during the development lifecycle.
Continuous delivery and continuous integration is a broad topic that spans practically the entire lifecycle of the software product, from the first commit of a new feature to its deployment in production. The scope of the pipeline in NuNet is no different, so it isn't hard to imagine a pipeline that spans multiple repositories, each specialized in a specific part of the process, from code that is responsible for testing, to jobs that automate deployment and continuous monitoring of the application.
The goal of a CI/CD pipeline is to test every new version of a platform on a network that is as close to the production environment as possible, automating the reproduction of all possible behaviors of the platform on this network. This enables the development team to rerun these tests on each modification of the codebase, identifying and eventually eliminating errors before a new version of the components that compose the production network is released.
The complication compared to usual CI/CD pipelines comes from the fact that NuNet is a decentralized platform running on different, heterogeneous computers around the world. Therefore, we need to spawn a testnet that already contains independent compute providers to pass CI/CD jobs quite early in the pipeline. Spawning the testnet required for testing the platform involves executing different flavors of 'deploy' jobs, while actually testing these deployments requires direct interaction with the deployed components. An additional complication is that the NuNet platform isn't prescriptive about the hardware it runs on, since that hardware consists of machines owned by compute providers, to which NuNet does not have root access. Therefore, advanced stages of the testing pipeline will invariably involve manual tests, which are encouraged through community tester programs.
When it comes to the actual implementation of the CI/CD pipeline, we leverage Gitlab CI/CD pipelines. This enables us to easily define and reuse components of the pipeline in every repository as we see fit.
Each repository has its own .gitlab-ci.yml file, which defines the pipeline for that repository. A repository does not have to redefine the entire pipeline, however: it can import and reuse, as described above, pieces of the pipeline from external repositories.
Currently, we have the entirety of the pipeline described in nunet/test-suite, which is the repository responsible for tracking the development of the testing stack.
Before a repository can inherit the pipeline from test-suite, however, it must define some jobs, namely:
the docker build job, which produces an image that contains the necessary dependencies for compiling and running the code in that repo.
the build job, which is responsible for producing runnable and/or installable artifacts, which the pipeline uses to run functional tests.
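To make the inheritance mechanism concrete, a consuming repository's .gitlab-ci.yml might look roughly like the sketch below. The include path, template file name, stage names, and build commands are illustrative assumptions, not the actual NuNet pipeline definition:

```yaml
# Illustrative sketch only: the template path, stage names, and commands
# are assumptions; see nunet/test-suite and nunet/device-management-service
# for the real definitions.
include:
  - project: nunet/test-suite
    ref: main
    file: cicd/Pipeline.gitlab-ci.yml   # hypothetical shared template

docker_build:              # image with this repo's build/run dependencies
  stage: docker_build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

build:                     # runnable/installable artifacts for functional tests
  stage: build
  image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  script:
    - make build           # placeholder build command
  artifacts:
    paths:
      - dist/
```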
For reference, see the actual pipeline implementation of nunet/device-management-service.
We also have nunet/nunet-infra, which is responsible for tracking progress related to the deployment of infrastructure at NuNet. The line between test-suite and nunet-infra blurs when you consider that specific infrastructure deployments are needed to support software testing, providing the components the software needs to interact with (databases, object stores, container runtimes, etc.). There is therefore infrastructure code in test-suite, but it should not be mistaken for the code in nunet-infra, which relates to the operational part of the pipeline.
Below is an explanation of each stage of the general pipeline, considering the above.
The NuNet CI/CD pipeline combines a number of stages, which are called from the template file. All templates are contained in nunet/test-suite/cicd. Note that the CI/CD pipeline is being implemented iteratively, step by step, and is constantly augmented to cover new functionalities and aspects of the platform; therefore, some jobs and stages may still contain empty templates that will be implemented or changed considerably in the future.
The stages depend on each other and are listed in order of execution. However, due to differences in the code across repositories, they may run in a different order.
Static Analysis / Code Quality (name: static_analysis)
Contains static analysis of code quality using a number of tools that check and report the code quality level. Code quality results for each run of this job are displayed via the GitLab interface. Definition: Code-Quality.gitlab-ci.yml
Unit Tests (name: unit_tests)
Runs unit tests on the codebase for each language present in it (since NuNet is a language-agnostic platform, the codebase may contain code in multiple languages). The coverage report is displayed via the GitLab interface.
Definition: Unit-Tests.gitlab-ci.yml
Static Security Tests (name: security_tests_1)
Automated security testing (using third-party tools) that does not need live environments or a testnet, i.e., it can be run on the static repository code.
Definition: Security-Tests-1.gitlab-ci.yml
Build (name: build)
Builds all NuNet platform components needed for deploying the platform on a testnet (and eventually on the production network), including multi-architecture packages of device-management-service and centralized services as needed. The usual source is the develop branch of each respective repository, which contains the bleeding-edge, unstable version of the code for each component.
Definition: Build.gitlab-ci.yml
Feature Environment / Functional Tests / API Tests (name: functional_tests)
Tests each API call as defined by the NuNet Open API of the respective version being tested. The goal of this stage is to make sure that released versions of the platform fully correspond to the released Open APIs, which the core team, community developers, and app integrators will use to build further.
This stage is responsible for provisioning the feature environment, deploying the respective DMS build to be tested in virtual machines hosted by a cluster of LXD providers, and running the functional tests implemented in nunet/test-suite/stages/functional_tests against this deployed network.
Definition: Feature-Environment.gitlab-ci.yml
Automated Security Tests (name: security_tests_2) -- to be implemented
Tests the security of API calls that do need a deployed test network.
Definition: Security-Tests-2.gitlab-ci.yml
User Acceptance Tests (name: user_acceptance_tests) -- to be implemented
Tests user behaviors from the user's perspective. This stage aims to include all identified possible user behaviors and is constantly updated as new behaviors are identified. The goal is to run most user acceptance tests automatically (describing scenarios BDD-style); however, some tests will need to be run manually by the network of beta testers.
Definition: User-Acceptance-Tests.gitlab-ci.yml
Deploy Staging Network (name: deploy_staging) -- to be implemented
As described in Git workflow -- branching strategy and NuNet test process and environments, a pre-release version of the platform with features frozen for the release will be separated into the release branches of each respective repository. These will be built and extensively tested with advanced tests requiring a large network of compute providers covering all possible hardware configurations and supported operating systems. The staging network will be organized via collaboration with the community.
Definition: Deploy-Staging.gitlab-ci.yml
Regression Tests (name: regression_tests) -- to be implemented
Regression tests will deploy and run all applications that run on the platform. These tests may overlap with or include user acceptance testing, covering the behaviors of these applications as well as deployment behaviors. Like the user acceptance testing stage, regression tests may include a manual beta-testing phase.
Definition: Regression-Tests.gitlab-ci.yml
Performance and Load Testing (name: load_tests) -- to be implemented
Defines performance scenarios that would exhaust the system and runs them automatically to check the confidentiality, availability, and integrity of the platform.
Definition: Load-Tests.gitlab-ci.yml
Live Security Tests (name: security_tests_3) -- to be implemented
Security tests that need the full platform and all applications running, to test security aspects from the user's perspective. These will be done mostly manually and include 'red team'-style penetration testing. All detected vulnerabilities will be added to the security_tests_1 and security_tests_2 stages for automated testing of further platform versions.
Deploy into Production (name: deploy_prod) -- to be implemented
If all tests pass on the staging network, the release branches are tagged and announced as per Git workflow -- branching strategy.
Last updated: 2024-12-24 01:10:56.346535 File source: link on GitLab
This repository hosts all team processes, resource descriptions, and guidelines followed by both core team members and community developers.
Last updated: 2024-12-24 01:10:58.002798 File source: link on GitLab
This page contains guidelines for the community to contribute in NuNet's development. We appreciate any kind of contribution, including:
Bug reporting and fixes
Documentation improvements
Code contributions & improvements
New features
Testing, etc.
Please take a look at the Device Management Service (DMS) README to get an overall understanding of the project architecture and the underlying concepts. It will also cover the basics of getting started with the DMS functionality.
It is recommended that you familiarize yourself with the codebase. Read the associated documentation specified in the README files present in each directory. To understand how the README files are structured, refer to the Specification and Documentation Framework.
Bug reports should be submitted to the issue tracker on the appropriate repository. For example, if the bug is about Device Management Service (DMS), use the DMS issue tracker.
Note: Bugs should be reproducible! Include detailed steps on how to reproduce the problem, along with any error messages. Screenshots and GIFs are helpful too! If you are unsure whether what you are experiencing is reproducible, reach out to our community and ask for help reproducing it so you can confirm!
Always search the Bug Tracking Board FIRST! It’s likely someone caught this before you, or already reported something similar, and it will save time and effort. If you find an existing issue, show your support with an award emoji and/or join the discussion.
Use the Bug Issue Template! This template can be chosen while creating a new issue. See below for an illustrative screenshot.
The boxes in the template contain prompts to help you decide what should be included. The template can be populated and edited as needed.
Add the ~type::bug label to your issue!
Questions should be submitted to the issue tracker on the appropriate repository. For example, if the question is about Device Management Service (DMS), use the DMS issue tracker.
Always search the Feedback Board FIRST! It's likely that someone already reported something similar, and it will save time and effort. If you find an existing issue, show your support with an award emoji and/or join the discussion.
Use the Questions template! This template can be chosen while creating a new issue. See below for an illustrative screenshot.
Add the ~type::question label to your issue!
Feature proposals should be submitted to the issue tracker.
Refer to the Update Procedure for steps to be followed while suggesting a change to the existing functionality.
Keep it simple! Keep feature proposals as small and simple as possible, complex ones might be edited to make them small and simple.
Always search the Feedback Board FIRST! It’s likely someone already reported something similar, and it will save time and effort. If you find an existing issue, show your support with an award emoji and/or join the discussion.
Use the Feature Proposal Issue Template! This template can be chosen while creating a new issue. See below for an illustrative screenshot.
The boxes in the template contain prompts to help you decide what should be included. The template can be populated and edited as needed.
Add the ~type::feature label to your issue!
Search the Issue Tracker to look at already created issues ready for development.
Apply the filter with ~good-first-issue label to see issues suitable for those who are new to the project.
Familiarise yourself with the Specification and Documentation Framework.
Here you can find the development workflow used in NuNet platform.
Refer to these steps for the process to be followed while implementing a new feature or functionality.
Create a merge request with the contributed code, filling out all the requested information according to the merge request template.
After submitting the merge request, verify that all CI/CD pipeline stages are running successfully. Fix the merge request if necessary.
All code and contributions have to include appropriate documentation updates, corresponding to the code changes, as explained in the Specification and Documentation README file.
To start a conversation or to share your thoughts on a specific topic, a new issue should be submitted to the issue tracker on the appropriate repository. For example, if the discussion is about Device Management Service (DMS), use the DMS issue tracker.
Always search the Feedback Board FIRST! It's likely that someone already reported something similar, and it will save time and effort. If you find an existing issue, show your support with an award emoji and/or join the discussion.
Use the Discussion template! This template can be chosen while creating a new issue. See below for an illustrative screenshot.
Add the ~type::discussion label to your issue!
Last updated: 2024-12-24 01:10:58.492452 File source: link on GitLab
This page contains the Git workflows used by the NuNet development team. The first section details the workflow from a developer's point of view, the second details the branching strategy used to organize the Git repositories, and the third details how version increments work for NuNet projects.
The developer should use the feature branching strategy, i.e., each feature/issue is implemented on its own branch.
From the issue on GitLab, the developer uses the Create branch button. GitLab automatically fills in the branch name. The developer chooses main as the source branch (see picture below).
A branch tends to be short-lived, making it easier to merge.
A developer should commit/push every day, even when the work is not yet ready for review, and write good commit messages. The developer should also merge the main branch into the feature branch every day.
When done, the developer merges the main branch into the feature branch, opens a merge request (MR), and assigns a peer reviewer based on one of these aspects: familiarity with the requirements, knowledge of the module or code, a useful ability, or relevant experience.
An MR triggers the CI pipeline. The developer makes changes if necessary so that the merge request passes the pipeline successfully.
Reviewers should leave questions, comments, and suggestions, and check whether the README.md files were updated. Reviewers can comment on the whole merge request or on specific lines. The developer can continue to commit and push changes in response to the reviews; the MR will update automatically, and the CI pipeline will run again.
In addition to peer reviews, some MRs will also undergo an architectural or conceptual review, preferably conducted by @kabir.kbr or @vyzo. This review can be requested by either the MR creator or the peer reviewer.
Each MR needs to be approved by one person from the technical board and one person from the security team.
Once the MR is approved, the tech lead or the product owner merges the request into the main branch. The options to “Delete source branch” and “Squash commits” should be checked, as shown in the following picture.
The CI pipeline runs on the main branch. It is unusual but, if necessary, the developer makes changes to the main branch so that it passes the pipeline successfully.
Observation: If for some reason the developer creates a branch that will not be merged, the developer needs to remove it after its use.
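As a sketch of the routine above (committing daily and merging main into the feature branch), here is an illustrative sequence of git commands run in a throwaway local repository. The branch name is a placeholder for the one GitLab generates from the issue:

```shell
# Illustrative daily routine on a feature branch, in a throwaway repository;
# the branch name is a placeholder, and all commits are empty stand-ins.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "chore: initial commit"

git checkout -q -b 123-example-feature    # branch created from the issue
git commit -q --allow-empty -m "feat: first increment of the feature"

git checkout -q main                      # main moves on in the meantime
git commit -q --allow-empty -m "fix: unrelated fix merged to main"

git checkout -q 123-example-feature
git merge -q --no-edit main               # daily: merge main into the branch
```

When the branch is done, the developer merges main one last time, pushes, and opens the MR from the GitLab interface.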
Feature branches are created from issues using the GitLab interface. The developer uses the Create branch button. GitLab automatically fills in the branch name. The developer chooses main as the source branch (see the picture in the section Git workflow - developer view above).
When an issue is complete, it is merged into the main branch through an approved merge request (MR).
The release branch tracks the current (latest) release. Code is merged/cherry-picked from main, and tags are created for every release.
The Release version is reached in a series of steps.
a. When code is merged/cherry-picked from main to the release branch, it is considered a release candidate. A tag vX.X-rc1 is created for it.
b. A feature freeze is declared on the main branch.
c. Testing starts on the release candidate present on the release branch.
d. Bug fixes and development continue on the main branch.
e. This continues until we are ready for the next release candidate (vX.X-rc2). At this stage, main is merged/cherry-picked into the release branch.
f. The above steps are repeated until we are confident that all features are incorporated and there are no known bugs left that could block the release. At this point, the final release is created with the tag vX.X.
After the release, the feature freeze on the main branch is lifted. The development process continues as normal.
For long-term development features, it may be necessary to create a next branch as an alternative to main while the feature freeze is in place for the release. In this case, next is merged into main and deleted prior to the resumption of development activities.
For bugs discovered in development, patch releases can be created with a tag, for example vX.X.1. Feature branches for bugs and critical issues can be created directly from the release branch by the developer.
For releases with long-term support, it may be necessary to keep the patch release on a separate branch, e.g. release/vX.X.1, instead of merging it into the release branch.
A version label is composed of major.minor.hotfix:
major: incremented for incompatible API changes;
minor: incremented for functionality added in a backwards-compatible manner;
hotfix: incremented for backwards-compatible bug fixes.
Here are some common types used in messages, according to the Conventional Commits specification:
feat: A new feature for the user or a significant enhancement
fix: A bug fix
refactor: Code changes that neither fix a bug nor add a feature
perf: Performance improvements
test: Adding or modifying tests
style: Code style changes (e.g., formatting)
docs: Documentation changes
revert: Reverting a previous commit
build: Changes that affect the build system or external dependencies (e.g. npm, docker, nexus)
chore: Routine tasks, maintenance, or general refactoring
ci: Changes to the project's Continuous Integration (CI) configuration
release: Version number changes, typically associated with creating a new release
If the changes in the MR require updates or may break existing functionality, instead of using a type from the list above, use: BREAKING CHANGE: <MR title>
Messages added to the MR are included in the CHANGELOG file and appended to the release before publishing on GitLab.
The table below maps MR types to the version upgrade and release type they trigger.

| MR type | Version upgrade | Release type |
| --- | --- | --- |
| fix, refactor, perf, test, style, docs | Patch | Fix Release |
| feat | Minor | Feature Release |
| BREAKING CHANGE | Major | Breaking Release |
| build, chore, ci | None | No new release created |

Example of a breaking change:

perf(pencil): remove graphiteWidth option

BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons.

NOTE: The BREAKING CHANGE: token must be included at the end of the commit message.
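As an illustration of the mapping above, the release type can be derived from an MR title prefix with a simple lookup. This helper is a hypothetical sketch (it only inspects the title prefix, not commit footers) and is not part of the actual release tooling:

```shell
# Hypothetical helper: map a Conventional Commit style MR title to the
# release type from the table above, by naive prefix matching.
release_type() {
  case "$1" in
    "BREAKING CHANGE:"*)                      echo "major" ;;
    feat*)                                    echo "minor" ;;
    fix*|refactor*|perf*|test*|style*|docs*)  echo "patch" ;;
    *)                                        echo "none"  ;;  # build, chore, ci, ...
  esac
}

release_type "feat(dms): add onboarding endpoint"   # -> minor
release_type "fix: handle empty peer list"          # -> patch
```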
Isolate Features in Dedicated Branches
Reason: Ensures that changes remain modular and easy to review.
How: Create a new branch for each feature or bug fix, such as feature/xyz or bugfix/abc.
Tip: Use descriptive branch names reflecting the branch’s purpose for better navigation.
Commit Early and Often
Reason: Committing small, incremental changes improves version control and debugging.
How: Break tasks into smaller chunks and commit frequently with meaningful messages.
Tip: Avoid vague messages like 'fix bug'; instead, use 'Fix dashboard layout rendering issue.'
Rebase or Merge Regularly
Reason: Keeps feature branches up-to-date with the main branch, avoiding merge conflicts.
How:
For short-lived branches, use git rebase origin/main.
For long-lived branches, use git merge origin/main.
Tip: Regularly review changes after rebase or merge and run tests to ensure stability.
Use Merge Requests (MRs) for Code Review
Reason: Ensures code quality and documents the reasoning behind changes.
How: Open an MR when ready for review, with detailed descriptions and related tickets.
Tip: Include tests and comprehensive explanations in the MR for clarity.
Test Features in Isolation
Reason: Prevents regressions or feature interactions, improving stability.
How: Use continuous integration (CI) pipelines to automatically test each feature branch.
Tip: Use GitLab CI/CD pipelines to enforce passing tests before merging.
Avoid Merging Feature Branches Into Each Other Prematurely
Reason: Creates dependencies and complicates tracking, leading to conflicts or bugs.
How: Wait until each branch is complete and reviewed before merging into the main branch.
Learning: Keep features independent to simplify testing and review.
Don’t Bypass Code Reviews or Testing
Reason: Skipping reviews or tests introduces risks like bugs or performance issues.
How: Ensure all changes go through MRs with proper reviews, and CI tests pass.
Learning: Following thorough reviews reduces future rework.
Don’t Push a Feature Branch Into Another as a Shortcut
Reason: Merging one branch into another for speed complicates history and debugging.
How: Merge each branch individually into the main branch or use an 'integration' branch for combined testing.
Learning: Stick to independent merges to maintain a clean commit history.
Don’t Force Push Without Caution
Reason: Force pushing can overwrite commits, leading to data loss or conflicts.
How: Use git push --force only when necessary, or use git push --force-with-lease to avoid overwriting others' work.
Tip: Avoid force pushing to the main branch unless absolutely necessary.
Don’t Delay Merging for Too Long
Reason: Delaying merges increases the risk of conflicts and makes tracking harder.
How: Rebase or merge regularly and aim to merge branches as soon as they are reviewed.
Tip: Adopt 'merge early, merge often' to minimize conflicts and keep the codebase up to date.
Last updated: 2024-12-24 01:10:59.029870 File source: link on GitLab
Author: @umair-nunet from Issue https://gitlab.com/nunet/nunet-infra/-/issues/97#note_1000806200
The purpose of this document is to help developers bring a security mindset to writing code. The idea is that when developers write code and are ready to deploy it, they do a quick security check to make sure they have done their due diligence in securing their creation. This should be a good guideline for securing application code. Only pay attention to the sections that pertain to your code. For example, if you wrote something that only has to do with databases, only look at the database portion; there is no need to read about file management unless your code does file management. The list is based on the Open Web Application Security Project (OWASP) secure coding practices.
Conduct all data validation on a trusted system (e.g., The server)
Identify all data sources and classify them into trusted and untrusted. Validate all data from untrusted sources (e.g., databases, file streams, etc.)
There should be a centralized input validation routine for the application
Specify proper character sets, such as UTF-8, for all sources of input
Encode data to a common character set before validating (Canonicalize)
All validation failures should result in input rejection
Determine if the system supports UTF-8 extended character sets and if so, validate after UTF-8 decoding is completed
Validate all client-provided data before processing, including all parameters, URLs, and HTTP header content (e.g. Cookie names and values). Be sure to include automated postbacks from JavaScript, Flash, or other embedded code
Verify that header values in both requests and responses contain only ASCII characters
Validate data from redirects (An attacker may submit malicious content directly to the target of the redirect, thus circumventing application logic and any validation performed before the redirect)
Validate for expected data types
Validate data range
Validate data length
Validate all input against a "white" list of allowed characters, whenever possible
If any potentially hazardous characters must be allowed as input, be sure that you implement additional controls like output encoding, secure task specific APIs, and accounting for the utilization of that data throughout the application. Examples of common hazardous characters include: < > " ' % ( ) & + \ ' "
If your standard validation routine cannot address the following inputs, then they should be checked discretely
Check for null bytes (%00)
Check for new line characters (%0d, %0a, \r, \n)
Check for "dot-dot-slash" (../ or ..\) path alteration characters. In cases where UTF-8 extended character set encoding is supported, address alternate representations such as %c0%ae%c0%ae/ (utilize canonicalization to address double encoding or other forms of obfuscation attacks)
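The validation items above can be sketched in Go with a simple whitelist check. This is only an illustrative sketch: the pattern, length limit, and function name are assumptions, not a prescribed rule.

```go
package main

import (
	"fmt"
	"regexp"
)

// usernamePattern is a whitelist: only letters, digits, underscore and
// hyphen, 1-32 characters. Anything outside the allowed set is rejected.
// The pattern and length limit are illustrative choices.
var usernamePattern = regexp.MustCompile(`^[A-Za-z0-9_-]{1,32}$`)

// isValidUsername rejects input on any validation failure rather than
// attempting to "clean" the input.
func isValidUsername(s string) bool {
	return usernamePattern.MatchString(s)
}

func main() {
	fmt.Println(isValidUsername("alice_01"))      // valid
	fmt.Println(isValidUsername("../etc/passwd")) // rejected: path characters
	fmt.Println(isValidUsername("bob%00"))        // rejected: percent encoding
}
```

Note that the whitelist approach makes null bytes, newlines, and dot-dot-slash sequences fail automatically, because none of those characters are in the allowed set.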
Conduct all encoding on a trusted system (e.g., The server)
Utilize a standard, tested routine for each type of outbound encoding
Contextually output encode all data returned to the client that originated outside the application's trust boundary. HTML entity encoding is one example, but it does not work in all cases
Encode all characters unless they are known to be safe for the intended interpreter
Contextually sanitize all output of un-trusted data to queries for SQL, XML, and LDAP
Sanitize all output of un-trusted data to operating system commands
Require authentication for all pages and resources, except those specifically intended to be public
All authentication controls must be enforced on a trusted system (e.g., The server)
Establish and utilize standard, tested, authentication services whenever possible
Use a centralized implementation for all authentication controls, including libraries that call external authentication services
Segregate authentication logic from the resource being requested and use redirection to and from the centralized authentication control
All authentication controls should fail securely
All administrative and account management functions must be at least as secure as the primary authentication mechanism
If your application manages a credential store, it should ensure that only cryptographically strong one-way salted hashes of passwords are stored and that the table/file that stores the passwords and keys is writeable only by the application. (Do not use the MD5 algorithm if it can be avoided)
Password hashing must be implemented on a trusted system (e.g., The server).
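A minimal sketch of server-side salted one-way hashing follows, using only the standard library. The iteration count and function names are illustrative assumptions; production code should use a dedicated password KDF such as bcrypt, scrypt, or argon2 (e.g., golang.org/x/crypto/bcrypt) rather than hand-rolled iterated SHA-256.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// hashPassword returns a random salt and an iterated, salted SHA-256
// digest. Sketch only: use a vetted KDF (bcrypt/scrypt/argon2) in production.
func hashPassword(password string) (salt, digest string, err error) {
	s := make([]byte, 16)
	if _, err := rand.Read(s); err != nil {
		return "", "", err
	}
	sum := sha256.Sum256(append(s, []byte(password)...))
	for i := 0; i < 10000; i++ { // iteration slows brute-force guessing
		sum = sha256.Sum256(sum[:])
	}
	return hex.EncodeToString(s), hex.EncodeToString(sum[:]), nil
}

func verifyPassword(password, saltHex, digestHex string) bool {
	s, err := hex.DecodeString(saltHex)
	if err != nil {
		return false
	}
	sum := sha256.Sum256(append(s, []byte(password)...))
	for i := 0; i < 10000; i++ {
		sum = sha256.Sum256(sum[:])
	}
	// Constant-time comparison avoids timing side channels.
	return subtle.ConstantTimeCompare([]byte(hex.EncodeToString(sum[:])), []byte(digestHex)) == 1
}

func main() {
	salt, digest, _ := hashPassword("s3cret")
	fmt.Println(verifyPassword("s3cret", salt, digest)) // true
	fmt.Println(verifyPassword("wrong", salt, digest))  // false
}
```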
Validate the authentication data only on completion of all data input, especially for sequential authentication implementations
Authentication failure responses should not indicate which part of the authentication data was incorrect. For example, instead of "Invalid username" or "Invalid password", just use "Invalid username and/or password" for both. Error responses must be truly identical in both display and source code
Utilize authentication for connections to external systems that involve sensitive information or functions
Authentication credentials for accessing services external to the application should be encrypted and stored in a protected location on a trusted system (e.g., The server). The source code is NOT a secure location
Use only HTTP POST requests to transmit authentication credentials
Only send non-temporary passwords over an encrypted connection or as encrypted data, such as in an encrypted email. Temporary passwords associated with email resets may be an exception
Enforce password complexity requirements established by policy or regulation. Authentication credentials should be sufficient to withstand attacks typical of the deployed environment's threats. (e.g., requiring the use of alphabetic as well as numeric and/or special characters)
Enforce password length requirements established by policy or regulation. Eight characters are commonly used, but 16 is better or consider the use of multi-word pass phrases
Password entry should be obscured on the user's screen. (e.g., on web forms use the input type "password")
Enforce account disabling after an established number of invalid login attempts (e.g., five attempts is common). The account must be disabled for a period of time sufficient to discourage brute force guessing of credentials, but not so long as to allow for a denial-of-service attack to be performed
Password reset and changing operations require the same level of control as account creation and authentication.
Password reset questions should support sufficiently random answers. (e.g., "favorite book" is a bad question because “The Bible” is a very common answer)
If using email-based resets, only send email to a pre-registered address with a temporary link/password
Temporary passwords and links should have a short expiration time
Enforce the changing of temporary passwords on the next use
Notify users when a password reset occurs
Prevent password re-use
Passwords should be at least one day old before they can be changed, to prevent attacks on password re-use
Enforce password changes based on requirements established in policy or regulation. Critical systems may require more frequent changes. The time between resets must be administratively controlled
Disable the "remember me" functionality for password fields
The last use (successful or unsuccessful) of a user account should be reported to the user at their next successful login
Implement monitoring to identify attacks against multiple user accounts, utilizing the same password.
This attack pattern is used to bypass standard lockouts when user IDs can be harvested or guessed
Change all vendor-supplied default passwords and user IDs or disable the associated accounts
Re-authenticate users prior to performing critical operations
Use Multi-Factor Authentication for highly sensitive or high-value transactional accounts
If using third-party code for authentication, inspect the code carefully to ensure it is not affected by any malicious code
Use only trusted system objects, e.g. server-side session objects, for making access authorization decisions
Use a single site-wide component to check access authorization. This includes libraries that call external authorization services
Access controls should fail securely
Deny all access if the application cannot access its security configuration information
Enforce authorization controls on every request, including those made by server-side scripts, "includes" and requests from rich client-side technologies like AJAX and Flash
Segregate privileged logic from other application code
Restrict access to files or other resources, including those outside the application's direct control, to only authorized users
Restrict access to protected URLs to only authorized users
Restrict access to protected functions to only authorized users
Restrict direct object references to only authorized users
Restrict access to services to only authorized users
Restrict access to application data to only authorized users
Restrict access to user and data attributes and policy information used by access controls
Restrict access to security-relevant configuration information to only authorized users
Server-side implementation and presentation layer representations of access control rules must match
If state data must be stored on the client, use encryption and integrity checking on the server-side to catch state tampering.
Enforce application logic flows to comply with business rules
Limit the number of transactions a single user or device can perform in a given period of time. The transactions/time should be above the actual business requirement, but low enough to deter automated attacks
Use the "referer" header as a supplemental check only, it should never be the sole authorization check, as it is can be spoofed
If long authenticated sessions are allowed, periodically re-validate a user’s authorization to ensure that their privileges have not changed and if they have, log the user out and force them to re-authenticate
Implement account auditing and enforce the disabling of unused accounts (e.g., After no more than 30 days from the expiration of an account’s password.)
The application must support disabling of accounts and terminating sessions when authorization ceases (e.g., Changes to the role, employment status, business process, etc.) Service accounts or accounts supporting connections to or from external systems should have the least privilege possible
Create an Access Control Policy to document an application's business rules, data types and access authorization criteria and/or processes so that access can be properly provisioned and controlled. This includes identifying access requirements for both the data and system resources
All cryptographic functions used to protect secrets from the application user must be implemented on a trusted system (e.g., The server)
Protect master secrets from unauthorized access
Cryptographic modules should fail securely
All random numbers, random file names, random GUIDs, and random strings should be generated using the cryptographic module’s approved random number generator when these random values are intended to be un-guessable
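In Go, the cryptographic module's approved generator is `crypto/rand`; `math/rand` is deterministic and unsuitable for session tokens, reset links, or random file names. A small sketch (the token size and function name are illustrative):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"math/big"
)

// newToken returns an un-guessable URL-safe token. crypto/rand draws
// from the operating system's CSPRNG.
func newToken(nBytes int) (string, error) {
	b := make([]byte, nBytes)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

func main() {
	t, err := newToken(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(t)

	// crypto/rand can also produce uniform random integers,
	// e.g. for a six-digit one-time code:
	n, _ := rand.Int(rand.Reader, big.NewInt(1000000))
	fmt.Printf("%06d\n", n)
}
```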
Cryptographic modules used by the application should be compliant to FIPS 140-2 or an equivalent standard. (See http://csrc.nist.gov/groups/STM/cmvp/validation.html)
Establish and utilize a policy and process for how cryptographic keys will be managed
Do not disclose sensitive information in error responses, including system details, session identifiers, or account information
Use error handlers that do not display debugging or stack trace information
Implement generic error messages and use custom error pages
The application should handle application errors and not rely on the server configuration
Properly free allocated memory when error conditions occur
Error handling logic associated with security controls should deny access by default
All logging controls should be implemented on a trusted system (e.g., The server)
Logging controls should support both the success and failure of specified security events
Ensure logs contain important log event data
Ensure log entries that include un-trusted data will not execute as code in the intended log viewing interface or software
Restrict access to logs to only authorized individuals
Utilize a master routine for all logging operations
Do not store sensitive information in logs, including unnecessary system details, session identifiers or passwords
Ensure that a mechanism exists to conduct log analysis
Log all input validation failures
Log all authentication attempts, especially failures
Log all access control failures
Log all apparent tampering events, including unexpected changes to state data
Log attempts to connect with invalid or expired session tokens
Log all system exceptions
Log all administrative functions, including changes to the security configuration settings
Log all backend TLS connection failures
Log cryptographic module failures
Use a cryptographic hash function to validate log entry integrity
Implement least privilege, restrict users to only the functionality, data, and system information that is required to perform their tasks
Protect all cached or temporary copies of sensitive data stored on the server from unauthorized access and purge those temporary working files as soon as they are no longer required.
Encrypt highly sensitive stored information, like authentication verification data, even on the server-side. Always use well-vetted algorithms, see "Cryptographic Practices" for additional guidance
Protect server-side source code from being downloaded by a user
Do not store passwords, connection strings or other sensitive information in clear text or in any non-cryptographically secure manner on the client-side. This includes embedding in insecure formats like MS ViewState, Adobe flash, or compiled code
Remove comments in user-accessible production code that may reveal backend system or other sensitive information
Remove unnecessary application and system documentation as this can reveal useful information to attackers
Do not include sensitive information in HTTP GET request parameters
Disable auto-complete features on forms expected to contain sensitive information, including authentication
Disable client-side caching on pages containing sensitive information. "Cache-Control: no-store" may be used in conjunction with the HTTP header "Pragma: no-cache", which is less effective but HTTP/1.0 backward compatible
The application should support the removal of sensitive data when that data is no longer required (e.g., personal information or certain financial data)
Implement appropriate access controls for sensitive data stored on the server. This includes cached data, temporary files, and data that should be accessible only by the specific system user
Implement encryption for the transmission of all sensitive information. This should include TLS for protecting the connection and may be supplemented by discrete encryption of sensitive files or non HTTP based connections
TLS certificates should be valid and have the correct domain name, not be expired, and be installed with intermediate certificates when required
Failed TLS connections should not fall back to an insecure connection
Utilize TLS connections for all content requiring authenticated access and for all other sensitive information
Utilize TLS for connections to external systems that involve sensitive information or functions
Utilize a single standard TLS implementation that is configured appropriately
Specify character encodings for all connections
Filter parameters containing sensitive information from the HTTP referer, when linking to external sites
Ensure servers, frameworks, and system components are running the latest approved version
Ensure servers, frameworks, and system components have all patches issued for the version in use
Turn off directory listings
Restrict the web server, process, and service accounts to the least privileges possible
When exceptions occur, fail securely
Remove all unnecessary functionality and files
Remove test code or any functionality not intended for production, prior to deployment
Prevent disclosure of your directory structure in the robots.txt file by placing directories not intended for public indexing into an isolated parent directory. Then "Disallow" that entire parent directory in the robots.txt file rather than Disallowing each individual directory
Define which HTTP methods (GET or POST) the application will support and whether they will be handled differently on different pages in the application
Disable unnecessary HTTP methods, such as WebDAV extensions. If an extended HTTP method that supports file handling is required, utilize a well-vetted authentication mechanism
If the web server handles both HTTP 1.0 and 1.1, ensure that both are configured in a similar manner, or ensure that you understand any differences that may exist (e.g., handling of extended HTTP methods)
Remove unnecessary information from HTTP response headers related to the OS, web-server version, and application frameworks
The security configuration store for the application should be able to be output in human-readable form to support auditing
Implement an asset management system and register system components and software in it
Isolate development environments from the production network and provide access only to authorized development and test groups. Development environments are often configured less securely than production environments and attackers may use this difference to discover shared weaknesses or as an avenue for exploitation
Implement a software change control system to manage and record changes to the code both in development and production
Clear your system of any unnecessary components and ensure all working software is updated with current versions and patches.
If you work in multiple environments, make sure you’re managing your development and production environments securely.
Outdated software is a major source of vulnerabilities and security breaches.
Software updates include patches that fix vulnerabilities, making regular updates one of the most vital, secure coding practices.
A patch management system may help your business to keep on top of updates.
Use strongly typed parameterized queries
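A minimal Go sketch of a parameterized query with `database/sql` follows. The table and column names are hypothetical; the key point is that untrusted input travels as a bound parameter, never as SQL text.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// The users table and columns here are hypothetical, purely to
// illustrate the placeholder style.
const findUserQuery = `SELECT id, email FROM users WHERE username = ?`

// findUserEmail passes the untrusted username as a bound parameter.
// The driver sends it as data, so input such as `' OR '1'='1` cannot
// change the statement.
func findUserEmail(ctx context.Context, db *sql.DB, username string) (string, error) {
	var id int64
	var email string
	err := db.QueryRowContext(ctx, findUserQuery, username).Scan(&id, &email)
	if err != nil {
		return "", fmt.Errorf("find user %q: %w", username, err)
	}
	return email, nil
}

func main() {
	// Never build the statement by concatenation:
	//   "SELECT ... WHERE username = '" + username + "'"   // vulnerable
	fmt.Println("parameterized query:", findUserQuery)
}
```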
Utilize input validation and output encoding and be sure to address meta characters. If these fail, do not run the database command
Ensure that variables are strongly typed
The application should use the lowest possible level of privilege when accessing the database
Use secure credentials for database access
Connection strings should not be hard coded within the application. Connection strings should be stored in a separate configuration file on a trusted system and they should be encrypted.
Use stored procedures to abstract data access and allow for the removal of permissions to the base tables in the database
Close the connection as soon as possible
Remove or change all default database administrative passwords. Utilize strong passwords/phrases or implement multi-factor authentication
Turn off all unnecessary database functionality (e.g., unnecessary stored procedures or services, utility packages, install only the minimum set of features and options required (surface area reduction))
Remove unnecessary default vendor content (e.g., sample schemas)
Disable any default accounts that are not required to support business requirements
The application should connect to the database with different credentials for every trust distinction (e.g., user, read-only user, guest, administrators)
Do not pass user-supplied data directly to any dynamic include function
Require authentication before allowing a file to be uploaded
Limit the type of files that can be uploaded to only those types that are needed for business purposes
Validate uploaded files are the expected type by checking file headers. Checking for file type by extension alone is not sufficient
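Checking file headers can be sketched in Go with `http.DetectContentType`, which sniffs the first bytes of the content (at most 512) instead of trusting the extension or the client-supplied Content-Type. The whitelist below is an illustrative assumption.

```go
package main

import (
	"fmt"
	"net/http"
)

// allowedTypes is an illustrative whitelist of MIME types a
// hypothetical upload endpoint accepts.
var allowedTypes = map[string]bool{
	"image/png":  true,
	"image/jpeg": true,
}

// sniffOK inspects the first bytes of the uploaded content rather
// than the file extension. http.DetectContentType implements the
// WHATWG MIME sniffing algorithm.
func sniffOK(head []byte) (string, bool) {
	ct := http.DetectContentType(head)
	return ct, allowedTypes[ct]
}

func main() {
	pngHeader := []byte{0x89, 'P', 'N', 'G', '\r', '\n', 0x1a, '\n'}
	ct, ok := sniffOK(pngHeader)
	fmt.Println(ct, ok) // image/png true

	// A shell script renamed to .png still fails the header check.
	ct, ok = sniffOK([]byte("#!/bin/sh\nrm -rf /"))
	fmt.Println(ct, ok)
}
```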
Do not save files in the same web context as the application. Files should either go to the content server or in the database.
Prevent or restrict the uploading of any file that may be interpreted by the web server.
Turn off execution privileges on file upload directories
Implement safe uploading in UNIX by mounting the targeted file directory as a logical drive using the associated path or the chrooted environment
When referencing existing files, use a white list of allowed file names and types. Validate the value of the parameter being passed and if it does not match one of the expected values, either reject it or use a hard-coded default file value for the content instead
Do not pass user-supplied data into a dynamic redirect. If this must be allowed, then the redirect should accept only validated, relative path URLs
Do not pass directory or file paths, use index values mapped to a pre-defined list of paths
Never send the absolute file path to the client
Ensure application files and resources are read-only
Scan user uploaded files for viruses and malware
Utilize input and output control for un-trusted data
Double-check that the buffer is as large as specified
When using functions that accept a number of bytes to copy, such as strncpy(), be aware that if the destination buffer size is equal to the source buffer size, it may not NULL-terminate the string
Check buffer boundaries if calling the function in a loop and make sure there is no danger of writing past the allocated space
Truncate all input strings to a reasonable length before passing them to the copy and concatenation functions
Specifically close resources, don’t rely on garbage collection. (e.g., connection objects, file handles, etc.)
Use non-executable stacks when available
Avoid the use of known vulnerable functions (e.g., printf, strcat, strcpy, etc.)
Properly free allocated memory upon the completion of functions and at all exit points
Use tested and approved managed code rather than creating new unmanaged code for common tasks
Utilize task-specific built-in APIs to conduct operating system tasks. Do not allow the application to issue commands directly to the Operating System, especially through the use of application-initiated command shells
Use checksums or hashes to verify the integrity of interpreted code, libraries, executables, and configuration files
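Integrity verification can be sketched with a SHA-256 digest comparison. The "known good" digest would normally be recorded out of band (e.g., at build time); the file content here is hypothetical.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// verifyChecksum compares the SHA-256 digest of data against a known
// good hex digest. A mismatch means the artifact was modified.
func verifyChecksum(data []byte, wantHex string) bool {
	sum := sha256.Sum256(data)
	got := hex.EncodeToString(sum[:])
	return subtle.ConstantTimeCompare([]byte(got), []byte(wantHex)) == 1
}

func main() {
	config := []byte("max_jobs = 4\n")
	sum := sha256.Sum256(config)
	known := hex.EncodeToString(sum[:]) // normally stored out of band

	fmt.Println(verifyChecksum(config, known))                     // true
	fmt.Println(verifyChecksum([]byte("max_jobs = 999\n"), known)) // false: tampered
}
```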
Utilize locking to prevent multiple simultaneous requests or use a synchronization mechanism to prevent race conditions
Protect shared variables and resources from inappropriate concurrent access
Explicitly initialize all your variables and other data stores, either during declaration or just before the first usage
In cases where the application must run with elevated privileges, raise privileges as late as possible, and drop them as soon as possible
Avoid calculation errors by understanding your programming language's underlying representation and how it interacts with numeric calculation. Pay close attention to byte size discrepancies, precision, signed/unsigned distinctions, truncation, conversion and casting between types, "not-a-number" calculations, and how your language handles numbers that are too large or too small for its underlying representation
Do not pass user-supplied data to any dynamic execution function
Restrict users from generating new code or altering existing code
Review all secondary applications, third-party code, and libraries to determine the business necessity and validate safe functionality, as these can introduce new vulnerabilities
Implement safe updating. If the application will utilize automatic updates, then use cryptographic signatures for your code and ensure your download clients verify those signatures. Use encrypted channels to transfer the code from the host server
Last updated: 2024-12-24 01:10:56.686355
This document is a store of best practices to be followed for the development of NuNet components. The current file focuses on Golang best practices for building the Device Management Service (DMS), the core component of the NuNet platform.
All file names should be lowercase, with an underscore separating words.
Example: capability_comparator.go
Files containing tests should have the _test.go suffix in the name.
Example: capability_comparator_test.go
The name of any error variable must be err or prefixed with err. Consistent naming for error variables makes the code more readable and easier to follow. See below for an illustrative example.
Named fields improve code readability and reduce errors caused by misordered fields. For example,
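A small sketch (the Config struct is hypothetical):

```go
package main

import "fmt"

// Config is a hypothetical struct used only to illustrate field naming.
type Config struct {
	Host    string
	Port    int
	Timeout int // seconds
}

func main() {
	// Named fields: order-independent and self-documenting.
	cfg := Config{
		Host:    "localhost",
		Port:    8080,
		Timeout: 30,
	}

	// The positional form compiles too, but silently breaks if the
	// field order ever changes:
	//   cfg := Config{"localhost", 8080, 30}

	fmt.Printf("%+v\n", cfg)
}
```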
Include the unit (e.g., time) in the name: append the interval type to the constant, for example dialupTimeoutSecond or dialupTimeoutMillisecond. Including the unit in the name clarifies its meaning and prevents unit-related bugs.
It is recommended to limit lines to 100 characters. Keeping line lengths manageable (e.g., under 80-100 characters) helps readability, especially on smaller screens.
Aim for functions that fit within a single screen of code without scrolling. This usually means keeping functions under 20-30 lines, making them easier to read and understand.
As a general principle, avoid global variables as much as possible: they lead to tight coupling between components of the system, and avoiding them simplifies testing. Use dependency injection instead.
It is recommended to use objects (structs) to group related handlers. See below for an illustrative example.
Type definitions should be placed at the top of the file. This provides a clear overview of the structures and types used in the file.
The recommended order of type definitions is
consts
interfaces
structs
A logical order (e.g., constants, interfaces, structs, functions) helps readers understand the code structure quickly. It is recommended that the constructor be defined immediately after its struct.
Accept interfaces and return structs
This follows Go’s idiomatic way to make code more flexible and easier to test. Special case: when an interface must be returned, provide a separate constructor that returns the interface.
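A minimal sketch of "accept interfaces, return structs" (the Store/Service names are hypothetical):

```go
package main

import "fmt"

// Store is the small interface the service accepts; callers can pass
// any implementation (real DB, in-memory map, test fake).
type Store interface {
	Get(key string) (string, bool)
}

// Service is the concrete struct the constructor returns.
type Service struct {
	store Store
}

// NewService accepts an interface and returns a pointer to a struct.
func NewService(s Store) *Service {
	return &Service{store: s}
}

func (s *Service) Describe(key string) string {
	v, ok := s.store.Get(key)
	if !ok {
		return key + ": <missing>"
	}
	return key + ": " + v
}

// mapStore is a trivial in-memory implementation, handy in tests.
type mapStore map[string]string

func (m mapStore) Get(key string) (string, bool) {
	v, ok := m[key]
	return v, ok
}

func main() {
	svc := NewService(mapStore{"region": "eu-west"})
	fmt.Println(svc.Describe("region"))
}
```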
Return pointers to structs in constructors
This has several benefits:
Efficiency: Reduced Copying: When you return a pointer, you avoid copying the entire struct. This is especially important for large structs, as copying can be expensive in terms of performance. Memory Allocation: Using pointers ensures that only one instance of the struct is created and manipulated, reducing memory overhead.
Mutability: Modification: With a pointer, the receiver can modify the original struct's data. This is useful when you want to update fields of the struct after it's been constructed. Shared State: If multiple parts of the program need to share and modify the same instance, pointers facilitate this by allowing all references to point to the same underlying data.
Consistency: Interface Implementation: Methods that implement interfaces often require pointer receivers to modify the struct. Returning pointers ensures that these methods can be used as intended. Idiomatic Go: Returning pointers is a common and idiomatic practice in Go, aligning with community conventions and expectations.
Avoid very similar methods doing almost the same thing without adding any business logic
For example, in the interface below, the second and third methods simply return a filtered value of GetGPUs. This kind of practice weakens the abstraction of the interface.
Below is another example where the second method probably does not have any additional business logic but is added to execute a for loop. It is best to avoid such definitions, and keep only the first method.
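The original illustration is not present in this export; the following is a hedged reconstruction of the anti-pattern described above (the GPU type and method names are hypothetical):

```go
package main

import "fmt"

type GPU struct {
	Model string
	Free  bool
}

// Avoid: the extra methods only re-filter GetGPUs and add no business
// logic, bloating the interface every implementation must satisfy.
//
//	type GPUProvider interface {
//		GetGPUs() []GPU
//		GetFreeGPUs() []GPU // just GetGPUs filtered by Free
//		GetBusyGPUs() []GPU // just GetGPUs filtered by !Free
//	}

// Prefer: one method; callers filter as needed.
type GPUProvider interface {
	GetGPUs() []GPU
}

func freeGPUs(p GPUProvider) []GPU {
	var out []GPU
	for _, g := range p.GetGPUs() {
		if g.Free {
			out = append(out, g)
		}
	}
	return out
}

type staticProvider []GPU

func (s staticProvider) GetGPUs() []GPU { return s }

func main() {
	p := staticProvider{{Model: "A100", Free: true}, {Model: "H100", Free: false}}
	fmt.Println(len(freeGPUs(p))) // 1
}
```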
Always wrap errors and return them
Wrapping errors with context provides more informative error messages.
Don’t log errors in functions that return errors
Logging should be done at the top level where errors are handled, not within lower-level functions.
Non-error log levels can be used as needed.
Consistent Error Handling
Handle errors consistently. Avoid ignoring errors, and handle them as soon as possible.
Avoid Panics
Use panics only for unrecoverable errors. For recoverable errors, use error returns.
Use table-driven tests
Table-driven tests are a common Go pattern that makes it easy to add new test cases and improves test readability.
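A minimal sketch of the pattern (the add function is a placeholder; in a real project the test lives in a _test.go file and the main stub below is unnecessary):

```go
package main

import "testing"

func add(a, b int) int { return a + b }

// TestAdd is table-driven: new cases are one line each, and failures
// report which named case broke.
func TestAdd(t *testing.T) {
	tests := []struct {
		name string
		a, b int
		want int
	}{
		{name: "zero", a: 0, b: 0, want: 0},
		{name: "positive", a: 2, b: 3, want: 5},
		{name: "negative", a: -2, b: 1, want: -1},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			if got := add(tc.a, tc.b); got != tc.want {
				t.Errorf("add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
			// With an assert library (e.g. testify) the check would read:
			//   assert.Equal(t, tc.want, got)
		})
	}
}

// main exists only so this snippet compiles as a standalone file.
func main() {}
```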
Use assert library for testing
Using an assert library makes tests cleaner and provides better error messages when tests fail.
Only use mocks if not otherwise possible
Avoiding excessive mocking leads to more reliable tests that don't break with internal changes.
Map elements are not ordered and are retrieved in random order
Never rely on ordering in a map. Using maps in tests can help ensure your code doesn’t rely on specific orderings, improving robustness.
Do not ignore "contexts"
Contexts are critical for handling deadlines, cancellation signals, and request-scoped values across API boundaries and goroutines.
Note: Telemetry uses contexts heavily, see how this is affected
Avoid Unnecessary Slices Allocations
Preallocate slices when the size is known to avoid unnecessary allocations.
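A small sketch of preallocation (the transformation is a placeholder):

```go
package main

import (
	"fmt"
	"strings"
)

// upperAll preallocates the destination with make(len 0, cap n), so
// append never has to grow and re-copy the backing array.
func upperAll(src []string) []string {
	out := make([]string, 0, len(src))
	for _, s := range src {
		out = append(out, strings.ToUpper(s))
	}
	return out
}

func main() {
	got := upperAll([]string{"cpu", "gpu", "ram"})
	fmt.Println(got, cap(got)) // [CPU GPU RAM] 3
}
```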
Avoid Exporting Unnecessary Types and Functions
Keep the API surface small by only exporting types and functions that are intended for public use.
Check for nil Before Dereferencing Pointers
Always check for nil pointers before dereferencing to avoid panics.
Use filepath Package for File Path Manipulation
Use path/filepath for manipulating file paths to ensure cross-platform compatibility. Don’t build paths by concatenating "/" or "\".
It is recommended to use the golangci-lint linter.
Linters enforce coding standards and catch common mistakes early, improving code quality.
Avoid magic numbers and use consts
Constants give meaningful names to otherwise obscure numbers, improving code readability.
Avoid new keyword if possible
Using composite literals (e.g., &Person{}) is more idiomatic and clear in Go.
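A small sketch contrasting the two forms (the Person struct is hypothetical):

```go
package main

import "fmt"

type Person struct {
	Name string
	Age  int
}

func main() {
	// Idiomatic: a composite literal sets the fields in one step.
	p := &Person{Name: "Ada", Age: 36}

	// Works, but less clear: new() returns a zeroed *Person and the
	// fields must be assigned afterwards.
	q := new(Person)
	q.Name = "Ada"
	q.Age = 36

	fmt.Println(*p == *q) // true: both forms build the same value
}
```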
Last updated: 2024-12-24 01:11:00.432379
Drum: The drum is the constraint, which sets the pace for the entire system, much like a drumbeat. In a software development setting, this might be a particular stage in the workflow that is the slowest or has the least capacity.
Buffer: This is a time buffer placed before the constraint to ensure it is always fed with work and never starved. This helps to accommodate variability and keep the constraint working continuously. In software terms, it could mean prioritizing tasks so that the development team (the constraint) always has a backlog of ready work.
Rope: This is the mechanism that controls the release of new work into the system. The rope ensures that work is introduced at a rate that the constraint can handle, preventing excessive work-in-progress (WIP) that can lead to bottlenecks and delays.
sDBR is just normal DBR without a constraint buffer; it has only a project buffer, as in our case. The market / release plans are considered the constraint, and the drum is set to meet all the due dates. When you have a market constraint, you need to make that constraint explicit by making sure you are on time for all customers.
In sDBR, the drum is the due date. Therefore there is no need to sequence jobs at the constraint or the ‘would be’ capacity constrained resource (CCR) since the constraint is the market. Raw materials (tasks / work to be done) are released to the team on the task due date (scheduled in the gantt chart) minus the project buffer. Releasing the right jobs in the right order is critical. Therefore the most important thing is to set priorities for jobs / tasks.
DBR is a ‘pull’ system: when the constraint finishes a task, another task is released into the system (from the backlog). Kanban is a ‘don’t push’ system: if the Kanban board is full, we do not push more into WIP / doing. In DBR, the buffer is time; in Kanban, the buffer is space.
Kanban's ‘don’t push’ system aligns well with the rope aspect of DBR, controlling WIP by only pulling new work when the current work is completed. This reduces the risk of overburdening the system and helps maintain a smooth workflow. DBR adds an extra layer by focusing on the constraint and ensuring it's always productive.
Project management page for the milestone: https://docs.nunet.io/project-management-portal/device-management-service-version-0-5-x
All About Lean provides an in-depth look at the DBR method and compares it with other methodologies like Kanban, discussing its advantages and potential drawbacks: All About Lean - A Critical Look at Goldratt's Drum-Buffer-Rope Method (AllAboutLean.com).
Velocity Scheduling System offers a comprehensive summary of DBR, including simplified versions and practical applications in scheduling: Velocity Scheduling System - Drum Buffer Rope Summary (Velocity Scheduling System).
Smartsheet discusses how DBR can be applied within Agile and Kanban frameworks, providing insights into its implementation in software development: Smartsheet - All About Kanban Software Development (Smartsheet).
Last updated: 2024-12-24 01:10:59.550170
This document outlines the project management and technical development processes at NuNet, engineering meetings and ceremonies, and the rules of engagement for the Engineering Team Members. Contributors have comment access, and team members are invited to submit suggestions via merge requests to this file, with @janainasenna indicated as the reviewer.
Last updated: 2024-12-24 01:10:59.869121
At NuNet, we’ve adopted Critical Chain Project Management (CCPM), a project management methodology that emphasizes resource availability to ensure timely and efficient project completion. The next sections give an overview of the key concepts of CCPM applied in NuNet.
The critical chain is the longest sequence of dependent tasks in a project, considering both task dependencies and resource constraints. Unlike the critical path, which only considers task dependencies, the critical chain also accounts for the availability of resources required to perform tasks, often making it longer than the critical path. In NuNet, a Gantt chart is used to monitor each project. Tasks highlighted in red represent the critical chain and determine the overall project duration. Non-critical tasks are represented in blue.
In traditional methods, task durations are usually estimated conservatively to include safety margins. Critical Chain Project Management (CCPM), however, uses optimistic estimates and places safety margins into buffers instead. The project buffer is a time buffer placed at the end of the critical chain to protect the project completion date from delays. A feeding buffer is placed where non-critical tasks feed into the critical chain, protecting the critical chain from delays in these feeding paths. The health of the project is monitored by observing buffer consumption. If buffers are being consumed faster than planned, it signals potential delays, allowing project managers to take corrective actions before the project is endangered.
CCPM focuses on ensuring that resources are not over-committed and are available when required for critical tasks. The approach encourages reducing multitasking, ensuring that resources can concentrate on one task at a time for better efficiency.
In CCPM, a portfolio fever chart is a visual tool used to monitor the overall health of multiple projects within a portfolio. It tracks the progress of each project by comparing buffer consumption (the time or resources used up compared to what was allocated) against project completion percentage. The chart typically uses a color-coded system of green, yellow, and red zones to indicate whether a project is on track, at risk, or in critical condition respectively. By visualizing the status of all projects in a single chart, the portfolio fever chart helps managers quickly identify which projects need attention and prioritize resources accordingly, ensuring that the entire portfolio remains on course for successful completion.
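The zone classification can be sketched as a simple comparison of buffer consumption against critical chain completion. The thresholds below are illustrative assumptions, not NuNet's actual tooling:

```go
package main

import "fmt"

// zone classifies a project's fever-chart status from the fraction of the
// project buffer consumed and the fraction of the critical chain completed.
// The thresholds are illustrative; real CCPM tools let managers tune them.
func zone(bufferConsumed, chainComplete float64) string {
	if chainComplete <= 0 {
		chainComplete = 0.01 // avoid division by zero at project start
	}
	ratio := bufferConsumed / chainComplete
	switch {
	case ratio < 1.0:
		return "green" // buffer burning slower than progress: on track
	case ratio < 1.5:
		return "yellow" // at risk: watch closely, plan recovery
	default:
		return "red" // critical: corrective action needed now
	}
}

func main() {
	fmt.Println(zone(0.30, 0.50)) // consuming buffer slower than progress
	fmt.Println(zone(0.60, 0.50)) // consuming buffer somewhat faster
	fmt.Println(zone(0.90, 0.50)) // consuming buffer much faster
}
```

A project halfway along its critical chain that has consumed only a third of its buffer would sit in the green zone, while the same project with most of its buffer gone would show red and trigger corrective action.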
Last updated: 2024-12-24 01:10:59.285188 File source:
This document explains the current framework or structure for specification of platform components. It also outlines the process through which updates to platform specification and corresponding documentation are managed.
The specification for each component / package / sub-package is described in the README file situated in the same folder.
The README file consists of the following elements:
Static section: This section contains the package name and some links to information about the project. Note that each README file has the same set of links in this section. The links are followed by a Table of Contents, which again is common across all READMEs.
Description: This section should have a brief description of what the package is about and its core functionality.
Structure and organisation: Here we give a high-level overview of the contents of the package. This includes any file or folder created within the directory.
Class Diagram: Each package/sub-package is represented by a class diagram created in PlantUML. This file, named class_diagram.puml, is present in the specs folder of each directory.
Note that the class diagram needs to be detailed at the sub-package level. The package-level diagram automatically gets created by integrating the diagrams of each sub-package into a single file. Similarly, the global class diagram at the component level gets created by integrating the diagrams of all packages into a single file.
Testing: This section is to be used to explain to the reader how to test the functionality. This may cover unit tests, functional tests, or anything else as required.
Proposed Functionality / Requirements: This section allows developers to capture functionality that is not yet built but is in the pipeline. This requirement could come from other packages needing a functionality, or it could be features from the roadmap of the said package. This section essentially serves two roles:
a. Give the reader an idea of what modifications are expected to the current package. Refer to the list of issues referenced in this section to access work being done/planned for the package.
Note: All future functionality should have a proposed tag in the heading to make this clear to the user. See below for an illustration.
Interfaces can be written directly in the README file using a code block, which mostly applies to proposed interfaces. It is best to explain the purpose of the interface and its methods in plain English.
Template / Structure for method description
A recommended (but not mandatory) structure for describing methods is as follows:
signature: <function_signature>
input #1: <explanation of first input parameter>
input #2: <explanation of second input parameter>
output (success): <Expected output data type>
output (error): <Output in case of any error>
<Function_name> function: <function_description>
See below for an illustration
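For illustration, a hypothetical sendMessage function (its signature and parameters are invented for this example, not taken from the actual codebase) could be described using the template above as:

```
signature: sendMessage(topic string, payload []byte) error
input #1: topic - the messaging topic to publish to
input #2: payload - the message body as raw bytes
output (success): nil
output (error): error describing why the message could not be sent

sendMessage function publishes a message to the given topic on the network.
```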
Note: It is recommended to specify only the main methods that describe the core functionality. Helper functions need not be described in the README file.
Naming Convention
It is recommended to use camelCase for function names, with the first word in lower case and the following words having the first letter capitalized. For example - sendMessage or publishBidRequest.
All data structures required or used in the functionality should be specified. See below for an example data model.
Naming Conventions
The data models are specified using a standard code block. The following naming convention should be used: <package>.<sub-package>.<DataTypeName>.
For example, dms.orchestrator.BidRequest implies that a data type by the name BidRequest is defined in the orchestrator sub-package of the dms package. It is important to adhere to this convention for discoverability and readability of the specifications.
Another applicable convention is to capitalize the first letter of each word in the name of data models. For example - BidRequest or PriceBid.
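Putting these conventions together, a data model can be specified as a Go struct. The fields below are hypothetical, invented for illustration, and are not taken from the actual NuNet codebase:

```go
package main

import "fmt"

// BidRequest is an illustrative sketch only. Following the conventions above,
// this type would be defined in the orchestrator sub-package of the dms
// package and referred to in specifications as dms.orchestrator.BidRequest;
// its name capitalizes the first letter of each word.
type BidRequest struct {
	JobID    string  // identifier of the job the bid relates to (hypothetical)
	CPUCores int     // compute capacity requested (hypothetical)
	MemoryGB float64 // memory requested (hypothetical)
	MaxPrice float64 // maximum acceptable price, in NTX (hypothetical)
}

func main() {
	req := BidRequest{JobID: "job-42", CPUCores: 4, MemoryGB: 8, MaxPrice: 10}
	fmt.Printf("%+v\n", req)
}
```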
Note: Only the data models defined in the current package need to be specified using the code block. For data models from other packages, only the name should be mentioned along with a brief explanation of what role it is playing in the package functionality.
For example, see below an illustrative screenshot from the executor package.
Note that the data types defined in the types package are only explained here in the context of the package functionality. Their parameters are not specified, as they have already been defined in the README of the types package. However, the LogStreamRequest data type is fully described, as it is defined in the executor package itself.
Below is the recommended structure for describing API endpoints. However, developers can modify this or use alternate tools like Swagger if that is more beneficial.
<Explanation of the endpoint functionality>
See below for an illustrative example of the onboard endpoint.
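Using the template above, the onboard endpoint might be described as follows. The URL and data types shown are illustrative assumptions, not the authoritative API definition:

```
Onboards a device onto the NuNet network, making its resources available.

endpoint: /api/v1/onboarding/onboard
method: POST
input: onboarding configuration (e.g. capacity to share, payment address)
output: onboarding confirmation with the device's network identity
```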
Sequence Diagrams can be a useful tool to describe a functionality in the initial or design phase. Developers are recommended to make use of this wherever they see fit.
It is very easy to insert a .puml file in the README. In fact, the class diagrams of all the packages in DMS (Device Management Service) are made using PlantUML.
PlantUML allows for constructing both the whole diagram and parts of it. This allows us to divide the specs across packages and store them close to the code.
See below for an illustrative example of the search and match operation, where DMS tries to find eligible compute providers on the network.
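A minimal PlantUML sketch of such a sequence diagram (the entities and messages are simplified for illustration and do not reproduce the actual diagram) could look like:

```plantuml
@startuml
actor User
participant "DMS (Service Provider)" as SP
participant "DMS (Compute Provider)" as CP

User -> SP : submit job requirements
SP -> CP : broadcast bid request
loop Compute provider decision to accept/reject
    alt resources available and job acceptable
        CP -> SP : send bid
    else job rejected or no capacity
        CP -> SP : no bid
    end
end
SP -> User : list of eligible providers
@enduml
```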
The important aspects that typically should be covered in the sequence diagram are:
It should show the entities involved in the functionality, e.g. User, Compute Provider DMS, Elasticsearch, etc.
Define routines and subroutines. In the above example, the loop Compute provider decision to accept/reject covers the functionality where the Compute Provider is assessing the job opportunity. Within this loop we have two subroutines covering the two possible scenarios - the decision to bid or not to bid.
Add descriptions, comments, etc. These are particularly useful for making the sequences clearer to the reader.
This means we develop a granular list of steps that should be executed for each functionality that is being offered by the component or package. This also covers different scenarios of interaction which can lead to different outcomes within the same functionality.
The important aspects that should be covered in the feature file are:
The applicable function and data models should be referenced.
Define endpoints if applicable.
Explain the preconditions that should exist using the Given keyword.
Define the different scenarios that can occur within this functionality.
Note the naming convention, which specifies the package in which the said function/data model will be located - the createBid function in the orchestrator package of the DMS component.
All documentation at NuNet should be considered Living Documentation, which essentially means that it gets updated along with the code and the evolution of the project.
Open the list of issues linked in the Proposed Functionality / Requirements section of the README file.
Check if a similar issue has already been created. If the answer is Yes, add your comments on the issue to facilitate discussion, using sequence diagrams and Gherkin files as required, and skip step 3. If the answer is No, continue to step 3.
Create a new issue explaining the proposed functionality. Use sequence diagrams, Gherkin files as required.
Tag developers who are maintaining the code/package on the issue / comment.
Create a new feature branch from the main branch. If a feature branch already exists with related updates, it may be useful to create your branch from there.
Update the Proposed Functionality / Requirements section in the README with the proposed interfaces, methods, data types, etc.
Create a Merge Request (MR) for review. Assign maintainers of the code as reviewers.
Link the MR created in the previous step on the issue created earlier.
The assignees will either review the merge request themselves or assign a responsible core team member (who is knowledgeable about the functionality).
The issue and linked MR will be reviewed by the assigned person. If it is found acceptable, we move to the discussion stage. If the proposal is simple, the reviewers can accept or reject it immediately.
The issue will be discussed further as needed with core team members. The discussion has to be coordinated by the assigned reviewer.
The free discussion can happen on the issue as comments broken into topics if needed.
The discussion is closed when the reviewer accepts the proposed changes to the specifications and the README file on the feature branch adequately describes the proposed functionality in the relevant section.
Upon acceptance, the feature branch will be merged into the main branch. Note that at this point only the README file has been updated.
A new issue will be created for the development of the proposed changes, placed into Open, and described as needed in order to be placed into kb::backlog, considering the flow of team work at the moment.
The specification of the feature will be placed in the description of the issue so that developers and contributors can easily pick it up and implement.
The specifications will serve as acceptance criteria for the issue to be accepted for merge into the main branch (after implementation is done).
Create a feature branch from the main branch.
Implement the changes on the feature branch to incorporate the proposed functionality.
Update the README file so that the specification matches the state of the implementation.
Create a merge request to the main branch.
Link the MR on the relevant issue.
The linked MR will be reviewed by the core team.
The reviewer will check whether the implemented functionality meets the specification and whether all specified tests pass, as defined in the Test Management process, and will make the decision to merge / close it.
Upon acceptance, the feature branch will be merged into the main branch.
The issue will be marked as kb::done.
Last updated: 2024-12-24 01:11:00.132347 File source:
At NuNet, we’ve adopted the following ceremonies and developed automations to help us monitor essential resources and prioritize dependent tasks, so we can complete milestones as efficiently as possible:
Critical Chain Daily Meetings: These meetings are held daily to monitor critical chain progress, ensuring projects/milestones stay on track. Mandatory attendance for the milestone technical owner and individuals directly or indirectly involved in critical chain issues. Optional for other developers unless their attendance is directly requested by a team member on a critical chain AND/OR in case of not providing an async update in a WIP GitLab issue.
Weekly Tech All-Hands: A weekly obligatory meeting for all engineering team members aimed at providing clarity on broader context and tech-related updates.
Ad-hoc Sync Meetings: Scheduled sessions to discuss issues in detail, identify blockers, and set the course for resolution.
Automatic Processes: Implementing automation to update the Kanban board and post messages to Slack channels regarding updates related to Critical Chain Project Management (CCPM) and Kanban.
Definition: Daily meeting to discuss the critical chain of a milestone with a locked project buffer (=defined scope of the milestone) as per CCPM methodology.
Purpose: Monitor progress along the critical chain to maintain project/milestone alignment and enable developers to streamline technical collaboration and move fast with the issues on the critical chain (see the section at the end of this document about how the Drum Buffer Rope (DBR) methodology works).
Duration: 15 to 30 minutes
Attendance: Mandatory attendance for the milestone technical owner and individuals directly or indirectly involved in critical chain issues (see below). Optional for other developers unless their attendance is directly requested by a team member on a critical chain AND/OR in case of not providing an async update in a WIP GitLab issue.
An automatic message is posted in the #status-update-<milestone> Slack channel each day, listing the critical chain work packages and the developers associated with them.
Developers with direct involvement in the critical chain can request attendance from others indirectly involved by sending a message in the #status-update-<milestone> Slack channel.
If multiple milestones have locked project buffers, separate meetings will be held for each milestone. In other words, if a developer is on a critical chain in both milestones with locked project buffers, they will be expected to attend two daily meetings.
Definition: One meeting per week with the whole team to update about all milestones.
Purpose: Provide comprehensive clarity on the broader context, ensuring that the development team understands the overarching scope of NuNet projects.
Duration: 30 minutes
Attendance: Obligatory attendance for all tech team members.
Responsibility: The milestone owner is responsible for updates. The owner can choose to hand over to others to give a better view of the update if needed.
A written summary record of those meetings is created and posted in the #status-update Slack channel, which is used for general updates about the milestones.
The updated portfolio fever chart is sent every week to the #status-update Slack channel.
Definition: Periodic synchronization meetings, held according to need.
Purpose: Provide an opportunity for everyone to update and discuss their issues. Also important to highlight blockers and set the course.
Suggested duration: 30 minutes to one hour
Responsibility: Responsibility for creating and managing these meetings lies with the team leads. Each team lead can schedule them anywhere from daily to once a week, depending on the project and the team’s needs.
The team lead can also schedule more than one meeting, splitting the team, according to the topics being discussed.
The team leads (Kabir, Dagim, Janaina) can also schedule meetings with people from different teams.
In summary, the idea is to have ad hoc meetings according to the needs at the moment.
The team leads need to prepare a clear and shared agenda for these sync meetings. If there is nothing on the agenda one day prior to the scheduled meeting, just skip it.
In addition to these synchronization meetings, asynchronous collaboration via GitLab issues, research blogs, and documentation is essential due to our distributed context, open source projects, community participation, and multiple time zones.
Weekly company-wide All-hands: A 30-minute meeting once a week to share general updates with the entire NuNet team.
Technical discussions: A one-hour meeting dedicated to discussing specific topics that are important to the development team. Once a topic is defined, team participation will be suggested so individuals can determine whether their attendance is necessary.
Retrospective: These sessions will be scheduled on demand to discuss the development process with all the development team.
Kick-off meetings: Held to start a new project following the CCPM methodology.
Mission control: Conducted to address critical issues urgently.
Code review sessions: These sessions will be scheduled on demand to ensure code quality and consistency while promoting team collaboration and knowledge sharing.
The technical management board consists of Kabir, Dagim, Janaina, and Vyzo. The goal of this board is to agree on high-level architectural concepts and align communication with the team. Concrete concerns/aspects for this board to take into account and resolve:
Kabir is responsible for the high-level conceptual architecture of the platform and the process of translating the concepts (actor model, graph traversals, etc.) into specifications that can be implemented by the team.
Vyzo is responsible for translating high-level requirements and architectural principles into implementable technical designs and prototypes to validate architectural decisions and work with Dagim and the development team to ensure prototypes are scalable and can be transitioned into full production systems.
Dagim is responsible for the coordination of low-level development and code refactoring work, communicating with developers, etc. For that, high-level conceptual architecture has to be aligned with the low-level tech possibilities and constraints.
Janaina is responsible for the project management process, task and issue alignments, milestone designs, etc., and thus often needs to relate high-level architectural concepts and decisions with everyday tasks and issues on a lower level.
Meetings:
Tech board alignment: A weekly 30-60 minute architecture alignment call for in-depth discussion of the architectural aspects to be implemented.
Daily alignment call: 15 minutes for:
Identifying aspects that need to be aligned and communicated to the team;
Distributing technical leads to the respective priorities.
Last updated: 2024-12-24 01:10:58.765286 File source:
NuNet runs an extensive testing program and framework in order to reduce the possibility of errors on its production network, where the NTX-backed global economy of decentralized computing will operate. This page explains the NuNet test environments, the architecture of the network, and the process of constructing, maintaining, and growing the network -- integrated into the overall NuNet development process and CI/CD pipeline.
Related documentation:
Contents of this page
the feature environment runs the CI/CD pipeline on the main branch of the NuNet repositories;
the staging environment runs extensive pre-release testing on the frozen features in the release branch;
production environment runs the final releases of NuNet network, exposed to end users.
The feature environment is composed of a network of heterogeneous devices sourced from the community. Since NuNet, as a decentralized network, will not have control of the devices sourced from community, the feature environment will encompass communication channels with the community members who will participate in NuNet testers programs.
Branch: main branch
The feature environment is used to run the following CI/CD pipeline stages according to a pre-defined schedule, communicated to community testers:
static analysis
unit tests
static security tests
build
functional tests / API tests
security tests
automatic regression tests
The feature environment contains:
virtual machines and containers hosted in NuNet cloud servers;
machines owned by NuNet team members;
machines provided by community members on a constant basis via NuNet Network private testers programs.
The CI/CD pipeline in feature environment is triggered in two cases:
according to a pre-determined schedule for running stages that are heavier on compute requirements -- which ideally may include the more advanced stages; depending on the speed of development, NuNet may schedule weekly or nightly builds and runs of the platform with the full pipeline (possibly including the latest stages of the CI/CD pipeline normally reserved for the staging environment only). In principle, the feature environment should be able to run all automatic tests.
This network is the Testnet; it is used by developers, QA/security engineers, and community testers, and is managed by the Product Owner.
Branch: release branch, created from main by freezing features scheduled for release;
CI/CD pipeline runs the following stages automatically as well as manually where required:
static analysis
unit tests
static security tests
build
functional tests / API tests
security tests
regression tests
performance and load tests
live security tests
The staging environment contains:
virtual machines and containers hosted in NuNet cloud servers
machines owned by NuNet team members
extensive network of community testers' machines/devices provided via NuNet Network private testers, covering all possible configurations of the network and most closely resembling the actual NuNet network in production:
with different hardware devices
owned by separate individuals and entities
connected to internet by different means:
having IP addresses
behind different NAT types
having different internet speeds
having different stability of connection
etc
Testing on the staging environment is triggered manually as per the platform life-cycle and release schedule. When the staging branch is created from the main branch with the frozen features ready for release, the following actions are performed:
The staging environment / testnet is constructed by inviting community testers to join their machines in order to cover architecture described above;
All applications are deployed on the network (as needed) in preparation for automatic and manual regression testing and load testing;
Manual testing schedule is released and communicated to community testers;
CI/CD pipeline is triggered with all automatic tests and manual tests;
Bug reports are collected and resolved;
Manual tests resulting in bugs are automated and included into CI/CD pipeline;
The above cycle is repeated until no bugs are observed. When this happens, the staging branch is marked for release into production environment.
This is the live environment used by the community to onboard machines/devices or to use the computational resources available on the NuNet platform.
The Production environment contains all community machines/devices connected to production network.
When the tests in the Testnet (staging environment) are finished with success and approved by the testers, the module(s)/API(s) should be released to production. The following processes are being defined:
versioning process: versioning of modules and APIs;
compatibility/deprecation process: releasing modules/APIs that are not compatible with other modules/APIs currently running on the platform should be avoided, since NuNet is a highly decentralized network; however, old versions should be deprecated so that maintaining compatibility does not create other problems related to security, performance, code readability, etc.
communication process: how the community is notified of modules updates, bugs, security issues
updating process: how the modules/APIs are updated.
Functionality: This section typically explains the interfaces and methods that define the functionality of the package. Developers can choose to link documentation auto-generated from the code as long as it clearly explains the package functionality. Alternatively, they can also follow the structures/templates prescribed in the corresponding section of this document. It is recommended to specify any additional information (as applicable) to enhance the clarity and understanding of the reader.
Data Types: This section lists the various data models used by the package. By default, Go structs have been used to describe the data types. However, developers may choose to use an equivalent structure as per the language (e.g. Python) used in the component/package. The conventions to be followed for specifying data types are further explained in the corresponding section of this document.
b. This is the place where requests for new functionality can be specified. The process for doing so is outlined in the corresponding section.
References: Any additional links or content relevant for the reader can be mentioned here. For example, research is referenced in several places to give an idea of the background work prior to the development of the functionality, for those who are interested.
We suggest using PlantUML for creating sequence diagrams. A few reasons for this choice:
PlantUML files support the naming convention we follow.
Alternatively, other tools can also be used.
Gherkin feature files are another useful tool that can be used by developers to specify a functionality. The Gherkin syntax allows us to describe the functionality of a component using natural language. The steps written in Gherkin are saved in a file with the extension .feature. An example of such a feature file with a single scenario is shown below.
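A minimal, hypothetical example (the step wording is illustrative and not taken from an actual NuNet feature file) could look like:

```gherkin
Feature: Bid creation
  The createBid function in the orchestrator package of the DMS component
  creates a bid in response to a bid request.

  Scenario: Compute provider creates a bid for a suitable job
    Given a compute provider with sufficient free resources
    When a bid request is received that matches the provider's capabilities
    Then the createBid function creates a bid with the provider's price
    And the bid is sent back to the requesting node
```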
The steps below describe the typical process that is followed for making any change or update to the platform functionality. This means that contributions to documentation are considered contributions to the code base and should follow the NuNet contributing guidelines.
Similarly to other decentralized computing projects (such as blockchains), the network runs on hardware provisioned via independent devices. In NuNet's case, there is additional complexity due to the fact that test networks have to resemble the heterogeneity of the population of devices, operating systems, and setups. Therefore, a large portion of the tests have to run not on centralized servers (e.g., in our case, via gitlab-ci runners), but on a geographically dispersed network. In order to manage the full life-cycle of the platform, including testing of separate features and iterations of the network components, NuNet uses isolated channels categorized into three environments:
More details about the architecture supporting the current implementation of the feature environment can be found in the related documentation.
automatically when code is merged into the main branch;
No CI/CD pipeline stages run on the production environment. However, all users are provided with tools and are encouraged to report any bugs or file feature requests following the community feedback process.
endpoint: <endpoint url>
method: <method being used>
input: <expected input data type>
output: <expected output data type>
Last updated: 2024-12-24 01:11:01.225822 File source: link on GitLab
Considering our remote setup and the Open Source nature of our project, written communication is like the lifeblood or oxygen of our globally distributed team. Without it, we can’t function efficiently, let alone integrate a network of global Open Source contributors. Without strong and asynchronous communication, we will struggle with duplicate work, dependencies, conflicts, and misunderstandings.
To ensure our success, every engineering team member, whether on the critical team or not, should adhere to a set of core principles. These principles are designed to promote effective communication, streamline workflows, and maintain high-quality standards across our distributed team. Please find the aforementioned principles in the following subpages.
We expect mandatory attendance for the milestone technical owner and individuals directly or indirectly involved in critical chain issues. It is optional for other developers unless their attendance is directly requested by a team member on a critical chain AND/OR in case of not providing an async update in a WIP GitLab issue.
Why is this important? Critical chain issues are crucial as they essentially make or break the milestone. Sharing daily progress and blockers with the team and technical leadership is vital. Therefore, if you’re working on a critical chain issue directly or indirectly, you must attend a synchronous daily meeting.
All tech team members need to update their work-in-progress GitLab issues with a comment every day. If no comments are submitted, it’s assumed that no progress has been made, and your presence in the daily meeting is required, regardless of whether the issue is on the critical chain or not.
Why is this important? Imagine NuNet with 100 software engineers and twice as many OS community developers. You find an interesting issue that someone else has been working on, but there’s no information on what’s been done or what needs to be completed. You end up reverse engineering their code. It would be much more efficient if they had left an update with a to-do list in the comments. Remember, communication fuels us forward more efficiently.
Considering the Open Source nature of our project, NuNet uses GitLab as the main communication platform for all tech team members, both internal and external (OS developers).
Why is it important? Otherwise, communication, the oxygen fueling NuNet Platform success, gets trapped and siloed. Avoid technical discussions in Slack or DMs to prevent siloed communication. This ensures transparency and accessibility for everyone, especially for the external Open Source Contributors.
Everyone on the team should always have an issue assigned to them. It’s each team member’s responsibility to assign themselves an issue. If you don’t have an issue or have finished one, pick a new one from the backlog and inform Janaina via comment. If more technical context is required, reach out to Kabir, Dagim, or Janaina via GitLab comments for increased visibility.
Why is this important? Picture the development process with numerous OS contributors. As a milestone or WP owner, you’re leading Mainnet integration with Cardano. Imagine 20 developers finishing their issues simultaneously and asking you what to work on next. Instead of reviewing the contributions, you spend your morning assigning issues top-down. You don’t feel like going on holidays because you’re constantly worried the backlog won’t move forward with the same velocity. Not the nicest feeling, is it?
Now, imagine the same scenario but with these 20 developers self-assigning issues from the backlog based on set priorities, their experience levels, and interests, and only reaching out for technical questions if needed. Now ask yourself: which option sounds more efficient and less frustrating? Which of these 20 developers would you prefer to work with? In which group do you see yourself thriving as a developer?
We aim for approximately 90% test coverage to ensure everything is tested and to facilitate quicker merges. Every MR must include unit tests for new functionalities or changes to existing functionalities. Otherwise, the MR will be rejected by default and pushed back to the developer to add unit tests.
There are five core principles we expect all engineering team members to follow for efficient coordination in our remote setup:
Attend daily meetings if you’re on the critical chain.
Submit daily technical progress updates on issues.
Use GitLab for all communication, updates, comments, and technical discussions.
Be proactive and self-assign issues based on the broader priorities in the milestone.
Contribute to our code quality by including unit tests in every MR.
And remember: I will communicate as much as possible because it’s the oxygen of a globally distributed company.
Last but not least, if you have any better ideas on how to ensure coordination and efficiency in our distributed setup and want to add or amend the dev process at NuNet, please be vocal and contribute! While we believe the five core principles above help us to be an efficient team, we’re open to continuous improvement and learning how to build things better. Submit your suggestions via merge requests to this file, indicating @janainasenna as reviewer.
Last updated: 2024-12-24 01:11:01.491398 File source: link on GitLab
To scale MR reviews, a peer code review system is adopted. When opening an MR, the developer should ideally choose a peer reviewer based on:
familiarity with the requirements;
knowledge of the module or code;
relevant skills;
prior experience.
It is also possible to take advantage of different time zones by assigning default reviewers in regions like APAC, EMEA, and Brazil to ensure real-time coordination and faster development. This approach improves review quality, fosters mentoring relationships among developers, and boosts long-term development velocity.
When opening an MR, the creator must tag it with one of the following labels:
domain::core platform
domain::platform
domain::application
domain::documentation
domain::infrastructure
domain::testing
Additionally, use the Priority::High tag if the MR is blocking other developers.
If the code is not ready for review, it must be marked as Draft.
Peers must review MRs with three main focuses:
Functionality: Verify if the code meets the functional requirements outlined in the issue.
Code quality: Review for adherence to established standards, code quality, bugs, security vulnerabilities, duplication, performance, and maintainability.
Testing: Ensure tests for new functionalities are created with good coverage and quality. For bug fixes, ensure the unit test is updated to specifically cover the condition that was fixed.
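To make the bug-fix expectation concrete, here is a hypothetical example (the function and the bug are invented for illustration): a fix for a crash on zero input is accompanied by a regression test that pins down the exact condition that was fixed.

```python
# Hypothetical example: a fix for a division helper that previously
# raised ZeroDivisionError on zero input.

def safe_ratio(numerator: float, denominator: float) -> float:
    """Return numerator/denominator, or 0.0 when the denominator is zero."""
    if denominator == 0:
        return 0.0  # the fixed behaviour: previously this case crashed
    return numerator / denominator

def test_safe_ratio_normal_case():
    assert safe_ratio(6, 3) == 2.0

def test_safe_ratio_zero_denominator():
    # Regression test specifically covering the condition that was fixed.
    assert safe_ratio(1, 0) == 0.0

# Run both tests when executed directly (pytest would discover them by name).
test_safe_ratio_normal_case()
test_safe_ratio_zero_denominator()
```

In a real MR these tests would live in the project's test suite; the point is that the zero-denominator case, i.e. the fixed condition, is covered explicitly.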
In addition to peer reviews, some MRs will also undergo an architectural or conceptual review. This review can be requested by either the MR creator or the peer reviewer.
The approval process differs depending on the target branch:
An MR opened to the main branch requires approval from a reviewer on the development team.
An MR opened to a release branch requires approval from a reviewer and a member of the security team.
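The branch-based approval rules can be sketched as a small helper (a sketch only; the names are illustrative and this is not part of any NuNet tooling):

```python
def required_approvals(target_branch: str) -> list[str]:
    """Return the approvals an MR needs, based on its target branch.

    Illustrative sketch of the approval rules: main needs a development
    team reviewer; release branches additionally need the security team.
    """
    if target_branch == "main":
        return ["development-team reviewer"]
    if target_branch.startswith("release"):
        # Release branches need a reviewer plus a security team member.
        return ["development-team reviewer", "security-team member"]
    # Other branches fall back to standard peer review.
    return ["peer reviewer"]
```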
Each repository has a person ultimately responsible for merging code, but this person will not conduct in-depth reviews of all MRs. Peer reviews are handled by other developers, with the responsible person doing a final check before merging. Going forward, the responsible person should only be assigned as a reviewer when their active involvement in the review is expected.
Note: See this documentation for details related to the branching strategy.
The peer reviewer must analyze the status of each pipeline stage and, if approving an MR with warnings or failures in any stage, provide an explanation in a comment. For example, the reviewer can create a new issue to address the problem and link it in the comment.
Similarly, if a comment remains unresolved in the MR, a follow-up issue can be created, linked in the comment, and the MR can still be approved.
The reviewer must comment on the MR, even if it is just a "Looks Good To Me" (LGTM), to indicate that the changes have been reviewed and there are no objections or further comments.
The MR creator must promptly address any comments until the reviewer approves or closes the MR.
Last updated: 2024-12-24 01:11:00.722279 File source: link on GitLab
This documentation provides an overview of the processes, procedures, and frameworks used at NuNet to enhance the development workflow.
Useful documentation related to the development process:
The NuNet development team process is based on Kanban, a visual project management methodology aimed at optimizing workflow efficiency and flexibility. Its primary objectives include visualizing work to enhance transparency, limiting work in progress (WIP) to prevent bottlenecks, and focusing on continuous delivery and improvement. Issues are visible on this board and move through the stages backlog, doing, review, and done. If an issue has a blocker, we move it to the on hold stage to make the blocker visible, allowing the team to clearly understand why progress on that issue has stalled and what is required to move it forward.
An automatic message (as shown in the picture below) is posted in the #status-update Slack channel each day, listing:
team members without assigned issues;
issues with one day left;
issues with no assignee;
issues with no weight (the weight represents the estimated days to finish the issue).
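As a sketch, the four checks in that daily message could be assembled by a script along these lines (the field names are assumptions for illustration, not the actual NuNet automation):

```python
def build_status_report(members, issues):
    """Group the conditions flagged in the daily #status-update message.

    `members` is a list of usernames; `issues` is a list of dicts with
    illustrative keys: "assignee" (str or None) and "weight" (int or None).
    """
    assigned = {i["assignee"] for i in issues if i["assignee"]}
    return {
        # Team members who currently have no issue assigned to them.
        "members_without_issues": [m for m in members if m not in assigned],
        # Issues whose weight indicates a single remaining day.
        "issues_one_day_left": [i for i in issues if i["weight"] == 1],
        "issues_no_assignee": [i for i in issues if i["assignee"] is None],
        # Weight represents the estimated days to finish the issue.
        "issues_no_weight": [i for i in issues if i["weight"] is None],
    }
```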
An automatic script, scheduled to run once daily at 23:00 UTC from Monday to Friday, decreases the weight (the estimated days remaining) by one for issues with the kb::doing label, except if:
the issue has only one day left;
the team member is marked as AFK (away from keyboard) in the internal NuNet calendar.
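The decrement rule can be captured as a pure function (a sketch of the logic only, not the actual scheduled script):

```python
def next_weight(weight: int, is_afk: bool) -> int:
    """Apply the nightly decrement for an issue labelled kb::doing.

    The weight is left unchanged when the issue has only one day left
    or when the assignee is marked AFK in the internal calendar.
    """
    if weight <= 1 or is_afk:
        return weight
    return weight - 1
```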
Note: Refer to the Project Management Documentation to better understand some terms used in this section.
Developers review the message posted in the #status-update Slack channel before the critical chain daily meeting. If their name is mentioned, it indicates that they need to take action, such as assigning an issue to themselves, setting the weight for an issue, moving an issue to review, and so forth.
If developers need to change the weight of their issues, they should do so directly in GitLab, also adding a comment explaining the motivation. If the issue belongs to a work package in the critical chain, the work package owner should be copied on the comment, as this will impact the project buffer. The same process applies when creating new issues.
Once a week, the work package owner should review the work package weight, which represents the Estimated Time To Completion (ETTC) and is automatically updated by the pipeline at 23:00 UTC based on all open issues within the work package.
During the review, the work package owner should:
Link any new issues to the work package if they are not already linked.
Analyze whether new issues need to be created and create them as necessary.
Manually run the pipeline if any updates are made that may impact the ETTC.
Evaluate whether the ETTC realistically reflects the time required to complete the work package, and inform the milestone owner.
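Assuming, as described above, that the ETTC is derived from the open issues linked to the work package, the aggregation might look like this sketch (field names are illustrative):

```python
def estimated_time_to_completion(issues):
    """Aggregate a work package's ETTC from its linked issues.

    Sketch: the ETTC is the sum of the weights (estimated days) of all
    open issues; issues without a weight contribute nothing until a
    weight is set, which is why the daily message flags them.
    """
    return sum(i["weight"] or 0 for i in issues if i["state"] == "open")
```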
Last updated: 2024-12-24 01:11:01.805140 File source: link on GitLab
Please refer here for the complete picture of the NuNet Security Pipeline: NuNet Security Pipeline
DefectDojo, an open-source application vulnerability management, correlation, and security orchestration tool, is used for the management dashboard.
Each product represents a repository.
Each engagement represents a workflow, executed on each commit/PR to the master, staging, and develop branches.
Each commit to the develop, staging, and main branches
Each pull request/merge request to the develop, staging, and main branches
Binary
Container
1.1.5.1 Tools in Container Pipeline
1.1.5.2 Tools in Binary Pipeline
All merge requests will be scanned by GitLab, with outputs pushed to DefectDojo. The security team will focus on Critical and High severity issues from DefectDojo.
The security team will use the DefectDojo board to create tickets resulting from pentests, community tests, any bug bounty reports we receive, and external auditing.
Issue Sync to DefectDojo: All security issues originating from the Security Pipeline are synced to DefectDojo, which serves as a centralized platform for managing and tracking security vulnerabilities.
Prioritization: Due to the potentially large volume of issues, there is a focus on prioritizing Critical and High severity issues from DefectDojo. This prioritization strategy helps the team concentrate on addressing the most impactful vulnerabilities first.
Security Board for Ticket Creation: A Security Board is used as a platform for creating tickets based on the prioritized Critical and High severity issues. This board could be a visual representation, possibly a Security Vulnerabilities repository with issues created in it, where the security team manages and tracks the progress of security-related tasks.
Ticket Creation Criteria: Tickets are created on the Security Board for issues deemed Critical and High severity in DefectDojo. These tickets likely include detailed information about the nature of the vulnerability, its potential impact, and steps to remediate.
Source of Issues: The issues that contribute to the creation of tickets come from various sources, including pentests, community tests, bug bounty reports, and external auditing. This comprehensive approach ensures that security vulnerabilities from different testing methodologies are considered. Issues can be separated using labels so that they can be measured, monitored, and prioritized.
Since many issues will not be relevant in our case, whether false positives or simply not applicable in our environment, developers/service owners will be given the possibility to mark them as exceptions. An exception sheet will be maintained, and each exception will be verified by the security team through the #security Slack channel.
Exception Marking by Developers/Service Owners: Developers or service owners have the ability to mark certain security issues as exceptions. This could be due to the issues being false positives or not applicable to our specific environment.
Maintaining an Exception Sheet: A centralized sheet or database is maintained to document these exceptions. This sheet serves as a record of issues that have been marked as exceptions, along with relevant details such as the reason for the exception and the individuals responsible.
Verification by Security Team: The security team is responsible for verifying the exceptions. This involves a thorough review of the marked issues to ensure that they are indeed false positives or not applicable in the given context.
Communication Through Slack Channel: The #security Slack channel is used as a communication platform for the security team to discuss and verify the exceptions. This allows for collaboration and transparent communication within the team.
Service Level Agreements (SLAs) are in place to fix vulnerabilities within a defined period of time, e.g. 7 days for Critical/High vulnerabilities.
We may introduce a system of points/credits granted to each service and deducted according to the number of issues present in the service; when the score/credit reaches zero or below, the service owners cannot move forward with their development without fixing the issues.
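Since this system is only a proposal, here is a minimal sketch of how such a credit gate could work (the function name and the default cost per issue are assumptions for illustration):

```python
def can_continue_development(initial_credits: int, open_issue_count: int,
                             cost_per_issue: int = 1) -> bool:
    """Sketch of the proposed credit system.

    Each open security issue deducts from the service's credit; once the
    score reaches zero or below, development on the service is blocked
    until issues are fixed.
    """
    score = initial_credits - open_issue_count * cost_per_issue
    return score > 0
```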
Service Level Agreements (SLAs) for security vulnerabilities are agreements that define the expected response time, resolution time, and other terms related to addressing and fixing security vulnerabilities within a system or application. These agreements are crucial for organizations to ensure a timely and effective response to security issues.
Technical Debt: Technical debt refers to the additional work that arises when a development team takes shortcuts or defers necessary work during the software development process. It often results from choosing expedient solutions over more robust ones. Over time, like financial debt, technical debt can accumulate and may need to be "repaid" through activities such as refactoring or improving the codebase.
Error Budgets: In the context of Site Reliability Engineering (SRE), a set of practices and principles developed by Google for managing large-scale, reliable software systems, an error budget is a concept used to quantify the acceptable level of service disruptions or errors that a system can experience within a given time frame. The idea is to set a threshold for the acceptable amount of downtime or errors; if the system's performance exceeds this threshold, it triggers a reassessment of the development and release processes. Error budgets are a way of balancing reliability with the need for continuous development and innovation.
A Security Champions Program is an initiative within an organization that aims to enhance and promote security awareness, knowledge, and best practices among its development and operational teams. The program typically involves selecting and empowering individuals from various departments to act as "security champions" or advocates within their respective teams. Here are key aspects of a Security Champions Program:
Selection of Champions: Identify individuals from different teams or departments who have an interest in security and demonstrate a willingness to contribute to improving security practices.
Training and Education: Provide specialized training and education to the selected security champions. This can include sessions on secure coding practices, threat modeling, incident response, and other relevant security topics.
Roles and Responsibilities: Define the roles and responsibilities of security champions. They often act as liaisons between their teams and the central security team, helping to disseminate security information and best practices.
Advocacy and Communication: Security champions serve as advocates for security within their teams. They communicate security initiatives, updates, and best practices, helping to bridge the gap between security teams and other departments.
Collaboration with Security Teams: Foster collaboration between security champions and the central security team. Security champions may participate in regular meetings with the security team to discuss ongoing projects, emerging threats, and ways to improve security measures.
Code Reviews and Best Practices: Encourage security champions to participate in code reviews and promote secure coding best practices within their teams. They can help identify and address security issues at the development stage.
Incident Response Training: Provide incident response training to security champions so that they are better equipped to handle security incidents within their teams and can act as first responders in the event of a security incident.
Feedback Loop: Establish a feedback loop where security champions can provide insights, concerns, and feedback to the central security team. This helps in continuously improving the security program.
Recognition and Rewards: Recognize and reward the efforts of security champions. This can include acknowledgment, certificates, or other incentives to motivate individuals to actively contribute to the security program.
Continuous Improvement: Regularly assess and refine the Security Champions Program based on feedback and evolving security needs. This ensures that the program remains effective and aligned with organizational goals.
TL;DR: Write a high-level summary with the impact. Example: An unauthenticated SoluM API Swagger interface was found that could control the facility's systems, i.e. CRUD operations on ESL, Cloud, APs, Picking System, Station, Packing, etc. This could be used by any malicious entity to hinder production.
Description/summary and business impact: Write the details of the vulnerability and its business impact. Example:
The Solum Gateways are used to send information to the Electronic Shelf Labels. They support PoE and operate on the 868 MHz frequency.
The API is unauthenticated, and CRUD operations can be performed on ESL, Cloud, APs, Picking System, Station, Packing, etc.
Any malicious entity could change the labels, which could impede production.
Impacted services/components/endpoints: Mention the endpoints and services that are impacted. Example: http://10.200.240.10:9001/swagger-ui.html
[Optional] Steps to reproduce: Steps to reproduce the vulnerability. Example:
Open the URL in a browser: http://10.200.240.10:9001/swagger-ui.html
You will get access to the Swagger UI with CRUD operations.
Mitigation steps: Write detailed mitigation steps so the dev team can understand and fix the vulnerability. Example:
Authentication and Authorization:
Implement authentication and authorization mechanisms for accessing the Solum API Swagger UI. This could involve using API keys, tokens, or integrating with an existing authentication system.
Ensure that only authenticated users with the necessary permissions can access the API documentation.
Access Control Lists (ACL):
Configure access control lists to restrict access to the Swagger UI based on IP addresses, user roles, or groups.
Whitelist only trusted IP addresses that are allowed to access the documentation.
HTTPS Encryption:
Ensure that the Swagger UI is accessed over HTTPS to encrypt the communication between the user's browser and the server.
Use SSL certificates from trusted certificate authorities.
Disable Swagger UI in Production:
Consider disabling the Swagger UI in production environments to prevent unintentional exposure of sensitive API documentation.
CVSS Score: Calculate using: https://www.first.org/cvss/calculator/3.1
Likelihood of exploitation: 5/5 [Critical]
Impact: 5/5 [Critical]
Overall risk score: 5/5 [Critical]
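The report records likelihood and impact on a 1-5 scale. One common risk-matrix convention for combining them into an overall rating is sketched below (an illustration, not a prescribed NuNet formula; the CVSS calculator linked above remains the authoritative score):

```python
def overall_risk(likelihood: int, impact: int) -> str:
    """Map likelihood x impact (each rated 1-5) to a qualitative rating.

    This is one common risk-matrix convention, shown for illustration;
    the thresholds below are assumptions, not a NuNet standard.
    """
    score = likelihood * impact  # ranges from 1 to 25
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"
```

For the example report above, a likelihood of 5/5 and an impact of 5/5 give an overall rating of Critical, matching the stated 5/5 overall risk score.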
Last updated: 2024-12-24 01:11:02.357496 File source: link on GitLab
Before diving into this please read: DevSecOps Maturity Models
Get yourself familiar with our Vulnerability Management Software: DefectDojo
This document aims to provide a comprehensive overview of the security pipeline architecture implemented at NuNet. It is designed to serve as an informative guide for developers and stakeholders involved in the software development and deployment process. The primary focus is to detail the various stages of the pipeline, the security tools integrated at each stage, and the specific roles these tools play in enhancing the security of the software development lifecycle, along with the workflow, tool specifics, and the conditions under which the pipeline is triggered.
The security tools are applied in different stages of the pipeline workflow. These tools are:
Commit
Secret Detection
Dependency Scanning
Coverage Fuzz Testing (Not in use)
Build
Container Scanning
Test
API Security (Not in use)
Deploy
Operational Container Scanning (TBD)
While GitLab's SAST framework supports many programming languages, at NuNet, our primary programming languages are Python, Golang and JavaScript.
This pipeline applies to both container and binary projects.
This pipeline is triggered on merge request to every branch.
It is run on the test (or Security-Test-1) stage.
The Secret Detection tool scans repositories to help prevent secrets from being exposed during commits. The primary Secret Detection tool is Gitleaks.
This pipeline applies to both container and binary projects.
Gitleaks is used for secret detection in repositories.
This pipeline is triggered on merge request to every branch.
It is run on the test (or Security-Test-1) stage.
This stage analyzes an application's dependencies for known vulnerabilities.
The primary tool for scanning application dependencies is Gemnasium.
This pipeline is currently disabled but can be configured to run on every branch commit.
It is run on the test (or Security-Test-1) stage.
The container scanning tool inspects Docker images for known vulnerabilities.
After the Docker image is built, the following tool scans the built containers:
Trivy (the default GitLab container scanner)
This pipeline is triggered on merge request to every branch.
This stage is run on the test (or Security-Test-1) stage, primarily after the build stage.
Static Depth
Run SAST scans with minor tweaks to rules. ✅
Run SCA scans with minor tweaks to rules. ✅
Run Secret scans with minor tweaks to rules. ✅
Dynamic Depth
Run DAST scans with minor tweaks to baseline settings. (Binary case) ❌
Run DAST scans with minor tweaks to baseline settings. (Container Scanning) ✅
Intensity
Scans are expected at least twice a month; our frequency far exceeds this requirement, as scans run on every commit on every branch.
Consolidation
After analysis, findings are consolidated in a vulnerability register; the Vulnerability Management Process was designed for this purpose.
Last updated: 2024-12-24 01:11:02.105086 File source: link on GitLab
Learn DevOps practices and how your organization works.
Maintain relationships with Developers, QA, and Operations teams.
Do not fail builds unless you are at maturity level 3 or 4.
Do not run any tool which takes more than 10 minutes in CI/CD pipelines.
Create separate jobs for each tool/scan.
Roll out tools/scans in phases (iteratively).
Do not buy tools that don’t provide APIs or CLIs.
Verify that tool vendors can do incremental/baseline scans.
Do not be afraid to create custom rules.
Try to do everything as code.
Write documentation wikis.
Last updated: 2024-12-24 01:11:02.892626 File source:
Get yourself familiar with the Secure Coding Guidelines here:
Get yourself familiar with DefectDojo here:
See if there are any High/Critical severity issues found with the commit associated with the PR/MR:
Log in to DefectDojo.
Go to Active Engagements.
Each engagement corresponds to a commit; each product corresponds to a repository and gives you information about it.
Click on an engagement; it shows the branch, commit hash, findings, and other information.
Click on the findings; if any finding is of High or Critical severity, the PR/MR is not suitable for merging.
If the issue/vulnerability is easily understood by the developer and can be fixed, the developer should fix it.
If the vulnerability/issue needs enrichment, create an issue on the repo using the ticketing template from here:
Make sure to include the secvuln label
and assign it to a developer.
If the vulnerability cannot be fixed, add the exception label to the ticket along with secvuln, and include the reason for the decision in a comment.
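The triage rules above can be summarised in a small helper (a sketch; the decision names are illustrative, while the secvuln and exception labels come from the process described above):

```python
def triage_labels(decision: str, reason=None) -> list:
    """Return the GitLab labels to apply to a security finding.

    decision: "fix"       -> the developer fixes it directly, no ticket needed;
              "enrich"    -> create an issue carrying the secvuln label;
              "exception" -> secvuln plus exception, with a documented reason.
    """
    if decision == "fix":
        return []
    if decision == "enrich":
        return ["secvuln"]
    if decision == "exception":
        if not reason:
            # An exception must always record why it was granted.
            raise ValueError("an exception requires a documented reason")
        return ["secvuln", "exception"]
    raise ValueError(f"unknown decision: {decision}")
```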