This page describes the basic functionality of NuNet workflows.
Please read NuNet's Disclaimer before installing any software on your devices.
To allocate resources effectively for machine learning and other computational tasks on the NuNet platform, it is essential to categorize the available resource types. We classify them into three main categories based on the capabilities of the machines and their GPUs:
Low Resource Usage: This category represents low-end machines with low-end GPUs.
Moderate Resource Usage: This category represents medium-end machines with medium-end GPUs.
High Resource Usage: This category represents high-end machines with high-end GPUs.
The pseudocode provided outlines a function called estimate_resource, which estimates the resource parameters for different categories of machines based on their resource usage type. The function accepts a single input, resource_usage, which takes one of three values: "Low", "Moderate", or "High". The function then checks the value of resource_usage and, depending on the category, sets the minimum and maximum values for the CPU, RAM, and GPU VRAM, as well as the estimated levels of GPU power and GPU usage. These values are assigned based on the specific resource usage category:
For "Low" resource usage, the function assigns lower values for CPU, RAM, and GPU VRAM, along with lower GPU power and usage levels.
For "Moderate" resource usage, the function assigns medium values for CPU, RAM, and GPU VRAM, along with medium GPU power and usage levels.
For "High" resource usage, the function assigns higher values for CPU, RAM, and GPU VRAM, along with higher GPU power and usage levels.
Finally, the function returns a dictionary containing all the estimated parameters for the given resource usage category. By using this function, the NuNet platform can estimate resource parameters for machines in different categories, helping to efficiently allocate resources for machine learning tasks based on the specific requirements of each job.
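The behavior described above can be sketched in Python as follows. The dictionary field names and all numeric thresholds are illustrative assumptions, not values taken from the NuNet ML on GPU API; only the three category names and the min/max CPU, RAM, and VRAM structure come from the description.

```python
def estimate_resource(resource_usage: str) -> dict:
    """Return estimated resource parameters for a usage category.

    All numbers below are hypothetical placeholders: CPU in MHz,
    RAM and GPU VRAM in MB, GPU power/usage as qualitative levels.
    """
    profiles = {
        "Low": {
            "cpu_min": 500,  "cpu_max": 1500,
            "ram_min": 2000, "ram_max": 8000,
            "vram_min": 2000, "vram_max": 4000,
            "gpu_power": "low", "gpu_usage": "low",
        },
        "Moderate": {
            "cpu_min": 1500, "cpu_max": 3000,
            "ram_min": 8000, "ram_max": 16000,
            "vram_min": 4000, "vram_max": 8000,
            "gpu_power": "medium", "gpu_usage": "medium",
        },
        "High": {
            "cpu_min": 3000, "cpu_max": 6000,
            "ram_min": 16000, "ram_max": 64000,
            "vram_min": 8000, "vram_max": 24000,
            "gpu_power": "high", "gpu_usage": "high",
        },
    }
    if resource_usage not in profiles:
        raise ValueError(f"Unknown resource usage type: {resource_usage}")
    # Return the full set of estimated parameters for this category.
    return profiles[resource_usage]
```

Keeping the per-category values in a single lookup table, rather than a chain of conditionals, makes it easy to adjust thresholds or add categories later.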
For each resource type, resource prices are calculated in the NuNet ML on GPU API. These functions estimate the cost of using different types of machines to execute machine learning tasks. The calculation is based on the Estimated Static NTX.
The Estimated Static NTX is calculated from the user's estimate of the task's execution time and the chosen resource type (Low, Moderate, or High). The function Calculate_Static_NTX_GPU(resource_usage) computes this value from these inputs and returns the Estimated NTX.
This function estimates resource prices for the various types of machines, ensuring that users are billed fairly based on the actual resources used during the execution of their machine learning tasks.
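A minimal sketch of the pricing calculation, assuming a simple linear time-based formula: the per-category NTX rates below and the linear model itself are invented for illustration. Only the inputs (estimated execution time and resource type) come from the description above; the real NuNet pricing formula may differ.

```python
# Hypothetical NTX cost per minute of execution for each category.
NTX_RATE_PER_MINUTE = {
    "Low": 1.0,
    "Moderate": 2.5,
    "High": 5.0,
}

def Calculate_Static_NTX_GPU(resource_usage: str, estimated_minutes: float) -> float:
    """Return the Estimated Static NTX for a task.

    Assumes price scales linearly with the user's estimated execution
    time, at a rate determined by the chosen resource category.
    """
    rate = NTX_RATE_PER_MINUTE[resource_usage]
    return rate * estimated_minutes
```

For example, under these assumed rates a 10-minute "High" job would be estimated at five times the NTX of a 10-minute "Low" job.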
The pseudocode for "Reporting Task Results" describes two functions used in the NuNet ML on GPU API that allow users to monitor the progress of their machine learning tasks and access the results.
Upload_Compute_Job_Result() is a function that regularly updates the task's progress. It runs in a loop, performing the following steps every 2 minutes until the job completes or the off-chain transaction of the Estimated Static NTX is finished:
Wait for 2 minutes.
Save the machine learning log output as a file that can be appended with new information.
Upload the file to the cloud, making sure that only an authenticated user can access it.
This function allows users to keep track of their tasks and view intermediate results during the execution.
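The loop above can be sketched as follows. The job and storage objects and their methods are hypothetical stand-ins; only the 2-minute cadence, the append-then-upload behavior, and the two stop conditions come from the text. The sleep function is injectable so the loop can be exercised without actually waiting.

```python
import time

UPDATE_INTERVAL_SECONDS = 120  # report progress every 2 minutes

def Upload_Compute_Job_Result(job, storage, sleep=time.sleep):
    """Periodically append the ML log output to a file and upload it.

    Runs until the job completes or the off-chain transaction of the
    Estimated Static NTX is finished.
    """
    while not (job.is_completed() or job.static_ntx_transferred()):
        sleep(UPDATE_INTERVAL_SECONDS)
        # Append the latest machine learning log output to a local file.
        with open(job.log_path, "a") as f:
            f.write(job.read_new_log_output())
        # Upload so that only the authenticated task owner can access it.
        storage.upload(job.log_path, authenticated_only=True)
```

Appending rather than rewriting the log file means each upload carries the full history, so users can view intermediate results at any point during execution.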
Send_Answer_Message() is a function that provides a unique link to the WebApp, which helps users track their tasks. It performs the following steps:
Retrieve a unique URL (permalink) for each machine learning job.
Send the permalink from the Decentralized Management System (DMS) to the WebApp as an answer message.
This function enables users to access their task's progress updates and results using the provided link.
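The two steps above can be sketched as follows. The DMS messaging helper, the permalink format, and the WebApp base URL are all illustrative assumptions; only the retrieve-permalink and send-to-WebApp behavior comes from the description.

```python
def Send_Answer_Message(job_id: str, dms,
                        webapp_base_url: str = "https://webapp.example") -> str:
    """Retrieve the job's unique permalink and send it to the WebApp."""
    # Each machine learning job gets a unique URL the user can open
    # to track progress and results (URL scheme is hypothetical).
    permalink = f"{webapp_base_url}/jobs/{job_id}"
    # Forward the permalink from the DMS to the WebApp as the answer message.
    dms.send_answer(permalink)
    return permalink
```

Returning the permalink as well as sending it lets callers log or display the link directly.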
In summary, these functions work together to ensure users can monitor their machine learning tasks' progress and access the results in a user-friendly and organized manner.