To ensure your GPU(s) perform optimally and to identify any bottlenecks, you can run a benchmark test. Optimal performance significantly increases your chances of being selected for jobs, as inference speed is a crucial factor in the selection process. This guide shows you how to run a benchmark test on your GPU(s) to determine the best pipeline/model combination to serve, thereby maximizing your revenue.
Prerequisites
Before running a benchmark test, make sure you have met all the prerequisites for running an Orchestrator node.

Benchmarking Steps
Pull the AI Runner Docker Image
First, pull the latest AI Runner Docker image from Docker Hub. This image contains the necessary tools for benchmarking.
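A sketch of the pull, assuming the image is published as `livepeer/ai-runner` on Docker Hub (verify the image name against the current documentation):

```shell
# Pull the latest AI Runner image from Docker Hub.
# The image name is an assumption; check the docs for the canonical one.
docker pull livepeer/ai-runner:latest
```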
Pull Pipeline-Specific Images (optional)
Next, pull any pipeline-specific images if needed. Check the pipelines documentation for more information. For example, to pull the image for the segment-anything-2 pipeline:
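Assuming pipeline-specific images are published as tags of the same repository, with the pipeline name as the tag, the pull for the example pipeline might look like:

```shell
# Hypothetical tag layout: one tag per pipeline on the ai-runner repository.
docker pull livepeer/ai-runner:segment-anything-2
```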
Download AI Models
Download the AI models you want to benchmark. For more information see the Download AI Models guide.
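One way to fetch a model locally is with the Hugging Face CLI; the target directory below is hypothetical, so follow the Download AI Models guide for the canonical procedure and storage layout:

```shell
# Install the Hugging Face CLI, then download the example model.
# The --local-dir path is an assumption; use the location your
# runner is configured to read models from.
pip install -U "huggingface_hub[cli]"
huggingface-cli download stabilityai/sd-turbo --local-dir ./models/stabilityai/sd-turbo
```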
Execute the Benchmark Test
Run the benchmark test using the command below. This runs the pipeline/model combination on your GPU(s) and measures performance. The command takes the following parameters:
- <GPU_IDs>: Specify which GPU(s) to use. For example, '"device=0"' for GPU 0, '"device=0,1"' for GPU 0 and GPU 1, or '"device=all"' for all GPUs.
- <PIPELINE>: The pipeline to benchmark (e.g., text-to-image).
- <MODEL_ID>: The model ID to use for benchmarking (e.g., stabilityai/sd-turbo).
- <RUNS>: The number of benchmark runs to perform.
- <NUM_INFERENCE_STEPS>: The number of inference steps to perform (optional).
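The benchmark invocation can be sketched as follows. The `livepeer/ai-runner` image name, the `bench.py` entry point, the mounted models path, and the flag names mirror the placeholders above but are assumptions; confirm them against the current AI Runner documentation:

```shell
# Hypothetical sketch: mount your downloaded models into the container
# and run the benchmark script for one pipeline/model combination.
docker run --gpus '"device=all"' \
  -v ./models:/models \
  livepeer/ai-runner:latest \
  python bench.py \
    --pipeline text-to-image \
    --model_id stabilityai/sd-turbo \
    --runs 3 \
    --num_inference_steps 2
```

Repeat the run for each pipeline/model combination you are considering, then serve the one with the fastest inference times on your hardware.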