Orchestrators run video transcoding (NVENC/FFmpeg) and/or AI inference on GPUs. Hardware directly affects job selection, reputation, and revenue. Below are minimum, recommended, and AI-optimised guidelines for 2026.
Minimum (development / testing)
Suitable for testnet, low-volume workloads, and learning.
Recommended (video / production)
Optimised for real-time streaming and multi-resolution transcoding.
AI inference
AI workloads are VRAM-bound. Stake does not determine AI job routing; capability and price do. Also ensure: CUDA 12+, the NVIDIA Container Toolkit, good cooling, and high-IOPS storage for model weights.
Network and ops
- Latency: <50 ms to major regions helps streaming and gateway selection.
- Production: Static IP, reverse proxy (e.g. Nginx), TLS, firewall rules.
- Monitoring: Prometheus, Grafana, NVIDIA DCGM exporter; track GPU utilisation, VRAM, segment/job success rate.
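The monitoring stack above can be bootstrapped with the DCGM exporter. A minimal sketch follows, assuming the `nvcr.io/nvidia/k8s/dcgm-exporter` image and its default port 9400 (verify both against NVIDIA's current documentation); it degrades gracefully on hosts without Docker or a GPU.

```shell
#!/bin/sh
# Sketch: expose GPU metrics to Prometheus via NVIDIA's DCGM exporter.
# The image name/tag and port 9400 are assumptions; check NVIDIA's docs.
MON_STATUS="skipped (docker unavailable)"
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # --pull=never keeps this probe fast; drop it on first run so the
  # image can actually be fetched.
  if docker run -d --rm --gpus all --pull=never -p 9400:9400 \
       nvcr.io/nvidia/k8s/dcgm-exporter:latest >/dev/null 2>&1; then
    MON_STATUS="started"
  else
    MON_STATUS="failed (image missing or no GPU runtime)"
  fi
fi
echo "DCGM exporter: $MON_STATUS"
# Once running, Prometheus scrapes http://<host>:9400/metrics; useful series
# include DCGM_FI_DEV_GPU_UTIL (utilisation) and DCGM_FI_DEV_FB_USED (VRAM).
```

Grafana's standard Prometheus data source can then chart those series alongside the segment/job success rates tracked on the orchestrator side.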
Checklist before going live
- GPU visible via nvidia-smi
- Docker sees GPU (--gpus all)
- CUDA functional
- Ports open (e.g. 8935)
- Stable Arbitrum RPC
- Monitoring configured
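The checklist above can be sketched as a pre-flight script. Each probe prints OK or FAIL without aborting; the CUDA image tag, the NVENC test, and the public Arbitrum RPC URL are illustrative assumptions — swap in your own values.

```shell
#!/bin/sh
# Pre-flight sketch for the go-live checklist. Every check is non-fatal:
# it records OK or FAIL and moves on, so the script runs anywhere.
RESULTS=""
check() {
  label="$1"; shift
  if "$@" >/dev/null 2>&1; then status="OK  "; else status="FAIL"; fi
  RESULTS="${RESULTS}${status} ${label}
"
}
check "GPU visible (nvidia-smi)" nvidia-smi
# Image tag is an example; any CUDA base image works for this probe.
# --pull=never keeps the probe fast; remove it to allow pulling the image.
check "Docker sees GPU (--gpus all)" docker run --rm --gpus all --pull=never \
  nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
# One-second synthetic encode exercises the NVENC path end to end.
check "NVENC encode (ffmpeg)" ffmpeg -hide_banner -f lavfi \
  -i testsrc2=duration=1:size=1280x720:rate=30 -c:v h264_nvenc -f null -
check "Port 8935 listening" sh -c "ss -ltn | grep -q ':8935 '"
# Public endpoint as an example; point this at the RPC you configured.
check "Arbitrum RPC reachable" curl -sf --max-time 5 -X POST \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  https://arb1.arbitrum.io/rpc
printf '%s' "$RESULTS"
```

A FAIL on the Docker or NVENC lines usually means the NVIDIA Container Toolkit or driver is missing, which is worth fixing before staking real work on the node.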