Livepeer orchestrators use NVIDIA GPUs for video transcoding (NVENC/NVDEC hardware encoders) and AI inference (CUDA cores / Tensor cores). This page covers GPU compatibility, session limits, and driver requirements.

Documentation Index
Fetch the complete documentation index at: https://na-36-docs-v2.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Supported GPU Families
go-livepeer requires NVIDIA GPUs with NVENC and NVDEC support. AMD and Intel GPUs are not supported.

NVENC Session Limits
Consumer NVIDIA GPUs enforce a hard limit on concurrent NVENC encoding sessions. This directly limits how many simultaneous transcoding streams your orchestrator can handle per GPU.

Removing the Session Limit
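You can watch the session limit in action at runtime: `nvidia-smi` exposes a live NVENC session count per GPU. A minimal sketch, assuming `nvidia-smi` is on the PATH (the query field `encoder.stats.sessionCount` is a standard `nvidia-smi --query-gpu` property):

```python
import subprocess

def parse_session_counts(csv_output: str) -> list[int]:
    """Parse the CSV output of the encoder session query
    (one integer per line, one line per GPU) into a list of ints."""
    return [int(line.strip()) for line in csv_output.splitlines() if line.strip()]

def nvenc_session_counts() -> list[int]:
    """Return the number of active NVENC sessions on each GPU.
    Requires an NVIDIA driver; raises CalledProcessError otherwise."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=encoder.stats.sessionCount",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_session_counts(out)
```

Running `nvenc_session_counts()` while streams are being transcoded shows how close each card is to its cap; on an unpatched consumer GPU, attempts to open sessions beyond the limit fail at the driver level.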
The community-maintained nvidia-patch removes the NVENC session limit on consumer GPUs. This is widely used by Livepeer orchestrators and pool operators (Titan Node uses this in their worker setup).

CUDA and Driver Requirements
Checking Your Versions
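The driver and CUDA versions both appear in the banner that plain `nvidia-smi` prints. A small sketch that runs the tool and extracts them (the regexes match the standard banner format, e.g. `Driver Version: 550.54.14   CUDA Version: 12.4`):

```python
import re
import subprocess

def parse_versions(smi_output: str) -> dict:
    """Pull the driver and CUDA versions out of nvidia-smi's banner text."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", smi_output)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
    return {
        "driver": driver.group(1) if driver else None,
        "cuda": cuda.group(1) if cuda else None,
    }

def check_versions() -> dict:
    """Run nvidia-smi (requires an NVIDIA driver) and return parsed versions."""
    out = subprocess.run(["nvidia-smi"], capture_output=True,
                         text=True, check=True).stdout
    return parse_versions(out)
```

Note that the "CUDA Version" reported here is the maximum CUDA runtime the installed driver supports, not necessarily the CUDA toolkit version installed on the machine.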
VRAM Requirements by Workload
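Before matching workloads to cards, check how much VRAM each installed GPU actually has. A sketch using `nvidia-smi`'s CSV query mode (`name` and `memory.total` are standard query fields; `nounits` makes the value a bare MiB integer):

```python
import subprocess

def parse_vram(csv_output: str) -> list[tuple[str, int]]:
    """Parse 'name, memory.total' CSV rows into (gpu_name, vram_mib) pairs."""
    rows = []
    for line in csv_output.splitlines():
        if not line.strip():
            continue
        name, mib = line.rsplit(",", 1)   # GPU names may themselves contain spaces
        rows.append((name.strip(), int(mib.strip())))
    return rows

def gpu_vram() -> list[tuple[str, int]]:
    """Query total VRAM in MiB per GPU (requires an NVIDIA driver)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_vram(out)
```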
For detailed per-pipeline VRAM planning, see the Model and Demand Reference.

GPU Selection Guidance
- Transcoding Only
- Transcoding + AI
- AI-Heavy / LLM
Any supported NVIDIA GPU works. For cost efficiency, an RTX 3060 12GB or RTX 4060 Ti 16GB provides good transcoding throughput at low power draw. Patch the NVENC limit to handle more concurrent sessions. Budget pick: GTX 1660 Super (6 GB), the cheapest entry for transcoding-only.
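To verify that a candidate card's hardware codecs work end to end, a single ffmpeg transcode exercising NVDEC for decode and NVENC for encode is a reasonable smoke test. A hedged sketch that only builds the command line (`-hwaccel cuda` and `h264_nvenc` are standard ffmpeg options; input/output paths are placeholders):

```python
def nvenc_transcode_cmd(src: str, dst: str, bitrate: str = "4M") -> list[str]:
    """Build an ffmpeg command that decodes on the GPU (NVDEC) and
    encodes with NVENC -- a quick check that hardware codecs function."""
    return [
        "ffmpeg",
        "-hwaccel", "cuda",       # GPU-accelerated decode
        "-i", src,
        "-c:v", "h264_nvenc",     # NVENC hardware encode
        "-b:v", bitrate,
        "-y", dst,                # overwrite output without prompting
    ]
```

Pass the result to `subprocess.run()` on the target machine; launching several of these in parallel is also a practical way to confirm an applied NVENC session-limit patch.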