PyTorch accelerators

LeRobot supports multiple hardware acceleration options for both training and inference.

These options include:

  • CPU: all computations run on the CPU; no dedicated accelerator is used
  • CUDA: acceleration with NVIDIA GPUs, and AMD GPUs via ROCm (which exposes the same cuda device name in PyTorch)
  • MPS: acceleration with Apple Silicon GPUs
  • XPU: acceleration with Intel integrated and discrete GPUs

Getting Started

To use a particular accelerator, a suitable build of PyTorch must be installed.

For the CPU, CUDA, and MPS backends, follow the instructions on the PyTorch installation page. For the XPU backend, follow the instructions in the PyTorch documentation.
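For example, installing a CUDA-enabled build with pip typically resembles the command below; the index URL and CUDA version here are illustrative, so check the PyTorch installation page for the command matching your platform:

pip install torch --index-url https://download.pytorch.org/whl/cu121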

Verifying the installation

After installation, accelerator availability can be verified by running:

import torch
print(torch.cuda.is_available())          # CUDA backend
print(torch.backends.mps.is_available())  # MPS backend
print(torch.xpu.is_available())           # XPU backend (requires an XPU-enabled PyTorch build)
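To write code that runs unmodified across machines, the device can also be selected at runtime. The sketch below is illustrative only; the preference order (cuda, then xpu, then mps) is an assumption for this example, not something LeRobot prescribes.

import torch

# Minimal sketch: pick the first available accelerator, falling back to CPU.
# The preference order (cuda > xpu > mps) is an illustrative assumption.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    device = torch.device("xpu")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Any tensor or model can now be placed on the selected device.
x = torch.ones(2, 2, device=device)
print(x.device)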

How to run training or evaluation

To select the desired accelerator, use the --policy.device flag when running lerobot-train or lerobot-eval. For example, to use MPS on Apple Silicon, run:

lerobot-train \
    --policy.device=mps ...
lerobot-eval \
    --policy.device=mps ...

However, in most cases the presence of an accelerator is detected automatically, so the --policy.device parameter can be omitted from CLI commands.
