No matching torch==2.9.0.dev20250804+cu128 wheel for Python 3.12 – cannot install vllm==0.10.1+gptoss with official instructions
Hi team,
When following the official installation instructions for vllm==0.10.1+gptoss with Python 3.12 and CUDA 12.2, the installation fails because there is no torch==2.9.0.dev20250804+cu128 wheel published for cp312 (Python 3.12) in the referenced extra index (https://download.pytorch.org/whl/nightly/cu128/).
Steps to reproduce:
```shell
uv pip install --system --pre vllm==0.10.1+gptoss \
  --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
  --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
  --index-strategy unsafe-best-match
```
Error:
```
ERROR: No matching distribution found for torch==2.9.0.dev20250804+cu128
```
Environment:
- Python version: 3.12
- CUDA: 12.2
- OS: (e.g., Ubuntu 22.04, cloud/container)
- vllm: 0.10.1+gptoss
Expected behavior
Installation should succeed when following the official instructions, since Python 3.12 is the officially supported version for gpt-oss.
Request
- Please publish torch==2.9.0.dev20250804+cu128 wheels for Python 3.12 in the nightly or gpt-oss index, or clarify which torch version should be used for Python 3.12.
- If not supported, please update the README/installation instructions or provide the expected timeline for Python 3.12 support.
Thanks!
torch==2.9.0.dev20250804+cu128 wheels do not exist for Python 3.12 at the moment, but they do for Python 3.11. You can downgrade to Python 3.11 and retry installing torch in a venv.
Tried with Python 3.11, but no luck: torch==2.9.0.dev20250804+cu128 is still not found. I think the package index has changed and the wheel is gone.
```shell
uv pip install --pre vllm==0.10.1+gptoss \
  --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
  --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
  --index-strategy unsafe-best-match
```
```
Using Python 3.11.14 environment at: .venv3.11
⠴ numba==0.61.2 × No solution found when resolving dependencies:
  ╰─▶ Because there is no version of torch==2.9.0.dev20250804+cu128 and vllm==0.10.1+gptoss depends on
      torch==2.9.0.dev20250804+cu128, we can conclude that vllm==0.10.1+gptoss cannot be used.
      And because you require vllm==0.10.1+gptoss, we can conclude that your requirements are unsatisfiable.
```
Tried searching for the wheels manually, and it seems the new wheels might now be at https://download.pytorch.org/whl/nightly/, split into torch, torchaudio, and torchvision, because the cu128 folder redirects back to the same link. The old wheels were most likely removed, or their indexing was changed.
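If you want to check programmatically which wheels an index actually lists, here is a minimal sketch that parses wheel links out of a PEP 503 "simple" index page with the stdlib HTML parser. The `sample` HTML below is illustrative only, not the live contents of the PyTorch index; to inspect the real index, feed the parser the fetched page instead.

```python
from html.parser import HTMLParser

class WheelLinkParser(HTMLParser):
    """Collect hrefs of .whl links from a PEP 503 simple index page."""
    def __init__(self):
        super().__init__()
        self.wheels = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Wheel links may carry a #sha256=... fragment after the filename
            if href.endswith(".whl") or ".whl#" in href:
                self.wheels.append(href)

# Illustrative sample of what a simple-index page looks like (not live data).
sample = """
<html><body>
<a href="torch-2.10.0.dev20250925%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl">torch cp312</a>
<a href="torch-2.10.0.dev20250925%2Bcu128-cp311-cp311-manylinux_2_28_x86_64.whl">torch cp311</a>
</body></html>
"""

parser = WheelLinkParser()
parser.feed(sample)
cp312 = [w for w in parser.wheels if "cp312" in w]
print(cp312)
```

This is just a quick way to confirm whether a cp312 build is present at all before fighting the resolver.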
Also found these:
Torchvision:
https://download.pytorch.org/whl/nightly/cu128/torchvision-0.25.0.dev20250926%2Bcu128-cp312-cp312-win_amd64.whl
https://download.pytorch.org/whl/nightly/cu128/torchvision-0.25.0.dev20250926%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl
Torch:
https://download.pytorch.org/whl/nightly/cu128/torch-2.10.0.dev20250925%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl
https://download.pytorch.org/whl/nightly/cu128/torch-2.10.0.dev20250925%2Bcu128-cp312-cp312-win_amd64.whl
Torchaudio:
https://download.pytorch.org/whl/nightly/cu128/torchaudio-2.10.0.dev20251018%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl
https://download.pytorch.org/whl/nightly/cu128/torchaudio-2.10.0.dev20251018%2Bcu128-cp312-cp312-win_amd64.whl
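To confirm which interpreter a given wheel targets, you can read the tags straight out of the filename, since wheel names follow the PEP 427 convention `name-version-pythontag-abitag-platformtag.whl`. A minimal sketch (the helper name `wheel_tags` is mine, not from any library):

```python
from urllib.parse import unquote

def wheel_tags(url: str) -> dict:
    """Parse the PEP 427 tags out of a wheel URL or filename."""
    # Drop any #sha256=... fragment, take the last path segment, undo %2B etc.
    filename = unquote(url.rsplit("/", 1)[-1].split("#", 1)[0])
    stem = filename[: -len(".whl")]
    # From the right: platform, ABI, and Python tags; the rest is name-version
    name_version, py_tag, abi_tag, plat_tag = stem.rsplit("-", 3)
    name, version = name_version.split("-", 1)
    return {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": plat_tag}

url = ("https://download.pytorch.org/whl/nightly/cu128/"
       "torch-2.10.0.dev20250925%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl")
print(wheel_tags(url))  # "python": "cp312" means a CPython 3.12 wheel
```

So the nightly wheels above are cp312 builds, but of torch 2.10.0.dev, not the pinned 2.9.0.dev20250804.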
Compatible Python 3.11 (cp311) wheels were also present; you may try installing them in the venv.
This might only work with the --no-deps flag, and if it still errors on the pinned dependency, I think vllm won't work until the new indexes are known (if they exist).
Note that this is still only a hopeful workaround with torch 2.10.0; I haven't tested it, and it might just fail at runtime.
I would, though, advise using a Docker build instead of the steps above.
UPDATE: Found these wheels.
https://download.pytorch.org/whl/cu128/torch-2.9.0%2Bcu128-cp312-cp312-win_amd64.whl#sha256=c97dc47a1f64745d439dd9471a96d216b728d528011029b4f9ae780e985529e0
https://download.pytorch.org/whl/cu128/torch-2.9.0%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl#sha256=87c62d3b95f1a2270bd116dbd47dc515c0b2035076fbb4a03b4365ea289e89c4
These have a better chance of working.
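The `#sha256=` fragment in those URLs is the expected digest of the wheel, so after downloading you can verify the file locally before installing. A minimal sketch; the dummy file written here merely stands in for a real downloaded wheel, and the helper names are mine:

```python
import hashlib

def expected_sha256(url: str) -> str:
    """Extract the expected digest from a URL's #sha256= fragment."""
    return url.rsplit("#sha256=", 1)[-1]

def file_sha256(path: str) -> str:
    """Stream a file through SHA-256 (wheels are large; read in 1 MiB chunks)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

url = ("https://download.pytorch.org/whl/cu128/"
       "torch-2.9.0%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl"
       "#sha256=87c62d3b95f1a2270bd116dbd47dc515c0b2035076fbb4a03b4365ea289e89c4")

# Stand-in for the downloaded wheel; point this at the real file after downloading.
with open("downloaded.whl", "wb") as f:
    f.write(b"not a real wheel")

ok = file_sha256("downloaded.whl") == expected_sha256(url)
print(ok)  # False for the stand-in file; should be True for the genuine wheel
```

pip and uv already perform this check when the fragment is present, so this is only useful if you download the wheel by hand.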