Workflow? Is there any test workflow for using this?
You need the longcat branch of the Wrapper.
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/longcat/LongCat/longcat_i2v_testing.json
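For anyone unsure how to switch to that branch, a minimal sketch from the command line (the custom_nodes path is an assumption about where the wrapper lives in your install):

cd ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper
git fetch origin
git checkout longcat
git pull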
LongCat_TI2V_comfy_fp8_e5m2_scaled_KJ.safetensors?
I get an error in WanVideoModelLoader: 'blocks.0.ffn.0.bias'
Hello. Can I increase the fps to 30, but without increasing the speed?
Same error here: WanVideoModelLoader, 'blocks.0.ffn.0.bias'.
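If you hit that key error, one thing worth ruling out (a guess, given the branch requirement above) is that the wrapper is still on its default branch instead of longcat:

git -C ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper branch --show-current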
Hi!... e5m2 architecture? It's impossible to run this model with any Torch/Sage/Triton combination. On the RTX 30xx series there's no way to avoid the dtype error. Does anyone else have any ideas?
The issue with e4m3fn and torch.compile on 30XX GPUs has been fixed in the Oct 15 update to triton-windows: https://github.com/woct0rdho/triton-windows/releases
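For reference, upgrading inside the ComfyUI venv would look something like this (the exact pin matches the version mentioned below; adjust if a newer release exists):

python -m pip install -U "triton-windows==3.5.0.post21"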
Hello genius, thanks so much for the reply...
I already tried version 3.5 of triton-windows, but the torch.compile error persists. The combination I used was Python 3.12, Torch 2.8, CUDA 12.8; I also tried Python 3.13, Torch 2.9, CUDA 13.0. I tested several workflows, all based on yours, and also tried the example workflow from your repository. Thank you very much.
Are you sure it's the e4m3fn error then? And are you sure you are using the specific v3.5.0-windows.post21 or 3.5.1 version? The release notes specifically mention this fix, and I've had confirmations from people using it that it works.
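One way to confirm the exact build: triton.__version__ (what most check scripts print) reports only 3.5.0 and hides the .postN suffix, while pip shows the full version string:

python -m pip show triton-windows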
Thanks again... I had removed it, but I'm going to try again. I have several instances installed; I tried triton-windows post21, but I'll double-check, because there's also a post20 version. I hope it works. If it errors again, I'll provide more details. Thank you very much.
Testing with >> Python 3.13 / PyTorch 2.8 / CUDA 12.8 / triton-windows==3.5.0.post21

Activating ComfyUI virtual environment (Py 3.13, Torch 2.8, CUDA 12.8)...
Running check_env_and_acelerators.py...
🔍 Environment and AI accelerator check for ComfyUI
📂 Python in use: L:\Comfyui-p13-t28-c128\venv\Scripts\python.exe
✅ Triton found - version: 3.5.0
✅ SageAttention found - version: not specified
❌ FlashAttention not found
============================================================
✅ Check finished
Deactivating virtual environment...
✔ Environment check finished for ComfyUI (Py 3.13, Torch 2.8, CUDA 12.8).

Console:
backend='inductor' raised:
ImportError: cannot import name 'triton_key' from 'triton.compiler.compiler' (L:\Comfyui-p13-t28-c128\venv\Lib\site-packages\triton\compiler\compiler.py)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
I was getting a dtype error before; this is better than that, right?... This is the Sage wheel I'm using: sageattention-2.2.0+cu128torch2.8.0-cp313-cp313-win_amd64.whl
Thanks
Yes, this is a different error, seemingly unrelated to the dtype. I can't say what's causing it, but it looks like a more generic issue with the Triton installation. Since the error comes from inductor, it's probably unrelated to SageAttention.
You should probably ask here instead about that: https://github.com/woct0rdho/triton-windows
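Before asking there, a quick standalone check (run inside the same venv) reproduces the exact import that inductor fails on in the traceback above:

python -c "from triton.compiler.compiler import triton_key"

If that raises the same ImportError outside ComfyUI, the Triton installation itself doesn't match what this torch build expects.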
Thank you so much, I'll keep researching since I wanted to try that model. Thanks for all the fantastic work you do. We really appreciate it.
To be clear: torch.compile is NOT necessary to run the model, it just makes it run faster. You can always disconnect/remove the torch.compile node, too.
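If you'd rather rule it out without editing the workflow, one option (a sketch, assuming a standard ComfyUI launch; this disables dynamo for the whole session) is to set PyTorch's environment variable before starting:

set TORCHDYNAMO_DISABLE=1
python main.py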
Wow, I didn't know that... Anyway, the official triton-windows repository recommends Torch 2.9 for triton-windows 3.5, so I'll try Torch 2.9, and if it doesn't work I'll remove the torch.compile node. Thanks for the info.
Don't bother, LongCat sucks. I made a few examples and posted my workflow in a previous post here.