LTX2 NAG Error
Thanks for all your hard work Kijai!
I've found that when using the LTX2 NAG node with the LTX-2.3 diffusion dev model (transformer only) on the RTX 6000 Pro, the node fails in dynamic vram mode but not in highvram mode. The error complains that not all tensors are on the same device. This did not happen with the LTX-2 diffusion dev model (transformer only), and the official LTX-2.3 dev checkpoint model works with your LTX2 NAG node in dynamic vram mode. Could you please look into it?
Edit: I'm running KJNodes 1.3.3 and ComfyUI 0.16.3.
Update: Interestingly, I've just found that it doesn't matter which model is used when running a fresh workflow. That is, if the model is not loaded into cache first, the LTX2 NAG node throws the error. If I first run the workflow with the NAG node bypassed, then run it again with the model already in cache, the LTX2 NAG node does not throw an error. My initial request still stands: could you please look into this?
I presume that there isn't any issue on your end testing the LTX2 NAG node with LTX-2.3?
This is the full error from the console window, using RuneXX's workflow with the LTX-2.3 Dev transformer-only model. It's the same error I get with the LTX2 NAG node in my own workflow:
```
!!! Exception during processing !!! Expected all tensors to be on the same device, but got mat1 is on cuda:0, different from other tensors on cpu (when checking argument in method wrapper_CUDA_addmm)
Traceback (most recent call last):
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\execution.py", line 524, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\execution.py", line 333, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\execution.py", line 307, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\execution.py", line 295, in process_inputs
    result = f(**inputs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy_api\latest\_io.py", line 1764, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-kjnodes\nodes\ltxv_nodes.py", line 483, in execute
    context_video = diffusion_model.preprocess_text_embeds(context_video.to(device=device, dtype=dtype), unprocessed=True)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\av_model.py", line 578, in preprocess_text_embeds
    out_vid = self.video_embeddings_connector(context_vid)[0]
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\embeddings_connector.py", line 297, in forward
    hidden_states = block(
        hidden_states, attention_mask=attention_mask, pe=freqs_cis
    )
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\embeddings_connector.py", line 93, in forward
    attn_output = self.attn1(norm_hidden_states, mask=attention_mask, pe=pe)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\model.py", line 401, in forward
    q = self.to_q(x)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 375, in forward
    return super().forward(*args, **kwargs)
  File "A:\_AI_Programs\ComfyUI_windows_portable_nvidia_0.16.3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 134, in forward
    return F.linear(input, self.weight, self.bias)
```
Do you have KJNodes on the latest nightly version? There were some bugs with it before; I fixed everything I found a couple of days ago, and I've been using NAG in almost every workflow myself with no issues.
Thank you for looking into this. I thought I had updated the nodes. I'm now on ComfyUI 0.16.4, and I updated KJNodes today through the Manager. I'll do a git pull and see if that fixes it.
> Do you have KJNodes in latest nightly version? There were some bugs with it before, I did fix everything I found couple of days ago and I have been using NAG in almost every workflow myself with no issues.
Yes! The recent update fixed the issue with both your transformer-only model and the default checkpoint model when using LTX2 NAG. Many thanks!!
