Adrenaline 24 HIP 6.2.4 Zluda Comfy UI
sleepyrobo • 5d ago
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides#amd-comfyui-with-zluda
Try this, it's for Windows using ZLUDA.
_____________________________________________________________________________________________________________________________________________
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides#amd-comfyui-with-zluda
Need to use HIP 6.2.
_____________________________________________________________________________________________________________________________________________
[AMD] ComfyUI with ZLUDA
Option 1: For a nearly fully automated install, follow the guide of PatientX's ComfyUI AMD fork:
It's made for ROCm 5.7 but also supports 6.1 and 6.2 if you know how to switch manually. The RX 580 works with it. https://github.com/patientx/ComfyUI-Zluda
Option 2: If you have an AMD RX 6000 or 7000 series card and want ROCm 6.2 or 6.1 support, follow my manual guide below:
Install ComfyUI on AMD Windows with ZLUDA:
For advanced users
GPUs Supported
Requires the AMD Adrenalin 24.1.1 driver or newer.
Preparation:
Make sure your graphics card driver is up to date.
Download and install the Git 64-bit setup.exe from here: https://git-scm.com/download/win
Download and install the Python 3.10.11 64-bit setup.exe from here: https://www.python.org/downloads/release/python-31011/
On the first installer screen, check "Add Python to PATH".
Getting the ComfyUI ZLUDA fork:
Make a new folder on your drive (not in Desktop, Downloads, Documents, Program Files, or OneDrive) and name it SD-Zluda.
Go into the folder you just created (here: SD-Zluda), click into the File Explorer address bar (not the search bar), type cmd and press Enter.
Then copy and paste this command:
git clone https://github.com/LeagueRaINi/ComfyUI.git && cd ComfyUI && git checkout nodes-cudnn-patch
Press Enter; once it's done you can close the cmd window. You should now have a ComfyUI folder. Press F5 to refresh the view if it's not visible.
Download any model from Civitai.com, for example DreamShaper v8, and move it into ComfyUI/models/checkpoints.
You can also link your existing models from other WebUIs or folders; for that, check out the steps at the bottom.
Launching ComfyUI
Now we need to create a launch file to easily install and start ComfyUI:
To create the file, first click the link below:
https://raw.githubusercontent.com/CS1o/Stable-Diffusion-Info/main/Resources/Start-ComfyUI.bat
On that page, right-click and select "Save Page As".
Before saving, make sure to set the file type to All Files (*.*), then press Save.
If it's still a .txt file, you need to enable file name extensions in File Explorer:
in Win10: View > File name extensions
in Win11: View > Show > File name extensions
Then rename the file, changing the .txt extension to .bat.
Move Start-ComfyUI.bat into the ComfyUI folder.
Launch Start-ComfyUI.bat. It will install everything needed, and when it's done the WebUI opens in your default browser (see the launcher sketch below).
Generating won't work yet because ZLUDA is still missing, so close everything and proceed with the next steps.
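For orientation: the error reports further down show the arguments ComfyUI ends up being started with, so once the install has finished, a hand-rolled launcher would look roughly like the sketch below. This is an assumption for illustration; the real Start-ComfyUI.bat may differ.
:: rough launcher sketch only - the actual Start-ComfyUI.bat may do more (e.g. create the venv)
@echo off
cd /d %~dp0
call venv\Scripts\activate.bat
python main.py --auto-launch --disable-xformers --use-quad-cross-attention --reserve-vram 0.8
pause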
Setting up ZLUDA:
Important: If you already installed the Automatic1111 or Forge ZLUDA version with my guide, you can jump directly to Step 5 of the ZLUDA setup.
Important: If you want to use the latest ROCm HIP SDK 6.2.4 and your GPU is newer than an RX 6800, then:
Download and install AMD HIP SDK 6.2.4 from here
Download the latest ZLUDA files for ROCm 6 from here
Unzip them as a folder, move it onto the C: drive, and rename the folder to ZLUDA
Then restart your PC, skip to Step 4 below, and proceed with the guide from there.
If you want to use the latest ROCm HIP SDK 6.2.4 and your GPU is older than an RX 6800, you need to do the following steps:
Check whether your GPU is supported by comparing the GPU GFX list with the list of GFX library files for 6.2.4
If the files are available for your GPU, then you need to download them from here:
Download and install AMD HIP SDK 6.2.4 from here
Download the latest ZLUDA files for ROCm 6 from here
Unzip them as a folder, move it onto the C: drive, and rename the folder to ZLUDA
Go into the C:\Program Files\AMD\ROCm\6.2\bin\rocblas folder.
There, make a copy of the library folder and rename the copy to old_library (it serves as a backup).
Open the downloaded .zip file and drag and drop all files from its library folder into the library folder (not into the old_library folder).
If the zip contains a rocblas.dll, copy that into the C:\Program Files\AMD\ROCm\6.2\bin\rocblas folder. A cmd sketch of this folder swap follows below.
Important: Restart your PC after this step before proceeding at Step 4 (skip 1-3) of the guide below.
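The same folder swap can also be done from an elevated cmd window; a minimal sketch, assuming the downloaded zip was extracted to a hypothetical folder C:\Temp\gfx-files:
:: run cmd as Administrator; back up the stock library folder first
cd /d "C:\Program Files\AMD\ROCm\6.2\bin\rocblas"
xcopy /E /I library old_library
:: overwrite library with the files extracted from the zip (C:\Temp\gfx-files is a placeholder path)
xcopy /E /Y "C:\Temp\gfx-files\library" library
:: only if the zip also contained a rocblas.dll:
copy /Y "C:\Temp\gfx-files\rocblas.dll" .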
If your GPU is not supported by ROCm 6.2.4, then proceed with the guide at Step 1 below:
IMPORTANT: If your GPU is newer than an RX 6800, skip Step 3.
Install AMD HIP SDK 6.1.2 from here: https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html
Don't check the Pro driver.
Download the latest ZLUDA build for ROCm 6.1.2 as a zip file from here
Unzip it as a folder, move it onto the C: drive, and rename the folder to ZLUDA
If your GPU is older than an RX 6800, you need to do the following steps:
Check your GPU's GFX version here: GPU GFX List
3.1 Download the corresponding .zip file for your GPU's GFX version from here
For gfx900/906 (Vega 56, 64 and VII) it's this one
3.2 Go into the C:\Program Files\AMD\ROCm\6.1\bin\rocblas folder.
There, make a copy of the library folder and rename the copy to old_library.
3.3 Open the .zip file and drag and drop all files from its library folder into the library folder (not into the old_library folder).
If the zip contains a rocblas.dll, copy that into the C:\Program Files\AMD\ROCm\6.1\bin\rocblas folder. (The cmd sketch above applies here as well, with 6.1 in place of 6.2 in the paths.)
Important: Restart your PC after this step before proceeding.
Add the C:\ZLUDA folder and %HIP_PATH%bin to your PATH in the System Variables, as shown here:
https://www.wikihow.com/Change-the-PATH-Environment-Variable-on-Windows
It should look like this here
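To confirm the two entries took effect, open a new cmd window (a fresh one, so it picks up the updated environment) and check them; a quick sketch:
:: HIP_PATH is set by the HIP SDK installer, e.g. C:\Program Files\AMD\ROCm\6.1\
echo %HIP_PATH%
:: both lookups should print the PATH line containing the entry
echo %PATH% | find /I "C:\ZLUDA"
echo %PATH% | find /I "ROCm"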
Go into the C:\ZLUDA folder.
There, make copies of cublas.dll, cusparse.dll and nvrtc.dll and paste them back into the same folder.
Rename the copies to cublas64_11.dll, cusparse64_11.dll and nvrtc64_112_0.dll
Copy these three renamed files into ComfyUI\venv\Lib\site-packages\torch\lib and overwrite if asked. (See the cmd sketch below.)
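The same copy-and-rename steps as cmd commands; a sketch, assuming ComfyUI lives in F:\SD-Zluda\ComfyUI as in the logs further down:
cd /d C:\ZLUDA
copy /Y cublas.dll cublas64_11.dll
copy /Y cusparse.dll cusparse64_11.dll
copy /Y nvrtc.dll nvrtc64_112_0.dll
:: drop the renamed DLLs over the CUDA ones inside the venv's torch
copy /Y cublas64_11.dll "F:\SD-Zluda\ComfyUI\venv\Lib\site-packages\torch\lib\"
copy /Y cusparse64_11.dll "F:\SD-Zluda\ComfyUI\venv\Lib\site-packages\torch\lib\"
copy /Y nvrtc64_112_0.dll "F:\SD-Zluda\ComfyUI\venv\Lib\site-packages\torch\lib\"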
Launch Start-ComfyUI.bat. The WebUI opens in your default browser and is ready to generate!
Noteworthy: the first image generation can take from 15 up to 40 minutes. First time only.
If you get a NumPy error, do the following:
Install ComfyUI-Manager as described below, then click Manager > Install PIP Packages, type numpy==1.26.4, click OK, and restart the server when prompted.
If you get a security warning, go into ComfyUI\user\default\ComfyUI-Manager and edit the config.ini there.
Change the security level to weak, then save and try installing numpy again.
After that, set the security level back to normal. (A direct pip alternative is sketched below.)
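If you'd rather skip the Manager round trip, the same version pin can be applied straight from a cmd window with the venv's own pip; a sketch using the install path from the logs:
cd /d F:\SD-Zluda\ComfyUI
venv\Scripts\pip.exe install numpy==1.26.4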
Getting ComfyUI-Manager:
After the installation is done, go into the ComfyUI/custom_nodes folder.
Click into the File Explorer address bar (not the search bar), type cmd and press Enter.
Then copy and paste this command:
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
Press Enter; once it's done you can close the cmd window.
Relaunch Start-ComfyUI.bat
Linking models, LoRAs, etc. from other WebUIs or folders to ComfyUI:
If you want to link all models from another WebUI, you can do that by renaming extra_model_paths.yaml.example to extra_model_paths.yaml
Then right-click the file and edit it with Notepad or Notepad++.
If you have Automatic1111 installed, you only need to change the base_path line, as in my example that links to the ZLUDA Auto1111 WebUI:
base_path: C:\SD-Zluda\stable-diffusion-webui-amdgpu
Then save and relaunch Start-ComfyUI.bat (a sketch of the edited file follows below).
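For reference, the a111 section of the edited file looks roughly like this (abridged; the sub-paths are the template's defaults, only base_path changes):
a111:
    base_path: C:\SD-Zluda\stable-diffusion-webui-amdgpu
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet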
_____________________________________________________________________________________________________________________________________________
C:\Users\FatherOfMachines>python --version
Python 3.10.11

Microsoft Windows [Version 10.0.22631.4602]
(c) Microsoft Corporation. All rights reserved.

F:\SD-Zluda>git clone https://github.com/LeagueRaINi/ComfyUI.git && cd ComfyUI && git checkout nodes-cudnn-patch
Cloning into 'ComfyUI'...
remote: Enumerating objects: 17073, done.
remote: Counting objects: 100% (2845/2845), done.
remote: Compressing objects: 100% (243/243), done.
remote: Total 17073 (delta 2702), reused 2602 (delta 2602), pack-reused 14228 (from 1)
Receiving objects: 100% (17073/17073), 56.74 MiB | 7.97 MiB/s, done.
Resolving deltas: 100% (11493/11493), done.
Switched to a new branch 'nodes-cudnn-patch'
branch 'nodes-cudnn-patch' set up to track 'origin/nodes-cudnn-patch'.
_____________________________________________________________________________________________________________________________________________
https://github.com/lshqqytiger/ZLUDA/releases/
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides#amd-comfyui-with-zluda
restart
_____________________________________________________________________________________________________________________________________________
# ComfyUI Error Report
## Error Details
- **Node ID:** 7
- **Node Type:** CLIPTextEncode
- **Exception Type:** RuntimeError
- **Exception Message:** CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
## Stack Trace
```
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\nodes.py", line 69, in encode | |
return (clip.encode_from_tokens_scheduled(tokens), ) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd.py", line 149, in encode_from_tokens_scheduled | |
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd.py", line 211, in encode_from_tokens | |
o = self.cond_stage_model.encode_token_weights(tokens) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 640, in encode_token_weights | |
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights | |
o = self.encode(to_encode) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 252, in encode | |
return self(tokens) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 224, in forward | |
outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 137, in forward | |
x = self.text_model(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 113, in forward | |
x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 70, in forward | |
x = l(x, mask, optimized_attention) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 51, in forward | |
x += self.self_attn(self.layer_norm1(x), mask, optimized_attention) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 17, in forward | |
q = self.q_proj(x) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\ops.py", line 68, in forward | |
return self.forward_comfy_cast_weights(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\ops.py", line 64, in forward_comfy_cast_weights | |
return torch.nn.functional.linear(input, weight, bias) | |
``` | |
## System Information
- **ComfyUI Version:** 0.3.14
- **Arguments:** main.py --auto-launch --disable-xformers --use-quad-cross-attention --reserve-vram 0.8
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.3.1+cu118
## Devices
- **Name:** cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native
- **Type:** cuda
- **VRAM Total:** 25753026560
- **VRAM Free:** 23690173440
- **Torch VRAM Total:** 268435456
- **Torch VRAM Free:** 4094976
## Logs
``` | |
2025-02-22T10:21:37.692882 - Warning, you are using an old pytorch version and some ckpt/pt files might be loaded unsafely. Upgrading to 2.4 or above is recommended. | |
2025-02-22T10:21:38.144184 - Total VRAM 24560 MB, total RAM 65367 MB | |
2025-02-22T10:21:38.144184 - pytorch version: 2.3.1+cu118 | |
2025-02-22T10:21:38.144184 - Set vram state to: NORMAL_VRAM | |
2025-02-22T10:21:38.144184 - [36mDetected ZLUDA, support for it is experimental and comfy may not work properly.[0m | |
2025-02-22T10:21:38.144184 - [36mDisabling cuDNN because ZLUDA does currently not support it.[0m | |
2025-02-22T10:21:38.144184 - Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native. | |
2025-02-22T10:21:39.261982 - | |
A module that was compiled using NumPy 1.x cannot be run in | |
NumPy 2.2.3 as it may crash. To support both 1.x and 2.x | |
versions of NumPy, modules must be compiled with NumPy 2.0. | |
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. | |
If you are a user of the module, the easiest solution will be to | |
downgrade to 'numpy<2' or try to upgrade the affected module. | |
We expect that some modules will need time to support NumPy 2. | |
Traceback (most recent call last): File "F:\SD-Zluda\ComfyUI\main.py", line 136, in <module> | |
import execution | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 13, in <module> | |
import nodes | |
File "F:\SD-Zluda\ComfyUI\nodes.py", line 22, in <module> | |
import comfy.diffusers_load | |
File "F:\SD-Zluda\ComfyUI\comfy\diffusers_load.py", line 3, in <module> | |
import comfy.sd | |
File "F:\SD-Zluda\ComfyUI\comfy\sd.py", line 10, in <module> | |
from .ldm.cascade.stage_c_coder import StageC_coder | |
File "F:\SD-Zluda\ComfyUI\comfy\ldm\cascade\stage_c_coder.py", line 19, in <module> | |
import torchvision | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torchvision\__init__.py", line 6, in <module> | |
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torchvision\models\__init__.py", line 2, in <module> | |
from .convnext import * | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torchvision\models\convnext.py", line 8, in <module> | |
from ..ops.misc import Conv2dNormActivation, Permute | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torchvision\ops\__init__.py", line 23, in <module> | |
from .poolers import MultiScaleRoIAlign | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torchvision\ops\poolers.py", line 10, in <module> | |
from .roi_align import roi_align | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torchvision\ops\roi_align.py", line 4, in <module> | |
import torch._dynamo | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\__init__.py", line 64, in <module> | |
torch.manual_seed = disable(torch.manual_seed) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\decorators.py", line 50, in disable | |
return DisableContext()(fn) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 410, in __call__ | |
(filename is None or trace_rules.check(fn)) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 3378, in check | |
return check_verbose(obj, is_inlined_call).skipped | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 3361, in check_verbose | |
rule = torch._dynamo.trace_rules.lookup_inner( | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 3442, in lookup_inner | |
rule = get_torch_obj_rule_map().get(obj, None) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 2782, in get_torch_obj_rule_map | |
obj = load_object(k) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 2811, in load_object | |
val = _load_obj_from_str(x[0]) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 2795, in _load_obj_from_str | |
return getattr(importlib.import_module(module), obj_name) | |
File "C:\Program Files\Python_3_10\lib\importlib\__init__.py", line 126, in import_module | |
return _bootstrap._gcd_import(name[level:], package, level) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nested\_internal\nested_tensor.py", line 417, in <module> | |
values=torch.randn(3, 3, device="meta"), | |
2025-02-22T10:21:39.265982 - F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nested\_internal\nested_tensor.py:417: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.) | |
values=torch.randn(3, 3, device="meta"), | |
2025-02-22T10:21:39.399106 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention | |
2025-02-22T10:21:48.989186 - ComfyUI version: 0.3.14 | |
2025-02-22T10:21:49.012875 - ****** User settings have been changed to be stored on the server instead of browser storage. ****** | |
2025-02-22T10:21:49.012875 - ****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ****** | |
2025-02-22T10:21:49.012875 - [Prompt Server] web root: F:\SD-Zluda\ComfyUI\web | |
2025-02-22T10:21:49.032872 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_latent to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.037233 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hypernetwork to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.121726 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_upscale_model to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.121726 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_post_processing to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.133730 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mask to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.141369 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_compositing to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.145267 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_rebatch to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.157282 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_merging to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.165384 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_tomesd to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.169384 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_clip_sdxl to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.600166 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_canny to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.608170 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_freelunch to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.672170 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_custom_sampler to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.684170 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hypertile to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.691818 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_advanced to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.699824 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_downscale to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.707829 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_images to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.715883 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_video_model to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.723884 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sag to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.731883 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_perpneg to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.739882 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_stable3d to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.747882 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sdupscale to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.752275 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_photomaker to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.760279 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_pixart to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.764279 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_cond to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.768278 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_morphology to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.776280 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_stable_cascade to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.784585 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_differential_diffusion to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.788588 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_ip2p to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.796119 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_merging_model_specific to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.804126 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_pag to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.808129 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_align_your_steps to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.816606 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_attention_multiply to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.820717 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_advanced_samplers to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:49.828720 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_webcam to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.054690 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_audio to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.070419 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sd3 to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.082418 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_gits to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.090421 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_controlnet to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.094422 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hunyuan to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.102069 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_flux to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.110071 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_lora_extract to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.114076 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_torch_compile to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.118078 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mochi to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.118078 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_slg to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.126079 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mahiro to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.130111 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_lt to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.142116 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hooks to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.150116 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_load_3d to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.154115 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_cosmos to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.162338 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\custom_nodes\websocket_image_save to prevent enabling cuDNN.[0m | |
2025-02-22T10:21:50.162338 - | |
Import times for custom nodes: | |
2025-02-22T10:21:50.162338 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\websocket_image_save.py | |
2025-02-22T10:21:50.162338 - | |
2025-02-22T10:21:50.166424 - Starting server | |
2025-02-22T10:21:50.166424 - To see the GUI go to: http://127.0.0.1:8188 | |
2025-02-22T10:22:14.182535 - got prompt | |
2025-02-22T10:22:14.355183 - model weight dtype torch.float16, manual cast: None | |
2025-02-22T10:22:14.449319 - model_type EPS | |
2025-02-22T10:22:16.504876 - Using split attention in VAE | |
2025-02-22T10:22:16.508877 - Using split attention in VAE | |
2025-02-22T10:22:16.730589 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16 | |
2025-02-22T10:22:16.806096 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16 | |
2025-02-22T10:22:17.526342 - Requested to load SD1ClipModel | |
2025-02-22T10:22:17.593854 - loaded completely 22772.75625 235.84423828125 True | |
2025-02-22T10:25:09.755591 - !!! Exception during processing !!! CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` | |
2025-02-22T10:25:09.765636 - Traceback (most recent call last): | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\nodes.py", line 69, in encode | |
return (clip.encode_from_tokens_scheduled(tokens), ) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd.py", line 149, in encode_from_tokens_scheduled | |
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd.py", line 211, in encode_from_tokens | |
o = self.cond_stage_model.encode_token_weights(tokens) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 640, in encode_token_weights | |
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights | |
o = self.encode(to_encode) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 252, in encode | |
return self(tokens) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\sd1_clip.py", line 224, in forward | |
outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 137, in forward | |
x = self.text_model(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 113, in forward | |
x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 70, in forward | |
x = l(x, mask, optimized_attention) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 51, in forward | |
x += self.self_attn(self.layer_norm1(x), mask, optimized_attention) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\clip_model.py", line 17, in forward | |
q = self.q_proj(x) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl | |
return self._call_impl(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl | |
return forward_call(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\ops.py", line 68, in forward | |
return self.forward_comfy_cast_weights(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\comfy\ops.py", line 64, in forward_comfy_cast_weights | |
return torch.nn.functional.linear(input, weight, bias) | |
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` | |
2025-02-22T10:25:09.765636 - Prompt executed in 175.58 seconds | |
``` | |
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":9,"last_link_id":9,"nodes":[{"id":7,"type":"CLIPTextEncode","pos":[413,389],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":5}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":6,"type":"CLIPTextEncode","pos":[415,186],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"]},{"id":5,"type":"EmptyLatentImage","pos":[473,609],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,512,1]},{"id":3,"type":"KSampler","pos":[863,186],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[824862170463928,"randomize",20,8,"euler","normal",1]},{"id":8,"type":"VAEDecode","pos":[1209,188],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7},{"name":"vae","type":"VAE","link":8}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":9,"type":"SaveImage","pos":[1451,189],"size":[210,58],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":4,"type":"CheckpointLoaderSimple","pos":[26,474],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[3,5],"slot_index":1},{"name":"VAE","type":"VAE","links":[8],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["dreamshaper_5BakedVae.safetensors"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[9,8,0,9,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":{"0":0,"1":0}}},"version":0.4} | |
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
_____________________________________________________________________________________________________________________________________________
numpy error
Microsoft Windows [Version 10.0.22631.4602]
(c) Microsoft Corporation. All rights reserved.

F:\SD-Zluda\ComfyUI\custom_nodes>git clone https://github.com/ltdrdata/ComfyUI-Manager.git
Cloning into 'ComfyUI-Manager'...
remote: Enumerating objects: 17913, done.
remote: Counting objects: 100% (2916/2916), done.
remote: Compressing objects: 100% (223/223), done.
remote: Total 17913 (delta 2782), reused 2697 (delta 2693), pack-reused 14997 (from 1)
Receiving objects: 100% (17913/17913)
Resolving deltas: 100% (13274/13274), done.

F:\SD-Zluda\ComfyUI\custom_nodes>
# ComfyUI Error Report
## Error Details
- **Node ID:** 30
- **Node Type:** HyVideoTextEncode
- **Exception Type:** IndexError
- **Exception Message:** list index out of range
## Stack Trace
```
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 905, in process | |
prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self, | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 830, in encode_prompt | |
text_inputs = text_encoder.text2tokens(prompt, | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 253, in text2tokens | |
text_tokens = self.processor( | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\models\llava\processing_llava.py", line 145, in __call__ | |
image_inputs = self.image_processor(images, **output_kwargs["images_kwargs"]) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\image_processing_utils.py", line 42, in __call__ | |
return self.preprocess(images, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 312, in preprocess | |
if do_rescale and is_scaled_image(images[0]): | |
```
## System Information
- **ComfyUI Version:** 0.3.14
- **Arguments:** main.py --auto-launch --disable-xformers --use-quad-cross-attention --reserve-vram 0.8
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.3.1+cu118
## Devices
- **Name:** cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native
- **Type:** cuda
- **VRAM Total:** 25753026560
- **VRAM Free:** 7207262720
- **Torch VRAM Total:** 16863199232
- **Torch VRAM Free:** 143256064
## Logs
```
2025-02-23T13:40:23.752223 - [START] Security scan
2025-02-23T13:40:24.260647 - [DONE] Security scan
2025-02-23T13:40:24.318042 - ## ComfyUI-Manager: installing dependencies done.
2025-02-23T13:40:24.333637 - ** ComfyUI startup time: 2025-02-23 13:40:24.333
2025-02-23T13:40:24.333890 - ** Platform: Windows
2025-02-23T13:40:24.333890 - ** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-02-23T13:40:24.333890 - ** Python executable: F:\SD-Zluda\ComfyUI\venv\Scripts\python.exe
2025-02-23T13:40:24.333890 - ** ComfyUI Path: F:\SD-Zluda\ComfyUI
2025-02-23T13:40:24.333890 - ** ComfyUI Base Folder Path: F:\SD-Zluda\ComfyUI
2025-02-23T13:40:24.333890 - ** User directory: F:\SD-Zluda\ComfyUI\user
2025-02-23T13:40:24.334930 - ** ComfyUI-Manager config path: F:\SD-Zluda\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-02-23T13:40:24.343615 - ** Log path: F:\SD-Zluda\ComfyUI\user\comfyui.log
2025-02-23T13:40:24.868186 - | |
Prestartup times for custom nodes: | |
2025-02-23T13:40:24.868186 - 1.4 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\ComfyUI-Manager | |
2025-02-23T13:40:24.868186 - | |
2025-02-23T13:40:25.845780 - Warning, you are using an old pytorch version and some ckpt/pt files might be loaded unsafely. Upgrading to 2.4 or above is recommended. | |
2025-02-23T13:40:26.192623 - Total VRAM 24560 MB, total RAM 65367 MB | |
2025-02-23T13:40:26.192623 - pytorch version: 2.3.1+cu118 | |
2025-02-23T13:40:26.192623 - Set vram state to: NORMAL_VRAM | |
2025-02-23T13:40:26.192623 - [36mDetected ZLUDA, support for it is experimental and comfy may not work properly.[0m | |
2025-02-23T13:40:26.192623 - [36mDisabling cuDNN because ZLUDA does currently not support it.[0m | |
2025-02-23T13:40:26.192623 - Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native. | |
2025-02-23T13:40:26.801718 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention | |
2025-02-23T13:40:27.625151 - ComfyUI version: 0.3.14 | |
2025-02-23T13:40:27.644388 - [Prompt Server] web root: F:\SD-Zluda\ComfyUI\web | |
2025-02-23T13:40:27.645384 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_latent to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.646477 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hypernetwork to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.698675 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_upscale_model to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.698675 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_post_processing to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.698675 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mask to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.698675 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_compositing to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.698675 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_rebatch to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.698675 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_merging to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.714465 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_tomesd to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.715501 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_clip_sdxl to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_canny to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_freelunch to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_custom_sampler to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hypertile to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_advanced to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_downscale to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_images to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_video_model to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sag to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_perpneg to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.941246 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_stable3d to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sdupscale to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_photomaker to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_pixart to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_cond to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_morphology to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_stable_cascade to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_differential_diffusion to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_ip2p to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_merging_model_specific to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_pag to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_align_your_steps to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_attention_multiply to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_advanced_samplers to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:27.955749 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_webcam to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.010625 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_audio to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.011625 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sd3 to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013625 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_gits to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_controlnet to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hunyuan to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_flux to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_lora_extract to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_torch_compile to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mochi to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_slg to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mahiro to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_lt to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hooks to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_load_3d to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_cosmos to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.013703 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node comfy-image-saver to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.249297 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node comfyui-hunyuanvideowrapper to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.249297 - Total VRAM 24560 MB, total RAM 65367 MB | |
2025-02-23T13:40:28.249297 - pytorch version: 2.3.1+cu118 | |
2025-02-23T13:40:28.249297 - Set vram state to: NORMAL_VRAM | |
2025-02-23T13:40:28.249297 - [36mDetected ZLUDA, support for it is experimental and comfy may not work properly.[0m | |
2025-02-23T13:40:28.249297 - Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native. | |
2025-02-23T13:40:28.268841 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node comfyui-kjnodes to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.270849 - ### Loading: ComfyUI-Manager (V3.25.1) | |
2025-02-23T13:40:28.270849 - [ComfyUI-Manager] network_mode: public | |
2025-02-23T13:40:28.360786 - ### ComfyUI Revision: 3161 on 'nodes-cudnn-patch' [07833a5f] | Released on '2025-02-10' | |
2025-02-23T13:40:28.471590 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node ComfyUI-Manager to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.510464 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node comfyui-videohelpersuite to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.510464 - [32m[ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\custom_nodes\websocket_image_save to prevent enabling cuDNN.[0m | |
2025-02-23T13:40:28.510464 - | |
Import times for custom nodes: | |
2025-02-23T13:40:28.510464 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\websocket_image_save.py | |
2025-02-23T13:40:28.510464 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfy-image-saver | |
2025-02-23T13:40:28.510464 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-kjnodes | |
2025-02-23T13:40:28.510464 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-videohelpersuite | |
2025-02-23T13:40:28.510464 - 0.2 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\ComfyUI-Manager | |
2025-02-23T13:40:28.510464 - 0.2 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper | |
2025-02-23T13:40:28.510464 - | |
2025-02-23T13:40:28.510464 - Starting server | |
2025-02-23T13:40:28.510464 - To see the GUI go to: http://127.0.0.1:8188 | |
2025-02-23T13:40:28.878893 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json | |
2025-02-23T13:40:28.878893 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json | |
2025-02-23T13:40:28.899728 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json | |
2025-02-23T13:40:28.942268 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json | |
2025-02-23T13:40:28.952271 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json | |
2025-02-23T13:40:31.229929 - got prompt | |
2025-02-23T13:40:32.900627 - FETCH ComfyRegistry Data: 5/34
2025-02-23T13:40:33.183913 - F:\SD-Zluda\ComfyUI\venv\lib\site-packages\diffusers\models\attention_processor.py:3286: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.) | |
hidden_states = F.scaled_dot_product_attention( | |
2025-02-23T13:40:33.427679 - encoded latents shape torch.Size([1, 16, 1, 120, 68])
2025-02-23T13:40:33.427679 - Loading text encoder model (clipL) from: F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14 | |
2025-02-23T13:40:33.434684 - !!! Exception during processing !!! Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14. | |
2025-02-23T13:40:33.463889 - Traceback (most recent call last): | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 683, in loadmodel | |
text_encoder_2 = TextEncoder( | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 167, in __init__ | |
self.model, self.model_path = load_text_encoder( | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 36, in load_text_encoder | |
text_encoder = CLIPTextModel.from_pretrained(text_encoder_path) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper | |
return func(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3808, in from_pretrained | |
raise EnvironmentError( | |
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14. | |
2025-02-23T13:40:33.464889 - Prompt executed in 2.23 seconds | |
2025-02-23T13:40:36.757132 - FETCH ComfyRegistry Data: 10/34
2025-02-23T13:40:40.757312 - FETCH ComfyRegistry Data: 15/34
2025-02-23T13:40:44.667249 - FETCH ComfyRegistry Data: 20/34
2025-02-23T13:40:45.723791 - got prompt | |
2025-02-23T13:40:45.735794 - Loading text encoder model (clipL) from: F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14 | |
2025-02-23T13:40:45.735794 - !!! Exception during processing !!! Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14. | |
2025-02-23T13:40:45.735794 - Traceback (most recent call last): | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 683, in loadmodel | |
text_encoder_2 = TextEncoder( | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 167, in __init__ | |
self.model, self.model_path = load_text_encoder( | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 36, in load_text_encoder | |
text_encoder = CLIPTextModel.from_pretrained(text_encoder_path) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper | |
return func(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3808, in from_pretrained | |
raise EnvironmentError( | |
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14. | |
2025-02-23T13:40:45.735794 - Prompt executed in 0.01 seconds | |
2025-02-23T13:40:48.405655 - FETCH ComfyRegistry Data: 25/34
2025-02-23T13:40:52.658815 - FETCH ComfyRegistry Data: 30/34
2025-02-23T13:40:56.201619 - FETCH ComfyRegistry Data [DONE]
2025-02-23T13:40:56.230737 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes | |
2025-02-23T13:40:56.264355 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote | |
2025-02-23T13:40:56.264355 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-02-23T13:40:56.344458 - [DONE]
2025-02-23T13:40:56.377116 - [ComfyUI-Manager] All startup tasks have been completed. | |
2025-02-23T14:44:21.323115 - got prompt | |
2025-02-23T14:44:21.340497 - Loading text encoder model (clipL) from: F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14 | |
2025-02-23T14:44:21.341497 - !!! Exception during processing !!! Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14. | |
2025-02-23T14:44:21.342497 - Traceback (most recent call last): | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 683, in loadmodel | |
text_encoder_2 = TextEncoder( | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 167, in __init__ | |
self.model, self.model_path = load_text_encoder( | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 36, in load_text_encoder | |
text_encoder = CLIPTextModel.from_pretrained(text_encoder_path) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper | |
return func(*args, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3808, in from_pretrained | |
raise EnvironmentError( | |
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14. | |
2025-02-23T14:44:21.343497 - Prompt executed in 0.01 seconds | |
2025-02-23T15:47:09.390916 - got prompt | |
2025-02-23T15:47:09.406540 - Loading text encoder model (clipL) from: F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14 | |
2025-02-23T15:47:09.563107 - Text encoder to dtype: torch.float16 | |
2025-02-23T15:47:09.578732 - Loading tokenizer (clipL) from: F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14 | |
2025-02-23T15:47:09.641618 - Downloading model to: F:\SD-Zluda\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers | |
2025-02-23T15:47:09.892432 - F:\SD-Zluda\ComfyUI\venv\lib\site-packages\huggingface_hub\file_download.py:834: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. | |
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. | |
warnings.warn( | |
2025-02-23T16:22:43.039097 - Fetching 13 files:  54%|█████████████████████████████████▉ | 7/13 [35:33<34:31, 345.25s/it]
2025-02-23T16:22:43.040600 - Fetching 13 files: 100%|██████████████████████████████████████████████████████████████| 13/13 [35:33<00:00, 164.09s/it]
2025-02-23T16:22:43.063603 - Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`. | |
2025-02-23T16:22:43.074915 - You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message. | |
2025-02-23T16:22:43.782711 - Loading text encoder model (vlm) from: F:\SD-Zluda\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers | |
2025-02-23T16:22:52.806637 - Loading checkpoint shards: 100%|█████████████████████████████████████████████████████| 4/4 [00:08<00:00,  2.05s/it]
2025-02-23T16:22:52.806637 - Loading checkpoint shards: 100%|█████████████████████████████████████████████████████| 4/4 [00:08<00:00,  2.13s/it]
2025-02-23T16:23:00.080460 - Text encoder to dtype: torch.bfloat16 | |
2025-02-23T16:23:00.084691 - Loading tokenizer (vlm) from: F:\SD-Zluda\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers | |
2025-02-23T16:23:03.273682 - Unused or unrecognized kwargs: device. | |
2025-02-23T16:23:03.273682 - !!! Exception during processing !!! list index out of range | |
2025-02-23T16:23:03.283720 - Traceback (most recent call last): | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 905, in process | |
prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self, | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 830, in encode_prompt | |
text_inputs = text_encoder.text2tokens(prompt, | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\hyvideo\text_encoder\__init__.py", line 253, in text2tokens | |
text_tokens = self.processor( | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\models\llava\processing_llava.py", line 145, in __call__ | |
image_inputs = self.image_processor(images, **output_kwargs["images_kwargs"]) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\image_processing_utils.py", line 42, in __call__ | |
return self.preprocess(images, **kwargs) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 312, in preprocess | |
if do_rescale and is_scaled_image(images[0]): | |
IndexError: list index out of range | |
2025-02-23T16:23:03.284851 - Prompt executed in 2153.88 seconds | |
``` | |
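A note on the repeated OSError in the log above: the clip-vit-large-patch14 folder exists but holds no weights file, so `CLIPTextModel.from_pretrained` has nothing to load. One way to fix it is to pull the full model snapshot into that folder. A minimal Python sketch, assuming `huggingface_hub` is installed in the venv and using the `openai/clip-vit-large-patch14` repo id that the workflow's text-encoder loader points at:
```
# Sketch: download the complete CLIP-L snapshot (weights included) into the
# folder the HunyuanVideo wrapper scans; local path taken from the log above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="openai/clip-vit-large-patch14",
    local_dir=r"F:\SD-Zluda\ComfyUI\models\clip\clip-vit-large-patch14",
)
```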
## Attached Workflow | |
Please make sure that workflow does not contain any sensitive information such as API keys or passwords. | |
``` | |
{"last_node_id":100,"last_link_id":136,"nodes":[{"id":58,"type":"HyVideoCFG","pos":[-1280,1410],"size":[437.5832824707031,201.83335876464844],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_cfg","type":"HYVID_CFG","links":[130],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoCFG"},"widgets_values":["camera movement, jump cut, scene cut, transition, fading, morphing",6,0,0.5,false]},{"id":57,"type":"HyVideoTorchCompileSettings","pos":[-1854.767822265625,-527.714111328125],"size":[441,274],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"torch_compile_args","type":"COMPILEARGS","links":[105],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoTorchCompileSettings"},"widgets_values":["inductor",false,"default",false,64,true,true,false,false,false]},{"id":88,"type":"HyVideoCustomPromptTemplate","pos":[-1290,800],"size":[453.78076171875,551.197265625],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_prompt_template","type":"PROMPT_TEMPLATE","links":[131],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoCustomPromptTemplate"},"widgets_values":["<|start_header_id|>system<|end_header_id|>\n\nYou are a professional Content Analyst. As a specialist Scene Annotator, you tag and describe scenes in detail, paying special attention to the temporal coherence and sequence of events in the scene.\n\n# INSTRUCTIONS\n\nYou will receive two inputs:\n\n1. The first frame of a video;\n2. A short description of the scene.\n\nYour job is to write a comprehensive description of the scene based on the inputs.\n\n# IMPORTANT INFORMATION \n\nThe scene is only 4s (four seconds long).\nThe scene has 98 frames in total at 24 frames per second (24 fps).\n\n# GUIDELINES\n\n- Write a detailed description based on the inputs.\n- Use your expertise to adapt the scene description so that it is consistent and coherent.\n- Ensure the actions and events described can fit reasonably in a 4 second clip.\n- Be concise and avoid abstract qualifiers, focus on concrete aspects of the scene, in particular the main subject, the motion, actions and events.\n- Use the input image, which is the fisrt frame of the video, to infer the visual qualities, mood and tone of the overall scene.\n- Always consider the appropriate sequence of events for the scene so it fits the four seconds legth of the clip.\n\n# DELIVERABLES\n\nYou will deliver a concise and detailed description of the scene that is consistent with the inputs you receive, and temporally coherent given the length of the scene. You should output something like this:\n\nDETAILED DESCRIPTION OF THE VIDEO SCENE IN APPROPRIATE TEMPORAL SEQUENCE. OVERALL MOOD OF THE SCENE BASED ON THE FIRST FRAME. 
3 TO 5 TAGS THAT REPRESENT THE SCENE GENRE/STYLE/CATEGORY.\n\n\nWrite the scene description: \n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|>",95]},{"id":45,"type":"ImageResizeKJ","pos":[-1220,140],"size":[315,266],"flags":{},"order":11,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":123},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":null,"shape":7,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":null,"shape":7,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[119,120,121],"slot_index":0},{"name":"width","type":"INT","links":[69],"slot_index":1},{"name":"height","type":"INT","links":[70],"slot_index":2}],"properties":{"cnr_id":"comfyui-kjnodes","ver":"f3d931a630e01821fc1375c9aa24401ab2852347","Node name for S&R":"ImageResizeKJ"},"widgets_values":[544,960,"lanczos",false,2,0,0,"center"]},{"id":30,"type":"HyVideoTextEncode","pos":[-750,510],"size":[313.6783752441406,440.2134704589844],"flags":{},"order":12,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":131,"shape":7},{"name":"clip_l","type":"CLIP","link":null,"shape":7},{"name":"hyvid_cfg","type":"HYVID_CFG","link":130,"shape":7}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[74],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoTextEncode"},"widgets_values":["Cinematic scene shows a woman getting up and walking away.The background remains consistent throughout the scene. ","bad quality video","video"]},{"id":43,"type":"HyVideoEncode","pos":[-762.68408203125,-89.60575866699219],"size":[315,198],"flags":{},"order":13,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":135},{"name":"image","type":"IMAGE","link":119}],"outputs":[{"name":"samples","type":"LATENT","links":[75],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoEncode"},"widgets_values":[false,64,256,true,0.04,1]},{"id":52,"type":"ImageConcatMulti","pos":[1265.74609375,-380.3453063964844],"size":[210,150],"flags":{},"order":17,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":120},{"name":"image_2","type":"IMAGE","link":85}],"outputs":[{"name":"images","type":"IMAGE","links":[73],"slot_index":0}],"properties":{"cnr_id":"comfyui-kjnodes","ver":"f3d931a630e01821fc1375c9aa24401ab2852347"},"widgets_values":[2,"right",false,null]},{"id":34,"type":"VHS_VideoCombine","pos":[1526.9112548828125,-380.55364990234375],"size":[215.375,334],"flags":{},"order":19,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":73},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"cnr_id":"comfyui-videohelpersuite","ver":"124c913ccdd8a585734ea758c35fa1bab8499c99","Node name for 
S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo_skyreel_I2V","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_skyreel_I2V_00047.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_skyreel_I2V_00047.png","fullpath":"/home/linux/AI/ComfyUI/output/HunyuanVideo_skyreel_I2V_00047.mp4"},"muted":false}}},{"id":60,"type":"ColorMatch","pos":[893.6535034179688,-226.94412231445312],"size":[315,102],"flags":{},"order":16,"mode":0,"inputs":[{"name":"image_ref","type":"IMAGE","link":121},{"name":"image_target","type":"IMAGE","link":83}],"outputs":[{"name":"image","type":"IMAGE","links":[85,117],"slot_index":0}],"properties":{"cnr_id":"comfyui-kjnodes","ver":"f3d931a630e01821fc1375c9aa24401ab2852347","Node name for S&R":"ColorMatch"},"widgets_values":["mkl",1]},{"id":64,"type":"HyVideoEnhanceAVideo","pos":[-802.6838989257812,246.1566162109375],"size":[352.79998779296875,154],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"feta_args","type":"FETAARGS","links":[91]}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoEnhanceAVideo"},"widgets_values":[4,true,true,0,1]},{"id":59,"type":"HyVideoBlockSwap","pos":[-1742.82958984375,-190.67405700683594],"size":[315,130],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","type":"BLOCKSWAPARGS","links":[108],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoBlockSwap"},"widgets_values":[20,10,false,false]},{"id":78,"type":"VHS_VideoCombine","pos":[1528.812744140625,23.629047393798828],"size":[215.375,334],"flags":{},"order":18,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":117},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"cnr_id":"comfyui-videohelpersuite","ver":"124c913ccdd8a585734ea758c35fa1bab8499c99","Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo_skyreel_I2V","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_skyreel_I2V_00046.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_skyreel_I2V_00046.png","fullpath":"/home/linux/AI/ComfyUI/output/HunyuanVideo_skyreel_I2V_00046.mp4"},"muted":false}}},{"id":79,"type":"LoadImage","pos":[-1738.3333740234375,235.88121032714844],"size":[315,314],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[123],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for 
S&R":"LoadImage"},"widgets_values":["amateur-absurdist.png","image"]},{"id":3,"type":"HyVideoSampler","pos":[51.265254974365234,-204.44427490234375],"size":[416.07513427734375,1142.9561767578125],"flags":{},"order":14,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":134},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":74},{"name":"samples","type":"LATENT","link":null,"shape":7},{"name":"image_cond_latents","type":"LATENT","link":75,"shape":7},{"name":"stg_args","type":"STGARGS","link":null,"shape":7},{"name":"context_options","type":"HYVIDCONTEXT","link":null,"shape":7},{"name":"feta_args","type":"FETAARGS","link":91,"shape":7},{"name":"width","type":"INT","link":69,"widget":{"name":"width"}},{"name":"height","type":"INT","link":70,"widget":{"name":"height"}},{"name":"teacache_args","type":"TEACACHEARGS","link":null,"shape":7}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoSampler"},"widgets_values":[512,320,97,30,1,9,15,"fixed",1,1,"SDE-DPMSolverMultistepScheduler"]},{"id":1,"type":"HyVideoModelLoader","pos":[-1272.8134765625,-201.72789001464844],"size":[426.1773986816406,242],"flags":{},"order":10,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":105,"shape":7},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":108,"shape":7},{"name":"lora","type":"HYVIDLORA","link":null,"shape":7}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[134],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoModelLoader"},"widgets_values":["skyreels_hunyuan_i2v_bf16.safetensors","bf16","disabled","offload_device","sageattn_varlen",false,true]},{"id":5,"type":"HyVideoDecode","pos":[510.1028747558594,-408.8643798828125],"size":[345.4285888671875,150],"flags":{},"order":15,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":136},{"name":"samples","type":"LATENT","link":4}],"outputs":[{"name":"images","type":"IMAGE","links":[83],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoDecode"},"widgets_values":[true,64,192,false]},{"id":100,"type":"Note","pos":[-573.7662963867188,-842.064208984375],"size":[254.22499084472656,134.40151977539062],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["I figured out to move the models in the right folders\n\nVAE gives a bad error, I think it's ZLUDA acceleration that fails me"],"color":"#432","bgcolor":"#653"},{"id":99,"type":"HyVideoVAELoader","pos":[-1246.245849609375,-461.3368835449219],"size":[315,82],"flags":{},"order":7,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[135,136],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-1220,510],"size":[391.5,202],"flags":{},"order":8,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35]}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for 
S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["xtuner/llava-llama-3-8b-v1_1-transformers","openai/clip-vit-large-patch14","bf16",false,3,"disabled","offload_device"]},{"id":90,"type":"Note","pos":[-1004.8192138671875,-856.0059814453125],"size":[254.22499084472656,134.40151977539062],"flags":{},"order":9,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["README:\nhttps://github.com/kijai/ComfyUI-HunyuanVideoWrapper/tree/main\n\nExample workflows\nhttps://github.com/kijai/ComfyUI-HunyuanVideoWrapper/tree/main/example_workflows\n\nhttps://github.com/kijai/ComfyUI-HunyuanVideoWrapper\n\nModel download\n"],"color":"#432","bgcolor":"#653"}],"links":[[4,3,0,5,1,"LATENT"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[69,45,1,3,7,"INT"],[70,45,2,3,8,"INT"],[73,52,0,34,0,"IMAGE"],[74,30,0,3,1,"HYVIDEMBEDS"],[75,43,0,3,3,"LATENT"],[83,5,0,60,1,"IMAGE"],[85,60,0,52,1,"IMAGE"],[91,64,0,3,6,"FETAARGS"],[105,57,0,1,0,"COMPILEARGS"],[108,59,0,1,1,"BLOCKSWAPARGS"],[117,60,0,78,0,"IMAGE"],[119,45,0,43,1,"IMAGE"],[120,45,0,52,0,"IMAGE"],[121,45,0,60,0,"IMAGE"],[123,79,0,45,0,"IMAGE"],[130,58,0,30,3,"HYVID_CFG"],[131,88,0,30,1,"PROMPT_TEMPLATE"],[134,1,0,3,0,"HYVIDEOMODEL"],[135,99,0,43,0,"VAE"],[136,99,0,5,0,"VAE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7513148009015777,"offset":[1861.6160020573338,140.11830927539992]},"node_versions":{"comfyui-hunyuanvideowrapper":"ecd60a66e6ebbdde2b8a0a6fe24bad72a8af925b","comfy-core":"0.3.14","comfyui-kjnodes":"1.0.5","comfyui-videohelpersuite":"1.5.2"},"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"ue_links":[],"VHS_MetadataImage":true,"VHS_KeepIntermediate":true},"version":0.4} | |
``` | |
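If the JSON blob above is hard to scan, a tiny sketch (plain Python; `workflow.json` is a hypothetical filename for a saved copy of the export) that prints one line per node:
```
# Sketch: summarize the node ids and types in a ComfyUI workflow export.
import json

with open("workflow.json", encoding="utf-8") as f:  # hypothetical filename
    workflow = json.load(f)

for node in workflow["nodes"]:
    print(node["id"], node["type"])
```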
## Additional Context | |
(Please add any additional context or steps to reproduce the error here) | |
If you get a security warning, go into ComfyUI\user\default\ComfyUI-Manager and edit the config.ini there.
Change the security_level to weak, save, and try installing numpy again.
After that, set the security_level back to normal.
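For reference, the edit looks roughly like this (section and key names as ComfyUI-Manager writes them by default; double-check against your own config.ini):
```
[default]
; temporary - set this back to "normal" once numpy is installed
security_level = weak
```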
_____________________________________________________________________________________________________________________________________________ | |
YAY! Now I can generate SD1.5 images.
# ComfyUI Error Report | |
## Error Details | |
- **Node ID:** 99 | |
- **Node Type:** HyVideoVAELoader | |
- **Exception Type:** RuntimeError | |
- **Exception Message:** Error(s) in loading state_dict for AutoencoderKLCausal3D: | |
Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm1.bias", "encoder.down_blocks.0.resnets.0.conv1.conv.weight", "encoder.down_blocks.0.resnets.0.conv1.conv.bias", "encoder.down_blocks.0.resnets.0.norm2.weight", "encoder.down_blocks.0.resnets.0.norm2.bias", "encoder.down_blocks.0.resnets.0.conv2.conv.weight", "encoder.down_blocks.0.resnets.0.conv2.conv.bias", "encoder.down_blocks.0.resnets.1.norm1.weight", "encoder.down_blocks.0.resnets.1.norm1.bias", "encoder.down_blocks.0.resnets.1.conv1.conv.weight", "encoder.down_blocks.0.resnets.1.conv1.conv.bias", "encoder.down_blocks.0.resnets.1.norm2.weight", "encoder.down_blocks.0.resnets.1.norm2.bias", "encoder.down_blocks.0.resnets.1.conv2.conv.weight", "encoder.down_blocks.0.resnets.1.conv2.conv.bias", "encoder.down_blocks.0.downsamplers.0.conv.conv.weight", "encoder.down_blocks.0.downsamplers.0.conv.conv.bias", "encoder.down_blocks.1.resnets.0.norm1.weight", "encoder.down_blocks.1.resnets.0.norm1.bias", "encoder.down_blocks.1.resnets.0.conv1.conv.weight", "encoder.down_blocks.1.resnets.0.conv1.conv.bias", "encoder.down_blocks.1.resnets.0.norm2.weight", "encoder.down_blocks.1.resnets.0.norm2.bias", "encoder.down_blocks.1.resnets.0.conv2.conv.weight", "encoder.down_blocks.1.resnets.0.conv2.conv.bias", "encoder.down_blocks.1.resnets.0.conv_shortcut.conv.weight", "encoder.down_blocks.1.resnets.0.conv_shortcut.conv.bias", "encoder.down_blocks.1.resnets.1.norm1.weight", "encoder.down_blocks.1.resnets.1.norm1.bias", "encoder.down_blocks.1.resnets.1.conv1.conv.weight", "encoder.down_blocks.1.resnets.1.conv1.conv.bias", "encoder.down_blocks.1.resnets.1.norm2.weight", "encoder.down_blocks.1.resnets.1.norm2.bias", "encoder.down_blocks.1.resnets.1.conv2.conv.weight", "encoder.down_blocks.1.resnets.1.conv2.conv.bias", "encoder.down_blocks.1.downsamplers.0.conv.conv.weight", "encoder.down_blocks.1.downsamplers.0.conv.conv.bias", "encoder.down_blocks.2.resnets.0.norm1.weight", "encoder.down_blocks.2.resnets.0.norm1.bias", "encoder.down_blocks.2.resnets.0.conv1.conv.weight", "encoder.down_blocks.2.resnets.0.conv1.conv.bias", "encoder.down_blocks.2.resnets.0.norm2.weight", "encoder.down_blocks.2.resnets.0.norm2.bias", "encoder.down_blocks.2.resnets.0.conv2.conv.weight", "encoder.down_blocks.2.resnets.0.conv2.conv.bias", "encoder.down_blocks.2.resnets.0.conv_shortcut.conv.weight", "encoder.down_blocks.2.resnets.0.conv_shortcut.conv.bias", "encoder.down_blocks.2.resnets.1.norm1.weight", "encoder.down_blocks.2.resnets.1.norm1.bias", "encoder.down_blocks.2.resnets.1.conv1.conv.weight", "encoder.down_blocks.2.resnets.1.conv1.conv.bias", "encoder.down_blocks.2.resnets.1.norm2.weight", "encoder.down_blocks.2.resnets.1.norm2.bias", "encoder.down_blocks.2.resnets.1.conv2.conv.weight", "encoder.down_blocks.2.resnets.1.conv2.conv.bias", "encoder.down_blocks.2.downsamplers.0.conv.conv.weight", "encoder.down_blocks.2.downsamplers.0.conv.conv.bias", "encoder.down_blocks.3.resnets.0.norm1.weight", "encoder.down_blocks.3.resnets.0.norm1.bias", "encoder.down_blocks.3.resnets.0.conv1.conv.weight", "encoder.down_blocks.3.resnets.0.conv1.conv.bias", "encoder.down_blocks.3.resnets.0.norm2.weight", "encoder.down_blocks.3.resnets.0.norm2.bias", "encoder.down_blocks.3.resnets.0.conv2.conv.weight", "encoder.down_blocks.3.resnets.0.conv2.conv.bias", "encoder.down_blocks.3.resnets.1.norm1.weight", "encoder.down_blocks.3.resnets.1.norm1.bias", "encoder.down_blocks.3.resnets.1.conv1.conv.weight", 
"encoder.down_blocks.3.resnets.1.conv1.conv.bias", "encoder.down_blocks.3.resnets.1.norm2.weight", "encoder.down_blocks.3.resnets.1.norm2.bias", "encoder.down_blocks.3.resnets.1.conv2.conv.weight", "encoder.down_blocks.3.resnets.1.conv2.conv.bias", "encoder.mid_block.attentions.0.group_norm.weight", "encoder.mid_block.attentions.0.group_norm.bias", "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "encoder.mid_block.resnets.0.norm1.weight", "encoder.mid_block.resnets.0.norm1.bias", "encoder.mid_block.resnets.0.conv1.conv.weight", "encoder.mid_block.resnets.0.conv1.conv.bias", "encoder.mid_block.resnets.0.norm2.weight", "encoder.mid_block.resnets.0.norm2.bias", "encoder.mid_block.resnets.0.conv2.conv.weight", "encoder.mid_block.resnets.0.conv2.conv.bias", "encoder.mid_block.resnets.1.norm1.weight", "encoder.mid_block.resnets.1.norm1.bias", "encoder.mid_block.resnets.1.conv1.conv.weight", "encoder.mid_block.resnets.1.conv1.conv.bias", "encoder.mid_block.resnets.1.norm2.weight", "encoder.mid_block.resnets.1.norm2.bias", "encoder.mid_block.resnets.1.conv2.conv.weight", "encoder.mid_block.resnets.1.conv2.conv.bias", "encoder.conv_norm_out.weight", "encoder.conv_norm_out.bias", "decoder.up_blocks.0.resnets.0.norm1.weight", "decoder.up_blocks.0.resnets.0.norm1.bias", "decoder.up_blocks.0.resnets.0.conv1.conv.weight", "decoder.up_blocks.0.resnets.0.conv1.conv.bias", "decoder.up_blocks.0.resnets.0.norm2.weight", "decoder.up_blocks.0.resnets.0.norm2.bias", "decoder.up_blocks.0.resnets.0.conv2.conv.weight", "decoder.up_blocks.0.resnets.0.conv2.conv.bias", "decoder.up_blocks.0.resnets.1.norm1.weight", "decoder.up_blocks.0.resnets.1.norm1.bias", "decoder.up_blocks.0.resnets.1.conv1.conv.weight", "decoder.up_blocks.0.resnets.1.conv1.conv.bias", "decoder.up_blocks.0.resnets.1.norm2.weight", "decoder.up_blocks.0.resnets.1.norm2.bias", "decoder.up_blocks.0.resnets.1.conv2.conv.weight", "decoder.up_blocks.0.resnets.1.conv2.conv.bias", "decoder.up_blocks.0.resnets.2.norm1.weight", "decoder.up_blocks.0.resnets.2.norm1.bias", "decoder.up_blocks.0.resnets.2.conv1.conv.weight", "decoder.up_blocks.0.resnets.2.conv1.conv.bias", "decoder.up_blocks.0.resnets.2.norm2.weight", "decoder.up_blocks.0.resnets.2.norm2.bias", "decoder.up_blocks.0.resnets.2.conv2.conv.weight", "decoder.up_blocks.0.resnets.2.conv2.conv.bias", "decoder.up_blocks.0.upsamplers.0.conv.conv.weight", "decoder.up_blocks.0.upsamplers.0.conv.conv.bias", "decoder.up_blocks.1.resnets.0.norm1.weight", "decoder.up_blocks.1.resnets.0.norm1.bias", "decoder.up_blocks.1.resnets.0.conv1.conv.weight", "decoder.up_blocks.1.resnets.0.conv1.conv.bias", "decoder.up_blocks.1.resnets.0.norm2.weight", "decoder.up_blocks.1.resnets.0.norm2.bias", "decoder.up_blocks.1.resnets.0.conv2.conv.weight", "decoder.up_blocks.1.resnets.0.conv2.conv.bias", "decoder.up_blocks.1.resnets.1.norm1.weight", "decoder.up_blocks.1.resnets.1.norm1.bias", "decoder.up_blocks.1.resnets.1.conv1.conv.weight", "decoder.up_blocks.1.resnets.1.conv1.conv.bias", "decoder.up_blocks.1.resnets.1.norm2.weight", "decoder.up_blocks.1.resnets.1.norm2.bias", "decoder.up_blocks.1.resnets.1.conv2.conv.weight", "decoder.up_blocks.1.resnets.1.conv2.conv.bias", 
"decoder.up_blocks.1.resnets.2.norm1.weight", "decoder.up_blocks.1.resnets.2.norm1.bias", "decoder.up_blocks.1.resnets.2.conv1.conv.weight", "decoder.up_blocks.1.resnets.2.conv1.conv.bias", "decoder.up_blocks.1.resnets.2.norm2.weight", "decoder.up_blocks.1.resnets.2.norm2.bias", "decoder.up_blocks.1.resnets.2.conv2.conv.weight", "decoder.up_blocks.1.resnets.2.conv2.conv.bias", "decoder.up_blocks.1.upsamplers.0.conv.conv.weight", "decoder.up_blocks.1.upsamplers.0.conv.conv.bias", "decoder.up_blocks.2.resnets.0.norm1.weight", "decoder.up_blocks.2.resnets.0.norm1.bias", "decoder.up_blocks.2.resnets.0.conv1.conv.weight", "decoder.up_blocks.2.resnets.0.conv1.conv.bias", "decoder.up_blocks.2.resnets.0.norm2.weight", "decoder.up_blocks.2.resnets.0.norm2.bias", "decoder.up_blocks.2.resnets.0.conv2.conv.weight", "decoder.up_blocks.2.resnets.0.conv2.conv.bias", "decoder.up_blocks.2.resnets.0.conv_shortcut.conv.weight", "decoder.up_blocks.2.resnets.0.conv_shortcut.conv.bias", "decoder.up_blocks.2.resnets.1.norm1.weight", "decoder.up_blocks.2.resnets.1.norm1.bias", "decoder.up_blocks.2.resnets.1.conv1.conv.weight", "decoder.up_blocks.2.resnets.1.conv1.conv.bias", "decoder.up_blocks.2.resnets.1.norm2.weight", "decoder.up_blocks.2.resnets.1.norm2.bias", "decoder.up_blocks.2.resnets.1.conv2.conv.weight", "decoder.up_blocks.2.resnets.1.conv2.conv.bias", "decoder.up_blocks.2.resnets.2.norm1.weight", "decoder.up_blocks.2.resnets.2.norm1.bias", "decoder.up_blocks.2.resnets.2.conv1.conv.weight", "decoder.up_blocks.2.resnets.2.conv1.conv.bias", "decoder.up_blocks.2.resnets.2.norm2.weight", "decoder.up_blocks.2.resnets.2.norm2.bias", "decoder.up_blocks.2.resnets.2.conv2.conv.weight", "decoder.up_blocks.2.resnets.2.conv2.conv.bias", "decoder.up_blocks.2.upsamplers.0.conv.conv.weight", "decoder.up_blocks.2.upsamplers.0.conv.conv.bias", "decoder.up_blocks.3.resnets.0.norm1.weight", "decoder.up_blocks.3.resnets.0.norm1.bias", "decoder.up_blocks.3.resnets.0.conv1.conv.weight", "decoder.up_blocks.3.resnets.0.conv1.conv.bias", "decoder.up_blocks.3.resnets.0.norm2.weight", "decoder.up_blocks.3.resnets.0.norm2.bias", "decoder.up_blocks.3.resnets.0.conv2.conv.weight", "decoder.up_blocks.3.resnets.0.conv2.conv.bias", "decoder.up_blocks.3.resnets.0.conv_shortcut.conv.weight", "decoder.up_blocks.3.resnets.0.conv_shortcut.conv.bias", "decoder.up_blocks.3.resnets.1.norm1.weight", "decoder.up_blocks.3.resnets.1.norm1.bias", "decoder.up_blocks.3.resnets.1.conv1.conv.weight", "decoder.up_blocks.3.resnets.1.conv1.conv.bias", "decoder.up_blocks.3.resnets.1.norm2.weight", "decoder.up_blocks.3.resnets.1.norm2.bias", "decoder.up_blocks.3.resnets.1.conv2.conv.weight", "decoder.up_blocks.3.resnets.1.conv2.conv.bias", "decoder.up_blocks.3.resnets.2.norm1.weight", "decoder.up_blocks.3.resnets.2.norm1.bias", "decoder.up_blocks.3.resnets.2.conv1.conv.weight", "decoder.up_blocks.3.resnets.2.conv1.conv.bias", "decoder.up_blocks.3.resnets.2.norm2.weight", "decoder.up_blocks.3.resnets.2.norm2.bias", "decoder.up_blocks.3.resnets.2.conv2.conv.weight", "decoder.up_blocks.3.resnets.2.conv2.conv.bias", "decoder.mid_block.attentions.0.group_norm.weight", "decoder.mid_block.attentions.0.group_norm.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", 
"decoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.resnets.0.norm1.weight", "decoder.mid_block.resnets.0.norm1.bias", "decoder.mid_block.resnets.0.conv1.conv.weight", "decoder.mid_block.resnets.0.conv1.conv.bias", "decoder.mid_block.resnets.0.norm2.weight", "decoder.mid_block.resnets.0.norm2.bias", "decoder.mid_block.resnets.0.conv2.conv.weight", "decoder.mid_block.resnets.0.conv2.conv.bias", "decoder.mid_block.resnets.1.norm1.weight", "decoder.mid_block.resnets.1.norm1.bias", "decoder.mid_block.resnets.1.conv1.conv.weight", "decoder.mid_block.resnets.1.conv1.conv.bias", "decoder.mid_block.resnets.1.norm2.weight", "decoder.mid_block.resnets.1.norm2.bias", "decoder.mid_block.resnets.1.conv2.conv.weight", "decoder.mid_block.resnets.1.conv2.conv.bias", "decoder.conv_norm_out.weight", "decoder.conv_norm_out.bias". | |
Unexpected key(s) in state_dict: "encoder.down.0.block.0.conv1.conv.bias", "encoder.down.0.block.0.conv1.conv.weight", "encoder.down.0.block.0.conv2.conv.bias", "encoder.down.0.block.0.conv2.conv.weight", "encoder.down.0.block.0.norm1.bias", "encoder.down.0.block.0.norm1.weight", "encoder.down.0.block.0.norm2.bias", "encoder.down.0.block.0.norm2.weight", "encoder.down.0.block.1.conv1.conv.bias", "encoder.down.0.block.1.conv1.conv.weight", "encoder.down.0.block.1.conv2.conv.bias", "encoder.down.0.block.1.conv2.conv.weight", "encoder.down.0.block.1.norm1.bias", "encoder.down.0.block.1.norm1.weight", "encoder.down.0.block.1.norm2.bias", "encoder.down.0.block.1.norm2.weight", "encoder.down.0.downsample.conv.conv.bias", "encoder.down.0.downsample.conv.conv.weight", "encoder.down.1.block.0.conv1.conv.bias", "encoder.down.1.block.0.conv1.conv.weight", "encoder.down.1.block.0.conv2.conv.bias", "encoder.down.1.block.0.conv2.conv.weight", "encoder.down.1.block.0.nin_shortcut.conv.bias", "encoder.down.1.block.0.nin_shortcut.conv.weight", "encoder.down.1.block.0.norm1.bias", "encoder.down.1.block.0.norm1.weight", "encoder.down.1.block.0.norm2.bias", "encoder.down.1.block.0.norm2.weight", "encoder.down.1.block.1.conv1.conv.bias", "encoder.down.1.block.1.conv1.conv.weight", "encoder.down.1.block.1.conv2.conv.bias", "encoder.down.1.block.1.conv2.conv.weight", "encoder.down.1.block.1.norm1.bias", "encoder.down.1.block.1.norm1.weight", "encoder.down.1.block.1.norm2.bias", "encoder.down.1.block.1.norm2.weight", "encoder.down.1.downsample.conv.conv.bias", "encoder.down.1.downsample.conv.conv.weight", "encoder.down.2.block.0.conv1.conv.bias", "encoder.down.2.block.0.conv1.conv.weight", "encoder.down.2.block.0.conv2.conv.bias", "encoder.down.2.block.0.conv2.conv.weight", "encoder.down.2.block.0.nin_shortcut.conv.bias", "encoder.down.2.block.0.nin_shortcut.conv.weight", "encoder.down.2.block.0.norm1.bias", "encoder.down.2.block.0.norm1.weight", "encoder.down.2.block.0.norm2.bias", "encoder.down.2.block.0.norm2.weight", "encoder.down.2.block.1.conv1.conv.bias", "encoder.down.2.block.1.conv1.conv.weight", "encoder.down.2.block.1.conv2.conv.bias", "encoder.down.2.block.1.conv2.conv.weight", "encoder.down.2.block.1.norm1.bias", "encoder.down.2.block.1.norm1.weight", "encoder.down.2.block.1.norm2.bias", "encoder.down.2.block.1.norm2.weight", "encoder.down.2.downsample.conv.conv.bias", "encoder.down.2.downsample.conv.conv.weight", "encoder.down.3.block.0.conv1.conv.bias", "encoder.down.3.block.0.conv1.conv.weight", "encoder.down.3.block.0.conv2.conv.bias", "encoder.down.3.block.0.conv2.conv.weight", "encoder.down.3.block.0.norm1.bias", "encoder.down.3.block.0.norm1.weight", "encoder.down.3.block.0.norm2.bias", "encoder.down.3.block.0.norm2.weight", "encoder.down.3.block.1.conv1.conv.bias", "encoder.down.3.block.1.conv1.conv.weight", "encoder.down.3.block.1.conv2.conv.bias", "encoder.down.3.block.1.conv2.conv.weight", "encoder.down.3.block.1.norm1.bias", "encoder.down.3.block.1.norm1.weight", "encoder.down.3.block.1.norm2.bias", "encoder.down.3.block.1.norm2.weight", "encoder.mid.attn_1.k.bias", "encoder.mid.attn_1.k.weight", "encoder.mid.attn_1.norm.bias", "encoder.mid.attn_1.norm.weight", "encoder.mid.attn_1.proj_out.bias", "encoder.mid.attn_1.proj_out.weight", "encoder.mid.attn_1.q.bias", "encoder.mid.attn_1.q.weight", "encoder.mid.attn_1.v.bias", "encoder.mid.attn_1.v.weight", "encoder.mid.block_1.conv1.conv.bias", "encoder.mid.block_1.conv1.conv.weight", "encoder.mid.block_1.conv2.conv.bias", 
"encoder.mid.block_1.conv2.conv.weight", "encoder.mid.block_1.norm1.bias", "encoder.mid.block_1.norm1.weight", "encoder.mid.block_1.norm2.bias", "encoder.mid.block_1.norm2.weight", "encoder.mid.block_2.conv1.conv.bias", "encoder.mid.block_2.conv1.conv.weight", "encoder.mid.block_2.conv2.conv.bias", "encoder.mid.block_2.conv2.conv.weight", "encoder.mid.block_2.norm1.bias", "encoder.mid.block_2.norm1.weight", "encoder.mid.block_2.norm2.bias", "encoder.mid.block_2.norm2.weight", "encoder.norm_out.bias", "encoder.norm_out.weight", "decoder.mid.attn_1.k.bias", "decoder.mid.attn_1.k.weight", "decoder.mid.attn_1.norm.bias", "decoder.mid.attn_1.norm.weight", "decoder.mid.attn_1.proj_out.bias", "decoder.mid.attn_1.proj_out.weight", "decoder.mid.attn_1.q.bias", "decoder.mid.attn_1.q.weight", "decoder.mid.attn_1.v.bias", "decoder.mid.attn_1.v.weight", "decoder.mid.block_1.conv1.conv.bias", "decoder.mid.block_1.conv1.conv.weight", "decoder.mid.block_1.conv2.conv.bias", "decoder.mid.block_1.conv2.conv.weight", "decoder.mid.block_1.norm1.bias", "decoder.mid.block_1.norm1.weight", "decoder.mid.block_1.norm2.bias", "decoder.mid.block_1.norm2.weight", "decoder.mid.block_2.conv1.conv.bias", "decoder.mid.block_2.conv1.conv.weight", "decoder.mid.block_2.conv2.conv.bias", "decoder.mid.block_2.conv2.conv.weight", "decoder.mid.block_2.norm1.bias", "decoder.mid.block_2.norm1.weight", "decoder.mid.block_2.norm2.bias", "decoder.mid.block_2.norm2.weight", "decoder.norm_out.bias", "decoder.norm_out.weight", "decoder.up.0.block.0.conv1.conv.bias", "decoder.up.0.block.0.conv1.conv.weight", "decoder.up.0.block.0.conv2.conv.bias", "decoder.up.0.block.0.conv2.conv.weight", "decoder.up.0.block.0.nin_shortcut.conv.bias", "decoder.up.0.block.0.nin_shortcut.conv.weight", "decoder.up.0.block.0.norm1.bias", "decoder.up.0.block.0.norm1.weight", "decoder.up.0.block.0.norm2.bias", "decoder.up.0.block.0.norm2.weight", "decoder.up.0.block.1.conv1.conv.bias", "decoder.up.0.block.1.conv1.conv.weight", "decoder.up.0.block.1.conv2.conv.bias", "decoder.up.0.block.1.conv2.conv.weight", "decoder.up.0.block.1.norm1.bias", "decoder.up.0.block.1.norm1.weight", "decoder.up.0.block.1.norm2.bias", "decoder.up.0.block.1.norm2.weight", "decoder.up.0.block.2.conv1.conv.bias", "decoder.up.0.block.2.conv1.conv.weight", "decoder.up.0.block.2.conv2.conv.bias", "decoder.up.0.block.2.conv2.conv.weight", "decoder.up.0.block.2.norm1.bias", "decoder.up.0.block.2.norm1.weight", "decoder.up.0.block.2.norm2.bias", "decoder.up.0.block.2.norm2.weight", "decoder.up.1.block.0.conv1.conv.bias", "decoder.up.1.block.0.conv1.conv.weight", "decoder.up.1.block.0.conv2.conv.bias", "decoder.up.1.block.0.conv2.conv.weight", "decoder.up.1.block.0.nin_shortcut.conv.bias", "decoder.up.1.block.0.nin_shortcut.conv.weight", "decoder.up.1.block.0.norm1.bias", "decoder.up.1.block.0.norm1.weight", "decoder.up.1.block.0.norm2.bias", "decoder.up.1.block.0.norm2.weight", "decoder.up.1.block.1.conv1.conv.bias", "decoder.up.1.block.1.conv1.conv.weight", "decoder.up.1.block.1.conv2.conv.bias", "decoder.up.1.block.1.conv2.conv.weight", "decoder.up.1.block.1.norm1.bias", "decoder.up.1.block.1.norm1.weight", "decoder.up.1.block.1.norm2.bias", "decoder.up.1.block.1.norm2.weight", "decoder.up.1.block.2.conv1.conv.bias", "decoder.up.1.block.2.conv1.conv.weight", "decoder.up.1.block.2.conv2.conv.bias", "decoder.up.1.block.2.conv2.conv.weight", "decoder.up.1.block.2.norm1.bias", "decoder.up.1.block.2.norm1.weight", "decoder.up.1.block.2.norm2.bias", "decoder.up.1.block.2.norm2.weight", 
"decoder.up.1.upsample.conv.conv.bias", "decoder.up.1.upsample.conv.conv.weight", "decoder.up.2.block.0.conv1.conv.bias", "decoder.up.2.block.0.conv1.conv.weight", "decoder.up.2.block.0.conv2.conv.bias", "decoder.up.2.block.0.conv2.conv.weight", "decoder.up.2.block.0.norm1.bias", "decoder.up.2.block.0.norm1.weight", "decoder.up.2.block.0.norm2.bias", "decoder.up.2.block.0.norm2.weight", "decoder.up.2.block.1.conv1.conv.bias", "decoder.up.2.block.1.conv1.conv.weight", "decoder.up.2.block.1.conv2.conv.bias", "decoder.up.2.block.1.conv2.conv.weight", "decoder.up.2.block.1.norm1.bias", "decoder.up.2.block.1.norm1.weight", "decoder.up.2.block.1.norm2.bias", "decoder.up.2.block.1.norm2.weight", "decoder.up.2.block.2.conv1.conv.bias", "decoder.up.2.block.2.conv1.conv.weight", "decoder.up.2.block.2.conv2.conv.bias", "decoder.up.2.block.2.conv2.conv.weight", "decoder.up.2.block.2.norm1.bias", "decoder.up.2.block.2.norm1.weight", "decoder.up.2.block.2.norm2.bias", "decoder.up.2.block.2.norm2.weight", "decoder.up.2.upsample.conv.conv.bias", "decoder.up.2.upsample.conv.conv.weight", "decoder.up.3.block.0.conv1.conv.bias", "decoder.up.3.block.0.conv1.conv.weight", "decoder.up.3.block.0.conv2.conv.bias", "decoder.up.3.block.0.conv2.conv.weight", "decoder.up.3.block.0.norm1.bias", "decoder.up.3.block.0.norm1.weight", "decoder.up.3.block.0.norm2.bias", "decoder.up.3.block.0.norm2.weight", "decoder.up.3.block.1.conv1.conv.bias", "decoder.up.3.block.1.conv1.conv.weight", "decoder.up.3.block.1.conv2.conv.bias", "decoder.up.3.block.1.conv2.conv.weight", "decoder.up.3.block.1.norm1.bias", "decoder.up.3.block.1.norm1.weight", "decoder.up.3.block.1.norm2.bias", "decoder.up.3.block.1.norm2.weight", "decoder.up.3.block.2.conv1.conv.bias", "decoder.up.3.block.2.conv1.conv.weight", "decoder.up.3.block.2.conv2.conv.bias", "decoder.up.3.block.2.conv2.conv.weight", "decoder.up.3.block.2.norm1.bias", "decoder.up.3.block.2.norm1.weight", "decoder.up.3.block.2.norm2.bias", "decoder.up.3.block.2.norm2.weight", "decoder.up.3.upsample.conv.conv.bias", "decoder.up.3.upsample.conv.conv.weight". | |
## Stack Trace | |
``` | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute | |
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data | |
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list | |
process_inputs(input_dict, i) | |
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs | |
results.append(getattr(obj, func)(**inputs)) | |
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 564, in loadmodel | |
vae.load_state_dict(vae_sd) | |
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict | |
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( | |
``` | |
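The shape of this mismatch is telling: the model being constructed expects diffusers-style key names (`encoder.down_blocks...`, `decoder.up_blocks...`), while the checkpoint carries the original VAE layout (`encoder.down...`, `decoder.up...`), so the selected VAE file and the config the loader built don't belong together. A quick sketch to check which layout a .safetensors VAE uses, assuming the `safetensors` package is installed; the path is a guess at the usual ComfyUI layout for the hunyuan_video_vae_bf16.safetensors file the workflow selects:
```
# Sketch: print a few tensor names from the VAE checkpoint so you can tell
# "encoder.down_blocks..." (diffusers layout) apart from "encoder.down..."
# (original layout). Adjust the path to where the VAE actually lives.
from safetensors import safe_open

path = r"F:\SD-Zluda\ComfyUI\models\vae\hunyuan_video_vae_bf16.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    for name in sorted(f.keys())[:10]:
        print(name)
```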
## System Information | |
- **ComfyUI Version:** 0.3.14 | |
- **Arguments:** main.py --auto-launch --disable-xformers --use-quad-cross-attention --reserve-vram 0.8 | |
- **OS:** nt | |
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] | |
- **Embedded Python:** false | |
- **PyTorch Version:** 2.3.1+cu118 | |
## Devices | |
- **Name:** cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native | |
- **Type:** cuda | |
- **VRAM Total:** 25753026560 | |
- **VRAM Free:** 25596952576 | |
- **Torch VRAM Total:** 0 | |
- **Torch VRAM Free:** 0 | |
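About the wall of "[ZLUDA] Patched torch.backends.cudnn" lines in the log below: ZLUDA has no cuDNN implementation, so the fork keeps cuDNN switched off and re-applies that guard for every module it loads, since custom nodes sometimes re-enable it. A minimal sketch of the idea (not the fork's exact code):
```
# Sketch: keep cuDNN disabled so PyTorch falls back to plain CUDA kernels,
# which ZLUDA can translate to HIP. A node setting
# torch.backends.cudnn.enabled = True would break under ZLUDA, so the fork
# patches that out per module - hence one "[ZLUDA] Patched ..." line each.
import torch

torch.backends.cudnn.enabled = False
```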
## Logs | |
``` | |
2025-02-23T13:00:50.504787 - [START] Security scan
2025-02-23T13:00:51.012604 - [DONE] Security scan
2025-02-23T13:00:51.094867 - ## ComfyUI-Manager: installing dependencies done.
2025-02-23T13:00:51.094867 - ** ComfyUI startup time: 2025-02-23 13:00:51.094
2025-02-23T13:00:51.094867 - ** Platform: Windows
2025-02-23T13:00:51.095952 - ** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-02-23T13:00:51.095952 - ** Python executable: F:\SD-Zluda\ComfyUI\venv\Scripts\python.exe
2025-02-23T13:00:51.095952 - ** ComfyUI Path: F:\SD-Zluda\ComfyUI
2025-02-23T13:00:51.095952 - ** ComfyUI Base Folder Path: F:\SD-Zluda\ComfyUI
2025-02-23T13:00:51.095952 - ** User directory: F:\SD-Zluda\ComfyUI\user
2025-02-23T13:00:51.095952 - ** ComfyUI-Manager config path: F:\SD-Zluda\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-02-23T13:00:51.097012 - ** Log path: F:\SD-Zluda\ComfyUI\user\comfyui.log
2025-02-23T13:00:51.634937 - | |
Prestartup times for custom nodes: | |
2025-02-23T13:00:51.634937 - 1.4 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\ComfyUI-Manager | |
2025-02-23T13:00:51.634937 - | |
2025-02-23T13:00:52.615081 - Warning, you are using an old pytorch version and some ckpt/pt files might be loaded unsafely. Upgrading to 2.4 or above is recommended. | |
2025-02-23T13:00:52.969547 - Total VRAM 24560 MB, total RAM 65367 MB | |
2025-02-23T13:00:52.969547 - pytorch version: 2.3.1+cu118 | |
2025-02-23T13:00:52.969547 - Set vram state to: NORMAL_VRAM | |
2025-02-23T13:00:52.974050 - Detected ZLUDA, support for it is experimental and comfy may not work properly.
2025-02-23T13:00:52.974050 - Disabling cuDNN because ZLUDA does currently not support it.
2025-02-23T13:00:52.974050 - Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native. | |
2025-02-23T13:00:53.605000 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention | |
2025-02-23T13:00:54.384994 - ComfyUI version: 0.3.14
2025-02-23T13:00:54.402218 - [Prompt Server] web root: F:\SD-Zluda\ComfyUI\web
2025-02-23T13:00:54.403721 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_latent to prevent enabling cuDNN.
2025-02-23T13:00:54.404780 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hypernetwork to prevent enabling cuDNN.
2025-02-23T13:00:54.468103 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_upscale_model to prevent enabling cuDNN.
2025-02-23T13:00:54.468103 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_post_processing to prevent enabling cuDNN.
2025-02-23T13:00:54.469102 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mask to prevent enabling cuDNN.
2025-02-23T13:00:54.470102 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_compositing to prevent enabling cuDNN.
2025-02-23T13:00:54.471102 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_rebatch to prevent enabling cuDNN.
2025-02-23T13:00:54.471102 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_merging to prevent enabling cuDNN.
2025-02-23T13:00:54.472102 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_tomesd to prevent enabling cuDNN.
2025-02-23T13:00:54.472102 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_clip_sdxl to prevent enabling cuDNN.
2025-02-23T13:00:54.699838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_canny to prevent enabling cuDNN.
2025-02-23T13:00:54.699838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_freelunch to prevent enabling cuDNN.
2025-02-23T13:00:54.699838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_custom_sampler to prevent enabling cuDNN.
2025-02-23T13:00:54.699838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hypertile to prevent enabling cuDNN.
2025-02-23T13:00:54.699838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_advanced to prevent enabling cuDNN.
2025-02-23T13:00:54.703838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_downscale to prevent enabling cuDNN.
2025-02-23T13:00:54.703838 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_images to prevent enabling cuDNN.
2025-02-23T13:00:54.705071 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_video_model to prevent enabling cuDNN.
2025-02-23T13:00:54.705071 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sag to prevent enabling cuDNN.
2025-02-23T13:00:54.705071 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_perpneg to prevent enabling cuDNN.
2025-02-23T13:00:54.705071 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_stable3d to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sdupscale to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_photomaker to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_pixart to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_cond to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_morphology to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_stable_cascade to prevent enabling cuDNN.
2025-02-23T13:00:54.708080 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_differential_diffusion to prevent enabling cuDNN.
2025-02-23T13:00:54.712084 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_ip2p to prevent enabling cuDNN.
2025-02-23T13:00:54.712084 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_model_merging_model_specific to prevent enabling cuDNN.
2025-02-23T13:00:54.712084 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_pag to prevent enabling cuDNN.
2025-02-23T13:00:54.712084 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_align_your_steps to prevent enabling cuDNN.
2025-02-23T13:00:54.714596 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_attention_multiply to prevent enabling cuDNN.
2025-02-23T13:00:54.714596 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_advanced_samplers to prevent enabling cuDNN.
2025-02-23T13:00:54.715691 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_webcam to prevent enabling cuDNN.
2025-02-23T13:00:54.759787 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_audio to prevent enabling cuDNN.
2025-02-23T13:00:54.763837 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_sd3 to prevent enabling cuDNN.
2025-02-23T13:00:54.764847 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_gits to prevent enabling cuDNN.
2025-02-23T13:00:54.764847 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_controlnet to prevent enabling cuDNN.
2025-02-23T13:00:54.764847 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hunyuan to prevent enabling cuDNN.
2025-02-23T13:00:54.764847 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_flux to prevent enabling cuDNN.
2025-02-23T13:00:54.767856 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_lora_extract to prevent enabling cuDNN.
2025-02-23T13:00:54.767856 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_torch_compile to prevent enabling cuDNN.
2025-02-23T13:00:54.767856 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mochi to prevent enabling cuDNN.
2025-02-23T13:00:54.767856 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_slg to prevent enabling cuDNN.
2025-02-23T13:00:54.767856 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_mahiro to prevent enabling cuDNN.
2025-02-23T13:00:54.767856 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_lt to prevent enabling cuDNN.
2025-02-23T13:00:54.771860 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_hooks to prevent enabling cuDNN.
2025-02-23T13:00:54.771860 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_load_3d to prevent enabling cuDNN.
2025-02-23T13:00:54.771860 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\comfy_extras\nodes_cosmos to prevent enabling cuDNN.
2025-02-23T13:00:54.775881 - [ZLUDA] Patched torch.backends.cudnn in custom node comfy-image-saver to prevent enabling cuDNN.
2025-02-23T13:00:55.014808 - [ZLUDA] Patched torch.backends.cudnn in custom node comfyui-hunyuanvideowrapper to prevent enabling cuDNN.
2025-02-23T13:00:55.017898 - Total VRAM 24560 MB, total RAM 65367 MB
2025-02-23T13:00:55.017898 - pytorch version: 2.3.1+cu118
2025-02-23T13:00:55.017898 - Set vram state to: NORMAL_VRAM
2025-02-23T13:00:55.017898 - Detected ZLUDA, support for it is experimental and comfy may not work properly.
2025-02-23T13:00:55.017898 - Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native.
2025-02-23T13:00:55.029927 - [ZLUDA] Patched torch.backends.cudnn in custom node comfyui-kjnodes to prevent enabling cuDNN.
2025-02-23T13:00:55.038000 - ### Loading: ComfyUI-Manager (V3.25.1)
2025-02-23T13:00:55.038121 - [ComfyUI-Manager] network_mode: public
2025-02-23T13:00:55.133907 - ### ComfyUI Revision: 3161 on 'nodes-cudnn-patch' [07833a5f] | Released on '2025-02-10'
2025-02-23T13:00:55.244820 - [ZLUDA] Patched torch.backends.cudnn in custom node ComfyUI-Manager to prevent enabling cuDNN.
2025-02-23T13:00:55.276289 - [ZLUDA] Patched torch.backends.cudnn in custom node comfyui-videohelpersuite to prevent enabling cuDNN.
2025-02-23T13:00:55.276289 - [ZLUDA] Patched torch.backends.cudnn in custom node F:\SD-Zluda\ComfyUI\custom_nodes\websocket_image_save to prevent enabling cuDNN.
2025-02-23T13:00:55.276289 -
Import times for custom nodes:
2025-02-23T13:00:55.276289 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\websocket_image_save.py
2025-02-23T13:00:55.276289 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfy-image-saver
2025-02-23T13:00:55.276289 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-kjnodes
2025-02-23T13:00:55.276289 - 0.0 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-videohelpersuite
2025-02-23T13:00:55.276289 - 0.2 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\ComfyUI-Manager
2025-02-23T13:00:55.276289 - 0.2 seconds: F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper
2025-02-23T13:00:55.276289 -
2025-02-23T13:00:55.284684 - Starting server
2025-02-23T13:00:55.284684 - To see the GUI go to: http://127.0.0.1:8188
2025-02-23T13:00:55.373729 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-02-23T13:00:55.474778 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-02-23T13:00:55.518962 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-02-23T13:00:55.658088 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-02-23T13:00:55.717389 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-02-23T13:01:00.044007 - FETCH ComfyRegistry Data: 5/34
2025-02-23T13:01:04.532121 - FETCH ComfyRegistry Data: 10/34
2025-02-23T13:01:08.680547 - FETCH ComfyRegistry Data: 15/34
2025-02-23T13:01:13.164064 - FETCH ComfyRegistry Data: 20/34
2025-02-23T13:01:13.676772 - got prompt
2025-02-23T13:01:13.727211 - Using split attention in VAE
2025-02-23T13:01:13.727211 - Using split attention in VAE
2025-02-23T13:01:13.982439 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-02-23T13:01:14.045756 - !!! Exception during processing !!! 'VAE' object has no attribute 'to'
2025-02-23T13:01:14.063676 - Traceback (most recent call last):
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 1433, in encode
vae.to(device)
AttributeError: 'VAE' object has no attribute 'to'
2025-02-23T13:01:14.064684 - Prompt executed in 0.38 seconds
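```

The AttributeError above is a node-family mismatch rather than a ZLUDA problem: the HunyuanVideoWrapper's `encode` function calls `vae.to(device)` on whatever arrives at its `vae` input, but ComfyUI's core VAE loader hands out comfy's own `VAE` wrapper object, which keeps the underlying torch module in its `first_stage_model` attribute and does not expose `.to()` itself. A hedged sketch of a defensive unwrap (the helper name is invented for illustration):

```python
import torch

def move_vae(vae, device: torch.device):
    """Move a VAE to a device whether it is a bare torch.nn.Module or
    ComfyUI's VAE wrapper (which stores the module in .first_stage_model)."""
    module = getattr(vae, "first_stage_model", vae)
    module.to(device)
    return vae
```

In a workflow, the simpler fix is usually to feed the wrapper's own VAE loader output into HyVideoEncode rather than the core VAELoader output.

```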
2025-02-23T13:01:18.797832 - FETCH ComfyRegistry Data: 25/34
2025-02-23T13:01:23.488526 - FETCH ComfyRegistry Data: 30/34
2025-02-23T13:01:27.373728 - FETCH ComfyRegistry Data [DONE]
2025-02-23T13:01:27.407717 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-02-23T13:01:27.444713 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-02-23T13:01:27.444713 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-02-23T13:01:27.754871 - [ComfyUI-Manager] All startup tasks have been completed.
2025-02-23T13:01:32.948919 - got prompt
2025-02-23T13:01:32.961127 - !!! Exception during processing !!! 'VAE' object has no attribute 'to'
2025-02-23T13:01:32.962127 - Traceback (most recent call last):
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 1433, in encode
vae.to(device)
AttributeError: 'VAE' object has no attribute 'to'
2025-02-23T13:01:32.962127 - Prompt executed in 0.01 seconds
2025-02-23T13:03:06.905074 - got prompt
2025-02-23T13:03:07.675071 - !!! Exception during processing !!! Error(s) in loading state_dict for AutoencoderKLCausal3D:
Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm1.bias", "encoder.down_blocks.0.resnets.0.conv1.conv.weight", "encoder.down_blocks.0.resnets.0.conv1.conv.bias", "encoder.down_blocks.0.resnets.0.norm2.weight", "encoder.down_blocks.0.resnets.0.norm2.bias", "encoder.down_blocks.0.resnets.0.conv2.conv.weight", "encoder.down_blocks.0.resnets.0.conv2.conv.bias", "encoder.down_blocks.0.resnets.1.norm1.weight", "encoder.down_blocks.0.resnets.1.norm1.bias", "encoder.down_blocks.0.resnets.1.conv1.conv.weight", "encoder.down_blocks.0.resnets.1.conv1.conv.bias", "encoder.down_blocks.0.resnets.1.norm2.weight", "encoder.down_blocks.0.resnets.1.norm2.bias", "encoder.down_blocks.0.resnets.1.conv2.conv.weight", "encoder.down_blocks.0.resnets.1.conv2.conv.bias", "encoder.down_blocks.0.downsamplers.0.conv.conv.weight", "encoder.down_blocks.0.downsamplers.0.conv.conv.bias", "encoder.down_blocks.1.resnets.0.norm1.weight", "encoder.down_blocks.1.resnets.0.norm1.bias", "encoder.down_blocks.1.resnets.0.conv1.conv.weight", "encoder.down_blocks.1.resnets.0.conv1.conv.bias", "encoder.down_blocks.1.resnets.0.norm2.weight", "encoder.down_blocks.1.resnets.0.norm2.bias", "encoder.down_blocks.1.resnets.0.conv2.conv.weight", "encoder.down_blocks.1.resnets.0.conv2.conv.bias", "encoder.down_blocks.1.resnets.0.conv_shortcut.conv.weight", "encoder.down_blocks.1.resnets.0.conv_shortcut.conv.bias", "encoder.down_blocks.1.resnets.1.norm1.weight", "encoder.down_blocks.1.resnets.1.norm1.bias", "encoder.down_blocks.1.resnets.1.conv1.conv.weight", "encoder.down_blocks.1.resnets.1.conv1.conv.bias", "encoder.down_blocks.1.resnets.1.norm2.weight", "encoder.down_blocks.1.resnets.1.norm2.bias", "encoder.down_blocks.1.resnets.1.conv2.conv.weight", "encoder.down_blocks.1.resnets.1.conv2.conv.bias", "encoder.down_blocks.1.downsamplers.0.conv.conv.weight", "encoder.down_blocks.1.downsamplers.0.conv.conv.bias", "encoder.down_blocks.2.resnets.0.norm1.weight", "encoder.down_blocks.2.resnets.0.norm1.bias", "encoder.down_blocks.2.resnets.0.conv1.conv.weight", "encoder.down_blocks.2.resnets.0.conv1.conv.bias", "encoder.down_blocks.2.resnets.0.norm2.weight", "encoder.down_blocks.2.resnets.0.norm2.bias", "encoder.down_blocks.2.resnets.0.conv2.conv.weight", "encoder.down_blocks.2.resnets.0.conv2.conv.bias", "encoder.down_blocks.2.resnets.0.conv_shortcut.conv.weight", "encoder.down_blocks.2.resnets.0.conv_shortcut.conv.bias", "encoder.down_blocks.2.resnets.1.norm1.weight", "encoder.down_blocks.2.resnets.1.norm1.bias", "encoder.down_blocks.2.resnets.1.conv1.conv.weight", "encoder.down_blocks.2.resnets.1.conv1.conv.bias", "encoder.down_blocks.2.resnets.1.norm2.weight", "encoder.down_blocks.2.resnets.1.norm2.bias", "encoder.down_blocks.2.resnets.1.conv2.conv.weight", "encoder.down_blocks.2.resnets.1.conv2.conv.bias", "encoder.down_blocks.2.downsamplers.0.conv.conv.weight", "encoder.down_blocks.2.downsamplers.0.conv.conv.bias", "encoder.down_blocks.3.resnets.0.norm1.weight", "encoder.down_blocks.3.resnets.0.norm1.bias", "encoder.down_blocks.3.resnets.0.conv1.conv.weight", "encoder.down_blocks.3.resnets.0.conv1.conv.bias", "encoder.down_blocks.3.resnets.0.norm2.weight", "encoder.down_blocks.3.resnets.0.norm2.bias", "encoder.down_blocks.3.resnets.0.conv2.conv.weight", "encoder.down_blocks.3.resnets.0.conv2.conv.bias", "encoder.down_blocks.3.resnets.1.norm1.weight", "encoder.down_blocks.3.resnets.1.norm1.bias", "encoder.down_blocks.3.resnets.1.conv1.conv.weight", 
"encoder.down_blocks.3.resnets.1.conv1.conv.bias", "encoder.down_blocks.3.resnets.1.norm2.weight", "encoder.down_blocks.3.resnets.1.norm2.bias", "encoder.down_blocks.3.resnets.1.conv2.conv.weight", "encoder.down_blocks.3.resnets.1.conv2.conv.bias", "encoder.mid_block.attentions.0.group_norm.weight", "encoder.mid_block.attentions.0.group_norm.bias", "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "encoder.mid_block.resnets.0.norm1.weight", "encoder.mid_block.resnets.0.norm1.bias", "encoder.mid_block.resnets.0.conv1.conv.weight", "encoder.mid_block.resnets.0.conv1.conv.bias", "encoder.mid_block.resnets.0.norm2.weight", "encoder.mid_block.resnets.0.norm2.bias", "encoder.mid_block.resnets.0.conv2.conv.weight", "encoder.mid_block.resnets.0.conv2.conv.bias", "encoder.mid_block.resnets.1.norm1.weight", "encoder.mid_block.resnets.1.norm1.bias", "encoder.mid_block.resnets.1.conv1.conv.weight", "encoder.mid_block.resnets.1.conv1.conv.bias", "encoder.mid_block.resnets.1.norm2.weight", "encoder.mid_block.resnets.1.norm2.bias", "encoder.mid_block.resnets.1.conv2.conv.weight", "encoder.mid_block.resnets.1.conv2.conv.bias", "encoder.conv_norm_out.weight", "encoder.conv_norm_out.bias", "decoder.up_blocks.0.resnets.0.norm1.weight", "decoder.up_blocks.0.resnets.0.norm1.bias", "decoder.up_blocks.0.resnets.0.conv1.conv.weight", "decoder.up_blocks.0.resnets.0.conv1.conv.bias", "decoder.up_blocks.0.resnets.0.norm2.weight", "decoder.up_blocks.0.resnets.0.norm2.bias", "decoder.up_blocks.0.resnets.0.conv2.conv.weight", "decoder.up_blocks.0.resnets.0.conv2.conv.bias", "decoder.up_blocks.0.resnets.1.norm1.weight", "decoder.up_blocks.0.resnets.1.norm1.bias", "decoder.up_blocks.0.resnets.1.conv1.conv.weight", "decoder.up_blocks.0.resnets.1.conv1.conv.bias", "decoder.up_blocks.0.resnets.1.norm2.weight", "decoder.up_blocks.0.resnets.1.norm2.bias", "decoder.up_blocks.0.resnets.1.conv2.conv.weight", "decoder.up_blocks.0.resnets.1.conv2.conv.bias", "decoder.up_blocks.0.resnets.2.norm1.weight", "decoder.up_blocks.0.resnets.2.norm1.bias", "decoder.up_blocks.0.resnets.2.conv1.conv.weight", "decoder.up_blocks.0.resnets.2.conv1.conv.bias", "decoder.up_blocks.0.resnets.2.norm2.weight", "decoder.up_blocks.0.resnets.2.norm2.bias", "decoder.up_blocks.0.resnets.2.conv2.conv.weight", "decoder.up_blocks.0.resnets.2.conv2.conv.bias", "decoder.up_blocks.0.upsamplers.0.conv.conv.weight", "decoder.up_blocks.0.upsamplers.0.conv.conv.bias", "decoder.up_blocks.1.resnets.0.norm1.weight", "decoder.up_blocks.1.resnets.0.norm1.bias", "decoder.up_blocks.1.resnets.0.conv1.conv.weight", "decoder.up_blocks.1.resnets.0.conv1.conv.bias", "decoder.up_blocks.1.resnets.0.norm2.weight", "decoder.up_blocks.1.resnets.0.norm2.bias", "decoder.up_blocks.1.resnets.0.conv2.conv.weight", "decoder.up_blocks.1.resnets.0.conv2.conv.bias", "decoder.up_blocks.1.resnets.1.norm1.weight", "decoder.up_blocks.1.resnets.1.norm1.bias", "decoder.up_blocks.1.resnets.1.conv1.conv.weight", "decoder.up_blocks.1.resnets.1.conv1.conv.bias", "decoder.up_blocks.1.resnets.1.norm2.weight", "decoder.up_blocks.1.resnets.1.norm2.bias", "decoder.up_blocks.1.resnets.1.conv2.conv.weight", "decoder.up_blocks.1.resnets.1.conv2.conv.bias", 
"decoder.up_blocks.1.resnets.2.norm1.weight", "decoder.up_blocks.1.resnets.2.norm1.bias", "decoder.up_blocks.1.resnets.2.conv1.conv.weight", "decoder.up_blocks.1.resnets.2.conv1.conv.bias", "decoder.up_blocks.1.resnets.2.norm2.weight", "decoder.up_blocks.1.resnets.2.norm2.bias", "decoder.up_blocks.1.resnets.2.conv2.conv.weight", "decoder.up_blocks.1.resnets.2.conv2.conv.bias", "decoder.up_blocks.1.upsamplers.0.conv.conv.weight", "decoder.up_blocks.1.upsamplers.0.conv.conv.bias", "decoder.up_blocks.2.resnets.0.norm1.weight", "decoder.up_blocks.2.resnets.0.norm1.bias", "decoder.up_blocks.2.resnets.0.conv1.conv.weight", "decoder.up_blocks.2.resnets.0.conv1.conv.bias", "decoder.up_blocks.2.resnets.0.norm2.weight", "decoder.up_blocks.2.resnets.0.norm2.bias", "decoder.up_blocks.2.resnets.0.conv2.conv.weight", "decoder.up_blocks.2.resnets.0.conv2.conv.bias", "decoder.up_blocks.2.resnets.0.conv_shortcut.conv.weight", "decoder.up_blocks.2.resnets.0.conv_shortcut.conv.bias", "decoder.up_blocks.2.resnets.1.norm1.weight", "decoder.up_blocks.2.resnets.1.norm1.bias", "decoder.up_blocks.2.resnets.1.conv1.conv.weight", "decoder.up_blocks.2.resnets.1.conv1.conv.bias", "decoder.up_blocks.2.resnets.1.norm2.weight", "decoder.up_blocks.2.resnets.1.norm2.bias", "decoder.up_blocks.2.resnets.1.conv2.conv.weight", "decoder.up_blocks.2.resnets.1.conv2.conv.bias", "decoder.up_blocks.2.resnets.2.norm1.weight", "decoder.up_blocks.2.resnets.2.norm1.bias", "decoder.up_blocks.2.resnets.2.conv1.conv.weight", "decoder.up_blocks.2.resnets.2.conv1.conv.bias", "decoder.up_blocks.2.resnets.2.norm2.weight", "decoder.up_blocks.2.resnets.2.norm2.bias", "decoder.up_blocks.2.resnets.2.conv2.conv.weight", "decoder.up_blocks.2.resnets.2.conv2.conv.bias", "decoder.up_blocks.2.upsamplers.0.conv.conv.weight", "decoder.up_blocks.2.upsamplers.0.conv.conv.bias", "decoder.up_blocks.3.resnets.0.norm1.weight", "decoder.up_blocks.3.resnets.0.norm1.bias", "decoder.up_blocks.3.resnets.0.conv1.conv.weight", "decoder.up_blocks.3.resnets.0.conv1.conv.bias", "decoder.up_blocks.3.resnets.0.norm2.weight", "decoder.up_blocks.3.resnets.0.norm2.bias", "decoder.up_blocks.3.resnets.0.conv2.conv.weight", "decoder.up_blocks.3.resnets.0.conv2.conv.bias", "decoder.up_blocks.3.resnets.0.conv_shortcut.conv.weight", "decoder.up_blocks.3.resnets.0.conv_shortcut.conv.bias", "decoder.up_blocks.3.resnets.1.norm1.weight", "decoder.up_blocks.3.resnets.1.norm1.bias", "decoder.up_blocks.3.resnets.1.conv1.conv.weight", "decoder.up_blocks.3.resnets.1.conv1.conv.bias", "decoder.up_blocks.3.resnets.1.norm2.weight", "decoder.up_blocks.3.resnets.1.norm2.bias", "decoder.up_blocks.3.resnets.1.conv2.conv.weight", "decoder.up_blocks.3.resnets.1.conv2.conv.bias", "decoder.up_blocks.3.resnets.2.norm1.weight", "decoder.up_blocks.3.resnets.2.norm1.bias", "decoder.up_blocks.3.resnets.2.conv1.conv.weight", "decoder.up_blocks.3.resnets.2.conv1.conv.bias", "decoder.up_blocks.3.resnets.2.norm2.weight", "decoder.up_blocks.3.resnets.2.norm2.bias", "decoder.up_blocks.3.resnets.2.conv2.conv.weight", "decoder.up_blocks.3.resnets.2.conv2.conv.bias", "decoder.mid_block.attentions.0.group_norm.weight", "decoder.mid_block.attentions.0.group_norm.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", 
"decoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.resnets.0.norm1.weight", "decoder.mid_block.resnets.0.norm1.bias", "decoder.mid_block.resnets.0.conv1.conv.weight", "decoder.mid_block.resnets.0.conv1.conv.bias", "decoder.mid_block.resnets.0.norm2.weight", "decoder.mid_block.resnets.0.norm2.bias", "decoder.mid_block.resnets.0.conv2.conv.weight", "decoder.mid_block.resnets.0.conv2.conv.bias", "decoder.mid_block.resnets.1.norm1.weight", "decoder.mid_block.resnets.1.norm1.bias", "decoder.mid_block.resnets.1.conv1.conv.weight", "decoder.mid_block.resnets.1.conv1.conv.bias", "decoder.mid_block.resnets.1.norm2.weight", "decoder.mid_block.resnets.1.norm2.bias", "decoder.mid_block.resnets.1.conv2.conv.weight", "decoder.mid_block.resnets.1.conv2.conv.bias", "decoder.conv_norm_out.weight", "decoder.conv_norm_out.bias". | |
Unexpected key(s) in state_dict: "encoder.down.0.block.0.conv1.conv.bias", "encoder.down.0.block.0.conv1.conv.weight", "encoder.down.0.block.0.conv2.conv.bias", "encoder.down.0.block.0.conv2.conv.weight", "encoder.down.0.block.0.norm1.bias", "encoder.down.0.block.0.norm1.weight", "encoder.down.0.block.0.norm2.bias", "encoder.down.0.block.0.norm2.weight", "encoder.down.0.block.1.conv1.conv.bias", "encoder.down.0.block.1.conv1.conv.weight", "encoder.down.0.block.1.conv2.conv.bias", "encoder.down.0.block.1.conv2.conv.weight", "encoder.down.0.block.1.norm1.bias", "encoder.down.0.block.1.norm1.weight", "encoder.down.0.block.1.norm2.bias", "encoder.down.0.block.1.norm2.weight", "encoder.down.0.downsample.conv.conv.bias", "encoder.down.0.downsample.conv.conv.weight", "encoder.down.1.block.0.conv1.conv.bias", "encoder.down.1.block.0.conv1.conv.weight", "encoder.down.1.block.0.conv2.conv.bias", "encoder.down.1.block.0.conv2.conv.weight", "encoder.down.1.block.0.nin_shortcut.conv.bias", "encoder.down.1.block.0.nin_shortcut.conv.weight", "encoder.down.1.block.0.norm1.bias", "encoder.down.1.block.0.norm1.weight", "encoder.down.1.block.0.norm2.bias", "encoder.down.1.block.0.norm2.weight", "encoder.down.1.block.1.conv1.conv.bias", "encoder.down.1.block.1.conv1.conv.weight", "encoder.down.1.block.1.conv2.conv.bias", "encoder.down.1.block.1.conv2.conv.weight", "encoder.down.1.block.1.norm1.bias", "encoder.down.1.block.1.norm1.weight", "encoder.down.1.block.1.norm2.bias", "encoder.down.1.block.1.norm2.weight", "encoder.down.1.downsample.conv.conv.bias", "encoder.down.1.downsample.conv.conv.weight", "encoder.down.2.block.0.conv1.conv.bias", "encoder.down.2.block.0.conv1.conv.weight", "encoder.down.2.block.0.conv2.conv.bias", "encoder.down.2.block.0.conv2.conv.weight", "encoder.down.2.block.0.nin_shortcut.conv.bias", "encoder.down.2.block.0.nin_shortcut.conv.weight", "encoder.down.2.block.0.norm1.bias", "encoder.down.2.block.0.norm1.weight", "encoder.down.2.block.0.norm2.bias", "encoder.down.2.block.0.norm2.weight", "encoder.down.2.block.1.conv1.conv.bias", "encoder.down.2.block.1.conv1.conv.weight", "encoder.down.2.block.1.conv2.conv.bias", "encoder.down.2.block.1.conv2.conv.weight", "encoder.down.2.block.1.norm1.bias", "encoder.down.2.block.1.norm1.weight", "encoder.down.2.block.1.norm2.bias", "encoder.down.2.block.1.norm2.weight", "encoder.down.2.downsample.conv.conv.bias", "encoder.down.2.downsample.conv.conv.weight", "encoder.down.3.block.0.conv1.conv.bias", "encoder.down.3.block.0.conv1.conv.weight", "encoder.down.3.block.0.conv2.conv.bias", "encoder.down.3.block.0.conv2.conv.weight", "encoder.down.3.block.0.norm1.bias", "encoder.down.3.block.0.norm1.weight", "encoder.down.3.block.0.norm2.bias", "encoder.down.3.block.0.norm2.weight", "encoder.down.3.block.1.conv1.conv.bias", "encoder.down.3.block.1.conv1.conv.weight", "encoder.down.3.block.1.conv2.conv.bias", "encoder.down.3.block.1.conv2.conv.weight", "encoder.down.3.block.1.norm1.bias", "encoder.down.3.block.1.norm1.weight", "encoder.down.3.block.1.norm2.bias", "encoder.down.3.block.1.norm2.weight", "encoder.mid.attn_1.k.bias", "encoder.mid.attn_1.k.weight", "encoder.mid.attn_1.norm.bias", "encoder.mid.attn_1.norm.weight", "encoder.mid.attn_1.proj_out.bias", "encoder.mid.attn_1.proj_out.weight", "encoder.mid.attn_1.q.bias", "encoder.mid.attn_1.q.weight", "encoder.mid.attn_1.v.bias", "encoder.mid.attn_1.v.weight", "encoder.mid.block_1.conv1.conv.bias", "encoder.mid.block_1.conv1.conv.weight", "encoder.mid.block_1.conv2.conv.bias", 
"encoder.mid.block_1.conv2.conv.weight", "encoder.mid.block_1.norm1.bias", "encoder.mid.block_1.norm1.weight", "encoder.mid.block_1.norm2.bias", "encoder.mid.block_1.norm2.weight", "encoder.mid.block_2.conv1.conv.bias", "encoder.mid.block_2.conv1.conv.weight", "encoder.mid.block_2.conv2.conv.bias", "encoder.mid.block_2.conv2.conv.weight", "encoder.mid.block_2.norm1.bias", "encoder.mid.block_2.norm1.weight", "encoder.mid.block_2.norm2.bias", "encoder.mid.block_2.norm2.weight", "encoder.norm_out.bias", "encoder.norm_out.weight", "decoder.mid.attn_1.k.bias", "decoder.mid.attn_1.k.weight", "decoder.mid.attn_1.norm.bias", "decoder.mid.attn_1.norm.weight", "decoder.mid.attn_1.proj_out.bias", "decoder.mid.attn_1.proj_out.weight", "decoder.mid.attn_1.q.bias", "decoder.mid.attn_1.q.weight", "decoder.mid.attn_1.v.bias", "decoder.mid.attn_1.v.weight", "decoder.mid.block_1.conv1.conv.bias", "decoder.mid.block_1.conv1.conv.weight", "decoder.mid.block_1.conv2.conv.bias", "decoder.mid.block_1.conv2.conv.weight", "decoder.mid.block_1.norm1.bias", "decoder.mid.block_1.norm1.weight", "decoder.mid.block_1.norm2.bias", "decoder.mid.block_1.norm2.weight", "decoder.mid.block_2.conv1.conv.bias", "decoder.mid.block_2.conv1.conv.weight", "decoder.mid.block_2.conv2.conv.bias", "decoder.mid.block_2.conv2.conv.weight", "decoder.mid.block_2.norm1.bias", "decoder.mid.block_2.norm1.weight", "decoder.mid.block_2.norm2.bias", "decoder.mid.block_2.norm2.weight", "decoder.norm_out.bias", "decoder.norm_out.weight", "decoder.up.0.block.0.conv1.conv.bias", "decoder.up.0.block.0.conv1.conv.weight", "decoder.up.0.block.0.conv2.conv.bias", "decoder.up.0.block.0.conv2.conv.weight", "decoder.up.0.block.0.nin_shortcut.conv.bias", "decoder.up.0.block.0.nin_shortcut.conv.weight", "decoder.up.0.block.0.norm1.bias", "decoder.up.0.block.0.norm1.weight", "decoder.up.0.block.0.norm2.bias", "decoder.up.0.block.0.norm2.weight", "decoder.up.0.block.1.conv1.conv.bias", "decoder.up.0.block.1.conv1.conv.weight", "decoder.up.0.block.1.conv2.conv.bias", "decoder.up.0.block.1.conv2.conv.weight", "decoder.up.0.block.1.norm1.bias", "decoder.up.0.block.1.norm1.weight", "decoder.up.0.block.1.norm2.bias", "decoder.up.0.block.1.norm2.weight", "decoder.up.0.block.2.conv1.conv.bias", "decoder.up.0.block.2.conv1.conv.weight", "decoder.up.0.block.2.conv2.conv.bias", "decoder.up.0.block.2.conv2.conv.weight", "decoder.up.0.block.2.norm1.bias", "decoder.up.0.block.2.norm1.weight", "decoder.up.0.block.2.norm2.bias", "decoder.up.0.block.2.norm2.weight", "decoder.up.1.block.0.conv1.conv.bias", "decoder.up.1.block.0.conv1.conv.weight", "decoder.up.1.block.0.conv2.conv.bias", "decoder.up.1.block.0.conv2.conv.weight", "decoder.up.1.block.0.nin_shortcut.conv.bias", "decoder.up.1.block.0.nin_shortcut.conv.weight", "decoder.up.1.block.0.norm1.bias", "decoder.up.1.block.0.norm1.weight", "decoder.up.1.block.0.norm2.bias", "decoder.up.1.block.0.norm2.weight", "decoder.up.1.block.1.conv1.conv.bias", "decoder.up.1.block.1.conv1.conv.weight", "decoder.up.1.block.1.conv2.conv.bias", "decoder.up.1.block.1.conv2.conv.weight", "decoder.up.1.block.1.norm1.bias", "decoder.up.1.block.1.norm1.weight", "decoder.up.1.block.1.norm2.bias", "decoder.up.1.block.1.norm2.weight", "decoder.up.1.block.2.conv1.conv.bias", "decoder.up.1.block.2.conv1.conv.weight", "decoder.up.1.block.2.conv2.conv.bias", "decoder.up.1.block.2.conv2.conv.weight", "decoder.up.1.block.2.norm1.bias", "decoder.up.1.block.2.norm1.weight", "decoder.up.1.block.2.norm2.bias", "decoder.up.1.block.2.norm2.weight", 
"decoder.up.1.upsample.conv.conv.bias", "decoder.up.1.upsample.conv.conv.weight", "decoder.up.2.block.0.conv1.conv.bias", "decoder.up.2.block.0.conv1.conv.weight", "decoder.up.2.block.0.conv2.conv.bias", "decoder.up.2.block.0.conv2.conv.weight", "decoder.up.2.block.0.norm1.bias", "decoder.up.2.block.0.norm1.weight", "decoder.up.2.block.0.norm2.bias", "decoder.up.2.block.0.norm2.weight", "decoder.up.2.block.1.conv1.conv.bias", "decoder.up.2.block.1.conv1.conv.weight", "decoder.up.2.block.1.conv2.conv.bias", "decoder.up.2.block.1.conv2.conv.weight", "decoder.up.2.block.1.norm1.bias", "decoder.up.2.block.1.norm1.weight", "decoder.up.2.block.1.norm2.bias", "decoder.up.2.block.1.norm2.weight", "decoder.up.2.block.2.conv1.conv.bias", "decoder.up.2.block.2.conv1.conv.weight", "decoder.up.2.block.2.conv2.conv.bias", "decoder.up.2.block.2.conv2.conv.weight", "decoder.up.2.block.2.norm1.bias", "decoder.up.2.block.2.norm1.weight", "decoder.up.2.block.2.norm2.bias", "decoder.up.2.block.2.norm2.weight", "decoder.up.2.upsample.conv.conv.bias", "decoder.up.2.upsample.conv.conv.weight", "decoder.up.3.block.0.conv1.conv.bias", "decoder.up.3.block.0.conv1.conv.weight", "decoder.up.3.block.0.conv2.conv.bias", "decoder.up.3.block.0.conv2.conv.weight", "decoder.up.3.block.0.norm1.bias", "decoder.up.3.block.0.norm1.weight", "decoder.up.3.block.0.norm2.bias", "decoder.up.3.block.0.norm2.weight", "decoder.up.3.block.1.conv1.conv.bias", "decoder.up.3.block.1.conv1.conv.weight", "decoder.up.3.block.1.conv2.conv.bias", "decoder.up.3.block.1.conv2.conv.weight", "decoder.up.3.block.1.norm1.bias", "decoder.up.3.block.1.norm1.weight", "decoder.up.3.block.1.norm2.bias", "decoder.up.3.block.1.norm2.weight", "decoder.up.3.block.2.conv1.conv.bias", "decoder.up.3.block.2.conv1.conv.weight", "decoder.up.3.block.2.conv2.conv.bias", "decoder.up.3.block.2.conv2.conv.weight", "decoder.up.3.block.2.norm1.bias", "decoder.up.3.block.2.norm1.weight", "decoder.up.3.block.2.norm2.bias", "decoder.up.3.block.2.norm2.weight", "decoder.up.3.upsample.conv.conv.bias", "decoder.up.3.upsample.conv.conv.weight". | |
2025-02-23T13:03:07.697040 - Traceback (most recent call last):
File "F:\SD-Zluda\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\SD-Zluda\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\SD-Zluda\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\SD-Zluda\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\SD-Zluda\ComfyUI\custom_nodes\comfyui-hunyuanvideowrapper\nodes.py", line 564, in loadmodel
vae.load_state_dict(vae_sd)
File "F:\SD-Zluda\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKLCausal3D: (same Missing/Unexpected key lists as printed above)
2025-02-23T13:03:07.699040 - Prompt executed in 0.79 seconds
```
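
The second failure is a key-naming mismatch rather than a missing file: the "Missing key(s)" follow the diffusers-style layout that the wrapper's AutoencoderKLCausal3D expects (`encoder.down_blocks.N.resnets.M...`), while the "Unexpected key(s)" in the loaded file use the original ldm-style layout (`encoder.down.N.block.M...`). That pattern usually means a VAE checkpoint saved in one convention was fed to a model class built for the other, and the practical cure is to use the VAE file the wrapper expects. Purely for illustration, a crude rename pass over the most common patterns might look like the sketch below; a real converter would also reverse the decoder `up` indices and remap the `mid`/attention keys, which this ignores.

```python
import re

# Illustrative only: rename a handful of ldm-style VAE keys to their
# diffusers-style equivalents, mirroring the Missing/Unexpected pairs above.
RULES = [
    (r"\.down\.(\d+)\.block\.(\d+)\.", r".down_blocks.\1.resnets.\2."),
    (r"\.down\.(\d+)\.downsample\.",   r".down_blocks.\1.downsamplers.0."),
    (r"\.up\.(\d+)\.block\.(\d+)\.",   r".up_blocks.\1.resnets.\2."),
    (r"\.up\.(\d+)\.upsample\.",       r".up_blocks.\1.upsamplers.0."),
    (r"\.nin_shortcut\.",              r".conv_shortcut."),
    (r"(encoder|decoder)\.norm_out\.", r"\1.conv_norm_out."),
]

def ldm_to_diffusers_vae_keys(state_dict: dict) -> dict:
    out = {}
    for key, value in state_dict.items():
        for pattern, replacement in RULES:
            key = re.sub(pattern, replacement, key)
        out[key] = value
    return out
```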
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":99,"last_link_id":136,"nodes":[{"id":58,"type":"HyVideoCFG","pos":[-1280,1410],"size":[437.5832824707031,201.83335876464844],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_cfg","type":"HYVID_CFG","links":[130],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoCFG"},"widgets_values":["camera movement, jump cut, scene cut, transition, fading, morphing",6,0,0.5,false]},{"id":57,"type":"HyVideoTorchCompileSettings","pos":[-1854.767822265625,-527.714111328125],"size":[441,274],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"torch_compile_args","type":"COMPILEARGS","links":[105],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoTorchCompileSettings"},"widgets_values":["inductor",false,"default",false,64,true,true,false,false,false]},{"id":88,"type":"HyVideoCustomPromptTemplate","pos":[-1290,800],"size":[453.78076171875,551.197265625],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_prompt_template","type":"PROMPT_TEMPLATE","links":[131],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoCustomPromptTemplate"},"widgets_values":["<|start_header_id|>system<|end_header_id|>\n\nYou are a professional Content Analyst. As a specialist Scene Annotator, you tag and describe scenes in detail, paying special attention to the temporal coherence and sequence of events in the scene.\n\n# INSTRUCTIONS\n\nYou will receive two inputs:\n\n1. The first frame of a video;\n2. A short description of the scene.\n\nYour job is to write a comprehensive description of the scene based on the inputs.\n\n# IMPORTANT INFORMATION \n\nThe scene is only 4s (four seconds long).\nThe scene has 98 frames in total at 24 frames per second (24 fps).\n\n# GUIDELINES\n\n- Write a detailed description based on the inputs.\n- Use your expertise to adapt the scene description so that it is consistent and coherent.\n- Ensure the actions and events described can fit reasonably in a 4 second clip.\n- Be concise and avoid abstract qualifiers, focus on concrete aspects of the scene, in particular the main subject, the motion, actions and events.\n- Use the input image, which is the fisrt frame of the video, to infer the visual qualities, mood and tone of the overall scene.\n- Always consider the appropriate sequence of events for the scene so it fits the four seconds legth of the clip.\n\n# DELIVERABLES\n\nYou will deliver a concise and detailed description of the scene that is consistent with the inputs you receive, and temporally coherent given the length of the scene. You should output something like this:\n\nDETAILED DESCRIPTION OF THE VIDEO SCENE IN APPROPRIATE TEMPORAL SEQUENCE. OVERALL MOOD OF THE SCENE BASED ON THE FIRST FRAME. 
3 TO 5 TAGS THAT REPRESENT THE SCENE GENRE/STYLE/CATEGORY.\n\n\nWrite the scene description: \n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|>",95]},{"id":45,"type":"ImageResizeKJ","pos":[-1220,140],"size":[315,266],"flags":{},"order":10,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":123},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":null,"shape":7,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":null,"shape":7,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[119,120,121],"slot_index":0},{"name":"width","type":"INT","links":[69],"slot_index":1},{"name":"height","type":"INT","links":[70],"slot_index":2}],"properties":{"cnr_id":"comfyui-kjnodes","ver":"f3d931a630e01821fc1375c9aa24401ab2852347","Node name for S&R":"ImageResizeKJ"},"widgets_values":[544,960,"lanczos",false,2,0,0,"center"]},{"id":30,"type":"HyVideoTextEncode","pos":[-750,510],"size":[313.6783752441406,440.2134704589844],"flags":{},"order":11,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":131,"shape":7},{"name":"clip_l","type":"CLIP","link":null,"shape":7},{"name":"hyvid_cfg","type":"HYVID_CFG","link":130,"shape":7}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[74],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoTextEncode"},"widgets_values":["Cinematic scene shows a woman getting up and walking away.The background remains consistent throughout the scene. ","bad quality video","video"]},{"id":43,"type":"HyVideoEncode","pos":[-762.68408203125,-89.60575866699219],"size":[315,198],"flags":{},"order":12,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":135},{"name":"image","type":"IMAGE","link":119}],"outputs":[{"name":"samples","type":"LATENT","links":[75],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoEncode"},"widgets_values":[false,64,256,true,0.04,1]},{"id":52,"type":"ImageConcatMulti","pos":[1265.74609375,-380.3453063964844],"size":[210,150],"flags":{},"order":16,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":120},{"name":"image_2","type":"IMAGE","link":85}],"outputs":[{"name":"images","type":"IMAGE","links":[73],"slot_index":0}],"properties":{"cnr_id":"comfyui-kjnodes","ver":"f3d931a630e01821fc1375c9aa24401ab2852347"},"widgets_values":[2,"right",false,null]},{"id":34,"type":"VHS_VideoCombine","pos":[1526.9112548828125,-380.55364990234375],"size":[215.375,334],"flags":{},"order":18,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":73},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"cnr_id":"comfyui-videohelpersuite","ver":"124c913ccdd8a585734ea758c35fa1bab8499c99","Node name for 
S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo_skyreel_I2V","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_skyreel_I2V_00047.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_skyreel_I2V_00047.png","fullpath":"/home/linux/AI/ComfyUI/output/HunyuanVideo_skyreel_I2V_00047.mp4"},"muted":false}}},{"id":60,"type":"ColorMatch","pos":[893.6535034179688,-226.94412231445312],"size":[315,102],"flags":{},"order":15,"mode":0,"inputs":[{"name":"image_ref","type":"IMAGE","link":121},{"name":"image_target","type":"IMAGE","link":83}],"outputs":[{"name":"image","type":"IMAGE","links":[85,117],"slot_index":0}],"properties":{"cnr_id":"comfyui-kjnodes","ver":"f3d931a630e01821fc1375c9aa24401ab2852347","Node name for S&R":"ColorMatch"},"widgets_values":["mkl",1]},{"id":64,"type":"HyVideoEnhanceAVideo","pos":[-802.6838989257812,246.1566162109375],"size":[352.79998779296875,154],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"feta_args","type":"FETAARGS","links":[91]}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoEnhanceAVideo"},"widgets_values":[4,true,true,0,1]},{"id":59,"type":"HyVideoBlockSwap","pos":[-1742.82958984375,-190.67405700683594],"size":[315,130],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","type":"BLOCKSWAPARGS","links":[108],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoBlockSwap"},"widgets_values":[20,10,false,false]},{"id":78,"type":"VHS_VideoCombine","pos":[1528.812744140625,23.629047393798828],"size":[215.375,334],"flags":{},"order":17,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":117},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"cnr_id":"comfyui-videohelpersuite","ver":"124c913ccdd8a585734ea758c35fa1bab8499c99","Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo_skyreel_I2V","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_skyreel_I2V_00046.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_skyreel_I2V_00046.png","fullpath":"/home/linux/AI/ComfyUI/output/HunyuanVideo_skyreel_I2V_00046.mp4"},"muted":false}}},{"id":79,"type":"LoadImage","pos":[-1738.3333740234375,235.88121032714844],"size":[315,314],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[123],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for 
S&R":"LoadImage"},"widgets_values":["amateur-absurdist.png","image"]},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-1220,510],"size":[391.5,202],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35]}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["Kijai/llava-llama-3-8b-text-encoder-tokenizer","openai/clip-vit-large-patch14","bf16",false,3,"disabled","offload_device"]},{"id":3,"type":"HyVideoSampler","pos":[51.265254974365234,-204.44427490234375],"size":[416.07513427734375,1142.9561767578125],"flags":{},"order":13,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":134},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":74},{"name":"samples","type":"LATENT","link":null,"shape":7},{"name":"image_cond_latents","type":"LATENT","link":75,"shape":7},{"name":"stg_args","type":"STGARGS","link":null,"shape":7},{"name":"context_options","type":"HYVIDCONTEXT","link":null,"shape":7},{"name":"feta_args","type":"FETAARGS","link":91,"shape":7},{"name":"width","type":"INT","link":69,"widget":{"name":"width"}},{"name":"height","type":"INT","link":70,"widget":{"name":"height"}},{"name":"teacache_args","type":"TEACACHEARGS","link":null,"shape":7}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoSampler"},"widgets_values":[512,320,97,30,1,9,15,"fixed",1,1,"SDE-DPMSolverMultistepScheduler"]},{"id":90,"type":"Note","pos":[-1004.8192138671875,-856.0059814453125],"size":[254.22499084472656,134.40151977539062],"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["Example workflows\nhttps://github.com/kijai/ComfyUI-HunyuanVideoWrapper/tree/main/example_workflows\n\nhttps://github.com/kijai/ComfyUI-HunyuanVideoWrapper\n\nModel download\n"],"color":"#432","bgcolor":"#653"},{"id":1,"type":"HyVideoModelLoader","pos":[-1272.8134765625,-201.72789001464844],"size":[426.1773986816406,242],"flags":{},"order":9,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":105,"shape":7},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":108,"shape":7},{"name":"lora","type":"HYVIDLORA","link":null,"shape":7}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[134],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoModelLoader"},"widgets_values":["skyreels_hunyuan_i2v_bf16.safetensors","bf16","disabled","offload_device","sageattn_varlen",false,true]},{"id":5,"type":"HyVideoDecode","pos":[510.1028747558594,-408.8643798828125],"size":[345.4285888671875,150],"flags":{},"order":14,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":136},{"name":"samples","type":"LATENT","link":4}],"outputs":[{"name":"images","type":"IMAGE","links":[83],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for 
S&R":"HyVideoDecode"},"widgets_values":[true,64,192,false]},{"id":99,"type":"HyVideoVAELoader","pos":[-1246.245849609375,-461.3368835449219],"size":[315,82],"flags":{},"order":8,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[135,136],"slot_index":0}],"properties":{"cnr_id":"comfyui-hunyuanvideowrapper","ver":"5cbc584b68caef787ce3cdf5da565e8bcc3af3f1","Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","bf16"]}],"links":[[4,3,0,5,1,"LATENT"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[69,45,1,3,7,"INT"],[70,45,2,3,8,"INT"],[73,52,0,34,0,"IMAGE"],[74,30,0,3,1,"HYVIDEMBEDS"],[75,43,0,3,3,"LATENT"],[83,5,0,60,1,"IMAGE"],[85,60,0,52,1,"IMAGE"],[91,64,0,3,6,"FETAARGS"],[105,57,0,1,0,"COMPILEARGS"],[108,59,0,1,1,"BLOCKSWAPARGS"],[117,60,0,78,0,"IMAGE"],[119,45,0,43,1,"IMAGE"],[120,45,0,52,0,"IMAGE"],[121,45,0,60,0,"IMAGE"],[123,79,0,45,0,"IMAGE"],[130,58,0,30,3,"HYVID_CFG"],[131,88,0,30,1,"PROMPT_TEMPLATE"],[134,1,0,3,0,"HYVIDEOMODEL"],[135,99,0,43,0,"VAE"],[136,99,0,5,0,"VAE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8264462809917354,"offset":[1680.1901713332804,910.7922986702486]},"node_versions":{"comfyui-hunyuanvideowrapper":"ecd60a66e6ebbdde2b8a0a6fe24bad72a8af925b","comfy-core":"0.3.14","comfyui-kjnodes":"1.0.5","comfyui-videohelpersuite":"1.5.2"},"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"ue_links":[],"VHS_MetadataImage":true,"VHS_KeepIntermediate":true},"version":0.4} | |
``` | |
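To reuse this graph, save the JSON above as a file (for example `workflow.json`) and drag it onto the ComfyUI canvas, or open it with the Load button. If the import fails, a quick sanity check along these lines, assuming Python 3 and the hypothetical file name above, confirms the paste itself is intact:

```
# Minimal sketch: verify the exported workflow JSON still parses after
# copy/paste. Assumes the JSON above was saved as `workflow.json`
# (hypothetical file name) in the current directory.
import json

with open("workflow.json", "r", encoding="utf-8") as f:
    wf = json.load(f)  # raises json.JSONDecodeError if the paste got mangled

print(f"nodes: {len(wf['nodes'])}, links: {len(wf['links'])}")
print("node types:", sorted({node["type"] for node in wf["nodes"]}))
```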
## Additional Context
(Please add any additional context or steps to reproduce the error here)