Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
A ZIT GGUF workflow with SDXL inpainting should work without a problem.
Actual Behavior
I tried to use ZIT and SDXL in the same workflow, specifically using SDXL to inpaint the ZIT-generated picture. But as soon as I load the ZIT GGUF diffusion model, the LoadCheckpointSimple node (or any other checkpoint loader node) fails with the "'Linear' object has no attribute 'weight'" error.
If I then switch back to an SDXL-only workflow I keep getting the same error until I restart ComfyUI. Running only the SDXL workflow is fine, as is running only the ZIT GGUF workflow (without SDXL). But combining the two models for inpainting, or switching from the ZIT GGUF workflow to the SDXL workflow, produces the error.
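My guess, judging only from the traceback in the debug logs below and not from reading the ComfyUI source, is that after the GGUF load the later checkpoint load ends up with Linear layers whose state_dict override reads self.weight even though no weight parameter was ever registered, so PyTorch's nn.Module.__getattr__ raises this exact AttributeError. A minimal self-contained sketch of that failure mode (my own toy code, not ComfyUI's):

```python
# Toy reproduction of the suspected failure mode (assumption, not ComfyUI code):
# a Linear subclass whose dense weight was dropped (as a quantized/lazy loading
# path might do) combined with a state_dict override that touches self.weight.
import torch
import torch.nn as nn

class QuantStyleLinear(nn.Linear):
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        # Simulate a quantized path that removes the dense weight parameter.
        del self._parameters["weight"]

    def state_dict(self, *args, **kwargs):
        # Mirrors the pattern at comfy/ops.py line 636 in the traceback: the
        # override inspects self.weight, which is no longer a registered
        # parameter, so nn.Module.__getattr__ raises AttributeError.
        if isinstance(self.weight, torch.Tensor):
            pass
        return super().state_dict(*args, **kwargs)

QuantStyleLinear(4, 4).state_dict()
# AttributeError: 'QuantStyleLinear' object has no attribute 'weight'
```

If that guess is right, it would also explain why the error keeps happening until I restart: whatever the GGUF load changes apparently stays active for every later checkpoint load in the same session.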
Steps to Reproduce
Run a simple ZIT GGUF workflow (see the ZIT template), then run an SDXL workflow (also available in the templates).
Debug Logs
Requested to load SDXL
loaded completely; 16740.71 MB usable, 4897.05 MB loaded, full load: True
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:14<00:00, 2.11it/s]
Requested to load AutoencoderKL
loaded completely; 5362.75 MB usable, 159.56 MB loaded, full load: True
Prompt executed in 26.49 seconds
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: ***\Downloads\ComfyUI\user\__manager\cache\1514988643_custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
got prompt
model weight dtype torch.bfloat16, manual cast: None
model_type FLOW
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
Requested to load ZImageTEModel_
loaded completely; 4207.27 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Loads SAM model: ***\Downloads\ComfyUI\models\sams\sam_vit_h_4b8939.pth (device:AUTO)
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Using MixedPrecisionOps for text encoder
Using MixedPrecisionOps for text encoder
Missing weight for layer clip_l.transformer.text_projection
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
!!! Exception during processing !!! 'Linear' object has no attribute 'weight'
Traceback (most recent call last):
File "***\Downloads\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***\Downloads\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***\Downloads\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "***\Downloads\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "***\Downloads\ComfyUI\nodes.py", line 644, in set_last_layer
clip = clip.clone()
^^^^^^^^^^^^
File "***\Downloads\ComfyUI\comfy\sd.py", line 167, in clone
n.patcher = self.patcher.clone()
^^^^^^^^^^^^^^^^^^^^
File "***\Downloads\ComfyUI\comfy\model_patcher.py", line 288, in clone
n = self.__class__(self.model, self.load_device, self.offload_device, self.model_size(), weight_inplace_update=self.weight_inplace_update)
^^^^^^^^^^^^^^^^^
File "***\Downloads\ComfyUI\comfy\model_patcher.py", line 275, in model_size
self.size = comfy.model_management.module_size(self.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***\Downloads\ComfyUI\comfy\model_management.py", line 467, in module_size
sd = module.state_dict()
^^^^^^^^^^^^^^^^^^^
File "***\Downloads\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 2269, in state_dict
module.state_dict(
File "***\Downloads\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 2269, in state_dict
module.state_dict(
File "***\Downloads\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 2269, in state_dict
module.state_dict(
File "***\Downloads\ComfyUI\comfy\ops.py", line 636, in state_dict
if isinstance(self.weight, QuantizedTensor):
^^^^^^^^^^^
File "***\Downloads\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1967, in __getattr__
raise AttributeError(
AttributeError: 'Linear' object has no attribute 'weight'
Prompt executed in 26.52 seconds
Other
Using ComfyUI on a Radeon 7900XT on Windows.
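I assume the real fix is to keep the quantized ops from affecting later, non-GGUF checkpoint loads, but for what it's worth, the crash itself is a plain attribute access. A getattr with a default (hypothetical sketch, not a patch against the actual comfy/ops.py code) does not raise in that situation:

```python
# Hypothetical guard for the pattern at comfy/ops.py line 636 (untested sketch):
# read the weight attribute without triggering nn.Module.__getattr__.
import torch.nn as nn

def weight_or_none(module: nn.Module):
    # getattr with a default never raises, even when the dense weight parameter
    # was dropped by a quantized loading path.
    return getattr(module, "weight", None)

layer = nn.Linear(4, 4)
del layer._parameters["weight"]     # simulate the broken state from the traceback
print(weight_or_none(layer))        # prints None instead of raising AttributeError
```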