Can the inpaint tool use a LoRA to inpaint? For example, when you have a character LoRA and you want to add that character to a scene.
So great. Could you make a tutorial on using Redux to convert a photo into a comic book illustration while preserving the subject's facial identity?
The Depth and Canny models are also smaller LoRAs. They seem to work quite well; I needed to lower the LoRA strength.
Yup :) will try that next
Good explanatory video.
Thanks Ben! We will integrate this into our work.😊
Let's do it 😎👍
I followed your instructions exactly, but I have been getting this error all day:

CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:
size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1152, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([1024, 3, 14, 14]).

I tried to troubleshoot it for hours but couldn't fix it. A little help, please?
02:49
@TheFutureThinker Thank you so much, brother. It worked.
Great breakdown, mate.
Glad you enjoyed it
Thanks for the excellent tutorial as usual, but I have an error with "sigclip_vision":

CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:
size mismatch for vision_model.embeddings.patch_embedding.weight
Same error here. I have the same setup with the same CLIP Vision model, but it didn't work.
Same error.
I get a lot of errors too: size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1152, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([1024, 3, 14, 14]).
Download the sigclip model (Google's SigLIP vision encoder) instead.
@ChrissyAiven Make sure you update ComfyUI.
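For anyone still hitting this: the error means a SigLIP checkpoint (patch embedding shape [1152, 3, 14, 14]) is being loaded where a CLIP-L architecture (shape [1024, 3, 14, 14]) is expected, so either the wrong file is in the clip_vision folder or an outdated ComfyUI doesn't recognize the SigLIP format yet. A quick way to check which model a .safetensors file actually contains is to read its header without loading any weights. This is a stdlib-only sketch based on the published safetensors file layout (8-byte little-endian header length, then a JSON header); the file path in the usage comment is just an example, not a guaranteed location.

```python
import json
import struct

def safetensors_shapes(path):
    """Read tensor names and shapes from a .safetensors header.

    The file begins with an 8-byte little-endian unsigned integer giving
    the JSON header length, followed by the JSON header itself, which maps
    tensor names to their dtype, shape, and data offsets. No tensor data
    is loaded, so this is fast even for multi-gigabyte checkpoints.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return {name: meta["shape"]
            for name, meta in header.items()
            if name != "__metadata__"}

# Usage (example path): a SigLIP vision encoder should report
# [1152, 3, 14, 14] for the patch embedding, while CLIP-L reports
# [1024, 3, 14, 14].
# shapes = safetensors_shapes("models/clip_vision/sigclip_vision_patch14_384.safetensors")
# print(shapes.get("vision_model.embeddings.patch_embedding.weight"))
```

If the shape printed matches the checkpoint side of the error message, the file is fine and the loader/ComfyUI version is the problem; if it matches the "current model" side, you have the wrong file.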
you will be rich :)