Bro, it's like you're reading my mind! Every time I run into a new issue or have a new objective, I just check YT and I see your latest video covering it. Keep on crushing it!
This is brilliant. Seems like a way to create consistent characters from just a single image.
You are amazing!!! Thank you so much for this workflow, I learned a lot.
This is some kind of magic... thanks a lot!
Great technique! One question: what if I need to put into my generated image some specific objects? For example, a very specific table lamp or a particular vase, or a poster on the wall with exactly specified text and graphics on it - can I use your workflow for that? Or can you recommend a more suitable approach for tasks of that kind? Thanks!
Excellent, Happy New Year!
Happy New Year 🎊🎉💯
Crazy good stuff man!
You're a Person with a capital P. Thank you for sharing your best practices with us. I'll keep an eye on your progress.
Thank you so much! I have just one question - how can I add a scarf? Segmentation has no option for scarfs. It thinks that scarf is pants and puts it on like pants
Hi, Mac user here, got this error: DownloadAndLoadSAM2Model
Torch not compiled with CUDA enabled
What can I do about it?
Thank you for the workflow, it works very well.
I get this problem:
'NoneType' object has no attribute 'device'
How do I fix it? Please help, fast 🥺
The problem is in the CLIP Text Encode (Prompt) node.
Great Video Bro thanks a lot
I am getting this error even though I have the correct model. Please help. FYI, I have already updated ComfyUI and everything else:
"StyleModelLoader
invalid style model D:\ComfyUI_windows_portable\ComfyUI\models\style_models\flux1-redux-dev-1.safetensors"
Does this also work on a saree?
This is great!! Is there any way to import as a jpg or png the manual mask for the person instead of masking inside comfyUI for more precision? Thanks!
Is the Flux Redux model a style model? I am getting this error: "invalid style model C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\style_models\flux1-redux-dev.safetensors"
I am getting this error: CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:
size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1152, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([1024, 3, 14, 14]).
Anyway, it worked after updating ComfyUI.
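For anyone hitting the 1152-vs-1024 size mismatch above: those two numbers are the hidden widths of two different CLIP-vision checkpoints, so it can help to check what a downloaded file actually contains before loading it. A minimal sketch, assuming the standard safetensors header layout; the tensor name and the width-to-model mapping are assumptions inferred from the error text, not part of the original workflow:

```python
import json
import struct

def safetensors_shapes(path):
    """Read tensor shapes from a .safetensors file without loading weights.

    The format begins with an 8-byte little-endian header length, followed
    by a JSON header mapping tensor names to dtype/shape/data_offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {name: entry["shape"]
            for name, entry in header.items() if name != "__metadata__"}

def identify_clip_vision(path):
    # Hypothetical mapping inferred from the error message: hidden width
    # 1152 -> SigLIP patch14-384 (what Redux expects), 1024 -> CLIP ViT-L/14.
    shape = safetensors_shapes(path).get(
        "vision_model.embeddings.patch_embedding.weight")
    if shape is None:
        return "no CLIP-vision patch embedding found"
    return {1152: "SigLIP patch14-384", 1024: "CLIP ViT-L/14"}.get(
        shape[0], "unknown (hidden=%d)" % shape[0])
```

If the file reports a 1024-wide patch embedding, it is the wrong CLIP-vision checkpoint for this workflow; if it reports 1152 and the loader still fails, updating ComfyUI (as noted above) is what resolved it.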
Hello, I got this error, any tip on how to resolve it?
Failed to validate prompt for output 220:
* MaskToImage 229:
- Return type mismatch between linked nodes: mask, received_type(*) mismatch input_type(MASK)
Hey, thanks for the tutorial. I have the following problem. Do you have any idea how to fix it?
mat1 and mat2 shapes cannot be multiplied (577x1024 and 1152x12288)
That usually means something has the wrong dimensions. Which node turns red in your workflow?
I have the same issue
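A note on the shape error above: matrix multiplication only works when mat1's column count equals mat2's row count. 577x1024 matches the output of CLIP ViT-L (577 tokens, width 1024), while a 1152x12288 projection expects a width-1152 encoder such as SigLIP; these attributions are inferred from the numbers in the error, not stated in the thread. A minimal sketch of the rule:

```python
def can_matmul(mat1_shape, mat2_shape):
    # Matrix multiply requires mat1's columns to equal mat2's rows.
    return mat1_shape[1] == mat2_shape[0]

# The shapes from the reported error vs. what a SigLIP-width output gives:
print(can_matmul((577, 1024), (1152, 12288)))   # False: wrong encoder loaded
print(can_matmul((729, 1152), (1152, 12288)))   # True: width-1152 output fits
```

So the likely fix is loading the SigLIP vision checkpoint instead of a plain CLIP ViT-L one.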
Are you guys getting this error: CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:
size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1152, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([1024, 3, 14, 14])??
Anyway, it worked after updating ComfyUI.
Getting the same error
Bro, how do I handle the error message "UnetLoaderGGUF
'conv_in.weight'" reported by Unet Loader (GGUF)?
Nice trick, but won't the output resolution/detail be limited, since we are concatenating two images and processing them at once? How big can the concatenated image be with this model?
Hello Xclbr Xtra, first I want to thank you for your video and for making the ComfyUI workflow available. I subscribed and liked. Unfortunately I'm not getting the desired result... The final image does not wear the indicated clothing; it only superimposes the reference image (the replacement clothing) onto the masked area, whether masked manually or via automatic masking. I checked the nodes used in my ComfyUI against the configuration shown in your video and everything matches except the results I got. What could be causing this error? Should I adjust any values? If yes, in which nodes? I duplicated this comment because YouTube seems to be deleting comments in which I share links.
Is it possible to change the bottom wear too, or to swap the body instead of the clothes? Like using a body reference instead of a clothes reference?
Great Tutorial and Workflow
For me it's also changing the model's face, and the face doesn't match.
Where do the models go in ComfyUI? Maybe do an installation tutorial? Thanks.
In the ComfyUI folder, find the models folder.
In the models folder, the Flux Fill model should be placed in the unet folder, and the Redux file should be placed in the style_models folder (create the style_models folder if it's not there).
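The layout described above can be sketched as shell commands; the COMFY path and the filenames in the mv examples are assumptions, so adjust them to your actual install and downloads:

```shell
# Sketch only: point COMFY at your real ComfyUI install location.
COMFY="${COMFY:-$HOME/ComfyUI}"
mkdir -p "$COMFY/models/unet" "$COMFY/models/style_models"
# Then move the downloaded files into place, for example:
# mv ~/Downloads/flux1-fill-dev.safetensors   "$COMFY/models/unet/"
# mv ~/Downloads/flux1-redux-dev.safetensors  "$COMFY/models/style_models/"
ls "$COMFY/models"
```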
@xclbrxtra thank you!
Unfortunately, I cannot import the ComfyUI-Florence2 custom nodes. (IMPORT FAILED) T_T
Did you fix it? I have the same problem.
Is it better than CatVTON plus a Flux refiner?
Can you please leave a link for SigLIP?
thanks for the great work man!!!
I am trying to adjust your workflow so it will work with Dev and not GGUF, but I'm getting errors in StyleModelApply about the dimensions. Do you have a working workflow?
Can you please mention the exact error?
@xclbrxtra StyleModelApply
Sizes of tensors must match except in dimension 1. Expected size 2048 but got size 4096 for tensor number 1 in the list.
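The StyleModelApply error above is torch.cat's shape rule: when the Redux tokens are concatenated onto the text conditioning along the token dimension, every other dimension must match exactly. A minimal pure-Python sketch of that rule; the widths used (4096 for Flux conditioning, 2048 for a mismatched setup) and the token counts are assumptions inferred from the error message, not confirmed by the thread:

```python
def cat_shape(shapes, dim=1):
    """Result shape of a torch.cat-style concatenation: every dimension
    except `dim` must match exactly across all inputs."""
    base = list(shapes[0])
    for s in shapes[1:]:
        for d, (a, b) in enumerate(zip(base, s)):
            if d != dim and a != b:
                raise ValueError(
                    "Sizes of tensors must match except in dimension %d. "
                    "Expected size %d but got size %d." % (dim, a, b))
        base[dim] += s[dim]
    return tuple(base)

# Matching widths concatenate fine along the token dimension:
print(cat_shape([(1, 256, 4096), (1, 729, 4096)]))  # (1, 985, 4096)

# A 2048-wide conditioning (e.g. from a mismatched model) fails the rule:
try:
    cat_shape([(1, 256, 2048), (1, 729, 4096)])
except ValueError as err:
    print("cat fails:", err)
```

So the error suggests the conditioning being fed into StyleModelApply has a different feature width than the Redux tokens, i.e. the Dev setup is using a model or encoder combination with a different conditioning width than the GGUF one.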
Can it work with cartoons, or other image styles?
Hi bro, great content. Can we have a "how to" video of it? Thanks.
Can this workflow work for products as well?
Could you also try a mockup generator?
The reference clothes are long-sleeved, but the model is wearing short-sleeved clothes, and the resulting image still shows the model wearing a short-sleeved shirt.
If you inpaint the whole arm, then within 1-2 tries it will generate the long-sleeved outfit.
Is this working with jewelry?
It should, but I would suggest painting the mask manually for it, as auto-detection won't be able to trace the intricate edges.
Hey, set up a Patreon so I can somehow thank you for your contribution.
It doesn't run at all on an RTX 3060.
What's the VRAM? If it's 12 GB then it should work, as I'm running this on 8 GB VRAM.
@xclbrxtra Yes, mine is 12 GB. I think other factors affected it yesterday; I used it again today and it runs very well, it just takes 5-6 minutes. Be that as it may, thank you so much for the tutorial, it's fantastic!
CLIPVisionEncode
'NoneType' object has no attribute 'encode_image'
Can you share a link to download the model sigclip_vision_patch14_384.safetensor?