Change Image Style With Multi-ControlNet in ComfyUI 🔥
- Published 21 Sep 2024
- In this video, we are going to build a ComfyUI workflow that runs multiple ControlNet models. You can use multiple ControlNets to achieve better results when changing the style of your image, or more stability when generating img2img videos.
💬 Social Media:
[Discord] / discord
[Patreon] patreon.com/In...
[Instagram] / lacarnevali
[TikTok] www.tiktok.com...
____________________________________________________________________
🤙🏻 Learn More:
/ membership
/ lauracarnevali
📌 Links:
ControlNet Paper: arxiv.org/abs/...
Download ControlNet Models: huggingface.co...
Workflow: www.patreon.co...
00:54 Install missing nodes (ComfyUI Manager and Manual Download)
02:46 Image from Pexels
03:04 Packages used to build the workflow
04:08 Where to download the ControlNet models
04:18 Workflow explanation, ControlNet preprocessors
07:19 CR Multi-ControlNet Stack
09:12 Efficient Loader
11:23 KSampler (Efficient)
13:51 Remove the background using the depth mask
15:42 Add more ControlNet models
16:15 Conclusions
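The chapter list above walks through chaining multiple ControlNets. As a rough illustration of what that stacking means under the hood, here is a minimal ComfyUI API-format JSON sketch of two ControlNetApply nodes chained in series. The node IDs, model filenames, and the upstream nodes ("6" for the positive conditioning, "20"/"21" for the preprocessed control images) are placeholder assumptions, not taken from the video:

```json
{
  "10": {"class_type": "ControlNetLoader",
         "inputs": {"control_net_name": "control_v11f1p_sd15_depth.pth"}},
  "11": {"class_type": "ControlNetLoader",
         "inputs": {"control_net_name": "control_v11p_sd15_openpose.pth"}},
  "12": {"class_type": "ControlNetApply",
         "inputs": {"conditioning": ["6", 0], "control_net": ["10", 0],
                    "image": ["20", 0], "strength": 0.8}},
  "13": {"class_type": "ControlNetApply",
         "inputs": {"conditioning": ["12", 0], "control_net": ["11", 0],
                    "image": ["21", 0], "strength": 0.6}}
}
```

The key point is that the second ControlNetApply takes the first one's conditioning output as its own input; this serial chaining is what the CR Multi-ControlNet Stack node automates for you.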
#aiart #stablediffusion #generativeart #stabilityai #stablediffusiontutorial
#comfyui
Photo by natsuminh 夏明 from Pexels: www.pexels.com...
Another great thing about ComfyUI for people like me with a 4GB card: if I have multiple steps in a process, in A1111 I have to wait for each one to finish before I can start the next step. In ComfyUI, I just set up all the steps, click "queue prompt" and go for a nice walk while it does all that. I like that. ❤
ComfyUI is crap compared to Automatic1111. Automatic1111 is more powerful, more feature-rich, has more options than ComfyUI, and is easier; ComfyUI has fewer features and is less powerful.
ComfyUI's custom nodes add a whole ton of features that some people don't know exist. It's just a different way of accessing Stable Diffusion. Some people don't like it because they don't like the node approach, I guess. I love the nodes! 😁 It's like Blender.
But ComfyUI still lacks the flood of rich new model extensions and features that A1111 has, you know! @@Satscape
Thank you Laura, I'd been looking for a useful guide for a few days until I found yours.
yesss, finally some ComfyUI tutorials!
Thank you so much for sharing your workflow. My ControlNet setup was not generating images well, and I was able to figure out which nodes were causing problems by cross-referencing yours.
Some may encounter "CR Aspect Ratio" missing. It looks to have been replaced with two possible nodes: CR SD1.5 Aspect Ratio or CR SDXL Aspect Ratio, whichever you need. Thanks for this, Laura, and your very kind free workflow. ❤
I'm trying to install this through git and it shows the file is missing. I'm also seeing my Efficient Loader is red, but it appears I have everything installed. Any advice?
Maybe it's just me because I'm lazy LOL... to begin with, this workflow felt a little tedious with having to drag and switch between nodes... came back after a break and realised just how good this is in terms of multi-ControlNet workflow and efficiency (works well on my 3050: 253 sec with 3 preprocessors)... subbed because you've got a great accent and for sharing your workflow...
That was just brilliant, Laura... I do hope you create more ComfyUI content. Your tutorials are easy to follow and have a wealth of potential in them. Subscribed and joined your Patreon: thanks for the workflow.
Awesome.. Great run through and helped loads in explaining the idea of using masks. 👏🙌
What a great tutorial about using multiple control nets in ComfyUI! Thank you so much!
Awesome tutorial, thanks for sharing!
10/10 video. Thank you!
Thank you for this tutorial... could you provide a photo so we can drop it into ComfyUI to generate your workflow?
www.patreon.com/posts/change-image-in-91754267?Link&
Excellent tutorial. My only critique is that you glossed over the DWPose section a bit too quickly. Their page isn't very clear on what is required to get it working (I was looking for traditional .ckpt or .py model files, but DWPose uses .onnx). Other than that, you explained everything very well.
Thank you so much for this tutorial. Have you tried using a picture of someone and changing the clothes using a clothing image as reference? I would like to try it myself with an item I saw in a store; I have the clothing picture and my own photo, but I don't know how to do it using ControlNet.
Excellent!
Hi! Great content, been following for a while. Just a suggestion: add a preview at the start of the video so it's more engaging!
very good
Beautiful! Thanks a lot!
Thanks Laura !!! New sub
It works, well done! It does not seem to work in SDXL yet, or I am doing something wrong, but we'll figure it out. Thanks, good workflow BTW, but I get 3 images in the preview saying "CFG 7.0, CFG 11.0 and CFG 15.0"; not a big problem, but it is different from your video.
Amazing video
Really nice video and nice workflow. Has anyone tried it with architecture?
Please, how do I install ComfyUI Manager on Google Colab? Thanks 💜💜💜
Can you make a video on how to inpaint an image, add a sketch in the inpainted area, and then run text-to-image on the sketch?
Example: adding a sketched building into an inpainted area of a site context.
Great video!
In this workflow, where exactly would the Instant-ID ControlNet go? I saw that the Efficient Loader has a "cnet stack" socket, and I was wondering whether it should somehow be inserted before the loader, even though instinctively that seems counter-intuitive to me, since it usually goes between the loader and the KSampler. In short, do you have to unpack the loader's (SDXL) tuple outputs and recompose them with Instant-ID in the middle? Thanks! Ciao.
Hey Laura!
I downloaded your workflow... but it doesn't work... Efficient Loader and KSampler (Efficient) are in red. How can I solve that?
The first part of the workflow (multi controlnet) works correctly
What is the error?
@@LaCarnevali SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
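For what it's worth, that JSON.parse message usually means the saved file isn't pure JSON: a parse failure right at the start often indicates an HTML download page was saved instead of the raw .json attachment. A quick sketch in Python for checking a workflow file's contents before loading it into ComfyUI (the function name is mine, not part of ComfyUI):

```python
import json

def check_workflow_text(text):
    """Return None if `text` parses as JSON (a loadable ComfyUI workflow),
    otherwise a short description of where parsing failed."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as e:
        return f"line {e.lineno}, column {e.colno}: {e.msg}"

# a real workflow parses cleanly; a saved HTML page does not
check_workflow_text('{"3": {"class_type": "KSampler"}}')  # -> None
check_workflow_text('<!DOCTYPE html>')                    # -> error message
```

If this reports an error on line 1, re-download the .json attachment itself rather than saving the page around it.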
What about multiple control nets for sdxl models?
The examples and models for ControlNet 1.1 are SD 1.5, no?
Nice tutorial thanks for sharing
Appreciated
Hi, your video is very good, but maybe I misunderstood: my loaded image is about 1:5, and I want to generate the image in proportion. I tried different settings but it's not working; the resulting image is cropped now. Thank you.
Hi Laura,
A question for you re: using ControlNet with image2image instead of text2image.
I notice that ControlNet for controlling a pose works best if you use a style checkpoint (e.g. as downloaded from CivitAI etc.), but can it be used without that, with only an image for the style plus a pose reference within ControlNet?
Another great video, thx. When it comes to ComfyUI, I miss the "Seed Variation" ability (setting variation strength) that Automatic1111 provides. Is there a solution for that in ComfyUI? Maybe adding some kind of seed-based noise?
Nice tutorial! I'm very new to ComfyUI and I wonder if it is possible to preview a ControlNet preprocessor's result without queueing the whole prompt (the same thing that little "explosion" button does in Automatic1111). For example, if I need to quickly adjust the preprocessor's resolution before running the generation itself?
Thank you a lot for the tutorial! I keep getting AttributeError: 'ModuleList' object has no attribute '1'
Working on a Mac M1. Do you have any idea what could be wrong? Maybe I am using the wrong model? Thank you!!
Does it work with base Pony checkpoints and LoRAs?
Looks great but the workflow won't load right. Any idea why "CR Aspect Ratio" and "PreviewBridge" node types were not found?
You need to install the package - should follow instructions from min 3:00
I think you have to manually replace the red aspect ratio box with a new version from the RockOfFire ComfyUI_Comfyroll node (Add node > Comfyroll > Other > CR SD1.5 Aspect Ratio).
@@Kryptonic83 I'll just confirm that this is exactly what you need to do.
What Stable Diffusion model should I use to process my art drawings? Any recommendation? Thanks, great videos.
sd 1.5 and use scribble controlnet or lineart / canny controlnet (it depends on the type of image you need to process)
You can also use SDXL (which has better quality), but controlnet is not amazing (not yet)
Does anyone know why nodes are missing? Even after installing them with the Manager, the nodes are still red, and I can't really use the workflow.
Hi, the developer of the nodes I used at the time stopped updating them, and they are not available anymore.
Thanks a lot for the tutorial, multi-ControlNet is really cool. But when I tried to load your workflow and run it with my own 1024x1024 reference image, I always got the error "linear(): input and weight.T shapes cannot be multiplied (77x2048 and 768x320)" at the KSampler, even though I already set CR Aspect Ratio to 1024x1024. Do you know why?
Check that your checkpoint is version 1.5 and your ControlNet is v1.5; if the checkpoint is XL, it will give that error.
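To unpack that reply: SD 1.5 text embeddings are 768-dimensional, while SDXL's are 2048-dimensional, so feeding an SDXL checkpoint's embeddings into SD 1.5 ControlNet/UNet weights produces exactly that shape mismatch. A tiny illustrative sketch (the helper function is hypothetical; it just mirrors the inner-dimension rule behind the linear() error):

```python
def can_matmul(a_shape, b_shape):
    """A matrix multiply (m x k) @ (k x n) only works when the inner
    dimensions match -- which is what KSampler's linear() error reports."""
    return a_shape[1] == b_shape[0]

# SDXL text embeddings (77 tokens x 2048) vs SD1.5 projection weights:
can_matmul((77, 2048), (768, 320))  # -> False: the reported error
# SD1.5 embeddings with SD1.5 weights line up fine:
can_matmul((77, 768), (768, 320))   # -> True
```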
It took 30 mins to process, loading some SDXL model, don't know why. Can't really use it for SDXL; switched to 1.5 Realistic Vision and it was normal. Very odd.
sdxl is very heavy :S
Can you process batch images with this, though?
Hey there! Why do I get "Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)"? Thanks!
I had the same problem until I deleted the red aspect ratio box, which I think is now a deprecated version in that custom node pack (RockOfFire ComfyUI_Comfyroll_CustomNodes). I think you can add a CR SD1.5 Aspect Ratio to replace it (Add node > Comfyroll > Other > CR SD1.5 Aspect Ratio).
exactly - no need to use it anyway :)
Can you please also explain how to install DWPose? I have tried to install it from the ComfyUI Manager, using the Git link, but I get the error: "Cmd('git') failed due to: exit code(128)". Is there another way to install it? Does it need ControlNet to be installed first?
I am sorry to ask, but I can't find any tutorial about DWPose and ComfyUI; all are for A1111.
Update: after hours bashing my head on it, I managed to install it. Can't remember how.
However, it now seems that having DWPose and ReActor Face Swap at the same time is not possible in ComfyUI. For some reason, to use DWPose I need to first disable ReActor. Any comments on this?
model checkpoint pls ?
👋
there is a special node "Remove BG"
good to know!
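For readers without that node, the depth-mask idea from the video (13:51) can be sketched in plain Python: normalize the depth map and threshold it so near pixels become the foreground mask. This is an illustrative simplification with a toy depth map, not the actual node's implementation:

```python
def depth_to_mask(depth, threshold=0.5):
    """Normalize a 2D depth map (higher = closer to camera) to [0, 1] and
    mark pixels closer than `threshold` as foreground (255), rest as 0."""
    lo = min(min(row) for row in depth)
    hi = max(max(row) for row in depth)
    span = (hi - lo) or 1.0  # avoid dividing by zero on a flat map
    return [[255 if (v - lo) / span > threshold else 0 for v in row]
            for row in depth]

# toy 4x4 depth map: a near (high-value) subject on a far background
depth = [[0.1, 0.1, 0.1, 0.1],
         [0.1, 0.9, 0.9, 0.1],
         [0.1, 0.9, 0.9, 0.1],
         [0.1, 0.1, 0.1, 0.1]]
mask = depth_to_mask(depth)  # subject pixels -> 255, background -> 0
```

Multiplying the original image by this mask (per pixel) keeps the subject and zeroes the background, which is essentially what compositing with the depth mask does in the workflow.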
Hi Laura, is it possible to get some explanation in Italian? Maybe even privately ❤
Hi Francesca! You can write to me at hello@intelligentart.co. I don't usually do private training, but let's see if I can help you :)
Thanks Laura, I've sent you an email 😊
Eyebrows
I was looking for a medal and I found a gold mine.
CR Aspect Ratio node... I can't find it.
It has been deleted, but it is not needed.
ur gpu?
NVIDIA RTX 3090
Multi-ControlNet Stack doesn't exist... I searched for it through the Manager.
Yes, sorry, it looks like they have updated/deleted a lot of stuff since I made the video... that's why I hate making videos using ComfyUI LOL. I'll try to make another video about ControlNet with ComfyUI -_-
I don't understand anything. Do you have tutorials from Basics?
Starting from ComfyUI is not the best - I would start from A1111
Comfyroll just doesn't work anymore. :(
Yeah, that's why I hate making videos with ComfyUI LOOOOL. I might make another one to show how to use ControlNet in other ways.
It is a shame. ComfyUI is my favorite one @@LaCarnevali
ComfyUI is the wrong direction to go in my opinion. I believe it makes Stable Diffusion more confusing, anti-intuitive, and increases the likelihood of mistakes dramatically... all for slight improvements. Any time put into ComfyUI is time wasted. Again... my opinion.
If this targets beginners it's poorly done. Very hard to follow.
ComfyUI is not for beginners :/ would suggest to use A1111 :D