Join the conversation on Discord discord.gg/gggpkVgBf3 or in our Facebook group facebook.com/groups/pixaromacommunity.
You can now support the channel and unlock exclusive perks by becoming a member:
th-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
Could you do a vid on Flux.1-dev-Controlnet-Upscaler
When I watch one of your episodes, I still feel the same excitement I had when I first started learning, as if I haven't learned anything at all
I started with ComfyUI because of you ☺️ I just needed a little push
I can't begin to express how thankful I am for these videos. This whole comfyUI subject has been so overwhelming to me, but your calm and detailed approach to explaining everything has made this such a joy to work with. Thank you so much!
you are welcome 🙂
You are an amazing teacher! I always find ComfyUI confusing, but after binge-watching your tutorials everything makes sense; even other tutorials that were collecting dust in my playlist suddenly make sense. Can't thank you enough!
Glad I could help, thank you so much ☺️
Literally the best guide ever. You have no idea how I had to teach myself all this months ago, and now I'm seeing you carefully put all the pieces together exactly how it should be. Wish I had watched this sooner haha. But maybe because I had to learn it alone first, I can tell that you are doing it correctly. Thanks and keep up the good work.
With the updates and changes of these tools and nodes though, what we learn today is already outdated the next day.
This guide is very ComfyUI-newcomer-friendly for someone who wants a decent step-by-step explanation and the right tool for the right job, without a bunch of pointless talking and messy instructions.
thank you 🙂
Great video, this is really helping me learn Comfy UI!
You've done really well with this video, informative, helpful and actually fun to watch. Thanks a lot and looking forward to more from you
Another excellent video... and as always, you throw in extra helpful tidbits about ComfyUI to boot! Well done! I'll be a Legend now!! 😀 I struggled a little with the Error that you mentioned, but your Discord is super helpful and I found the Windows "Long Path" solution there and BAM, it's working fine! Again, thanks!!
Thank you. Yeah, that long-path limitation on Windows is annoying; not sure why Windows hasn't fixed it by now.
Your videos always add to my growing knowledge of AI art. Thanks!
Thank you! Very useful, easy to follow and practical tutorial as always. I would still hate ComfyUI if it weren't for your channel! I feel very comfy working in it now :)
Thank you so much, glad I could help ☺️
You could say, that you've gotten COMFY with ComfyUI ;)
Pixaroma you are an incredible teacher! I've learned so much from you about comfy UI! THANK YOU!
Thank you so much for your support, glad I could help ☺️
You are doing great man 💯 Teaching and giving the knowledge to others 👏🏻
Results are amazing! Thanks for sharing!
I got a different error, that it could not find "cudnn_graph64_9.dll". But the fix was the same. I knew I already have CUDNN installed so it had to be that. Thank you so much!!! This one is useful in a lot of ways
Thanks for the tip on Flux Resolution Calculator, I was using Comfyroll's Aspect Ratio which has not been updated but worked. Always learn something from your EP's.
Glad it helped 🙂
Thank you for this well explained tutorial and for your discord channel that has so many knowledgeable and helpful members.
thank you for all the help 🙂
amazing!! AI is getting better day by day
I found this video helpful. Like button was pressed. Comment written. :)
thanks 🙂
WOW - This is what I was waiting for! Thanks!
Thank you so much for all the help ☺️ you are the best
👌🏻
🥂 Cheers to channel legends 👑
☺️👑
Really helpful tutorial
Thank you very much for the great and detailed tutorials. Great job.
Thanks for joining and support ☺️
Thank you again for a wonderful tutorial. Finally got the poses working correctly after like a week of trying and it not working!
Thank you... very well explained
Nice one. keep up the good work
Thank you for a most excellent video. Your speaking style is pleasant and to the point. Subscribed. 🙏👌
Thanks and welcome 🙂
Your tutorials are really well presented, Thank you!
Thanks!
Thank you so much 🙂 for testing and support and everything
Excellent, thank you!!!
Thanks a lot, amazing!
Tysm man ❤❤
Thanks for the explanation. What extensions/plugins are you using for monitoring the CPU/GPU, and for saving the .json with an image?
For monitoring I use Crystools. The workflow is saved in any image you generate with ComfyUI, so if you drag any generated image into ComfyUI, you get the workflow that was used to generate that image.
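As a small hedged sketch of that embedded workflow: ComfyUI writes the workflow as JSON into the PNG's text chunks (the "workflow" key name used below is an assumption based on the default image saver; adjust if your build differs), so a few lines of Pillow can read it back without opening ComfyUI:

```python
import json
from PIL import Image

def extract_workflow(path):
    """Return the embedded workflow dict from a ComfyUI PNG, or None.

    The "workflow" text-chunk key is an assumption about how ComfyUI's
    default saver embeds metadata; some builds also use a "prompt" key.
    """
    img = Image.open(path)
    raw = img.info.get("workflow")  # PNG text chunk written at save time
    return json.loads(raw) if raw else None
```

Dragging the image into ComfyUI does the same thing through the UI; this is just handy for scripting, or for checking whether an image actually carries a workflow before sharing it.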
comment :) thanks for the tutorials.
Thanks a lot!
Hello, thank you for your guides, they are easy to follow. Though I have one question: I already have a depth image, so I don't need the preprocessor. How do I use that?
I didn't try without it, but it should work if you load it directly into the image input of the Apply ControlNet node. If that doesn't work, you can also try adding a "SetUnionControlNetType" node right after the Load ControlNet Model node and selecting depth there.
@@pixaroma Yeah, I just load the depth image into Apply ControlNet. Thanks 👍
Thumbs up
❤
Great video. It works great!... But it is so slow compared to just using Flux. Are there any parameters I can play with to help? Does the full 6GB ControlNet end up in VRAM?
It is slow because Flux already takes a lot of resources, and even those with 24GB of VRAM struggle with the full dev model; that is why I used Q8. You can try Q4 or Q5 dev to see if it helps, or change the VAE Encode and VAE Decode nodes to VAE Encode (Tiled) and VAE Decode (Tiled); there you can try a 512 or 1024 tile size.
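To see why dropping from Q8 to Q4/Q5 helps, here is a rough back-of-envelope estimate of weight size. The ~12B parameter count for Flux dev and the average bits-per-weight for each GGUF quant level are approximate assumptions, not exact figures:

```python
# Rough sketch of model weight size at different GGUF quantization levels.
# Parameter count and bits-per-weight values are approximate assumptions.
def model_gib(params: float, bits_per_weight: float) -> float:
    """Weight size in GiB for a given parameter count and quant level."""
    return params * bits_per_weight / 8 / 1024**3

FLUX_DEV_PARAMS = 12e9  # assumed ~12B parameters

for name, bpw in [("FP16", 16), ("Q8_0", 8.5), ("Q5_K", 5.7), ("Q4_K", 4.8)]:
    print(f"{name}: ~{model_gib(FLUX_DEV_PARAMS, bpw):.1f} GiB of weights")
```

That is only the weights; activations, the text encoders, and the 6GB ControlNet all add on top, which is why smaller quants plus tiled VAE decode make the difference on mid-range cards.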
So how do we add extra LoRAs? Thanks for your work. It helped a lot.
Check Discord; I added an example workflow in the pixaroma-workflows channel.
Every time I install ControlNet Auxiliary it completely breaks my ComfyUI, and I get the error "RecursionError: maximum recursion depth exceeded" after restarting, every time I try to run it.
You can try installing this version in a separate folder; it comes with all the nodes installed, just to make sure it's not some dependency error: github.com/Tavris1/ComfyUI-Easy-Install
How can I get this workflow? Discord invite link is invalid.
The one from the channel header should always be valid; try this: discord.com/invite/gggpkVgBf3
Can you use more than one ControlNet preprocessor with Flux? Like using the stacker as in the SDXL version, or maybe chaining them like LoRAs or something?
The one I used for SDXL doesn't work with Flux; it asked for a VAE and the Apply Multi-ControlNet node didn't have one, so I haven't found a solution for Flux yet. But you can probably tune the settings so one preprocessor does a good job for what you need.
@@pixaroma thanks 😊
Hi, I'm using flux schnell, is there a controlnet union for flux schnell?
The one for dev should work with schnell too; I am not aware of a schnell-only ControlNet.
Can this be used with lower VRAM, like a 4060 with 8GB?
I don't have a way to test it; maybe try the schnell version first. The Q8 dev will probably take too much even if it works, so maybe Q4, but try schnell first. Flux is quite demanding when it comes to VRAM.
Is there a Flux ControlNet model for normal maps?
Not sure; I mostly use canny and depth map. You'd have to try it and see.
I would like to add the Power Lora Loader (rgthree) node to this workflow.
Check the Discord; today I added a workflow with both ControlNet and a LoRA. I used only the Load LoRA node.
Does anyone know a way to render the backgrounds sharp in Flux? I have tried models but they reduce the quality of the output.
I think I saw a LoRA that unblurs somewhere on Civitai, but it also depends on the prompt. If you prompt for a portrait, the background is blurred in reality too, because the subject is in focus and there is depth of field: if you look at your hand, the hand is in focus but everything else is blurred.
@@pixaroma I have tried this LoRA but have not been able to achieve any really good results so far. That's true, but unfortunately various prompts have not produced really sharp images for me, even with landscape images etc. Do you have a video on better prompts for sharp images in Flux?
@Tamam-e8e Try using 30 steps instead of 20, and try different samplers and schedulers.
Sorry, but this doesn't work. If I feed the output of the canny line preprocessor into Apply ControlNet with the Union ControlNet, I just get the lines from canny in the KSampler output. Same for depth, HED, etc. The whole Union ControlNet model is broken.
Did you use the workflow I put on Discord, and it didn't work? Show me a screenshot on Discord.
Edit: "--reserve-vram 2" seems to solve this problem. It still fills up the VRAM, but it doesn't fail anymore 🙂
I can't make this work. As soon as I use the ControlNet I get an OOM error. It destroys my VRAM: I have 16GB and it's filled up in seconds. I have tried with X-flux too, same thing. What the H am I doing wrong? I can run Flux dev (and GGUF Q8) fine, but as soon as I activate the ControlNet it dies? Before I start ComfyUI I use 550MB of VRAM, so it's not something else using it.
It might be too much with ControlNet and Flux together. Does it work if you try a smaller size, just to make sure it's not something else? Also post in the comfyui channel on the Pixaroma Discord server; maybe someone has the same card and can test it.
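For reference, the `--reserve-vram` flag from the edit above is passed when launching ComfyUI. A minimal sketch, assuming a typical manual install (the `ComfyUI` folder name and plain `python main.py` invocation are assumptions; portable builds use their own launcher scripts):

```shell
# Launch ComfyUI while asking it to leave ~2 GB of VRAM untouched,
# giving the OS/browser and the ControlNet load spike some headroom.
cd ComfyUI
python main.py --reserve-vram 2
```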
Is it possible to use Flux ControlNet in Forge? If so, can you make a tutorial for it? It doesn't seem to work for me.
I've found no good news about this. So far it's not supported in Forge, sorry.
I am not focusing on Forge right now since the updates are slow. I saw this comment on the issue page: "The rewrite of ControlNet Intergrated will start at about Sep 29. The estimated finish date is about Oct 7. When this note is announced, the main targets include some diffusers formatted Flux ControlNets and some community implementation of Union ControlNets. However, this may be extended if stronger models come out after this note."
@@pixaroma Hey, that's good to know! But did you notice that Forge gives nicer Flux outputs than ComfyUI? I noticed that once I started making a few Flux LoRAs; it's almost night and day sometimes... Inpainting is easier too. Of course it's possible to do nice things in Comfy as well, but it's a lot more difficult, just like before with SD models...
@@tetsuooshima832 If the model is the same, the generations should be similar; it's just that the same seed is not the same on Forge as in ComfyUI. One generates the seed on the CPU and one on the GPU, so comparing the same prompt and seed will give different results. But if you just generate with the same model, I don't see how they'd be nicer; it probably depends on whether the workflow does something extra behind the scenes, not sure. I only use ComfyUI these days, since I can build the workflows the way I want, and with one click it does everything for me: generates a prompt, uses it, upscales, saves, etc.
How do I use both a LoRA and ControlNet in this workflow? I want the LoRA for the character's face and ControlNet for the pose.
You need to add a Load LoRA node between the Load Checkpoint and CLIP Text Encode nodes and redo the connections so everything goes through that LoRA.
@@pixaroma Can you share this workflow with the LoRA connection? It would help a lot 😄
GGUF cannot recognize LoRAs
@@李用-k1d I use loras with GGUF with no issues.
@@raininheart9967 I shared the workflow on Discord; it's in the pixaroma-workflows channel.
Nope, I can't do it. It seems that people here are computer nerds. You should take into consideration that there are people like me who need a more detailed explanation and a step-by-step procedure if you want us to follow you.
I assumed people started from episode one rather than jumping straight to episode 14 if they want to understand it all; I cannot explain the same basics in every episode. I include the workflow on Discord, so if it's too much information you can just download the ready-made workflow and load it into your ComfyUI. If you have problems and questions, the Discord is quite active and people can help and answer them. ControlNet is already explained more slowly in the SDXL ControlNet episode.
@@pixaroma No, I just have a problem with the AIO preprocessor. With all the rest I'm good, so I couldn't go on and watch further into your videos, which is a pity.
Mention me on Discord and show me a screenshot with the error; I am sure it can be fixed. It's either the long-path problem on Windows, which doesn't let you download all the files automatically because the path is too long, or ComfyUI didn't update to the latest version.
All good now; after watching this video again and again, and with the help of ChatGPT, I fixed the problem. But now after I queue the prompt I get "CLIP Text Encode: 'NoneType' object has no attribute 'device'". The problem is on the negative prompt node. I don't know what the hell this is now.
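For anyone hitting the Windows long-path problem mentioned above, the usual fix is enabling long-path support in the registry. This is the standard Windows setting rather than anything ComfyUI-specific; run it from an elevated (Administrator) command prompt, and a sign-out or reboot may be needed afterwards:

```shell
rem Enable NTFS long-path support so git/pip can write deeply nested files.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```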