This is by far the best explanation of setting up Flux + ControlNet I have seen so far, since you actually explain everything rather than just "here's my over-complicated workflow!". The node layout is so nice and clean. You did more than enough to earn a sub and a like from me. Keep it up!
I am glad to hear that the step-by-step approach was clear and helpful for you. Your support is encouraging, and I appreciate your sub and the like. Thank you so much for your time and the amazing feedback.
Thanks bro... I love the way you detail the whole process... you are a rock star, thank you!
You are very welcome, and thank you for your compliment.
Much love from South Africa! Thank you for this video!!! I'm busy making a short horror movie for fun using Flux Dev and KLING to do image-to-video, and this is EXACTLY what I need! Because I need to make consistent characters but I only have 1 input image of the character as reference. Man I didn't know they had a character pose system for flux yet THANK YOU!!! :D this needs to be ranked higher in google!
You are very welcome! I am glad it was helpful for your short horror film project, and I appreciate your feedback. It is always great to connect with local creators, especially since I am currently in South Africa. Happy creating!
omg bro, just what i need 🔥🔥 THANK YOU clear rhythm, working method
You are most welcome. I am glad to read your feedback. 💜
Thank you very much for this tutorial... the right pace and a detailed explanation.
Thank you so much for the kind words!
Thanks so much for your hard work, very useful videos.
You are very welcome! I appreciate your encouraging feedback. Thank you!
So helpful. Thank you for starting fresh and walking us through each step. Definitely earned a sub.
Thank you so much! I’m honoured to have earned your subscription and glad you found this helpful.
Thanks and it is nice to see a cleaner node layout, instead of a jumble of nodes and connections, which too many Comfy tutorial makers seem to love.
I am glad it was helpful! Thank you for the observation and feedback. It means a lot.
Amazing, concise, understandable. Congrats man, keep the good work.
Thank you so much! I appreciate it.
Just wanted to say, you are amazing!!
Hearing that means so much. Thank you for your support.
This is amazing! Thank you so much. Subscribed!
Thank you! It’s good that you just tell and show what to do and how to do it. Otherwise you could spend your whole life learning ComfyUI. Learning as you go, in practice, is easier.
I'm really glad to hear that the straightforward approach is helping you! Just diving in and practicing as you go makes it a lot easier. Thanks again for the feedback!
Thank you, you are excellent!
That's very kind of you!
Thank you!
You are more than welcome.
I love this, already subscribed
Thank you for being here. I appreciate your support.
Also, for anyone experiencing an issue downloading the YOLO model: go into the ComfyUI folder (ComfyUI > custom_nodes > ComfyUI-Manager) and you will find a config file. Open it in a text editor, and where it says bypass_ssl = False, change False to True and save. Restart ComfyUI and you will be able to download the YOLO model with no problem.
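For reference, the file in question is ComfyUI-Manager's config.ini (typically at ComfyUI/custom_nodes/ComfyUI-Manager/config.ini, though the exact path can vary by install), and the edited line should end up looking roughly like this:

    [default]
    bypass_ssl = True

Since this disables SSL certificate checking for Manager downloads, it is worth switching it back to False once the YOLO model has downloaded.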
Great video as always! Thanks!
Thank you for your encouragement.
Really good Explanation, Keep up the good work :)
Thank you for the motivation! I'm glad I could help.
Superb work mate
Thank you so much, Suda! Love
Dope stuff. You rock!
I appreciate that! Thank you!
Great video!
I'm glad you enjoyed it!
Very helpful, thank you.
I appreciate your feedback.
Great tuts! Helped me install Flux1 seamlessly - however, I don't seem to have DWPreprocessor or ControlNet Apply in my drop-down lists? I get this message in Manager - 【ComfyUI's ControlNet Auxiliary Preprocessors】Conflicted Nodes (3)
AnimalPosePreprocessor [ComfyUI-tbox]
DWPreprocessor [ComfyUI-tbox]
DensePosePreprocessor [ComfyUI-tbox]
So I uninstalled ComfyUI-tbox and still no joy? Do you have any suggestions?
I can't lie, this was the best consistent character video for sure! Is this able to work with SD3.5?
Thank you for coming here, and I appreciate your feedback.
Yes, it is possible! Just keep in mind that SD3.5 might need the right controlnet models and slight adjustments to the ControlNet parameters to achieve the same consistency since it has a few differences in model handling.
If you can tweak those and add the right nodes, you should be able to get great, consistent characters!
@goshniiAI well, since I'm super new to ComfyUI, I guess I'll just wait for someone to make a video about it. By the way, great video! I would use Flux, but my issue is that I heard Flux has very strict commercial use rules.
Perfect, but what if I want to use an image as input instead of a prompt?
When doing the first queue prompt for the AIO Aux Preprocessor, I just get a blank black image.
Double-check that your image resolution matches the AIO's setup; mismatches can sometimes be the cause. Also, tweaking the strength values for ControlNet can help the AUX preprocessor interpret the image better. It took me a bit of experimenting with these settings too! I hope this helps.
@@goshniiAI I still get a blank image. Also, the strength comes after the preprocessor's save image, so I don't think it affects it?
Great content in your video! I really enjoyed it. One suggestion I have is to improve the echo in your voice using a tool called Audacity. It can help enhance the audio quality significantly. Feel free to contact me if you need any help with that. Keep up the good work!
Thanks a lot for the awesome suggestion and kind words! I am considering using Audacity; I've heard it's great, so I'll definitely give it a try. If I run into any issues, I might take you up on your offer to help! Thanks again for watching and giving me some really helpful input.
great thanks
You are welcome!
THANKS
You're welcome!
Thank you very much for this priceless video. You say the parameter cfg is set to 1 because we are not using the negative prompt. As far as I know, Flux doesn't use negative prompts, so I am a bit confused. Could we just remove the negative prompt node from the workflow?
You are welcome, and you are entirely correct. However, the KSampler will still require a negative conditioning input, so the negative prompt node is linked for that.
thanks
You're welcome!
How to use the image reference in animation?
thank you!!!!
You're welcome!
Thanks for the video. This is awesome. Do you use this to create LoRAs? Or what do you use the character sheets for?
I haven't specifically used this workflow to create LoRAs, BUT character sheets can definitely be a foundation for that. They help you capture a character in different poses and perspectives, making it easier to feed consistent images into training processes for LoRAs.
Also they are super useful for game development, animation, or just keeping a consistent look across different art projects
Great video. I wonder what system specs you use to run this. I ran out of VRAM with a 20GB card using the GGUF flux-dev-Q5, so I guess I might be doing something wrong.
I've got an NVIDIA RTX 3060 card with 12GB. It's happened to me a few times. Just make sure to close all the apps that might be using your GPU. You could also try using an upscale of 2 instead of 4. And sometimes, saving the workflow and then restarting ComfyUI helps things run smoother.
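If you launch ComfyUI from the command line, its built-in low-VRAM mode can also take pressure off a 12GB card; a typical invocation (assuming a standard install, adjust to yours) is:

    python main.py --lowvram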
For the pose reference, can we add our own pics posing as we like? Will it work?
Yep!!! You can use any picture, and then you'll need ControlNet to extract your pose.
It shows "(IMPORT FAILED) ComfyUI's ControlNet Auxiliary Preprocessors" when I try to install ControlNet Auxiliary Preprocessors... anyone, please help.
Make sure you're running the latest version of ComfyUI. Sometimes, older versions don’t play well with newer add-ons.
IDK if you can help me, but I've had problems with this AIO Preprocessor:
AIO_Preprocessor
'NoneType' object has no attribute 'get_provider'. Please help!
A missing or outdated dependency can cause this, so make sure to update ComfyUI.
Otherwise, you can continue to use individual preprocessors for each ControlNet model; that will still work fine.
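If the Manager's update button doesn't resolve it, a manual update (assuming ComfyUI was installed by cloning the Git repository) usually looks like this, run from inside the ComfyUI folder:

    git pull
    pip install -r requirements.txt

Portable builds ship an update script in their update folder instead, and custom nodes can be updated separately via Manager > Update All.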
How do you get the super cartoony prompts, like that cool robot? I keep generating 3D characters.
Sweet workflow! Subbed!
Welcome on board! Here is the prompt for that.
A Cyberpunk Mecha Kid, concept art, character sheet, in different poses and angles, including front view, side view, and back view, turnaround sheet, minimalist background, detailed face, portrait.
great ai master
Thank you, Sensei!
Love your videos. I purchased the pack including the one in this video but I'm having issues. I keep getting the following error. 'CheckpointLoaderSimple
ERROR: Could not detect model type of: flux1-dev-fp8.safetensors' . Where would I download the correct model for this to work?
Thank you for supporting the channel. Make sure you're grabbing the specific FP8 version of the model and placing it in the models/checkpoints folder within your ComfyUI directory.
Double-check that the file name hasn’t changed (e.g., flux1-dev-fp8.safetensors) and that it's saved in the right format. If you need further guidance, feel free to view this step by step video th-cam.com/video/TWSFej_S_bY/w-d-xo.htmlsi=hWosspilbjYj3QWl
@@goshniiAI Thank you! It worked, but is it normally very slow when it hits the first KSampler? It takes forever to get past this point.
@@LaMagra-w4c Yes, FLUX Dev can be a bit sluggish when it hits the first KSampler. It's not just you!
Here are a few tips to speed things up: use quantized models, lower the sampling steps, and make sure your GPU and VRAM aren't getting held back by other stuff running in the background.
Wow nice
Thank you!
Great video, very helpful! What kind of spec do you need for this flow?
I'm able to run some Flux1D stuff, but ComfyUi keeps getting killed for taking too much memory with this workflow :(
Thank you! I'm glad you found the video helpful. If you're already running FLUX1D, ideally you'd want at least 12GB of VRAM for smoother runs. You can try lowering the resolution of the inputs or using quantized models to reduce memory usage.
@@goshniiAI any system RAM requirements? That's given me grief in the past, before I upgraded it.
How to know which other models are trained for use with controlnet? I basically want to create a 2d cartoon character turnaround sheet using your workflow
Hello, and thank you for watching and engaging. ControlNet only conditions your prompt to take the specific pose you want. So to find models that work smoothly with ControlNet, you can explore Civitai. Sometimes the models include detailed tags indicating ControlNet compatibility; however, the majority of models are trained for ControlNet.
For that 2D cartoon character turnaround, try searching for models tagged with styles like “cartoon” or “illustration”.
I hope this helps.
My AIO AUX Preprocessor is not working; it says it's not in the folder. What should I be looking for in that folder, and if it's not there, where can I get the preprocessor?
First, double-check that the ControlNet Auxiliary Preprocessors folder is present in your ComfyUI directory (typically custom_nodes/comfyui_controlnet_aux).
If it's missing, you can download the necessary files by using the Manager.
Then make sure you update ComfyUI to the latest version.
Hi bro, thanks for the video. Please, which PC do you recommend for all of this? I am trying to get a laptop, but I don't want to make mistakes, as I want it for traditional video editing and AI video/image generation.
Aim for at least an NVIDIA RTX 3060 or higher with 6GB or more VRAM. This will help with both rendering in video editing software and running AI generation workflows efficiently.
Also, RAM size of 32GB is ideal for smooth performance, especially when multitasking or running resource-heavy AI models.
Any idea why I can't get it to work? Strangely, I got your workflow from the link you provide and generated my image with the 3 views like you (before applying the ControlNet). Then I ran the workflow again to apply the ControlNet pose (it shows like in your video with the reference image provided; I see the pose extracted correctly). But when I run the workflow to apply the ControlNet, instead of the 3-view picture, I don't get the panel view applying the previously generated character to the ControlNet pose, but a single centered character... I'm really not sure what went wrong lol, so if you have any idea, thx
Thank you for diving into the workflow! Here are a few tips that might help:
- Before you run the workflow again, just make sure the reference images for ControlNet are lined up right. Take a look at your positive prompt and think about adding multiple views if you haven't already.
- It's a good idea to double-check the ControlNet settings, especially the resolution and how the preprocessor reads the pose data. Sometimes tweaking those can keep you from getting just a single centred result.
I hope this helps.
It would be nice if we could upload a 3D file like a glb so the software has every angle of the model. It would make consistent characters a lot easier.
.glb would advance the creation of consistent characters. That might just be a possibility in the future!
Is there automated way in comfy to split the character sheet into individual images to train LoRAs on the character?
Yes, you can get individual images by using the image crop node.
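If you'd rather split the sheet outside ComfyUI, here is a minimal Python sketch using Pillow; it assumes the sheet is an evenly spaced grid, and the filename and the 3x2 layout are placeholders to adapt:

    from PIL import Image

    sheet = Image.open("character_sheet.png")  # hypothetical filename
    cols, rows = 3, 2                          # match your sheet's layout
    w, h = sheet.width // cols, sheet.height // rows
    for r in range(rows):
        for c in range(cols):
            tile = sheet.crop((c * w, r * h, (c + 1) * w, (r + 1) * h))
            tile.save(f"pose_r{r}_c{c}.png")   # one training image per pose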
Is there a way with this workflow to use an image of a person that would be part of the output character sheet?
Hello Steven, the answer is sadly no for this workflow. I have explained in the next tutorial how to achieve this with the IP Adapter, but it uses SDXL rather than FLUX due to the IP Adapter's consistency.
To obtain an accurate input image, I recommend creating a character sheet for your character concept and then training a lora using your images.
@@goshniiAI oh ok, that works also. Doooo you happen to have a link to a LoRA training video :D
@@stevenls9781 Not just yet. For now, I do not have a video of Lora training with FLUX, but I am considering making one to share the process.
You can check out this reference video that might assist you: th-cam.com/video/Uls_jXy9RuU/w-d-xo.htmlsi=EJoLucxVyOFFQKjB
Would you be so kind as to give the workflow for using an existing image or character? Thanks
Yes, hopefully the tutorial that follows will cover that.
@@goshniiAI can't wait
are you going to follow up on this video on how to use this character sheet to put them in different scenes/videos?
Thanks for the suggestion! I'll check it out since you mentioned it.
I'm getting an error when I try to use the DWPreprocessor (and several others). The message says:
# ComfyUI Error Report
## Error Details
- **Node Type:** AIO_Preprocessor
- **Exception Type:** huggingface_hub.utils._errors.LocalEntryNotFoundError
- **Exception Message:** An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
## Stack Trace
My internet connection is fine. Any advice?
Sorry to hear that; I would recommend updating any of your nodes as well as running an update for ComfyUI.
How to create multiple consistent cartoon characters interacting with each other on different scenes?
Hopefully soon, in the next post
I find that if you add another generation step beforehand to tell the AI to generate a design sheet for a mannequin, you can skip the part where you have to load an image into the ControlNet preprocessor.
Thank you for sharing that approach with everyone! Awesome tip!
This might be a dumb question but what do you do with a character sheet? You have a character in different poses, then what? Do you animate it? Do you use it for something else?
Not a dumb question at all! Character sheets are often used in animation, game development, and concept art to showcase a character in various poses or expressions, making it easier for artists or animators to reference and maintain consistency.
It's mostly a reference tool to visualize how the character moves and looks from different angles. If you're looking to bring these poses to life, you can definitely use them as a foundation for animation or even export them into 3D modeling software.
@@goshniiAI Cool! Maybe you could do a video on that? How to move from a character sheet to a 3D model :)
Can I use this for SDXL?
Yes, you can; just make sure to use the correct SDXL models for ControlNet, the checkpoint loader, and other SDXL-compatible nodes.
I was versed in character sheet making for over a year. However... I have yet to succeed at making, in one go and decently, a single-picture LoRA character that would produce the reference sheet of the original concept.
Your take is basically the Mick Mumpitz workflow with Flux. It's good as it is.
I'm really glad you found this workflow helpful and shared your experience! Flux really kicks it up a notch, and when you combine it with a refined approach like Mick Mumpitz’s, it really gives it that extra edge.
Great stuff, but there is definitely a missed opportunity to crop each pose and redo a pass of KSampler on it; you could even crop your ControlNet to fit the same pose.
You're absolutely right: cropping each pose and running it through KSampler again could really refine the details and give even more control over the final result. I'll definitely keep that in mind for future tutorials! I appreciate the insight.
How do I add a LoRA to this workflow? Please, I need a LoRA for my character's face and ControlNet for my character's pose.
To achieve the LoRA results, place the LoRA node between the Load Checkpoint and the prompt nodes. You can also follow this tutorial on how to use Flux with a LoRA. th-cam.com/video/HuDU4DlZid8/w-d-xo.htmlsi=-l4wISSzrH0i1wmp
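In wiring terms, the idea is simply this (a sketch; exact node names can differ between ComfyUI versions):

    Load Checkpoint --MODEL--> Load LoRA --MODEL--> KSampler
    Load Checkpoint --CLIP---> Load LoRA --CLIP---> CLIP Text Encode (positive and negative)

The LoRA loader sits on both the MODEL and CLIP connections, so the face LoRA influences the sampler and the prompt encoding, while ControlNet continues to handle the pose.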
And how do I use it to generate images after this set up?
You can use the character sheet to train your own LoRA, giving you a consistent character you can reuse.
But how do we make different poses and profile photos for loras etc? Part 2 would be awesome 😂 this is a great workflow and video thanks!
I'm glad you enjoyed the workflow and video! I appreciate your suggestion to create various poses and profile photos for LoRAs, and I will take it into consideration. True enough, Part 2 seems like a really good idea! :)
I can’t use AIO Aux processor, how do I fix this? 😢
No need to worry. You can use separate preprocessors for each model, and everything will still work.
Nice thanks. But what about when we want to use the character in a generation?
Yes, you can; here is a follow-up video that explains the process: th-cam.com/video/OHl9J_Pga-E/w-d-xo.html
Your UI is very nice. I still have the old look; how do I update to get your UI?
Please see my video here; towards the end, I explained the settings: th-cam.com/video/PPPQ1SANScM/w-d-xo.htmlsi=uMK8VUuxhCxyIerW
sir! which gpu are you using? and please suggest cloud gpu service site!
I'm using an NVIDIA RTX 3060 for my workflow. For cloud GPU services, I recommend trying out RunPod or Vast.ai; both offer flexible pricing and options for FLUX and ControlNet if your local hardware isn't enough.
It all works except the Face Detailer. It just gets stuck in a loop when it gets to that step. Endless loop with no error. Refreshing and Restarting did not help. Everything is fully updated.
Yes, that's correct; the face detailer continuously refines the face details until they are complete. Keep it running until it generates the final image. You got it right!
I can't find the ControlNetApply SD3 and HunyuanDIT nodes. Where can I install them?
The ControlNetApply SD3 is one of the core nodes in ComfyUI. Make sure ComfyUI is updated so that it becomes available.
@@goshniiAI I can't find it either. Auxiliary Preprocessors is installed and "ComfyUI is already up to date with the latest version."
@@goshniiAI I already have comfy and packages up to date and still can't find it
@@bluemodize7718 It has changed. It's been renamed to "Apply Controlnet with VAE"
@@bluemodize7718 same here
Bro this video is great but some nodes are missing...how should we fix this?
If you see missing nodes in your workflow, it means you have not yet installed the custom nodes. To install the missing nodes, go to Manager > Install Missing Nodes and then install the ones that appear.
That will help to find the missing nodes and fix them.
Update on the ControlNetApply SD3 node: supposedly it has been renamed to ControlNet Apply VAE.
Thank you for making us aware. We appreciate you watching out for that.
What is your ComfyUI panel in the browser?
Hello there, I have explained that towards the end of this video: th-cam.com/video/PPPQ1SANScM/w-d-xo.htmlsi=_KhvMhp30g_h2rxx
I hope this helps.
This "ControlNetApply SD3 and HunyuanDiT" is nowhere :/ I updated everything.
The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.
Wow, I really enjoyed this vid.
I am an absolute beginner.
I am confused. In the video you have your character in many poses and improved the details.
How would you take just one of those poses from the character (say Octopus chef) and put it in a new environment?
Do you have a video on that?
I'm really glad you enjoyed the video! It's awesome that even as a beginner, you're already asking great questions. If you want to take one of those poses, like our "Octopus Chef," and put it into a new environment, you can easily combine FLUX and ControlNet to lock in the pose while changing the background.
I haven't made a specific video on that yet, but it's a good idea for a future tutorial, and I'll definitely create a detailed walkthrough soon.
Have no idea what I'm missing to get ControlNetApply SD3 and HunyuanDiT. It does not update and does not show in Manager... so can anyone shed light? New to SD and Comfy. Thanks
The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.
@@goshniiAI Thanks! And thank you for an excellent video
@@RxAIWithDrJen You are most welcome. Thank you for being here
Can we download that workflow.. maybe I missed that in the vid.
Yes, you can use the link in the description.
@@goshniiAI oh man... if only I used my eyes. thanks for the reply.
ah I was looking for a JSON file or something, it's a PNG to use as a ref and copy into Comfy
@@stevenls9781 True! A PNG or JSON file can be used in the same way.
The benefit of using a PNG workflow is that you can see a preview of the node structure or layout. You only need to drag the PNG file into ComfyUI to load the workflow.
@@goshniiAI ah gotcha, I was just looking at them as an image preview and thought cool I can create it based on that. Now after doing it manually I have dragged the png into Comfy and it loaded.. hahahah well good practice following the image :D
So I have a question: rather than prompting everything in a single box, can we have a different workflow for each pose? For example, a sitting pose workflow, a standing pose workflow, and a jumping pose workflow, generating them individually rather than in one box.
Also, is there a way to make sure the character you are prompting remains the same over time? For example, the octopus man you prompted: let's say I want to use him for a children's story book, and I don't want to prompt all the scenes at once. I can prompt him sitting today, tomorrow he is standing, next week I want him eating, and this character remains the same throughout?
Thank you
What he showed in the video is called a character sheet. You can then use this character sheet as a reference image to tell flux what a character looks like and prompt any pose or action you want this character specifically. What you should now research, is how to use character sheets with flux.
Thanks for explaining and providing the extra information
How do you get that new Interface??, I updated everything and I still have the old interface
Nevermind, I found it
Awesome! I'm glad you found it.
@@goshniiAI By the way, Amazing video, Thank you
@@Huguillon i appreciate it, You are welcome
How to add a simple LoRA?
Hello there, you can view my guide about adding a LoRA in my previous video for FLUX. th-cam.com/video/HuDU4DlZid8/w-d-xo.htmlsi=FzSSqoe6OV_56l55
Why not share the JSON for Comfy? I went to Gumroad and downloaded your files, but was surprised there is no JSON, just an image of your setup!?
You sure the image didn't have the comfy workflow stored into it? Did you try dropping it into Comfy UI?
Yes, you are right; the PNG image works the same as a JSON file. You only have to import it or drag and drop it into ComfyUI.
@@goshniiAI I saw that later... sorry I thought comfy only accepted json... thanks for your work!
@@tmlander you are most welcome, thank you for sharing an update.
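For the curious: drag-and-drop works because ComfyUI embeds the workflow graph as JSON in the PNG's metadata. A quick way to peek at it in Python (assuming Pillow is installed; the filename is a placeholder):

    from PIL import Image

    img = Image.open("workflow.png")   # any PNG saved by ComfyUI
    print(img.info.get("workflow"))    # the embedded graph JSON, if present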
Cool but now how can we use that to create a consistent character in a scene with flux?
I am looking into it, and hopefully we will have a video guide on it soon.
@@goshniiAI Cheers mate keep up the great content
I can't use "DWPose" as a preprocessor. I get some strange errors; it could have something to do with the onnxruntime-gpu / CUDA version. Someone wrote: "The error message mentioned above usually means DWPose, a Deep Learning model, and more specifically, a ControlNet preprocessor for OpenPose within ComfyUI's ControlNet Auxiliary Preprocessors, doesn't support the CUDA version installed on your machine." I tried for 4 hours to fix it; ChatGPT couldn't help, and neither could anyone on the internet... :(
I can't either. Try using OpenposePreprocessor instead.
@@JustinCiriello Yes, this is working :)
Thank you for providing the additional information.
@@JustinCiriello Thanks, I had the same issue
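One hedged suggestion for the CUDA-mismatch theory: reinstalling the GPU build of onnxruntime inside the Python environment ComfyUI uses sometimes resolves it, e.g.

    pip uninstall onnxruntime onnxruntime-gpu
    pip install onnxruntime-gpu

If that doesn't work, the CPU-only onnxruntime package, or the OpenPose preprocessor suggested above, keeps the workflow usable at the cost of some speed.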
So it appears that the Apply SD3 node has been renamed to Apply With VAE?
It is still SD3, as I checked.
@@goshniiAI still can't get it to come up on mine, but "apply" and "apply with vae" are the exact same nodes it looks like. At least, I can't see a difference
Thank you for pointing that out; it looks like the "Apply SD3" node has been renamed to "Apply ControlNet With VAE" in the latest updates.
@@fungus98 Yeah, you are right, and thank you for sharing your observation
but what about non-human characters?
Animals?
For animals, you'll need the ControlNet animal pose model, but for now I'm not sure it is currently available for Flux.
@@goshniiAI How can I use a custom skeleton?
I have a game character, like a Pokémon.
Does anyone know how to fix this problem?
Failed to restore node: Ultimate SD Upscale
Please remove and re-add it.
It seems there might be a mismatch in the workflow. Try deleting the node and adding it back from scratch. If that doesn’t work, just make sure you have the latest version of the node installed.
@@goshniiAI Yes, that's it, but I don't know which node to delete.. How do I know which node to delete?
There is no ControlNetApply SD3 option.
The ControlNetApply SD3 is a core node in ComfyUI. Ensure ComfyUI is updated so that it becomes available.
Please do the same by updating ComfyUI.
@@goshniiAI it's not there even after updates
@HelloMeMeMeow Yeah the workflow is now available.
@@goshniiAI Your workflow has a ControlNetApply VAE, not the SD3 you have in yours. Or did you rename it?
Can this be done in forge ui?
Yeah, hopefully I'll make a tutorial video for that.
@@goshniiAI that'd be awesome! I need that badly.
Can I add a Load LoRA node?
Yeah, that can be done. I explained how to do it in this link here. th-cam.com/video/HuDU4DlZid8/w-d-xo.htmlsi=gC-go2q4ylLSm6Or
Is this free?
Yes, you are welcome to use the link in the description.
Very nice work ! thanks a lot man. I know it takes a lot of time to make videos like this, but is there any chance you could make a video with a workflow like this one but with flux ofc:
th-cam.com/video/849xBkgpF3E/w-d-xo.htmlsi=GZwbPr4nuI8dvvyn
That would be amazing!!!
🙏
Hi there, I appreciate your suggestion and the reference link. I will consider it.
Hi, it's stuck on the Load Upscale Model node. I believe I don't have "4x-UltraSharp.pth". How do I get that, please?
The Upscale models can be downloaded through the Manager, or you can watch the video link here to guide you th-cam.com/video/PPPQ1SANScM/w-d-xo.htmlsi=M-fMMvE6-kEzr5u8
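If you download 4x-UltraSharp.pth manually instead, it belongs with the other upscalers here (assuming a standard install):

    ComfyUI/models/upscale_models/4x-UltraSharp.pth

After copying it in, refresh or restart ComfyUI so the Load Upscale Model node can see it.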
Can anyone tell me the use case of these character images?
Awesome question! Just picture game development, animation, or storyboarding. When you have consistent images from different angles, it makes sure your character looks the same from any perspective. This makes it easier to animate, storyboard, or even print in 3D. It's also super helpful for storybooks or visualizing characters in dynamic scenes. I hope that gives some inspiration!
@@goshniiAI Is it possible to make 3D models with AI from these images?
@@hasstv9393 Absolutely! There are good AI tools for converting 2D concepts to 3D.
If you're looking for AI-powered choices, you can use 3D A.I. Studio, Meshy, Rodin, Tripo 3D, or Genie by Luma Labs to produce 3D models directly from images, while platforms like Ready Player Me allow you to build 3D avatars using an image input.
Perfect workflow! Could you send me your contact? We need to speak about some business work.
Thank you! Please send an email to this address: mylifeisgrander@protonmail.com.