Mastering ComfyUI: Creating Stunning Human Poses with ControlNet! - TUTORIAL
- Published Oct 4, 2024
- Join me in this tutorial as we dive deep into ControlNet, an AI model that revolutionizes the way we create human poses and compositions from reference images. Learn about the different ControlNet models, their applications, and how to use them effectively. Whether you're an artist, designer, or just curious about AI, this video will show you how to harness the power of ControlNet for stunning results. Let's explore the world of AI artistry together!
** Links from the Video Tutorial **
PoseMyArt: posemy.art
ComfyUI's ControlNet Auxiliary Preprocessors: github.com/Fan...
ControlNet Models: huggingface.co...
Thank you very much! I finally managed to clearly understand the basics of ControlNet. Everything works perfectly for me. I avoided downloading the workflow and instead rebuilt it step by step following your video! Fantastic!
I think it is important to understand the concept first, and you explain this very well. I used to watch a lot of other "how to" videos on this subject, but it's like watching a robot read a technical paper that only scientists would understand. Anyway, nice video 😊
Thanks! And yes, I hate that too. I mean, if you want to read me the part of the paper that explains the concept, that's fine, but the more technical parts are of absolutely no interest to most people... it's like I'm buying a car and the guy at the dealership starts reading me the engine's technical manual 🤣
So far you are my favourite AI tutorial guy.
Excellent tutorial, the Aux preprocessor information was very useful.
For anyone who gets an error that says "mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)": it means you're using something made for SD 1.5 with an XL model (checkpoint). If you look at the checkpoint's name, there should be the word "XL" in it. You just have to look for another model on Civitai or somewhere and use that.
I have tried different XL models and all of them gave me the same mat1 and mat2 error; the only one that works for me is epicrealism_naturalSinRC1VAE. Do you know what I am doing wrong? Thanks
@adriands8207 Yeah, you can't use models that include "XL" in the model name.
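As background on that error: a matrix product of shapes (m × k) and (k′ × n) is only defined when k == k′. SD 1.5 checkpoints produce text conditioning 768 features wide, while SDXL produces 2048, so mixing an SDXL checkpoint with an SD 1.5 ControlNet (or vice versa) fails exactly like this. A small illustration, with the shapes taken from the error message above:

```python
# Why "mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)":
# a matrix product (m x k) @ (k2 x n) is only defined when k == k2.
def can_matmul(a_shape, b_shape):
    """Return True if a matrix of shape a_shape can be multiplied by one of shape b_shape."""
    return a_shape[1] == b_shape[0]

# SDXL checkpoints emit text conditioning with 2048 features per token,
# while SD 1.5 layers expect 768 -- hence the mismatch.
sdxl_conditioning = (154, 2048)   # tokens x embedding width (SDXL)
sd15_layer_weight = (768, 320)    # an SD 1.5 layer expecting 768 inputs

print(can_matmul(sdxl_conditioning, sd15_layer_weight))  # False: 2048 != 768
print(can_matmul((154, 768), sd15_layer_weight))         # True: matched SD 1.5 pair
```

So the practical fix is what the comment says: keep checkpoint and ControlNet in the same family (both SD 1.5 or both SDXL).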
Wonderful! Thank goodness for you tutorials! More power to your channel!
Great video, concise explanation. Loved it.
Great vod mate!
Thanks bro for your work. Really concise and great videos
Thanks for the tutorial it came in handy
Nice and simple explanation. Thanks.
thanks for tutorial ! useful !
Great tutorial, thanks! Also, how did you get the straight lines instead of noodle connections?
Hi! Thanks, here I've explained how to set up the straight lines: th-cam.com/video/AjwfswzLmxU/w-d-xo.html
@DreamingAIChannel ty for including the timestamp as well 🙏
Hi, complete newbie here: I installed Auto1111 a week and a half ago and then took the leap and installed ComfyUI on Thursday. I've watched a few of your videos and I really like the way you take what look like really complicated workflows when I read up on them, break them down, and show what you need and how it all links up, thank you :) I was wondering if this could be used for img2img? I got it to make character pics for D&D, and it would be cool if I could use this to change poses but retain the character and style.
Absolutely! The simplest way to do img2img is Load Image -> VAE Encode -> KSampler with the denoise reduced to around 0.2 (you should test with a low cfg too) -> VAE Decode -> Save Image... but what you're asking is a bit more complex, since you'd need ControlNet to inject the pose while keeping the character and style, maybe with some LoRA. I could try to do it and maybe make a video about it!
@DreamingAIChannel thanks :) I'll give that a try and let you know how I get on.
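The img2img chain described in the reply above (Load Image -> VAE Encode -> KSampler at low denoise -> VAE Decode -> Save Image) can be sketched in ComfyUI's API workflow format. This is only a minimal sketch: the node ids, checkpoint name, prompts, seed, and filenames are placeholder assumptions you would replace with your own.

```python
import json

# ComfyUI API-format graph for a basic img2img pass.
# Links are written as ["source_node_id", output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_model.safetensors"}},      # placeholder
    "2": {"class_type": "CLIPTextEncode",                          # positive prompt
          "inputs": {"text": "a portrait, best quality", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "LoadImage", "inputs": {"image": "input.png"}},
    "5": {"class_type": "VAEEncode",                               # pixels -> latent
          "inputs": {"pixels": ["4", 0], "vae": ["1", 2]}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 20, "cfg": 4.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.2}},   # low denoise keeps the original look
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}

payload = json.dumps({"prompt": workflow})
# This payload can be POSTed to a running ComfyUI instance's /prompt endpoint,
# or you can simply recreate the same nodes and links in the graph editor.
```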
Why is there no OpenPose editor in Comfy? That's not great for image-to-image, for example when it fails to find the pose.
Great job.
Do you have any examples of workflow with 2 and or 3 adapters?
I want to use openpose but I notice for all the Control Net models on huggingface there are two of each: .pth and .yaml. What's the difference between them?
Does it matter whether you put the apply Controlnet before or after the prompt? Model-> ControlNet->Prompt or Model->Prompt->ControlNet?
I encountered this issue when trying to load and test the preprocessors:
"When loading the graph, the following node types were not found:
PiDiNetPreprocessor
ColorPreprocessor
CannyEdgePreprocessor
SAMPreprocessor
DWPreprocessor
BinaryPreprocessor....
Nodes that have failed to load will show as red on the graph."
How do I fix this?
Error occurred when executing ControlNetLoader:
Error while deserializing header: MetadataIncompleteBuffer
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\nodes.py", line 705, in load_controlnet
controlnet = comfy.controlnet.load_controlnet(controlnet_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 326, in load_controlnet
controlnet_data = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\python_embeded\Lib\site-packages\safetensors\torch.py", line 311, in load_file
with safe_open(filename, framework="pt", device=device) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
🤷♂
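The MetadataIncompleteBuffer error in the traceback above usually points to a truncated or corrupted .safetensors download. A minimal stdlib sketch that checks whether a file's header is complete, assuming the standard safetensors layout (an 8-byte little-endian header length, followed by that many bytes of JSON):

```python
import json
import struct

def safetensors_header_ok(path):
    """Check that a .safetensors file has a complete, parseable JSON header.

    A truncated download typically fails at this stage, which is what
    ComfyUI surfaces as "MetadataIncompleteBuffer".
    """
    try:
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))
            header = f.read(header_len)
            if len(header) != header_len:
                return False  # file ends before the header does
            json.loads(header)
            return True
    except (OSError, struct.error, ValueError):
        return False
```

If this returns False for your ControlNet model file, re-downloading it is usually the fix.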
How are those wires straight? Mine are tangled like spaghetti
xD, here I've explained how to set up the straight lines: th-cam.com/video/AjwfswzLmxU/w-d-xo.html
I'm confused between Automatic1111 and ComfyUI
Stick with ComfyUI. It's hard and confusing initially, but you always have control. A1111 is way behind ComfyUI. Save this workflow as a group so you can always recall it when you need it.
Make another one for Inpainting, Image 2 Image and Upscaling.
@Bikini_Beats True, it's confusing, but fun when it works. Sadly for me, on a 1660 Ti, my Python crashed 😅
Hey, I just added the .yaml and .pth files for OpenPose, but when I'm in ComfyUI loading the ControlNet, the .safetensors doesn't appear; the .pth file appears instead, but it gives an error when I try using it.
Uhm, maybe you put the safetensors file in the wrong folder
Prompt outputs failed validation
ControlNetLoader:
- Required input is missing: control_net_name
what did i do wrong...?!
Uhm, maybe you didn't connect some input, it's a pretty generic error... don't you have some node highlighted in red/pink?
How do I install ControlNet in ComfyUI?
How do I change the pose of an existing image instead of creating a new image? Pls help
I know this is possible in A1111 with a reference image; I don't know if it is possible with ControlNet in ComfyUI, I would have to try!
would be cool if you add the template
Unfortunately I wasn't successful following this tutorial, so somewhere I must have made a mistake. After building the workflow and getting my pose base image, I hit Queue Prompt. Once the workflow reaches the KSampler I get an error message: Error occurred when executing KSampler:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
File "Z:\AI Images\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
and then many more errors after that, which mean nothing to me. If you have any ideas what might fix this, please let me know.
Well, is it possible that the width and height of the image in the Load Image node and the width and height of the Empty Latent are different? They should be the same, as far as I know.
Thanks for your response. I loaded a 512x512 image and set the latent to the same size, which did indeed remove the first line of the error message, but not all the subsequent lines. I guess I'll have to keep searching for a solution, but if you think of anything, let me know. Cheers. @DreamingAIChannel
I had the same error. Not all checkpoints work with OpenPose. I was using Juggernaut and got an error at KSampler. I changed the checkpoint to an anime one and it worked.
Thanks for that! Would you mind telling us which anime checkpoint you used exactly? I'll download it and give it a try. @sebastianvalenzuela1652
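A rough sanity check for the size advice in this thread: Stable Diffusion VAEs downscale by a factor of 8, so the Empty Latent Image should use the same pixel size as the loaded image, and both dimensions should be divisible by 8. A minimal sketch (the factor of 8 is the standard SD/SDXL VAE stride, assumed here):

```python
# Standard Stable Diffusion VAE spatial downscale factor.
VAE_FACTOR = 8

def latent_size(width, height):
    """Spatial size of the latent tensor for a given pixel size."""
    if width % VAE_FACTOR or height % VAE_FACTOR:
        raise ValueError("image dimensions should be divisible by 8")
    return width // VAE_FACTOR, height // VAE_FACTOR

print(latent_size(512, 512))   # (64, 64)
print(latent_size(1024, 768))  # (128, 96)
```

If the Load Image size and the Empty Latent size disagree, the two latents flowing into the sampler won't line up, which is one common source of shape errors like the one above.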
how do you get the wires to be at sharp 90's?
Hi! here i've explained how to put the straight lines: th-cam.com/video/AjwfswzLmxU/w-d-xo.html
keep dreaming.?...
thats what she said :(
🤣