If you are just starting ComfyUI, WATCH THIS VIDEO!
This answered so many questions. I've been dragging my feet for weeks and this solved so many problems.
Thanks so much!
Bro, I only learnt about Stable Diffusion a couple of days ago and came across your tutorial. It's just other-worldly stuff, what you're doing. I'll forever be grateful to you for your efforts. I tried several times in vain after watching this tutorial, but then realized I wasn't using the OpenPose model. Once I did that, the output image that came before me almost took my breath away. Outrageously good, and thanks from the bottom of my heart. I can't thank you enough for this video and the references ❤
Glad you like the things 😊 It’s amazing what you can make with Comfy!
@@NerdyRodent Thanks for the response ❤️ I even tried making a workflow of my own in ComfyUI to get face expression from a reference image and apply to any character. I used MediaPipe FaceMeshProcessor but it isn't really working out😅. Too much to learn I guess before I start making workflows. Do you have a video for the same by any chance so I can look up and get some insight on the facial expression aspect?
I spent weeks in search of such techniques. I'm fortunate to have found it here. Thank you very much.
I am getting an error that is actually driving me nuts:
Error occurred when executing IPAdapter:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
😮This is the biggest incentive to install that spaghetti interface.
😂
Omnomnom spaghetti!
Not! That interface looks even worse than Automatic1111
Ehem, it only took me 6.78 hours to install this. All the custom nodes were about as fun to download and install as a root canal treatment. When I first loaded it, my screen was as red as the bridge of the NCC-1701 under red alert. Still not working optimally.
@artisans8521 Ya, got it to function and it's just not great at recapturing the essence of the face. I feel I'm probably still doing something wrong, ugh 😑
Can you please make a Tutorial how we can do this in Automatic 1111? 🙏
Amazing! Will try for sure!
The Nerdy Rodent is becoming the ComfyUI Workflow master of the Internet!
Lol. Just playing 😉
Can we get an updated version of this that uses the new IP Adapter Advanced node, since the IPAApply node is deprecated? I can't figure out how to get the Advanced node to work in this workflow. I'd also appreciate explicit links to the models that must be used together for IPA and clip vision. The troubleshooting page for IPAAdvanced is not clear enough to be helpful.
I’ve swapped the node from the old new one to the new, new one 😉 Direct model links are in the “description” column so all ready to go!
@@NerdyRodent I appreciate the swift reply! However, I think I forgot to mention that I'm using SDXL. The SDXL reposer image in your github repo still produces a workflow with the old node. It shows up bright red and labeled "undefined" - I have the latest versions of all custom nodes. There are also no links describing any models for the SDXL reposer. Are you referring exclusively to the SD1.5 version of the reposer workflow?
Yup, I’m referring to the sd 1.5 version this video covers. Same as I did in Reposer2, any workflow with ip adapter apply simply needs it replaced!
Woah, I started with the first video and got that rodent druid to work. But now I am trying to make those poser workflows work, and somehow I end up getting errors like this:
"Error occurred when executing IPAdapterApply:
'NoneType' object has no attribute 'patcher'"
I downloaded at least 5 different IP Adapter things, some by hand, some by the ComfyUI Manager; some are .bin, some are .safetensors... I am so confused by now and I feel like I need an in-between video that explains all the different kinds of models, checkpoints, IPAdapters, and what these errors even mean. Where can I get some help?
Same deal. Did you ever figure it out? On a deadline and getting desperate.
Do you have any suggestions on fixing the IP Adapter not being found?
Thanks for this! Works great in 1.5, but I'm having the damnedest time figuring out what is dependent on 1.5. When I load it up with SDXL, the first KSampler throws an "Error occurred when executing KSampler: The size of tensor a (1024) must match the size of tensor b (1280) at non-singleton dimension 1". Anyone know the cause?
you are like a magician..thanks for everything
It's my pleasure. Thank you for watching!
The workflow looks robust. I've tried implementing it, but "KSample Pre scale" keeps giving this error: "Error occurred when executing KSamplerAdvanced: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead". How do I resolve this?
Thanks for creating this awesome tutorial, but after installing all the custom nodes step by step I have some problems; I'd appreciate your help.
When I first open this workflow file, the browser window pops up this information:
1. When loading the graph, the following node types were not found:
CR Batch Process Switch
Nodes that have failed to load will show as red on the graph.
2. After I click the "Queue Prompt" button in the browser, a message pops up: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
3. And my terminal shows this error:
File "/home/young/Downloads/ComfyUI/ComfyUI/execution.py", line 598, in validate_prompt
class_ = nodes.NODE_CLASS_MAPPINGS[prompt[x]['class_type']]
KeyError: 'class_type'
I get an error trying to use the workflow, something about a size mismatch in IPAdapter. Any ideas what's up with that?
Make sure to follow the instructional video and also update everything
Same thing happened to me. I'm still working on a solution. I think it has something to do with the SD 1.5 face model. This is my error: Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Did you ever get this solved?
I figured it out. It's the clip vision model. If you used the Manager to download it, it places the model.safetensors in the base of the clip_vision folder.
@@knoughlbawdy This is the node I can't figure out to fix, I need to put what model where to fix it?
@@darkestmagi size mismatch is caused by the clipvision model. You can see it in the video at 6:43.
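For anyone else chasing this size mismatch: a minimal sketch for auditing what is actually sitting in the clip vision folder, assuming a default ComfyUI install layout (the path and the ViT-H expectation are assumptions to verify against the video, not confirmed specifics):

```python
from pathlib import Path

# Assumed default layout; adjust if ComfyUI lives elsewhere.
clip_vision_dir = Path("ComfyUI/models/clip_vision")

if clip_vision_dir.is_dir():
    # The SD1.5 IPAdapter face models expect the ViT-H image encoder; a stray
    # "model.safetensors" dumped in the folder root is the usual culprit, and
    # a file of a very different size is a hint the wrong encoder was fetched.
    for f in sorted(clip_vision_dir.rglob("*.safetensors")):
        print(f"{f.relative_to(clip_vision_dir)}  {f.stat().st_size // 2**20} MB")
else:
    print(f"No clip vision folder found at {clip_vision_dir}")
```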
Dear Mr. Rodent, IP Adapter has been updated and the workflow does not work anymore. Are you planning to update this one? I am still a noob and now need to figure it out 🙂
Yup! Reposer2 was updated a while back already :)
How do I load the workflow? Where shall I find the .json file in order to load your workflow? Please tell me how I shall load your exact workflow into my ComfyUI
Check the video description for info!
@@NerdyRodent So basically I have to load the image you provided in your GitHub?
I use the comfyui extension for A1111, and it keeps everything in one place, super practical for that.
Wait, what? Can you use comfyUI inside of Automatic1111?? I'm confused O_O
SPEAK PERSON!!!! How... WHERE...
I don't think I can post a link here apparently. Last time I tried my comment got deleted. Search "model surge a1111", or "sd-webui-comfyui"
You've changed my perspective on everything.
I'm glad I am researching so much before diving in.
What generative model are you using?
1.5? SDXL? SDXL-Turbo?
Any thoughts on what you would recommend for someone that is just starting out learning?
Is it any good?
What do I need to start doing this? I'd want to start a comic strip using this software but have no idea where to start. Do I download Stable Diffusion to my laptop? If so, how do I even do that? Is Reposer like a preset? So many questions 😔
Can you please make a tutorial (SDXL and ForgeUI for us with rusty old machines) to show how to combine interaction between multiple characters via img2img and controlnet?
For example: two characters hugging, shaking hands, kissing, putting one head on the other's shoulder, or whatever interaction we can CONTROL via a "duo" OpenPose? I have no idea how to do such a thing, but I'm talking about img2img specifically.
I hope that you will consider that idea, thanks ahead 🙏
Seeing all these nodes and things, of which I know nothing, I wonder how insight ai works. I would imagine it's a similar process but with different parameters and models, but just visualizing it in the way you showed has sparked my curiosity about how these AI things work. Great video, thanks
Excellent! Always wanted characters to be consistent, and now it's possible. Thank you :)
Me too! Is there a setting somewhere for the number of images you want generated in a batch, or am I just missing something?
I have downloaded models, updated ComfyUI to the latest, ran "Install Missing Nodes", and yet after six hours of trying to fix this I still get "Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'". I've Googled and can't seem to find any hint of a solution. ???
I'm getting the same error :/ any luck?
Make sure to use the SD1.5 clip vision models indicated!
I am suffering from this error as well. I have 1.5 models down the line as well as ipadapter models from huggingface. The issue appears to be at the clipvision model. I attempted to use both, the safetensors file and the .bin. I noticed your github links to the huggingface space for those models. Here is a .json of the flow as it appears on my machine. Maybe you see something I dont? @@NerdyRodent
drive.google.com/file/d/1_3P4Tf_MX0IbejAWIAGuzCzRiwqCMZpd/view?usp=drive_link
Manual install seems to have done the trick. No more 'processor' None issues. I'm now bouncing around between installing the right version of torch and xformers. Updating now and.... HOLY BALLS BATMAN, it works! Enjoy the sub.
Installed ComfyUI using the manual method and this was fixed. The portable version doesn't have full compatibility, it seems. @@NerdyRodent
Can you update this one maybe? Some of the nodes have changed, and nobody has done something similar to this yet :)
ComfyUI is more and more becoming the standard, it seems
It’s fun to experiment with stuff, for sure!
Great stuff! After obtaining all nodes and models, I got the following error: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]) Thanks!
Never seen that. Maybe not an SD 1.5 checkpoint?
Unfortunately, that's not the case. I tried several checkpoints based on SD 1.5
Ah, that extra info helps! Make sure to use the right checkpoint in there.
Same error. Doesn't work
@@alexs1681 For my part, I initially loaded the wrong IPAdapter and got this error. Strictly follow Rodent's instructions and make sure you load the right resources. All the necessary files for this workflow can be found at the bottom of the page in the link in the description.
This seems like it would be incredible for keeping videos consistent
Seems like an idea 😉
Yeah really. If a script could keep feeding each frame from the reference video back in, it should be amazing. It might not be able to keep the backgrounds consistent though.
@@NerdyRodent Can you try creating a video?? ❤❤
Excellent work and tutorial!
Can anyone help me understand why my image isn't loading the face? I have everything set up exactly as instructed, but when I generate an image it only does a headless body :(
Hey! Awesome tutorial. I've got a problem though: I get errors on the DWPose Estimator. I have found that, without fail, it always turns out to be a matter of changing the models. What .onnx files should be in the bbox_detector and pose_estimator fields?
Hi! I'm trying to repeat everything according to the guide, with the same settings, but the program still makes results that vary a lot from each other. The same character does not come out. I've been struggling with this problem for a year now, but nothing works.
How do I install ControlNet for ComfyUI, please?
I'm still confused about which models are required just to run Reposer. Please help out, it's kind of urgent 😅😅
Hi! Your videos seem to show that you separated your checkpoints into subfolders within the comfyui structure. I can't figure out how to do this. It would be great to have sd15 and sdxl subfolders for checkpoints, loras and embeddings. If you haven't covered this already can you explain how to do this? If you already have, just a point to the video where you explain it would be great, too!
You can open the graphical file manager for your operating system, and then from the context menu create a new directory
@@NerdyRodent I must have labeled them poorly in the past. This time it worked!
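For reference, a minimal sketch of that folder setup, assuming the stock models/ location (the sd15/sdxl names are just examples, not anything the video prescribes). ComfyUI scans its model folders recursively, and subfolder names show up as prefixes in the model dropdowns:

```python
from pathlib import Path

# Assumed default install location; adjust the base path to match yours.
base = Path("ComfyUI/models")

# Create per-family subfolders; move the matching files in afterwards and
# the dropdowns will list them as e.g. "sd15/realisticVision.safetensors".
for kind in ("checkpoints", "loras", "embeddings"):
    for family in ("sd15", "sdxl"):
        (base / kind / family).mkdir(parents=True, exist_ok=True)
        print("ready:", base / kind / family)
```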
Any chance on doing a video on Deep Floyd? Not much out there.
Sure, here you go - Deep Floyd - AI Generated Text In Images!?
th-cam.com/video/139f-gbj9ko/w-d-xo.html
Hi @NerdyRodent, I haven't found the JSON file for the workflow; the only thing provided is a PNG image, which is a little bit confusing for me when I am recreating your workflow. Can you please provide the working JSON file for this workflow? I really want to try this....
Thank you for creating such amazing video tutorials.
You can load the PNG workflow, and then click save if you want it in JSON format instead!
@@NerdyRodent Thank you, I am new to ComfyUI, still learning.
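On the PNG-vs-JSON point: ComfyUI embeds the workflow as a JSON string in the PNG's metadata, which is why dragging the image in works at all. A small sketch for extracting it without opening the UI (the file names here are assumptions):

```python
import json
from PIL import Image  # pip install pillow

# ComfyUI saves the graph in the PNG text chunk under the "workflow" key.
img = Image.open("reposer.png")
workflow = img.info.get("workflow")
if workflow is None:
    raise SystemExit("No embedded workflow found in this PNG")

with open("reposer_workflow.json", "w") as f:
    json.dump(json.loads(workflow), f, indent=2)
print("Wrote reposer_workflow.json")
```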
Finally we can create comics! Wow!❤
Hey, awesome video, tysm! Is there any shortcut to find all the checkpoints and safetensors to test this, or is it highly dependent on use case, so I have to manually download and import them?
WOW, great work Nerdy Rodent! This really seems like a useful comfyui workflow for storytelling... can't wait to try it out. Thanks for everything you do! 🤓🐀
Have fun!
Is every image I generate automatically stored somewhere? Just so I know, so I can delete them after I generate a lot of images.
Btw, I've had a lot of problems installing the requirements to run the workflow, but it was all my fault and I managed to make it work, except the IP-Adapter part. It looks like this part needs to be updated in your tutorial; everything else is good, thanks a lot!
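On the storage question: in a stock install, Save Image nodes write to the output folder inside the ComfyUI directory, so that is the place to clean out. A quick sketch for checking how much space generations are using, assuming the default path:

```python
from pathlib import Path

# Default Save Image destination in a stock ComfyUI install.
out = Path("ComfyUI/output")

if out.is_dir():
    total_mb = sum(f.stat().st_size for f in out.rglob("*.png")) / 2**20
    print(f"{total_mb:.1f} MB of generated PNGs in {out}")
else:
    print(f"No output folder found at {out}")
```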
Hi, first of all, thank you for making this workflow and video. When I load the Reposer2 workflow, the prompt and pose are working; however, the IPAdapter isn't. My settings are the same as yours except for the Use Everywhere (UE) nodes: title_regex, input_regex, group_regex. Could you please explain what to write in these three regexes?
IPAdapter is model2
Would it be able to generate consistent sprite sheets for game animation like running and fighting? Or in addition to IP adapter and openpose it's better to also train a lora? I'm thinking of 1 sprite 1 image, not all sheet at once.
Cool stuff, thanks Nerdy.
😉
Holy shit bro, I would read the comic book you had going in the initial shots.
I think she may have to face… Cthulhu!
Unfortunately I'm getting this unholy mess when I drag the reposer.png into a clean ComfyUI screen: "Loading aborted due to error reloading workflow data". "This may be due to the following script: /extensions/core/widgetInputs.js". I ran ComfyUI Manager, updated everything, but it didn't work.
My only guess would be that you’re using an old version of ComfyUI
@@NerdyRodent Aaaaaaand.... you were absolutely right of course. I never suspected that would be it as I updated it only last week. Thank you!
Lol. I update mine every few hours for reasons… 😆
I've the exact same error but I'm all updated via the manager.
I'm on a Mac M2. Any guess?
I had the exact same error on a fresh ComfyUI installed today, and I managed to fix it. Try it at your own risk.
1. I installed the node IPAdapter-ComfyUI using ComfyUI Manager. Restarted ComfyUI.
2. After this I got another error when loading the workflow, but now ComfyUI Manager would enable "Install Missing Custom Nodes" to install the remaining three nodes, which was not possible initially. Restarted ComfyUI. After this the workflow loaded without errors.
3. Install/copy a bunch of required models for IPAdapter and Clip Vision to run the workflow. (Read the notes on IPAdapter-ComfyUI and watch the video).
Beautiful workflow I must say, very cool!
Amazing workflow! I'm trying to achieve this result with SDXL but the quality is not even close to SD 1.5. Do you know if it has to do with the specific ipadapters for SDXL?
Nothing yet with face for SDXL that I’m aware of. Do let me know if you find anything!
Can you use it when you have more than one character?
Great video!! Please provide Json file along with image to make it easier during import process. Sometimes images don’t work so there is nothing better than json file. Many thanks in advance amigo!!!
I get an error when dragging the png to Comfy UI. Are there specific custom nodes that need to be installed? Error is "TypeError: Cannot read properties of undefined (reading '1')"
Same here, no idea why?
Make sure to update Comfy!
Thank you! How do we update when using the standalone NVIDIA build that we just unzip? Is that a GIT method or do I just have to redownload the standalone and overwrite?@@NerdyRodent
There is an update folder in the ComfyUI folder with a .bat file, I believe @@alex_jasper
I also got the same error, updated to the latest version. Replaced vision models of all kinds.
Hi there, thanks a lot for the video. I'm completely new to this and I'm still finding out how everything works. I'm getting an error trying to import the Allor Plugin; I'm using a Mac and wanted to know if it has something to do with that. Hope you can help me out.
Absolutely Amazing... Just what I've been looking for. Thank you so much!!!
Glad it was helpful!
Can you do an Automatic1111 version?
I would love to learn how to make this workflow step by step; I just don't wanna copy-paste
If you prefer to make workflows (rather than have them ready made for you), then check the links in the video description!
Wish I knew how to set this up step by step, I mean the base install. Does it work on Mac?
How to install ComfyUI:
th-cam.com/video/2r3uM_b3zA8/w-d-xo.html - mac is practically the same as Linux 😀
Mate, this looks absolutely amazing!!! Can't wait to try it.. One question....
Is it able to copy clothing as well? If I have a character I want to remain consistent, can I use that full-body character and then have it come out in a new pose, or is it just for faces?
This one is consistent faces, though it will use clothing influences from the face image also. For clothing swaps, see Stable Diffusion - Face + Pose + Clothing - NO training required!
th-cam.com/video/ZcCfwTkYSz8/w-d-xo.html
Truly amazing! Thanks for the fast reply.. However, I'm stuck with an error: NNLatentUpscale missing, and it doesn't seem to be working.. Is there a fix for this that I'm missing? @@NerdyRodent
Thank you, this is a game changer. Roop struggles with non-realistic faces, so this workflow is an awesome addition!
Glad you like it!
@@NerdyRodent Is there a setting for output numbers or batch size? I see the image-or-batch node, but that's it
Great work, Nerdy! Does it run on the Krita SD plugin?
Any help with "Error occurred when executing SAMDetectorSegmented:
'SAM' object has no attribute 'sam_wrapper'" in the SAMDetector (node 308)?
Best to check the GitHub issues
@@NerdyRodent Thank you for the reply. I Googled the error and couldn't find anyone with the same error. I uninstalled and reinstalled onnxruntime, used the .pt models instead of .onnx in the DWPoseEstimator (node 238), tried other SAM Loaders and SAM detectors (idk what they do), and updated ComfyUI (portable) via the manager and the .bat (then ran the onnxruntime remove/install cycle again). Nothing.
@@TaroFields yup, I wouldn’t know how to make it error like that either! Maybe try with a fresh conda environment?
@@NerdyRodent I've been suffering with these errors for 3 days and I can't fix it in any way. Maybe someone knows how to do this at the current time, and where to get the right models and nodes? That would be incredibly cool!
Nerdy, a question: do I need to have IPAdapter for this to work, given that I have IPAdapter Plus?
So long as you use the suggested models, any IP adapter will do!
@@NerdyRodent OK, then here is my problem: the IP model loader goes to either null or undefined, and when I left-click on it to load a model it does not allow me to. Any thoughts?
Thx 4 Your hard Work. This is amazing =D
Glad you enjoy it!
No matter what I try it always comes back to missing certain models or nodes. Is there a place where I can look this up? from beginning to end.
You can drop me a dm on www.patreon.com/NerdyRodent if you need more help!
If I add a skinny person as the input image and set a fat person for the ControlNet pose, will the output be a fat person, or will it just detect and adjust the pose?
For some reason I dragged in the image but my workflow looks way different than yours, any idea why that would be?? Did they update it?
I made a few versions. This is version 1
I think I'm so close to getting this working, but I keep getting this: Error occurred when executing ControlNetApplyAdvanced: 'NoneType' object has no attribute 'copy'. Is there any way to fix this?
Hello Nerdy,
Many greetings from Berlin, Germany. Thank you very much for your great work, which has helped me a lot with the realisation of my ideas. Do you see a possibility to create two characters, for example in the Reposer? You would then have one pose, but with two people who are then replaced?
I'm new, so kinda overwhelmed 😅
Hope you can give a preview of your setup for ComfyUI, also recommended models: which do you prefer and, more importantly, how much space does it all take for you? Can I run it from a hard drive? Sadly mine's not an SSD.
Hope to hear from you soon 🤞
I love running sd from my ssd as it helps reduce that initial load time for the models 😊 Overall, including controlnets and other models, it’s about 60GB
@@NerdyRodent Really appreciate the reply! Sadly mine isn't an SSD, so will it perform worse, or will it be manageable?
Also, I'm a bit confused about the setup for "Reposer". What's the basic setup for a noob like me, if you don't mind? Sorry for the bother, though
The basic setup is to first install comfy, then download the models and setup like in the video. If you’re new to Comfy, I’d suggest going through all the videos in my ComfyUI playlist as they start basic and work up to workflows like this
@@NerdyRodent thank you 👍
@@NerdyRodent You seem extremely responsive and I appreciate that.
Would you recommend SDXL, SD 1.5, or SD-Turbo?
What generative model do you think works best?
Is there a way to get that done in Forge?
Should be possible once forge gets custom workflows
Hello, this is insanely cool! I was trying this out but I keep running into an error.
I also get an error when I load the node group image that says it can't find the Anything Everywhere and Prompts Everywhere node types. And when I try to generate an image it gives a syntax error: unexpected non-whitespace character after JSON at position 4 (line 1 column 5).
I get the exact same error! "syntax error: unexpected non-whitespace character after JSON at position 4 (line 1 column 5)", and I can't find the 'Anything Everywhere' and 'Prompts Everywhere' node types. Anyone have the solution?
@@MovieBr0 I fixed it. The problem was that all the nodes still had Mr. Rodent's SD models selected. You just have to manually go to each of the nodes and select the models you have for ControlNet, the SD model, etc. Also use the ComfyUI Manager to get the Anything Everywhere and Prompts Everywhere custom nodes as well! :)
@@karthiknambiar1544 I'm going to try replacing all the models, but I can't find the Anything Everywhere and Prompts Everywhere custom nodes in the Manager; I've looked everywhere
@@MovieBr0 Install all the custom nodes from the screenshots at the bottom of the GitHub page linked in the description. Now I'm getting "'NoneType' object has no attribute 'shape'" even with everything apparently set up.
What is this error I'm getting? TypeError: widget[GET_CONFIG] is not a function
First guess would be an old version of ComfyUI?
I'm getting the same error and the manager is telling me I've got the latest ComfyUI version
@@ak_rd444 possibly check the ComfyUI issues. This is just a workflow so zero code…
Is there no tutorial for correcting all the node errors? I don't know what I'm doing or what resources I need to get this working, the video just skips over this.
You can drop me a dm at www.patreon.com/NerdyRodent if you’re getting errors on your computer!
@@NerdyRodent How do I do that? There's no visible option to DM you on Patreon.
Do you know how I can find (the name of) and uninstall the old version of NNLatentUpscale? This workflow refuses to work unless I do.
Excellent as always, but how are we determining the size of the original image before the upscale? Can't find any resolution inputs?
It's 0.5 megapixels
Yep, found that, thanks, but can we not determine the width/height of the first image generated from the face/pose inputs? @@NerdyRodent
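For anyone wondering what "0.5 megapixels" means in actual dimensions, a hypothetical helper (the function and the 64-pixel snapping are illustrative assumptions, not part of the workflow):

```python
import math

def dims_for_megapixels(mp: float, aspect: float, step: int = 64) -> tuple[int, int]:
    """Turn a megapixel budget and an aspect ratio (width/height) into a
    width/height pair snapped to the 64-px grid SD models are happiest with."""
    h = math.sqrt(mp * 1_000_000 / aspect)
    w = h * aspect
    snap = lambda x: max(step, round(x / step) * step)
    return snap(w), snap(h)

print(dims_for_megapixels(0.5, 2 / 3))  # portrait 2:3 -> roughly (576, 896)
```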
I'm sure this is an easy answer, but I dragged in your handy workflow and installed all the missing nodes, yet when I engage the prompt I get an error "SyntaxError: Unexpected non-whitespace character after JSON" and a column/line designation. I would deduce that means a typo, but in that fascinatingly complex workflow I don't know what's a column vs. a line, etc. Any help would be great, but I'll keep tugging at it
Did you ever figure it out? Same thing happening for me
Hi sir, could you have another look at the SDXL version of this?
I'm getting an issue with the SDXL version of this workflow (the SDXL version of Reposer using the SDXL "IPAdapter Plus Face" model):
ERROR: IPAdapterApply: 'NoneType' object has no attribute 'encode_image'
I have a feeling it is an issue with the model used in the IPAdapter model loader, maybe the Load CLIP Vision too
After changing the model and clipvision to 'ip-adapter-plus-face_sdxl_vit-h' and 'CLIP-ViT-H-14-laion2B-s32B-b79K'
I now get:
Error occurred when executing KSamplerAdvanced:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
It’s best to use the suggested models - I can’t say if it will work with any others. You can drop me a dm on www.patreon.com/NerdyRodent for more info!
@@NerdyRodent Hi, I managed to fix it. I tried to use the models in the workflow but it didn't work, so I downloaded the models named in my comment above from the IPAdapter GitHub, and then fixed the next error I got by using the "--flat 16" thing in the executable (I'm not at my PC, I don't remember the name), as my 1080 Ti processes things in a different way to newer cards, I guess
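The flag half-remembered above is most likely ComfyUI's --force-fp16 launch option (i.e. starting with python main.py --force-fp16 from a source install), which forces half precision throughout and is the usual fix for the Half/float dtype mismatch on pre-RTX cards like the 1080 Ti. That's an educated guess from the error, not something confirmed in the thread.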
Great! Where is your Reposer workflow JSON file?
Links are in the video description 😀
Does this wf still hold up with all the recent changes?
Yup! The plus face one is still the best that isn't for research-only use :)
Is this to be installed locally on our home system, or accessed via a cloud/matrix? I'm sorry, but I'm a total beginner.
You can run ComfyUI anywhere, but best run at home!
I installed ComfyUI in a virtual env by cloning the repo and set all directory paths to the A1111 ControlNet models, checkpoints, etc. If I drag and drop this workflow I get the error: [When loading the graph, the following node types were not found:
CR Batch Process Switch
DWPreprocessor ]
I have also installed ComfyUI's ControlNet Auxiliary Preprocessors and DWPreprocessor Provider (SEGS) //Inspire shows in the list. How do I install the missing preprocessor and CR Batch Process Switch?
ComfyUI Manager is your friend!
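For reference, and as an assumption worth double-checking in Manager rather than a confirmed detail from the video: the "CR" prefix on CR Batch Process Switch points to the ComfyUI_Comfyroll_CustomNodes pack, and DWPreprocessor ships with the comfyui_controlnet_aux pack, so searching Manager for "Comfyroll" and "controlnet aux" should surface both.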
Thanks for your inspiration and workflow. I have tested it to create an animation character, and I am combining this workflow with AnimateDiff to play around. :)
Sounds great!
Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'
Can someone please help me with this error?
Let's say I have an image of a specific chair and want SD to be able to create this chair from a specific angle. Can I do that with this type of workflow?
I’ve only focused on faces here, but it may indeed be somewhat possible using a similar approach!
The "Load IPAdapter SD1.5 Face" node appears in red for me and doesn't show up as a missing node. Could you help me? Regards!
You’ll need to click “install missing”
@@NerdyRodent Hi Nerdy, it doesn't show up as missing. I already installed everything that appeared as missing, but that node still doesn't appear. I think you changed its name, didn't you?
@@NerdyRodent "Install missing" shows nothing is missing, and half a dozen nodes are undefined with no names
Can you please tell us how to get a consistent outfit or clothes? How can I maintain the same outfit? Please, it would be really useful, thank you.
Stay tuned 😉
Hi!
Can you do this tutorial without the spaghetti-chaos UI?
I keep having issues getting the NNLatentUpscale node to load. I understand this is due to Efficiency Nodes no longer being supported. Has anyone found a workaround for this yet? I'm new to ComfyUI, so I'm having some difficulty troubleshooting it.
You can search for the node in Manager
@@NerdyRodent Even in Manager, no dice. I have uninstalled, reinstalled, and then updated, but still can't make a go of it. Getting desperate.
For some reason 2 nodes are missing and I can't find a way to download them: "Anything Everywhere" and "Prompts Everywhere"
Search for “everywhere” to find it in manager
Thank you @NerdyRodent, "Use Everywhere (UE Nodes)" fixed the issue :D
Do you have the workflow.json? I tried copying the visual by hand for learning, but I get loop errors, as there might be a bad node or something. I know that if I load the .json that I can use the manager to find the missing nodes.
You can drop me a dm via patreon for help!
@@NerdyRodent I have! Thanks!
Wanted to see more examples with the side-angle thingy
Well you've done it. I hope you're happy with yourself. I'm trying to figure out the spaghetti-hell that Comfy UI looks like to me! WELL PLAYED.
Most sincerely, well done and thank you.
Heh 😆 Yay! As long as something something, success is inevitable!
Is there a way to generate a pic from SD and then programmatically generate another pose for the character, using the seed or something? Not using the UI
Is there a reason why I am getting a solid yellow color?
It was the VAE model
Ok, as a newb this one was a lot more difficult! Still working the kinks out, have had some interesting crashes. Basically make it as far as the HD image, but end up with errors before the final upscale. The rig I'm working on is 9 years old, so wondering if errors could be related to using outdated hardware? Wondering about your hardware configuration vs. minimum requirements?
Yeah, 9 years old will likely run out of VRAM when going to higher resolutions
Why do people like ComfyUI? It's so messy and hard to follow. Comfy isn't the right word for the UI; more like MessyUI
@@billionaeris1183 I like Stable Diffusion. I try to learn ComfyUI, but I feel comfy with Stable Diffusion 😅
No bro, once you get the hang of it you will start to love it
I didn't like it at first, always waited for new stuff to come to automatic1111. Then new stuff stopped coming. So I dove in. Asked a lot of noob questions. Got frustrated a billion times when I wanted to do something that was simple in a1111. And finally it started to click. And when new tech drops, there's some incredible devs out here that put it together so rapidly that I haven't touched any other UI in almost a year now. It's comfy once you get comfy with it.
Thanks for the video. Could it be possible to combine this approach with something like "instant lora" (yt vid) to be able to maybe load multiple angles of one face ?
Thank you, Nerdy Rodent! You should turn on YouTube membership 👍🔥🔥
I’ll have to take a look at that 😆
@@NerdyRodent maybe we can have “Raton Laveur”, “Honey badger” badges 😜
@@banzai316 lol. Honey Badges 😆
It's wonderful, but I'm too dumb to use it xD. I just keep getting errors, and Manager won't update/download any of the OpenPose/clip vision models.
You can drop me a dm on patreon if you need more help!
Hello, thank you for the video. As I understand it, I can't write in the positive prompt things like . How can I add it to this workflow?
Check out my beginner video for the workflow basics - ComfyUI for first-time users! SDXL special
th-cam.com/video/2r3uM_b3zA8/w-d-xo.html
@@NerdyRodent thank you!
This is incredible
So is ComfyUI the interface I need for this?
Yup, this is a workflow for ComfyUI! You can drop me a dm on patreon if you need more help 😀
That character is beautiful!
Yes, she is, isn’t she 😃