Need help? Check out our Discord channel: bit.ly/3Wuy0af
I've added some solutions and tips, the community is also very helpful, so don't be shy to ask for help 😉
Thanks, it works for me with select_every_nth: 15 and 480-1080, but it is taking too long. My PC config is 20GB RAM, a Core i3, and Win 11. Let me know if there is any way to speed up the process. I want to create a 20-second video; can I upload the image segment in "Keyframe IPAdapter - Load Image" to speed things up?
It's taking too long to create videos, so I'm considering generating animated sequence images instead. I'll merge these sequence images using Premiere Pro and create the video myself.
@@bhabasankardagar5810 the process relies heavily on your GPU's VRAM
@@bhabasankardagar5810 that's a good workaround
@MDMZ
Can confirm as of 12/03 that following your tutorial steps works perfectly. I was not a ComfyUI user (InvokeAI), but I needed a solution that can work with video.
I will try to combine it with the new 4-step SDXL Lightning or JuggernautXL Lightning models. Seems a PERFECT fit for good quality vs speed IF it works.
Wow, this is a great tutorial. It's taking its sweet time on my PC LOL, but nonetheless it is actually working! I've seen so many vid-to-vid ComfyUI videos, and everyone is jumping from left to right, with no coherency and no explanation of what each model and node does. Thanks for being super clear about those things. You single-handedly just made this whole thing easy!
that's really great to hear. Thank you 🙏
Will it work on rtx 3060?
@@Aryannnnnn217 well mine is a 3060 but the 12 gig version, also I kinda improved a few bits on the workflow and it is actually really good now
@@randy2d mine is also the 12 gig version, but I just shifted to ComfyUI. In A1111 my 3060 couldn't do ControlNet and hires fix with SDXL models, so I'm wondering if this workflow will work on my system? Thanks for the reply.
@@Aryannnnnn217 I never ran videos at a higher resolution than 960x512, because with the upscaler I use I can just set the size I want to upscale to and then send it to the Video Combine node to export.
Fixes for current version - June 2024:
IPAdapter won't be found in the 'ComfyUI_IPAdapter_plus' folder => go to the 'ComfyUI\models' folder, add a folder named 'IPAdapter', and place the IPAdapter Plus models there. (Now the IPAdapter can be found.)
Next you'll still have two red nodes in the video reference and keyframe groups. Replace those with 'IPAdapter Advanced' nodes (double click to search for nodes), link the lines to these new nodes, then remove the old ones. Make sure all connections are made like on the broken nodes.
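If you prefer to script the folder part of the fix above instead of doing it by hand, here is a minimal Python sketch; the install path and file name are assumptions, so adjust them to your setup, then restart ComfyUI and hit refresh.

```python
# Minimal sketch of the folder fix above (paths and file name are assumptions - adjust to your install).
import shutil
from pathlib import Path

comfy_root = Path(r"C:\ComfyUI_windows_portable\ComfyUI")  # assumed install location
src = comfy_root / "custom_nodes" / "ComfyUI_IPAdapter_plus" / "models" / "ip-adapter-plus_sdxl_vit-h.safetensors"
dst_dir = comfy_root / "models" / "ipadapter"

dst_dir.mkdir(parents=True, exist_ok=True)      # create ComfyUI\models\ipadapter if missing
if src.exists():
    shutil.copy2(src, dst_dir / src.name)       # copy rather than move, so nothing else breaks
```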
thanks for sharing
still not working =/ I'm still trying
why dont you make another video?
Thanks bro
Yes, it works exactly as described, but IPAdapter Advanced has slightly different parameters to change, so I wonder how that affects the output.
Thanks for the video! Most creators forget to show which models they used and where to put them in the ComfyUI folder. This step-by-step video helped a lot.
glad it was helpful
I want to try this, thanks man, love your channel
🙏
One idea to improve the video background: try removing the background first, then apply a dedicated node only for background generation to avoid flickering. If you see flickering on hands, you can create a bounding box that stylizes only the hands and use any hand-detailer tools (a LoRA or a node).
great tips!
the future of art... downloading the newest hard-to-find files
Thanks for the tut, it was helpful. I don't know how I would have figured out all those steps.
Think of it as modern day gold mining, nothing more than just another scheme to get people to hand over their ideas for free.
Buckle up, creators! This tutorial featuring ComfyUI IPAdapter + HotshotXL is your ticket to a whole new dimension of video wizardry. Transform your content with the power of A.I., and let the magic unfold! 🌟🤖
Let me get this right. You take a video of someone moving around. Then you upload the video, paying money to use this service. Then you type in some prompts, and you get an animated character back? You can do this for free on your own computer, without the middleman, and without sharing your ideas. It's called motion capture.
Incredible man!! Thanks, I was waiting for this! I prefer Comfyui than deforum. you are the best!💪
Glad you like it!
Your tutorials are one of the best and even beginners can become almost like pros by seeing your videos 🙌🏻
Happy to hear that!
THX Man thats great TUT... 1 time just looked and all instaled and ewrithing works FINE..:) Super tut..
Great to hear!
Best lesson ever. It's a pleasure to listen you)
glad you liked the video
It's interesting, but without even paying much attention I can tell it takes a level of involvement comparable to traditional methods. Until so-called generative AI offers simplistic prompting, nothing changes. You'll end up having to pay experts to use these systems. I see no benefit to anyone, apart perhaps from those owning servers, sifting through endless input codes, searching for some kind of pay dirt. It's a hard ask. A.I. systems (a fad) will NOT replace traditional techniques.
@@handlenumber707
You didn't just write "you'll end up having to pay experts to use it" and "i see no benefit to anyone" in the same sentence 😅
@@MDMZ Wasn't the whole idea to avoid paying people to do things?
Excellent explanation! Kudos bro... You deserve millions of subscribers!
Thank you so much 😀
Thats what we call A Simple Tutorial, Well done man
hey, I have an RTX 3080 10GB and I'm stuck on:
Requested to load AnimateDiffModel
Loading 2 new models
0%| | 0/24 [00:00
me too
me too
Thanks for your great tutorial. Is there any limit on frame rendering? I used your workflow on a 32-second video file at around 30 fps (about 1000 PNGs) and got this error after 1 hour of render time on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
probably running out of VRAM, make sure your GPU is not being overused by other apps during the process
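For anyone wondering where the 6.43 GiB figure comes from: it is one float32 RGB array covering every decoded frame, allocated by numpy on the host side, so reducing the frame count (select_every_nth, frame cap) or the resolution shrinks it directly. A quick, purely illustrative check of the arithmetic:

```python
# Back-of-the-envelope check of the 6.43 GiB figure: frames x height x width x channels x 4 bytes (float32).
frames, height, width, channels = 976, 1024, 576, 3
total_bytes = frames * height * width * channels * 4
print(f"{total_bytes / 2**30:.2f} GiB")  # -> 6.43 GiB
```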
If you have a problem with IPAdapter, you should try changing the folder to ComfyUI\models\ipadapter (if you don't have an "ipadapter" folder, create it)
👍
I experienced that the ip adapter loader was not working properly, because it was not finding the path. I solved it by specifying the path as below, in case anyone else is having trouble with this issue.
ComfyUI / models / ipadapter / (ip-adapter-plus_sdxl_vit-h.safetensors)
You will need to manually create an ipadapter folder under the models folder.
thx man that solved the issue.
glad it worked
Saviour
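If the loader still shows "undefined" after the folder fix above, a quick way to confirm the file really is where ComfyUI looks (the root path below is an assumption, adjust it):

```python
# Quick check that the model landed where the loader looks (root path is an assumption - adjust it).
from pathlib import Path

ipadapter_dir = Path(r"C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter")
print(sorted(p.name for p in ipadapter_dir.glob("*.safetensors")))
# An empty list here usually means the loader will show 'undefined'/'None' - move the model and refresh.
```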
hi, in my case it shows this error when running: ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\lllyasviel\\Annotators\\.cache\\huggingface\\download\\dpt_hybrid-midas-501f0c75.pt.501f0c75b3bca7daec6b3682c5054c09b366765aef6fa3a09d03a5cb4b230853.incomplete'
Do you need a video card for this? or can it run on Google Colab? Thank you
for this method, you need a video card, if you don't have a decent one, you can run it on the cloud(for a fee): th-cam.com/video/XPRXhnrmzzs/w-d-xo.html
2:09 i didn't get any Missing nodes list here ... Please help me 😢
Someone didn't import the workflow 😏
@@MDMZ how to import that .. plz help me 🥲 i am new in this
@@uttarakhand_chess it's in the video
Hi, may I know the PC requirements to use this? 😢
run out of memory error here 😢
@@sajinscartoonvine local, but i'll try remote
me too@@sajinscartoonvine
I have a 3060 12 gb, 5 times I get an error that there is not enough memory. 😢😢😢
@@Fixerdp try remote, it worked here
For those having issues with IPAdapter, simply replace the node refusing to load with an IPAdapter Advanced node. The node used in this, and many other workflows, was replaced with this new node. All the broken workflows will work the same if you plug everything back into the new blocks properly :D
yep! i've added a workflow that uses the new node in the description
@@MDMZ Awesome, thanks for all your work, I really appreciate your informative guides and videos man!
Where do I put the IPAdapter model? I am using the updated workflow, but the workflow does not see the model in the legacy dir.
I managed to run this but now I am getting out of memory... Which style transfer vid2vid would you recommend me to run on a 8gb VRAM GPU?
@@MDMZ Thank you. Will this also help with the 24-frame load cap error I'm getting from the KSampler?
Great tutorial man, but please next time tell us we need to install git from the official page in the first place xD
oh, I didn't realize it was necessary to do that manually. May I know at which step you realized that, and how you found out that you need to install it? I will pin the solution for everyone else who runs into the same issue, thanks a lot!
@@MDMZ Hey there, I needed to install Git when I first ran cmd and pasted the link
I thought I messed up on the first step, thanks bro
I get this error and I don't know how to solve it : 'T2IAdapter' object has no attribute 'compression_ratio'
Go to manager and press Update ComfyUI. Fixed it for me. After that, I got "ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'patch_weights'", which I fixed by once again going to the manager and pressing "Update all". Now it works and a few warnings I was getting also disappeared 😁
I haven't encountered this one yet
Nice tutorial but when I tried to follow along and run the prompt I'm getting this error, any idea why that might be?
Error occurred when executing KSamplerAdvanced:
ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'patch_weights'
hi, I know many people are getting that error recently, can you try updating comfyui?
Well explained, congratulations. Just one question, I get: When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
check the pinned comment, just shared a fix
@@MDMZ thanks, now it worked for me
First comment
In 10 sec
You're so cool.
This is a phenomenally good video. Thank you for this.
You're very welcome!
When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
check the pinned comment
Sir, help me, how do I fix this? Error(s) in loading state_dict for ImageProjModel:
size mismatch for proj.weight: copying a param with shape torch.Size([8192, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 1024])
Reply me
Hi, please check the pinned comment
I had a similar issue, it was because I didn't select the image_encoder in the Load CLIP Vision node
1:45 when you go to that URL it says "We're adjusting a few things, be back in a few minutes..." I'm not really sure where to go from here. And how come I don't have "Manager" as a button option in my ComfyUI?
hi, try again, seems to be fixed now
for the manager, check this video: th-cam.com/video/E_D7y0YjE88/w-d-xo.html
Great tutorial! Thank you! Could you please help me with one thing? My "Video Combine VHS" nodes are missing video formats - only "image/gif" and "image/webp" are available. What did I miss?
are you using the same workflow from this video? in any case, you can export to webp and convert later
the node "animatediff combine" change to "video combine" . there are two nodes look like the same one . but different. try it again.
You are amazing, my brother 😎 as always, awesome and creative videos 👍🏻❤️
Bro, it didn't work for me. Could you explain on FB how you did it? Because I did the same thing and it wouldn't run from the start.
🙏
I have done this installation like you do in the video but it is not working
same
I wish i could help, but it's impossible for me to tell why, feel free to share more info on discord, someone might be able to help
@@MDMZ getting error while installing manager - 'git' is not recognized as an internal or external command,
operable program or batch file.
KSampler is not running for me; before that, the queue gets stopped. Also, I can't see any preview video. How do I fix this???
could be a memory issue, check the pinned comment
The IPAdapterApply node is missing (red) and not seen in the Install Custom Nodes tab. What's happening? How do I fix that?
Use the workflow fix in the description
@@MDMZ Thank you, it works. But.
This solution works only with a specific version of IPAdapter Plus. When I updated IPAdapter Plus via a custom nodes update, the red IPAdapter Advanced blocks appeared again... Why are there constant troubles with these updates? Why don't they keep backward compatibility?
Well, it runs without any errors for me, but starting at the Video Combine node all it generates is noise. Fractions of the prompt are even visible, but only barely. I triple-checked all models and settings but it's still like that. img2img with the same model is fine. The ControlNet Preview Image seems fine. Any ideas?
this happened to me before, I solved it by reinstalling from scratch
If you are getting an IPAdapter Apply problem, replace that node with IPAdapterAdvanced and reconnect the same inputs and outputs.
Such a useful video, thanks heaps for putting this together.
Glad it was helpful!
How can I fix this? When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
there's a solution in the pinned comment
@@MDMZ sir, I want to use my GPU for rendering in ComfyUI; GPU usage only reaches 5 to 14% while RAM hits 80%
The workflow does not work. I've been trying all afternoon. No matter the values, I always get exactly the same video from the sampler without any change from the prompt. 😥
hi, can you share more context on discord ?
Hi, I've finally managed to get this to work after countless hours of tweaking. I was wondering if there's any way to randomize the art style and model throughout the video generation? Like your Luma + WarpFusion AI video?
The past few months were all about trying to minimize flicker and keeping things consistent; if you want something that changes frequently, you can try the old WarpFusion.
im getting the following error:
Prompt outputs failed validation
VHS_LoadVideoPath:
- Custom validation failed for node: video - Invalid file path:
anyway to fix this?
make sure you paste the correct path to your video file
@@MDMZ It was the correct file, but I don't know what was wrong. I fixed the issue by removing that node and then adding the upload video node.
@@jittthooce oh nice, glad that fixed it
Hi, please I need your help, I just updated the comfyui, did update all, and I lost apply ipadapter within the video reference, and also the ipadapter from the keyframe adapter section.
I just found your comment about the update, thanks a lot, thank you my friend
u r welcome, glad it worked
Can anybody tell me what this is and how I can get rid of it?
When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
> I tried installing the IP adapters again
> installed all requirements
> updated everything
Update all from the Manager and use the fix workflow (in the description); there's more info on Discord, link in the pinned comment.
very nice and love using AI for animated features. Great sharing
Thank you! Cheers!
Great video. I just want to know how I could train my own model on my own art set (2:30) and use that as the style reference?
yeah you can do that, I don't have a video on training your own model, but there are several tutorials on youtube
I am using 11GB gpu and gets an error when executing KSamplerAdvanced (Allocation on device 0 would exceed allowed memory. (out of memory)). Is there any workaround or my gpu is just too weak?
same problem here.. just errors when its coming to the KSampler
This usually means u need a more powerful gpu with higher VRAM
What errors are you getting? Did you check the pinned comment?
@@MDMZ yeah, I have just 8gb.. still waiting for my credit card to use the hosting services 😅 but I figured out, when I reduce the steps, it's working
@@kietzi glad to hear that
Thanks so much for the tutorial! Just wondering how to keep the character and background consistent and a bit more stable? I've kept similar settings multiple times, but the person and background in the output still change a lot.
you can try playing with the main settings I mentioned. there's no exact formula, so try different combinations, I've tried to explain what every one of those settings does in the video
Hi, thanks for your guide. I tried following step by step, but the node "IPAdapterApply" is missing. How do I install it?
hi, use the fix workflow instead, in the description
Thank you very-very much !❤️👍👍
You're welcome 😊
Hi, need your help please,
I just dragged and drop a workflow into comfyUI and got this error :
When loading the graph, the following node types were not found:
MiDaS-DepthMapPreprocessor
VHS_VideoCombine
VHS_LoadVideoPath.
And I am trying to install the missing nodes but it's not working, please help me. Thank you bro.
what do you mean when you say "not working," is there an error? does it crash? context matters, feel free to share more info on discord, the community is very helpful
@@MDMZ Ok, I will contact you
'git' is not recognized as an internal or external command,
operable program or batch file.
I was just getting started, but cmd shows the above text. What should I do?
You need to install git, you can find it with a quick google search
What are the required specs of the PC to run the program?
Most important is to have at least 12GB of GPU VRAM
I asked several times but didn't get an answer in any forum.
The IPAdapter Model Loader node remains undefined even after installing the IP-Adapter as shown in the video. How can I make sure it recognizes it?? Please help...
make sure you have the files in the right folders
Thanks for the video! With this workflow, though, it doesn’t seem possible to change the background or the lighting on the model, does it? I have a specific outcome in mind that i can’t quite achieve. Using the same prompts, i get perfect results in still images with the same models and settings. Any thoughts on why that might be? 🤔 Is it about my IPAdapter? I had issues with it and switched to the workflow suggested in the description, which is quite different from the one in your video. Hmm..
The workflow fix is not very different. You can't really change the background with this workflow, but you might be able to by adding more nodes; check my dance transfer tutorial and see if that gets you any closer to what you want to achieve.
@@MDMZ Thanks, will look at it.
Is anyone else having problems when trying to "install missing custom nodes"?
Apparently, this one is missing:
"When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph."
hi, this is normal, there's a fix workflow in the description, and you can find more help on discord
@@MDMZ Thank you very much, I updated the workflow with the one provided in the description, and it worked. Now I will proceed with the tutorial. Big hug.
Hi, good evening. Your video helped me a lot, but when I run the rev_animated v2 model, I get an error in "CLIPTextEncodeSDXL". Can you help me?
make sure the model you're using is an SDXL model
KSampler stays at 33%. Although I waited for 4.5 hours, it still did not finish at 30 steps. I tried 25, same thing again; the last time I even dropped it to 9 steps and it still stayed at 33%. Is there a solution?
System: Ryzen 5 3600/gtx1070Ti 8GB/16GB 3200mhz Ram/500gb SSD
8GB might be a little low for it, but it could also be happening for another reason. Did you try setting a lower resolution? Maybe 480p.
When loading the graph, the following node types were not found:
MiDaS-DepthMapPreprocessor
VHS_VideoCombine
IPAdapterApply
VHS_LoadVideoPath
Nodes that have failed to load will show as red on the graph
Make sure you install missing nodes
@@MDMZ Thanks for the answer! But 2 nodes appear with "import error", which are:
(IMPORT FAILED) ComfyUI's ControlNet Auxiliary Preprocessors
(IMPORT FAILED) ComfyUI-VideoHelperSuite
They are marked as "installed" but for some reason they have this import error...
Command line error details:
[AnimateDiffEvo] - ERROR - No motion models found. Please download one and place in: ['C:\\Users\\Thyayrô\\Downloads\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-AnimateDiff-Evolved\\models', 'C:\\Users\\Thyayrô\\Downloads\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\models\\animatediff_models']
### Loading: ComfyUI-Manager (V2.36)
### ComfyUI Revision: 2228 [56333d48] | Released on '2024-06-07'
[VideoHelperSuite] - WARNING - Failed to import imageio_ffmpeg
[ComfyUI-Manager] default cache updated: raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[VideoHelperSuite] - ERROR - No valid ffmpeg found.
@@thyayronovaes7684 can u share the error texts on discord ?
@@MDMZ ok, going there!
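For anyone else hitting those two import failures, the log itself names the causes: no motion model in the AnimateDiff models folder and no usable ffmpeg for VideoHelperSuite. A small, purely illustrative check you can run in the same Python environment ComfyUI uses:

```python
# Small diagnostic for the two import failures in the log above: run it in ComfyUI's Python environment.
import shutil

print("ffmpeg on PATH:", shutil.which("ffmpeg"))  # None -> VideoHelperSuite can't find an ffmpeg binary
try:
    import imageio_ffmpeg
    print("bundled ffmpeg:", imageio_ffmpeg.get_ffmpeg_exe())
except ImportError:
    print("imageio-ffmpeg is not installed (pip install imageio-ffmpeg)")
```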
This guide is so cool. What do I need to change to get better results for celebs?
a model trained on celebrity pics would probably help, but sometimes using the person's name in the prompt works fine
I have tried that, but I get many artifacts on the face and clothes @@MDMZ
For some reason, my preview is skipping a bunch of frames, so it's a flickering mess of randomness. Any idea what's causing that? Also, I'm getting a 24-frame load cap error from the KSampler.
can you share the output and workflow on discord ?
Boss, please help me. After installing the missing nodes in my workflow, there are still two nodes that are red, and they are not displayed in the Manager. They are in these two areas: Video Reference IPAdapter and Keyframe IPAdapter.
hi, use the fix workflow instead(in the description)
Need some help here: my IPAdapter Model Loader only shows undefined and it won't let me change it to ip-adapter plus sdxl (I've checked, the file is in the folder).
It has been updated; use git to roll back, or replace it with IPAdapter Advanced.
Error showing " IP adapter apply" missing ..what to do ?????
Same error 😢
the solution is in the description
Can you help me? How can I load different models into your workflow? I get errors if I use ones other than those in your description. Thanks in advance.
make sure they are SDXL and not SD1.5 models
@@MDMZ okay, thanks a lot! And any idea how to get other models running in that workflow? For example, LoRA models?
Any recommended hardware as far as GPU and CPU?
I have a 3060 12 gb, 5 times I get an error that there is not enough memory.
lol@@Fixerdp
try lower resolution@@Fixerdp
I'd say anything lower than 12GB will give you trouble
Great tutorial!!!! I wanna try it. Where can I find the original video of the dancing man?
just added in the description
@@MDMZ thank you and it work ,.... thank you,...
I followed every step as is; in the workflow all the other models appeared except the IP Adapter. I restarted ComfyUI and then got a lot of errors, and it quits at the command prompt.
What has gone wrong, no idea, and what to do, no idea at all. Any ideas, guys?
OK, it started, but I don't see any model in the IPAdapter node, although the model is in the same folder as per the tutorial.
try creating a new "ipadapter" folder under ComfyUI\models\
and place the models there
@@MDMZ Thank you, this worked! However, what should the memory requirements be to run this successfully? I have an RTX 3070 Ti GPU with 8 GB and 32 GB RAM, yet the process fails, giving an out-of-memory error message.
@@AICineVerseStudios 8GB VRAM is a little too low for it; from my experience you need at least 12, but you might be able to make it work if you drop the dimensions
Hey creator, I have a problem with IPAdapter: it shows red and a BizyAir button appears, and the ComfyUI you opened and the one I have are different. Is there a solution for this?
hi, make sure you use the fix workflow, it's in the description
What settings can I use for 1.5? I have an RTX 3090 with 8GB of RAM but keep getting the CUDA memory error with the settings in the video.
Unfortunately, 8GB is very low; you might not be able to run it smoothly, but reducing the steps value and the video resolution might help.
When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
check the pinned comment, just shared a fix
i get this error when it gets to the sampler : Error occurred when executing KSamplerAdvanced:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
any idea what should i do ?
hi, you can check the pinned comment
My graphics card is 4070ti 12GB, and it takes 1.5 hours to generate a 6s long video. Is this normal?
a bit too slow, what resolution are you generating at ? maybe you can try to lower it
there is an error coming saying...
When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
please help
Hi, check the pinned comment, this has been resolved
Hi man, how are you? Your tutorial is very good. Just one thing I couldn't reproduce here: the IPAdapter I installed is different from yours; mine doesn't have the two inputs, the clip vision and the noise. This is hindering my results. Do you know how I can solve this? Thanks
you can just change the properties that are available in the new ipadapter
Please help with the errors mentioned below; 1070 8GB GPU, and RAM is also 8GB
I have a 3060 12 gb, 5 times I get an error that there is not enough memory.
8GB VRAM might be a bit too low for this
Hi
How can i fix that
"When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph."?
there's a solution for this in the pinned comment
I have a 4090 24GB and get a CUDA OUT OF MEMORY error. I run every other workflow, but this one does not seem to work.
did you manage to solve the problem? It keeps telling me that there is not enough memory
@@Fixerdp No i did not... Need @mdmz for that
@@Fixerdp Lowering the resolution and chopping the video into chunks of frames resolved the problem for me. For example, I render half of the video, then the other half, using frame_cap and skip_frames on the Load Video node.
that's really weird, make sure your GPU memory is not taken over by other apps, I'm glad you found a fix tho
not sure which part you're referring to
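For the segmenting trick mentioned a couple of replies up, the idea is simply two passes over the same clip with the Load Video node's cap/skip settings, then joining the outputs; a sketch of the settings, assuming a roughly 960-frame clip:

```python
# Illustrative numbers only: splitting a ~960-frame clip into two render passes on the Load Video node.
# Field names follow the VideoHelperSuite loader (frame_load_cap / skip_first_frames); adjust to your version and clip.
pass_1 = {"frame_load_cap": 480, "skip_first_frames": 0}    # first half: frames 0-479
pass_2 = {"frame_load_cap": 480, "skip_first_frames": 480}  # second half: frames 480-959
print(pass_1, pass_2)  # render each pass separately, then join the two outputs in an editor or Video Combine
```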
Hey @MDMZ, I have installed ComfyUI on my Mac M2 using another tutorial. Then when I came to go through your tutorial, it seems I don't have the Manager tab or the Share tab available in that window. I don't suppose you know why that is? Anyway, thanks for sharing either way.
The whole tab is not showing or just the manager button ? Try updating comfyui, there's another solution for the manager button disappearing, in the pinned comment
@@MDMZ Just the manager button.
I've figured it out!
Hi, I'm not able to see the models in the checkpoint node. I have them downloaded in the same location as in the video.
hit refresh, try restarting otherwise
Hi! When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph. Any hints?
Yes, use the fix workflow instead, it's in the description
erro: When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
hi, make sure you use the fix workflow (in the description)
I have followed the steps in this video. but I found something like this. how to solve it. hope you can help me. thanks.
like this:
When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
Hi, the solution is in the pinned comment
Pls help , got this MemoryError: cannot allocate array memory
Looks like you have low VRAM or are pushing the settings too high.
Hi, I successfully installed the workflow and installed the models in the same way. I processed it but it only processed 4% for about 1-2 hours. I have a 16GB 4070TI SUPER. I think it is not normal, what should I do?
Depends, How many frames are u trying to process?
Thank you very much 👏🏼 One question. Is it possible to make a 15-minute video like this? Or is it only suitable for short videos of a few seconds? Thank you in advance
I haven't tried a video that long, I haven't encountered restrictions on video duration, but a 15 mins video will surely take so long to process if it works, why not give it a shot ?
15 minutes or 15 seconds? A 15-minute one will destroy your PC bro 😂😂😂
Unless you have unlimited VRAM or a ton of it, you will get an Out of Memory error. You can segment it out and combine the parts together for that long video. It's prone to worse results at that duration. Good luck trying to make a long one.
@@gopxrock4950 thank you so much
I'm getting this error: "Error occurred when executing ControlNetApplyAdvanced: 'NoneType' object has no attribute 'copy'" and a bunch of other stuff. Anyone had this problem?
yeah fuck it, I'm out. The requirements are high as fuck. You should point that out, dude.
I followed all the steps, but in the workflow the IPAdapter Model Loader box shows ipadapter_file - null, and when I try to select it, it changes to undefined. Any solution, please?
hi, use the fix workflow in the description box, you can also head to discord for more help(link in the pinned comment)
Help, I'm getting:
Warning: Missing Node Types
When loading the graph, the following node types were not found:
IPAdapterApply
No selected item
Nodes that have failed to load will show as red on the graph.
hi, use the fix workflow in the description
@MDMZ tried it; the only way to fix it was to swap the adapter nodes out for an advanced adapter node. Thanks for replying 🙏💯
@@callew27 yep thats what the fix workflow was for, glad you fixed it
Hi, thanks for the tutorial, it was a great help for a beginner like me. How can I add my own custom SDXL LoRA to the prompts here? Like, where do I connect it? Thanks in advance.
This would be a separate tutorial on its own. Did you try finding other videos on YouTube?
Hello, thank you for this amazing tutorial. I am stuck on the "preview image" node and the console tell me this :
got prompt
Failed to validate prompt for output 12:
* IPAdapterModelLoader 138:
- Value not in list: ipadapter_file: 'ip-adapter-plus_sdxl_vit-h.safetensors' not in []
Output will be ignored
Failed to validate prompt for output 185:
Output will be ignored
How can I resolve this please ? (I am on mac)
you're missing the ipadapter model, make sure it's in the right folder
@@MDMZ Thank you for your responsiveness. I forgot to specify that the model was installed in the right place on my Mac (/Users/myusername/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models/ip-adapter-plus_sdxl_vit-h.safetensors) as in the video. And I used the workflow base that you put in the description.
Is there an external factor to this for the path to be detected? A line of code or a command to enter?
Man, you are awesome, thanks for your time and effort ❤❤❤ Do you know if it's possible to use multiple ControlNets in this pipeline? Depth + edge detection? I tried to use a multi-ControlNet node but then I got an error with the IP adapter 😢
theoretically, it should be possible, I haven't tried it myself.
I tried with multiple ControlNets, but it only works with about 20 frames; when I try to render more frames of the video there is an error.
Thanks for the clarity.
Glad it was helpful!
Nice one. How long did it take you to render? Why is it so slow on my low-VRAM setup, even though I have a good GPU, a 2080 Super?
If I am not mistaken, the 2080 Super has 8GB of VRAM? That is considered a little low for this; you need at least 12, and it won't be blazing fast even with 24GB.
@@MDMZ thanks bro. Can you make that popular rose or plants dancing animation tutorial
Hello sir, it says CUDA out of memory error. Can you please help me? I only have 6GB VRAM.
unfortunately that's a little too low for this, check the pinned comment for some tips
I am loving this, but I have hit an issue, I hope you can please help
I am getting * IPAdapterModelLoader 138:
- Value not in list: ipadapter_file: 'None' not in []
Everything seems to be loading, but on the IPAdaptor Model Loader it says UNDEFINED
Thank you so much guys, for any help you can offer xx
did you download the ipadapter models and place them in the right folder ?
Moving to ComfyUI\models\ipadapter works for me
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 5.35 GiB
Requested : 1.41 GiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
this means your GPU doesn't have enough VRAM
@@MDMZ how much memory does your GPU have?
@@rayzerfantasy 24GB
@@MDMZ I have a 8gb NVIDIA RTX 3050 😕
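When you hit that allocation error, it can help to see how much VRAM is actually free before queuing the prompt; a minimal check with PyTorch, run from the same environment ComfyUI uses:

```python
# Prints free vs. total VRAM as PyTorch/CUDA sees it right now - useful before queuing a heavy prompt.
import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"free: {free_bytes / 2**30:.2f} GiB / total: {total_bytes / 2**30:.2f} GiB")
else:
    print("CUDA is not available")
```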
I keep getting this error, do you know what'll fix it?
Error occurred when executing KSamplerAdvanced: ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'patch_weights'
seems like it's happening with many comfyui users recently, I hope it gets fixed with an update:
github.com/comfyanonymous/ComfyUI/issues/3044
@@MDMZ do you know if this error got fixed?
Hi Mohamed
Whats the minimum specs to run this?
Thanks for the video
I haven't tried myself but I believe u can run it on 12GB VRAM or higher