Need help? Check out our Discord channel: bit.ly/44Qtkin
Use these workflows to add more than 4 images: bit.ly/45lDiZD
I've added some solutions and tips there, and the community is also very helpful, so don't be shy to ask for help
Hey man, the link says it's invalid. Could you update it please? :)
@@grovemonk fixed
@AillusoryOfficial thanks for letting me know, just updated the link
i can't access your discord server for the list of models, any ideas?
hi, how can I download all the models if I can't join your discord? It says "unable to accept invite"
I gave up on comfyui forever until I saw your tutorial. Yours is truly the best one on youtube! Thank you, and keep up your amazing work!
Wow, thank you!
You are such a master at Comfy UI.. but also just as an educator! Having spent so many hours on youtube, your approach to teaching is just so concise, easy to follow, and generally brilliant... Thank you so much for taking the time to share your knowledge with the world.. You legend!
Wow, thank you!
I just wanted to express my immense appreciation for your ComfyUI Animatediff tutorial! It was incredibly clear and well-paced, making a complex topic feel so approachable. Your detailed explanations and step-by-step guidance were exactly what I needed to grasp the concepts fully. Thanks to you, I feel much more confident in implementing these animations in my projects. Looking forward to learning more from your expertise!
Very helpful and thank you so much. I would recommend this to my friends who asked me before about these ai morph transitions. again thank you.
Thanks for sharing!
As always the best tutorial ever, helped me reach dope crazy results, thanks bro 🙏
Happy to help!
Hey,
Thank you a lot for this tutorial.
The workflow works for me, except that the generated video is too fast, not smooth, as if there is no interpolation but just a rapid succession of images.
Thanks in advance for your help.
strange, can u share more context on discord?
I have a question that other people might have too. I am new to the AI world and don’t know how things work. In your video, you show us how to do everything step by step. But if I want to try new things or use other models, how do I do that? I think we can do more fun stuff in ComfyUI besides just changing the video. Can you make a video about that or write back to explain? This will help many people like me. Thank you for all your hard work!
I get you, I think that comes with experience, try different workflows, you can also look up tutorials on specific nodes and what they're used for
Great guide, thanks. I managed to produce something, and this is basically what Krea ai is offering, but their output is a bit dark and unpolished. Really appreciate the points on using VRAM.
interesting, I'm gonna try Krea ai
these tutorials are great they just completely skip crucial steps for the truly uninitiated .... i keep having problems installing all the models and no one is providing a clear instruction ive never used github before and im not a developer ...... maybe its gatekeeping, maybe its just me ... but this is truly the most frustrating learning experience ive ever had
can you head to our discord and share what specific issues u ran into? we'll be happy to help
@@MDMZ you are a gentleman, soooo patient😂
Thanks! Been looking for a tut on AnimateDiff!!!
Awesome!
Can you create a video on how we can increase the video length, i.e. adding more images than 4?
I will be experimenting with that
Subscribed, very complete tutorial, what video card are you using?
4090
The king has answered our prayers. Just upgraded to a 4060 Ti, can't wait to get better quality outputs!
Congrats!! 8 or 16 GB of VRAM?
@@MDMZ 16!
@@ComfyCott Power 💪
Thank you for this! Any other morph workflows that keep the image "exact" rather than reimagining it? I love these morphing loops but would love it to follow my initial images completely. I thought I saw a 2 image morph that seemed to not "reimagine" the inputs. Thank you for the detailed settings walkthrough. Improved my results considerably.
I would love to have that too, I don't know of a way to do it yet
hey! i'm also interested in what you describe! did you find something?
I think it's because of the IPAdapter: although you set it to Strong, it will still reshape the image. Not sure if anyone has the solution yet?
Hi! This looks so dope but I cannot seem to make it work. The generation is a blank white image. Can someone help? Thanks
can u try with default settings first?
Just discovered this workflow today, thanks for the tips!
Happy to help!
Great vid! Did you ever find a way to keep the likeness of the celebrities you were morphing between? I know you said you were looking into it. Thanks!!!
Not yet! It didnt work
Thanks for your amazing workflow! Please, I have two questions:
1) In the Samplers group there is "Upscale Image By" node: scale_by 3.75 = 1080p and you say also is possible to set scale_by 2.5 = 720p .
- How do you calculate the factors (3.75 and 2.5) for 1080p and 720p?
2) If we choose scale_by 3.75 in the Upscale /w Model group in the node "Upscale Image" we need to set width 1080 height 1920.
If we were to choose scale_by 2.5, in the node "Upscale Image" should we change it to width 720, height 1280?
I later found out that the scale ratio and the final resolution are independent from each other: you can use the ratio to do a first upscale, then the final resolution to upscale again. And as for the calculation, simply multiply the ratio by the batch size and you'll get the upscaling resolution
@@MDMZ Thanks for the answer. "Simply multiply the ratio by the batch size and you'll get the upscaling resolution":
In your original workflow: batch size = 96, 512 x 288 = 16:9 ratio, scale_by 1.75
1) 16:9 = 1.777777777777778 * 96 = 170.6666666666667
2) 96 * 16 = 1536; 96 * 9 = 864
how do you get scale_by 1.75?
@eltalismandelafe7531 I wanted to go from 288 to 504
504/288 is 1.75, and that's how i found the ratio
@@MDMZ yes, you rounded it off: 288 * 1.75 = 504, although in the Empty Latent Image you have written Width 288, Height 512, not 504
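The arithmetic that finally settles this thread is just target height divided by base height. A minimal sketch, assuming the 512 x 288 base latent discussed above:

```python
# scale_by = target_height / base_height of the latent (the 504/288 = 1.75 above);
# the batch size (number of frames) doesn't enter into it.
base_height = 288  # Empty Latent Image height in this workflow
for name, target in [("504p", 504), ("720p", 720), ("1080p", 1080), ("4K", 2160)]:
    print(f"{name}: scale_by = {target}/{base_height} = {target / base_height}")
# 504p: 1.75, 720p: 2.5, 1080p: 3.75, 4K: 7.5
```

This also reproduces the 2.5 = 720p and 3.75 = 1080p factors quoted elsewhere in the comments.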
I like how the link "List of necessary links" leads to your Discord server with no clear way to get the file
The list is there, with full instructions, check the pinned msg in the discord channel
@@MDMZ thanks, now i see it!
fantastic tutorial. Instant results
Great to hear!
This looks similar to sparse ctrl workflows, i'll see how they compare
The transitions between images are too fast for me. I did not find where you can adjust the transition time. I would be grateful for advice.
There's some math and numbers involved, but i can tell u that making the transition longer can produce bad results
@@MDMZ I understand. But I want to try it myself. Could you tell me in which node I can play with the numbers?
I still can't get this workflow to work :( i am getting this error after following everything precisely. IP ADAPTER ADVANCED Error(s) in loading state_dict for ImageProjModel:
size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).
so sorry, I don't think I know the fix for this, did u try sharing on discord?
Great tuttorial as always! Thank you!
Glad you liked it!
Great tutorial, but why is the Simple Math node not working for me? i haven't touched it, but it's highlighting the b input after trying to generate. 😮‍💨
I saw your comment on discord, responded
How long should it approximately take to create something like in this tutorial with a MacBook Pro M1 Max and 32GB RAM, just to understand the scale?
hard to tell, because this is way faster when u have an NVIDIA GPU, which Macs don't have
Thanks for this amazing AI video information. Many greetings from good old Vienna 🎡 🎩
Glad you enjoyed it!
I can't get rid of the red notifications, and when I try to update or install anything (at video 1:13) I get errors. Reinstalled and uninstalled the program several times already and still errors. Can you please advise me what I am doing wrong?
that's weird, can you share more context on discord please? it's easier to share screenshots and resources over there
so, have you found a way to make the generated video look like the uploaded photo? (face)
Not yet :/
Hi @mdmz, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining overall style? Also have you got this workflow to work with SDXL?
That's normal, it doesn't stay 100% true to the input, i tried with sdxl, couldn't get good results
I’ve done this 2 times and keep getting errors: "Cannot execute because VHS node doesn't exist. node id #53". Any ideas how to fix?
try re-importing the workflow
thank you brother, it was working perfectly, but just today a problem showed up in the Simple Math node in the QR code group, would you please help with it?
I will check
you are pure excellence.
How long does your full render take and what is your gpu? It takes my 3080 about 1hr to render 720p but fails on the upscale. Any suggestions?
I use 4090, it takes around 20-30 mins to do the whole thing, try reducing the upscaling ratio, don't use your computer when it's upscaling
Hello, I’m following ipiv's Morph tutorial, and everything is going well, but I’m using reference images without humans, just hallways or structures, and yet a human always appears at the end. Is there something I’m doing wrong? I’m using the same models and LoRAs that come by default. The only thing I’ve adjusted is the motion scale to add more movement to the animation.
perhaps you can try to use another model that's trained on images similar to what you're trying to achieve? Example: if you wanna generate buildings, get a model that's trained on building images
@@MDMZ Thank you for the response. I tried some more architectural models, but I don't think they were that good. In the end, I believe what helped was increasing the weight in the IPAdapter Advanced (haha, but I'm not sure that's the reason). Thank you very much for the effort put into this tutorial; it's very good.
How can I make it from video? From 1 video or multiple videos, for example 6 video faces? Please make a tutorial, if possible synced with the mouth in the video.
Hello my friend!! I am following the Morph tutorial from the video:
"Create Morphing AI Animations | AnimateDiff: IPIV’s Morph img2vid Tutorial"
I did all the steps as shown in the video, but when I click "Queue Prompt," it starts running in the terminal (I am using a Mac M1), and at the end, the message I attached here appears, and it just stays at 0%, even though I left the upscale nodes deactivated as instructed in the video. Can someone help me solve this issue? In the terminal, it only shows 0% as in the image. Thank you in advance!
I hope it's not too much of a bother but when I place the ipiv-Morph-img2vid-AnimateDiff-HyperSD on the page I get an error message: Warning: Missing Node Types When loading the graph, the following node types were not found: About 18 of them. How do I fix this?
Looks like u missed parts of the video 😏😉
@@MDMZ ah, thanks for getting back so quickly! I was following along from 0:53 to 0:58 with dragging and dropping the json. In this tutorial your nodes are green and when I drag and drop 18 are red and that's when I get the error message. But thanks anyway...
When I use a real person's image, it completely changes that person's face to a different man. Is there any way to fix that to maintain the same face?
By the way, great video. Keep it up.🔥🔥
check 5:00
@@MDMZ Yes, I caught this the second time I watched the video. Thank you for clarifying this, though.
love your content 🔥🔥
@@MDMZ hi sir, i know this tutorial just came out but i want to know if this is possible
Great video and I look forward to trying it. But, do you have a link to the model list that does not require discord?
Hi, i need the first and last uploaded images to remain as they are, without reinterpretation. Is it possible? Thanks
The images will always change a bit
Thank you brother for being so kind doing this amazing vid ❤, sadly i still can't get any results.. i followed all the steps and every file is in its right place, but i always get an error once i reach the KSampler, would you please help?
you can share the error on discord, u might be able to get help if u provide more context
Thanks for the tutorial, do you perhaps know why my face in the picture is getting deformed?
it doesn't work well with real faces, I talked about it in the video
There is no Manager option in my ComfyUI, what should i do now?
did you install the manager ? check this video: th-cam.com/video/E_D7y0YjE88/w-d-xo.html
@@MDMZ much respect for you bro, to be very honest you and your community are great, i love being a part of your community.
Hi There! This is so sick, do you do anything paid or know anyone who does this for commision? Just recently have been exploring AI art and am totally new to the field. Thank you so much!
u might be able to find some talent on our discord server
Can u do a tutorial on krea? Similar options, may be easier for many
Krea is awesome, but I don't think u can use it to do something like this
The upscaling keeps getting stuck and won't generate anything
no errors at all? did u try upscaling to 720 or 1080? trying a lower res might help
Amazing results indeed, but wow, at 1:00 it lost me, as it's wayyy too complex to use unfortunately
haha that was my exact reaction when I first saw it, don't get discouraged, it gets easier 😉
Thank you man! This is dope
Glad you like it!
I use the same settings as you and use a GeForce RTX 3070. Is it normal that a full render will take 7 hours???
I've replied to u on discord
@@MDMZ Thank you sir!
To use juggernaut_reborn, where in the ComfyUI folder structure did you put it? I downloaded it and tried a bunch of different places but it wouldn't show up in the "Load checkpoint" box
hi, all the correct placements of models are included in the full list (link in the description), make sure you use the correct path
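For quick reference, a sketch of the usual ComfyUI model layout (an assumption based on standard installs; exact folders can vary with custom-node versions, so the pinned list stays authoritative). juggernaut_reborn belongs under checkpoints:

```
ComfyUI/
└─ models/
   ├─ checkpoints/         <- e.g. juggernaut_reborn.safetensors
   ├─ loras/               <- e.g. Hyper-SD15-8steps-lora.safetensors
   ├─ clip_vision/         <- the CLIP-ViT-* vision encoders
   ├─ ipadapter/           <- IPAdapter model files
   └─ animatediff_models/  <- AnimateDiff motion modules
```

After adding a file, restart ComfyUI or hit Refresh so it shows up in the Load Checkpoint dropdown.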
I have a problem where images from the previous generation are saved, and even though I remove them, they still appear in the generation
that's strange, try restarting ComfyUI, and set the seed to randomize
how can i control clip vision in this workflow my friend?
Hi! I regularly use this workflow, but lately, I've encountered a few issues with it. All the problems started after the Comfy update. Initially, the issue was that instead of smooth morphing, a bunch of images similar to what I inserted into the IP adapter were generated, and they would rapidly switch between each other (restarting helped, but only for one generation). However, the biggest problem appeared today (also after the update): there's an issue with the "Simple Math" node, and honestly, I don't know what to do. There are just two red circles around "A" and "B" that are highlighted. I'd really appreciate your help - I have no one else to turn to
that sucks, some things tend to break after updating, I will test it out again and see if it works for me
@@MDMZ After the recent update, the issue with the IP adapter has been completely resolved, but the workflow still isn't working due to the Simple Math node.
@@MDMZ I've fixed everything. In case anyone else encounters this issue, you just need to replace the "Simple Math" node with "Math Expression" and make sure to write "a/b".
@@fo4ez_142 Thanks for this
Any suggestions on how to do a longer video? I want to use more than 4 images, how do i add nodes?
you can duplicate the image group nodes to add extra images
any way to set video combine to output in h264 format? because the uncompressed file is too big. also for the images, any way to always save as jpeg or webp, not png? ps im not talking about previewing.
u can increase the crf to reduce file size, I think the combine node has options to change codec as well
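If the node options aren't enough, the finished file can always be re-encoded outside ComfyUI. A minimal sketch (assumes ffmpeg is installed and on PATH; the file paths are hypothetical):

```python
import subprocess

src = "ComfyUI/output/morph_00001.mp4"       # hypothetical Video Combine output
dst = "ComfyUI/output/morph_00001_h264.mp4"  # compressed result

subprocess.run([
    "ffmpeg", "-i", src,
    "-c:v", "libx264",      # H.264 codec
    "-crf", "23",           # higher CRF = smaller file (18-28 is the usual range)
    "-pix_fmt", "yuv420p",  # widest player compatibility
    dst,
], check=True)
```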
Can I use real photos? e.g. from my dad, change to me and change to my son?
Hi there, this question was covered in the video
The only preset I can get to work is ViT-G (medium strength)?!
You need to download ALL of the ipadapter models
Great tutorial! Thanx MDMZ!
happy to help!
hmmm. I must be missing something because I can't seem to get the video to look anything like the original images...any tips?
Fixed it, the LoRAs did not auto-pull in settings! noobing my way through, thanks for this tut!
is there a way to add more pictures to the process? and how can I make a longer video out of this?
yes, check the pinned comment
No matter what I do, I always get the "cannot find IPAdapter model" error when I try to use Plus (High Strength). I've downloaded the model several times and renamed it properly, but it's NEVER found. Thoughts?
In which folder are u placing the model?
@@MDMZ I've got it in the /ComfyUI/models/clip_vision folder. Same spot as where I have the medium Strength model that IS functioning.
Looks like I may need a hardware upgrade or something though; using a medium strength model, my project fails at the second Ksampler "torch.cuda.OutOfMemoryError: Allocation on device"
Running an RTX 4070TI Super, 16Gig VRAM, I feel that SHOULD be enough.
@@MDMZ
I'm putting the model in the /ComfyUI/models/clip_vision folder, same folder as the medium strength model which is working.
I Get a couple "Allocation on device" errors; Running an I-9, RTX 4070TI Super 16 G VRAM and 32 GIG ram, I'm wondering if I need more RAM for this workflow?
@@NWO_ILLUMINATUS that's not the correct folder for IPAdapter models, it should be placed in the IPAdapter models folder, and you might need more VRAM depending on how high you're pushing your settings
@@MDMZ Sadly, didn't work. Still model not found. Also, the notes in the workflow say to add the models to the clip vision folder, and the medium model works in the clip vision folder. odd
Where can i find the list of the models? The link is not working :(
Can you please update the link, since the discord chat it's linking to has no list as a pinned message.
Hi, I just checked, the link still works fine, when u accept the server invite check the pinned messages on the ipiv morph channel
@@MDMZ I also can't seem to find it in the discord. Don't see an ipiv morph channel! tips?
@@stijnpruijsen the discord link will take you directly there, and then look for the pinned message
hi, what is the resolution of the uploaded photos?
for this particular video 1024x1024, but so far I haven't had restrictions with resolution or aspect ratio, better quality helps tho
@@MDMZ Are these images publicly available? I can't achieve your result.
@@CarCrashesBeamngDrive yes they are, together with the workflow
I went according to the same links you posted and downloaded the required files, but it gave me an error again, what's the problem?
Error occurred when executing ADE_LoadAnimateDiffModel:
'Hyper-SD15-8steps-lora.safetensors' is not a valid SD1.5 nor SDXL motion module - contained 0 downblocks.
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 178, in load_motion_model
motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1084, in load_motion_module_gen2
mm_state_dict, mm_info = normalize_ad_state_dict(mm_state_dict=mm_state_dict, mm_name=model_name)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module_ad.py", line 136, in normalize_ad_state_dict
raise ValueError(f"'{mm_name}' is not a valid SD1.5 nor SDXL motion module - contained {down_block_max} downblocks.")
hey, it's possible that the file 'Hyper-SD15-8steps-lora.safetensors' is corrupted, try re-downloading it, you can also share this on discord for more help
I'm getting an error that says "control net object has no attribute latent_format"
hi, please check the pinned comment
I don't have a color correct node in my workflow. how do I get it?
make sure you've installed all the missing custom nodes
@@MDMZ can't find a color correct node
why does the final output video turn super dark when i use super bright images??
make sure you use the right settings and models, if it persists, try reducing the steps down to 15-20
Hey i am using think-diffusion for this.
When i upload 2 files that are both named model.safetensors inside the ComfyUI/clip_vision folder, i am not able to rename them to "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"
can you please help me?
you don't have to rename it, just make sure you load the correct file in the node
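If a hosted service blocks renaming on upload, a short script run on the instance can rename the file after the fact. A sketch, assuming the file already sits in clip_vision; the source filename is hypothetical, and the target name should match whatever the workflow's model list specifies:

```python
import os

clip_vision_dir = "ComfyUI/models/clip_vision"
renames = {
    # hypothetical uploaded name -> name the workflow's model list expects
    "model.safetensors": "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
}
for old, new in renames.items():
    src = os.path.join(clip_vision_dir, old)
    if os.path.exists(src):
        os.rename(src, os.path.join(clip_vision_dir, new))
```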
Invalid invitation to discord, would it be possible to update the link? Congratulations on the work!!
Just fixed it, sorry about that
@@MDMZ thanks
There is no Manager option in my ComfyUI, what should i do now?
You need to install it, check my comfyui installation video for instructions
I can't find the list of models, when I click the link for discord.
check the pinned message in the discord channel
Can I easily create AI animation with Animate Diff/Comfy UI's help using an Nvidia Geforce 1050TI 4GB graphics card?
4GB is a bit too low
Can u please post your workflows somewhere else? Cuz Patreon not available in many countries...
I believe the unavailability issue affects the payment stage only, I put the workflow there for FREE, can you check if you're able to see the post?
@@MDMZ Nope. Patreon is blocked from their side, they decide which nations have the privilege to join... If the workflow is free, maybe u can link it via google drive?
Followed the original video and can't work out why my outputs look extremely low quality
perhaps you need to adjust the resolution, upscaling ratio, and steps
I keep getting the IPAdapter model not found error. Any solution?
Make sure you place the files in the correct path
I'm learning so much from you sir 🙏🏻💯🫡
Glad to hear it
How to open comfyui after installing it all??
open run_nvidia_gpu
Where is this list of necessary models? I don't see it on discord. Help.
it's pinned, check the pin icon at the top right.
@@johnlonggone thanks.
Can u help me?
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 535, in load_models
raise Exception("IPAdapter model not found.")
me too
make sure you download all the models from the list, and place them in the right folders
Error occurred when executing ImageSharpen. why is it always like this?
help me please
what does the error say?
How much VRAM and RAM should i have for that? I have 32GB RAM, 8GB VRAM
I recommend at least 12GB of VRAM, you can still give it a try
raise Exception("ClipVision model not found.")
make sure you download the correct clipvision files AND... rename them as described in the list
@@MDMZ it's solved, but i still get an error on cv2
2.5 for 720p and 3.75 for 1080p, what about 4k?
how long is your render time with 3.75? thx
7.5 try at your own risk 😅
Better than deforum?
depends who u ask, both can be used for different things
How do you print it out in a 16:9 ratio resolution??? plz
just swap the dimensions, it's actually explained in the video
please tell me: how to add a 5th or 6th photo, or more photos? thank U a lot
still looking into this, might need to start off a different workflow
can someone send me a link to the IPAdapter model please. I think the link mentioned here is not good. thanks.
What happens when you click on the link ? Seems to be working fine for me
The motion graphic site is down, how can I get the Video? Thx
seems to be working fine now
@@MDMZ I tried many times, still can't reach the site, pls help, maybe upload the video somewhere else (maybe the site blocks some IP addresses)
how to increase the number of frames?
check the pinned comment
Really enjoyed the video
I'm rooting for you
Wow Thanks
AMAZING❤️
Thank you!
Nice
Does this work with flux?
I'm gonna try this, it might take ages to generate tho
Error occurred when executing ADE_LoadAnimateDiffModel:
'NoneType' object has no attribute 'lower'
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 178, in load_motion_model
motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1066, in load_motion_module_gen2
mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
facing this error
Can we have a Google colab for this please?
hello, why do i get an ipadapter loader error sir, can u help me
hi, check the pinned comment
How do I do looping?
it loops by default