Please follow Matteo on YouTube: www.youtube.com/@latentvision
SweetyHigh Video: th-cam.com/users/shorts-_YZ1kSoInQ
#### Links from my Video ####
Workflow Download: openart.ai/workflows/matt3o/template-for-prompt-travel-openpose-controlnet/kYKv5sJWchSsujm0zOV0
huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt
huggingface.co/guoyww/animatediff/blob/main/v3_sd15_adapter.ckpt
huggingface.co/guoyww/animatediff/tree/main
huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/blob/main/control_v11p_sd15_openpose_fp16.safetensors
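If it helps, the files linked above can be fetched with huggingface_hub. This is a sketch only: the repo names and filenames come straight from the links above, but the target folders are assumptions based on the replies further down this thread, so adjust them to your own install.

```python
# Sketch: download the linked models with huggingface_hub (pip install huggingface_hub).
# Target folders are assumptions based on the comments below - adjust to your install.
from huggingface_hub import hf_hub_download

hf_hub_download("guoyww/animatediff", "v3_sd15_mm.ckpt",
                local_dir=r"ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models")
hf_hub_download("guoyww/animatediff", "v3_sd15_adapter.ckpt",
                local_dir=r"ComfyUI/models/loras")
hf_hub_download("comfyanonymous/ControlNet-v1-1_fp16_safetensors",
                "control_v11p_sd15_openpose_fp16.safetensors",
                local_dir=r"ComfyUI/models/controlnet")
```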
we need help finding one of the models
@OlivioSarikas you mentioned using DW Pose in the title of the video, but you linked the OpenPose model!
that's the first animation I've seen with SD that looks professional, very cool!
FOR ANYONE GETTING *ERROR* AT 2:45 *MOVE YOUR ComfyUI_windows_portable folder to your "C" Drive.* The file path must be " C:\ComfyUI_windows_portable "
In my case I have a storage space labeled "F". So my file path is " F:\ComfyUI_windows_portable "
Windows does not like long file paths, so you need to place the ComfyUI folder at the root of your drive.
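For context: stock Windows still enforces the legacy 260-character MAX_PATH limit, and the model caches under custom_nodes nest very deep, which is why installs buried in Downloads break. Windows 10+ can lift the limit via the LongPathsEnabled registry value, but not every library respects it, so the drive-root move stays the reliable fix. A minimal sketch for checking your own install (the path is a placeholder):

```python
# Minimal check: stock Windows fails on paths longer than ~260 characters (MAX_PATH).
# Point this at your own install; the path below is a placeholder.
import os

root = r"C:\ComfyUI_windows_portable"  # placeholder: your ComfyUI folder
longest = max((os.path.join(dirpath, name)
               for dirpath, _, files in os.walk(root)
               for name in files), key=len, default=root)
print(len(longest), longest)  # anything approaching 260 is asking for trouble
```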
What is the download link for anidiff_controlnet_checkpoint?
looking for the same thing
Just simply mind-blowing. You are awesome Olivio. Will try this tonight.
After multiple tests... I'm getting better results with a depth map ControlNet, but overall a great workflow for slower types of movement.
Matteo's workflow from openart shows the uppermost advanced controlnet model as "ad/motion.ckpt" (the one you changed to "anidiff_controlnet_checkpoint.ckpt").
Unfortunately, I can't find it anywhere.
I can't find the ad/motion.ckpt either, so it's not working for me, sadly.
On OpenArt, the comments say it's one of these from Hugging Face: "crishhh/animatediff_controlnet"
Found it in the openart workflow description. It does point to the Hugging Face repo that @elowine mentioned.
@@olao6737 @elowine I did download this file, but where should I put it? In webui\models\ControlNet?
I've tried other workflows, this is the best, and it's free.
Can't find the anidiff checkpoint on the link you provided.
could you do a tutorial on lip sync?
yes, the biggest problem is stability. everything goes into the latent space, and we lose control
Wow this one really is a gem! Thanks man, keep up the amazing videos!
This channel clearly became the no. 1 source for ComfyUI tutorials. Awesome 👍
Exactly the reason why I barely watch anymore. 👎
@@joeterzio7175 I wasn't a fan of node based editors either but I gave Comfy a try because I was sick of A1111's memory issues, slow speed and incompatibilities. I do not regret it - performance is AWESOME! The easiest way to use Comfy is to save different workspaces (as here provided!), enter your own prompts and that's all. No work, no hassle.
yeah, I hope he will continue with this... other youtubers seem to go back to A1111 because more people use that one instead :/
Great content, man! What exactly is the ControlNet model at 8:10? The fourth Hugging Face link points to an OpenPose ControlNet. Is that it?
Yeah, wasn't able to find it on any of his links. Great work on the video as usual, Olivio!!
Was wondering as well:)
stuck on this one as well
It seems many are having the same issue. Myself included.
I believe it is the "v3_sd15_mm.ckpt" file again; it needs to go into the models/controlnet folder.
Seemingly you have left quite a lot of people frustrated. As it’s not obvious where to get the checkpoint from.
I’ve used the safetensors that’s also used near the bottom of the workflow
Hello, I must commend the remarkable steadiness and effectiveness of your process flow. Yet, figuring out how to set the length of the resulting videos escapes me. Would you be able to guide me through that?
Hi, why is the video just 2 seconds? How do I get the real duration from the source? For example, the source is 15 seconds, but the output is just 2 seconds.
Where can I download the Anidiff controlnet checkpoint?
Looks like in the video description
This is magnificent, off to try it! Thank you so much!
Great video as always. I was keen to try this, but I can't find ad/motion.ckpt, and I'm having issues downloading the YouTube short for some reason. I can try another video if I can find the motion.ckpt. Cheers.
Stability in the video? Sure, if we're not looking at the hands...
First of all, thanks for the video and the workflow. A little question: can this workflow be applied to CogVideo? I mean, using ControlNet Canny (for example; CogVideo has just 2 ControlNet models) to animate an image? Because the workflows I've found around are vid2vid or img2vid; none try to use ControlNet to tell CogVideo how the image should move.
Could you please tell me which adjustment can make the video longer?
Two things. The first one others are asking for as well: where do we get your renamed anidiff_controlnet_checkpoint.ckpt from? There doesn't appear to be a file link, even with your 'anidiff' part removed.
The second thing: I have the v3_sd15_mm.ckpt used in the AnimateDiff Loader node... but where do I store it so it detects it?
Got the second part sorted, just the first one now.
@@Elwaves2925 I’m not saying it’s 100% the correct answer, but I’ve used the FP16 safetensors like what’s used at the bottom of the workflow!
If I find the actual file that’s been renamed I’ll update. But try it with that. It worked for me.
@@PeteJohnson1471 Cheers, I'll give that a go. I tried with the control open pose file that's used elsewhere in the workflow (it loaded it, not me) but it messed up the second animation.
Next time I go on I'm going to try some other motion models as well.
@@Elwaves2925 did for me too, so I reduced the denoise strength in the 2nd KSampler to about 0.35, and things don't go too far out of whack ;-)
where do I place the ckpt file?? can u help
hey Olivio can you let us know where to place those files from the link you shared? thank you so much!
Great video! 2 questions. Where do I get 'ad/motion.ckpt' from? How long can you make your animation? Thanks.
Can you please make a video on how to set up all the nodes and such from scratch?
Been wanting to start my own anime; it's finally time thanks to you and this video ❤
Also, what if you didn't want to use a prompt to make a character?
I have my own designs already made. Is there a way to connect the dance video with a pre-made template of the anime character you want to use?
If you could help find out how to do this, a lot of people will thank you, including myself ❤❤❤
How to fix this issue????
Error occurred when executing VAEDecode:
RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[3, 4, 64, 64] to have 16 channels, but got 4 channels instead
what about Turbo + LCM models? do they help with frame render speed, or are they unusable here?
I tried the first animation method and followed all the steps, but I'm not sure why it doesn't really follow the ControlNet input I gave it.
hi, amazing tutorial, but a question: I can't find the node you use for batch size and image size, "EmptyLatentImage". Is there a substitute? I tried to search, but EmptyLatentImage doesn't have a batch input anymore.
any idea? thanks in advance
Amazing! But how do I control the video length?
Wow, this is the best! Also, it works great on my 32GB RAM / 8GB VRAM computer! It's so fast too :O I love it, thanks for the video.
YouTubers are so greedy they sell workflows on Patreon... but you give yours away for free... you deserve more followers
How is it looping smoothly? Is it just the video that does it, or what's going on? Mine all animate great, but they jump at the end.
Amazing tutorial once again! Thanks Olivio 🐔😘
If I want to use a LoRA I trained on a specific person, is it possible to use it here? If so, where do I put the LoRA loader (i.e. which nodes do I connect it to?)
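Not a definitive answer for this exact workflow, but the usual ComfyUI wiring is to insert a LoraLoader right after the checkpoint loader, so both the model and the CLIP pick up the LoRA before anything else runs. A sketch of the typical connections:

```
CheckpointLoaderSimple ── MODEL ──▶ LoraLoader ── MODEL ──▶ AnimateDiff Loader ──▶ KSampler
CheckpointLoaderSimple ── CLIP ───▶ LoraLoader ── CLIP ───▶ CLIP Text Encode (pos & neg)
```

Keep the LoRA's trigger word in the prompt; if the animation starts flickering, lowering the LoRA strength often helps.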
Prompt outputs failed validation
VHS_LoadVideoPath:
- Custom validation failed for node: video - Invalid file path: C:\Users\MonWeb\Downloads\videoplayback.webm
??????????????????????
this keeps happening to me, too
Yeah, after 4 hours I finally found almost all the files, because Olivio changed the names... so could you post the original links and the names of the destination folders? pls :)
seems it's only the renamed anidiff_controlnet_checkpoint.ckpt people can't find. myself included.
Great tutorial, thanks for this! Question though: Is there a way to feed it an image to be animated, like the source video? Say I want to animate a specific, original character singing. Can I provide an image of said character and a video of someone singing, and have Comfy replace that person with the character? Or does AnimateDiff work through prompts only at the moment?
Hi Olivio, thanks for the video.
If I want to change the frame count from 32 to, let's say, 48 or 64, should I change the "context overlap" to 3 or 4, etc.?
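Not the exact scheduler the node uses, but a rough Python sketch of the idea, assuming the AnimateDiff-Evolved style of a fixed context_length (commonly 16) with a small context_overlap: raising the total frame count just adds more sliding windows, so the overlap normally stays where it is.

```python
# Rough sketch: frames are denoised in overlapping windows of fixed length,
# so more frames means more windows, not a bigger overlap.
# context_length=16 and context_overlap=4 are common defaults (assumption).
def context_windows(num_frames, context_length=16, context_overlap=4):
    stride = context_length - context_overlap
    windows, start = [], 0
    while start < num_frames:
        windows.append(list(range(start, min(start + context_length, num_frames))))
        if start + context_length >= num_frames:
            break
        start += stride
    return windows

for total in (32, 48, 64):
    print(total, "frames ->", len(context_windows(total)), "windows")
# prints: 32 -> 3 windows, 48 -> 4 windows, 64 -> 5 windows
```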
Hi, so one question: I'm unable to stack multiple ControlNets using these fp16 models. Any reason why?
In theory, wouldn't you be able to do this with Canny? If I can figure it out, I might finally get over the roadblock I've been at with AnimateDiff.
Has anyone had any luck with this for realistic outputs, rather than anime?
Hmm, I'm trying to understand. I have a lora that I'd like to add to this process, but it doesn't seem to get picked up properly. Where is the best place to add it?
It's sort of a person lora, making the same person every time.
Can you do a video on RAVE?
Looks really great. What is the max available length of generated animation?
He used another method of doing this but with the ballerina. Is this the new way to go about it?
I have a hard time replicating people's workflows on YouTube; it feels like they never go through the package installation or model checkpoint placement.
Upgrading from Python 3.9 to 3.10.11 fixed most things. Well, packages were still messed up, but the "Try Fix" button in Manager fixed that.
I apologize for the basic question, but I've only recently started using Comfy. If I haven't downloaded the checkpoints and LoRa required for this workflow, why am I still able to use them? I mean, I see them within the nodes, but I've never downloaded them.
I seem to be having a problem when it reaches the first ksampler ('NoneType' object has no attribute 'shape') and haven't found a way to fix it, any ideas would be appreciated, thanks.
Very promising! Thanks for the video
i have a quick question: can we export the pose data? i am currently working on an idea to animate 3d models..
Can I please ask you for a similar tutorial for Automatic 1111 ?
Not powerful enough to do stuff like this
Guys, why do I always get the same face? I change the prompt (hair, eyes, age, etc.) and I always get the same face, just with other details like hair changed... I tried DreamShaper XL, DreamShaper 8, RealisticVision and Juggernaut XL.
good job and thanks for the update
Is the other ControlNet model depth?
Is there a site with motion models to use instead of trying to grab videos from dancers on youtube? I can't really download from youtube (I still haven't figured that out). A library of animations (like pose libraries on CivitAI) would be a plus.
download 4K Video Downloader+; it's free for 30 videos a day.
yt-dlp or youtube-dl (both open source and free)
Come to think of it, yt-dlp IS AVAILABLE in Visions of Chaos. Which is awesome.
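For anyone still stuck on the download step: yt-dlp also ships as a Python package, so a minimal script looks like this (the URL is a placeholder, not the short from this video):

```python
# Minimal yt-dlp usage: grab a video as mp4 for the Load Video node.
# pip install yt-dlp; the URL below is a placeholder.
from yt_dlp import YoutubeDL

options = {
    "format": "mp4",                 # prefer a plain mp4 file
    "outtmpl": "%(title)s.%(ext)s",  # save as <title>.mp4 in the current folder
}
with YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/shorts/XXXXXXXXXXX"])
```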
Those fingers look insane... 😅😮
as I was scrolling down this video I realized that everyone is waiting for you to provide the ad/motion.ckpt thingy. If you don't comply we will start to burn cars in the streets... lol
it's just openpose renamed.
really??? @@aivideos322
@@aivideos322 yep, that is the conclusion I came to as well. And it's all working. Likewise, I can't actually find said ckpt, but the safetensors that's also used near the bottom of the workflow does the job.
@@aivideos322 so he used it twice? That's it? I dunno man...
It's not openpose.
I've been searching for the DWPose Estimator node and haven't found it yet; where can I get that one?
funny, I found it by right-clicking and Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor. But I can't pull it up in a search of my nodes.
Why not use the load video upload node instead of the load video path?
He didn't do too much testing with it, as he wanted to put the video out ASAP. Though this does work as well, and you can convert the frame limit to an input to keep the Number of Frames node in place.
@@JoshTheFlyGuy👍
the Timestep Keyframe node isn't loading (Advanced-ControlNet is broken on my system). Is there any other node I can replace it with??
did you click on "update all" in comfyui manager?
hallo @@OlivioSarikas, anidiff_controlnet?
Would 16gb vram be capable of doing this? I’ve got a 4080
yes
@@agamenonmacondo tight, guess I got some learning to do
I have a 4060 Ti 16GB and used Matteo's previous ballerina template plus an added face detailer, and a 1.5-second video takes about 7 minutes to create... so it is possible...
Thank you always!!
Cool. I don't think this will replace anyone; you can just work much faster and do more work. I see how this will be used in VFX with a layer mask on which you can generate, for example, a simple render of a lava flow and add the details with SD. Most people who make AI "art" videos are stuck with the lack of ability to control the narrative. But if you know 3D, compositing and animating, you can make low-poly basic animations and sims and later refine them with SD. I just wish ComfyUI had some sort of if/else statement or case 1, 2, 3... which could be triggered through a control panel with buttons; that way you don't have to modify everything the whole time and can just create one big workflow with multiple setups.
Not compatible with SDXL ?
Great tutorial as always.
Just one question: Where do you put the 'v3_sd15_mm' checkpoint? Can't seem to find the right folder for it. Can't select it in AnimateDiff Loader (only undefined).
That goes into custom_nodes\ComfyUI-AnimateDiff-Evolved\models - sorry, I should have pointed that out in the video
@@OlivioSarikas thanks, man. You rock!
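Since folder questions keep coming up in this thread, here is the layout the replies point to, collected in one place. This is a sketch based on these comments and standard ComfyUI conventions; exact paths may differ on your install.

```
ComfyUI_windows_portable\ComfyUI\
├─ models\checkpoints\    <- SD 1.5 checkpoints (DreamShaper etc.)
├─ models\loras\          <- v3_sd15_adapter.ckpt and your other LoRAs
├─ models\controlnet\     <- control_v11p_sd15_openpose_fp16.safetensors and the
│                            animatediff_controlnet checkpoint mentioned above
└─ custom_nodes\ComfyUI-AnimateDiff-Evolved\models\
                          <- v3_sd15_mm.ckpt (the motion module, per Olivio's reply)
```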
RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[3, 4, 64, 64] to have 16 channels, but got 4 channels instead (Please help, anyone????)
I am no expert, but I have gotten errors like that when I used a FLUX checkpoint or the wrong VAE.
first mistake: it's not Automatic1111; second mistake: it's not SDXL
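That reply lines up with the shapes in the error: a weight of [512, 16, 3, 3] is a decoder convolution built for 16-channel latents (an SD3/Flux-style VAE), while an SD 1.5 pipeline produces 4-channel latents. A minimal PyTorch repro of the same message, just to show where it comes from:

```python
import torch

# A conv layer shaped like a 16-channel VAE decoder's first layer:
conv = torch.nn.Conv2d(in_channels=16, out_channels=512, kernel_size=3, padding=1)

# SD 1.5 latents have 4 channels, so feeding them in reproduces the error:
latents = torch.randn(3, 4, 64, 64)
conv(latents)  # RuntimeError: ... expected input[3, 4, 64, 64] to have 16 channels, but got 4
```

So the usual fix is making sure the checkpoint and the VAE both belong to the SD 1.5 family this workflow targets.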
Anyone run into the same error? How Can I fix it?
Error occurred when executing DWPreprocessor:
[WinError 3] The system cannot find the path specified: 'C:\\Users\\admin\\Downloads\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/DWPose-TorchScript-BatchSize5\\cache\\models--hr16--DWPose-TorchScript-BatchSize5\\snapshots\\359d662a9b33b73f6d0f21732baf8845f17bb4be'
Apparently, it's all about how ComfyUI was installed. I had the same problem, and updating ComfyUI via Manager did not help. Then I just reinstalled ComfyUI using Pinokio, the automatic installer for AI apps, and everything worked :)
Niiice Olivio !
meanwhile Lumiere is hot-dropping; this tech is moving so fast
YAS!!!!!
Holly Molly
The hands though...
Nothing stops you from passing each frame through meshgraphormer.
love the results but the complicated process is a big turnoff
This is a simple, efficient workflow. I'm actually surprised by how simple and powerful it is.
You have to build it once, then you just edit prompts and click start.
@@purelife_ai ppl look for a one-button fix and zero learning curve... Only then can they say "look how skilled I am" 😂
Alternatively you could learn animation =)
You're so out of touch with how much work and knowledge usually goes into art lol.
YOU scuf
Can you PLEASE stop giving this ridiculous SpaghettiUI so much attention. It's a complete waste of time and it offers NOTHING useful over A1111 or any other variant, for that matter.
I guess you don't see the potential of ComfyUI... and all the things you cannot do in Automatic1111: you can put LoRAs before or after the prompt, or the image; you can start with one model and finish with another; you can apply LoRAs or ControlNets to specific parts, etc. I was "against" ComfyUI at the start, as I was used to Automatic1111, but I have to admit there are things you can do better in ComfyUI... I still prefer to inpaint in Automatic1111 though...
For animation ComfyUI is a must, especially if you want to do things like steerable motion, SVD or lengthy videos. Sometimes you want to do the video rendering step by step to save on VRAM. For not-so-complex images and inpainting, A1111 is great though.
@@cedtala Literally nothing you just mentioned is useful or necessary, just like ComfyUI.
Somebody had to say it. Been working with ComfyUI for about a year now (off and on). It is literally the most headache-inducing user interface I have ever had the displeasure to experience.
AttributeError: module 'comfy.ops' has no attribute 'Linear'
Getting the same error with AnimateDiff. Any idea what it's all about?