Join The Tyrant Empire Discord Community: discord.gg/duAudH7cAB
Hi, I've been playing around with this method, and I found 512x512 at 8 fps is faster. Next I use 2 ControlNets: OpenPose DW with control strength 1, and SoftEdge or Canny with strength 0.25 and "prompt is more important"... and the results are amazing.
Also, realistic models don't work very well with AnimateDiff, so I use animerge, which is decent... Then I pick up the frames and run img2img at 0.4 denoise with a realistic model like consistenfactoreuclid or epicrealism... Now all the animerge frames generated by AnimateDiff are converted to realistic images... then make a GIF or mp4 :) Nice end results :)
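The frame-conversion step described above can be sketched against A1111's img2img HTTP API (this assumes the web UI was launched with `--api`; the endpoint and field names are the standard A1111 ones, while the prompt, directories, and model choice are placeholders, not from the comment):

```python
# Sketch: batch-convert AnimateDiff frames to a realistic style via the
# A1111 web UI img2img API. Requires the UI running with --api enabled.
import base64
import json
from pathlib import Path
from urllib import request

API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # default local address


def build_payload(frame_png: bytes, prompt: str) -> dict:
    """Assemble an img2img request using the 0.4 denoise the comment suggests."""
    return {
        "init_images": [base64.b64encode(frame_png).decode("ascii")],
        "prompt": prompt,
        "denoising_strength": 0.4,  # low enough to keep pose and composition
        "width": 512,
        "height": 512,
    }


def convert_frame(frame_png: bytes, prompt: str) -> bytes:
    """POST one frame to img2img and return the converted PNG bytes."""
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(frame_png, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        out = json.load(resp)
    # A1111 returns base64-encoded images in the "images" list
    return base64.b64decode(out["images"][0])


def convert_all(frame_dir: str, out_dir: str, prompt: str) -> None:
    """Run every PNG in frame_dir through img2img, writing to out_dir."""
    Path(out_dir).mkdir(exist_ok=True)
    for frame in sorted(Path(frame_dir).glob("*.png")):
        (Path(out_dir) / frame.name).write_bytes(
            convert_frame(frame.read_bytes(), prompt)
        )
```

After the loop finishes, the converted frames can be reassembled into a GIF or mp4 as usual.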
Wow, thanks for sharing! I'll try that out.
wow thank you, could you make a video explain it please ?
Bro caught us off guard about us not looking at the face 😂😂😂
Your video tutorials are always top tier
Great content, many thanks, hope the OP got the link to this cool reworking.
I love your channel a lot, Just wanted to say that :p
I really appreciate you & your kind words. Thank you!
exactly what I was looking for 👌
Nice video. I will implement it in my workflow
Thanks subbed
I find that OpenPose doesn't work as well as Scribble or Canny for nailing poses. Sometimes OpenPose doesn't interpret foreshortening, perspective, or even whether we're looking at the front or back of the character. I think it sucks, but maybe I'm using it wrong?
Is everything up to date? When is the last time you updated A1111?
Awesome tutorial!
Thank you!
trying to get this running asap
For some reason 'make mp4' isn't working for me. It says it can't find ffmpeg, even though it's installed. I've been looking everywhere and can't figure it out. GIF making works just fine.
Bruv, which state are you in? I'm pretty sure we're close in proximity. Let me know, I'll buy you a coffee. Would be cool to collab and talk vfx/cg among other things.
Send me a message in the Discord.
What gpu u use bro?
12GB 3060
Do you use a setting like xformers, or anything else that makes rendering the animation faster? It takes hours for me on my RTX 3080 with the same settings :/
My command-line args are: --autolaunch --medvram --no-half-vae --xformers
You use medvram with 12GB of VRAM? @@TheAITyrant
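For anyone wondering where those flags go: in a stock A1111 install they live in the webui-user launch script, not on the command line each time. A minimal example (the flags are the ones from the reply above; the file names are A1111's standard launch scripts):

```shell
REM webui-user.bat (Windows) -- flags are picked up at launch
set COMMANDLINE_ARGS=--autolaunch --medvram --no-half-vae --xformers

REM On Linux/macOS, the equivalent line in webui-user.sh is:
REM export COMMANDLINE_ARGS="--autolaunch --medvram --no-half-vae --xformers"
```

Note that --medvram trades speed for lower VRAM use, so on a 12GB card it's a judgment call; --xformers is the flag that typically speeds up generation.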
Way to not fuckin link anyone (PS: you can cut out the part where you try to kill the fly to save time in the video, just a pro tip for a newb)
Nah that part was pretty funny actually
@@lvlSpooksterlvl yeah, wasting time on something that already takes forever is so funny, I'm dying.
Nothing is free. They all want money to create cartoon videos. A waste of time as well as a waste of money.
This is free software, my friend.
I wasn't able to get this to work. In your workflow, you added an input video to AnimateDiff and no input image/video to the ControlNet. In my case, I see an error printed in my notebook and the generation stops after creating a single image. The error I got: "ValueError: controlnet is enabled but no input image is given". Any suggestions? Not sure if I missed something; after watching your video multiple times, I don't think I missed any steps.