AnimateDiff and Automatic1111 for Beginners
- Published 2 Nov 2023
- AnimateDiff lets you make beautiful GIF animations! Discover how to utilize this effective tool for stable diffusion to let your imagination run wild.
AnimateDiff Beginners workflow pt 1: • Animatediff (Automatic...
AnimateDiff Beginners workflow pt 2: • Adetailer and AnimateD...
-------------------------------------------------------------------------------------------------------------
Prompt credit: civitai.com/images/2201967
Prompt animation credit: civitai.com/images/3044633
Toonyou: civitai.com/models/30240/toonyou
ToonBabes: civitai.com/models/122022?mod...
Motion Model: huggingface.co/guoyww/animate...
Motion Model2: huggingface.co/manshoety/AD_S...
AnimateDiff GitHub: github.com/guoyww/animatediff/
-----------------------------------------------------------------------------------------------------------------
Deforum Project settings: goshnii.gumroad.com/
Deforum Tutorials: • Deforum Stable Diffusi...
#animatediff #aianimation #gifs #stablediffusion #text2video - Howto & Style
Thanks for this :) It was easy to follow and straight to the point. You gained a sub and a like... Great job!
Thank you for your positive feedback! I am grateful for your support. 💛
Very helpful! Direct to the point and complete!
I am glad it was helpful, thank you for your feedback.
Hey, thanks for the tutorial! Very clear and concise, appreciated!
You are most welcome, and I appreciate your feedback.
Nice and easy to understand thanks :D
I sincerely appreciate your feedback.
thanks dude. this was very useful
You are most welcome, and I appreciate your feedback.
Nice tutorial!
Thank you! i appreciate it
Thanks I'll try 😸
Awesome! Please feel free to experiment! 😸
very nice thanks 😇
Thank you kindly! ❤
nice tutorial
i appreciate it
Thank you for your great work, but I have a problem that I can't find a solution for.
I don't know what I'm doing wrong, but for me it always creates 2 different scenes within one gif. The second half of the frames is fundamentally different from the first in terms of pose, scene and so on. At 8 frames and 8 fps it works, but as soon as I use more frames (16/32/64 etc to increase the length) it splits the clip (but still saves as one). What could be the reason?
I appreciate the feedback. I've read a lot of comments regarding this challenge. I would be very grateful if you could send me a link or screenshot showing your frames and settings. Also, are you using the 512x768 frame size?
Hi! Is it possible to create a similar animation of this video but using a jpg image file? I mean using an image file instead of using an image generated from the prompt. Thank you!
I appreciate the suggestion. That will be on my list of upcoming tutorials.
Thank you very much, great work, keep doing it💫 your channel has only a few videos yet i learned so much. I hope you would post on this topic (animation videos with SD ) further. Thumbs up !👍👍
It also would be so nice if you show how we can edit the frames on SD, for example in the first Gif the head's movement was flawless, but the shoulders had a distortion/deformation how we can fix that on those many frames? As an 2D animator i like to know how we can make narrative animation videos that have story line (consistency of images should be preserved as much as possible)
The other topic I'd like to learn about deeply is how I can create my own style with my own illustrations (2D painting style, but NOT anime). I read that we need to train our installed Automatic1111 with some data, but I don't know how.
Thanks again!😊🙏🙏
I sincerely appreciate your kind words and encouragement! 🙏 It's wonderful to hear you've already gained a lot from the videos. In the future, I will definitely consider creating more content about animation videos with Stable Diffusion (SD) 🌟. Your suggestion about addressing frame distortions and deformations is brilliant 👍, and the topic of creating narrative animation videos with a consistent style is fascinating. I will do my best to create content that addresses your concerns and interests. I hope that learning AI techniques will enhance your knowledge of animation. ♥
I don't know if my setup is wrong, or if I missed a step in the options. I get an image produced, but I don't see an animation, multiple frames, or a gif anywhere. It doesn't seem like AnimateDiff made any difference. Also, I saved the ckpt to the extension model folder and selected it in the UI, but the console warns about "WARNING - No motion module detected..."
Hello, if no motion module is found, it means you haven't downloaded a motion model for AnimateDiff to use.
Kindly find the motion module links in the description; 2:30 explains how and where to place them.
Also, your checkpoint is saved in your Stable Diffusion directory, and the motion modules are saved in the extension folder, as shown.
It appears you have an issue in the setup or configuration.
I hope this is helpful.
wow
means a lot, thank you
Short and nice tutorial, instantly subbed!!
How come you get the image generated so sharp & clear without even highres? With any sampler, steps (even going all the way up to 100), CFG, I get crappy resolution with my 4060 in ToonYou
Thank you for subscribing and for your excellent question. I had a similar problem with this, and I'll be sharing my work process soon to explain why. My experiments revealed that ToonYou does not perform well at high aspect ratios or in landscape mode, particularly for animations. Another tip is to use a good prompt with details instead of an upscaler, which takes time. Further content is in the works, and I hope it will make this clear to everyone.
@@goshniiAI Thank you so much. Looking forward for more excellent tutorials and watch the channel grow.
I appreciate your support; it means a lot. @@hyperdeloutz2444
Thank you for this, you gain a big like and subscription from Vietnam. One question please, should we fix the seed ? what would happens
Thank you very much for your support from Vietnam! I am glad you found the video helpful. Trying different seeds lets you discover different results, while fixing the seed keeps the composition reproducible between runs. Whether or not to fix the seed depends on your taste and goals. Feel free to play around and see what works best for your projects!
I hope this was helpful
@@goshniiAI Thank you bro, have a nice day!!!
Is it necessary to use model ToonYou, or can you use any other?
No, it isn't necessary, and you can experiment with other models. To preview the models that work best with AnimateDiff, please visit this link to the animation gallery: animatediff.github.io/
thx!@@goshniiAI
🙏 @@SwarowskyTech
how much GPU is needed min for this? I got erros in console but I think is because I need GPU memory
The GPU requirements for AI animation vary depending on the project's complexity and image or video resolution. In general, a GPU with at least 8GB of VRAM is recommended for better performance and to avoid memory errors.
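To get a feel for where the memory goes, here is a rough, hypothetical back-of-envelope sketch of the latent video tensor AnimateDiff works on (assuming Stable Diffusion's 8x VAE downsampling, 4 latent channels, and fp16). The latents themselves are tiny; most of the recommended 8GB goes to the UNet weights and the attention activations, which grow with frame count and resolution.

```python
# Rough, hypothetical estimate of latent-tensor memory for an AnimateDiff batch.
# Assumes Stable Diffusion's 8x VAE downsampling, 4 latent channels, and fp16
# (2 bytes per value). Model weights and attention buffers are NOT included,
# so real VRAM usage is several gigabytes higher.

def latent_megabytes(width, height, frames, channels=4, bytes_per_value=2):
    """Memory for one latent video tensor of shape (frames, channels, h/8, w/8)."""
    h, w = height // 8, width // 8
    return frames * channels * h * w * bytes_per_value / 1024**2

print(latent_megabytes(512, 768, 16))  # 16 frames at 512x768 -> 0.75 (MB)
```

The takeaway: doubling the frame count only doubles this small tensor, but the attention layers operating across frames are what actually exhaust VRAM.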
Can we import an image and use animate diff on it?
Absolutely! There is a guide here for a better understanding. th-cam.com/video/H7iecFlk8aI/w-d-xo.htmlsi=Tkoqu0-W3WD6s5la
for me all good but when i press generate nothing happens... is behaving like i didn't check the box. but i did
That's a tricky one! Try updating SD as well as the animateDiff extension. Also, if you have checked the "Enable Animatediff" box, be patient once you hit generate.
Hi, it's still not working for me. Yet I followed your tutorial to the letter. But when I go to create the animation, it just creates an image and that's all. I don't know what to do anymore. I need help
Do not give up. Please ensure that the "ENABLE ANIMATEDIFF" box is checked before generating. tinyurl.com/3vzanrzw
@@goshniiAI Hi, I always activate the Enable checkbox. But the strange thing is that when I start generating, it's very fast because it only creates 1 photo. While as I said in your video: I enable R-P and then OFF. But when I generate, the ticks disappear. And I don't know why photos.google.com/photo/AF1QipPGQ8a2zU2QdIi333RSSjqd-oVpKEpMxZeiuhC8
@@goshniiAI same problem here and enable animatediff is checked :(
@@Miztor I'm sorry to hear you're experiencing a similar difficulty! Please look through the comments, a few suggestions were made earlier to help.
can we use our image instead of generated ones ?
Hi there, I'm not sure that will be possible, because this is a text-to-image process. However, please clarify the question so that I understand it well.
Hi,goshnil,I'm encountering an error while trying to generate something in Stable Diffusion. I'm receiving the following message:
Error: AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'
I'm not sure how to fix this issue. Could you please help me?
I'd recommend verifying that you have the most recent version of PyTorch installed and that all dependencies are up to date. Also, confirm that your code uses the right parameters. I hope this helps you resolve the issue.
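One quick way to confirm the PyTorch version is the culprit: `scaled_dot_product_attention` was added in PyTorch 2.0, so older installs raise exactly this AttributeError. Below is a small, generic attribute probe; it is demonstrated on a standard-library module because PyTorch may not be installed where you test it, with the actual check shown as a comment.

```python
# Probe whether a module can be imported and exposes a given attribute.
# scaled_dot_product_attention was added in PyTorch 2.0, so on older installs
# torch.nn.functional lacks it and code calling it raises AttributeError.
import importlib

def has_attr(module_name, attr):
    """Return True if `module_name` imports cleanly and exposes `attr`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

# Demonstrated on the standard library (torch may not be installed here):
print(has_attr("math", "sqrt"))          # True
print(has_attr("math", "no_such_func"))  # False
# For the error above, you would run:
# has_attr("torch.nn.functional", "scaled_dot_product_attention")
```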
Hi, awesome video thank you so much!
I have just followed your instructions but I do not get a final GIF!
Instead I get stills (in my "outputs folder") which are all different and have no connection to the image I initially wanted.
At 5:26 in your video tutorial I see your GIF animated in the window; for me it is just stills, no animated GIF...
What am I doing wrong?
Please ensure that you have FFMPEG installed, as this will combine your batch frames into a gif/video format. Second, not all models will produce an animation as the final result. Use any of the reference gallery Models listed here. github.com/talesofai/AnimateDiff
There are also a few suggestions in the comments that may be useful. I hope you find these pointers fruitful.
@@goshniiAI good morning, thank you! How do I check if it is installed?
@@suzanazzz
1. Open your command prompt and type: ffmpeg -version (Reference: tinyurl.com/jex7pd76 ), then press Enter.
2. If it is installed, the version should be visible in your command prompt. (Reference: tinyurl.com/38nxe9hs )
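The two steps above can be combined into a single shell check. This is only a sketch; install locations and methods vary by OS, and it only tests whether ffmpeg is on your PATH.

```shell
# Print ffmpeg's version line if it is on the PATH, otherwise a hint.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -version | head -n 1
else
  echo "ffmpeg not found - install it and make sure it is on your PATH"
fi
```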
@@goshniiAI Hi, thank you so much for the reply!
Are the instructions you sent for running SD locally?
I am running the Fast Auto1111 UI with Google Colab --> so everything is on my Google Drive, NOT LOCAL.
Thanks again.
@@suzanazzz Alright, I understand, and you're very welcome.
Unfortunately, I am not very familiar with Colab; however, I'm sure you can find some online resources that will help you check whether ffmpeg is installed on Colab.
FFMPEG was only one suggestion for your challenge; other factors could be contributing as well. I think a little more research could help resolve the issue on Colab.
Hey, thx for the video, but i got 1 "little" problem.
When I use your settings, exactly the same, same model... and start to render, it splits the animation in 2: the first xx frames are one animation, then it switches to another animation (so 2 different animations, each 50% of the whole length).
I appreciate your feedback; it appears that your rendering is unexpectedly dividing the animations in half. Please double-check the range of frames you've specified for rendering, as well as the prompt and animatediff settings. Also try updating SD or the animateDiff extension as well.
Unfortunately, AnimateDiff does not allow us to save our project settings, as Deforum does. I would have loved to share my settings.
Thanks a lot for the video! But I got the same problem. I did exactly the same steps as the tutorial, but SD split the animation in 2 as well. If anyone has the solution, please let me know! @@goshniiAI
@@goshniiAI ah thx, yeah, looks like the frame count was wrong, so it split the video, oops :)
@@dudesicko No worries at all, and thank you for the update. :) Mistakes are just opportunities for us to learn and improve. ❤
@@JianiAi 1) Settings 2) Optimizations 3) check the box for "Pad prompt" 4) Apply settings 5) Reload UI 6) enjoy
I am at the 3 minute mark but the extension folder is not showing up. I have the extension installed, but that folder is not showing. What am I doing wrong?
I'm sorry to hear that; there must be something you overlooked.
1. Before installing, please confirm that the extension in A1111 is named correctly. tinyurl.com/yc3v8b25
2. Use (Apply and restart) after installation.
3. You can also take a look at this different installation guide. tinyurl.com/5p3zfphw
I hope one of these solves the problem.
The permission link is granted here: drive.google.com/file/d/1a_9WnGqIbkthVlh0yzHQS8a9B4IH5cHN/view?usp=sharing
I get a lot of separate pictures in place of the GIF! What should I do!?
If you have a lot of images, consider using a different checkpoint model for generation. Also, make sure you have FFMPEG installed to help with the animation preview in the webUI.
Can you sync this video to music?
I'm not sure about Animatediff yet, but you can look into using Parseq, which works well with Deforum.
Man, when I try to generate the video it will give me this error: "EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([1, 4096, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 1 in chunks of 16" Do you know what could be wrong?
Hello there, sorry about that. There had been a few previous reports about similar issues. Please review the comments section for a few suggestions made from multiple experts; they may be useful.
I got this error and updated torch, now I’m getting another one lol
@@jorgeferraraoficial Sorry to hear that. Don't give up.
I am getting this error when i enable animatediff and hit generate: EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([1, 4096, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 1 in chunks of 16
Restart A1111, reduce the size of your image, and disable any upscaling you may be using. This is what I generally do when I receive a tensor error.
@@goshniiAI I did, even on 512x512 i am getting the error. I have a 4090.
@@zaselimgamingvideos6881 sorry about that. I'm not sure why it would happen with a 4090. I use a 3060. Please check the other settings and parameters, as well as perform some updates.
There are also a few helpful suggestions in the comments section that could be useful to read over.
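For anyone hitting this thread later: the EinopsError is essentially a reshape failure. AnimateDiff expects the batch axis to hold batch × frames entries so it can regroup them per frame; when only a single image's activations arrive (for example, if the extension never hooked in), an axis of length 1 cannot be split into chunks of 16. The toy sketch below is not AnimateDiff's actual code, just an illustration of the arithmetic behind the message.

```python
# Toy illustration (NOT AnimateDiff's real code) of the
# "(b f) d c -> (b d) f c" failure: the first tensor axis must be divisible
# by the frame count f=16 so it can be regrouped per frame; a single image
# gives an axis of length 1, which is not divisible by 16.

def regroup_batch(axis_length, frames):
    if axis_length % frames != 0:
        raise ValueError(
            f"can't divide axis of length {axis_length} in chunks of {frames}"
        )
    return axis_length // frames  # the recovered batch size b

print(regroup_batch(16, 16))  # 16 frame-activations -> batch size 1
try:
    regroup_batch(1, 16)      # one image only: the reported error case
except ValueError as e:
    print(e)  # can't divide axis of length 1 in chunks of 16
```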
I did everything the same yet my animations always come out doodoo.
Maybe its a model limitation, it was trained on some very specific stuff
It's possible that the model's training might be influencing the results. Experimenting with different models or tweaking parameters could help
I'm not really sure what part of this used AnimateDiff; it looked like you were only using Stable Diffusion? Sorry, I am very new to this, so I'm not understanding a lot of things.
Can I use AnimateDiff without Stable Diffusion? I just want to animate my art, not create AI GIFs.
It's totally normal to feel a bit confused when starting out with these tools.
Stable diffusion is the base model that generates the images.
AnimateDiff is an extension for Stable Diffusion that focuses on creating animations.
To animate your art, you'll need both tools working together.
If you want to animate existing art without creating new images, you can still use AnimateDiff. Simply load your artwork into the workflow and let AnimateDiff handle the animation aspect. You could do research to find an image-to-video tutorial.
@@goshniiAI Bro thank you for explaining to me it’s really helpful. 😎 you’re the man
@@ryuk5673 I appreciate reading your awesome feedback. 😊 Happy animating!
I keep running into NoneType object is not iterable. I followed everything you did and i can't generate anything without this message popping up, was doing fine before checking animatediff
Hello there, Make sure both Automatic1111 and AnimateDiff are updated to the current versions. Sometimes, slight changes may fix the problem.
Hello sir, I have followed all your steps but I only got 1 image. Can you please fill me in on how to solve this problem?
Hello there, there could be a few different causes for your experience. Since we have had similar issues and worries in the comment section, I will kindly suggest going through the comments for different answers from a few experts on what might address the situation.
Also, consider using the right checkpoint,
update the version of AnimateDiff,
and install FFMPEG.
These are a few things to keep an eye out for, and I hope this helps.
Good day. Thank you very much for the video. You didn't show where to put the Motion Model2 files: mm-Stabilized_high.pth
I apologise. The "mm-Stabilized_high.pth" file should be placed in the same directory as the previous models (mm_sd_v14 or mm_sd_v15). tinyurl.com/6br6pnje
Thank you very much:)
@@kotofeykotofeevich6575 You're very welcome.
Do you know why out of memory errors happen with Animatediff after 50%? I have 24GB of memory and that seems like its enough.
Even though I have 32GB, I had a similar experience. However, I soon realized that this was because the complex animation or high resolution I was using required a large amount of memory. There are several reasons why out-of-memory errors can occur. First, try adjusting your settings; second, check whether your system is running any other resource-intensive processes that could affect the amount of memory available.
@@goshniiAI I don't think its a memory issue, if most people are making these animations without a 4090. There has to be some kind of bug.
You must be right; I understand, and I believe it is well worth researching whether a bug is causing the issue you are experiencing. Hopefully, there are some solutions available to assist you. @@marcus_ohreallyus
Subbed! Can you please do a tutorial for ComfyUI, from beginner to expert?
I appreciate your support. I'll surely give it some thought for a later video. I'll also advise looking up these topics yourself; there are probably a few good tutorials available for ComfyUI.
Anyone else getting a "Error while processing rearrange-reduction pattern "(b f) c h w -> b c f h w"." error? My webui is v1.9.3
Please make sure that the AnimateDiff extension is updated. This can also occur if the frame size exceeds your VRAM; you may need to use a lower resolution at first and then upscale.
brother how did you get that theme
Hey there, "brother"! Could you please tell me which specific theme aspects you're referring to so that I can clarify?
the blue and black look you have @@goshniiAI
Nice one, I have 6 gb vram, can I still use this?
Thank you, and of course! Stable diffusion can still be used, but there may be some limitations when working with complex projects. Just be careful to research and adjust your project settings appropriately to get the most out of your resources.
@@goshniiAI what yours?
I am using a GeForce RTX 3060 with 32GB of RAM.@@frustasistumbleguys4900
How can I render faster?
You can try the following tips:
1. Lowering the resolution of your output can speed up the process.
2. Use a better GPU if you can; the better the GPU, the faster the rendering.
3. Adjust settings for fewer steps, which can help reduce render times.
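These tips follow from a simple scaling model: render time grows roughly with steps × frames × pixels. The sketch below uses that simplified, hypothetical proportionality (in reality attention cost grows faster than linearly with resolution) to show the relative speedup each tip buys.

```python
# Toy model of relative render time: time ~ steps * frames * pixels.
# This is a simplification (attention cost grows faster than linearly with
# resolution), but it shows roughly how much each tip helps.

def relative_cost(width, height, frames, steps):
    return steps * frames * width * height

base    = relative_cost(768, 768, 16, 30)
low_res = relative_cost(512, 512, 16, 30)  # tip 1: lower resolution
fewer   = relative_cost(768, 768, 16, 20)  # tip 3: fewer steps

print(round(base / low_res, 2))  # 2.25 -> ~2.25x faster at 512x512
print(round(base / fewer, 2))    # 1.5  -> 1.5x faster at 20 steps
```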
i always got this error when using animatediff video_length = mm_animatediff.ad_params.batch_size
AttributeError: 'NoneType' object has no attribute 'batch_size'
anyone?
me too
@@jorgeferraraoficial Lets see if someone helps us
@@seeergio14 I've already installed Stable Diffusion from scratch, AnimateDiff, and ControlNet, and had the same issue.
Hello Guys,
A few suggestions have already been made in the comments by various experts; please read through them as they may be useful.
Because everyone's setup is different, even the smallest issue could be the primary cause.
bro my img2vid isn't animated
Because this is a text-to-animation guide, I am not sure I fully understand your difficulty.
However, make sure you've set up your process with the necessary animation settings.
I feel for your storage 🥵
Haha, thanks for the empathy! 😅
my 2060 cried for 1 hour on 32 frames lol
You can save render time by using a lower resolution and then upscale the final result.
@@goshniiAI tks
Keeps saying " IndexError: index 21 is out of bounds for dimension 0 with size 16"
Hello there, check the parameters you're feeding into the model. Verify that the dimensions and settings match the intended inputs as well.
might be good and useful for animated effects (water, fire, energy, mist).. a morphing waifu looks cool but is not helpful for anything substantial
I appreciate your viewpoint! The animation possibilities are exciting.
Try combining it with a controlnet extension and some base video footage. It can be VERY usable when applied correctly.
Thank you for the advice. I'm looking forward to experimenting with the concept. @@exitspree
Considering your interesting work on the deforum and your wishes in the experiments at the end of the video, it's strange that you don't follow them yourself. I am the author of the very work from which you took all the customizations for this video. If you had changed even a little, I wouldn't have any questions. But if you're copying one-to-one, at least make a link to my work. I don't want anything, but respect wouldn't hurt anyone. Good luck.
I appreciate your viewpoint and feedback. I apologise if it appeared that way, and I understand your concerns. My goal in sharing videos is to pass on knowledge and assist others in exploring and learning from a variety of sources, including community contributions. I genuinely admire your work, and although I forgot to credit you in the video, I have included it in the description. I have done that in my past content, but I guess it passed me by. My intention is never to ignore the efforts of others in the community.
Thank you for bringing this up, it has been a learning experience for me in terms of how I will create materials for the wider audience.
@@goshniiAI Why did you paste a link to a picture instead of an animation?
@@Titto13_AI The image link gave me the prompt that I used. However, I've also included a link to an animation. Thank you very much once more. This video was not intended to take any credit; I'm just sharing my journey and the skills I'm learning.
I'm sending virtual tissues your way. 🌈 @@visionevo
you are almost out of storage 00:55
I'll make sure to clear up some space. Lol
doesn't work, creates a lot of pictures
Please review the comments for similar problems and possible solutions.
that was the first thing I did)@@goshniiAI
@@goshniiAI I use SDXL and I have the same problem (it creates a lot of pictures), and among the answers I couldn't find one that solves this problem. Could you help us?
I had the same problem when trying to use an SDXL model. Have you tried any of the recommended models for generating frames with Animate diff? The list can be found here. tinyurl.com/3ncuzb52 @@marcoevangelisti2360
Also, double-check your settings here, as suggested by @MihailKedrovich:
1) Settings 2) Optimizations 3) check the box for "Pad prompt" 4) Apply settings 5) Reload UI 6) enjoy - tinyurl.com/y8rzbuyz
@@goshniiAI These are all non-XL models, so I can't use them. Pad prompt is enabled and the UI has been restarted.