Stable Warpfusion Tutorial: Turn Your Video to an AI Animation
- Published 21 Jul 2024
- The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/mdmz06231
Learn how to use Warpfusion to stylize your videos. Discover key settings and tips for excellent results so you can turn your own videos into AI animations.
Tech support: / discord
📁Warpfusion Settings:
bit.ly/42rJLPw
🔗Links:
Warpfusion v0.16(FREE & recommended): bit.ly/3pBh5X3
Warpfusion v0.14: bit.ly/42HozoG
DreamShaper: civitai.com/models/4384/dream...
Stable WarpFusion local install guide: • Stable WarpFusion loca...
Another local install guide: github.com/Sxela/WarpFusion/b...
Best Custom Stable Diffusion Models stablecog.com/blog/best-custo...
How to get good prompts: bit.ly/3IEAzjQ
How to use Luma AI: • Create FPV-Like Videos...
Disclaimer: Some links in the description are affiliate links. If you make a purchase through them, I may earn a small commission at no extra cost to you.
©️ Credits:
Stock video: www.pexels.com/video/energeti...
James Gerde: / gerdegotit
Marc Donahue: / permagrinfilms
Markus Paolo Pe Benito: / markuspaolo_
Alex Spirin: / defileroff
Noah Miller: / noahrobertmiller
Willis Hsieh: / willis.visual
Diesellord: / diesel_ai_art
Stefano Knoll: / steknoll
Josh Doctors: / fewjative
patchesflows: / patchesflows
Yüksel Aykilic: / designyukos
Oleh Ibrahimov: / drimota.ai
nointroproductions: / nointroproductions
Positive Prompts:
"0": [
"realistic female beautiful statue of liberty is a rocky statue dancing, manhattan city skyline in the background, the environment is new york city in day time, realism, hyper detailed, cinematic lighting, photograpny, High detail RAW color art, diffused soft lighting, sharp focus, hyperrealism, cinematic lighting, unreal engine, 4k, vibrant colours, dynamic lighting, digital art, winning award masterpiece, fantastically beautiful, illustration, aesthetically, trending on artstation, art by Zdzisaw Beksiski x Jean Michel Basquiat, high quality, 8k, "
]
Negative prompts:
"0": [
"smoke, fog, lowres, (bad anatomy:1.2), EasyNegative, multiple views, six fingers, black & white, monochrome, (bad hands:1.2), (text:1.2), error, cropped, worst quality, low quality, normal quality, jpeg artifacts, (signature:1.2), (watermark:1.3), username, blurry, out of focus, amateur drawing, colored, shading, displaced feet, out of frame, massive breasts, large breasts ,((ugly)), nude nsfw"
]
⏲ Chapters:
0:00 Introducing Warpfusion
0:34 How to start with Warpfusion
1:08 Google colab: local vs online runtime
2:01 How to transform a video
2:34 What's an AI model?
3:06 Settings
8:35 How to run Warpfusion
9:23 Animation preview
9:30 How to change GUI settings
12:06 How to export the animation
12:36 Get featured
12:49 Warpfusion + Luma AI
Support me on Patreon:
bit.ly/2MW56A1
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
Tiktok: / justmdmz
Twitter: / justmdmz
Facebook: / medmehrez.bss
Website: medmehrez.com/
#warpfusion #ai #stablediffusion
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.
Update: I recommend using Warpfusion v0.16: bit.ly/3pBh5X3
Update 03/04: Just re-tested the same exact steps in the tutorial using v0.14 and Dreamshaper 8 model, it works perfectly!
For tech support and other questions: discord.gg/YrpJRgVcax
Don't forget #mdmz when you post your Warpfusion videos 😉🥳
The problem is: if I pay you, can I use it on a free Colab or free Kaggle account? If not, it seems useless.
I'm using v0_16_13 and the script is giving an error on Generate optical flow and consistency maps 🙁
Can someone help me?
YOU ARE CONFUSING THE SHIT OUTTA ME BRO
📁Warpfusion Settings: bit.ly/42rJLPw
If you keep getting errors, use Warpfusion v0.16: bit.ly/3pBh5X3
What are the GPU requirements/VRAM requirements for Warpfusion?
Does it work with M1 MacBook OR any apple computers?
Hey man, thanks for your in-depth tutorials on stable diffusion and warp fusion, they've helped me understand the software greatly. Unfortunately I am having an issue when trying to create a warp fusion, specifically at the 'define SD + K functions, load model' section. I keep getting this error no matter what I do.
NameError Traceback (most recent call last)
Cell In[8], line 6
4 import argparse
5 import math,os,time
----> 6 os.chdir( f'{root_dir}/src/taming-transformers')
7 import taming
8 os.chdir( f'{root_dir}')
NameError: name 'root_dir' is not defined
Any help would be much appreciated, as there is nothing online that comes up when searching for a solution. Thanks.
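For what it's worth, that NameError usually means the earlier setup cell that defines `root_dir` never ran (Colab cells must run in order). A minimal sketch of the idea, assuming the default Colab root path ("/content" is an assumption, not confirmed in the video):

```python
import os

# 'root_dir' is normally defined by an earlier setup cell; the NameError
# means that cell was skipped. Re-run the notebook from the top, or set
# the variable yourself before the failing chdir call.
root_dir = "/content"  # assumed Colab default; adjust to your install path

def taming_dir(root: str) -> str:
    # Path the failing cell tries to chdir into
    return os.path.join(root, "src", "taming-transformers")

print(taming_dir(root_dir))
```

Re-running from the top with "Run all" is usually the cleaner fix, since other cells may depend on more than just this one variable.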
@@Rishivlogs551 Ah okay thanks, I should've checked that out before I started the process. I am now getting a different type of error when trying to run through a hosted runtime, under the Install and import dependencies.
ImportError: cannot import name 'isDirectory' from 'PIL._util' (/usr/local/lib/python3.10/dist-packages/PIL/_util.py)
Any idea what could be causing this? :\
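That import error typically appears because newer Pillow releases dropped the private `isDirectory` helper that older notebook code imports. One hedged workaround (a shim, not official Pillow API) is to restore the name before the failing import; shown here on a stand-in module object so it runs anywhere:

```python
import os
import types

def restore_is_directory(util_module):
    # Older code expects PIL._util.isDirectory; newer Pillow removed it.
    # Re-attach it as an alias of os.path.isdir if it's missing.
    if not hasattr(util_module, "isDirectory"):
        util_module.isDirectory = os.path.isdir
    return util_module

# Demonstrated on a stand-in module (in the notebook you would pass PIL._util):
fake_util = types.ModuleType("fake_util")
restore_is_directory(fake_util)
print(fake_util.isDirectory("/"))
```

Alternatively, pinning an older Pillow release before the dependency cell (the exact version is an assumption, not from the video) has worked for some people, e.g. `pip install "Pillow==9.5.0"`.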
I'm definitely going to give it a try and experiment with different settings.
In the "define SD + K functions, load model" section should I select CPU or GPU for the 'load_to' variable?
amazing and it really does look good
Very good, thanks !!!
very nice and I always wondered how it was done, not easy but the output is impressive
Thank you! Cheers!
where can I find the stable_warpfusion_settings_sample document for the default_settings_path?
If I have AMD GPU is it still safe to use the online version only/its the same as not having strong enough hardware?
thanks for the awesome tutorial! Looks amazing, only thing is mine keeps changing the subject's aesthetic looks and especially the face within a couple frames... is there a way to make it keep the same look as the first frame?
you can try to fix that by scheduling
@MDMZ, while processing the Video Input settings, I got the following error:
NameError: name 'generate_file_hash' is not defined
Please guide.
Which is better, Warpfusion v0.14 or Stable WarpFusion v0.5.12 ?
Wonderful 👍👍
What's the song that people use for stable diffusion videos?
great tutorial, I followed another tutorial to train my own AI model using rendered images of a character and used it. My first try wasn't so successful (not sure if the reason is the video or the model). Any chance you can create a tutorial on making our own AI models and using them in warpfusion?
I followed this once before and it worked great!: th-cam.com/video/kCcXrmVk1F0/w-d-xo.html
@MDMZ, Thank you for your assistance! I managed to train my AI model and achieved some progress. However, I'm still struggling with maintaining consistency in masking the female's head throughout each frame. Initially, the mask works for a few frames, but then it starts to take on the form of the original face in the video.
which video tutorial did you use
How you increase the trails effect?
Hi MDMZ, my run stopped at 'Video Masking' with the issue of 'NameError: name 'os' is not defined'. Would be amazing if you can help, thank you.
Same here. Can somebody help us, please? :(
Amazing !!!!
That's impressive!!
🙏
Please do a tutorial for the cola shorts clip it's so amazing
Why does my Colab keep reconnecting? When I reconnect, all my settings go back to default and I can't get back to the first run I made.
Hi, super video. However, I have been trying for 2 days and it disconnected at 20%. Is there any fix for that? Thank you in advance :)
Can I use my own GPU or do I need to pay for Google Colab?
Can you achieve the same results with Temporal Kit?
This is an awesome tutorial ❤❤❤
Thank you! Cheers!
Cool bro !! 🔥
🙏
Thanks, it was really useful. When I save my video and run the last cell, it takes almost 1 hour to complete, though the diffused output video is almost 1 second. I don't really know what is wrong.
ty vv much legend❣
How does this compare to using stable diffusion image to image batching for creating a stylized look for videos?
this is much more consistent
question, will this tutorial basically work if i run it locally? Im not familiar with colab pro but i have a 4080.
yes same process right after you connect to local run
I tried to follow your instructions here with my own video clip, but I seem to get errors all the time. Maybe it's because there are new versions up and running now that behave differently. What I'm looking for is to use the video clip I have (it's me in front of a green screen). I would like to change myself into something fun, like some kind of animation, but not all different, just making me look animated, and still have the green screen in the background in the final output. Maybe it's not possible in WarpFusion, or what do you think? Should I look at something else, or is it possible with the right prompt and right model? I just can't find any tutorials about it. And I thought your video was great.
it is possible, I have instructions on how to keep the background untouched in this same tutorial, shooting on a green screen will definitely help with the separation. and YES, you should look into using a newer version
Awesome. Great Tutorial, ❤
Thank you! Cheers!
Quick Question. If I want to try to keep the original background which options do I select?
I actually explain that in the video
Would you recommend using this to a horizontal 1080p video? I have an NVIDIA 3070.
both will work fine, depends how you plan to use the output; if for IG/TikTok just go with vertical
Hi does this work on MAC M2 chip?
Is there anyway to create videos like this on an iphone?
Is this not part of the stable diffusion A1111 web UI, like an extension? It's its own thing? Also, I have 12 GB VRAM. Does anyone have input on whether similar VRAM worked for them? Thx
this is its own thing
I have an error that says 'os' is not defined, how do I fix it? TIA
hey, how to only diffuse the background but keep the object original? whats the setting for this masking, thanksss
I have covered that in the video
1.4 import dependencies, define functions
Runtime error
Can it be used for photos?
Anyone know of a free alternative to Warpfusion
Hi, thank you for the amazing videos... but it keeps disconnecting after a few hours and it goes back to square one! How do I keep the connection alive?
I usually play a 10 hour youtube video on another tab 😅 you gotta keep your computer active
Will it be on mobile?
Does anyone know how much time it takes to make a 30-second video with warp fusion? I need to know this in order to present it at a live activation! Many thanks in advance!
no one will be able to give you the correct answer, it depends on so many factors and it's pretty much impossible to predict until you run it.
Can the generated video be used commercially
Best vid. Thanks
Glad you liked it!
Took about 4 hours to render 4 seconds but man it looks buttery smooth. My 1080ti was really trying🤣
glad it worked for you 😁
970 here. I envy you! AhaHaHa
About to try this today wish me luck lol
I,ve GTX 1650 would it be okay?
@@Tamannasehgal19 Yes. Better than a 970. But will take time. Oh, I think it's ok. I don't really know. Your card is better than mine, so...
I will just shut up now.
Hey!
I'm considering buying a new PC with 8GB VRAM. Since Warpfusion seems to require more than that (which means I'd have to pay for Colab Pro anyway), is there any benefit to buying a better 8GB VRAM PC, or should I just stick with my laptop? Thanks for the tutorial.
depends on what you intend to use it for, 8GB is a bit low for SD
Do you need the later versions of warpfusion or can you use the earlier ones?
It's best to use the latest
I'm using the free version of Google Colab, so it doesn't let it run. Do I need Colab Pro?
Hi, as explained in the video, colab pro will give you access to more resources
So, do I have to pay on Patreon to access Warpfusion online? I didn't understand how to access it. Can I buy it? I can't run it on my PC, I only have a poor 3070.
you dont need your local GPU for this method
When I hit "run all" it can't get past the "1.4 Install and import dependencies" section; it says it's missing some modules (timm, lpips). I've been scouring discord and see others with this problem but no solutions. I'm using colab pro remotely on a Mac.
did you try re-running? or using a different version ?
@@MDMZ yeah I fixed it by downloading the latest version and not the one in your tutorial
@@MikeBishoptv cool !
Getting an error msg failing at the Load a Stable tab saying; ModuleNotFoundError: No module named 'jsonmerge'. Even after getting a fresh install file and manually installing jsonmerge using pip install jsonmerge. Anyone else had this issue and managed to solve it?
hey, please visit Alex's discord for technical support, link in the description
Is A1111 stable diffusion capable of this output?
technically yes, but warpfusion is way way easier
Thank you so much! Great video! Does this also work for cartoon characters with different human proportions?
Aah, sorry, I think we r out of cartoon characters.
Hi! Does this work with the stable_warpfusion_v0_14_14.ipynb version?
it should, you can always move on to the newest version, settings shouldnt be much different
First time please help, got error 1.2 Pytorch - 'No such file or directory: 'nvidia-smi''
Followed the entire tutorial with no luck. None of them talk about switching the notebook's Hardware accelerator setting from None to GPU. I have no idea if I'm supposed to do that, but that's the only way I can get the error to go away and keep the runtime going past 1.2.
However, with this GPU setting, it finishes down to the GUI cell, then disconnects my runtime and won't reconnect. I then switched the notebook setting back to None and it connected to the runtime, but now I'm back at square one with the 1.2 Pytorch nvidia-smi error.
Please help!
hi, check the pinned comment
Hey! my run crashed at line 4:
controlnet_multimodel = get_value('controlnet_multimodel',guis)
NameError: name 'get_value' is not defined
Could you help?
hi, check the description
Do you have the local tutorial?
Nice
got an error on my first colab run:
RuntimeError: Error(s) in loading state_dict for ControlLDM:
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
is it rejecting my model "sdxlUnstableDiffusers_v8HeavensWrathVAE.safetensors"?
hi, please check the pinned comment
@@MDMZ I was able to get through by only using the SD 1.4 model. Not able to get any SDXL models to work tho. Do you have any tutorial where you are using SDXL models by chance?
Are subscription members allowed unlimited use of generation
After getting any error or server disconnection, is there a way to continue from the latest frame without running all the process again?
You can use the resume run feature
Does the AI have the capability of animating a drawing that I created (do I need to create the same subject in several angles?), and applying that drawing to a video, dance, walk or jumping video clip?
you can try image to video, I have a video on that
Do you need CUDA and Visual Studio installed to run this locally on Win 10
you can follow the installation guide, the pre-required tools are listed there
Does anyone know, can this be done using another image as reference instead of a text prompt?
I believe it's possible now with IPadapter
How can I pause the process, turn off my laptop, and continue later from the last generated frame?
try using the resume_run feature
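As a rough illustration of what a resume-style workflow does (the file names and layout here are illustrative, not WarpFusion's actual internals): find the last frame already on disk and continue from the next index instead of re-rendering everything.

```python
import os
import re
import tempfile

def last_rendered_frame(out_dir: str) -> int:
    # Scan for files like frame_000002.png and return the highest index,
    # or -1 if nothing has been rendered yet.
    best = -1
    for name in os.listdir(out_dir):
        m = re.fullmatch(r"frame_(\d+)\.png", name)
        if m:
            best = max(best, int(m.group(1)))
    return best

# Tiny demo with a throwaway directory standing in for the output folder:
demo = tempfile.mkdtemp()
for i in (0, 1, 2):
    open(os.path.join(demo, f"frame_{i:06d}.png"), "w").close()
print(last_rendered_frame(demo) + 1)  # index of the next frame to render
```

In the actual notebook you just enable the resume_run option rather than doing any of this by hand; the sketch is only to show why your already-rendered frames are not wasted.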
Hello, can we use a different checkpoint? I tried and the result is horrible
yes you can
I'm 2 minutes in and I'm like 🤯 ... so many steps and it feels so complicated
it only takes a bit of patience, you can do it!
Hi, I used this tutorial and I have a question: why is my video at the end only 4 seconds when the video I uploaded is 16 seconds? Did I do something wrong? I'm new to AI :(
probably, check the step at 7:36 and make sure you set the right frame range, [0,0] to process all frames
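To make that [start, end] frame-range setting concrete: an end value of 0 is shorthand for "through the last frame", so [0,0] processes the whole clip. A small sketch of that convention (illustrative, not WarpFusion's actual code):

```python
def resolve_frame_range(frame_range, total_frames):
    # end == 0 conventionally means "up to the last frame"
    start, end = frame_range
    if end == 0:
        end = total_frames
    return range(start, end)

print(len(resolve_frame_range([0, 0], 480)))    # whole 480-frame clip
print(len(resolve_frame_range([0, 120], 480)))  # only the first 120 frames
```

So a 16-second clip that comes out as 4 seconds usually means the end frame was left at a small fixed number instead of 0.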
I can't do it because google colab disconnects all the time in the 5th, 6th step so I have to start again. Is there any way to solve that?
try using the latest version of warpfusion
Can you do a tutorial for Deforum Stable Diffusion for google colab Because my installed version is not working
will look into it
On average how much does it cost to make a 30 second video? Supposing it's 1080 vertical and you use the online processing option
very difficult to predict
there is an error , "NameError: name 'get_value' is not defined". how do I fix this. please help !
hi, check the pinned comment for technical support
Are there any graphics card requirements for this? Can you tell me?
not if you run it online just like in the video; if you run it locally, I recommend a GPU with at least 12GB of VRAM
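If you want to check what you have locally before deciding, a quick probe like this works (it just shells out to `nvidia-smi`; the 12GB figure above is a recommendation from the reply, not a hard requirement):

```python
import shutil
import subprocess

def gpu_report() -> str:
    # Report the GPU name and total VRAM, or a fallback message when no
    # NVIDIA driver is present (e.g. AMD GPUs or Macs).
    if shutil.which("nvidia-smi") is None:
        return "no NVIDIA GPU driver found - consider the Colab route"
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip() or out.stderr.strip()

print(gpu_report())
```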
You are a monster, man! And I own a GTX970 😂 so, some others tutorials are more "for me"
Enjoy!
Loved your video! Super Super Helpfull. Is there a way or a prompt to achieve a better lipsync or mouth movement? I'm struggling with this.
not yet!
hello, I followed your video step by step until launching all the scripts, but an error is displayed at optical map settings: NameError: name 'os' is not defined. Can you help? (I have already tried 3 times but it's still the same, and I used warpfusion 0.16)
hi, check the pinned comment
I still have to pay another subscription to make warpfusion work?
which runtime should i use on colab? T4 or V100
I recommend you try both; one will cost you more than the other, but you get more speed
Not sure why, but when I try to open my 'run.bat' file after running the 'install.bat' file nothing happens. The command window just opens for half a second and then closes again. I've tried multiple times, including running it as administrator, but it just does the same thing. Is the run.bat file meant to behave this way, or is something wrong? :\
weird, try reinstalling
Awesome tutorial!! Quick question, I do have a windows pc, but was wondering will this work on a macbook as well?
Obviously not for mac.
Also would prefer if he would mention this right at the beginning 🤷🏻♂️
It actually works on the cloud! So your OS doesn't matter
I think you are referring to the local method, this is the online one 😉
@@MDMZ Hey, that's exactly what I wanted to understand, to know which PC I can work on. If it just needs the Colab thing and the local install doesn't matter, that's a relief hh, thank you for the info^^
Can anybody help how to get this done with a mac?
Is there a way I could use warpfusion locally with automatic 1111? .
Please make a tutorial on it 🙏
you can use stable diffusion locally both with A1111 and warpfusion as well, I do have a stable diffusion tutorial on how to install it with A1111
@@MDMZ thank you!!! You mean a tutorial on using warpfusion with Automatic 1111, not Google Colab, right?
@theartforeststudio8667
Pretty much the same things just different platforms.
warpfusion on google colab is used to run stable diffusion
A1111 is used to run stable diffusion on your browser
Both are set up and work differently, so it depends on which one u r more comfortable with
I am having issues connecting to google colab to local host.... i have posted into discord on the issue
Is it possible to do this on your cell phone or do you need a computer?
Does anyone know if it's possible to run Warp Fusion on MAC?
give it a try, this is the online method
Can you model a specific image instead of copying known ones like the Statue of Liberty? I want to make an image of myself dance, for example.
in the example of using your own image, you will probably need to train a model first using your images, there are plenty of tutorials on how to do that on youtube
is it not possible to do the same with stable diffusion?
warpfusion results are much more consistent
Does Warpfusion required dedicated GPU to run?
yes
Can this also work with still images or is it only video to video?
for images i suggest you use stable diffusion on A1111, it's free and easier to use
I'm trying locally, however I'm getting the below error:
ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (C:\WrapFusion\env\lib\site-packages\torchmetrics\utilities\imports.py)
Any help?! 😭
hi, can you share this on discord along with what specs you're trying to run this on ?
same error
same error
Please bring a mobile option. I don't have a PC and I wanted to do this on my phone 😢
is there any free alternative?
bro, if you don't mind telling us, how many compute units did you use per video on average? especially that video you just showed?
I burnt like 20 units just for a 13s vid lol
@@reubzdubz wow man! thats some expensive job :D
@@radstartrek that is if you follow the resolution in the video tho. I went down to 540x960 afterwards.
@@reubzdubz ok, so it would cost even more compute units on something like 720p.
honestly I have never documented as I was experimenting regularly with different resolutions and settings which affects the rendering time heavily, but yes the lower the resolution, the faster it runs
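As a back-of-envelope check on "the lower the resolution, the faster it runs" (an illustration, not a WarpFusion cost formula): render work scales roughly with pixel count per frame, so dropping from 1080x1920 to 540x960 is about a quarter of the work.

```python
def relative_cost(width: int, height: int,
                  base_width: int = 1080, base_height: int = 1920) -> float:
    # Rough proxy: work scales with the number of pixels per frame,
    # relative to a vertical 1080x1920 baseline.
    return (width * height) / (base_width * base_height)

print(relative_cost(540, 960))   # 0.25
print(relative_cost(720, 1280))  # roughly 0.44
```

Actual compute-unit usage also depends on steps, ControlNet models, and clip length, which is why nobody can quote a single number.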
I followed the video step by step, but I generated a video of only 4 seconds. Any tips on how to get a longer video?
did you change your end frame from 0 to another number ?
I tried to link my video after I uploaded the file but I get "FileNotFoundError: [WinError 2] The system cannot find the file specified: '/FILENMAME'". I linked it just like you did in the video. Any help is appreciated!
can you try the process from scratch? it might be referring to another setup file
@@MDMZ I've uninstalled and reinstalled everything the local guide said to install. It seems it has trouble finding the video? I put everything in the same folder.
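A FileNotFoundError like that usually means the notebook received a path that doesn't exist exactly as written. A small pre-flight check can save a failed run (the path below is a hypothetical example, not from the tutorial):

```python
import os

def check_video(path: str) -> str:
    # Verify the input file exists before pointing the notebook at it.
    if os.path.isfile(path):
        return f"OK: {path}"
    return f"Not found: {path} - check spelling, folder, and extension"

print(check_video("/content/drive/MyDrive/input.mp4"))  # hypothetical path
```

Note that a leading slash matters: '/FILENAME' points at the drive root, not your project folder, so relative vs. absolute paths are worth double-checking.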
this is probably the most complicated AI program I've used by far. So many errors you can't find a fix for online, and confusing settings you have to learn on your own because nobody has a full settings explanation for it. It took me almost 300 renders to understand what most settings do, but I feel like it's all going to be worth it once I get it all down.
it's definitely challenging and can be frustrating at times, keep an eye on updates, newer notebooks are much more stable
@@MDMZ lol turns out all I needed to do was tweak the controlnet settings to get the output I desire. I had no clue consistency and controlnet correlated with each other
what do I do if I get the error: Your session crashed after using all available RAM
are you using colab pro ?
The program never works on Python 3.11, only on 3.10.
Anyone got time for teamviewer or anydesk support?
Is this Mac only or something? I've tried many of your tutorials and nothing has ever gotten past the first "run all." A million errors before it ever gets to my video, and I'm using the hosted GPU and everything. Kind of bull that I had to pay to use this software and it doesn't even work out of the box...
hey, are u using the latest version of Warp ?
I just bought Colab Pro and it says "The user has exceeded their Drive storage quota
GapiError". wtf, someone help
is it possible that your Google drive storage is full? you might need to clear some space