#### Links from the Video ####
Download Workflow: drive.google.com/file/d/17UQXmDRvPLLI7c6g76LlZF4BjMuoRHQa/view?usp=sharing
SVD Model Download: huggingface.co/stabilityai/stable-video-diffusion-img2vid/tree/main
Enigmatic_e Video: th-cam.com/video/imyQuIiuRnA/w-d-xo.html
👋
hey, you forgot the link to the ComfyUI Manager extension :)
that is so cool
Is it possible to install this locally on a PC with Windows 7?
Please add to this comment that you have to click on the RIFE VFI ckpt_name and then select either rife47.pth or rife49.pth. There are several comments from people who got an error. Thank you.
This is a good step; as soon as they allow LCM and text input to be added, it will be game-changing. The stability of the backgrounds is outstanding compared to AnimateDiff, but the motion of people is severely lacking, and there's no LoRA support. I have high hopes for this to improve fast, however.
Thank you, Olivio! I was starting to doubt my computer's capabilities after my tests failed yesterday. However, after following this tutorial, everything clicked into place. It turns out that the crucial missing piece was the 'update all' step in my tests. Lesson learned!
Awesome! Thank you
I'm in the process of making a WWII movie trailer with SVD. So fun.
The age of AI never ending shows we can binge is coming.
Doom Scrollers gonna be doomed 🤣
Hello, thank you for the tutorial! I'm wondering how I can get the RIFE VFI and Video Combine nodes?
2023 and installs are still crazy. One day, we'll be so advanced, we can click "install" and things will get installed simply... one day.
Is it possible to add this extension to Stable Diffusion automatic1111 workspace?
I'm finding it actually works quite well at different resolutions. I've done 512x768, 768x768 etc and they turned out good.
can it do 1280x720?
@@Steamrick Im running it at 2416x1360
yes, but 576x1024 is the suggested res, because that is what it was trained on. :)
@@OlivioSarikas Yeah definitely. I ended up trying it because at native resolution I couldn't generate 25 frames without running out of vram lol
@@Gh0sty.14 how much vram?
RTX 3060 with the 14-frame model took about 2 mins per render ❤❤❤
With a 3090, a few seconds ;) ... around 10 seconds
4070 Ti Super, around 1 minute for the 25-frame model with a multiplier of 4
RTX 3060 Ti, it takes decades :/ Is it because my image is 1024x1024?
@@MuratAtasoy check if your GPU is being used while generating the video... maybe 8 GB VRAM is not sufficient...
Hooray! It happened! I figured it out in just 2 days )) Thank you very much!
Every time you're pronouncing the word "copy" incorrectly, exactly like my boss does 😅love it hah
thanks for the quick and easy breakdown. I have been putting off learning Comfy, but this helps a lot
An important note: in the video you show svd and svd_image_decoder as the 14-frame and 25-frame models, but that's incorrect... The model for 25-frame video is svd_xt, located at a different Hugging Face link...
so what's the difference then?
Oh my gosh you’re right. I just grabbed the model from the post on the Hugging Face homepage and it was the xt version
In the model info it's mentioned: "we also finetune the widely used f8-decoder for temporal consistency. For convenience, we additionally provide the model with the standard frame-wise decoder"
Not all heroes wear capes...
Thanks for noticing and saying something.
Thanks for the guide! Haven't used deforum much lately, and runwayml tends to get pricey. This will be fun to play around with.
you are welcome. i love these new models :)
Olivio your attention to detail in sharing these workflows is phenomenal - thank you
Thank you very much, Olivio!
Thank you for the detailed instructions! I was finally able to create a 512x512, 14-frame rocket video (Prompt executed in 599.47 seconds) with my RX 580 4 GB video card.
Please help, I've gotten these errors before and gave up in the past with other workflows. I've attempted it with my main ComfyUI install and with a fresh one.
I've run the Manager, installed all missing nodes, did an update check, and restarted multiple times.
When loading the graph, the following node types were not found:
ImageOnlyCheckpointLoader
SVD_img2vid_Conditioning
FreeU_V2
VideoLinearCFGGuidance
Nodes that have failed to load will show as red on the graph.
I'm having the same problem! When I check for missing nodes, nothing shows up :(
Finally! That looks so amazing.
Excited for this, but I hate node-based apps. Hopefully there will be an Auto1111 version soon.
Applying emotions to an interface... Reminds me of the days when people fought over Playstations, Xbox and Nintendos. Facepalm.
You don't have to make the node networks yourself; just grab ones that do what you want and get creative. Plenty out there.
@@fenriswolf-always-forward your comment suggests there's no such thing as a bad interface. lol
Kinda cool. I can see it being useful for making mini-clips for presentations or YouTube vids (more interesting than putting up still images, I guess).
I still haven't made the switch to Comfy. I have a fairly complex workflow in A1111 that I'm sure I could automate in Comfy if I spent enough time, but I like all the tinkering, inpainting, and iterative steps that I use in A1111. In Comfy I feel like you set up a workflow and it's all done in one shot. I guess I could create a workflow for each step and have it work somewhat the same, though.
I automated my A1111 workflow with Node.js scripts using the A1111 API, but when I look at Comfy it looks like you can do all of that in the browser without any coding. Plus, everything new always works in Comfy first xD So I decided to switch today. Time to say bye-bye to A1111
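(For anyone curious what that kind of API automation looks like without Node.js: here's a minimal Python sketch against AUTOMATIC1111's documented /sdapi/v1/txt2img route. The URL assumes a default local install launched with --api; any field you omit from the payload falls back to a1111's own defaults.)

```python
import base64
import json
import urllib.request

A1111_URL = "http://127.0.0.1:7860"  # default address of a local a1111 started with --api

def build_txt2img_payload(prompt, steps=20, width=512, height=512):
    """Minimal txt2img request body; a1111 supplies defaults for omitted fields."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def txt2img(payload):
    """POST the payload and return the generated images as decoded PNG bytes."""
    req = urllib.request.Request(
        f"{A1111_URL}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        images = json.loads(resp.read())["images"]  # list of base64-encoded PNGs
    return [base64.b64decode(img) for img in images]
```

Calling `txt2img(build_txt2img_payload("a rocket launching"))` and writing the first element to a .png file is the whole loop; ComfyUI just gives you this kind of chaining in the browser for free.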
Thank you Olivio. I'm loving ComfyUI, especially because it is old-PC friendly. I'll give it a go.
comfyUI is like Lego for AI. Makes me feel like a kid in a candy store :)
Ha, I was waiting for your take on the install my friend
Thank you very much, awesome tutorial :)
Hello. I can't configure it. Where are the downloaded models placed within ComfyUI? In what folder, and how are they installed?
oh dang, will have to try it now! thanks for the sources! :)
Unfortunately doesn’t work on my M2 Mac MBP. Getting “RuntimeError: Conv3D is not supported on MPS”
x3 ahah mac
Thanks for the nice tutorial. I have one question: in which node can I change the length of the final video? Best, Felix
Thanks for the helpful video. Do you think there is an enhance-detail node I can apply at some point in the workflow that will eliminate the blurriness and disfigurement that happens at times?
Thank you for the information.
What's the minimum requirements in regard to VRAM, RAM, graphics card, etc.?
Just did the beach vid on an RTX 2070 8 GB; it took 20 min. My card is underclocked 10%. I used the 25-frame model listed in the Enigmatic_e video.
thank you Olivio
With the 24-frame model I got around 6 s/it, with the 14-frame model around 3 s/it.
8 GB VRAM, RTX 3070
Thanks man!
I will come back to check the comments for the hardware needed... no info on the net...
Does anybody know how much VRAM you need to run this?
You can run it on 6 GB of VRAM
The 24 fps model takes 9 GB for me @@MyWhyAI
Thanks a lot, man. I have a problem: when installing all the missing nodes, all of them install correctly except one, VHS VideoCombine. I get the same error again and again:
"When loading the graph, the following node types were not found:
VHS_VideoCombine
Nodes that have failed to load will show as red on the graph."
I tried updating it, fixing it, and even deleting and reinstalling it, but nothing seems to work for me.
Do you know what it could be? Thanks!!
You did not mention how much VRAM it requires to work?
12 GB
minimum 8 GB, 6 if you use a smaller image size
for me takes 9 GB 25fps model @@Eleganttf2
I know you say "ComfyUI" is the "future" of AI, but I can't bring myself to use it. The learning curve is much steeper than something like A1111. Yes you have massive control and I'm not unfamiliar with Node workflows (Unreal, etc) but in terms of mass marketability and ease of use for the average consumer, it's not the way to go in my opinion.
Or it should have some easy-mode UI with the most common building blocks, from which you could enter the current "advanced" mode.
@@Vestu StableSwarmUI might be something for you (even if it's still alpha). I learned to love ComfyUI because it's the only thing running smoothly on my potato PC.
I was thinking the same as you, but believe me, after you try it for the first time, you won't ever use Automatic1111 again. It is fun to watch the nodes "working" while rendering, and it's very intuitive, even if it doesn't look like it at first glance. @@Vestu
@@franciscodurand5209 it's as fun as glaring at a cluttered house that is long overdue tidying.
@@TPCDAZ With an 8 GB GPU, SDXL doesn't work well in Automatic1111. I've disliked nodes for 20+ years, but I just use somebody else's workflow file for Comfy.
I never need to change the connection wires or make new nodes. I've been using Fooocus a lot recently, though.
So I think Auto1111 is for elites now (unless you're using an SD 1.5 model).
Hi Olivio, thank you so much for this video; it's clear and step by step, and helps one follow along and implement it. I have an older GPU, an NVIDIA GeForce GTX 1080 Ti, and it takes quite a long time to render the video. How much time does it take on your system?
I thought you needed 40 GB of VRAM for SD Video to work? How much VRAM do you have, Olivio?
For me it takes 9 GB of 24 GB with the 24 fps model (RTX 3090)
3050 8 GB VRAM, 16 GB RAM, R7 1800X, and it takes me 3 min to create a vid @@mirek190
What version of ComfyUI are you using? Mine doesn't have that fancy share button.
update it
I tried to do the steps you have shown, but it seems like I run into some errors...
If I try to do "Update All", it gives me the following error:
Failed to update ComfyUI or several extensions / TypeError: Cannot set properties of null (setting 'onclick')
If I load the JSON config, it says:
When loading the graph, the following node types were not found:
RIFE VFI
VHS_VideoCombine
Seed (rgthree)
Nodes that have failed to load will show as red on the graph.
How can I fix this?
Gonna wait for the A1111 version
Amazing work as always Olivio! - If anyone here is getting an "Error occurred when executing KSampler: unsupported operand type(s) for *=: 'int' and 'NoneType'" and has a solution please let me know. Thanks!
If you disable FreeU Advanced, problem goes away.
It works! Thank you, @@GBUK666! Thank you!!! You legend!!!
Do you mean to delete the FreeU node? I did that and still get the same error. Odd. @@GBUK666
I'm eagerly looking forward to being able to generate my own Warhammer Fantasy Battle movie with nothing but a script I have generated using any random uncensored text-to-text model in a year's time
thanks for the guide!
Awesome! So helpful! :) :) :)
Anyone know if there is a way to choose which direction the resulting video pans, left vs. right?
thank you for the tutorial, it helps a lot
Hey, where is the ComfyUI instruction you said you were going to link?
How do I create longer videos?
can't wait to get some free time to check this out
Thanks for sharing 💗
Did not see the link to the ComfyUI Manager extension. :) thanks for the vid
Thanks for being so clear. You're all about ComfyUI. I don't need to subscribe to you anymore.
When loading the graph, the following node types were not found:
RIFE VFI
Seed (rgthree)
VHS_VideoCombine
Nodes that have failed to load will show as red on the graph.
You gotta click on the one already there and change it to RIFE 47 or 49.
Would it work on a Mac mini? Considering buying one…
Awesome video! As someone else stated below, the SVD-XT model is for the larger FPS count though. Otherwise, perfect video :D
I get this error in the RIFE node: "Prompt outputs failed validation
RIFE VFI:
- Value not in list: ckpt_name: 'sudo_rife4_269.662_testV1_scale1.pth' not in ['rife40.pth', 'rife41.pth', 'rife42.pth', 'rife43.pth', 'rife44.pth', 'rife45.pth', 'rife46.pth', 'rife47.pth', 'rife48.pth', 'rife49.pth']"
For anyone having this problem: You need to click the "ckpt_name" button within the "RIFE VFI (recommend rife47 and rife49)" box (the one appearing in red whenever you try to queue prompt). Then select rife47.pth or rife49.pth. Done!
Is there a way to extend a video after it has been created?
Will it improve videos with AI? I mean, can I record myself and then improve the video in this program?
When loading the graph, the following node types were not found:
VHS_VideoCombine
Looks like those glasses were added with stable video diffusion. Easy to tell.
You jumped from the SVD to having it open with no-information on pathing or troubleshooting. Totally hopeful.
There is no workflow for a1111? Is ComfyUI required?
yes a1111 is obsolete
Do we have to install ALL the nodes?
Somehow in RIFE I don't have sudo_rife4_269.662_testV1_scale1.pth as shown in the video; I only have rife40.pth to rife49.pth. I downloaded sudo_rife4_269.662_testV1_scale1.pth, put it right next to rife40.pth, and restarted the server. It still doesn't show up in the dropdown. Any help, please?
Do you recommend using google colab or buying a pc to run locally? If locally, any suggestions?
Buy a cheap post-mining RTX 3090 with 24 GB of VRAM
Thank you, I made it work. So now a question: how do I manage the animations? Right now it's like zoom in, zoom out... Is there any other workflow where we can also add a prompt?
Why do I not have access to the video preview? Do I need to click on "Queue Prompt" and render it to see it?
Olivio, is there a way to send a folder of images to SVD? I tried a few ways but kept getting errors. I want to take a storyboard full of a movie and pump it through without manually doing it one at a time.
can this be used for regular a1111 and not comfyui?
a1111 is obsolete
thank you for your incredible input @@mirek190
What is the VRAM resource requirement? I expect it is 10GB or greater. Is there a low VRAM solution?
But what did ComfyUI do here that the others didn't...
Very nice
Something you didn't mention... I know it should be obvious, but I didn't get it: where do you put the SVD model files after downloading them from Hugging Face? Just the normal models folder, same as all the others for SD?
Yup, in the checkpoints folder.
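(To expand on that a little: ComfyUI's checkpoint loader lists whatever it finds in models/checkpoints, plus any extra paths you map in extra_model_paths.yaml. A small sketch of the expected layout, assuming a default install directory named ComfyUI:)

```python
from pathlib import Path

# Assumed default install location -- adjust COMFY to wherever you cloned it.
COMFY = Path("ComfyUI")
CKPT_DIR = COMFY / "models" / "checkpoints"

def list_svd_checkpoints(ckpt_dir: Path = CKPT_DIR) -> list[str]:
    """Names the checkpoint-loader dropdown would offer for SVD files placed here."""
    return sorted(p.name for p in ckpt_dir.glob("svd*.safetensors"))
```

Drop svd.safetensors / svd_xt.safetensors into that folder and restart (or refresh) ComfyUI so the dropdown re-scans.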
(Anyone else having problems when they try "Update All" in the ComfyUI Manager? After I update, it continuously says "restarting".) I just got it working. Great video; I do wish you explained some parts better.
Cool thank you !
thx for the video! I tried it for a few hours, but without any other modifications (like length, prompts, maybe LoRAs) it's nothing for me...
Is it available on automatic 1111?
Link for ComfyUI manager? Thanks Olivio!
I downloaded everything correctly, but my ckpt isn't loading in, despite it being in the right spot. All it gives me is an undefined/null and no option to select any models. I have the paths set up correctly in the YAML file, so I'm not totally sure what the issue is
Can you load this in Fooocus?
Thanks!
Now you can create videos with SVD! Or can you? (suddenly Jake Chudnow - Moon Men Instrumental starts to play ) :D
Installed fine, but every time I run it, after about 60 seconds the computer crashes and reboots itself.
Thank you
I'm taking the leap to ComfyUI... and svd_xt. Is there any way to 'coax' a specific movement (like only the background, only the foreground, move an arm but don't trip out like you just took dr*gs)? The motion is generally really nice with this model, but sometimes very frustrating in its choices of WHAT it's going to animate and what is going to stand still, resulting in 20+ runs for one animation that kind of matches the original hope.
👍Cool as ever
Can I use this with a GTX 1080 Ti? I don't want to train the AI, just use it. Thanks!
I hate ComfyUI!!!! Really hate it with my heart!
You're not alone.
skill issue
It's great for obsessing over precision pores and moisture on eyelashes, but for rapid ideation it stinks.
I hear you bro. I'll wait for an Automatic 1111 or Gradio version
Hate it too!
Guys, I need your help with this:
When loading the graph, the following node types were not found:
• SVD_img2vid_Conditioning
• VideoLinearCFGGuidance
• ImageOnlyCheckpointLoader
Nodes that have failed to load will show as red on the graph.
PLZZZZZZZZZZZZZZZ
what parameter changes the length of the video?
x2
thanks man... !
Also, my video comes out at 2 sec; how can I make it 4 sec?
How much VRAM is needed?
Has anyone else noticed that when using this default workflow that the output video appears "smooth" or smeared in comparison to the input image? This even happens when starting with a lower resolution image.
How do I download what I need on a MacBook?