Great as usual. Thanks a lot.
While using Blender, you can automate the frame intervals by changing the step value: Render ➡ Frame Range ➡ Step
Thanks for the heads up! I was sure there was a setting somewhere
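For anyone following along: Blender's Frame Step setting just renders every Nth frame of the range. A quick illustrative sketch (plain Python, not Blender's API) of which frames a given step value yields:

```python
def frames_rendered(start, end, step):
    """Frames Blender renders for a given Frame Range with a Step value.

    Mirrors Render > Frame Range > Step: every `step`-th frame from
    `start` to `end`, always including `start`.
    """
    return list(range(start, end + 1, step))

# e.g. a 24-frame shot sampled every 3rd frame -> 8 keyframes
print(frames_rendered(1, 24, 3))  # [1, 4, 7, 10, 13, 16, 19, 22]
```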
Amazing! Keep them coming.
Thanks!
Since you don't need to extract depth from photos but from Blender, you could just use the Blender compositor to save depth passes for all frames to a folder and then load them into ControlNet from that folder.
Yeah, I debated doing that since I was made aware of it in a previous video, but I ultimately decided to go this way because my audience is more used to ComfyUI than Blender. I didn't want to overcomplicate things in Blender, even if they might seem easy to someone who's used to it, but exporting depth directly is definitely the better way to do it.
This was my thought…it would help prevent the legs from moving through the tube.
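For readers wondering what a depth pass actually holds: each pixel stores a camera-distance value, typically rescaled to the 0–1 range before a depth ControlNet reads it. A minimal sketch of that rescaling, using a flat list of values for simplicity (real depth maps are 2D images, and some preprocessors flip near/far):

```python
def normalize_depth(depth, invert=False):
    """Rescale raw depth values (arbitrary units) to the 0-1 range
    that depth ControlNets expect.

    invert=True flips near/far, since some depth preprocessors treat
    white as near and others treat white as far.
    """
    lo, hi = min(depth), max(depth)
    if hi == lo:  # flat depth map: nothing to rescale
        return [0.0 for _ in depth]
    out = [(d - lo) / (hi - lo) for d in depth]
    return [1.0 - v for v in out] if invert else out

print(normalize_depth([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```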
I am seeing it but not believing it. It's incredible. Incredible is a weak word for it.
Fantastic stuff! I own more 3D assets than I'm eager to admit, and using generative AI in this way was the idea from the beginning. I can't thank you enough, and of course Matteo as well.
Haha, at least this is a good way to put those models to use! Glad you liked it!
🔥
Did you use the AnimatedDiff workflow for your video of the photorealistic swimming girl?
Yep, same workflow for all the examples in the video
It is so complicated, not easy 😅😮
Morning Andrea! cool workflow!
Where can I find the LoRA "LCM_pytorch_lora_weight_15.safetensors"?
Argh, that's the only model I missed in the description! I'm adding it now.
You can find it here: huggingface.co/latent-consistency/lcm-lora-sdv1-5
@@risunobushi_ai Hehe
dope
Sir, nice video! Recently I've been considering retiring my old 1650 Ti and upgrading to a GPU with higher VRAM. With budget limits, I can only afford an RTX 4070, the 16GB VRAM one. Would this be enough for the workflow in the video, or do you have any suggestions?
Hey there! I always tell people to try things out before buying them, and there's a ton of services out there that let you rent a GPU for a few cents an hour. I'd test a few things and see what works for your own needs before spending big bucks on a piece of hardware. Either way, VRAM is one of the most important specs to have when running Stable Diffusion, more so if you want to run videos and animations.
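As a rough rule of thumb when sizing VRAM: the model weights alone take roughly parameter count × bytes per parameter, before activations, the VAE, ControlNets, IPAdapter, and batch overhead pile on top. A back-of-envelope sketch (the parameter counts below are approximate, illustrative figures, not exact):

```python
def weights_vram_gb(n_params, bytes_per_param=2):
    """Approximate VRAM taken by model weights alone.

    bytes_per_param: 2 for fp16, 4 for fp32. Real usage is higher
    once activations and auxiliary models are loaded.
    """
    return n_params * bytes_per_param / 1024**3

# Approximate parameter counts (illustrative, not exact):
sd15_unet = 860_000_000    # SD 1.5 UNet, roughly 0.86B params
sdxl_unet = 2_600_000_000  # SDXL UNet, roughly 2.6B params

print(f"SD 1.5 fp16 weights: ~{weights_vram_gb(sd15_unet):.1f} GB")
print(f"SDXL fp16 weights:  ~{weights_vram_gb(sdxl_unet):.1f} GB")
```

The takeaway matches the reply above: the weights are only the floor, so animation workflows that keep ControlNets and many frames in memory at once are where extra VRAM pays off.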
Awesome stuff! Just wondering, how are you able to use IPAdapter Plus style transfer with an SD 1.5 model like you're using? I thought that wasn't possible, and it never works for me.
Huh, I've never actually had any issue with it. I tested it with both 1.5 and SDXL when it was first updated and I didn't encounter any errors.
The only thing that comes to mind is that I have collected a ton of clipvision models over the past year, so maybe I have something that works with 1.5 by chance?
@@risunobushi_ai Ok, maybe. I remember Matteo mentioned it in his IPAdapter update tutorial too, that it wouldn't work for 1.5, but maybe it works for some, and yes, maybe you have some special tool that unlocked it. Regardless, this is great stuff; I'm loving and learning a lot from your tutorials.
How do you make Stable Diffusion stable??? Where do you show your animation???? I don't see it. It's standard AI bla bla bla bla... where do you show your full animation???
There's gotta be a better way to load those key frames, thanks!
Yep, there is. As I say in the video, it's either this or using a batch loader node that targets a folder, but for the sake of clarity in the explanation I'd rather have all nine frames visible on screen.
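One gotcha with batch-loading frames from a folder: plain alphabetical sorting puts frame_10 before frame_2. Most batch loader nodes handle this for you, but if you ever script the loading yourself, a numeric-aware sort is worth having. A small sketch (the filenames are hypothetical):

```python
import re

def numeric_sort(filenames):
    """Sort frame filenames by the number embedded in them, so
    frame_10.png comes after frame_2.png (plain sorted() would not)."""
    def key(name):
        m = re.search(r"(\d+)", name)
        return int(m.group(1)) if m else -1
    return sorted(filenames, key=key)

frames = ["frame_10.png", "frame_2.png", "frame_1.png"]
print(sorted(frames))        # ['frame_1.png', 'frame_10.png', 'frame_2.png']
print(numeric_sort(frames))  # ['frame_1.png', 'frame_2.png', 'frame_10.png']
```

Zero-padding the frame numbers on export (frame_0001.png, frame_0002.png, ...) sidesteps the problem entirely, and Blender does this by default.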
AI didn't eliminate graphic designers. It evolved them into 3D designers and AI graphic engineers.
As well as insanely technical node tree composers!
Depth Anything and OpenPose Pose: "No results found"
The ControlNet aux nodes sometimes act up during installation. You can try uninstalling the pack and installing it via command line.
@@risunobushi_ai How do I install it from the command line?
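In case it helps: installing the ControlNet aux pack from the command line usually means cloning it into ComfyUI's custom_nodes folder and installing its requirements. A sketch, assuming a standard ComfyUI install (adjust the paths to your setup, and make sure you use the pip from ComfyUI's own Python environment):

```shell
# From your ComfyUI root directory
cd custom_nodes

# Clone the ControlNet auxiliary preprocessors pack
git clone https://github.com/Fannovel16/comfyui_controlnet_aux

# Install its Python dependencies (use the pip of ComfyUI's venv)
pip install -r comfyui_controlnet_aux/requirements.txt

# Then restart ComfyUI so the new nodes are picked up
```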
Great way to steal other peoples work and make it look like you did it without learning any skills
If you're talking about the ethics of generative AI, we could discuss about this for days. If you're talking about the workflow, I don't know what you're getting at, since I developed it myself starting from Matteo's.
wahhh wahhh