Simple animations with Blender and Stable Diffusion - SD Experimental

  • Published 23 Sep 2024

Comments • 34

  • @M4rt1nX · 4 months ago · +4

    Great as usual. Thanks a lot.
    While using Blender, you can automate the frame intervals by changing the step size: Render ➡ Frame Range ➡ Step

    • @risunobushi_ai · 4 months ago

      Thanks for the heads up! I was sure there was a setting somewhere
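
      (Editor's note: a quick sketch of what that Step setting selects, in plain Python. The Blender-side equivalent is `bpy.context.scene.frame_step`, set in the UI via Render Properties ➡ Frame Range ➡ Step; the function name here is illustrative.)

```python
# Which frames Blender renders for a given Frame Range and Step.
# In Blender's Python API these are scene.frame_start, scene.frame_end,
# and scene.frame_step (Render Properties -> Frame Range -> Step).
def stepped_frames(start: int, end: int, step: int) -> list[int]:
    """Frames rendered when Step is `step` (end frame inclusive)."""
    return list(range(start, end + 1, step))

# e.g. an 81-frame animation sampled every 10th frame -> 9 keyframes
print(stepped_frames(1, 81, 10))  # [1, 11, 21, 31, 41, 51, 61, 71, 81]
```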

  • @emanuelec2704 · 4 months ago · +3

    Fantastic! Keep them coming.

  • @pandelik3450 · 4 months ago · +1

    Since you don't need to extract depth from photos but from Blender, you could just use the Blender compositor to save depth passes for all frames to a folder and then load them into ControlNet from that folder.

    • @risunobushi_ai · 4 months ago · +2

      Yeah, I debated doing that, since I was made aware of it in a previous video, but ultimately I decided to go this way because my audience is more used to ComfyUI than Blender. I didn't want to overcomplicate things in Blender, even if they might seem easy to someone who's used to it, but exporting depth directly is definitely the better way to do it.

    • @Rubberglass · 2 months ago

      This was my thought…it would help prevent the legs from moving through the tube.
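
      (Editor's note: a minimal sketch of the compositor approach discussed above, assuming Blender's Python API (`bpy`). The function name and the `/tmp/depth_passes` output path are illustrative; it enables the Z pass and wires Render Layers ➡ Normalize ➡ File Output so every rendered frame writes a depth image that ControlNet can load from a folder.)

```python
def enable_depth_pass_export(output_dir="/tmp/depth_passes"):
    """Write a normalized depth pass for every rendered frame.

    Run inside Blender's Scripting workspace; `bpy` only exists there.
    """
    import bpy  # Blender's Python API, available only inside Blender

    scene = bpy.context.scene
    # Enable the Z (depth) pass on the active view layer
    bpy.context.view_layer.use_pass_z = True

    # Build a compositor graph: Render Layers -> Normalize -> File Output
    scene.use_nodes = True
    tree = scene.node_tree
    render_layers = tree.nodes.new("CompositorNodeRLayers")
    normalize = tree.nodes.new("CompositorNodeNormalize")  # map depth to 0..1
    file_out = tree.nodes.new("CompositorNodeOutputFile")
    file_out.base_path = output_dir

    # Socket is named "Depth" in recent Blender versions ("Z" in older ones)
    tree.links.new(render_layers.outputs["Depth"], normalize.inputs[0])
    tree.links.new(normalize.outputs[0], file_out.inputs[0])
```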

  • @merion297 · 4 months ago · +2

    I am seeing it but not believing it. It's incredible. Incredible is a weak word for it.

  • @gimperita3035 · 4 months ago · +1

    Fantastic stuff! I own more 3D assets than I'm willing to admit, and using generative AI in this way has been the idea from the beginning. I can't thank you enough, and of course Matteo as well.

    • @risunobushi_ai · 4 months ago · +1

      ahah at least this is a good way to put those models to use! Glad you liked it!

  • @andredeyoung · 9 hours ago

    🔥

  • @Utsab_Giri · 3 months ago

    Did you use the AnimateDiff workflow for your video of the photorealistic swimming girl?

    • @risunobushi_ai · 3 months ago · +1

      Yep, same workflow for all the examples in the video

  • @armandadvar6462 · 3 months ago

    It is so complicated, not easy 😅😮

  • @eias3d · 4 months ago

    Morning Andrea! Cool workflow!
    Where can I find the LoRA "LCM_pytorch_lora_weight_15.safetensors"?

    • @risunobushi_ai · 4 months ago · +1

      argh that's the only model I missed in the description! I'm adding it now.
      You can find it here: huggingface.co/latent-consistency/lcm-lora-sdv1-5

    • @eias3d · 4 months ago · +1

      @@risunobushi_ai Hehe

  • @moritzryser · 4 months ago · +1

    dope

  • @田蒙-u8x · 3 months ago

    Sir, nice video! Recently I've been considering retiring my old 1650 Ti and upgrading to a GPU with more VRAM. With budget limits, I can only afford an RTX 4070, the 16 GB VRAM one. Would this be enough for the workflow in the video, or do you have any suggestions?

    • @risunobushi_ai · 3 months ago

      Hey there! I always tell people to try things out before buying them, and there's a ton of services out there that let you rent a GPU for a few cents an hour. I'd test out a few things and see what works for your own needs before spending big bucks on a piece of hardware. Either way, VRAM is one of the most important specs for running Stable Diffusion, more so if you want to run videos and animations.

  • @arong_ · 4 months ago

    Awesome stuff! Just wondering, how are you able to use IPAdapter Plus style transfer with an SD 1.5 model like you're using? I thought that wasn't possible, and it never works for me.

    • @risunobushi_ai · 4 months ago · +1

      Huh, I've never actually had any issue with it. I tested it with both 1.5 and SDXL when it was first updated and I didn't encounter any errors.
      The only thing that comes to mind is that I have collected a ton of CLIP Vision models over the past year, so maybe I have something that works with 1.5 by chance?

    • @arong_ · 4 months ago

      @@risunobushi_ai Ok, maybe. I remember Matteo mentioned in his IPAdapter update tutorial that it wouldn't work for 1.5, but maybe it works for some, and yes, maybe you have some special tool that unlocked it. Regardless, this is great stuff; I'm loving and learning a lot from your tutorials.

  • @nocnestudio7845 · 2 months ago

    How do you make Stable Diffusion stable??? Where do you show your animation???? I don't see it. It's standard AI blah blah blah... where do you show your full animation???

  • @dannylammy · 4 months ago

    There's gotta be a better way to load those key frames, thanks!

    • @risunobushi_ai · 4 months ago

      Yep there is; as I say in the video, it's either this or using a batch loader node that targets a folder, but for the sake of clarity in the explanation I'd rather have all nine frames on video.

  • @BoomBillion · 4 months ago · +3

    AI didn't eliminate graphic designers. It evolved them into 3D designers and AI graphics engineers.

    • @srb20012001 · 4 months ago · +2

      As well as insanely technical node tree composers!

  • @yrcnm · 13 days ago

    Depth Anything, OpenPose Pose ~~~ No results found

    • @risunobushi_ai · 10 days ago

      The ControlNet aux nodes sometimes act up during installation. You can try uninstalling the pack and installing it via command line.

    • @yrcnm · 9 days ago

      @@risunobushi_ai How to install from command line?
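
      (Editor's note: a command-line reinstall of the ControlNet aux pack usually amounts to removing its folder under `custom_nodes` and cloning it again. A sketch, assuming a bash shell with git and pip available; the `COMFYUI_DIR` variable, the `~/ComfyUI` default path, and the function name are assumptions, so adjust them to your install.)

```shell
# Reinstall ComfyUI's controlnet_aux custom-node pack from a shell.
# Pass your ComfyUI folder as the first argument, or set COMFYUI_DIR.
reinstall_controlnet_aux() {
    local dir="${1:-$HOME/ComfyUI}/custom_nodes"
    if [ ! -d "$dir" ]; then
        echo "ComfyUI custom_nodes not found at $dir"
        return 1
    fi
    cd "$dir" || return 1
    # Remove the broken copy, then clone a fresh one and its dependencies
    rm -rf comfyui_controlnet_aux
    git clone https://github.com/Fannovel16/comfyui_controlnet_aux
    pip install -r comfyui_controlnet_aux/requirements.txt
}

reinstall_controlnet_aux "${COMFYUI_DIR:-$HOME/ComfyUI}" || true
```

      After it finishes, restart ComfyUI so the Depth Anything and OpenPose preprocessor nodes get registered again.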

  • @reimagineuniverse · 4 months ago

    Great way to steal other peoples work and make it look like you did it without learning any skills

    • @risunobushi_ai · 4 months ago · +4

      If you're talking about the ethics of generative AI, we could discuss this for days. If you're talking about the workflow, I don't know what you're getting at, since I developed it myself starting from Matteo's.

    • @Caret-ws1wo · 2 months ago

      wahhh wahhh