ToonCrafter - Can a Diffusion Model Really Change the Industry? (An Honest Review)

  • Published 5 Jun 2024
    In today's video, we're diving into the fascinating world of the ToonCrafter diffusion model. This recently published research paper introduces a new way to generate captivating cartoon interpolations using just two image frames.
    If you want to download the ComfyUI node and try it out, pay attention to what you download; the demo workflow is linked here : / 105635630
    Full review with opinions and more experiment results : / 105681816
    Diffusion models mentioned : • 10 Diffusion Models Fo...
    ToonCrafter :
    github.com/ToonCrafter/ToonCr...
    huggingface.co/Doubiiu/ToonCr...
    ComfyUI-DynamiCrafterWrapper :
    github.com/kijai/ComfyUI-Dyna...
    Join us as we explore the showcase of the ToonCrafter model, witnessing its ability to generate seamless in-between motions and actions in the form of stunning videos and animations. From subtle movements to vibrant light effects, this diffusion model works its magic, enhancing the overall animation.
    While the ToonCrafter excels in replicating simple motions like character walking and maintaining consistent shadows, we also discuss its limitations when faced with more complex movements.
    One of the key features of this model is its use of sketch guidance, similar to ControlNet in Stable Diffusion. By generating a line-art or soft-edge outline of the characters and backgrounds, ToonCrafter predicts the color of each object and fills in the final video output with vibrant coloration.
    Although the current resolution limit for this model is 512 pixels, it showcases the immense potential of diffusion models in creating captivating cartoon-style animations from just a few key frames.
    If you're an animation enthusiast or interested in exploring the innovative ToonCrafter diffusion model, this video is a must-watch.
    If you like tutorials like this, you can support our work on Patreon:
    / aifuturetech
    Discord : / discord
  • Science & Technology

Comments • 13

  • @JamBassMusic
    @JamBassMusic months ago +1

    I noticed that it works well with images extracted from animations, but when you try it with two different generated images, even if they are consistent, the results are not the same... It seems it was trained on animation videos, so it's not quite useful yet for generated images.

    • @TheFutureThinker
      @TheFutureThinker  months ago +1

      Exactly. So for creators using other images, it won't work.
      And if the images are extracted from anime, I think there's no point in using this AI to regenerate them.

  • @chinyewcomics
    @chinyewcomics 26 days ago

    I keep getting this error: "Input type (c10::Half) and bias type (float) should be the same". Can somebody help, please?
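    A note for readers hitting this: the error is PyTorch reporting that the input tensor and a layer's weights/bias are in different precisions (fp16 vs fp32), which commonly happens when a workflow loads fp16 model weights but feeds fp32 images, or vice versa. A minimal sketch of the usual fix, with hypothetical layer and tensor names (not ToonCrafter's actual code):

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical reproduction: a conv layer in float32, input in float16.
    # Mixing the two dtypes in a forward pass raises the mismatch error.
    conv = nn.Conv2d(3, 8, kernel_size=3)        # weights/bias are float32
    x = torch.randn(1, 3, 64, 64).half()         # float16 input

    # Fix: cast the input to match the module's parameter dtype
    # (or cast the module with .half()/.float() to match the input).
    param_dtype = next(conv.parameters()).dtype  # torch.float32 here
    y = conv(x.to(param_dtype))                  # dtypes now agree
    ```

    In ComfyUI-style setups the same idea usually means picking a consistent precision option (fp16 vs fp32) for both the loaded model and the image inputs.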

  • @RhapsHayden
    @RhapsHayden months ago +1

    I was looking into it last night. I won't be able to use it with a 4070 Ti, then. 😢 Like you said, I hope we get optimizations in the future

    • @TheFutureThinker
      @TheFutureThinker  months ago

      try the Hugging Face space : huggingface.co/spaces/Doubiiu/tooncrafter

  • @noobplayer-jc9hy
    @noobplayer-jc9hy months ago +1

    We can't test reference-based sketch colorization or sparse sketch guidance as of now, right?

    • @TheFutureThinker
      @TheFutureThinker  months ago

      Nope

    • @noobplayer-jc9hy
      @noobplayer-jc9hy months ago

      Ok, thank you. I hope it's not cherry-picked.❤❤

  • @reaperhammer
    @reaperhammer months ago

    🎉

  • @kalakala4803
    @kalakala4803 months ago

    I don't think it helps much for making anime movies.

    • @TheFutureThinker
      @TheFutureThinker  months ago

      In other words, you could call it gimmick diffusion 🤭

  • @T0b1maru
    @T0b1maru 24 days ago

    People overhyping things again, as always.