ToonCrafter - Can A Diffusion Model Really Change The Industry? (An Honest Review)
- Published Jun 5, 2024
In today's video, we're diving into the fascinating world of the ToonCrafter diffusion model. This recently published research paper introduces a new way to generate captivating cartoon interpolations using just two image frames.
If you want to download the ComfyUI node and try it out, pay attention to what you download; the demo workflow is linked here : / 105635630
Full review with opinions and more experiment results : / 105681816
Diffusion models mentioned : • 10 Diffusion Models Fo...
ToonCrafter :
github.com/ToonCrafter/ToonCr...
huggingface.co/Doubiiu/ToonCr...
ComfyUI-DynamiCrafterWrapper :
github.com/kijai/ComfyUI-Dyna...
Join us as we explore the showcase of the ToonCrafter model, witnessing its ability to generate seamless in-between motions and actions in the form of stunning videos and animations. From subtle movements to vibrant light effects, this diffusion model works its magic, enhancing the overall animation.
While the ToonCrafter excels in replicating simple motions like character walking and maintaining consistent shadows, we also discuss its limitations when faced with more complex movements.
One of the key features of this model is its use of sketch guidance, similar to ControlNet in Stable Diffusion. By generating a line-art or soft-edge outline of characters and backgrounds, ToonCrafter predicts the color of each object and fills in the final video output with vibrant coloration.
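To make the sketch-guidance idea concrete: ControlNet-style preprocessing starts from an edge or line-art map of the frame. Below is a minimal, illustrative sketch of producing such an outline with Pillow; this is not ToonCrafter's actual preprocessor, just a rough stand-in for the kind of soft-edge input it consumes.

```python
from PIL import Image, ImageFilter, ImageOps

def edge_outline(img):
    """Rough line-art-style edge map, loosely akin to a soft-edge
    ControlNet conditioning image: dark lines on a white background."""
    gray = img.convert("L")                 # grayscale first
    edges = gray.filter(ImageFilter.FIND_EDGES)  # bright edges on black
    return ImageOps.invert(edges)           # invert -> sketch-like look

# Illustrative input; in practice you would open a real key frame
sketch = edge_outline(Image.new("RGB", (64, 64), "white"))
print(sketch.mode, sketch.size)
```

A real pipeline would feed an outline like this alongside the two key frames so the model knows where shapes belong before it fills in color.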
Although the current resolution limit for this model is 512 pixels, it showcases the immense potential of diffusion models in creating captivating cartoon-style animations from just a few key frames.
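Given the 512-pixel limit mentioned above, key frames usually need to be downscaled before they go in. A minimal sketch, assuming Pillow, of resizing a frame so its longer side is 512 while keeping the aspect ratio (the exact resolution ToonCrafter expects may differ; check the repo):

```python
from PIL import Image

def prep_frame(img, size=512):
    """Resize a key frame so its longer side equals `size`,
    preserving aspect ratio (illustrative helper, not from the repo)."""
    img = img.convert("RGB")
    scale = size / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)

# Example: a 1920x1080 frame becomes 512x288
frame = prep_frame(Image.new("RGB", (1920, 1080)))
print(frame.size)  # (512, 288)
```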
If you're an animation enthusiast or interested in exploring the innovative ToonCrafter diffusion model, this video is a must-watch.
If you like tutorials like this, you can support our work on Patreon:
/ aifuturetech
Discord : / discord
I noticed that it works well with images extracted from animations, but when you feed it two separately generated images, even if they are consistent with each other, the results are not the same... It seems it was trained on animation videos and is not yet very useful for generated images.
Exactly. So for creators using other images, it won't work.
And if the images are extracted from anime, I think there's no point in using this AI to regenerate them.
I keep getting this error: "Input type (c10::Half) and bias type (float) should be the same". Can somebody help, please?
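For context, that PyTorch error means the input tensor is half-precision (fp16) while the layer's weights and bias are float32. A minimal sketch reproducing it and one common fix; the Conv2d layer here is purely illustrative, not ToonCrafter's actual module:

```python
import torch

conv = torch.nn.Conv2d(3, 8, 3)           # weights and bias default to float32
x = torch.randn(1, 3, 16, 16).half()      # half-precision (fp16) input

try:
    conv(x)                               # dtypes disagree -> RuntimeError
except RuntimeError as e:
    print("reproduced:", e)

# One fix: make the dtypes agree, e.g. upcast the input to float32
# (alternatively, cast the model itself to fp16 where supported)
out = conv(x.float())
print(out.dtype)  # torch.float32
```

In a ComfyUI setup this usually means a precision/dtype setting on the node (fp16 vs fp32) does not match the loaded checkpoint, so aligning those settings is the first thing to try.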
I was looking into it last night. I won't be able to use it with a 4070 ti then.😢 Like you said, I hope that we get optimizations in the future
try the Hugging Face space : huggingface.co/spaces/Doubiiu/tooncrafter
We can't test reference-based sketch colorization or sparse sketch guidance as of now, right?
Nope
OK, thank you. I hope it is not cherry-picked. ❤❤
🎉
I don't think it helps much for making anime movies.
In other words it is called the gimmick diffusion🤭
People over hyping things again as always.
Yup marketing