EchoMimic v2 in ComfyUI and Gradio
www.patreon.com/posts/echomimic-v2-ai-117438511
ComfyUI_EchoMimic
github.com/smthemex/ComfyUI_EchoMimic
great video bro ❤❤❤❤❤❤❤
Thanks 🔥
Please do OmniGen in ComfyUI.. but the small-size model
th-cam.com/video/DuY_wlU8FgM/w-d-xo.html?si=baeVhrhR_oGjfpva
Can it be driven in realtime from a microphone or an audio recording? Or is it only meant for non-realtime generation?
Thanks a lot. Bro, can you combine LivePortrait facial expressions and EchoMimicV2 hand poses?
Yes, definitely. That is a good idea to develop further for a talking avatar, and to create enhancements for the video.
Looks promising. Thank you for the video, very helpful!
Great video and Comfy as always! 🎉
@guile3d 👍👍
Hey Benji, Thanks for this one. I'm trying to get it to run on just the face to save time. Can V2 animate a cropped face? Or does that one have to be a full body?
Thank you! I'm getting a double ghost head in my result. Any idea why?
Nice tutorial, but it would be nice to let people know about the VRAM requirements... it's a VRAM-hungry model, so under 12 GB it takes ages. Appreciate the tutorial.
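For anyone unsure whether their GPU clears that bar, here is a minimal PyTorch sketch for checking total and free VRAM before launching the workflow. It assumes a CUDA-enabled PyTorch install; the 12 GB figure comes from the comment above, not from an official requirement.

```python
# Sketch: check total and free VRAM before running a heavy model
# like EchoMimicV2. Assumes a CUDA-enabled PyTorch build.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    free_b, total_b = torch.cuda.mem_get_info(0)  # bytes: (free, total)
    print(f"GPU: {props.name}")
    print(f"VRAM total: {total_b / 2**30:.1f} GiB, free: {free_b / 2**30:.1f} GiB")
else:
    print("No CUDA GPU detected; generation will be extremely slow on CPU.")
```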
Great video. Any idea for a realtime talking avatar?
LivePortrait can use a webcam as the input driving motion. You should look into that one.
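As a reference point, here is a hedged sketch of driving LivePortrait with footage recorded from a webcam, using the `-s` (source image) and `-d` (driving video) flags documented in the official KwaiVGI/LivePortrait repo. True live webcam streaming needs one of the community realtime forks; the filenames below are hypothetical placeholders.

```python
# Sketch: animate a portrait with LivePortrait using a webcam recording.
# -s / -d are the flags from the official LivePortrait inference.py CLI;
# my_face.jpg and webcam_clip.mp4 are hypothetical placeholder files.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "-s", "my_face.jpg",      # source portrait to animate
        "-d", "webcam_clip.mp4",  # driving motion captured from a webcam
    ],
    check=True,  # raise if inference.py exits with an error
)
```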
But why can I not get any output? It just runs and never stops.
Great helpful videos, thx a lot for all your effort!
Is there any lip sync AI that works well with creatures?
Hey! If you don't mind me asking, what version of python do you use for your ComfyUI? Thank you for the video!
I am using Python version: 3.11.8.
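If you want to confirm which interpreter your ComfyUI is actually running on (it may differ from your system Python, e.g. the bundled embedded interpreter on a Windows portable install), a one-liner does it:

```python
# Print the version of the interpreter that is actually running.
# Run this with ComfyUI's own Python (e.g. python_embeded\python.exe
# on a Windows portable install) to see what the app uses.
import sys
print(sys.version)
```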
Thanks.
Thank you for this great video!
What I still don't understand is, how do companies like HeyGen create their streaming avatars? They clone my appearance, my voice, and then through APIs they can produce a video response (text to video) in real-time. How is this possible? Is it just a matter of having the right hardware? Or does their secret technology come from the future? :)
Video enhancements by unsampling would be great, please!
Partial animations are a good idea because we can replace backgrounds.
Yes, there are other methods that can be used, like LivePortrait or AI video using v2v unsampling.
😢