Man, please continue with this clear format, you're the king!
This is yet another excellently constructed and communicated tutorial. Top notch stuff, sir.
Thanks 😊
WOW!! We needed this so much! Especially when we don't have an advanced open source video model yet! The results are amazing: they look natural and even render quickly. I appreciate all your efforts in making such amazing content for us, Pixaroma! I am amazed by the hard work you put into every video, and I can only imagine how much energy you have to devote just to get the voice-over and timing right. Thank you Pixaroma!
Thank you 😊
Very useful tutorial and congratulations on 20K subscribers 😀
Almost there, 85 more and I am there 😊 thanks
Fantastic tutorial! Thank you, and congratulations on your 20K subscribers. Time to celebrate. Well done.
Thank you very much 😊
Hey!
I'm using your method but with a target video instead of a photo.
Everything works, but I would like to add the Expression Editor (PHM) on top of the video, for comedic effect :)
Problem is, when I insert it into the chain it freezes at the first frame of the video.
Is there a way to "increment" the frame number as it goes?
Did you try the command field? I included different example workflows on Discord; maybe something isn't connected right.
I'm sorry, I'm an idiot.
Of course I just need to add another ALP module to modify the driving video and then use that modified driving video on the target.
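For anyone who hits the same freeze, here is a purely conceptual Python sketch of that ordering; every function in it is a hypothetical stand-in for a node, not a real ComfyUI API. The point is only the order of operations: apply the expression edit to the driving video first, then feed the edited driving video into the pass that animates the target.

```python
# Conceptual sketch only: all functions are hypothetical stand-ins for nodes,
# not ComfyUI APIs. The point is the ordering of the two passes.

def expression_edit(frame, params):
    # Stand-in for the Expression Editor: returns an edited driving frame.
    return {"frame": frame, "expression": params}

def live_portrait(target, driving_frames):
    # Stand-in for Advanced Live Portrait: one output frame per driving frame.
    return [{"target": target, "driving": f} for f in driving_frames]

driving_video = ["d0", "d1", "d2"]  # placeholder driving frames

# Edit the driving video first, frame by frame, instead of editing the target
# (editing only the target is what freezes everything on the first frame).
edited_driving = [expression_edit(f, {"smile": 0.8}) for f in driving_video]

# Then drive the target with the already-modified driving video.
result = live_portrait("target.png", edited_driving)
print(len(result), "output frames")  # 3, one per driving frame
```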
Well done Pix. Useful stuff 🙏
thanks 🙂
Nice tutorial as always, I was using Live Portrait in Forge, nice to have it in ComfyUI
Nice breakdown of the process! 📚🎥 The live portrait and expression tips were super useful, thanks!
Thanks Uday 😊
Nice thank you for sharing.
Great explanations, thank you
Fantastic tutorials that I can enjoy thanks to YouTube's dubbed audio. Thank you Pixaroma and thank you YouTube!!
you are welcome 🙂
I’m looking forward to learning how to set up custom nodes and fine-tune animations for both realistic and stylized effects.
Another amazing tutorial. Thank you sensei 🙏
Thanks 😊
Hey! A few months ago, I gave Live Portrait a shot on my YT channel, and I was honestly blown away by the results. Took it a step further by re-targeting Hedra animations to Live Portrait, and everything ran super smoothly in ComfyUI - and this was on my basic RTX 3060! 😱 If I could wish for one thing, though, it'd be the ability to re-target full-body animations or slap on some mocap presets. That would level things up! 🙌 Also, quick thought - YouTube's auto-dubbing feature is cool, but it'd be amazing if they could match TTS voices to the original voiceovers. It's still not quite there yet (even on your channel), but fingers crossed they improve it soon! Anyway, great video as always - keep rocking and having fun! 🤘
Thank you, I am sure in the future new tech will appear that makes things easier; this is just the beginning.
Great video. Is there a way to have a live camera as the source driving video? Basically to live animate an image and be able to stream the preview window? 🤔
I saw someone do it; search for "webcam capture node comfyui". I saw something with WebcamCaptureCV2, maybe you can find more info.
@pixaroma I'll look into that! Thanks!
@pixaroma is this taxing on the GPU?
No, it works OK even on lower VRAM
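Following up on the webcam capture suggestion above, here is a minimal OpenCV sketch of the kind of loop such a node is usually built on. This is plain Python for illustration and an assumption about the approach, not the actual code of the WebcamCaptureCV2 node; the preview window is just one way to see the frames.

```python
import cv2

# Minimal webcam capture loop; device index 0 is usually the default camera.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open webcam")

frames = []  # frames collected here could later be fed to a workflow as the driving video
try:
    while True:
        ok, frame = cap.read()  # frame is a BGR numpy array
        if not ok:
            break
        frames.append(frame)
        cv2.imshow("webcam preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop capturing
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```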
Hi, thank you for sharing your work with us. I'm Brazilian and this video has a second audio track (Portuguese and some others), which helps me understand. Anyway, I have a simple question: is it possible to create a notebook with this ComfyUI workflow to be used in Google Colab? I ask because I do not have a good computer or a good understanding. Anyway, I would like to learn how to create notebooks. Thanks again
Thanks, unfortunately I don't have knowledge of Colab, so I don't know how to do that, but probably someone already did it; maybe you can find some workflows on Reddit or on Civitai.
Thanks, well explained.
Top notch...thanks
Could this be used with live video input & output for virtual avatars?
For example, when streaming, on video calls, etc?
Essentially allowing you to use any face / cartoon as an avatar instead of only ones already in an avatar application?
I think I saw someone using it, so it must be possible; just not sure what nodes it needs.
How can we adapt this workflow for a character in motion or not facing the camera? In your opinion, is there any workflow/node that can do that?
Not sure if it works; the model looks for faces. It can handle a slight angle, but if it is too much it probably cannot recognize the face. So it is hard to move things at certain angles, and things might get distorted.
@pixaroma It's true, maybe training the model with a frontal face and adding it afterwards...
Share the specifications of the machine you're working on. I'm a bit curious about it.
This is what I have
- CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
- GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
- Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
- 128 GB RAM Corsair Vengeance, DIMM, DDR5 (4x32GB), CL40, 5200MHz
- SSD Samsung 980 PRO, 2TB, M.2
- SSD WD Blue, 2TB, M2 2280
- Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
- Cooler Procesor Corsair iCUE H150i ELITE CAPELLIX Liquid
- PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
- Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail
- Wacom Intuos Pro M
@pixaroma A very nice setup.
Hi!
Is there any way to do clothes changing in ComfyUI with FLUX?
Check episodes 19 and 23; you could probably get something with both of those methods.
@pixaroma I have seen every one of your episodes and they are really cool!
But I mean changing into clothes that we choose 😅
Thank you for your work!
I don't have a video, but search for "In-Context LoRA", or for "try on comfyui".
thanks !
Great as usual. Can you do an episode on Sana from NVIDIA?
I am looking into it, will do some research and probably a video too if all works out
Is there a way to make it higher res, though?
Not sure, it probably needs a better model. You could maybe use a video upscaler afterwards to make it bigger, or you can save the frames and maybe those can be upscaled. I only recently started to use it, so I need to do more research.
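On the save-the-frames route, here is a small OpenCV sketch that dumps every frame of a rendered clip to PNG and resizes it 2x. The file names are assumptions, and cv2.resize is only a stand-in; an AI upscaler (or an upscale node back in ComfyUI) would give better detail.

```python
import os
import cv2

# Dump every frame of a rendered clip to PNG and upscale it 2x with Lanczos.
# cv2.resize is just a stand-in here; an AI upscaler would do a better job.
video_path = "liveportrait_output.mp4"   # assumed filename, adjust to your output
out_dir = "frames_upscaled"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    big = cv2.resize(frame, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4)
    cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.png"), big)
    idx += 1
cap.release()
print(f"Wrote {idx} upscaled frames to {out_dir}")
```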
I don't know why so many videos describing the Advanced Live Portrait node call them motions when they are called expressions. The node creates the in-betweens by linearly interpolating between the expressions in the numbered expression nodes; they are not motion nodes.
The node creator described it in their example as motion 1, motion 2, and so on; I just tried to keep it consistent with what they have.
@pixaroma I don't think it's a good idea to just continue with a very bad terminology. "Interpolating motion between motions" is very hard, I would say almost impossible, to understand, or it needs a very in-depth explanation of what that is supposed to mean.
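Whatever the fields end up being called, the in-betweening itself is just linear interpolation between two sets of expression values. A tiny generic sketch of that idea (not the node's actual code, and the parameter names are made up):

```python
def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

# Two made-up expression parameter sets (not the node's real parameter names).
expr_1 = {"smile": 0.0, "eye_blink": 0.0, "head_yaw": -10.0}
expr_2 = {"smile": 1.0, "eye_blink": 0.3, "head_yaw": 10.0}

# Generate the in-between "frames" by interpolating every parameter.
steps = 5
for i in range(steps + 1):
    t = i / steps
    frame = {k: lerp(expr_1[k], expr_2[k], t) for k in expr_1}
    print(f"t={t:.2f}", frame)
```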
Comment! Great as always.
Thank you 😊
What if the head turns left or right?
You can turn it, but not too much; it depends on the image. You cannot get a complete side view, but you can still rotate a little.
Creepy... lol :) Good video, thank you!
❤❤
👍
Awesome. Thanks!