That girl doing the hyperactive facial movements has a very particular talent set indeed. It's actually crazy how nicely it translates to the model, though, and the model's smile gives it an almost anime-like quality, alongside the over-emphasised faces that anime characters love to pull.
I will try some anime images and test how smooth it can be. :)
Do you think it can work on a 6GB GPU?
@knightride9635 Nah... don't put yourself in a difficult situation. You won't be in the mood to run AI with it.
@knightride9635 It can't.
An image-to-image implementation of this could really help create more diverse facial expressions for all of the AI generations. Definitely testing it out.
Yes
You only need the Resize and Concat nodes if you want to see the reference and output videos side by side.
Hi there, thanks a lot for this, I've been tinkering with it for hours. I just ran into an issue: when I bypass the Image Concatenate Multi node, the output I get from Video Combine is not the source image animated; I just get the source video again.
Man you are such a blessing! God bless you mate.
You too! God bless ❤️
Wow, thank you for the video. Can you do a tutorial on how to use the open-source Wunjo?
I always get an error on the first step 😂 That first custom node you mentioned shows 2 node conflicts. I'm new to this, so I don't know what to do next.
Bro, a good continuation would be to explain how to use it vid-to-vid. Can you explain this? I saw examples of guys doing vid-to-vid using LivePortrait, and it's awesome.
You're here at the right time 😉 Check out the Community tab on this channel.
Is the Gradio interface able to produce the same results as the ComfyUI interface? Is there any difference regarding speed of generation or quality of output? Or is the difference between the Gradio version and ComfyUI only the user interface?
Can you do a video on changing the mouth movement in a video? Like, input a video, then change the video to say what you want. This would be good in a standalone open-source format. Also, something that works on AMD, not just Nvidia.
What would be the best hardware specs to use these tools?
You're awesome, man! Awesome... this is the best Comfy channel on YouTube. Thanks for creating such useful content 👍💯
Glad it helps
One thing with Hedra, though, is you can create a talking avatar with just a single image and a WAV/MP3 file.
I wish we could do that in ComfyUI, but everything I have tried is either broken or needs a video to guide it.
The insane thing is that over a decade ago I had a tool called CrazyTalk that did what Hedra does... maybe not as well, and you had to mask the avatar and set the eyes and mouth yourself... but it did it.
Oh yes, that reminds me of some face-swap-like software from before.
@TheFutureThinker Yeah, there used to be tons of mobile phone ones... which worked... to a point, lol. We are spoiled by what we have now; just occasionally you find something we still can't do.
Sound + single image to talking avatar is one.
Local music generation with lyrics would be the other major one. Suno and Udio (though I found Udio really bad for my needs) have cornered the market there and are not about to release their weights anytime soon.
@DaveTheAIMad And SFX is also one.
@TheFutureThinker I think my other reply got deleted; it may have been the GitHub link and the fact that I am on a new channel. I found a possible one for avatar generation from sound and a single image, called EchoMimic by BadToBest. I'm at work, so I haven't tested it, and since the models it has you download are not safetensors, it makes me a bit nervous.
Great one 👍 I have been playing around with it this afternoon 😆
This is good! Thanks for sharing!
Glad you liked it!
This node could be used in many AI videos: education, fun, news, ...
I saw a lot of AI software introductory channels telling people to use software like this to create faceless channels doing news and sports reports. Haha, well, it can be creative.
@TheFutureThinker ❤💯
@TheFutureThinker And some YouTubers already use it for faceless channels. Including you 😊😊😊
Awesome. Is there any similar node for static images? I want to copy the exact expression from a picture.
Well, you can change Load Video to Load Image, I think, because in theory it handles image frames as the face pose.
I wonder how long till we can use this with a live feed webcam
I think this can be done. There's a live-cam node in Comfy; connect that one as the reference video, put a small batch of images into the LivePortraitProcess node, and trigger the rendering every second or so in ComfyUI (just like SDXL Turbo does). In theory, this is doable; see the sketch below.
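Conceptually, something like this per-second capture loop. This is only a rough sketch with OpenCV outside ComfyUI; the webcam index and the wiring into LivePortraitProcess are assumptions, not a tested node graph:

import time
import cv2  # OpenCV; assumed installed via pip install opencv-python

# Grab roughly one webcam frame per second as the driving input,
# the way a live-cam node could feed LivePortraitProcess.
cap = cv2.VideoCapture(0)  # 0 = default webcam (assumption)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # In ComfyUI, this frame (or a small batch of frames) would be
        # wired into LivePortraitProcess as the reference/driving video.
        cv2.imshow("driving frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break
        time.sleep(1.0)  # re-trigger about once per second
finally:
    cap.release()
    cv2.destroyAllWindows()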
Forgive me, but this tutorial appears to jump straight into ComfyUI. How do I get that?! It's like it's missing the first step. I'm very interested in this, but can anyone link to the missing part, the first part of this tutorial, please?
The link to the GitHub is in the video description.
And how to get that? It's already shown here at 01:40.
@crazyleafdesignweb First would be what ComfyUI is and how I get it, not showing the custom node already open in... what I'm assuming is ComfyUI, but I don't know.
Isn't it available as a standalone app? I want to use it but it seems too complicated for me.
It can load a JSON workflow file, but it can't be used commercially; it's non-commercial in nature.
@carterd2870 This is very unfortunate. I'd still like it to be something personal and not run in the cloud.
Standalone? Yes, you can use their GitHub project. But if even this one is too complicated for you, then open-source GitHub projects are not for you. You should wait for an app and buy it later.
Can you, and will you, implement emotion/speech controls in this?
It just depends on the driving video and the audio recording of your speech.
@TheFutureThinker My bad, I had multiple YouTube windows open and accidentally wrote this comment here, although it was meant for another video (so it really doesn't make sense here at all...). Cool stuff though, and thanks for replying :)
@d3nshirenji No worries :) Have fun.
This is really not at all like Hedra. The main thing Hedra does is animate a photo and lip-sync it with text-to-audio using an AI voice. This does none of that except animate a face, and it needs an input video even to do that.
Hi, thank you so much. How do I make a video longer than 8 seconds? My input video is 13 sec, but the output is just 8 sec. Why is that?
TypeError: LivePortraitProcess.process() missing 1 required positional argument: 'crop_info'
This is what I end up with, please help.
Great technique... but is there a node in Comfy to set a live video as the source... on a Mac? Thanks in advance!
Yes; I forget the GitHub name of the live-cam Comfy node, but it's possible to do so.
Awesome!
Glad it helps :)
How did you get just the target image?
Can this also be added to Automatic 1111?
This is awesome! Thank you!
Have fun😉
Keep getting this result; has anyone had the same error?
When loading the graph, the following node types were not found:
DownloadAndLoadLivePortraitModels
LivePortraitProcess
Nodes that have failed to load will show as red on the graph.
Basically, ComfyUI cannot recognize this custom node.
Same, any suggestions?
Same, have not found a solution yet.
@liangmen You need to install insightface, and that should fix it; there's a quick check below.
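A quick sanity check, run with the same Python environment ComfyUI uses (this is just a generic import test, not the node's own code):

# Missing insightface is a common cause of the LivePortrait nodes
# showing up red / "not found" when the graph loads.
try:
    import insightface
    print("insightface is installed:", insightface.__version__)
except ImportError:
    # Install into ComfyUI's own Python environment, for example:
    #   python -m pip install insightface onnxruntime
    print("insightface is missing")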
My generated output video only runs for 1 second. Is there an option to increase the generated video length?
Set the frame cap in the Load Video node. If you're using the Video Helper Suite loader, a frame_load_cap of 0 should load all frames.
I tried this and it's good, but as shown on the source GitHub page, it works on video too. Every time I tried this on a video, after processing for a few minutes it gives an error... any idea how to solve this issue?
Can you please make a video on how to run it in Colab?
Can we use it commercially??
What?! 😮 Damn, that's very good.
More to come
@TheFutureThinker Can't wait 💪😎
Combine an AI video generator, this one, MimicMotion, and some other new models, and I think it's possible to make a movie. I mean, not the camera-panning style.
@TheFutureThinker 😃 Yes, that would be awesome and for sure possible. AI stuff is getting better so fast.
@TheFutureThinker Well, I work in film VFX, and I believe that with a mix of traditional methods and AI, it's doable (I mean a movie with camera motion). For the last month or two I have been researching a workaround, because I want to start something... can you help, Ben? It would make the whole process much faster. I saw a lot of potential in MimicMotion and LivePortrait, but I tried v2v with LivePortrait and it's failing.
😆😆 Let's go make some fun faces.
have fun ! :)
This is pretty funny 😂
😂😂😂 Beautiful characters can have funny moments.
Is the data of your face being retained by these companies? Also, is this possible outside of ComfyUI?
The AI model can be run outside of ComfyUI. The custom node is like an extension pack in ComfyUI for running this AI.
@TheFutureThinker Will you be doing a tutorial on running this outside of Comfy? Or do you have something that helps explain it? Also, what about the part about the companies retaining your data, such as speech and face? Thank you for your earlier reply, btw.
@seanknowles7987 No, I don't. I can tell you are trying to implement this on the backend of an app? If you want to build an app around this, you should hire someone to do it. There's nothing free, like Jack Ma said. And InsightFace can sue you into bankruptcy if you use their library in your app's backend. Just beware of that.
Kijai just updated the node to run on CPU, if anyone is having problems with onnx like myself.
Ha! Nice. How is the performance running on CPU? Has anyone tried?
@TheFutureThinker It's pretty good, tbh. Just as fast as GPU, I'd say.
@Vashthareaper Nice. And I am trying the v2v method in LivePortrait. It looks like the Comfy custom node does not have this feature implemented; the sampler supports a single image only. I have to mod the code (rough shape sketched below).
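The rough shape of the mod I have in mind; animate_video and process_one are hypothetical names standing in for the node's current single-image sampler, not the real API:

# Hypothetical v2v workaround: run the existing single-image
# pipeline once per source frame instead of once per workflow.
def animate_video(source_frames, driving_frames, process_one):
    # process_one(source_image, driving_frame) stands in for the
    # node's single-image LivePortrait sampler.
    output_frames = []
    for src, drv in zip(source_frames, driving_frames):
        output_frames.append(process_one(src, drv))
    return output_frames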
@TheFutureThinker I did try CPU mode too, although I didn't compare the timing with GPU; I normally use GPU mode.
@TheFutureThinker Ohh, here I got the answer. Please do let me know if you get a workaround for v2v.
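For anyone curious what the CPU fallback mentioned at the top of this thread looks like at the onnxruntime level, a minimal sketch; the model file name is a placeholder, and this is a guess at the pattern, not Kijai's actual code:

import onnxruntime as ort

# Prefer CUDA when it is available, otherwise fall back to CPU.
preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

# "landmark_model.onnx" is a placeholder path, not a real file name.
session = ort.InferenceSession("landmark_model.onnx", providers=providers)
print("running on:", session.get_providers()[0])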
Hello and thanks. Will you publish a workflow on Patreon? Thanks!
It has the workflow included in the custom node pack. It doesn't need much customization for this AI.
03:06 here
@TheFutureThinker Yes, but you made some changes, and I wasn't able to reproduce them :)
Okay, I will create a new one there.
These AI programs need to be programmed to make the entire body move, not just the head.
Can it do video-to-video too?
Looks smooth, but it would be better to have an online version...
Awesome, bro... I've already subscribed and liked.
Could I ask for the workflow???
is this in real time?
It could be.
Animations like anime, where they just animate the eyes and mouth XD
✨👌😎😮😮😮😎👍✨
What about the rest of the body?
You need this: th-cam.com/video/q816HyZiw18/w-d-xo.html
workflow plz sir
Mentioned in the video, sir. If you pay attention and take in the information, instead of just being a zombie geek who clicks and downloads, you will be able to find it, sir.
@TheFutureThinker Sorry, sir!
And I'm here, waiting for the A1111 version :)))
What are you saying? The example with Jack Ma doesn't match AT ALL! WTF
04:20 - no drama here. Or he will find you.
That was a settings experiment where I compared the retargeting option on/off.
@TheFutureThinker I see your point; in context, it didn't really translate that well, because I was looking for the transformation to somewhat match the phonemes, but you're right if we consider timing only. Also, Jack Ma has already gone "missing" once; it can happen again 😂
@TsujioDragan-zz3bj Well... he is somewhere, in some place; from a legendary entrepreneur to a mystery person now. XD
AAAAAAAAA GIVE ME WORKFLOW
I know this is unrelated, but give the Quran a read.
Also: Mickey Mouse is pretty good as well.
Going to try it. Making a lookbook for YT and using this randomly to scare the crap out of people.
LOL haha yup :)
@TheFutureThinker Oh yeah! Testing it now, and, uh, it's fun :) Really low VRAM usage so far, but I'm not going with high frame counts atm.
Yup, it can generate 30 seconds in one queue, from my test. I am not sure if this thing can handle a longer video.