Bro, you crushed this review. Crazy detailed, Much appreciate the time you put into it 🙏
Excellent tutorial. Covered the bread and butter within the first 2 minutes.
Thanks! Had no idea you could drop audio and image at the same time 👍🏻
Yeah, I wasn't sure either, but I figured it was kind of weird to have the face swapper model turned on and not be able to use it. So I just randomly tried it and 🎉
Looks like it really depends on what your original footage is. Sometimes it turns out great. Sometimes, not so great.
Man make a simple local installation guide. Really appreciate your videos
I already have one. The version doesn't change the installation process. th-cam.com/video/NAmC3SftSAk/w-d-xo.html
Thank you so much for such a fantastic tutorial. You make everything super simple. I was very confused before.
Thanks for the video man, very helpful. Glad to be 100th that put a like on your video!
Thank you! Much appreciated!
Another great video, thanks
Using the original wav2lip model instead of wav2lip_gan should improve accuracy when the subject is smiling or already talking. I prefer to use it in most cases. The downside is that gan looks slightly better, but after face enhancement you wouldn't notice anyway.
The wav2lip model doesn't do automatic mouth location adjustment, so if the mouth area is too large, you need to modify the mouth region box manually. Apparently, FaceFusion doesn't support those parameters yet.
Correct, sort of. You said 2 different things. It DOES adjust for the location, just NOT the size. Otherwise it would only work with static images.
how to do it manually?
Thanks, I found it useful. Agree, teeth closeups are a real pain.
Yep, this isn't the main feature of FaceFusion, so it's nice that it does as well as it does for now. Hopefully, over time, the lip sync model will get some love.
great video, interesting stuff
Great video, great voice. I like your intro. Wish I could replicate it....haha.
You can edit the FaceFusion config to start with occlusion. I did that too and it's convenient. Ivan in their Discord could let you know how.
Yeah, I keep forgetting that you can do that, but I prefer not to change the files, since edits can break when updating to a new version. What I do instead (though I just haven't bothered with it for the occlusion) is add flags to my run command in Terminal. Currently I have this:
cd facefusion && source venv/bin/activate && python run.py -o /Volumes/Prometheus/Downloads/ --execution-providers coreml --execution-thread-count 16 --face-swapper-model inswapper_128
And that way I always have it set to coreml, 16 thread count, and inswapper_128 for the model (though I believe that is back to being default for me now). So I could add in "--face-mask-types occlusion" and that would do the same thing as changing the config. And I just tried it to be sure, and yep, that works.
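For reference, here's a sketch of the full launch command with the occlusion flag folded in. The output path is specific to my machine, so substitute your own:

```shell
# Launch FaceFusion with CoreML, 16 execution threads, the
# inswapper_128 swapper model, and the occlusion face mask.
# The -o path below is my drive; swap in your own output folder.
cd facefusion && source venv/bin/activate && \
python run.py -o /Volumes/Prometheus/Downloads/ \
  --execution-providers coreml \
  --execution-thread-count 16 \
  --face-swapper-model inswapper_128 \
  --face-mask-types occlusion
```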
i found that if you also upload a picture that shows the bottom teeth it will fix that teeth issue up a little bit.
Hmmm, interesting. I'll have to test that out when I work on the updated video. Thanks for the info.
Cool explanation! Thank you.
On my older Intel Mac I can only do either face swap or lip sync at one time. Choosing more than one of these options simply disables the start button. I actually thought that was the case on all computers, since they also said on Discord that you could only run lip sync as a standalone pass, but here you are doing it all at once.
What I tried to do, since merely doing an enhanced face swap on a clip with dialogue usually screws up the mouth movement a bit, was to first do the face swap and then do a second pass with lip sync and the original unchanged audio that already should match the lip movements. But that didn’t work.
Hmm, that's a weird issue I've never heard of. At this point, I would look into using FaceFusion on something like RunDiffusion - which I made a video of more recently. Yes, it costs money (very little and pay as you go), but the benefits you'll get from it are incredible compared to using an outdated machine. They do have a free usage trial too.
Otherwise, bring up the issue on the Discord server in the MacOS channel and see if they have any ideas.
The support response people are getting on the FaceFusion Discord is "You can't use face_swapper at the same time as lipsync". So I don't think it can be called a weird issue. You are lucky to have it working like that.
I agree about RunDiffusion. I'm already a bit familiar with RunPod and I see that they have a template for it. So checking that out could be the way to go. Having said that, my old Mac does have 8 GB of video memory, so it does meet the requirements to run FaceFusion. Albeit not at a lightning-fast speed.
Will this work if there are 2 people in the Source?
Source or target? If you put 2 different people in the source, you will get an amalgam of the 2 faces swapped onto the target.
If you meant the opposite, where there are 2 faces in the target, then yes, you can definitely do it, but only one face at a time. However, you will lose the audio from the previous render each time, so you will have to edit it back in.
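One way to edit the audio back in is to copy the audio track from the original clip onto the final render with ffmpeg. This is just a sketch; the file names here are placeholders for your own files:

```shell
# Take the video stream from the final (second-pass) render and
# the audio stream from the original clip, and mux them into a
# new file without re-encoding either stream.
ffmpeg -i second_pass.mp4 -i original.mp4 \
  -map 0:v:0 -map 1:a:0 \
  -c:v copy -c:a copy \
  output_with_audio.mp4
```

Since both streams are copied rather than re-encoded, this runs in seconds and loses no quality.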
😍
liked & subscribed.
@@channel1535 thanks! Much appreciated!
Hi mate, did you try a cartoon face? Can FaceFusion swap it? Thanks.
If you're asking about traditional 2D animation, no, it won't work. It will usually work with 3D animation, but still not as well as with real faces. The more facial information FaceFusion can gather, the better the results. 2D has basically no information.
This is a great review. I just can't find anywhere whether FaceFusion can export full HD (1080p) videos.
It can export at whatever your original video size is, and even larger, as there are 2x and 4x frame enhancers that will upscale the video. So if you really wanted to, you could output an 8K video from a 1080p source.
@@shadyendeavor Woow. Thank you.
you forgot to link your previous install videos in your description.
My bad. I instead just added my FaceFusion playlist that has the installs and other tips and tricks videos on FaceFusion.
What is CoreML in execution providers?
Any idea how to save settings from session to session?
Yes and no. Nothing can be done in the app itself, but there are 2 other options. There is a config file that you can manually edit to set the options you want. I prefer not to mess with the files if I don't have to, and if I had to change it again, it's kind of a pain.
What I do is add flags to my run command every time I open the app, which applies those settings. Here is what my run command looks like:
python run.py -o /Volumes/Prometheus/Downloads/ --execution-providers coreml --execution-thread-count 16 --face-swapper-model inswapper_128
First, the -o flag changes the output directory. The other 3 settings should be obvious. There are flags for most settings; I just don't have access to the list right now on my phone.
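If retyping the flags every session gets tedious, one option is to save the command as a small launcher script. This is a sketch; the script name, install path, and flag values mirror my setup, so swap in your own:

```shell
#!/bin/sh
# start_ff.sh - hypothetical launcher that bakes in preferred
# FaceFusion settings so they carry over between sessions.
cd "$HOME/facefusion" || exit 1   # adjust to your install path
. venv/bin/activate               # activate the virtual environment
python run.py \
  -o /Volumes/Prometheus/Downloads/ \
  --execution-providers coreml \
  --execution-thread-count 16 \
  --face-swapper-model inswapper_128
```

Make it executable once with `chmod +x start_ff.sh`, then launch with `./start_ff.sh` each time.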
How do I get it on my laptop?
Follow these instructions in this video.
th-cam.com/video/ebf1On3OsN8/w-d-xo.html
Please share the link
Link to what?
Is Google Colab available?
Currently, yes.
@@shadyendeavor Please share the link; it should be the latest FaceFusion 2.3.0 on Google Colab.
An install guide for 2.3.0 would be nice :)
The installation is the same regardless of the version. As for just upgrading, make sure you're in the FaceFusion directory and have the venv activated. Then...
git pull
python install.py
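Put together, the whole upgrade sequence might look like this. The directory name and venv path are assumptions from a default install; adjust them to your setup:

```shell
# Upgrade an existing FaceFusion install in place.
cd facefusion            # enter the FaceFusion checkout
source venv/bin/activate # activate its virtual environment
git pull                 # fetch the latest code
python install.py        # re-run the installer to update dependencies
```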
@@shadyendeavor Lol, I just realized I was on 2.0.0, and now it's up to 2.1.1. I bet I have to do the git pull again to get to 2.3.0, am I right? :)
@@Avalon19511 yes, exactly. The other option is to do a completely clean install by deleting the FF directory and then installing from scratch. It would still be the same procedure as the first time you installed it.
And 2.1.1? That's like 5 years old in the AI world. 😛
@@shadyendeavor We're all good, and thanks for your help. I had downloaded the wrong version, but all is well now. Thanks again :)
@@Avalon19511 Good to hear and you're welcome
What's your GPU, please?
I'm on an M1 Mac
@@shadyendeavor Your output looks so quickly generated. Is that an M1 Max?
@@Robert.Lachowski550 Just a basic M1 Mini 16GB. I speed up the video a lot so people aren't bored watching it render. But that's why I keep Terminal open, so you can see the actual time and speeds I'm getting.
Lol, that looks like shit
Thank you for watching!
awesome video!