Check Out My Multiplayer Survival Game Course:
smartpoly.teachable.com
Wow! I never knew you could do this with Metahumans. Thanks so much for the video.
Can't wait to see what they do with this in the future
Yes, a cut scene video would be great. Newbie here!
This is pretty great when you combine it with eye/head tracking and a basic idle speech animation. Now I just have to find an easy way to apply it to my non-metahuman models...
My route here is to build a retargeting script from MetaHuman to ARKit shapes.
I'm going to build this script in Maya, but I'm guessing it could also be done with a Blueprint.
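For anyone going the same route, here is a minimal, purely illustrative Python sketch of the mapping idea: drive each ARKit blendshape from a weighted sum of MetaHuman curve values. The curve names and weights below are placeholders for illustration, not the actual MetaHuman rig's curve list.

```python
# Illustrative sketch only: remap MetaHuman facial curve values onto ARKit blendshapes
# through a hand-built weight table. All curve names and weights are placeholders.

# Each ARKit shape is driven by a weighted sum of one or more source curves.
METAHUMAN_TO_ARKIT = {
    "jawOpen":        [("CTRL_expressions_jawOpen", 1.0)],           # placeholder curve name
    "mouthSmileLeft": [("CTRL_expressions_mouthCornerPullL", 0.8),   # placeholder curve names
                       ("CTRL_expressions_mouthSharpCornerPullL", 0.2)],
}

def retarget_frame(source_curves):
    """Convert one frame of MetaHuman curve values into ARKit blendshape weights."""
    arkit = {}
    for arkit_shape, sources in METAHUMAN_TO_ARKIT.items():
        value = sum(source_curves.get(name, 0.0) * weight for name, weight in sources)
        arkit[arkit_shape] = max(0.0, min(1.0, value))  # clamp to ARKit's 0..1 range
    return arkit

# Example: one frame of (placeholder) MetaHuman curve values.
frame = {"CTRL_expressions_jawOpen": 0.6, "CTRL_expressions_mouthCornerPullL": 0.3}
print(retarget_frame(frame))  # prints the resulting ARKit weights for that frame
```

The same table-driven approach works as a Maya script or inside an Anim Blueprint; the tedious part is filling in a sensible weight table for all 52 ARKit shapes.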
Have you found anything?
@DrDealer I already wrote a script for it.
@Puppetier3D Don't suppose I could grab it off you? I've been attempting it through an Anim Blueprint (curve value to morph), but getting the sheer number of MetaHuman bones and blends to work smoothly with ARKit's 52 isn't progressing well, lol.
Yep, I'm stuck here too. The lips-to-audio is cute, but the face rig kills the anim sequence, and when the face rig gets deleted nothing happens. GPT o1 keeps sending me to C++ for a workaround, or to rig graphs, or constraints.
I would love to see how to make this into a cut scene, you should definitely do that.
This is a huge time saver, thanks. Epic should also make that plugin compatible with Daz 3D and Character Creator.
Yep an interaction video would be choice Sir, thank you
can you add other layers of animation to actually move the face? the lip sync is good enough, but how do you add the facial expressions (animated by hand or motion captured through other methods)? that would be interesting to see.
In your Level Sequence, click on the Metahuman's Face track and go to the "Bake To Control Rig" option. In the menu, select "Face_ControlBoard_CtrlRig". An "Options For Baking" popup menu will appear. On the popup menu, make sure that you tick the "Reduce Keys" checkbox. Otherwise, you end up with a lot of keyframes. Then press "Create". The animation will then be converted into keyframes on the Face control rig and you can modify eyes, eyebrows, etc.
@OddStops Oh I see, that's interesting, I will definitely try that! Thank you!
Yes.. We need tutorials like that
Incredible! Yes, tut on NPC dialogue.
@SmartPoly just waiting on your ultimate FPS course.... Anytime next year? Thanx for all the videos so far. Brilliant stuff dude
hopefully yes
@SmartPoly I have that course, yes, the survival one. But my main focus would be on the FPS. I'm dying to see what it's gonna be like. Thanks for the response. Keep well, bro.
that's the craziest intro ever
Great video! Thank you so much!)
Nice!!! Really appreciate it! 🙏
Any time!
Great work. It would be even more powerful if it could alter the tone of the voice to fit the age of the MetaHuman.
This would have been funnier if you had made a copy of 'Dr. Girlfriend' from The Venture Bros. using the Metahuman customization option and uploaded your voice into it. Thanks for the tutorial!
Thank you for this tutorial
No problem, glad it helped!
Great tutorial. I'd like to use this audio-to-facial animation feature in play mode as well. Do you think it would be possible to switch the audio file and generate animation using a blueprint during gameplay?
I don't think there's runtime support yet. But you can play back the animation in your game.
Looks good, make him blink once in a while. That makes a difference.
Make a video on Metahuman clothing
+1
Thank you for the video. Can this be done fully at runtime in play mode, without processing the audio clip in the editor? Imagine recording an audio clip in a shipping build and getting the character to lip sync to it.
I don't think they've exposed it to runtime yet
Hi, great video, thank you for sharing your knowledge. Is there a possibility to get some eye movement?
Not from an audio file. But you can always add keyframes and modify the animation.
@SmartPoly ❤
they should expose this to any skeletal mesh. if the audio can drive blendshapes, we could assign it to any character with blendshapes and have auto lip sync
that would be great
Could you just clone it? @SmartPoly
This isn't facial capture - it's mouth interpretation.
Acting is the whole face. Would like to see it used in combination with facial capture to see if it can help with the mouth to some degree.
Imagine this next to 1980s games. I can't believe it; in my mind this is fantastic stuff.
It's a very good tutorial. I would like to know how long the dialogue can be.
Nice!
thank you
Please a tutorial on how to use it in game dialogs with NPC
Amazing men!😅
New courses pls
hopefully soon
Will it be the FPS?
@SmartPoly Can you make a course similar to Pepsiman? Making a small game like that is a very interesting idea, and it would be very helpful for beginners.
Is it possible to stream microphone input into a UE 5.5 application and handle it with a MetaHuman?
Would love to know how to get this into a scene or in the game
Every time I enable the MetaHuman plugin, it crashes at 75% on the splash screen while trying to load the plugin.
Thanks, but the problem is that in the sequencer you can only animate the face. If you try to make the character turn his head at this time, everything resets and becomes crooked.
I do have the latest version of Unreal, but I cannot change the quality; only cinematic quality is available. How can I fix it?
Thank you for the video. Is there a way to create metahumans based on photos of real humans? Does the metahuman creator allow extreme adjustments to hair, body etc.? 👍
Great video! However, I can't seem to put it into the scene. How do I do that? I've tried "export animation" and "export level sequence".
Hi, any idea how we can make it real-time? I want to trigger it when the audio file changes.
The MetaHuman plugin is available for Windows only. Unfortunately, you can't install it on Mac :(
I made the face animation from audio, but the audio would not play at runtime; the animation ran. In a Blueprint I added Play Sound 2D with my audio and it worked. Any suggestions? Very useful tutorial!
Curious if this will work with CC4 morph targets.
Did metahumans get rid of simple_face_CtrlRig in 5.5?
Thank you! Is it possible to connect ChatGPT-4 Vision to Unreal Engine (MetaHuman) to get its reaction in real time? If so, how?
There is a plugin you can use to link to the ChatGPT API.
@SmartPoly thanks.
Noob here, I mean complete noob. How do I get the MetaHuman hair as well as the face?
Obviously this guy isn't that Smart or he would have shown how to do it. Who wants a bald person's head?
Edit: I downloaded the Quixel Bridge plugin again from the Quixel webpage. I didn't think of that, since I didn't have to do it again for UE 5.3 and 5.4. I'll leave this here in case anyone makes the same mistake as me.
I'm on UE 5.5 now as of December 2024, and the Quixel Bridge plugin doesn't seem to be there anymore. I'm trying to figure out how to get my MetaHuman likeness into UE 5.5, but I keep hitting roadblocks saying to activate it in the plugins. I'll try doing it in UE 5.4, which still seems to have the Bridge option.
Can you make a system that works like that in-game, when a player talks in voice chat?
voice chat lip sync?
@SmartPoly Yeah, imagine playing any multiplayer game and seeing this interpolation while someone's talking, instead of using a pre-recorded audio file.
Gaming technology has faaar surpassed the industry's story telling abilities
Pretty wild that it took Epic nearly 20 years to do what Valve did with the Source Engine for Half-Life 2, and they had to use machine learning to do it.
Does anybody know if there is a way to do this at runtime?
There might be if the MetaHuman Performance code were open source, like most UE code is, but unfortunately, because it relies on third-party libraries such as OpenAI Whisper, the source code isn't available; Epic doesn't have the rights to distribute it. There is one way, but it's not applicable for real-time games: you can have a second UE editor process open, or run UE in headless mode, do the MH performance generation in the second editor instance, and migrate the results to your original project at runtime. That actually works pretty well if you're doing Live Link stuff, but it won't work in a packaged game.
@GlassesAndCoffeeMugs I see. Yeah, I'm working on a solution that should work at runtime using a server. I send a WAV generated with TTS at runtime to the server; the server transcribes the WAV, converts it into phonemes, then into visemes with timestamps. On the Unreal side, I create an anim montage at runtime from the server response and insert the visemes as notifies at the correct times. From there, I just have to map the notifies to the visemes that already come with MetaHuman and use that to drive the face rig. I'm close to having this working.
Yep, I did more or less the same thing for one of my contracts: I set up a WebSocket server in one UE instance, ran Python code to generate the MH performance, then sent it back to the primary project and created a runtime montage for playback. It works fine at runtime without any hitches. In our case, though, we just migrated entire assets when they were finished processing, since we aren't packaging the project and that was easier. Your solution could potentially work in a packaged game, since you're sending viseme curves/values rather than the actual generated .uasset files. Good luck.
@GlassesAndCoffeeMugs Thanks. Actually, I'm sending just a JSON with the visemes as text, not the actual visemes, just the names, and the idea is then to map those names to the actual visemes in Unreal. Your approach is interesting too; it was actually the first solution that came to my mind, but it was just too much trouble and not very elegant, at least for what I want to do. Thanks, and good luck to you, too.
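For readers following along, here is a minimal, purely illustrative sketch of the pipeline described in this thread: the server returns a JSON list of viseme names with timestamps, and the client maps those names to the poses/notifies used for a runtime montage. The JSON shape, the viseme names, and the pose names are assumptions for illustration, not either commenter's actual code or the shipped MetaHuman assets.

```python
import json

# Hypothetical server response, as described above: viseme names with timestamps (seconds).
server_response = json.loads("""
[
    {"viseme": "AA", "time": 0.00},
    {"viseme": "FF", "time": 0.18},
    {"viseme": "OU", "time": 0.35}
]
""")

# Hypothetical lookup from a viseme name in the JSON to whatever pose/notify asset the
# project uses on the Unreal side (placeholder names, not the shipped MetaHuman visemes).
VISEME_TO_POSE = {
    "AA": "Viseme_AA_Pose",
    "FF": "Viseme_FF_Pose",
    "OU": "Viseme_OU_Pose",
}

def build_notify_track(events):
    """Turn the server's viseme list into sorted (time, pose) pairs, i.e. the notifies
    that would be inserted into a runtime anim montage."""
    track = []
    for event in sorted(events, key=lambda e: e["time"]):
        pose = VISEME_TO_POSE.get(event["viseme"])
        if pose is not None:  # skip visemes the project has no pose for
            track.append((event["time"], pose))
    return track

print(build_notify_track(server_response))
# [(0.0, 'Viseme_AA_Pose'), (0.18, 'Viseme_FF_Pose'), (0.35, 'Viseme_OU_Pose')]
```

Sending only names and timestamps like this keeps the payload engine-agnostic, which is why it can work in a packaged build where migrating generated .uasset files would not.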
Bro, what could be added in Unreal Engine 5.6?
Yeah
0:01 Wasn’t the face reveal I expected
lmao
00:00 is this a face reveal?!
yep
The lips aren't even matching, which makes this basically useless. You also haven't demonstrated the main part: editing the facial animation at certain keyframes for more control over the outcome, and adding expressions that match the audio without having to set the keys individually, frame by frame.
What did you expect? AAA, perfectly matched lip sync animations that are BETTER than a $10,000 facial motion capture rig? It's audio-to-lip-sync, not a ten-million-dollar face capture device.
OK, but add a mirage effect, like heat waves on a sunny day, even on the skin.
Can you test it with the Spanish language?
yep, it works for any language
Needs blinking and eye movement. Creepy af.
it has a little bit of eye blinking or movement
Too bad the rest of the face doesn't move.
I'm sure they will update this in the future. If you want better results, you can always use your iPhone with MetaHuman Animator instead.
Just animate the rest of the face on a secondary animation layer, i.e. shifting of the eyes, head movements, etc. What they have done saves you a lot of work.
Honestly, for what I want, Live Link is really good.
Bro, you're funny, lol. You need some hair on this face.
The result for the Spanish language is bad.
I haven't tried it, but Epic's showcase included French.
The face is stiff; they need to improve that. Audio2Face is still better.
I know what's not right with all these MetaHuman animations… they lack eyeball movement.
Works very badly. MetaHuman SDK was better, but it costs money.
You can always use iPhone motion capture with MetaHuman Animator instead.
This is just the beginning. Plus, it does the basics for the mouth; you can still use this for far shots and such.
Awful.
Why can't MetaHuman work with Blender 😭💔… This is so good 🥹
You can make assets for the metahuman in blender
you can make a face in blender and turn it into Metahuman
@crazyAlexander23 Hi, can you please point me to any video on how to convert a 3D cartoon created in Blender into a MetaHuman? Thank you.
@crazyAlexander23 Please tell me, how?
@david-patton Tell me how I can make assets for a MetaHuman in Blender…