It's updated. I think I saw a different video looking almost the same. Amazing work!
Thanks for this perfect tutorial! You should really consider making these videos publicly findable. Other versions that are paid will show up in results, but not this SDK. Also, it would be awesome if these could be uploaded in 1440p or 4K in the future for better blueprint text readability
I found it fine THis is public....
@@ryudious Well, my comment was from over a year ago
This is mind blowing!!!!!!
2:12 hi, I dont see the Create Speech from text. I have added the API key as well.
Hi!
Did you try it on a wav file?
@@metahumansdk hello is this only for version 5.1 below? not 5.4?Thank you
@chBd01 You can find the 5.4 version in the marketplace
www.unrealengine.com/marketplace/en-US/product/digital-avatar-service-link
Hey, great tutorial, but I can't find the mh_dhs_mapping in the PoseAsset of the Make ATL Mappings Info node at 8:41, and I guess that's why the lip sync doesn't work on my end.
Does anyone know where I can find it?
Hi!
Please open the Content Browser settings and enable Engine and Plugins content as shown in the screenshot:
cdn.discordapp.com/attachments/1148305785080778854/1148984020798021772/image.png?ex=65425cc1&is=652fe7c1&hm=e75cc52cd3ece4f43e143a87745fd25fd2b78032fa09c3b2d931bf50e68a0b45&
How are you making the head and shoulders move along with the speech too?
MetaHumans have two skeletons, one for the head and one for the body. You can direct animations to both skeletons at the same time and set them up so that the movements match your needs.
LOVE this tutorial, thank you so much! I am wondering if you would consider making a tutorial on how you got them to sit as a presenter, including face & body animation + studio + camera angles? Also... I don't know if this is out of reach, but can you get the hands to gesture based on the loudness or audio waves? Love your plugin, trying to do a bunch of cool things with it. thank you so much for these & newest tutorials!
Hi!
In this tutorial we used the regular Control Rig to add poses in the Sequencer timeline and create the body animation manually.
@@metahumansdk haha as a beginner I have no idea what that means 😂 I’ll try to find a tutorial searching some of the words u said
When you add a MetaHuman to a level sequence you can see that it has a Control Rig, and you can set any pose for all parts of the MetaHuman's body.
You can get more information about Control Rig here: docs.unrealengine.com/5.2/en-US/control-rig-in-unreal-engine/
I'm stuck here, at 1:10, when importing the MetaHuman to Unreal Engine via Bridge.
I already downloaded the MetaHuman preset, but when I add the MetaHuman to UE 5, nothing happens. Can someone help me with this one?
Hi!
Once you have downloaded the MetaHuman in Quixel Bridge you need to export it to the project. After that, open the Content Browser in the project and find the MetaHumans folder, which contains the exported MetaHumans.
Hi, great tutorial. At 05:07, when I tried to create the lipsync animation from my audio, UE 5.1.1 created the file (with the .uasset extension) but it did not show up in my assets. Any idea?
Hi!
Can you please share more details? It would be great if you could attach your project's log file (the path looks like this: ProjectName\Saved\Logs\ProjectName.log) and send it to us for analysis in our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io
I have the same problem.
Hi!
@Ali_k11, can you give some details about your issue?
Thanks for your great plugin!
I generated the LipSync Animation just like at @19:00 and the animation preview seems fine, but when I drag and drop it onto the MetaHuman Face the animation doesn't work. Can someone tell me what I am missing here?
Hi!
Please make sure that you set the animation mode to Animation Asset and that your animation was generated for the Face Archetype skeleton with the MetaHuman mapping mode.
You can find more details in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/audio-to-lipsync
Also you can ask for help in our Discord discord.gg/MJmAaqtdN8
Has anyone got this working with a Reallusion Character Creator rigged model? Did you have to separate the head? Which preset did you use?
Hi!
They support the ARKit blendshape set after version 3.4, so you can just select the ECustom option in the ATL Mapping Mode settings; this should help.
@@metahumansdk OK, and under the ECustom option, what mapping asset and bone asset do I select? If I don't select any, the animation it creates is blank.
If possible, please share your Unreal Engine version and send us the project log file via Discord discord.com/invite/MJmAaqtdN8 or email support@metahumansdk.io.
At the moment we can't reproduce the error, and the animation is created correctly for custom meshes without additional mapping options.
Hello, I followed your steps, but at 12:03 the sound ends while the mouth keeps moving and doesn't stop. Why is that?
Hi! Could you please clarify if you are experiencing any performance issues?
When I add the voice animation to the face, the head detaches, and the audio begins immediately. I have a walk cycle from mixamo in the sequencer and would like to have it start at a certain time in the time frame.
Can you help with these two issues? Thank you.
Hi!
We recommend using this tutorial th-cam.com/video/oY__OZAa0I4/w-d-xo.html
Please pay attention at the 3:28 timestamp, because many people skip this moment and the fix doesn't work for them 😉
If you need more advice, please contact us in Discord discord.gg/MJmAaqtdN8
@@metahumansdk Thanks for the reply, I do have another question though. How do I add facial animations without a live link interface, i.e., a cell phone or head camera. Unless I'm mistaken, I have to delete the face widget to add the speaking animation to the sequencer. In either case, I appreciate the help.
@borrowedtruths6955, our plugin generates facial animation from sound (16-bit PCM wav or ogg), so you don't need any mocap device. Just generate the animation and add it to your character, or use blueprints to do it automatically.
We also show this in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/v1.6.0#in-editor-usage-1
@@metahumansdk Thanks, I appreciate your time.
i have the same problem
Hi, thank you for this detailed tutorial! I'm trying to create lipsync only from text input, without using the bot. I want to avoid the delay caused by the TTS function as much as possible. Is it possible to create a buffer and send chunks of sound to the ATL while TTS is working (like you did with the ATL stream)? (I'm kind of a beginner in this field.)
Hi! Currently our plugin just sends the full message to the TTS services, but you can split the text and send smaller parts manually.
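As an illustration of splitting the text manually before sending it to TTS, here is a minimal Python sketch (a hypothetical helper, not part of the plugin) that breaks a long reply into sentence-sized chunks:

```python
import re

def split_into_chunks(text: str, max_chars: int = 200) -> list[str]:
    """Split text into sentence-based chunks no longer than max_chars."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

if __name__ == "__main__":
    reply = "Hello! Nice to meet you. This is a longer answer that we want to voice in parts."
    for chunk in split_into_chunks(reply, max_chars=40):
        print(chunk)  # each chunk would be sent to the TTS request separately
```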
At 10:11 in the video, when I hover over it, it shows that the type of 'CurrentChunk' is not compatible with Index. I don't know what's wrong.
10:10
hello can you help me with this problem
Hi, make sure CurrentChunk is of type Integer, the same as Index.
@@ffabiang thank you
Lipsync looks perfect! The description of your plugin says that it "supports different face emotions". Is it possible with MetaHuman SDK to generate emotions from audio speech, like with NVIDIA Omniverse? Is it even possible to create facial animation with blinking eyes with MetaHuman SDK?
Hi Fly to the Toon!
You can enable eye blinking in the ATL settings; it works for the ATL nodes as well.
Hi. This is a great video tutorial. Could you please share how to do this setup PLUS adding a TRIGGER volume to the scene? Ideally, I would like a first-person or third-person character game where, when the character enters the TRIGGER volume, the trigger starts the MetahumanSDK talking. Can you show us how to do that in the BP? Thank you!!
Well, I think you can start from the audio triggers covered in the UE documentation docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/
I'll ask the team about use cases for games; maybe we can create a tutorial about it.
Thank you so much!
Not working.
For me the plugin doesn't generate any lipsync animation.
Please send us more details on our Discord server or by mail to support@metahumansdk.io
We will try to help with your issue.
Do not use audio files that are too long. 5 minutes of audio will make the SDK fail, but 3 minutes will work. So the solution is: split your audio into 3-minute parts.
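For anyone who wants to automate the split suggested above, here is a minimal Python sketch using pydub (my own assumption, not part of the SDK; pydub must be installed, plus ffmpeg for non-WAV input):

```python
from pydub import AudioSegment

CHUNK_MS = 3 * 60 * 1000  # 3 minutes per part

def split_wav(path: str, prefix: str = "part") -> None:
    """Split a long WAV file into 3-minute parts the SDK can handle."""
    audio = AudioSegment.from_wav(path)
    for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
        chunk = audio[start:start + CHUNK_MS]
        # Keep the 16-bit PCM WAV format expected by the ATL node
        chunk.export(f"{prefix}_{i:02d}.wav", format="wav")

if __name__ == "__main__":
    split_wav("long_dialogue.wav")  # hypothetical input file
```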
I wonder if there is anything similar for the Unity platform as well? It would be great if support were available, as the avatars look great.
Hi! At the moment we are only working with Unreal Engine. We may consider other platforms in the future, but there are no specifics about other platforms yet.
Hi there, the plugin is great and it really works well. However, after 10 to 15 generations of facial animations, I get an error message that I ran out of tokens. Also, from your website it's not clear whether this is a paid service or not. Even for testing, how many tokens does one have, and if the tokens run out, what can be done about it? Can this plugin be used in a production-grade application? I am just doing a POC for now, but I want to be sure about your offering.
Hi!
At the moment there are no limits. Your token was probably generated before we introduced personal accounts. We made a few announcements in our Discord that tokens not linked to a personal account at space.metahumansdk.io/ no longer work.
Here is the video about attaching a token or generating a new one in the personal account: th-cam.com/video/3wmmaE-8aoE/w-d-xo.html&lc=UgxrVCl4HvIS5P9loWR4AaABAg&ab
If that doesn't help, please tell us and we will try to help with your issue.
Error: fill api token via project settings
First time using this SDK; how can I fix this?
Please contact us by e-mail at support@metahumansdk.io and we will help you with the token.
Hello, your plugin is very useful. I am using a self-designed model with ARKit. However, I have encountered a problem. I can generate facial movements smoothly, but I lack neck movements. Is there a solution to this? My model does not split the head from the body.
Hi! If your avatar does not have a separated head model, you can blend an animation for the body and neck with our facial animation.
@@metahumansdk Yes, I do that now. Does that mean that if I don't separate my head from my body, I won't be able to generate neck motion automatically through the plugin?
You can check Neck Movement in the ATL node to add it to the animation in the MetahumanSDK plugin.
Well, the "2000 years later" joke was good. I have been waiting just 3 hours now for the MetaHuman to load, LOL.
Is it possible for audio-to-lipsync to work on other 3D character models rather than MetaHuman ones?
Hi!
Sure it is! In the plugin files you can find a face example which is a custom mesh. Use an ARKit- or FACS-rigged model to use animations from the MetahumanSDK.
Great, does the combo do ATL Streaming as well?
Hi!
We are working on it. If all goes well, we will add it in one of the next releases for 5.2.
It's so useful and informative, thank you very much. I have a small question: can we add other languages to the list? I didn't find them in the EChat language enum.
Hi! You can use most languages from Azure or Google TTS via their voice IDs. You can find an example of use with the demo scenes included in the MetahumanSDK plugin here (updated): th-cam.com/video/cC2MrSULg6s/w-d-xo.html
@@metahumansdk the link takes me to a private video
@Ahmed Ismail my bad, I replaced it with the correct link: th-cam.com/video/cC2MrSULg6s/w-d-xo.html
I say go to the moon for a walk, and as soon as I spoke, I called to return, hahhahaaaa
Hi, thank you so much for this video, it is really useful. Can you share some facial idle animations for our project to play while the TTS->Lipsync process is running? Or do you know where we can find some of those?
Hi ffabian, you can use a wav file without sound to generate a facial animation from our SDK, then use it in your project as an idle 😉
@@metahumansdk Hi, when I import an empty audio file (1 min long) and use the "Create Lipsync Animation" option, I get a facial animation that is almost perfect, but the MetaHuman's mouth keeps opening and moving as if he is about to say something. Is there a parameter that can fix that?
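For reference, a silent 16-bit PCM WAV like the one suggested above can be generated with Python's standard wave module; a minimal sketch (the duration and sample rate are arbitrary choices, not SDK requirements):

```python
import wave

def write_silent_wav(path: str, seconds: int = 60, sample_rate: int = 16000) -> None:
    """Write a mono, 16-bit PCM WAV file containing only silence."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(sample_rate)
        # Each frame is one 16-bit sample; zeros are silence
        wav.writeframes(b"\x00\x00" * sample_rate * seconds)

if __name__ == "__main__":
    write_silent_wav("idle_silence.wav", seconds=60)
```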
Can we have step-by-step screenshots of it, please? It's very hard to follow the steps.
Please visit our Discord server discord.com/invite/kubCAZh37D or ask for advice by e-mail at support@metahumansdk.io
Hi, is there a way to do audio input as well as text input? Also, is there a way for the answer to be written as text in a widget blueprint?
Hi!
You can send a 16-bit PCM wave to the ATL/Combo nodes on the Lite, Standard and Pro tariffs; if you are using the Chatbot tariff plan, you can use the ATL Stream or Combo Stream nodes.
I also recommend using the Talk Component because it makes working with the plugin much easier. We have a tutorial about the Talk Component here: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
awesome thx!
Nice tutorial! However, if I input a somewhat large text, it stops midway. What could be the issue? I've tested things, and it seems the "TTSText to Speech" node has a time limit on the sound. Can I somehow remove that?
@@domagojmajetic9820 Sadly no; if I find anything I will write it here.
At the moment the limit on the free tariff is 5 seconds per generated animation. You can use it for free for two days, but the limit is 5 seconds of generated animation.
@@metahumansdk Hi, great tutorial. I was wondering if there is a paid version where we can exceed the 5-second limit?
@gavrielcohen7606 hi!
Sure! At the moment registration on our website is temporarily unavailable, so please let us know at support@metahumansdk.io if you need an account 😉
The thing I have a problem with is that the facial animations are getting more realistic, but the stilted, non-human-sounding audio doesn't reconcile well with them. Is this an option that will be fine-tuned enough to work for cinematics/close-up dialogue?
Hi! You can choose different TTS options such as Google, Azure and others.
Really good tutorial! Can you also tell me how to achieve body and head motion with facial expressions while metahuman is talking? Just like you have talking metahumans in your video
Hi!
You can generate animation with emotions from our plugin, or use additive blending to add your own emotions directly to selected blend shapes.
I must be missing something, I have to delete the Face_ControlBoard_CtrlRig in the sequencer after adding the Lipsync Animation, or the Metahuman character will not animate. I have no control over the face rig. Is there a way to have both?
Hi! In the Sequencer Control rig overrides animation so you need to turn off Control rig or delete it if you want to use prepared animation on the avatar's face or on the body.
I am trying to create a virtual voice assistant that is integrated with ChatGPT and talks to me with GPT-based responses. I have created the voice assistant and it works perfectly, generating voice and text output. Could you please tell me how to utilize this response output and convert it into lip-synced voice and animation on MetaHumans? I want to send the text/voice outputs generated by my Python code and use them to create the lipsync. What are the communication methods, or is there a tutorial for this?
You can use Talk Component > Talk Text for your task; you only need to pass in the text to generate the voice and animation.
th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
I'm on UE 5.3.2 and the Play Animation node is not found. I only get Play Animation with Finished Event and Play Animation Time Range with Finished Event... They are not compatible with the Get Face node and the MetaHuman SDK combo output animation.
Hi!
If I understand it right, you have a delay between the start of the animation and the sound.
You can try the Talk Component, which is much easier to use and includes prepared blueprints for all runtime requests: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
If you need more advice, please visit our Discord discord.com/invite/kubCAZh37D or send an e-mail to support@metahumansdk.io
Thanks for posting the video, really inspiring. I just want to clarify: is it possible to make the MetaHuman speak in Bahasa Indonesia? I'm having some difficulties developing this kind of product and really need your help. Thanks in advance.
Hi! Azure and Google TTS standard voices are currently supported. As far as I know, Azure should have a language id-ID Indonesian (Indonesia).
You can also use your own TTS and send the audio to the ATL (Audio To Lipsync) node.
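If you go the external-TTS route, a rough Python sketch with the Azure Speech SDK might look like this (the voice name and output format are assumptions on my part, not confirmed by the plugin docs; check the current Azure voice list and SDK reference):

```python
import azure.cognitiveservices.speech as speechsdk

def synthesize_to_wav(text: str, out_path: str) -> None:
    """Synthesize Indonesian speech into a 16-bit PCM WAV file for the ATL node."""
    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
    # Assumed Indonesian neural voice; verify against the Azure voice gallery
    speech_config.speech_synthesis_voice_name = "id-ID-ArdiNeural"
    # 16 kHz, 16-bit mono PCM in a RIFF/WAV container
    speech_config.set_speech_synthesis_output_format(
        speechsdk.SpeechSynthesisOutputFormat.Riff16Khz16BitMonoPcm
    )
    audio_config = speechsdk.audio.AudioOutputConfig(filename=out_path)
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
    synthesizer.speak_text_async(text).get()

if __name__ == "__main__":
    synthesize_to_wav("Selamat pagi, apa kabar?", "indonesian_line.wav")
```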
I need to know how to capture face reactions and lip syncing via an Android phone, and also how to do motion movements. Thank you.
Hi!
Currently, our plugin only supports Windows and Linux operating systems.
@@metahumansdk will it work on every metahuman? And is it free?
Hi!
You can use the plugin for free for two days after registering at the space.metahumansdk.io/
Hi! I am having problems with the blueprint, since nothing about MetaHuman SDK appears in the functions. Could you help me with that?
Hi!
Api Manager has been renamed to Lipsync Api Manager in the latest version of the plugin.
Please try to call plugin functions through this name.
Hello, how can I use the ATLStream animation with an Animation Blueprint? MetaHumans have a post-process AnimBP, so if I run the raw animation it basically messes up the body animations.
Hi!
Please try starting with the Talk Component. This is the easiest way to use the streaming options.
Here is a tutorial about it: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
If you still have some issues please visit our discord discord.gg/MJmAaqtdN8
Quick question. How can I add an idle animation to the body? When I've tried it so far, the head comes off of the model. Thanks for any help!
Hi!
You need to add a Slot 'Default Slot' node between the ARKit input and the Blend Per Bone node and make the blend through the Root bone. Here is one discussion about it on our Discord server: discord.com/channels/1010548957258186792/1155594088020705410/1155844761056460800
We also showed another, more difficult way with State Machines: th-cam.com/video/oY__OZAa0I4/w-d-xo.html&lc=UgzNwmwaQIB3hOhKE7F4AaABAg
I am getting error 401 no ATL permission
Hi!
It depends on the tariff plan. If you are using the trial version, you are limited to generating a maximum of 5 seconds per animation.
If you are on the Chatbot tariff plan, you need to use ATL Stream rather than the regular ATL.
Regular ATL is available on the Lite, Standard and Pro tariffs.
@@metahumansdk That's not really true, because I am getting the SAME error and I tried sounds ranging from 0.5 seconds to 8 seconds. How do we fix this? It's impossible to do anything now.
Can you please send us logs to our discord discord.gg/MJmAaqtdN8 or support@metahumansdk.io?
We will try to help you with this issue but we need more details about your case.
How to synchronize facial expressions with mouth movements? Could you provide a tutorial on this? Thank you
Hi! You can select facial expressions when generating from audio to lip sync (the speech-to-animation stage), and they will be synchronized automatically.
Hi! Is the 'Explicit Emotion' option selected in the 'Create MetaHumanSDKATLInput' tab?
I selected 'Ehappy' and it works, but selecting 'Eangry' doesn't have any effect. Do you have any solutions or tutorials for this issue? Thank you!
Hi! Can you please clarify: is the avatar not displaying the desired emotion, or is the avatar's expression not matching the chosen emotion?
Nice tutorial, very interesting and useful. I'm wondering if there is any solution for ATL speed? Mine works slowly; it takes nearly 10 seconds to generate an animation.
Hi!
The delay depends highly on the network connection and the length of the sound.
Can you share more details in our Discord community about the ATL/Combo nodes and the sound files that you are using in your project?
We will try to help.
Great tutorial. I got a perfect lip sync, but the only issue is that the animation doesn't stop after the sound completes. Can you help me out?
Hi!
Please share your blueprints on our Discord server discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io
You can also check out the included demo scenes in the UE Content Browser: All > Engine > Plugins > MetahumanSDK Content > Demo
When I try to add an idle animation, the head breaks off to respond and won't idle with the body! Not sure how to proceed. It would be great if you had a video on adding an idle animation next.
Hi!
You can try this video to fix the head th-cam.com/video/oY__OZAa0I4/w-d-xo.html&lc=Ugz9BC
The lip sync only works for 5 seconds. It is not working for longer audio files.
Hi!
The limit for generating one animation of 5 seconds is present only on the Trial plan.
If you have a different plan, please email us at support@metahumansdk.io and we will check your account.
👀 wow !
Hi, I have a project and I'm trying to do the lip sync in real time, but I get this error, does anyone know why: Can't prepare ATL streaming request with provided sound wave!
Hi! Could you please specify how you are generating the soundwave and provide logs if possible?
Thanks a lot for the tutorial! I have a problem: the combo request has a longer delay. How can we do the audio-to-lipsync streaming (the chunk-dividing mechanism) using the combo request?
Hi! To use the generated audio in parts, first you need to call the Text To Speech function and then call the ATL stream function.
@@metahumansdk And that can't be done by combo right?
You can add the same pipeline but connect it to another head, so you can use several MetaHumans at the same time.
Is it possible to display the finished digital human package, including its lip sync animation and perhaps GPT integration, on a mobile device? Would the rendering be client-side or server-side?
Hi! It depends on your solution. You can stream and render on a server, or you can make an app that uses the client device's resources.
Excuse me, is the facial expression in your video generated by Metahuman SDK automatically while speaking? Or was it processed by other software? When using ChatGPT for real-time voice-driven input, can the model achieve the same level of facial expressions as yours? Thank you.
Hi! You can choose different emotions at the moment of lip sync generation from audio (speech to animation stage)
Hi, I want to add some other facial movements while talking, like blinking etc. How can I do that?
Hi! You can blend different facial animations in an animation blueprint. Also, at the Speech To Animation stage you can choose to generate eye and neck animations.
@@metahumansdk Hello, I want to read a WAV audio file from a certain path on the local computer while the game is running, and then use the plugin to drive the MetaHuman to play the audio and synchronize the mouth shapes. I found a blueprint API, Load Sound from File; can this read a file from a local path? Does the File Name in this API refer to the name of the file being read? And where is the path of the read file? Can you set the path of the file you want to read?
Hi! Yes, this function can read the path to the local file. In this parameter you must specify the path to your audio file.
Hello, I would like to ask a question: the animation generated from text only has the mouth animation. How can I integrate this generated mouth animation with my other facial animations to make the expression more vivid? I want to fuse them at runtime, and what I don't understand is how to do this while the program is running.
You can try using blending for the animations that you want to combine.
You can get more details about the blend nodes in the official Unreal documentation: docs.unrealengine.com/5.2/en-US/animation-blueprint-blend-nodes-in-unreal-engine/
how to give flashing mocap?
We didn't use mocap; our plugin generates the animation from the sound.
I like Your Plugin 🫡🫡🫡👍 thank you
How do we get our animations to look as good as the one in this video of the woman in front of the blue background? The generated animations are good, but not as expressive as hers. It looks like you're still using the lip sync animation code, but you're having her eyes stay focused on the viewer. How are you doing that?
We use a post-process blueprint for the eye focus locations. You can find an example here: discord.com/channels/1010548957258186792/1089932778981818428/1089940889192898681
And for the animation we use the EPositive emotion, so it looks more expressive in our opinion.
I have a question: when using ATL Stream, the moment the sound wave comes in, the digital human pauses for 3 or 4 seconds. It seems to be preparing the animation. How can I avoid this strange pause?
Hi! We are working on the delays, but in the current version 3-4 seconds for the first chunk is a normal situation.
Why doesn't it show the 'Plugins' and 'Engine' folders like yours after I create a new blank project? If I need to add them manually, how and where do I get them?
You need to enable them in the settings of the Content Browser window.
This is a great plugin with detailed functionality. Is it also possible to integrate our own custom chatbot API? If yes, please share a video.
Hi! You can use any solution: just connect your node with text output to the TTS node and then use the regular pipeline with ATL.
As an example you can use this tutorial, where we use the OpenAI plugin for the chatbot: th-cam.com/video/kZ2fTTwu6BE/w-d-xo.html
It only seems to be able to generate a 5-second lipsync animation. Am I doing something wrong, or is longer animation a paid option?
If you use the trial tariff plan you can generate only 5 seconds of ATL per animation.
When I try the SDK on UE 5.3 I get a "no tts permission" error. What's the matter?
Hi!
TTS is available on the Chatbot tariff plan only.
You can find more details about tariffs in your personal account at space.metahumansdk.io/#/workspace or in this message in our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1176956610422243458
I want the MetaHuman to start talking only when I'm close to him. How can I achieve that?
Hi!
You can try using trigger events that run your logic when the trigger is activated. You can find more information about this in the Unreal documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/
Hello, one question: is this compatible with the ElevenLabs API? Or voice notes? Thanks.
Hi!
If they produce 16-bit wav files, you can easily use them with our MetahumanSDK plugin.
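If your TTS outputs another format instead, a minimal Python sketch for converting it to a 16-bit PCM WAV might look like this (pydub with ffmpeg installed is my own assumption, not something the plugin requires; file names are hypothetical):

```python
from pydub import AudioSegment

def to_pcm16_wav(src_path: str, dst_path: str, sample_rate: int = 16000) -> None:
    """Convert any audio file pydub/ffmpeg can read into a 16-bit PCM mono WAV."""
    audio = AudioSegment.from_file(src_path)       # input format inferred from the file
    audio = (audio
             .set_channels(1)                      # mono
             .set_frame_rate(sample_rate)          # resample
             .set_sample_width(2))                 # 2 bytes per sample = 16-bit PCM
    audio.export(dst_path, format="wav")

if __name__ == "__main__":
    to_pcm16_wav("tts_output.mp3", "for_atl_node.wav")
```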
Can we make a build (exe) using MetahumanSDK in which we can upload audio and the MetaHuman starts speaking with a body idle animation? Please help.
Hi!
Sure, we released a demo project with all those functions yesterday and shared it in our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1143934803197034637
It looks like the server is off or out of reach for some reason. The API URL shows different errors when trying to access it, whether generating the audio file or using an already generated one to create the lipsync animation. Or is the API URL wrong?
Hi Skyknight!
Can you tell our support a little more about the errors at support@metahumansdk.io?
@@metahumansdk I'm already getting support on your discord, thanks :D
Hello, I followed all the steps, but my MetaHuman has a problem with the playback of the voice. In short, when I enter the game my character starts talking, and after a few seconds the audio starts again, as if there were two audio tracks on top of each other.
I don't know if this has happened to anyone else.
Can you give me some advice to solve this problem?
Hi!
Is it possible for you to share your blueprint on our Discord server?
You can also try using a state machine to synchronize the face animation with the audio file, as shown in this video: th-cam.com/video/oY__OZAa0I4/w-d-xo.html
When will it be available for UE 5.2?
We plan to release the MetahumanSDK plugin for Unreal Engine 5.2 this month.
Our release candidate for UE 5.2 is available from this link: drive.google.com/uc?export=download&id=1dR30LXOwS1eEuUQ9LdQk9441zBTODzCL
You can try it right now 😉
I'm a beginner in Unreal Engine and don't know how to render the animation video. I tried multiple ways but nothing seems to work. Can anyone tell me how to do this, or point me to any resources, please?
Hi!
You can use this official documentation from the UE developers docs.unrealengine.com/5.2/en-US/rendering-out-cinematic-movies-in-unreal-engine/
Is there any way to integrate this on a website?
Sure, you can try making a Pixel Streaming project, for example.
Would it be possible to put a TTS (like MaryTTS) or an LLM (like Llama) in the C++ code, to avoid network calls and so that it is free?
You can find detailed usage instructions on the official websites of MaryTTS and the Llama LLM. It would be great if you could share your final project with us.
As for avoiding the internet: currently our SDK works only with an internet connection, but you can generate a pool of facial animations for your project and then use those animations offline.
Hi, I want the MetaHuman to voice the text I enter in the field below, but only the sound works, with no face animation. Can you help me solve it?
Hi!
You can try our demo scenes, which are included in the plugin content, and compare the level blueprints. You can also join our Discord community and share more details about your issue: discord.gg/MJmAaqtdN8
Hi, I have a non-MetaHuman character with ARKit expressions (from Daz Studio). How can I use this plugin with my character?
You need to use your avatar's skeleton in the ATL node and the ARKit mapping mode.
You can find examples of level blueprints in the plugin files included in every plugin version. In most of them we use a custom head.
Hi,
when I try to do the same things as in the last tutorial, I can't see mh_ds_mapping in my project. Do you know how to solve this?
Hi V-Risetech!
Please select Show Engine Content in the Content Browser settings; it should help.
We also sent a screenshot in reply to the same request in our Discord: discord.com/channels/1010548957258186792/1067744026469601280/1068066997675495504
I can't find the ceil.
Where can I connect my Dialogflow chatbot? API key?
Hi! At the moment our plugin uses ChatGPT; you can try connecting any chatbot yourself using our integration as an example. It would be great if you shared the result with us.
The wave plays fine in the editor, but I get an error after using the ATL input: -- LogMetahumanSDKAPIManager: Error: ATL request error: {"error":{"status":408,"source":"","title":"Audio processing failed","detail":"Audio processing failed"}} Where should I check?
Hi, Qin Jason!
It looks like you are trying to use TTS and ATL in the same blueprint. This is a known issue and we are working on it.
For now you can try to use the combo node or generate the animation manually in the project. Feel free to share more details on our Discord server: discord.com/invite/MJmAaqtdN8
The TTS is accessed from other cloud services, but it really is in the same blueprint. Would splitting it into multiple blueprints avoid this problem?
Hi,
When I try to choose mh_dhs_mapping_anim_poseasset in the Struct ATLMappingsInfo, I can't see this pose asset. How can I create or find this asset? Can you help me with that? Thanks in advance, keep up the good work.
Best regards.
Hi @skeras!
You need to enable showing Engine Content and Plugin Content in the Content Browser.
@@metahumansdk Done,Thanks.
Is this still working? Says "unavailable" on the Unreal Marketplace
Hi! That was Marketplace server maintenance; the plugin is now available to download.
Hi, guys. I have been trying to figure this out for a week now. The problem is: how can we attach dynamic facial expressions and body gestures to ChatGPT responses? E.g. if the text returned is happy then the character should make a happy face, and if it is angry then it should be an angry face. Can someone help me with this?
Hi! Emotions are selected in a special drop-down menu when you create the audio tracks from the text. Please try it.
@@metahumansdk can you elaborate a little on this as i am still stuck
@@metahumansdk Hi, can you guide me on how I can create the emotions? I am still stuck on the facial expression part and the explicit emotions when setting up the MetaHuman character.
Hi!
Sorry for the late answer.
We shared a blueprint that can help focus the eyes on something here: discord.com/channels/1010548957258186792/1131528670247407626/1131993457133625354
When will the combo request support Chinese?
Hi!
Currently we use Google Assistant only for answers in the combo requests, so it depends on the languages Google supports: developers.google.com/assistant/sdk/reference/rpc/languages
I can't promise that we will add a new language soon, but we have plans to make our solution more friendly to all countries.
Unavailable to buy in Kyrgyzstan =(
Hi!
Please check that:
1. Other plugins are available
2. If you are using our site, the EGS launcher is started
3. The EGS launcher is updated
Can I use this on Android?
Hi!
We don't support mobile platforms, but you can try to rebuild our plugin with kubazip for Android. It might work, but I can't guarantee it.
can we get sample code git repo?
Hi! You can find plugin files in the engine folder \Engine\Plugins\Marketplace\DigitalHumanAnimation
@@metahumansdk sample code of the project not the plugin to get started
We also have some demo level blueprints with use cases included in every plugin version, so you can use them as a starting project.
You can find them in the plugin's demo folder.
Hi, does this support Spanish or Italian?
Hi Alejandro Ramírez!
You can use any language you want because animation is created from sound.
Please update this to 5.3 ....please!?
Hi!
Work in progress 👨🔧
my head is detached now
Hi Популярно в България !
You need to use a Blend Per Bone node in the Face AnimBP to glue the head to the body when both parts are animated.
@@metahumansdk How to do this?
@@metahumansdk Great plugin, everything works fine for me, but I'm also having this issue: when playing the generated face animation, the head detaches from the body.
@@Enver7able I found this video on their discord channel:
th-cam.com/video/oY__OZAa0I4/w-d-xo.html&ab_channel=MetaHumanSDK
@@metahumansdk thanks!
5.4 please
Hi!
You can find a test build for 5.4 in our discord discord.com/channels/1010548957258186792/1010557901036851240/1253377959700463647
I was pulling my hair out wondering why I couldn't get the plugin to work, then realised I hadn't generated a token! 🙃
Hi!
Thank you for the feedback! A new version of the MetahumanSDK plugin is in moderation now, and it has more useful messages about the token. We hope these changes will make the plugin's behavior more predictable.
The video level in the way you show is like un3. sorry
The girls need some love dude. They look so sad and depressed :P:P...
You have to pay to use it; it doesn't work.
Hi!
Can you please share more details about your issue?
Perhaps this tutorial can help you: th-cam.com/video/cC2MrSULg6s/w-d-xo.html
spanish?
MetahumanSDK is language independent. We generate the animation from the sound, not from visemes.
Hello! I generated a face animation and audio file (the time in the video is 5:08). When I open it, the file is only 125 frames, although the audio lasts much longer. In the sequencer I add the audio and the generated animation, and the animation is much shorter; when stretching the track, the animation repeats from the beginning. Please tell me how to adjust the number of frames per second.
I mean, how do I edit the number of sampled keys/frames?
Hi, I'm having a problem at 2:32: when I generate my text-to-speech it does not appear in my folders :/
Hi!
Can you please check that your API token is correct in the project settings?
If your API token is correct, please send us your log file on Discord discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io
Hi, I really need your help: at 6:29 I drag and drop my BP_metahuman but it is not showing up in the blueprint. I don't know why.
Hi!
You need to use the MetaHuman from the Outliner of your scene, not directly from the Content Browser.