Tutorials: how to use the plugin

  • Published Dec 27, 2024

Comments •

  • @arielshpitzer 1 year ago +2

    It's updated. I think I saw a different video that looked almost the same. Amazing work!

  • @AltVR_YouTube 1 year ago +12

    Thanks for this perfect tutorial! You should really consider making these videos publicly findable. Other, paid versions will show up in search results, but not this SDK. Also, it would be awesome if these could be uploaded in 1440p or 4K in the future for better blueprint text readability.

    • @ryudious 5 months ago

      I found it fine. This is public...

    • @AltVR_YouTube 4 months ago

      @@ryudious Well, my comment was from over a year ago.

  • @TimothyMack-s6x 1 year ago +4

    This is mind blowing!!!!!!

  • @anveegsinha4120 10 months ago +2

    2:12 Hi, I don't see the Create Speech from Text option. I have added the API key as well.

    • @metahumansdk 10 months ago

      Hi!
      Did you try it on a wav file?

    • @chBd01 4 months ago

      @@metahumansdk Hello, is this only for version 5.1 and below, not 5.4? Thank you

    • @metahumansdk 3 months ago

      @chBd01 You can find the 5.4 version in the marketplace
      www.unrealengine.com/marketplace/en-US/product/digital-avatar-service-link

  • @NeoxEntertainment 1 year ago +1

    Hey, great tutorial, but I can't find the mh_dhs_mapping in the PoseAsset of the Make ATL Mappings Info node at 8:41, and I guess that's why the lip sync doesn't work on my end.
    Does anyone know where I can find it?

    • @metahumansdk 1 year ago +1

      Hi!
      Please open the Content Browser settings and enable Engine and Plugins content as in the screenshot:
      cdn.discordapp.com/attachments/1148305785080778854/1148984020798021772/image.png?ex=65425cc1&is=652fe7c1&hm=e75cc52cd3ece4f43e143a87745fd25fd2b78032fa09c3b2d931bf50e68a0b45&

  • @Jungleroo 3 months ago

    How are you making the head and shoulders move along with the speech too?

    • @metahumansdk 3 months ago +1

      MetaHumans have two skeletons, one for the head and one for the body. You can direct animations to both skeletons at the same time and set them up so that the movements match your needs.
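
      For illustration, a minimal C++ sketch of this idea, assuming you already hold references to the face and body skeletal mesh components and two UAnimSequence assets (the function and asset names are examples, not the plugin's API):

      ```cpp
      #include "Components/SkeletalMeshComponent.h"
      #include "Animation/AnimSequence.h"

      // Each mesh has its own skeleton, so each can run its own animation.
      void PlayFaceAndBody(USkeletalMeshComponent* Face, USkeletalMeshComponent* Body,
                           UAnimSequence* FaceAnim, UAnimSequence* BodyAnim)
      {
          Face->SetAnimationMode(EAnimationMode::AnimationSingleNode);
          Face->PlayAnimation(FaceAnim, /*bLooping=*/false); // generated lipsync clip

          Body->SetAnimationMode(EAnimationMode::AnimationSingleNode);
          Body->PlayAnimation(BodyAnim, /*bLooping=*/true);  // idle/gesture loop
      }
      ```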

  • @TheAIAndy 1 year ago +1

    LOVE this tutorial, thank you so much! I am wondering if you would consider making a tutorial on how you got them to sit as a presenter, including face & body animation + studio + camera angles? Also... I don't know if this is out of reach, but can you get the hands to gesture based on the loudness or the audio waves? Love your plugin, trying to do a bunch of cool things with it. Thank you so much for these and the newest tutorials!

    • @metahumansdk 1 year ago +2

      Hi!
      In this tutorial we used a regular control rig to add poses on the sequencer timeline and made the body animation manually.

    • @TheAIAndy 1 year ago

      @@metahumansdk Haha, as a beginner I have no idea what that means 😂 I'll try to find a tutorial by searching some of the words you said.

    • @metahumansdk 1 year ago +1

      When you add a MetaHuman to the level sequence you can see that it has a control rig, and you can set any position for all parts of the MetaHuman's body.
      You can get more information about control rig here: docs.unrealengine.com/5.2/en-US/control-rig-in-unreal-engine/

  • @TheOsirisband 5 months ago

    I'm stuck at 1:10, when importing the MetaHuman into Unreal Engine via Bridge.
    I already downloaded the MetaHuman preset, but when I add the MetaHuman to UE 5 nothing happens. Can someone help me with this one?

    • @metahumansdk 5 months ago

      Hi!
      Once you have downloaded the MetaHuman in Quixel Bridge, you need to export it to the project. After that, open the Content Browser in the project and find the MetaHumans folder, which contains the exported MetaHumans.

  • @blommer26 1 year ago

    Hi, great tutorial. At 05:07, when I tried to create a lipsync animation from my audio, UE 5.1.1 created the file (with the .uasset extension) but it did not show up in my assets. Any idea?

    • @metahumansdk 1 year ago

      Hi!
      Can you please share more details? It would be great if you could attach the log file of your project (the path looks like ProjectName\Saved\Logs\ProjectName.log) and send it to us for analysis in our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io

    • @Ali_k11 11 months ago

      I have the same problem.

    • @metahumansdk 11 months ago

      Hi!
      @Ali_k11, can you give some details about your issue?

  • @mn04147 1 year ago +1

    Thanks for your great plugin!

  • @phantomebo6537 1 year ago

    I generated the lipsync animation just like at 19:00 and the animation preview seems fine, but when I drag and drop it onto the MetaHuman face the animation doesn't work. Can someone tell me what I am missing here?

    • @metahumansdk 1 year ago

      Hi!
      Please make sure that you selected Animation Asset as the animation mode and that your animation was generated for the Face Archetype skeleton with the MetaHuman mapping mode.
      You can find more details in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/audio-to-lipsync
      You can also ask for help in our Discord: discord.gg/MJmAaqtdN8

  • @Jungleroo 3 months ago

    Has anyone got this working with a Reallusion Character Creator rigged model? Did you have to separate the head? Which preset did you use?

    • @metahumansdk 3 months ago

      Hi!
      They support the ARKit blendshape set after version 3.4, so you can just select the ECustom option in the ATL Mapping Mode settings; this should help.

    • @Jungleroo 3 months ago

      @@metahumansdk OK, and under the ECustom option, what mapping asset and bone asset do I select? If I don't select any, the animation it creates is blank.

    • @metahumansdk 3 months ago

      If possible, please share your Unreal Engine version and send us the project log file via Discord discord.com/invite/MJmAaqtdN8 or email support@metahumansdk.io.
      At the moment we can't reproduce the error, and the animation is created correctly for custom meshes without additional mapping options.

  • @HuYilai 1 year ago

    Hello, I followed your steps, but at 12:03 the sound ends and the mouth keeps moving and doesn't stop. Why?

    • @metahumansdk 1 year ago

      Hi! Could you please clarify if you are experiencing any performance issues?

  • @borrowedtruths6955 1 year ago +1

    When I add the voice animation to the face, the head detaches, and the audio begins immediately. I have a walk cycle from Mixamo in the sequencer and would like to have it start at a certain time on the timeline.
    Can you help with these two issues? Thank you.

    • @metahumansdk 1 year ago

      Hi!
      We recommend this tutorial: th-cam.com/video/oY__OZAa0I4/w-d-xo.html
      Please pay attention at the 3:28 timestamp, because many people skip this moment and then the fix doesn't work for them 😉
      If you need more advice, please contact us in Discord discord.gg/MJmAaqtdN8

    • @borrowedtruths6955 1 year ago

      @@metahumansdk Thanks for the reply. I do have another question, though: how do I add facial animations without a Live Link interface, i.e. a cell phone or head camera? Unless I'm mistaken, I have to delete the face widget to add the speaking animation to the sequencer. In either case, I appreciate the help.

    • @metahumansdk 1 year ago

      @borrowedtruths6955, our plugin generates facial animation from sound (16-bit PCM WAV or OGG). So you don't need any device for mocap; just generate the animation and add it to your character, or use blueprints to do it automatically.
      We also show this in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/v1.6.0#in-editor-usage-1

    • @borrowedtruths6955 1 year ago

      @@metahumansdk Thanks, I appreciate your time.

    • @ayrtonnasee3284 11 months ago

      I have the same problem.

  • @LouisHirtz 1 year ago +1

    Hi, thank you for this detailed tutorial! I'm trying to create lipsync from text input only, without using the bot. I want to avoid the delay from the TTS function as much as possible. Is it possible to create a buffer to send chunks of sound to the ATL while TTS is working (like you did with the ATL stream)? (I'm kind of a beginner in this field.)

    • @metahumansdk 1 year ago

      Hi! Currently our plugin just sends the full message to the TTS services, but you can split the text and send smaller parts manually.
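
      For illustration, a minimal sketch of splitting text into sentence-sized chunks before sending each one to TTS; the chunking rule here is an assumption for the example, not part of the plugin:

      ```cpp
      #include <algorithm>
      #include <string>
      #include <vector>

      // Split text into chunks of at most MaxLen characters, preferring to
      // break right after a sentence-ending period inside the window.
      std::vector<std::string> SplitIntoChunks(const std::string& Text, size_t MaxLen = 200)
      {
          std::vector<std::string> Chunks;
          size_t Start = 0;
          while (Start < Text.size())
          {
              size_t End = std::min(Start + MaxLen, Text.size());
              const size_t Dot = Text.rfind('.', End - 1);
              if (Dot != std::string::npos && Dot > Start)
                  End = Dot + 1; // keep the period with its sentence
              Chunks.push_back(Text.substr(Start, End - Start));
              Start = End;
          }
          return Chunks;
      }
      ```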

  • @HuYilai 1 year ago

    At 10:11 in the video, when I mouse over it, it shows that the type of 'CurrentChunk' is not compatible with Index. I don't know what's wrong.

    • @HuYilai 1 year ago

      10:10

    • @HuYilai 1 year ago

      Hello, can you help me with this problem?

    • @ffabiang 1 year ago

      Hi, make sure CurrentChunk is of type Integer, as well as Index.

    • @HuYilai 1 year ago

      @@ffabiang thank you

  • @flytothetoon 1 year ago +3

    Lipsync looks perfect! The description of your plugin says that it "supports different face emotions". Is it possible with MetaHuman SDK to generate emotions from audio speech, like with NVIDIA Omniverse? Is it even possible to create facial animation with blinking eyes with MetaHuman SDK?

    • @metahumansdk 1 year ago

      Hi Fly to the Toon!
      You can select eye blinking in the ATL settings; it also works for the ATL nodes.

  • @lukassarralde5439 1 year ago

    Hi. This is a great video tutorial. Could you please share how to do this setup PLUS adding a trigger volume to the scene? Ideally, I would like a first-person or third-person character game where, when the character enters the trigger volume, the trigger starts the MetahumanSDK talking. Can you show us how to do that in the BP? Thank you!!

    • @metahumansdk 1 year ago

      Well, I think you can start from the audio triggers covered in the UE documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/
      I'll ask the team about use cases for games; maybe we can create a tutorial about it.
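
      For illustration, a minimal C++ sketch of such a trigger using standard UE overlap events; StartTalking() is a hypothetical hook where you would call the SDK's nodes or Talk Component:

      ```cpp
      #include "CoreMinimal.h"
      #include "GameFramework/Actor.h"
      #include "Components/BoxComponent.h"
      #include "TalkTriggerActor.generated.h" // assumes this file is TalkTriggerActor.h

      UCLASS()
      class ATalkTriggerActor : public AActor
      {
          GENERATED_BODY()
      public:
          ATalkTriggerActor()
          {
              Box = CreateDefaultSubobject<UBoxComponent>(TEXT("TriggerBox"));
              RootComponent = Box;
              Box->SetCollisionProfileName(TEXT("Trigger"));
              Box->SetGenerateOverlapEvents(true);
              Box->OnComponentBeginOverlap.AddDynamic(this, &ATalkTriggerActor::OnEnter);
          }

          UFUNCTION()
          void OnEnter(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                       UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                       bool bFromSweep, const FHitResult& SweepResult)
          {
              // Only react to the player (the tag is an example convention).
              if (OtherActor && OtherActor->ActorHasTag(TEXT("Player")))
              {
                  StartTalking();
              }
          }

          void StartTalking() { /* hypothetical: kick off the TTS/ATL pipeline here */ }

          UPROPERTY()
          UBoxComponent* Box;
      };
      ```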

  • @honglabcokr 1 year ago +1

    Thank you so much!

  • @abhishekakodiya2206 1 year ago +4

    Not working for me, the plugin doesn't generate any lipsync animation.

    • @metahumansdk 1 year ago +1

      Please send us more details in our Discord server or by mail to support@metahumansdk.io
      We will try to help with your issue.

    • @mistert2962 1 year ago +1

      Do not use audio files that are too long. 5 minutes of audio will make the SDK fail, but 3 minutes will work. So the solution is: split your audio into 3-minute parts.
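
      For illustration, a minimal sketch of computing the raw PCM byte ranges for such 3-minute segments, assuming 16-bit PCM and that you have already parsed the WAV header for the sample rate and channel count:

      ```cpp
      #include <algorithm>
      #include <cstdint>
      #include <utility>
      #include <vector>

      // Byte ranges [begin, end) into the WAV data chunk, one per segment.
      std::vector<std::pair<uint64_t, uint64_t>> SegmentRanges(
          uint64_t TotalDataBytes, uint32_t SampleRate, uint16_t Channels,
          uint32_t SegmentSeconds = 180)
      {
          const uint64_t BytesPerSecond = uint64_t(SampleRate) * Channels * 2; // 16-bit samples
          const uint64_t SegBytes = BytesPerSecond * SegmentSeconds;
          std::vector<std::pair<uint64_t, uint64_t>> Ranges;
          for (uint64_t Off = 0; Off < TotalDataBytes; Off += SegBytes)
              Ranges.emplace_back(Off, std::min(Off + SegBytes, TotalDataBytes));
          return Ranges;
      }
      ```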

  • @hardikadoshi3568 10 months ago

    I wonder if there is anything similar for the Unity platform as well? It would be great if support were available, as the avatars look great.

    • @metahumansdk 9 months ago

      Hi! At the moment we are only working with Unreal Engine. We may consider other platforms in the future, but there are no specifics about other platforms yet.

  • @AICineVerseStudios 1 year ago +1

    Hi there, the plugin is great and it really works well. However, after 10 to 15 generations of facial animations, I am getting an error message that I ran out of tokens. Also, from your website it's not clear whether this is a paid service or not. Even for testing, how many tokens does one have? And if the tokens run out, what to do about it? Can this plugin be used in a production-grade application? I am just doing a POC as of now, but I want to be sure about your offering.

    • @metahumansdk 1 year ago +2

      Hi!
      At the moment there are no limits. Your token was probably generated before we introduced personal accounts. We made a few announcements in our Discord that tokens not linked to personal accounts at space.metahumansdk.io/ no longer work.
      Here is the video about attaching a token or generating a new one in the personal account: th-cam.com/video/3wmmaE-8aoE/w-d-xo.html&lc=UgxrVCl4HvIS5P9loWR4AaABAg&ab
      If it doesn't help, please tell us and we will try to help with your issue.

  • @Relentless_Games 8 months ago +2

    Error: fill api token via project settings
    First time using this SDK, how can I fix this?

    • @metahumansdk 8 months ago

      Please contact us by e-mail at support@metahumansdk.io and we will help you with the token.

  • @SKDyiyi 1 year ago

    Hello, your plugin is very useful. I am using a self-designed model with ARKit. However, I have encountered a problem: I can generate facial movements smoothly, but I lack neck movements. Is there a solution to this? My model does not split the head from the body.

    • @metahumansdk 1 year ago

      Hi! If your avatar is not a separated model, you can blend an animation for the body and neck with our facial animation.

    • @SKDyiyi 1 year ago

      @@metahumansdk Yes, I do that now. Meaning if I don't separate the head from the body, I won't be able to generate neck motion automatically through the plugin?

    • @metahumansdk 1 year ago

      You can check Neck Movement in the ATL node to add it to the animation in the MetahumanSDK plugin.

  • @dyter07 1 year ago +1

    Well, this "2000 years later" joke was good. I have been waiting 3 hours now for the MetaHuman to load, LOL.

  • @danD315D 1 year ago +2

    Is it possible for audio-to-lipsync to work on other 3D character models, rather than MetaHuman ones?

    • @metahumansdk 1 year ago +1

      Hi!
      Sure it is! In the plugin files you can find a face example which is a custom mesh. Use an ARKit- or FACS-rigged model to use animations from the MetahumanSDK.

  • @realskylgh 1 year ago

    Great, does the combo do the ATL streaming things as well?

    • @metahumansdk 1 year ago

      Hi!
      We are working on it. If all goes well, we will add it in one of the next releases on 5.2.

  • @ahmedismail772 1 year ago

    It's so useful and informative, thank you very much. I have a small question: can we add other languages to the list? I didn't find the EChat language enum.

    • @metahumansdk 1 year ago

      Hi! You can use most languages from Azure or Google TTS via their voice ID. You can find an example of use with the demo scenes included in the MetahumanSDK plugin here (updated): th-cam.com/video/cC2MrSULg6s/w-d-xo.html

    • @ahmedismail772 1 year ago

      @@metahumansdk The link takes me to a private video.

    • @metahumansdk 1 year ago

      @Ahmed Ismail My bad, replaced it with the correct link: th-cam.com/video/cC2MrSULg6s/w-d-xo.html

  • @arianakis3784 10 months ago

    I say go to the moon for a walk, and as soon as I spoke, I called to return, hahhahaaaa

  • @ffabiang 1 year ago

    Hi, thank you so much for this video, it is really useful. Can you share some facial idle animations for our project to play while the TTS->Lipsync process is running? Or do you know where we can find some of those?

    • @metahumansdk 1 year ago +4

      Hi ffabiang, you can use a WAV file without sound to generate a facial animation from our SDK, then use it in your project as an idle 😉

    • @ffabiang 1 year ago

      @@metahumansdk Hi, when I import an empty audio file (1 min long) and use the "Create Lipsync Animation" option, I get a facial animation that is almost perfect, but the MetaHuman's mouth opens continuously and moves as if he is about to say something. Is there a parameter that can fix that?

  • @mwa8385 6 months ago

    Can we have step-by-step screenshots of it, please? It's very hard to follow the steps.

    • @metahumansdk 6 months ago

      Please visit our Discord server discord.com/invite/kubCAZh37D or ask for advice by e-mail at support@metahumansdk.io

  • @NiksCro96 10 months ago

    Hi, is there a way to do audio input as well as text input? Also, is there a way for the answer to be written as text in a widget blueprint?

    • @metahumansdk 10 months ago

      Hi!
      You can send a 16-bit PCM wave to the ATL/Combo nodes on the Lite, Standard and Pro tariffs; if you are using the Chatbot tariff plan, you can use the ATL Stream or Combo Stream nodes.
      I also recommend the Talk Component because it makes working with the plugin much easier. We have a tutorial about the Talk Component here: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html

  • @dome7415 1 year ago +1

    awesome thx!

  • @Bruh-we9mv 10 months ago

    Nice tutorial! However, if I input a somewhat large text, it stops midway. What could be the issue? I've tested things, and it seems the "TTS Text to Speech" node has a time limit on sound. Can I somehow remove that?

    • @Bruh-we9mv 10 months ago

      @@domagojmajetic9820 Sadly no; if I find anything I will write it here.

    • @metahumansdk 10 months ago

      At the moment the limit on the free tariff is 5 seconds per generated animation. You can use it for free for two days, but the limit is 5 seconds of generated animation.

    • @gavrielcohen7606 9 months ago

      @@metahumansdk Hi, great tutorial. I was wondering if there is a paid version where we can exceed the 5-second limit?

    • @metahumansdk 9 months ago

      @gavrielcohen7606 Hi!
      Sure! At the moment registration on our website is temporarily unavailable, so please let us know if you need one at support@metahumansdk.io 😉

  • @jumpieva 1 year ago +1

    The thing I have a problem with is that the facial animations are getting more realistic, but the stilted, non-human-sounding audio is not reconciling well. Is this an option that will be fine-tuned enough to work for cinematics/close-up dialogue?

    • @metahumansdk 1 year ago

      Hi! You can choose different TTS options such as Google, Azure and others.

  • @SaadSohail-ug9fl 8 months ago

    Really good tutorial! Can you also tell me how to achieve body and head motion with facial expressions while the MetaHuman is talking? Just like the talking MetaHumans in your video.

    • @metahumansdk 7 months ago

      Hi!
      You can generate animation with emotions from our plugin, or use additive blending to add your own emotions directly to selected blend shapes.

  • @borrowedtruths6955 1 year ago

    I must be missing something: I have to delete the Face_ControlBoard_CtrlRig in the sequencer after adding the lipsync animation, or the MetaHuman character will not animate. I have no control over the face rig. Is there a way to have both?

    • @metahumansdk 1 year ago

      Hi! In the Sequencer, the control rig overrides the animation, so you need to turn the control rig off or delete it if you want to use a prepared animation on the avatar's face or body.

  • @syedhannaan2974 6 months ago

    I am trying to create a virtual voice assistant that is integrated with ChatGPT and talks to me with GPT-based responses. I have created the voice assistant and it works perfectly, generating voice and text output. Could you please tell me how to utilize this response output and convert it to lip-synced voice and animation on MetaHumans? I want to send the text/voice output generated by my Python code and use it for lipsync. What are the communication methods, or is there a tutorial for this?

    • @metahumansdk 6 months ago

      You can use Talk Component > Talk Text for your task; you only need to provide the text to generate the voice and animation.
      th-cam.com/video/jrpAJDIhCFE/w-d-xo.html

  • @damncpp5518 7 months ago

    I'm on UE 5.3.2 and the Play Animation node is not found. I only get Play Animation with Finished Event and Play Animation Time Range with Finished Event... They are not compatible with the Get Face node and the MetaHuman SDK combo output animation.

    • @metahumansdk 6 months ago

      Hi!
      If I understand it right, you have a delay between the start of the animation and the sound.
      You can try the Talk Component, which is much easier to use and includes prepared blueprints for all runtime requests: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
      If you need more advice, please visit our Discord discord.com/invite/kubCAZh37D or send an e-mail to support@metahumansdk.io

  • @TheOsirisband 5 months ago

    Thanks for posting the video, really inspiring. I just want to clarify: is it possible to make the MetaHuman speak in Bahasa Indonesia? I'm having some difficulties developing this kind of product and really need your help. Thanks in advance.

    • @metahumansdk 5 months ago

      Hi! Azure and Google TTS standard voices are currently supported. As far as I know, Azure should have the language id-ID, Indonesian (Indonesia).
      You can also use your own TTS and send the audio to the ATL (Audio To Lipsync) node.

  • @Silentiumfilms007 4 months ago

    I need to know how to capture face reactions and lip syncing via an Android phone, and also how to do motion movements. Thank you.

    • @metahumansdk 4 months ago

      Hi!
      Currently, our plugin only supports Windows and Linux operating systems.

    • @Silentiumfilms007 4 months ago

      @@metahumansdk Will it work on every MetaHuman? And is it free?

    • @metahumansdk 3 months ago

      Hi!
      You can use the plugin for free for two days after registering at space.metahumansdk.io/

  • @GenesisDominguez-n1c 2 months ago

    Hi! I am having problems with the blueprint, since nothing about MetaHuman SDK appears in the functions. Could you help me with that?

    • @metahumansdk 2 months ago

      Hi!
      Api Manager has been renamed to Lipsync Api Manager in the latest version of the plugin.
      Please try to call plugin functions through this name.

  • @bruninhohenrri 8 months ago

    Hello, how can I use the ATL Stream animation with an Animation Blueprint? MetaHumans have a post-processing AnimBP, so if I run the raw animation it basically messes up the body animations.

    • @metahumansdk 8 months ago

      Hi!
      Please try starting from the Talk Component. This is the easiest way to use the streaming options.
      Here is a tutorial about it: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
      If you still have issues, please visit our Discord discord.gg/MJmAaqtdN8

  • @corvetteee1 1 year ago

    Quick question: how can I add an idle animation to the body? When I've tried so far, the head comes off the model. Thanks for any help!

    • @metahumansdk 1 year ago

      Hi!
      You need to add a Slot 'DefaultSlot' node between the ARKit input and the Blend Per Bone node, and blend through the Root bone. Here is one discussion about it on our Discord server: discord.com/channels/1010548957258186792/1155594088020705410/1155844761056460800
      We also showed another, more difficult way with State Machines: th-cam.com/video/oY__OZAa0I4/w-d-xo.html&lc=UgzNwmwaQIB3hOhKE7F4AaABAg

  • @anveegsinha4120 10 months ago +2

    I am getting error 401, no ATL permission.

    • @metahumansdk 9 months ago

      Hi!
      It probably depends on the tariff plan. If you are using the trial version, you are limited to generating a maximum of 5 seconds per animation.
      If you are on the Chatbot tariff plan, you need to use ATL Stream, not regular ATL.
      Regular ATL is available on the Lite, Standard and Pro tariffs.

    • @BluethunderMUSIC 9 months ago

      @@metahumansdk That's not really true, because I am getting the SAME error and I tried sounds ranging from 0.5 seconds to 8 seconds. How do we fix this? It's impossible to do anything now.

    • @metahumansdk 9 months ago

      Can you please send us your logs in our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io?
      We will try to help you with this issue, but we need more details about your case.

  • @k动画的肥虫 1 year ago

    How to synchronize facial expressions with mouth movements? Could you provide a tutorial on this? Thank you

    • @metahumansdk 1 year ago

      Hi! You can select facial expressions when generating lipsync from audio (at the speech-to-animation stage), and they will be synchronized automatically.

    • @k动画的肥虫 1 year ago

      Hi! Is the 'Explicit Emotion' option selected in the 'Create MetaHumanSDKATLInput' tab?

    • @k动画的肥虫 1 year ago

      I selected 'Ehappy' and it works, but selecting 'Eangry' doesn't have any effect. Do you have any solutions or tutorials for this issue? Thank you!

    • @metahumansdk 1 year ago

      Hi! Can you please clarify: is the avatar not displaying the desired emotion, or is the avatar's expression not matching the chosen emotion?

  • @uzaker6577 1 year ago

    Nice tutorial, very interesting and useful. I'm wondering, is there any solution for ATL speed? Mine works slowly; it takes nearly 10 seconds to generate an animation.

    • @metahumansdk 1 year ago

      Hi!
      The delay depends highly on the network connection and the length of the sound.
      Can you share more details in our Discord community about the ATL/Combo nodes and the sound files that you are using in your project?
      We will try to help.

  • @rajeshvaghela2772 1 year ago

    Great tutorial. I got a perfect lip sync, but the one issue is that the animation doesn't stop after the sound completes. Can you help me out?

    • @metahumansdk 1 year ago

      Hi!
      Please share your blueprints on our Discord server discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io
      You can also check out the included demo scenes in the UE Content Browser: All > Engine > Plugins > MetahumanSDK Content > Demo

  • @CharleneTeets 1 year ago

    When I try to add an idle animation, the head breaks off to respond and won't idle with the body! Not sure how to proceed. It would be great if you had a video on adding an idle animation next.

    • @metahumansdk 1 year ago

      Hi!
      You can try this video to fix the head: th-cam.com/video/oY__OZAa0I4/w-d-xo.html&lc=Ugz9BC

  • @sanjay1994. 2 months ago

    The lip sync is only working for 5 seconds. It does not work for longer audio files.

    • @metahumansdk 2 months ago +1

      Hi!
      The 5-second limit for generating one animation is present only on the Trial plan.
      If you have a different plan, please email us at support@metahumansdk.io and we will check your account.

  • @ragegohard9603 1 year ago

    👀 wow !

  • @juanmacode 1 year ago

    Hi, I have a project and I'm trying to do the lip sync in real time, but I get this error, does anyone know why: Can't prepare ATL streaming request with provided sound wave!

    • @metahumansdk 1 year ago

      Hi! Could you please specify how you are generating the soundwave and provide logs if possible?

  • @devpatel8276 1 year ago

    Thanks a lot for the tutorial! I have a problem: the combo request has a longer delay. How can we do the audio-to-lipsync streaming (the dividing-into-chunks mechanism) using a combo request?

    • @metahumansdk 1 year ago

      Hi! To use the generated audio in parts, first you need to call the Text To Speech function and then call the ATL stream function.

    • @devpatel8276 1 year ago

      @@metahumansdk And that can't be done with combo, right?

    • @metahumansdk 1 year ago

      You can add the same pipeline but connect it to another head, so you can use a few MetaHumans at the same time.

  • @MilanJain-y4s 1 year ago

    Is it possible to display the finished digital human package, including its lipsync animation and perhaps GPT integration, on a mobile device? Would the rendering be client- or server-side?

    • @metahumansdk 1 year ago

      Hi! It depends on your solution. You can stream and render on a server, or you can make an app that uses the client device's resources.

  • @k动画的肥虫 1 year ago

    Excuse me, are the facial expressions in your video generated automatically by MetaHuman SDK while speaking? Or were they processed by other software? When using ChatGPT for real-time voice-driven input, can the model achieve the same level of facial expressions as yours? Thank you.

    • @metahumansdk 1 year ago

      Hi! You can choose different emotions at the moment of lipsync generation from audio (the speech-to-animation stage).

  • @方阳-q6q 1 year ago

    Hi, I want to add some other facial movements while talking, like blinking etc. How can I do that?

    • @metahumansdk 1 year ago

      Hi! You can blend different facial animations in an animation blueprint. Also, at the Speech To Animation stage you can choose to generate eye and neck animations.

    • @方阳-q6q 1 year ago

      @@metahumansdk Hello, I want to read a WAV audio file from a certain path on the local computer while the game is running, and then use the plugin to drive the MetaHuman to play the audio and synchronize the mouth shapes. I found a blueprint API, Load Sound from File: can this read a file from a local path? Does the File Name in this API refer to the name of the file being read? Where is the path of the file being read? Can you set the path of the file you want to read?

    • @metahumansdk 1 year ago

      Hi! Yes, this function can read a local file path. In this parameter you must specify the path to your audio file.
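
      For illustration, a minimal sketch of reading a local .wav path into memory with standard UE file utilities; the path is an example, and this is not the Load Sound from File node itself:

      ```cpp
      #include "Misc/FileHelper.h"
      #include "Misc/Paths.h"

      // Returns true and fills OutBytes when the file exists and could be read,
      // e.g. AbsolutePath = TEXT("C:/Audio/line01.wav").
      bool LoadWavBytes(const FString& AbsolutePath, TArray<uint8>& OutBytes)
      {
          if (!FPaths::FileExists(AbsolutePath))
          {
              return false;
          }
          return FFileHelper::LoadFileToArray(OutBytes, *AbsolutePath);
      }
      ```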

    • @方阳-q6q 1 year ago

      Hello, I would like to ask a question: the animation generated from text only has the mouth animation. How can I combine this generated mouth animation with my other facial animations to make the expression more vivid? I want to fuse them at runtime; what I don't understand is how to do this while the program is running.

    • @metahumansdk 1 year ago

      You can try using blending for the animations you want to combine.
      You can get more details about blend modes in the official Unreal documentation: docs.unrealengine.com/5.2/en-US/animation-blueprint-blend-nodes-in-unreal-engine/

  • @rachmadagungpambudi7820 1 year ago +1

    how to give flashing mocap?

    • @metahumansdk 1 year ago +1

      We didn't use mocap; our plugin generates animation from the sound.

    • @rachmadagungpambudi7820 1 year ago

      I like your plugin 🫡🫡🫡👍 thank you

  • @ai_and_chill 1 year ago

    How do we get our animations to look as good as the one in this video with the woman in front of the blue background? The generated animations are good, but not as expressive as hers. It looks like you're still using the lipsync animation code, but you're having her eyes stay focused on the viewer. How are you doing that?

    • @metahumansdk 1 year ago +1

      We use a post-process blueprint for the eye focus location. You can find an example here: discord.com/channels/1010548957258186792/1089932778981818428/1089940889192898681
      And for the animation we use the EPositive emotion, so it looks more expressive in our opinion.

  • @realskylgh 1 year ago

    I have a question: when using ATL Stream, the moment the sound wave comes in, the digital human pauses for 3 or 4 seconds. It seems to be preparing the animation. How can I avoid this strange pause?

    • @metahumansdk 1 year ago

      Hi! We are working on the delays, but in the current version 3-4 seconds for the 1st chunk is a normal situation.

  • @asdfasdfsd 1 year ago

    Why doesn't it show the 'Plugins' and 'Engine' folders like yours after I create a new blank project? If I need to add them manually, how and where do I get them?

    • @metahumansdk 1 year ago

      You need to enable them in the Content Browser window settings.

  • @sumitranjan7005 1 year ago

    This is a great plugin with detailed functionality. Also, is it possible to integrate our own custom chatbot API? If yes, please share a video.

    • @metahumansdk 1 year ago

      Hi! You can use any solution: just connect your node's text output to the TTS node and then use the regular pipeline with ATL.
      As an example you can use this tutorial, where we use the OpenAI plugin for the chatbot: th-cam.com/video/kZ2fTTwu6BE/w-d-xo.html

  • @funkyjeans8667 10 months ago

    It only seems able to generate a 5-second lipsync animation. Am I doing something wrong, or are longer animations a paid option?

    • @metahumansdk 10 months ago

      If you use the trial tariff plan, you can generate only 5 seconds of ATL per animation.

  • @Ali_k11 11 months ago

    When I try the SDK on UE 5.3 I get a "no TTS permission" error. What's the matter?

    • @metahumansdk 11 months ago

      Hi!
      TTS is available on the Chatbot tariff plan only.
      You can find more details about the tariffs in your personal account at space.metahumansdk.io/#/workspace or in this message in our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1176956610422243458

  • @I-MM-O-R-T-A-L 1 year ago

    I want the MetaHuman to start talking only when I'm close to him. How can I achieve that?

    • @metahumansdk 1 year ago

      Hi!
      You can use trigger events that start something when the trigger is activated. You can find more information about this in the Unreal documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/

  • @enriquemontero74 1 year ago

    Hello, one question: is this compatible with the ElevenLabs API? Or voice notes? Thanks.

    • @metahumansdk 1 year ago +1

      Hi!
      If they produce 16-bit WAV files, you can easily use them with our MetahumanSDK plugin.
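
      For illustration, a minimal sketch that checks whether a byte buffer is a 16-bit PCM WAV before handing it to ATL, assuming the canonical layout where the "fmt " chunk immediately follows the RIFF header:

      ```cpp
      #include <cstddef>
      #include <cstdint>
      #include <cstring>

      // True when the buffer starts with a RIFF/WAVE header whose "fmt " chunk
      // declares PCM (format tag 1) at 16 bits per sample.
      bool IsPcm16Wav(const uint8_t* Data, size_t Size)
      {
          if (Size < 36 || std::memcmp(Data, "RIFF", 4) != 0 ||
              std::memcmp(Data + 8, "WAVE", 4) != 0)
              return false;

          uint16_t AudioFormat = 0, BitsPerSample = 0;
          std::memcpy(&AudioFormat,   Data + 20, 2); // wFormatTag: 1 = PCM
          std::memcpy(&BitsPerSample, Data + 34, 2); // wBitsPerSample
          return AudioFormat == 1 && BitsPerSample == 16;
      }
      ```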

  • @jaykunwar3312 1 year ago

    Can we make a build (exe) using MetahumanSDK in which we can upload audio and the MetaHuman starts speaking with a body idle animation? Please help.

    • @metahumansdk 1 year ago

      Hi!
      Sure, we released a demo project with all those functions yesterday and shared it on our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1143934803197034637

  • @skyknightb 1 year ago

    Looks like the server is off or out of reach for some reason. The API URL shows different errors when trying to access it, be it generating the audio file or using an already generated one to create the lipsync animation. Or is the API URL wrong?

    • @metahumansdk 1 year ago

      Hi Skyknight!
      Can you tell our support a little more about the errors at support@metahumansdk.io?

    • @skyknightb 1 year ago

      @@metahumansdk I'm already getting support on your Discord, thanks :D

  • @rafaeltavares6162 1 year ago

    Hello, I followed all the steps, but my MetaHuman has a problem with voice playback. In short, when I enter the game my character starts talking, and after a few seconds the audio starts again; it's as if there were two audio tracks on top of each other.
    I don't know if this has happened to anyone else.
    Can you give me some advice to solve this problem?

    • @metahumansdk 1 year ago

      Hi!
      Is it possible to share the blueprint on our Discord server?
      You can also try using a state machine and synchronizing the face animation with the audio file, as shown in this video: th-cam.com/video/oY__OZAa0I4/w-d-xo.html

  • @leion44 1 year ago

    When will it be available for UE 5.2?

    • @metahumansdk 1 year ago +1

      We plan to release the MetahumanSDK plugin for Unreal Engine 5.2 this month.
      Our release candidate for UE 5.2 is available at this link: drive.google.com/uc?export=download&id=1dR30LXOwS1eEuUQ9LdQk9441zBTODzCL
      You can try it right now 😉

  • @krishnakukade 1 year ago

    I'm a beginner in Unreal Engine and don't know how to render the animation video. I tried multiple ways, but nothing seems to work. Can anyone tell me how to do this, or point me to any resources, please...

    • @metahumansdk 1 year ago

      Hi!
      You can use the official documentation from the UE developers: docs.unrealengine.com/5.2/en-US/rendering-out-cinematic-movies-in-unreal-engine/

  • @dreamyprod591 8 months ago

    Is there any way to integrate this on a website?

    • @metahumansdk 8 months ago

      Sure, you can try making a Pixel Streaming project, for example.

  • @guilloisvincent2286 1 year ago

    Would it be possible to put a TTS (like MaryTTS) or an LLM (like Llama) in the C++ code, to avoid network calls and keep it free?

    • @metahumansdk 1 year ago

      You can find detailed usage instructions on the official websites of MaryTTS and Llama. It would be great if you could share your final project with us.
      As for avoiding the network: currently our SDK works only with an internet connection, but you can generate a pool of facial animations for your project and then use those animations offline.
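
      For illustration, a minimal sketch of such an offline pool: clips generated once while online, keyed by dialogue line id, and looked up at runtime with no network (all names here are illustrative):

      ```cpp
      #include "Containers/Map.h"
      #include "Animation/AnimSequence.h"

      // Returns the pre-generated lipsync clip for a dialogue line, or nullptr
      // so the caller can fall back to a neutral idle animation.
      UAnimSequence* FindPooledLipsync(const TMap<FString, UAnimSequence*>& Pool,
                                       const FString& LineId)
      {
          UAnimSequence* const* Found = Pool.Find(LineId);
          return Found ? *Found : nullptr;
      }
      ```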

  • @Ysys-king 1 year ago

    Hi, I want the MetaHuman to voice the text I enter in the field below, but only the sound works, with no face animation. Can you help me solve this?

    • @metahumansdk 1 year ago

      Hi!
      You can try our demo scenes, which are included in the plugin content, and compare the level blueprints. You can also join our Discord community and share more details about your issue: discord.gg/MJmAaqtdN8

  • @unrealvizzee 1 year ago

    Hi, I have a non-MetaHuman character with ARKit expressions (from Daz Studio). How can I use this plugin with my character?

    • @metahumansdk 1 year ago

      You need to use your avatar's skeleton in the ATL node with the ARKit mapping mode.
      You can find examples of level blueprints in the plugin files included with every plugin version. In most of them we use a custom head.

  • @v-risetech1451 1 year ago

    Hi,
    when I try to do the same things from the last tutorial, I can't see mh_ds_mapping in my project. Do you know how to solve this?

    • @metahumansdk 1 year ago

      Hi V-Risetech!
      Please select Show Engine Content in the Content Browser settings; it should help.
      We also sent a screenshot for the same request in our Discord: discord.com/channels/1010548957258186792/1067744026469601280/1068066997675495504

  • @LoongKinGame 1 year ago

    I can't find the ceil.

  • @aihumans.official 1 year ago

    Where can I connect my Dialogflow chatbot? API key?

    • @metahumansdk 1 year ago

      Hi! At the moment our plugin uses ChatGPT; you can try to connect any chatbot yourself using our integration as an example. It would be great if you shared the result with us.

  • @qinjason1199 1 year ago

    The wave plays in the editor, but I get an error after using the ATL input: LogMetahumanSDKAPIManager: Error: ATL request error: {"error":{"status":408,"source":"","title":"Audio processing failed","detail":"Audio processing failed"}}. Where should I check?

    • @metahumansdk 1 year ago

      Hi, Qin Jason!
      It looks like you are trying to use TTS and ATL in the same blueprint. This is a known issue and we are working on it.
      Currently you can try using the combo node or generating the animation manually in the project. Feel free to share more details on our Discord server discord.com/invite/MJmAaqtdN8

    • @qinjason1199 1 year ago

      The TTS is accessed from other cloud services, but it is indeed in the same blueprint. Would splitting it into multiple blueprints avoid this problem?

  • @skeras1171 1 year ago

    Hi,
    when I try to choose mh_dhs_mapping_anim_poseasset in the Struct ATLMappingsInfo, I can't see this pose asset. How can I create or find this asset? Can you help with that? Thanks in advance, keep up the good work.
    Best regards.

    • @metahumansdk 1 year ago

      Hi @skeras!
      You need to enable showing Engine Content and Plugin Content in the Content Browser.

    • @skeras1171 1 year ago

      @@metahumansdk Done, thanks.

  • @kirkr 1 year ago

    Is this still working? It says "unavailable" on the Unreal Marketplace.

    • @metahumansdk 1 year ago

      Hi! That was Marketplace server maintenance; the plugin is now available to download.

  • @umernaveed6936 1 year ago

    Hi guys, I have been trying to figure this out for a week now. The problem is: how can we attach dynamic facial expressions and body gestures to ChatGPT responses? E.g. if the text returned is happy, the character should make a happy face, and if it is angry, an angry face. Can someone help me with this?

    • @metahumansdk 1 year ago

      Hi! Emotions are selected in a special drop-down menu when you create audio tracks from the text. Please try it.

    • @umernaveed6936 1 year ago

      @@metahumansdk Can you elaborate a little on this, as I am still stuck?

    • @umernaveed6936 1 year ago

      @@metahumansdk Hi man, can you guide me on how to create the emotions? I am still stuck on the facial expression parts and the explicit emotions when setting up the MetaHuman character.

    • @metahumansdk 1 year ago

      Hi!
      Sorry for the late answer.
      We shared a blueprint that can help focus the eyes on something here: discord.com/channels/1010548957258186792/1131528670247407626/1131993457133625354

  • @benshen9600 1 year ago

    When will the combo request support Chinese?

    • @metahumansdk 1 year ago

      Hi!
      Currently we use Google Assistant only for the answers in combo requests, so it depends on Google's supported languages: developers.google.com/assistant/sdk/reference/rpc/languages
      I can't promise that we will add a new language soon, but we have plans to make our solution friendlier for all countries.

  • @BAYqg 1 year ago

    Unavailable to buy in Kyrgyzstan =(

    • @metahumansdk 1 year ago

      Hi!
      Please check that:
      1. Other plugins are available.
      2. If you are using our site, the EGS launcher is started.
      3. The EGS launcher is updated.

  • @mahdibazei7020 7 months ago

    Can I use this on Android?

    • @metahumansdk 7 months ago

      Hi!
      We don't support mobile platforms, but you can try rebuilding our plugin with kubazip for Android. It might work, but I can't guarantee it.

  • @sumitranjan7005 1 year ago

    Can we get a sample code git repo?

    • @metahumansdk 1 year ago

      Hi! You can find the plugin files in the engine folder \Engine\Plugins\Marketplace\DigitalHumanAnimation

    • @sumitranjan7005 1 year ago

      @@metahumansdk Sample code of the project, not the plugin, to get started.

    • @metahumansdk 1 year ago

      We also have demo level blueprints with some use cases included in every plugin version, so you can use them as a project.
      You can find them in the plugin's demo folder.

  • @AlejandroRamirez-ep3wo 1 year ago

    Hi, does this support Spanish or Italian?

    • @metahumansdk 1 year ago +1

      Hi Alejandro Ramírez!
      You can use any language you want, because the animation is created from the sound.

  • @mohdafiqtajulnizam9421 1 year ago

    Please update this to 5.3... please!?

    • @metahumansdk 1 year ago

      Hi!
      Work in progress 👨‍🔧

  • @ПопулярновБългария 1 year ago +2

    My head is detached now.

    • @metahumansdk 1 year ago +1

      Hi Популярно в България!
      You need to use the Blend Per Bone node in the Face AnimBP to glue the head to the body when both parts are animated.

    • @Enver7able 1 year ago

      @@metahumansdk How do I do this?

    • @Fedexmaster91 1 year ago

      @@metahumansdk Great plugin, everything works fine for me, but I'm also having this issue: when playing the generated face animation, the head detaches from the body.

    • @Fedexmaster91 1 year ago

      @@Enver7able I found this video on their Discord channel:
      th-cam.com/video/oY__OZAa0I4/w-d-xo.html&ab_channel=MetaHumanSDK

    • @ПопулярновБългария 1 year ago

      @@metahumansdk Thanks!

  • @Silentiumfilms007 4 months ago

    5.4 please

    • @metahumansdk 4 months ago

      Hi!
      You can find a test build for 5.4 in our Discord: discord.com/channels/1010548957258186792/1010557901036851240/1253377959700463647

  • @commanderskullySHepherdson 1 year ago

    I was pulling my hair out wondering why I couldn't get the plugin to work, then realised I hadn't generated a token! 🙃

    • @metahumansdk 1 year ago

      Hi!
      Thank you for the feedback! A new version of the MetahumanSDK plugin is in moderation now, and it has more useful messages about the token. We hope these changes will make the plugin's behavior more predictable.

  • @리저드 4 months ago

    The video level in the way you show is like un3. sorry

  • @EnricoGolfettoMasella 1 year ago

    The girls need some love dude. They look so sad and depressed :P:P...

  • @inteligenciafutura 8 months ago

    You have to pay to use it; it doesn't work.

    • @metahumansdk 7 months ago

      Hi!
      Can you please share more details about your issue?
      Perhaps this tutorial can help you: th-cam.com/video/cC2MrSULg6s/w-d-xo.html

  • @inteligenciafutura 8 months ago

    Spanish?

    • @metahumansdk 7 months ago

      MetahumanSDK is language-independent: we generate the animation from the sound, not from visemes.

  • @kreamonz 7 months ago +1

    Hello! I generated a face animation and audio file (the time in the video is 5:08). When I open it, the file is only 125 frames, although the audio lasts much longer. In the sequencer I add the audio and the generated animation, and the animation is much shorter; when stretching the track, the animation repeats from the beginning. Please tell me how to adjust the number of frames per second?

    • @kreamonz 7 months ago

      I mean, how do I edit the number of sampled keys/frames?

  • @theforcexyz 1 year ago

    Hi, I'm having a problem at 2:32: when I generate my text-to-speech, it does not appear in my folders :/

    • @metahumansdk 1 year ago

      Hi!
      Can you please check that your API token is correct in the project settings?
      If your API token is correct, please send us your log file on Discord discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io

  • @luchobo7455 1 year ago

    Hi, I really need your help: at 6:29 I drag and drop my BP_metahuman, but it is not showing up in the blueprint. I don't know why.

    • @metahumansdk 1 year ago

      Hi!
      You need to use the MetaHuman from your scene's Outliner, not directly from the Content Browser.