Tutorials: How to Prepare the Demo Scene tutorial

  • Published on 31 Dec 2024

Comments •

  • @AndreiGhenoiu
    @AndreiGhenoiu 2 years ago +7

    Thank you so much for this! For MetaHumans, in the Generate Lipsync Animation popup, once you select your Face_Archetype_Skeleton, under Mappings Mode choose EMapping asset, and for the Mapping asset choose DigitalHuman_To_MetaHuman_Mappings. Hope this helps.

  • @NeoxEntertainment
    @NeoxEntertainment 3 years ago +5

    Hey, awesome SDK and video about it! I had some problems setting it up for a MetaHuman I created. I imported the audio and opened the lipsync plugin to select the skeleton; I used the one from the face of my MetaHuman, but it didn't work. Do you know what I did wrong?

    • @metahumansdk
      @metahumansdk 2 years ago +1

      Hey! Please stay tuned for our updates. We are going to record additional tutorials specifically for MetaHumans. You can follow the roadmap and bug fixes here: metahumansdk.canny.io/
      In your case, could you send a screenshot of the "Generate Lipsync" window parameters?

    • @AlexandarKaravelov
      @AlexandarKaravelov 2 years ago +2

      Hi there!
      Aren't you already used to Unreal Engine tutorials? :))
      What I did to remap the lipsyncing to a MetaHuman character is:
      1. You need the Face_Archetype_Skeleton from the face of your MetaHuman character.
      2. Right-click on the WAV audio file and create a Lipsync animation.
      3. Choose your WAV file (sound).
      4. Set the skeleton to the Face_Archetype_Skeleton.
      5. Set Mapping mode to EMappingAsset.
      6. Tick the "Set up for Metahuman" box.
      7. Under Mapping asset you should have a DigitalHuman_to_Metahuman_Mapping asset, and you should choose it!
      8. Generate.
      9. TADAAA! :)
      Now the animation works perfectly on the MetaHuman character (Face)!

    • @NeoxEntertainment
      @NeoxEntertainment 2 years ago +1

      @@AlexandarKaravelov awesome thank you :)

  • @niharika1003
    @niharika1003 a year ago +1

    Hello, currently trying out your plug-in. On Mac, Unreal crashes once we click Generate Lipsync. On Windows, nothing is generated after clicking Generate. Please help! This is too cool to miss!

    • @metahumansdk
      @metahumansdk a year ago

      Hi Niharika!
      Can you share your log file with our support mail support@metahumansdk.io or on the Discord server discord.gg/MJmAaqtdN8?

  • @AnchorLee
    @AnchorLee 2 years ago

    I am really excited about this plugin... I can generate a face animation, but there is nothing in it... the lips don't move. Really frustrating...

    • @metahumansdk
      @metahumansdk 2 years ago

      Thank you for using our plugin. First, could you try the demo scene included in the plugin content, for a better understanding of the pipeline?

    • @AnchorLee
      @AnchorLee 2 years ago

      @@metahumansdk Thank you for the reply. Where can I get the demo scene? I’ve tried, but can’t find it.

  • @groovygrover6178
    @groovygrover6178 3 years ago +1

    If you want to use a self-made character, how do you set up the skeleton of your character to make it work with this plugin?

    • @metahumansdk
      @metahumansdk 3 years ago

      You can use any skeleton settings; the main point is to use the right rig for the character's face.

  • @JCPhotographyMallorca
    @JCPhotographyMallorca 3 years ago +3

    Nothing happens when I try to generate the lipsync.

    • @metahumansdk
      @metahumansdk 2 years ago +1

      Hey! Please try it on the Default scene and follow the tutorial. Do you have any questions about that?

    • @JCPhotographyMallorca
      @JCPhotographyMallorca 2 years ago

      ​@@metahumansdk Got it to work, sorry, it just took a little searching! Now I'm just trying to figure out how to get it to work on a MetaHuman rather than the sample project.

  • @georges8408
    @georges8408 3 years ago

    It is interesting!!! But the truth is that it is a bit outdated, since now with an iPhone you can sync both lips and facial motion... There is also a question here: if someone wants to make a lipsync, they also want facial sync or animation... So although it is very interesting (and free), it is a bit useless for a more holistic facial mocap animation.

    • @metahumansdk
      @metahumansdk 3 years ago +1

      Thank you for your comment! Actually, we are solving different tasks... Using our plugin you don't need to create or record animation at all; you just need audio, and you can create the animation at runtime. As for facial sync, it's included and we are improving it in each version :)

    • @NickAB94
      @NickAB94 2 years ago

      The Live Link and other realtime solutions aren't quite good enough for me. I've been using NVIDIA's A2F as a layer on top of the face mocap (from Faceware, which has better lip sync even than the iPhone link), which for me has slightly improved the lip movement quality. I'm yet to try this one out.

  • @donhsi
    @donhsi 2 years ago

    How does this compare to NVIDIA Audio2Face?

  • @魏凤熙
    @魏凤熙 2 years ago

    Is there any related expression file that can be used together with this? In a simple voice dialogue the character's eyes look particularly dull; expressions without lip action, such as anger or happiness, are needed in order to produce expressions that match the real situation. Are there any relevant solutions?

    • @metahumansdk
      @metahumansdk 2 years ago

      Hey! If I understood right, you need different expressions for different situations. You can generate an appropriate animation file for different emotions. Right now only calm, happy and angry are available, but more are coming; we are working on it. Stay tuned for updates on the roadmap: metahumansdk.canny.io/

    • @魏凤熙
      @魏凤熙 2 years ago

      @@metahumansdk I don't seem to be able to map the generated animation to my MetaHuman's skeleton. I can generate files and see expression curves, such as eyes and mouth, but the character's face doesn't move in the animation. It seems this has something to do with names.

  • @晓龙纹路
    @晓龙纹路 2 years ago

    The following modules are missing or built with a different engine version:
    MetahumanSDK
    MetahumanSDKEditor
    Engine modules cannot be compiled at runtime. Please build through your IDE.

    • @metahumansdk
      @metahumansdk a year ago

      Hi 晓龙纹路,
      Perhaps you are trying to use the SDK on the wrong version of the Engine. Can you share more details about the SDK version and Engine version?
      You can also ask questions in our Discord, with logs and screenshots.

  • @Amelia_PC
    @Amelia_PC 2 years ago +1

    Hey! Seems impressive! But it just doesn't work. I've followed all the steps and still there's no "DigitalHumanAnimation Content". Tried a clean project, versions 4.26 and 4.27. Nothing. "Show Plugin Content" is set on. Without the sample, I can't do anything.

    • @qntmstudios1214
      @qntmstudios1214 2 years ago +2

      Make sure show engine content is on as well.

    • @Amelia_PC
      @Amelia_PC 2 years ago

      @@qntmstudios1214 Yup, Show Engine Content was on too. I've tried everything and had no content generated. (Tried versions 4.26 and 4.27 with different projects: a MetaHuman project and a clean project. I'm using Windows 10 x64.) Thanks for answering.

    • @metahumansdk
      @metahumansdk 2 years ago +2

      We added that problem for deeper testing, and we'll check it again on all versions. Thank you for your comments.

    • @Amelia_PC
      @Amelia_PC 2 years ago

      @@metahumansdk Thanks :)

  • @jonlon4406
    @jonlon4406 a year ago

    The Audio to Lipsync node is too slow at runtime. Is there a way to go faster?

    • @metahumansdk
      @metahumansdk a year ago

      The delay of the animation depends on the length of the audio; the average time for ATL generation is 70% of the audio length.
      If you use ATL Stream, in the best cases we get a response with the 1st chunk in 3 seconds.
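
The figures quoted in this reply imply a simple back-of-envelope estimate. A minimal Python sketch, assuming the developer's stated averages (70% of audio length for full generation, ~3 seconds to the first streamed chunk in the best case); the function name is hypothetical:

```python
def estimate_atl_delay(audio_seconds: float, streaming: bool = False) -> float:
    """Rough estimate of lipsync generation delay, based on the figures
    quoted above: full ATL generation averages ~70% of the audio length,
    and ATL Stream returns its first chunk in ~3 s in the best case."""
    if streaming:
        return 3.0  # approximate time until the first animation chunk arrives
    return 0.7 * audio_seconds

# A 60-second clip: roughly 42 s for a full generation,
# versus about 3 s to the first streamed chunk.
print(estimate_atl_delay(60))
print(estimate_atl_delay(60, streaming=True))
```

These are averages and best cases, not guarantees; actual latency also depends on server load and network conditions.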

  • @陈亭羽-c5o
    @陈亭羽-c5o 2 years ago

    Hello, why is DigitalHumanAnimation Content not displayed after I open the plugin and set the project settings? I checked the downloaded version; there should be no download error.

    • @metahumansdk
      @metahumansdk 2 years ago +1

      Hello! Don't forget the "Show content" setting for plugins in UE.

    • @陈亭羽-c5o
      @陈亭羽-c5o 2 years ago

      @@metahumansdk Thanks! The problem is solved.

  • @PerFeldvoss
    @PerFeldvoss 3 years ago

    Thanks, but where do I find instructions on how to get a "lipsync animation generated by our cloud server"? I assume that you cannot use just any audio file... if so, the instructions are not clear?

    • @metahumansdk
      @metahumansdk 3 years ago

      You can find the full documentation here: arvrlab.notion.site/MetaHuman-SDK-Documentation-555624b940b943be8e0d7096cc0eb6ba
      Recommended audio file parameters:
      - Specification: PCM
      - Format: WAV
      - Bit depth: 16
      - Sample rate: 16000 Hz
      - Speaker channels: Mono

  • @georgeluna6217
    @georgeluna6217 2 years ago

    Any tutorial on how to set it up with a MetaHuman?

    • @metahumansdk
      @metahumansdk 2 years ago

      Hey! Please stay tuned for our updates. We are going to record additional tutorials specifically for MetaHumans. You can follow the roadmap and bug fixes here: metahumansdk.canny.io/

  • @mistert2962
    @mistert2962 2 years ago

    How long should audio files be? I tested a 15-minute audio file, but it is not working. A 2-minute audio file works fine. I am trying 3 minutes right now.

    • @metahumansdk
      @metahumansdk 2 years ago

      Actually, it doesn't matter, but could you clarify what kind of audio file you use? Please send detailed settings.

    • @mistert2962
      @mistert2962 2 years ago

      @@metahumansdk I wanted to create a 15-minute animation for my 15-minute learning topic. But now MetaHumanSDK seems to be gone. I cannot connect to your server. Is it because of this whole war thing?

  • @qumetademo
    @qumetademo 3 years ago +1

    How many languages does it support?

    • @metahumansdk
      @metahumansdk 3 years ago

      It's a language-agnostic plugin, but it is most tested in Russian. Stay tuned for more updates.

  • @晓龙纹路
    @晓龙纹路 2 years ago

    Why is my voice out of sync with my mouth?

    • @metahumansdk
      @metahumansdk a year ago

      We need more details about your use case, because there are too many scenarios in the engine and all of them need their own settings.
      If possible, share more details in our Discord.

  • @sarbatore
    @sarbatore 3 years ago

    Where is our BP_FaceExample ?

    • @mischaschaub5864
      @mischaschaub5864 2 years ago +1

      Cannot find it either. Please help!

    • @metahumansdk
      @metahumansdk 2 years ago

      Hey! You can find it in the "DigitalHumanAnimation Content" folder. Don't forget the "Show content" setting in UE.

  • @aitrends8901
    @aitrends8901 2 years ago

    No intention to criticize this work, but I would expect better fidelity...

  • @luiginicastro1101
    @luiginicastro1101 2 years ago

    When I right-click to generate lip sync, I don't see the option to generate lipsync animations. Any suggestions?

    • @luiginicastro1101
      @luiginicastro1101 2 years ago +1

      Oh, my mistake, I didn't notice my audio file wasn't .wav.

  • @hernanmartz
    @hernanmartz 2 years ago

    Will it be compatible with UE5?

    • @metahumansdk
      @metahumansdk 2 years ago +1

      Dear Hernán Martz,
      yes, we are working on a plugin update and expect it in June.

    • @hernanmartz
      @hernanmartz 2 years ago

      @@metahumansdk awesome!! Thanks. 😃

    • @hernanmartz
      @hernanmartz 2 years ago +2

      @@metahumansdk still waiting for it. 😢

  • @FormerDeveloper
    @FormerDeveloper 2 years ago

    Which Unreal Engine version?

    • @metahumansdk
      @metahumansdk 2 years ago

      4.26 or 4.27

    • @FormerDeveloper
      @FormerDeveloper 2 years ago

      @@metahumansdk I can't download my custom MetaHuman in Unreal Engine 5 Preview 2. Can you help me with this?

    • @FormerDeveloper
      @FormerDeveloper 2 years ago

      @@metahumansdk I'm using Quixel Bridge...

  • @nextgenmatrix
    @nextgenmatrix 3 years ago +1

    Here is an example of how powerful this SDK is when you put your mind to it: th-cam.com/video/O7OlLeUgUSI/w-d-xo.html

    • @stuff7274
      @stuff7274 3 years ago +1

      Damn, that's pretty good stuff!

    • @_pastras
      @_pastras a year ago

      Looks horrible honestly

  • @VladGohn
    @VladGohn 2 years ago

    Guys, could you have made the demo project look any worse? I understand everything, but this is just dreadful. Couldn't you have made a normal face in your own MetaHuman and set up a bit of lighting? It looks just awful, the audio quality is awful, and as documentation it is completely unfit. Thanks((

  • @2010Edgars
    @2010Edgars a year ago

    Not good lipsync. Nobody speaks like this.

    • @metahumansdk
      @metahumansdk a year ago

      Dear EdgarsAL_77, thank you for the feedback.
      We have already completed some improvements to our lipsync animation, and every day we try to make the generated lipsync better.
      You can watch our tutorial to compare the results; hope you will enjoy this one: th-cam.com/video/xo474w8-4ac/w-d-xo.html