MetaHumanSDK
Joined 1 Dec 2021
MetaHumanSDK is a groundbreaking plugin that brings real-time lip sync from text or audio to life, creating a new dimension for engaging with 3D characters. Whether you’re a film director, developer, or cartoon animator, MetaHumanSDK simplifies and speeds up the animation process, giving you the freedom to create stunning, high-quality results faster than ever.
MetaHumanSDK Lip Sync Test with Amazon Web Services’ Zoey
Have you already seen Zoey, an Autonomous Virtual Human, from Amazon Web Services (AWS)? What do you think of their solution? We couldn’t resist testing our lip sync technology with it! Check out the original video here - th-cam.com/video/-umCNWZk9Yg/w-d-xo.html.
MetaHumanSDK is a multilingual lip sync plugin. Our technology supports multiple languages and delivers precise speech synchronization for Unreal Engine. We can adapt our lip sync to any case.
By the way, credits in this video also go to 11labs for providing synthetic voice.
Link to our discord: discord.com/invite/kubCAZh37D
Link to our website: metahumansdk.io/
👇 Let us know in the comments which case you’d like us to test next!
Views: 479
Videos
Making a Realistic MetaHuman in Unreal Engine | Realtime lip sync with MetaHumanSDK
Views: 7K · 9 months ago
MetaHuman SDK is an innovative tool for facial character animation, powered by ML. This plugin for Unreal Engine streamlines the animation process, ensuring real-time facial expressions and lip-syncing thanks to our cloud service. Bring your projects to life with ease and high precision with MetaHuman SDK. Use MetaHuman SDK for your creative projects and enjoy a variety of functions: • Text to ...
User Showcase AI influencer
Views: 754 · 10 months ago
Introducing the first user case in our User Showcases playlist! A huge thank you to our talented member and his team for this video. Catching the trend wave #IAmAnEverybodyWantsTo with an AI influencer. What else do you think an AI influencer can do? Share your cases with us on discord discord.com/invite/jHtA39RfwB and perhaps the next video will feature your project!
Tutorial: Personal account. How to get an API token.
Views: 2.9K · 1 year ago
The MetaHumanSDK team has prepared personal accounts for you, and this tutorial shows how to log in and get tokens. Follow these simple steps: log in to your personal account (space.metahumansdk.io/), then 1. Enter your email address 2. Add the tokens that are relevant to you, or generate a new one 3. Update the UE MetaHumanSDK plugin to the current version (which supports the account workflow) (UE 4...
Tutorial: how to install the improved Epic Pixel Streaming module
Views: 2.9K · 1 year ago
We have prepared a tutorial on installing the improved Epic Pixel Streaming module, which lets you create a chat with a MetaHuman in the browser and works together with our service that generates lip sync and facial expressions. Why do you need it? It's a great starting point for creating your own chat with an avatar on the web. What is included: - customizable web UI with npm and TypeScript compatibility - tweakable...
MetaHuman SDK: use case
Views: 4.2K · 1 year ago
Check out how precisely the lip sync is set up! We added our plugin to a MetaHuman from Unreal Engine. Check out how realistic it turned out! The avatar handles all the pauses, and the lips perfectly match the audio track. Download our plugin and try creating your own content. Write your impressions in the comments. Link to our discord: discord.com/invite/kubCAZh37D
Tutorial: how to use Azure TTS in Metahuman SDK
Views: 4.9K · 1 year ago
We have prepared a tutorial on how to start using Azure TTS from MetaHuman SDK in Blueprints. Azure TTS offers a wide selection of languages and dialects with different accents that sound natural and will make the avatar even more human-like. Test it and share the results in the comments. Link to our discord: discord.com/invite/kubCAZh37D
Tutorial: Unreal Engine ChatGPT with Metahuman SDK
Views: 22K · 1 year ago
You can now use the GPT artificial neural network, from the same family as ChatGPT, to communicate via MetaHuman SDK! ChatGPT is a tool that uses artificial intelligence to create a more natural and productive dialogue between the user and the system. In this tutorial you can see how to use ChatGPT with our plugin. You can find the blueprint from this tutorial here: discord.com/ch...
Tutorials: how to use the plugin
Views: 48K · 1 year ago
MetaHuman SDK is an automated AI solution to generate realistic animation for characters. This Unreal Engine plugin allows you to create and use lip sync animation generated by our cloud server. We have prepared a detailed tutorial describing how to use our plugin: -integrate TTS -add audio to lip sync -add audio to lip sync streaming -integrate a chat bot -combine everything into a single comb...
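To picture what the combo request in these tutorials boils down to, here is a minimal sketch of the kind of payload a cloud lip-sync service receives: audio encoded as base64 plus animation parameters. The function and every field name below are illustrative assumptions, not the actual MetaHumanSDK API.

```python
import base64
import json

# Hypothetical sketch of a cloud audio-to-lipsync (ATL) request body.
# Field names ("audio", "fps", "mapping") are assumptions for illustration;
# consult the plugin documentation for the real request format.
def build_atl_request(wav_bytes: bytes, fps: int = 30, mapping: str = "MetaHuman") -> str:
    payload = {
        "audio": base64.b64encode(wav_bytes).decode("ascii"),  # raw WAV, base64-encoded
        "fps": fps,          # frames per second of the returned animation
        "mapping": mapping,  # target blendshape/bone mapping preset
    }
    return json.dumps(payload)

request_body = build_atl_request(b"RIFF....WAVEfmt ")  # stand-in for real WAV data
```

The point is only that the server does the heavy lifting: the client ships audio and settings, and gets an animation asset back.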
Tutorial: how to prepare the Demo scene
Views: 14K · 3 years ago
MetaHuman SDK is an automated AI solution to generate realistic animation for characters. This Unreal Engine plugin allows you to create and use lip sync animation generated by our cloud server. In this tutorial, we describe in detail how to start using our plugin: add a TTS and a voice track, as well as merge everything into a combo request. Test and tell us what you got in the comments. Link ...
Hi, thanks for sharing. Anyway, where can we find the blueprint nodes for Display Result, Display Chat Response, and Play Lipsync Animation?
@dzeeriq, Hi! We have an example project for UE 5.2, where we connected ChatGPT through the third-party OpenAI plugin and received data from it for our plugin to work with. The project is available at drive.google.com/file/d/1VZ4wziAbMuIn4UINf6FwMp4QQgxOmmSk/view This project uses custom plugins, so it requires rebuilding the included plugins. For the rebuild you need VS 2019 or higher visualstudio.microsoft.com/vs/older-downloads/ installed with the recommended parameters for Unreal Engine docs.unrealengine.com/5.2/en-US/setting-up-visual-studio-development-environment-for-cplusplus-projects-in-unreal-engine/
Wow, thank you so much for the tutorial. I want to ask: can we make a conversational system? I ask my character a question and the character answers as an AI, with voice?
Hi! Yes, you can send a ready audio track in wav or ogg format to Talk Component or ATL to get animation of the sent phrase. If you need to receive responses from your assistant, you can also send their text responses to Talk Component, TTS or Combo. You can see the Talk Component tutorial here th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
@@metahumansdk thank you so much
Hello, I have a MetaHuman and I want it to speak the text I specify. I watched the videos, but I ran into a problem: when I open the Blueprint section, the "metahumanSDKAPIMANAGER" section does not appear. Can you help?
Hi! We recommend using the updated Talk Component tutorial; it is better suited to newer versions of Unreal Engine and makes the plugin easier to use. Link to the video - th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
Hi! Can your avatars be integrated into the website or just into the games?
Hi! Absolutely! Our avatars can be interactive not only for games but also for websites, virtual events, and much more. You can check out the video th-cam.com/video/nq4EFu88oXM/w-d-xo.html for some great examples! For more details, feel free to reach out at support@metahumansdk.io - we'd love to chat further!
hi! I am having problems with the blue print, since nothing about metahuman sdk appears in the functions, could you help me with that?
Hi! Api Manager has been renamed to Lipsync Api Manager in the latest version of the plugin. Please try calling the plugin functions under this name.
MetaHuman SDK does not show up in my plugin settings
Hi! Try the plugins section in the project settings and look for the name Lipsync there. This changed in the latest version, which may cause confusion.
When do you plan to offer Metahuman fully supported for mac os? With the Mesh to Metahuman plugin etc? After all these years of release I hope you take this part of the community with you as well.
The lip Sync is only working for 5 seconds. It is not working for longer audio files.
Hi! The limit for generating one animation of 5 seconds is present only on the Trial plan. If you have a different plan, please email us at support@metahumansdk.io and we will check your account.
Slow. There is a clear lag...
Thank you for your feedback! We would love to understand more about your experience. Could you please clarify what aspect of our service felt "slow" to you? Did you encounter any specific delays or issues while using the plugin? Your input helps us improve.
@4:30 ... he seems a little drunk to me 😂
It's just that this IDLE was done on a ship during a storm🚢
Hello! When will the 5.4 version of this plugin come?
Hi! You can already download the 5.4 version from the marketplace www.unrealengine.com/marketplace/en-US/product/digital-avatar-service-link
Has anyone got this working with a reallusion character creator rigged model? Did you have to seperate the head? Which preset did u use?
Hi! Character Creator supports the ARKit blendshape set from version 3.4, so you can just select the ECustom option in the ATL Mapping Mode settings; this should help.
@@metahumansdk ok, and under the ECustom option, what mapping asset and bone asset do I select? If I don't select any, the animation it creates is blank.
If possible, please share your Unreal Engine version and send us the project log file via discord discord.com/invite/MJmAaqtdN8 or email support@metahumansdk.io. At the moment we can't reproduce the error, and animation is created correctly for custom meshes without additional mapping options.
Does it work with all languages?
Hi! Our plugin is language independent and generates animations from audio files. For better results we recommend using clean recordings of voice, or generated voice, without effects or noise. Also, speech that is too fast may cause some artifacts in the animation.
How are you making the head and shoulders move along with the speech too?
Metahumans have 2 skeletons, one for the head and one for the body. You can direct animations to both skeletons at the same time and set them up in a suitable way so that the movements match your wishes.
If only the video quality were 1080p (or higher) and the parameters adjustment section was zoomed in, it would be even better
Hi! Thank you! As of late we've only been posting videos in 4k, but the zoom idea is great and we'll try to improve the experience in new videos.
I just created an account, so I should still be within the free period, but it stops at the login screen to begin with. I have tried from various wired and wireless networks but I just can't log in. Does anyone have a solution to this problem?
Hello @dsk0313 We are very sorry that you encountered these difficulties. Please contact us at support@metahumansdk.io and we will figure it out.
5.4 please
Hi! You can find a test build for 5.4 in our discord discord.com/channels/1010548957258186792/1010557901036851240/1253377959700463647
I need to know how to capture face reactions and lip syncing via an Android phone, and also how to capture body movements, thank you
Hi! Currently, our plugin only supports Windows and Linux operating systems.
@@metahumansdk will it work on every metahuman? And is it free?
Hi! You can use the plugin for free for two days after registering at the space.metahumansdk.io/
Your blueprint was different from the one you shared
Hi! We posted a finished example project because plugins change over time, so this video has lost relevance, and that's one of the reasons for changing the Blueprints. In the example project from discord we used specific versions of plugins and made Blueprints taking into account the use of specific versions. You can rely on the example project to understand the blueprints logic, but if you update the OpenAI plugin - we can't guarantee that the example project will work correctly.
@@metahumansdk so you should tell the right tutorial bro
The video level in the way you show is like un3. sorry
I'm stuck here at 1:10, when importing the MetaHuman into Unreal Engine via Bridge. I already downloaded the MetaHuman preset, but when I add the MetaHuman to UE 5, nothing happens. Can someone help me with this?
Hi! Once you have downloaded the MetaHuman in Quixel Bridge, you need to export it to the project. After that, open the Content Browser in the project and find the MetaHumans folder, which contains the exported MetaHumans.
Thanks for posting the video, really inspiring. I just want to clarify: is it possible to make the MetaHuman speak Bahasa Indonesia? I'm having some difficulties developing this kind of product and really need your help. Thanks in advance.
Hi! Azure and Google TTS standard voices are currently supported. As far as I know, Azure has the id-ID Indonesian (Indonesia) language. You can also use your own TTS and send the audio to the ATL (Audio To Lip-sync) node.
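As a rough sketch of what calling Azure TTS for an Indonesian voice outside the plugin looks like, the snippet below builds the REST request following Azure Speech service conventions (regional endpoint, SSML body, subscription-key header). The region, key placeholder, and voice name are assumptions; check the Azure documentation for the voices available on your plan.

```python
# Sketch of an Azure TTS REST request for an Indonesian (id-ID) voice.
# REGION and VOICE are assumptions for illustration; the key is a placeholder.
REGION = "westeurope"
VOICE = "id-ID-ArdiNeural"

def build_azure_tts_request(text: str):
    # Azure Speech regional synthesis endpoint
    url = f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/v1"
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder, not a real key
        "Content-Type": "application/ssml+xml",
        # 16-bit PCM output, which suits audio-to-lipsync input
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
    }
    ssml = (
        "<speak version='1.0' xml:lang='id-ID'>"
        f"<voice name='{VOICE}'>{text}</voice></speak>"
    )
    return url, headers, ssml

url, headers, ssml = build_azure_tts_request("Halo dunia")
```

POSTing that SSML body to the URL with those headers would return a WAV you could then feed to the ATL node.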
I am trying to create a virtual voice assistant that is integrated with ChatGPT and talks to me with GPT-based responses. I have created the voice assistant and it works perfectly, generating voice and text output. Could you please tell me how to use this response output and convert it to lip-synced voice and animation on MetaHumans? I want to send the text/voice outputs generated by my Python code and convert them to lip sync. What are the communication methods, or is there a tutorial for this?
You can use Talk Component > Talk Text for your task; you only need to provide the text to generate the voice and animation. th-cam.com/video/jrpAJDIhCFE/w-d-xo.html
Can we have a step-by-step screen shots of it, please? it's very hard to follow the steps
Please visit our Discord server discord.com/invite/kubCAZh37D or ask about advice to the e-mail support@metahumansdk.io
Hey, what about Mac? The SDK is not available for it. ☹
At this time we have removed the Mac version, due to a virus found in one of the third-party libraries we used for the plugin on macOS.
@@metahumansdk Virus on Mac? Hmm… ok. Hope it can be fixed asap. Thanks. 😃
There isn't even a hint of realism here
We are sorry to hear about this. Could you please send us an email to support@metahumansdk.io about what you expect from a service like ours? It would help us to be better!
@metahumansdk I don’t mean the lip synchronization to the speech (it’s really at a high level), but the appearance of the characters themselves, the title of the video clearly states: “Creating a *REALISTIC* metahuman”, so where is the realism here? Even a schoolchild will immediately understand that these are computer characters... Watch videos about Metahumans by blogger JSFILMZ or others like him, that’s what I understand *REALISM!*
What is the difference between audio-to-lipsync and the real-time-lipsync service that is listed in the available services in the chatbot subscription?
Hi! The chatbot can use streaming requests that generate animations in chunks, so the generated animation runs faster than a normal ATL request.
I'm on UE 5.3.2 and the Play Animation node is not found. I only get Play Animation with Finished Event and Play Animation Time Range with Finished Event... They are not compatible with the Get Face node and the MetaHuman SDK combo output animation.
Hi! If I understand it right, you have a delay between the start of the animation and the sound. You can try using the Talk Component, which is much easier to use and includes prepared blueprints for all requests at runtime th-cam.com/video/jrpAJDIhCFE/w-d-xo.html If you need more advice please visit our discord discord.com/invite/kubCAZh37D or send an e-mail to support@metahumansdk.io
Why the heck did my code stop working? The GetChunk function is returning invalid, and the code was WORKING a few weeks ago; today I opened the same build and the speech doesn't work anymore. Did you guys change anything? I'm not even using any AI services, just the standard audio-to-lipsync conversion. What the hell happened? I have a deadline on this project; imagine my surprise when I open it and the code doesn't work anymore...
Hi! We didn't change anything on our side; it depends only on your pricing plan. For more information please visit our discord discord.com/invite/kubCAZh37D or send an e-mail to support@metahumansdk.io
Hello! I generated a face animation and audio file (at 5:08 in the video). When I open it, the file is only 125 frames, although the audio lasts much longer. In the sequencer I add the audio and the generated animation, but the animation is much shorter, and when I stretch the track the animation repeats from the beginning. Please tell me how to adjust the number of frames per second.
I mean, how to edit the number of sampled keys/frames
This tutorial needs to be updated.
Can I use this on Android?
Hi! We don't support mobile platforms, but you can try to rebuild our plugin with kubazip for Android. It might work, but I can't guarantee it.
Really good tutorial! Can you also tell me how to achieve body and head motion with facial expressions while metahuman is talking? Just like you have talking metahumans in your video
Hi! You can generate animation with emotions from our plugin, or use additive blending to add your own emotions directly to selected blend shapes.
You have to pay to use it; it doesn't work
Hi! Can you please share more details about your issue? Perhaps this tutorial can help you: th-cam.com/video/cC2MrSULg6s/w-d-xo.html
spanish?
MetahumanSDK is language independent. We generate animation from the sound, not from visemes.
Error: fill api token via project settings First time using this sdk, how can I fix this?
Please contact us via e-mail at support@metahumansdk.io and we will help you with the token.
Hi. Can I add an audio file to get lip sync?
Yes. We recommend you to use 16-bit PCM wave for it.
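For anyone who wants a quick test input in exactly that recommended format, the snippet below writes a short tone as a 16-bit PCM mono WAV using only the Python standard library. The file name, duration, and tone are arbitrary choices for illustration.

```python
import math
import struct
import wave

# Write a short 440 Hz test tone as a 16-bit PCM mono WAV, the format
# recommended above for lip-sync input. All parameters are arbitrary.
def write_test_wav(path: str, seconds: float = 1.0, rate: int = 16000) -> None:
    n = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)    # mono
        w.setsampwidth(2)    # 2 bytes per sample = 16-bit PCM
        w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * 440 * i / rate)))
            for i in range(n)
        )
        w.writeframes(frames)

write_test_wav("test_tone.wav")
```

If your source audio is in another format, converting it to 16-bit PCM WAV first (e.g. with ffmpeg) keeps you within the recommended input.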
Is there any way to integrate this into a website?
Sure, you can try making a Pixel Streaming project, for example.
i am not able to register at the site for the token
Hi! Please contact us through the support@metahumansdk.io We will help you with the token while our site on reconstruction 🛠️
Hello, how can I use the ATLStream animation with an Animation Blueprint? MetaHumans have a post-processing AnimBP, so if I run the raw animation it basically messes up the body animations.
Hi! Please try starting with the Talk Component. This is the easiest way to use the streaming options. Here is a tutorial about it: th-cam.com/video/jrpAJDIhCFE/w-d-xo.html If you still have issues, please visit our discord discord.gg/MJmAaqtdN8
It shows as unavailable in Unreal Engine Marketplace.
Hi! Please reopen EGS launcher. It should fix a bug with unavailable content.
Absolutely stunning!