Unreal Engine 5.5 Metahuman Facial Animation Using Any Video! No iPhone, HMC, or Depth Cam Needed!

  • Published Dec 31, 2024

Comments • 95

  • @orenrede
    @orenrede 18 days ago

    A true genius, both the plugin and your process are fascinating. Well done and thank you very much. You are my hero for the coming year, that's for sure.

    • @xlipdev
      @xlipdev  18 days ago

      Very kind of you ^^ Great to hear it helps 🤗 There's more exciting stuff on the way!

  • @emotional-robot-dynamics
    @emotional-robot-dynamics a month ago +6

    Wow, this is perfect for those of us not wanting to purchase an iPhone >> the workflow is easy to follow and the results are looking good >> great work, I hope you get lots of attention for this plugin 👌💯👀🎯😎🌟

    • @xlipdev
      @xlipdev  a month ago +2

      Thank you for the kind words! I hope Epic Games integrates this feature natively so no one needs a plugin!

  • @itsMBWAAA
    @itsMBWAAA 16 days ago

    Thank you YouTube for recommending this video to me! Can't wait to try this out!

    • @xlipdev
      @xlipdev  16 days ago

      @@itsMBWAAA Great to hear you like it ^^

  • @deygus
    @deygus a month ago +1

    This is absolutely amazing, definitely gonna give it a try asap.

    • @xlipdev
      @xlipdev  a month ago

      Thanks I'm happy you liked it ^^

  • @OverJumpRally
    @OverJumpRally 21 days ago

    INSANE!! Amazing job! I'm glad I don't need to buy an iPhone :D

    • @xlipdev
      @xlipdev  21 days ago

      @@OverJumpRally Thanks a lot! Hope you enjoy it ^^ By the way, I’ve surpassed iPhone performance internally 😆 I’m working on the new version, so keep an eye on it, I’ll be sharing the details soon ^^

  • @I_am_a_Legion
    @I_am_a_Legion a month ago

    Very cool! Thank you! Just what I need right now. Definitely subscribing; waiting for new videos!

    • @xlipdev
      @xlipdev  a month ago +1

      Thanks for the support! I'll keep updating, improving, and sharing more videos.

  • @s.patterson5698
    @s.patterson5698 a month ago

    Great job!! Looking forward to using it.

    • @xlipdev
      @xlipdev  a month ago

      Hope you enjoy it!

  • @incrediblesarath
    @incrediblesarath a month ago +1

    Love it! Thank you!♥

    • @xlipdev
      @xlipdev  a month ago

      Glad you liked it ^^

  • @squeezypixels
    @squeezypixels 20 days ago

    Great! I also considered doing something similar (using MiDaS to generate depth frames) but then came across your solution. Honestly, looking at the depth frames from the iPhone, I have doubts about whether this depth data is actually used in any meaningful way during animation generation. I can’t quite grasp how such noisy data could help differentiate emotions. Have you tried feeding in blank, solid-color depth frames instead of the generated ones? I suspect the results might not differ much.

    • @xlipdev
      @xlipdev  20 days ago +1

      That's cool! I actually started with MiDaS too, but since the models aren’t trained for faces, I couldn’t get accurate depth maps. I ended up finding other models specifically trained on faces that produced much higher-quality maps. However, each frame is generated individually, so the AI-generated frames aren't temporally consistent across the sequence.
      And yeah, I also noticed that depth frames don’t significantly affect facial features in most cases, so I decided to stick with my current implementation. That said, there are still scenarios where they can have an impact, mainly due to camera calibration (like head distance, rotation, etc.).
      From my testing, it seems impossible to run the MH performance pipeline with empty or solid depth maps; it just crashes without providing any clear info. I guess the Epic team wants to keep the magic under wraps for now! 😆 Or maybe I just haven’t dug deep enough or found the right sources to investigate further.
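
[Editor's note] The frame-to-frame inconsistency of independently generated AI depth described above can, in principle, be reduced by smoothing across frames. A minimal NumPy sketch of one such approach (an exponential moving average; this is an illustration, not the plugin's code):

```python
import numpy as np

def smooth_depth_sequence(frames, alpha=0.3):
    """Exponentially smooth per-frame depth maps to damp the flicker
    of depth frames that were each generated independently.
    frames: iterable of 2-D float arrays (same shape);
    alpha: weight of the current frame (lower = smoother, more lag)."""
    smoothed = []
    state = None
    for frame in frames:
        frame = np.asarray(frame, dtype=np.float64)
        state = frame if state is None else alpha * frame + (1.0 - alpha) * state
        smoothed.append(state.copy())
    return smoothed

# Tiny demo: a flickering constant-depth sequence settles toward a stable value.
noisy = [np.full((2, 2), 30.0), np.full((2, 2), 34.0), np.full((2, 2), 30.0)]
stable = smooth_depth_sequence(noisy, alpha=0.3)
```

The trade-off is temporal lag on fast expressions, which is why consistent landmark-based generation (as the author does) avoids the problem at the source.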

  • @creatorguy8391
    @creatorguy8391 a month ago +2

    Love it! Quality-wise how does this compare to default iPhone footage with depth?

    • @xlipdev
      @xlipdev  a month ago +4

      iPhone footage has real depth data, so compared to that, iPhone performances should be better 👍

    • @creatorguy8391
      @creatorguy8391 a month ago

      @@xlipdev Thanks man!

  • @XanderMoss
    @XanderMoss a month ago +2

    This looks incredible, well done! I discovered this video one day before going to the store to give $300 for an iPhone 12 just for its camera. Have you managed to compare it with iPhone captures to see what the plugin is missing?

    • @xlipdev
      @xlipdev  a month ago +1

      @@XanderMoss Thanks, I’m happy you liked it ^^ iPhone footage has the advantage of real depth data, which naturally should offer better performance. However, I don’t see a huge difference in overall quality. In the video, I also pointed out that the tongue isn’t being solved. This is because the generated depth maps rely on a base face mesh that currently doesn’t include inner-mouth vertices. I’m actively working on improving this.
      Keep in mind that generating depth maps requires computational power, so if you’re working with large or lengthy videos, converting them into depth frames can take some time. But you can still use the public repository I shared to generate your own depth frames for free and experiment with them as much as you like 😊

    • @MaestroSoundsFilms
      @MaestroSoundsFilms a month ago

      exactly what i was about to do lmao

  • @sathishapple5600
    @sathishapple5600 a month ago +1

    That's great, sir. Keep making updated videos.

    • @xlipdev
      @xlipdev  a month ago

      Glad you liked it! I will for sure!

    • @hoseynheydari2901
      @hoseynheydari2901 a month ago +1

      @@xlipdev DONT SELL IT PUT OUT FOR FREE

  • @weshootfilms
    @weshootfilms a month ago +1

    This is nice. Will it be compatible with new versions as they come out? Also, you should create a plugin to bring in motions very easily.

    • @xlipdev
      @xlipdev  a month ago +2

      Yes! I'll keep updating and improving the plugin as we go. Major engine updates typically bring breaking changes, but since it already supports the newly released 5.5 version, there's nothing to worry about. Can you clarify a bit more what you mean by 'you should create a plugin to bring in motions very easily'?

    • @weshootfilms
      @weshootfilms a month ago

      @ What I mean is that I find the workflow to bring animations into MetaHumans frustrating at times. I would love it if it could be like Reallusion. It’s so easy to apply a motion and get to storytelling.

    • @xlipdev
      @xlipdev  a month ago +1

      ​@@weshootfilms Ah, I see, and I totally agree. Currently, the entire Metahuman animator pipeline relies on the Metahuman plugin, so I suppose we’ll need to wait for the Epic Games engineers to make any changes to the pipeline.

  • @unrealfab
    @unrealfab a month ago

    so realistic human animation

    • @xlipdev
      @xlipdev  a month ago

      Thanks ^^ Your nick looks exciting btw 😅

  • @MalikGillani-x3z
    @MalikGillani-x3z 8 days ago

    Can we extract any video clip data through this plugin, which is not calibrated through metahuman animator?

    • @xlipdev
      @xlipdev  7 days ago +1

      Yes, but currently, extracted frames are scaled to 720p during extraction to match the default calibration data for plugin processing. However, in the new version, all settings will be exposed, enabling you to specify any aspect ratio (resolution) for extraction and processing as you create custom calibration assets with the plugin 👍

    • @malikgillani
      @malikgillani 7 days ago

      @@xlipdev cool 👍🏻👍🏻👍🏻

    • @malikgillani
      @malikgillani 7 days ago

      @@xlipdev But what about the initial calibration process? The front view, side views, and teeth view. How can that be done with a custom video clip?

    • @xlipdev
      @xlipdev  7 days ago +1

      @@malikgillani Initially, you’ll set your video and depth frame resolution/fps etc. before extraction (there’s no longer a restriction to the 9:16 aspect ratio!). Before creating the Footage Capture Data asset, you’ll need a Camera Calibration asset. With the plugin, you simply adjust the focal length, and it will automatically create and assign a Camera Calibration asset to your footage. From there, you can proceed with the regular MetaHuman pipeline for identity creation etc.

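[Editor's note] A camera calibration of the kind discussed above usually reduces to a pinhole intrinsics matrix built from the focal length and the frame resolution. A minimal Python sketch of that idea (illustrative only, not the plugin's actual code; the principal point is assumed to sit at the image center and pixels are assumed square):

```python
import numpy as np

def make_intrinsics(focal_px, width, height):
    """Build a 3x3 pinhole-camera intrinsics matrix, assuming square
    pixels and the principal point at the image center."""
    return np.array([
        [focal_px, 0.0,      width / 2.0],
        [0.0,      focal_px, height / 2.0],
        [0.0,      0.0,      1.0],
    ])

# Example: a hypothetical 720x1280 portrait frame with a 1200 px focal length.
K = make_intrinsics(focal_px=1200.0, width=720, height=1280)
```

Changing the extraction resolution means the focal length (in pixels) and principal point must be rescaled accordingly, which is presumably why the plugin regenerates the calibration asset when these settings change.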
    • @malikgillani
      @malikgillani 7 days ago

      @@xlipdev please do make a tutorial video for this too.
      Thank you for responding 😇

  • @camilo2069
    @camilo2069 16 days ago

    Hi! I bought the plugin, but when I try to process the depth frames, this is what I get. The process was not completed :(
    Please check the logs for more details. If the process ended unexpectedly midway, the already processed depth frames have been saved.
    Consider running the process again and remaining in the Unreal Engine editor until it completes to prevent the engine from unexpectedly terminating background tasks.

    • @xlipdev
      @xlipdev  16 days ago

      Could you try lowering the values for "Parallel Process Count" and "Batch Size"? If that doesn't resolve the issue, join the Discord channel so we can take a closer look together.

  • @cettura
    @cettura a month ago

    Nice job but is there a way to eliminate the small shaking of the head?

    • @xlipdev
      @xlipdev  a month ago

      You can set the "Head Movement Mode" to "Disabled" in the MetaHuman performance asset (in the video, I set it to "Control Rig") if you prefer to avoid any head movement or shake.

  • @ak_fx
    @ak_fx a month ago

    Hi there, I've been working on a similar system using AI as the depth map generator, although I'm not a developer so my knowledge is limited. Maybe look into either Sapiens depth or Depth Pro by Apple; the latter in particular gives really good and accurate depth results. The way I made it work is to use MediaPipe to track where the nose and the ears are; the user would then use a ruler or something to measure that distance on their face and input it, and you can use that information to scale the depth map correctly, and remove the background by making it a value of 100. The depth maps are surprisingly accurate, but my solution had many issues and I eventually gave up.

    • @xlipdev
      @xlipdev  a month ago

      Wow, our pipelines are really similar! Thanks for sharing these tools ^^ I also started by generating depth frames with AI, but the issue I faced was that each frame was generated individually, making it inconsistent across the overall sequence. I came across some open-source AI projects for training your own models to produce more reliable depth frames, but I’m still figuring out how to integrate them into the system.
      For now, I generate depth maps only for the face area using MediaPipe with the default mesh, ensuring consistency throughout the sequence. They just rely on landmarks extracted from the video frames to generate the depth maps. (Background depth removal and the other calculations are the same, to match Unreal's mesh generator.) Additionally, I’ve implemented options for rotation, scaling, and translation if necessary, along with a preview feature before generating all the frames, like what you did with the ears and scaling.
      Check out the repo I shared and give it a try! It might give you some new ideas, and I'd be very happy if you could share some feedback ^^
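
[Editor's note] To illustrate the landmark-based idea in the exchange above (a toy sketch, not the plugin's or the commenter's code): take per-landmark depth values in the normalized x/y coordinates MediaPipe reports, splat them into a depth image, and fill everything else with a constant background value:

```python
import numpy as np

def landmarks_to_depth(landmarks, width, height, background=100.0):
    """Splat (x, y, depth) landmarks into a depth image.
    x/y are normalized [0, 1] coordinates (as MediaPipe reports them);
    pixels not covered by a landmark keep the constant background value,
    echoing the 'remove the background, make it 100' trick above."""
    depth = np.full((height, width), background, dtype=np.float32)
    for x, y, d in landmarks:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        depth[row, col] = d
    return depth

# Toy example: two hypothetical "face" landmarks on a 4x4 image.
img = landmarks_to_depth([(0.25, 0.25, 25.0), (0.75, 0.5, 30.0)], 4, 4)
```

A real implementation would rasterize the whole face mesh (interpolating depth across triangles) rather than single pixels, but the principle is the same.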

  • @HappyMadArtist
    @HappyMadArtist a month ago

    Can I import another depth map images from another depth video that I created using other depth map methods?

    • @xlipdev
      @xlipdev  a month ago

      Unfortunately, other depth map generation methods/apps won't work out of the box! For Unreal Engine, depth data should be written in the "Y" channel, and depth values have to fall within certain ranges depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). You also need to double-check 0 and infinite values and fix them. Still, you can edit/modify your depth maps to meet these requirements (also make sure the image/depth dimensions match your calibration) and it should work. I initially started with the MiDaS depth AI model to create some depth maps, but it didn't go well, so I decided to create them myself, and the plugin relies on the same logic.
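
[Editor's note] A hedged sketch of what the sanitization step described above might look like (the 15-40 range comes from the comment itself; the function is illustrative, not the plugin's code):

```python
import numpy as np

def sanitize_depth(depth, lo=15.0, hi=40.0, fallback=None):
    """Clamp a depth map into the range the chosen device class expects
    (roughly 15-40 for the face area on iPhone 14+ calibration, per the
    comment above), replacing zeros, NaNs and infinities first."""
    depth = np.asarray(depth, dtype=np.float32).copy()
    if fallback is None:
        fallback = (lo + hi) / 2.0        # mid-range repair value
    bad = ~np.isfinite(depth) | (depth == 0.0)
    depth[bad] = fallback                 # repair holes before clamping
    return np.clip(depth, lo, hi)         # force everything into range

d = sanitize_depth([[0.0, 50.0], [np.inf, 27.0]])
```

Writing the result into the "Y" channel of whatever image format the pipeline expects would then be a separate, format-specific step.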

  • @I_am_a_Legion
    @I_am_a_Legion a month ago

    Friend, could you tell me how to download LOD0 in UE 5.5? In the new version, metahumans were optimized and even when downloading the cinematic version, only LOD3 and LOD5 are downloaded. This is terrible, the model's hair is square. All day I've been looking for where and how to download LOD0, I can't find anything.

    • @xlipdev
      @xlipdev  a month ago +1

      I haven’t explored UE5.5 in depth yet, but I’ll take a look, and if I find anything, I’ll share 👍

    • @xlipdev
      @xlipdev  a month ago

      I looked into the downloaded files in my case and found FaceNormal_MAIN_LOD.uasset, FaceCavity_MAIN.uasset, and FaceRoughness_MAIN_LOD.uasset. Based on the file sizes compared to other LODs, I assume these correspond to LOD0. If you've upgraded your MetaHuman, it's possible that something went wrong on the MetaHuman backend during the generation process. You might want to try regenerating it from scratch, or maybe I'm missing something 🤔

  • @bunyaminkartal2026
    @bunyaminkartal2026 27 days ago

    Hello friend. Where do you sell the plugin? On Discord? Discord is banned in Turkey and therefore can't be used. Are you actively selling this plugin at the moment, or should we wait for Fab? Thanks in advance.

    • @xlipdev
      @xlipdev  27 days ago +1

      Hey man, hope you enjoyed the video ^^ I put the link in the description. I’m actively uploading the new versions to the Ko-fi platform since Fab is still under review, and I’m not sure when it’ll be approved 🥲 I know Discord is banned in Türkiye, so it might be tricky for you to keep up with the new version announcements, but I will post them on Ko-fi just in case. So it’s totally up to you: if you feel lazy about updating every time, it’s probably better to wait for Fab, but if you want to try the new versions right away, you can grab it ^^

    • @bunyaminkartal2026
      @bunyaminkartal2026 27 days ago

      @xlipdev Thank you for your answer. You have made a great software. We have a project with friends and it was a great help to us. We will contact you as soon as possible. I wish you continued success.👍🏻

    • @xlipdev
      @xlipdev  27 days ago

      @@bunyaminkartal2026 Great, thanks for the kind words man ^^ Sure, you can contact me on social media as well, let's talk. Btw, in case you missed it, I've already shared the public repository for free in the other video; it does the same thing, so I'd say go try it, maybe it'll be enough for your project.

    • @bunyaminkartal2026
      @bunyaminkartal2026 22 days ago

      @@xlipdev My friend, I sent you a request on Instagram.

  • @inteligenciafutura
    @inteligenciafutura a month ago +1

    does it also work on 5.4?

    • @xlipdev
      @xlipdev  a month ago

      Yes! It was initially developed and tested on 5.4! Tbh, it gives slightly better results for the same depth frames on 5.4 compared to 5.5.

  • @febinfrancis8376
    @febinfrancis8376 23 days ago

    Assertion failed: Child [File:D:\build\U5M-Marketplace\Sync\LocalBuilds\PluginTemp\HostProject\Plugins\MetaHuman\Source\MetaHumanPipeline\Public\Pipeline\DataTree.h] [Line: 95]
    Data does not exist
    This issue appears when clicking Process in the MetaHuman Performance asset.
    Please, someone help me rectify this issue 🙏

    • @xlipdev
      @xlipdev  23 days ago

      It seems this issue is related to the official MetaHuman plugin from Epic Games. Which engine version are you using? You could try uninstalling and reinstalling the official MetaHuman plugin first to see if that resolves the problem; if not, please join our Discord channel and let's resolve it 👍

  • @SeyyahRessam
    @SeyyahRessam a month ago

    can i test it?

    • @xlipdev
      @xlipdev  a month ago

      We did an experimental version test session with some volunteers a couple of weeks ago, sorry you missed it :(

  • @TheShinorochi
    @TheShinorochi a month ago

    Does it consume as much computational resource as MetaHuman Animator?

    • @TheShinorochi
      @TheShinorochi a month ago

      I currently use an iPhone 15 Pro for capturing with the Live Link app, then MetaHuman Animator, but it does require a fair amount of GPU resources.

    • @xlipdev
      @xlipdev  a month ago +1

      @@TheShinorochi If you have an iPhone, you likely don't need to generate your own depth frames, as the iPhone already comes with a depth camera. In that case, you can use MetaHuman Animator without needing my approach (unless you want to animate something that wasn’t recorded with your iPhone). Regarding your question, the plugin currently uses Unreal Engine's integrated Python to compute depth maps, which shares CPU resources with the editor, making it relatively slow since it runs continuously. However, I'm working on a proof of concept (POC) that utilizes the PyTorch plugin within Unreal Engine to offload these computations to the GPU, which should significantly speed up the process, depending on the user's selection. I hope that answers your question.
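
[Editor's note] A CPU/GPU offload decision of the kind mentioned above often starts with a simple capability check. A hedged stand-alone sketch in plain Python (with an optional PyTorch import; this is not the UE PyTorch plugin's actual API):

```python
def pick_device():
    """Prefer the GPU when PyTorch with CUDA support is available,
    else fall back to CPU. Purely illustrative: a real integration
    would go through whatever device API the host plugin exposes."""
    try:
        import torch  # optional dependency; absence just means CPU
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

device = pick_device()
```

Graceful fallback matters here because editor-embedded Python environments frequently lack GPU-enabled packages.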

    • @TheShinorochi
      @TheShinorochi a month ago

      @@xlipdev Thank you, you explained it clearly.

  • @alphadraconian1114
    @alphadraconian1114 a month ago

    How long can the videos be?

    • @xlipdev
      @xlipdev  a month ago +1

      There is no limit to generating depth frames, but keep in mind that the process takes time as it handles all the frames extracted from the video.
      As an upcoming feature, I’ve moved the generation process out of the editor, allowing it to utilize full CPU power. This makes the process significantly faster and prevents the editor from being blocked. This improvement will be released soon in version 2025.1.7! (GPU processing is currently in the proof-of-concept stage.)
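
[Editor's note] Out-of-editor batch processing like the upcoming feature described above is commonly done with a worker pool. A minimal thread-based stand-in in plain Python (illustrative only; the plugin's actual implementation isn't shown in the video, and CPU-bound depth work would use processes rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame_index):
    """Stand-in for per-frame depth generation. The real work is
    CPU-heavy, which is why moving it outside the editor's own
    Python thread keeps the editor responsive."""
    return frame_index, f"depth_{frame_index:05d}.exr"

def process_in_batches(frame_count, workers=4):
    # Fan the frames out to a worker pool; map() preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, range(frame_count)))

out = process_in_batches(6)
```

The ".exr" file names here are hypothetical placeholders; the point is the fan-out/collect pattern, with worker count and batch size as the tuning knobs (compare the "Parallel Process Count" and "Batch Size" settings mentioned earlier in the thread).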

  • @Vlogdelguia
    @Vlogdelguia a month ago +1

    Will be a plugin?

    • @xlipdev
      @xlipdev  a month ago +1

      I shared it in the description; hopefully it will be published on the Fab marketplace soon.

    • @hoseynheydari2901
      @hoseynheydari2901 a month ago +1

      @@xlipdev YOU SHOULDVE PUT IT OUT FOR FREEEEEEEEEEEEE NOT TO MAKE 20$ FROM EVERY PURCHASE
      WHAT A RIP OFF

    • @MotuDaaduBhai
      @MotuDaaduBhai a month ago

      @@hoseynheydari2901 Hey PIECE of SHIT, F**K OFF! He worked on it so he deserves so earn money. I bet you never paid anything for any software in life and pirated all the shit. Cheap A$$ Piece of $HIT

    • @stalkershano
      @stalkershano 26 days ago

      @@hoseynheydari2901 Shame on you... this guy spent months and created a REALLY helpful plugin.. worth every penny )) When you make animations or assets, will you sell them for free? 😂

  • @gameswownow7213
    @gameswownow7213 29 days ago

    is your plugin free?

    • @xlipdev
      @xlipdev  29 days ago

      Unfortunately, the UE plugin isn’t free. But the public repo I shared earlier still offers the same functionality for free; you can check my other video.

  • @bsasikff4464
    @bsasikff4464 a month ago

    Bro, is it free?

    • @xlipdev
      @xlipdev  a month ago +1

      Unfortunately, the UE plugin isn’t free. It's set to "pay what you want" with a minimum of $19.99 for initial version. However, the public repo I shared earlier still offers the same functionality for free!

  • @FireF1y644
    @FireF1y644 a month ago

    What's the point of MetaHumans? You can't easily make a custom character. Yes, I've seen people using some voodoo to use custom humanoids, but it's ridiculously complicated, and completely useless for non-humans.

    • @xlipdev
      @xlipdev  a month ago

      I totally get where you're coming from, creating stylized characters can definitely be tricky. But on the bright side, Unreal gives you almost everything you need for human characters right out of the box. There are also tools out there that make it easier to integrate custom characters with MetaHumans. For non-human characters, yeah, MetaHumans isn't the best fit I guess, but if you check Unreal's roadmap, there's cool stuff like modular rigs that will make non-human character rigging a lot easier.

    • @MotuDaaduBhai
      @MotuDaaduBhai a month ago

      If you find MH complicated, don't use it. Folks like you want everything on a silver platter and don't want to work for it. What used to take weeks can now be achieved in a few hours, maybe a few days if you want awesome results. Epic provided a framework to start with, and people are already making amazing custom characters with the MH skeleton. If it were too easy to use, every Jack and Jill would jump on it and tbh it would be saturated. Why are you here anyway? Go do something easy and let us enjoy the complicated technology.

    • @FireF1y644
      @FireF1y644 a month ago

      @@MotuDaaduBhai All you do is create the same-looking boring "cinematic" characters anyway. For me it's easier to make something from scratch in Blender, or use Character Creator for humanoids, because their ecosystem supports any type of character, not just perfect humanoids.

    • @MotuDaaduBhai
      @MotuDaaduBhai a month ago

      @@FireF1y644 And how many blendshapes can a Character Creator model provide? 52, right? MH delivers over 600. Boring cinematic? That's not an issue. Whatever Epic provided is a starting point, and users have ways to make the best out of it. If you want to cry over it, be my guest. MH is customizable if you are willing to do it. A lot of Character Creator users end up porting their models to the MH skeleton because it does the best facial animation out of the box. I will take that over 52 blendshapes of boring ARKit. If you prefer instant coffee, what can I say.

    • @ak_fx
      @ak_fx a month ago +1

      @@FireF1y644 Yeah, and Character Creator is ridiculously expensive for what it does. There is a use case for MetaHumans, which is realistic characters in a mocap pipeline. If you try to use it for something it's not meant to do, then obviously you're gonna have to put in more work... And MetaHumans can look unique; if you want studio-level VFX, you have to put in the work. For stylised characters I just rig in Blender, because it gives faster and maybe better results than CC4 or MetaHumans.

  • @belkheirifathi4726
    @belkheirifathi4726 a month ago

    You should have told us that it is not free, and that for its price you can buy an iPhone... You have become dependent on this annoying method of advertising.

    • @xlipdev
      @xlipdev  a month ago

      The video begins with a promotional content notification, and I also mention at the start that if you prefer a free approach, you can check out my previous video, where I already shared the script resources for free (and updated them just today to make them better). I'm not sure what else I can do. And that's funny, where can you buy an iPhone for 19 bucks? I'm willing to get one!

    • @MotuDaaduBhai
      @MotuDaaduBhai 29 days ago +1

      @belkheirifathi4726 $20 is pocket change compared to an iPhone with depth cam. Get over it or go somewhere else to nag. Stupid freeloaders on the mother earth!

    • @belkheirifathi4726
      @belkheirifathi4726 23 days ago

      @@xlipdev You can buy an iPhone for this price only in dreams, it's just an exaggeration hahaha. I liked this plugin, I would like to get it, and I hope to get help from you if I need it. Thank you!

  • @AIDesigner5323
    @AIDesigner5323 a month ago

    Hello, I tried your plugin in the previous version, 5.4, but I found that some expressions aren't captured and restored well enough. May I ask, is there any improvement in this version? The demo I saw in the video is very good.

    • @xlipdev
      @xlipdev  a month ago

      The plugin functionalities are the same for both engine versions, but the performance processing depends on the official MetaHuman plugin, and since different plugin versions are used for each engine, there may be some differences. Honestly, I’ve noticed slightly better results with engine 5.4 using the same depth frames. Please make sure the face is always clear and visible in your video, and that the depth frame preview fits well before generating the frames. You can message me on Discord, and I hope I can help you get better results ^^