Metahuman Facial Motion Performance Test with Faceware Mark IV HMC

  • Published Aug 25, 2024
  • “I am still learning” ~ Michelangelo
    I put together this facial motion performance test in Unreal Engine, as I am continuously learning how to fully utilize these incredible high-fidelity metahumans, created by 3Lateral and Epic Games.
    I recently had the pleasure of being invited to do a Faceware webinar, where I discuss my facial motion workflow with metahumans and facial motion in Unreal Engine: • Workflows for Creating...
    For anyone interested in learning more about my facial motion workflow with Metahumans, I share a video of this process in the webinar.
    Special thanks to the team at Faceware (Pete Busch, Catarina Rodrigues, Joshua Beaudry, Tatjana Vejnovic and Brandon Suyemoto) for having me on their webinar, and to Karen Chan for being a wonderful host. This company listens to indie creators like myself, and I am extremely grateful for their incredible products and support.
    Thank you to the companies whose tools have made my metahuman motion capture workflow possible:
    Glassbox Technologies (Norman Wang & Johannes Wilke)
    Xsens (Katie Jo Turk)
    MANUS™ (Arsène van de Bilt)
    Puget Systems and NVIDIA Design and Visualization
    Thank you to an incredible community of collaborators for being a part of my journey and creative process:
    Jonathan Winbush (best teacher ever)
    Bernhard Rieder (lighting and cinematics)
    Michael Weisheim (character artist)
    Daniel Rodriguez Cadena (character textures)
    PixelUrge (character modifications)
    Konstantin D’Lakrua (rigging)
    Marc Morisseau (feedback)
    Fellowship team Gabriel Paiva Harwat, Emanuele Salvucci, Diana Diriwaechter, Jet Olaño, Pavan Balagam & Ernesto Argüello
    Gear I Use:
    Faceware Mark IV HMC: facewaretech.c...
    Faceware Studio: facewaretech.c...
    GlassboxTech Live Client Plugin: glassboxtech.c...
    Xsens Link Suit & MVN Animate Pro: www.xsens.com/...
    Manus Prime II Gloves: www.manus-meta...
    Puget Systems Workstation & Nvidia RTX A6000: www.pugetsyste...
    Link to my Discord: / discord
    #unrealengine #metahumans #virtualproduction #facemocap #mocap #facialmotioncapture #motioncapture #faceware #xsens #liveclient #glassboxtech #nvidiartx #pugetsystems #epicgames #metaverse

Comments • 28

  • @JonathanWinbush • 2 years ago +2

    This looks amazing I still don't know how you do it!

  • @GlassboxTech • 2 years ago +1

    Wonderful work, @Feeding_Wolves !!!

  • @StyleMarshall • 2 years ago +1

    Looks really good! 👍

  • @Jobutsu • 2 years ago +1

    Incredible tech!

  • @onethingeverything • 2 years ago

    Awesome work as always!

  • @LoganPinney • 2 years ago

    Amazing!!!!!! Looks so good, love to see you pushing the boundaries of UE and mocap - rock on Gabby!!

  • @calvinromeyn • 2 years ago

    Beautiful work!

  • @mocappys • 2 years ago +1

    Lovely work as always, Gabby. The results look really solid. Off to watch your Faceware video next to see how you do it 👍

  • @meowchat6175 • 2 years ago

    Absolutely amazing, I love it😍🐱

  • @virtucci • 2 years ago

    Amazing! And inspiring at the same time. Thank you for uploading this.

    • @FeedingWolves • 2 years ago

      Right back at you! Your metahuman work is awesome!

    • @virtucci • 2 years ago

      @@FeedingWolves Thank you! I have been learning a lot from your videos!

  • @triMirrorTV • 2 years ago

    Great work 😎

  • @arcinfinium • 2 years ago

    Wow 🔥🔥🔥

  • @original9vp • 2 years ago

    Clean!

  • @fracturedreality88 • 2 years ago +1

    Faceware is still the best, no question.

  • @neXib • 2 years ago +1

    Awesome work. It's really mostly the minor details in the muscles around the mouth that are the big tell; I imagine that's insanely hard to perfect, though.

    • @BlackDidThis • 2 years ago

      Depends... mostly on what you're referring to. But even on the most optimistic reading, calling it insanely difficult would be a stretch. "Not worth the investment yet" may be more suitable; that would pretty much be my definition.
      You see, the "insane" part of the work is pretty much all done, thanks to the amazingly versatile rig. What's mostly left is "custom" work, rather than anything harder than what already sits behind the presented technology. The coolest thing about discovering new ways to do things through technical accomplishment is how people build upon it.
      I can understand that you probably meant the "extra mile", or what the movie industry calls the "extra ten percent" that costs ten times as much as the previous 90 percent. So I'm not being judgmental, nor am I up for a fight. But since I'm here with like-minded individuals who are obviously interested in similar technologies, allow me to unpack your choice of words, hopefully without being too annoying.
      Years have passed since then, but I still recall how hard the work was back in NZ trying to set up FACS for a digital character called Gollum, for his very impressive performance in a movie sequel that was yet to come out. The improvement from using a joint-based rig was SO great, and proved incomparable to the linear deformations of its predecessor, blend shapes. But even so, the massive number of "correctional" blend shapes, as well as the many joint orientations, meant that in order to performance-capture the mocap actor's facial characteristics, the digital character had to be almost completely remodeled for the second movie (so much so that the actor reportedly said at one point, "He looks like a hybrid of my grandfather and my newborn child"). But the results were OUTSTANDING. So outstanding, in fact, that the directorial/production staff were willing to let the character look almost completely different in the second film (albeit he was seen very little in the first).
      Now, my point: with all the technology and brilliance of that amazing crew, and the over-promoted performance of its actor, the WHOLE show rested on an army (not an exaggeration) of VERY talented animators who would literally tweak every vertex, one by one when necessary, to match what they observed of the actor on the face-focused camera. (No, it was not as automated a technology as the "making-of" documentaries promoted.) All that extra work was because the performance depended too heavily on visual matching.
      Years later, we have the impressive (and, in my opinion, underappreciated) and massively advanced "Alita" rig, which is light years ahead of the Gollum rig series (I say series because it consisted of many rigs across many movies), even though new technologies have been introduced into the later versions of the franchise featuring Gollum.
      But good sir, what we have HERE runs circles around any part of the Alita rig sets I was permitted to examine. Since it is not CUSTOMIZED to the limitations of a specific actor or actress, it is a completely "generalized" rig performing what, just a few years back, was basically declared impossible, with university papers even stating that joint-driven rigs could not be a viable approach to scan-data-based articulation.
      So you see: no, it is not the harder part that remains; it is the customizing part. And though this young lady, with her very fortunate features, would be worth the investment of a customized rig for her performances, the thing is that this is not a showcase of "what has already been done". It is a showcase of "what was thought to be impossible" just a little while back. Customized rigs and models have been well practiced and improved upon in the many movies we've been watching for over a decade.
      Also note: back in the "Gollum days", all the buzz was FACS versus blend shapes, and the industry was rereading a decades-old research paper to familiarize itself with the key facial positions of expressions. But now, when you Google FACS, you get very few hits related to digital facial rigging. It has become a "standard", as phonetic facial blend shapes once were. I even remember having to do a massive amount of scripting in Maya just to use a brand-new "claim" (not even a technology, mind you, a "claim") that you could, or should, be able to use something called "normal mapping" to project geometry.
      I think that, appreciated as it is, the overlooked success of the MetaHuman rig sets is the thing called the "DNA" under the hood, which hopefully, within months, will also become pretty much an industry standard (or even become obsolete, given massive new improvements in AI-driven ragdolling of FACS, as we once had for articulated biped limbs with Endorphin).
      This is definitely a step in the right direction, and its results are creepily impressive.

  • @TheDarkestAgeUnreal • 2 years ago +1

    Amazing quality, especially the lip syncing. Is this raw capture, or did you touch up the lip sync after?

    • @FeedingWolves • 2 years ago

      Thank you so much! 90% was Faceware, and for the other 10% I used the Face Control Rig board in Unreal to fine-tune it.
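      The idea in this answer - raw capture curves drive most of the motion, with a thin layer of hand-authored offsets on top - can be sketched in plain Python. This is only an illustration of the layering concept; the function and data shapes are hypothetical, not the actual Unreal or Faceware API.

      ```python
      # Sketch: layering manual fine-tune offsets on top of a raw facial capture curve.
      # All names here are illustrative; this is NOT the Unreal/Faceware API.

      def apply_finetune(raw_curve, offsets):
          """Add sparse manual keyframe offsets onto a raw capture curve.

          raw_curve: list of float control values, one per frame (from capture).
          offsets:   dict {frame_index: delta} authored by hand on a control rig.
          """
          tuned = list(raw_curve)  # leave the raw capture untouched
          for frame, delta in offsets.items():
              # Clamp to the usual 0..1 range of a facial control.
              tuned[frame] = max(0.0, min(1.0, tuned[frame] + delta))
          return tuned

      # Example: a jaw-open control over 6 frames; nudge frames 2-3 to sharpen lip sync.
      raw = [0.10, 0.40, 0.55, 0.50, 0.30, 0.05]
      tuned = apply_finetune(raw, {2: 0.15, 3: 0.10})
      ```

      The key design point is that the capture stays untouched and edits are additive deltas, so the manual 10% can be re-applied or discarded without redoing the capture.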

  • @prodev4012 • 2 years ago +1

    Amazing.. dang, I really want to get into facial motion capture, but the cameras are just so expensive: $23k for the Mark IV and $5k for the Indie. Are there any cheaper cameras you could recommend to use with Faceware if I have a custom helmet rig? I've heard that cell phones overheat too fast, though.

  • @conbuch9758 • 2 years ago

    Was any of this animation cleaned up? Or is this the raw input via your software chain?

  • @ibrews • 2 years ago

    Looks wonderful!! Lol, who was calling you??

  • @binyaminbass • 2 years ago

    How big a difference is there between Glassbox and the free Live Link that I can get with Faceware?

    • @FeedingWolves • 2 years ago

      Not sure, as I have always used the Glassbox plugin.

  • @donalddade5643 • 2 years ago

    She talks so nonchalantly about $70K worth of hardware. It damned well better be impressive.

  • @yoteslaya7296 • 2 years ago +1

    When can we expect some deepfake action?