There are tolerances in voice sync, backward and forward, because of the way the brain is used to compensating (for someone speaking IRL across the table vs. someone across the room). In general, if you experiment with moving your audio earlier it'll be "tighter" to the performances and elevate the end product (says the dude who's worked in audio specifically for longer than I care to admit!)
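For anyone who wants to try that nudge outside their editor, here is a minimal Python sketch that trims the first few milliseconds off a dialogue WAV so it lands earlier against the picture. The 50 ms offset and the filenames are placeholder assumptions; tune the amount by eye and ear per shot.

```python
import wave

OFFSET_MS = 50  # hypothetical nudge; adjust per shot

with wave.open("dialogue.wav", "rb") as src:
    params = src.getparams()
    skip = int(src.getframerate() * OFFSET_MS / 1000)
    src.readframes(skip)  # discard the first OFFSET_MS of audio
    frames = src.readframes(src.getnframes() - skip)

with wave.open("dialogue_early.wav", "wb") as dst:
    dst.setparams(params)  # header frame count is corrected on close
    dst.writeframes(frames)
```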
Looks great! As time passes, processes that used to take days, even weeks or months, can now be done in a day.
Oh how times have changed.
literally EXACTLY what I was looking for!!! Please make another!!!!
EDIT: Also pls go into depth about prices and alternatives
Will do!
Looks great... Using a similar workflow in my productions. Looking forward to seeing more...
Amazing high-quality work, thanks for sharing your process! I first saw your work on Reddit and in the UE forums, and your videos have now started popping up in my YouTube feed 🙂 Looking forward to seeing your future videos!
Awesome, thank you!
Thanks for sharing this pipeline
Also, yes, a much more in-depth breakdown would be extremely beneficial!
Impressively dope!!! So inspiring for future projects!
Lovely and Inspiring -Thank you!
Thank you!
Great workflow. I would love to see some more breakdown. 🙏🏾
In my experience, making the lip sync a bit earlier than the audio helps make it look more in sync.
Thanks, I've definitely been trying that now
though it's a mocap video, I really like the fascinating cinematic lighting coming along the faces
Thank you! All about the lighting
Could you please try to make an in-depth tutorial on this short-film-making process? That would be really helpful.
Yes I definitely will, and soon. I need to find a mocap solution that is more accessible to more people, since I am using Move.AI which is a pro software license and also not the best solution for everybody. I am thinking of doing a short film using entirely the Rokoko free dual camera setup, if I can get it to look good enough.
Wow, great sharing!
Thank you!
Thanks for sharing! Isn't a Rokoko suit a better deal in terms of pricing and quality?
It's a great question. It depends on whether you are getting gloves with your suit or not, and a few other things. Since this video, I have figured out how to get a much better calibration out of Move, so I don't have such horrible jitters that need to be smoothed. That makes a big difference. With Move, I can do two people at the same time, unlike with a suit. Also, suits are a pain to put on and take off, especially if you are putting one on someone who has never worn one, or who has a much bigger or smaller body. Also, I have not used a Rokoko suit, although I have seen a lot of videos of them in use, and it requires a similar amount of cleanup to Move, albeit maybe in different ways. Ultimately, they are different tools that can be used to achieve similar outcomes, each with their tradeoffs. I am preferring Move and other AI-based solutions. This is based on my experience owning an Xsens Awinda, and one of the very first Perception Neuron suits (which was godawful).
@@NorthwoodsInteractive thank you for your answer! Good luck with your projects!
I predict that $7,000 price will be coming down soon.
Great ! Thanks for sharing
Interesting and very cool!
The pipeline is intriguing and appears to be functioning, though the final output is not yet ready for a finished show. It requires post-production work to make the animations believable and lifelike, which is where the final touch from an animator becomes essential. Nonetheless, it is commendable to have reached this stage! I believe an additional layer is needed to refine some animations, such as hand gestures and facial expressions, to bring everything to life. However, I understand that this would compromise real-time performance, which is not the current objective. I am interested in learning more about the tools you are using, as I am working on a similar workflow and still experimenting. Keep up the great work, and thank you very much!
Low-pass Butterworth. Great tip. Cheers, gents.
Our pleasure!
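For anyone wondering what that filter actually does to the curves, here is a minimal sketch of the same idea outside UE: a zero-phase low-pass Butterworth run over a single exported animation channel with SciPy. The frame rate, cutoff, and CSV filename are assumptions to adapt to your own capture.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0        # capture frame rate (assumed)
CUTOFF_HZ = 6.0   # assumption: mocap jitter lives above ~6 Hz

# One float per frame, e.g. a bone rotation channel exported from the take
curve = np.loadtxt("hand_rot_x.csv")  # hypothetical export

# 2nd-order low-pass; Wn is the cutoff normalized to the Nyquist frequency
b, a = butter(2, CUTOFF_HZ / (FPS / 2.0))
smoothed = filtfilt(b, a, curve)  # zero-phase: smooths without shifting timing
```

The zero-phase filtering (`filtfilt`) matters for mocap: a plain one-pass filter would smooth the jitter but also delay the motion slightly, which reads as sluggish performance.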
Can you make a video on how you linked animations together so the root and mesh moved correctly in Sequencer?
All the animations come in as root motion. If I have to link animations, I just cut around it rather than blending.
Sucks that Move AI raised their Indie prices after the Beta. Business is business though! Great job.
If anyone sees this and is wondering, it starts at $15/month with 3 minutes of tracking included, then $7.50 per minute of tracking after that. Each person tracked counts against the time separately, so 90 seconds included and $15 a minute after that for tracking two people.
I was more interested in their multicam option, until I saw that the price was 'contact us'.
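To make that pricing concrete, here is a quick back-of-the-envelope sketch for a hypothetical shoot, using the numbers quoted above; the footage length and performer count are made up.

```python
BASE_MONTHLY = 15.00  # includes 3 tracked minutes
PER_MINUTE = 7.50     # per tracked minute beyond that

footage_min = 10   # hypothetical: 10 minutes of footage
performers = 2     # both in frame the whole time

tracked_min = footage_min * performers        # each person bills separately -> 20 min
overage = max(0, tracked_min - 3)             # 17 minutes over the included 3
cost = BASE_MONTHLY + overage * PER_MINUTE    # 15 + 17 * 7.50 = 142.50
print(f"${cost:.2f}")  # $142.50
```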
@@hellomistershifty I have a tutorial coming using Move One and Metahuman Animator
It sounds like you’ve tried MoveAI and Xsens. Have you also tried Rokoko?
I currently have a Rokoko suit and I'm curious which you think gives the best results.
Rokoko has some clear issues, but the overall quality doesn't appear too different from what you've achieved here. That said, I've seen some results with MoveAI that blew me away. So I'm wondering if those examples had a lot of cleanup, or if you're still figuring out how to get the best results?
Move.AI is almost amazing for me. Because it's vision-based, as opposed to inertia-based, the accuracy of the movements seems to be better than Xsens. However, I get a bunch of micro jitters in my animations that need to be filtered out, and I am not sure why. Also, I think the hands really throw off the rest of the animation. They don't get captured really well unless you have your hands close to one of the cameras, and then the retargeter seems to make them a bit claw-like. I can probably fix that with some pre-made hand poses that I can just drop on an additive layer over the hands in Sequencer.
@@NorthwoodsInteractive Interesting! So you think MoveAI might be better than Xsens?
I've wondered whether paying the extra for an Xsens suit might give better results than Rokoko, but that suggests the improvement probably isn't worth the price.
I think using those pre-made hand poses will work well for you. I basically do the same thing with the Rokoko gloves. The actual movement is really good, but the key poses are often off. Fix the poses and all the in-betweens usually look pretty good 👍
That's great: amazing volume size!
Haha thank you, empty garages make great mocap volumes!
Really love it! Could you also show your script or your approach in recording?
This was a super rough script we wrote in about 20 minutes, and memorized in about the same time, with a little bit of freestyle. We need some proper Monty Python writing up in here!
This is awesome! Where are you based? I’d love to work with you! - Vo Williams
Chicago and Minneapolis
needdd moreeeeee!!!!!!!!!!!!!!!
The facial mocap does not seem to be capturing the movements around the eyes or forehead on the metahuman, so the expressions are more wooden than they should be. What in the workflow could be modified? Does the actor need more extreme expressions? Did you attempt to tweak and exaggerate those controllers at the end of the workflow?
I didn't tweak the face animations at all. In my other video with the talking pig, I used the transform tool to elongate the curves for certain parts of the face, like the mouth control, lips, and brow.
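If it helps to see what "elongating the curves" amounts to numerically, it is essentially scaling the animation values away from a rest pose. A minimal sketch, assuming a per-frame control curve exported to CSV; the filename, gain, and rest-pose choice are all hypothetical.

```python
import numpy as np

curve = np.loadtxt("mouth_ctrl.csv")  # hypothetical per-frame control values
GAIN = 1.3                            # >1 exaggerates the motion, <1 dampens it
rest = curve[0]                       # assume frame 0 holds the rest pose

exaggerated = rest + GAIN * (curve - rest)  # stretch every key away from rest
np.savetxt("mouth_ctrl_exaggerated.csv", exaggerated)
```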
Super workflow, this is what we need, a real time-saver. This workflow has so many applications. Is there any possibility of this workflow working with Blender? I understand UE is great for this workflow, but just wondering... how would it work with Blender, like create an AI character, convert it to 3D, and then rig it in Blender? Cheers
I think Metahumans are only supposed to be used in Unreal Engine, as per their license stipulations. I have seen people pull metahumans into Blender, to do custom topology work. Metahuman Animator is all inside Unreal Engine, and is one of the main reasons I have made Unreal Engine and Metahumans the center of my 3D animation.
Using metahumans outside Unreal Engine breaks their TOS. I wouldn't recommend it
Could you guys list which guy on Upwork got the job done? :> Also looking into having some custom characters turned into metahumans using Metapipe
Arman Avetisov
@@NorthwoodsInteractive you the mvp!
Very nice. I am just developing a similar workflow for myself. I have two Xsens suits though. What is your opinion on that Rokoko Headrig? Is it sturdy enough? What about wobbling and fit? And is the camera location far enough from the face that there will be no clipping/focus problems? I am torn between buying this Rokoko Headrig and the second option I'm considering, the Facegood D4 helmet, which is much pricier but has two custom cameras. What are your thoughts on that? Regards, and keep up the inspiring work ;)
I like the Rokoko Headrig. If you are using an iPhone bigger than a mini, you will have to unscrew the phone mount and mount it backwards, if that makes sense, in order for the phone to be far enough from your face. Other than that, it is pretty good. It will move around a little bit if you are really whipping your head around, but mostly it is snug, and small movements don't seem to affect the animation quality too much. Overall it is super good since it will fit any head size.
@@NorthwoodsInteractive Thank you for the answer. Did you also use the iPhone for recording the voice, or did you have separate dedicated mics on set?
@madisepler7059 I just used the phone audio
Have you tried anything like doing a facepalm or scratching your face with the facial capture thingy? I assume it won't work and will screw with the capture, but I'm still curious about it.
I have, and it does mess up the face capture. Check out my latest video, I did some smoking shots where I brought a cigarette to my face.
Who did you use on Upwork for the conversion? Having a hard time finding one.
Arman Avetisov, he's great! Does amazing character work from scratch, too
Great video. I tried this workflow two months ago for a short and all it did was detach the head. Have you encountered this?
Oh yes, metahumans like to do that. It was a huge PITA for a long time. Try making your metahuman the child of a basic actor, then adding the actor to your timeline, instead of just adding the metahuman on its own. Then, whenever you have to move the metahuman, move the actor instead. Sometimes the head will still get detached if you are doing this, so just restart UE; that should fix your sequence.
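If you'd rather script that parenting than drag things around in the Outliner, an editor-Python sketch along these lines should work; the actor labels here are hypothetical, and the exact attach call may need adjusting for your UE version.

```python
import unreal

# Hypothetical labels; use whatever your actors are named in the Outliner
actors = unreal.EditorLevelLibrary.get_all_level_actors()
parent = next(a for a in actors if a.get_actor_label() == "MH_Root")
metahuman = next(a for a in actors if a.get_actor_label() == "BP_Guard")

# KEEP_WORLD preserves the metahuman's current transform while parenting it
metahuman.attach_to_actor(
    parent,
    "",  # no socket
    unreal.AttachmentRule.KEEP_WORLD,
    unreal.AttachmentRule.KEEP_WORLD,
    unreal.AttachmentRule.KEEP_WORLD,
    False,
)
```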
full please
Coming very soon
Sir, how much does Move AI cost? Is it expensive? I saw their plans, but I wonder, if you do a lot of such animations, would a suit be cheaper?
Sir I give that info in the video :)
4:35 I don't have experience in 3D animation, but that graph seems nightmarish. There's a channel called Rotted that seems to be making a show in UE4, idk if they use the same tools
The graph is rough, which is why the only thing I really do with it is select all the keyframes and apply a filter. I'm definitely not getting too granular with it.
Also, I checked out that show, Rotted. Pretty hilarious, and yeah, they seem to be using the same tools: Metahumans, Metahuman Animator for the face, but then just keyframing the body animations, so they do not seem to have a mocap solution. Still pretty funny.
Have you had any luck getting a hold of the team at move? I've been trying to get a move multicam plan for a few weeks now and no one will respond to multiple outlets of communication
I think I initially reached out through their website? Not sure, but it was something like that. They have been fairly responsive for me
Do you have any advice for attaching a head to a metahuman body? I am struggling to get my animated body and head to stay attached.
So, I usually make my metahuman a child of a basic actor, and use the actor to move and position the metahuman, since once it has animations applied to both the face and body in sequencer, it can get a little buggy and the head can come off and just float. If you recorded your face animation separately, and did not use a headrig, then make sure you disable the neck and head movement when processing the animation in Metahuman Animator.
"AAA" is a bit of a stretch here, this is nowhere near the level of quality that it claims to be. The issues is that none of these animations were actually polished after capturing.
Yes you are correct, and the hands are pretty stiff too. I was mostly referring to the facial animation, which is pretty much there. When combined with even rudimentary mocap, the results are decent, especially if you frame out as much of the jank in the animations as possible
@@NorthwoodsInteractive No, sorry mate, but even the facial animation is nowhere near "there". Firstly, the audio and video aren't synced properly (there's a noticeable delay); secondly, it clearly needs an animation pass. It's quite uncanny, and the poses don't really hold well together. Like, if this were stuff for background characters, sure, but in a cinematic with closeups etc.? Not really. The shaders also need a lot of work; you can really tell the guard is a metahuman character, and it just looks like a bunch of assets that don't really belong together, all bundled in a scene. You could somewhat fake it with some different lighting, to bring the non-metahuman assets to the same level of fidelity as the metahuman face, or you could modify the MH shader to be lower in fidelity and match the rest. I've done my fair share of facial animation at my previous studio, so I kinda have that ingrained into me at this stage
Unfortunately, even the facial animation is just facial capture; there's a lot that goes into facial animation to get it looking natural, and you haven't done any of it except for motion capture. Blend shapes are a good start, but building onwards from those shapes is how you get to the level you claim to be on. @@NorthwoodsInteractive
“Nowhere near the level of quality it claims to be”? Have you seen AAA games recently?
It’s a pretty decent effort. Not sure what you’ve made yet and when you gonna show us
@@itsdw2323 The keyword there being "recently"; don't take Ubisoft as an example, their games are only expensive to make due to mismanagement lol. AAA quality is stuff like Black Myth: Wukong, or at least Space Marine 2; that is what AAA quality is meant to be. That said, you can easily find one of my old reels if you google my name.
How did u link up the animation of the face to the body?
They are synced up in the Sequencer
Very interesting! Another awesome tech I didn't know about, totally looking forward to more!
What is the mount called that he used to hold the camera to record his face?
It's called the Rokoko Headrig
How do you have access to the Move AI cloud processing platform? I only have the Move One app on my phone, and I can download a .fbx but that's it
Yep, I have Move Pro, can use as many GoPros as I want and can capture up to two people
❤
Bro
1. Does Move AI have any free stuff?
2. Can I do face animation via Android?
3. Any free alternative for motion capture?
4. Can I do face animation on any character, or only a metahuman character?
Move.AI has the Move One app, which isn't free but has some free credits. It uses just one phone camera, and has decent results for what it is.
I do not believe Metahuman Animator works with Android, since it uses the TrueDepth camera on the iPhone to get such accurate face animations.
You can do face animation on characters if they have blendshapes or a face rig, and I believe there is a way to use face animations captured with Metahuman Animator on non-metahuman characters. It involves baking the animation to a different face rig, I think. Look around, I know there are some recent tutorials showing this.
How much did you pay for the metahuman conversion?
They need to add LIDAR to other phones.. all this crApple exclusivity is pissing me off...
The dialogue sync seems off.
Yeah I messed up a few lines
$7,000. wtf!!!!
You need to improve your photography of the footage; it's not just about the technology, the movie and cinema techniques are just as important.
7k annual lmao
As someone who grew up on stage and in movie sets, this tech is impressive but completely devoided of any fun. Sets, costumes, attention, preparation, those were parts of that that were extremely tough but rewarding and fun. Digital is the complete opposite and a bore.
It's too bad you feel that way, I find this kind of filmmaking to be extremely fun. As someone who has been on many shoots of all sizes, both for work and for passion projects, I also love the energy and creative spontaneity of a set. But sometimes you just want to make something that is in your head and it is just not practical because of budget, logistics, etc. The feeling of finding some cool assets that spark an idea for a scene and playing with them in UE is awesome. It's a different kind of creative reward than the synergy of a good film crew on set, but it's rewarding all the same.
@@NorthwoodsInteractive Sure, it's cool for tinkering with it. I do experiment myself as well. I'm thinking more broadly though.
Wow $7,000 for some jittery AI mocap? I'll stick with Xsens
Yeah, idk why my animation turns out so jittery. I am trying to work it out with Move. The reason I have kept the service is because I am able to get the animation smooth enough without too much work. Xsens is great, but their raw capture is still a little jittery; you need the cloud processing to get the full value of that suit, and that is $500 per month minimum, on top of your $3.5k minimum suit, and that only works for one person at a time. Like I said, tradeoffs. If you don't need to do multiple people at the same time, a suit might make more sense. I am going to do a video about the tradeoffs and why I use Move over Xsens, which I do have experience with.
Metapipe's a bag of crap with poor support, unfortunately