Excellent video, absolutely right about the turning too.
If I see another promo video for a new mocap app where all the footage is just star jumps and waving at the camera...
I know, right? It is all too easy to detect those sudden, gesturally "loud" moves.
Give me a system that detects foot placement and turning movements correctly and you have yourself a winner!
Could you please make a walk-through tutorial? I have zero Python experience, and the original tutorial is really confusing to me 🤣
Sounds like something I should do someday!
Thanks for the suggestion :D
Hey there! Great tutorial on freemocap and its functions. I know this one was a bit of an older video, but do you know what formats the mocap data can be exported in? Is it a BVH? I tried looking at the documentation and the GitHub, and I don't see that particular information about export formats.
@@user-beepbopbeep Good question. May I know what specific format you need, and why?
Well, I suppose anything that could work within Blender. BVH is fine, but if not, I am still willing to work with other formats that are usable within Blender, especially since it's a good enough workflow for most of my 3D needs and can be retargeted with Rokoko.
@@user-beepbopbeep Then you will be VERY HAPPY to know that freemocap works DIRECTLY with Blender.
As a matter of fact, it literally OPENS UP BLENDER!!! when the tracking is done, and there is a plug-in that converts it to Rigify's rig!!!
Congratulations, your wish has come true!
I am curious whether the glossiness of your calibration board affected your crappy webcams... there is a shine in your video... but I suppose you did not light it the same way when you were doing the mocap? Just a thought... Thanks for the info!
@@tmlander You are absolutely correct! The lighting setup for the video in front of the camera is not the same as for the mocap, but you already had a sense of that, since you guessed it correctly.
Hope you have some fun with freemocap!
Hi, thanks for this video. I am also looking for alternatives with multiple webcams. Have you tested with the Logitech 922? Also, there is a limit of two webcams per USB controller; otherwise, the third will not work and the others will have their frame rate decreased. On laptops, that is a serious constraint. You'll need at least 30 fps at 640x480 resolution for acceptable results. For three-camera setups, at least one webcam should be placed about 6 ft / 180 cm above the floor.
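For anyone who wants to check the USB-controller limit on their own machine, here is a minimal sketch, assuming OpenCV is installed and the webcams sit at device indices 0 to 2 (adjust for your setup). It opens all the cameras at once and counts the frames each one actually delivers:

```python
# Minimal sketch: open the webcams simultaneously and count delivered frames.
# Assumes OpenCV (pip install opencv-python); device indices are placeholders.
import time
import cv2

INDICES = [0, 1, 2]        # assumed device indices; adjust for your machine
SAMPLE_SECONDS = 5.0

caps = []
for i in INDICES:
    cap = cv2.VideoCapture(i)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)    # the 640x480 floor mentioned above
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    caps.append(cap)

counts = [0] * len(caps)
start = time.time()
while time.time() - start < SAMPLE_SECONDS:
    for n, cap in enumerate(caps):
        ok, _ = cap.read()                    # blocking round-robin read
        if ok:
            counts[n] += 1

for n, cap in enumerate(caps):
    state = "open" if cap.isOpened() else "FAILED TO OPEN (controller full?)"
    print(f"camera {INDICES[n]}: {state}, ~{counts[n] / SAMPLE_SECONDS:.1f} fps")
    cap.release()
```

The blocking round-robin read understates the per-camera rate a little, but a camera throttled by shared USB bandwidth shows up clearly as a much lower count or a failure to open.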
I saw your videos, and it is nice to see someone who is into mocap to this extent too.
I bought the Kinect version 2 for mocap testing using the SDK; it was OK.
I wanted to try iPi Soft, but the price is a tad too much for me.
I was looking at Brekel too, but then freemocap came along and the quality just destroys everything I have seen before.
You are also right about the webcam limit; I would have to use three laptops if I were to use webcams. But I think I will stick with actual cameras, because webcam framerates are not stable. The output video's framerate is stable, but when I sync the clips in the video editor, I realize they skipped frame captures internally, and I just cannot trust those cheap webcams anymore.
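One way to confirm that kind of silent frame skipping: many cheap webcams paper over a dropped capture by repeating the previous frame, so the file keeps a steady framerate while consecutive frames are near-identical. A minimal sketch, assuming OpenCV and NumPy, with "capture.mp4" as a placeholder path and a noise threshold you would need to tune:

```python
# Minimal sketch: flag near-duplicate consecutive frames in a recording.
# Assumes OpenCV + NumPy; "capture.mp4" and the 0.5 threshold are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("capture.mp4")
prev = None
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if prev is not None:
        # mean absolute pixel difference between consecutive frames;
        # a value near zero usually means the recorder repeated a frame
        diff = float(np.mean(cv2.absdiff(frame, prev)))
        if diff < 0.5:   # tune for your camera's sensor noise
            print(f"frame {index}: near-duplicate (likely a dropped capture)")
    prev = frame
    index += 1
cap.release()
```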
You're right, @BracerJack. Price tags nowadays are a limiting factor, and some "AI tools" pricing models are just delusional. Most of these technologies are available as open source, but the setup is not beginner friendly. Also, this mocap business didn't grow as predicted, so we don't have many affordable providers. I'll be testing FreeMoCap with single and multiple cameras very soon and will record a video with my results.
There is another free one, "EasyMocap", which also allows multiple cameras but has a far more complicated setup, not as "easy" as the name suggests: th-cam.com/play/PL1NJ84s5bryvhGJzcCjPMDJiI9KW9uJ-7.html&feature=shared
@@LFA_GM If you are using Discord, you can join mine at discord.gg/AU7Rg6KD so that we can chat about mocap stuff :D
Thanks @@BracerJack. Unfortunately, I don't use Discord, but I'll update my results here. I tested two Kinect 360s on the same computer, and unfortunately my hardware couldn't handle the USB bandwidth requirements.
@@BracerJack Have you ever tried XR Animator? It uses MediaPipe to track the body. Right now it only works with one camera at a time, but it allows you to record motion capture and export it into Blender. I'm curious how well it would work compared to this.
I think you should be able to calibrate your cameras with a smaller chessboard. You would just need to move closer to the camera.
@@kevinwoodrobotics YES, that too, but remember the dilemma:
1: If the board is small, when you move closer to one camera, the other camera will not see it, or the board will appear too small to be of any use to that other camera.
2: Attempting to rectify the problem by moving the cameras closer together removes the parallax that is the sole principle behind being able to locate a point in 3D space.
A bigger board allows the cameras, especially if you only have two, to be placed far apart from each other for maximum positional triangulation in 3D space while the board can still be detected by both cameras at the same time.
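To put a rough number on that trade-off, the textbook rectified-stereo relation (an illustration of the principle, not freemocap's exact solver) ties depth precision directly to the baseline between the cameras:

```latex
% Z: depth of the point, f: focal length in pixels,
% B: baseline (distance between the two cameras),
% d: disparity in pixels, \Delta d: disparity detection error.
Z = \frac{fB}{d},
\qquad
\Delta Z \approx \frac{Z^{2}}{fB}\,\Delta d
```

So for the same one-pixel detection error, halving the baseline B doubles the depth error: that is the parallax you give up by moving the cameras closer together.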
@@BracerJack I see. So they require you to calibrate all the cameras simultaneously instead of independently?
@@kevinwoodrobotics They cross-reference each other to figure out exactly where the cameras and the board are in 3D space.
The more cameras catch a glimpse of the board at the same instant in time, the more certain the positional solution is.
I believe your next question might be "So that means the videos need to be all synced up then?"
The answer to that will be: "Yes".
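For a sense of what that cross-referencing looks like, here is a hedged sketch of a two-camera version in plain OpenCV (freemocap's own calibration pipeline differs; the file names, the 9x6 board, and the 25 mm square size are placeholder assumptions). Only the instants where both cameras find the board contribute, which is exactly why the recordings have to be synced:

```python
# Hedged sketch of synced two-camera calibration in plain OpenCV.
# Assumptions: "cam_a.mp4"/"cam_b.mp4" are frame-synced recordings,
# and the board has 9x6 inner corners with 25 mm squares.
import cv2
import numpy as np

PATTERN = (9, 6)
SQUARE_MM = 25.0

# the board's own 3D points, lying in its plane (z = 0)
board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

cap_a = cv2.VideoCapture("cam_a.mp4")
cap_b = cv2.VideoCapture("cam_b.mp4")
obj_pts, pts_a, pts_b = [], [], []
size = None
while True:
    ok_a, img_a = cap_a.read()
    ok_b, img_b = cap_b.read()
    if not (ok_a and ok_b):
        break
    size = (img_a.shape[1], img_a.shape[0])
    found_a, corners_a = cv2.findChessboardCorners(img_a, PATTERN)
    found_b, corners_b = cv2.findChessboardCorners(img_b, PATTERN)
    if found_a and found_b:          # keep only instants BOTH cameras saw
        obj_pts.append(board)
        pts_a.append(corners_a)
        pts_b.append(corners_b)

# each camera's intrinsics first, then the stereo solve, which recovers
# R and T: the pose of camera B relative to camera A in real units
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, pts_a, pts_b, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print(f"camera B sits about {np.linalg.norm(T):.0f} mm from camera A")
```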
@@BracerJack ok that makes sense now! Thanks!
@@kevinwoodrobotics Glad I am able to help you! Be well on your journey.
Do you have a walk-through of the use of freemocap from capture to Blender?
Not at the moment, maybe I will do that someday :D
I've set everything up and have exported my animation to Blender, but all I have is a walking skeleton. How do I transfer my animation to the human mesh I created?
Well first of all, good job 👍
There is a Blender add-on created for this purpose by the freemocap community. Check their GitHub and/or Discord to get it; it automates the process in a few clicks.
I am about to try this to make a 3D animated model of myself just for shadow and reflection casting. I don't need it to be super accurate.
Does it work if the subject is sitting down? I need it to capture me seated, playing guitar: not my hands, but the rest of my upper body. Well, and I also need to capture the guitar neck. Encouraging to watch your video!
I don't think the AI would work if....
Hold on let me think about how to best explain this....
First and foremost, understand that AI motion capture has difficulty detecting the body's pose if the outline isn't very strong.
You sitting down and not really moving very much is probably going to kill the detection; to make matters worse, you are about to bring in a foreign object that the A.I. will then have to wonder about: is it your body or not?
This is a brave experiment, I wish you luck 😀
Hi, I would think it should work. Am going to try today. Even if there are periods of no motion, that shouldn't cause it to fail, since camera trackers work even when there is no motion, as long as there are good initial frames of trackable points. So if I am sitting with two cameras, one on each side of me, and my upper body is moving at first, that should be enough to generate data and set the tracking in motion.

Interesting how the guitar will affect things, but I would think that too would not be a problem. AI is just large-dataset statistics, basically regression on high-dimensional datasets, in other words high-dimensional curve fitting. So if there are good initial frames, the mocap should understand where the arm joints are, and the nature of the training data should be such that the system would not have, say, arm joints suddenly jumping and being mapped to the guitar, since such jumps would be absent from the training data. And if this system works like next-token prediction (the way OpenGPT does), then the likelihood calculations would be unlikely to choose a sudden deformation of a joint as the next token. I suspect the devs have some sort of constraint system so that the body geometry wouldn't wildly deform from frame to frame. I joined their Discord channel; will let them know how this goes.

Also, I have been playing around with the great camera-tracking software SynthEyes. It has geometric hierarchical tracking, where it can track an object as a separate hierarchy from another object (like a person holding the object). These FreeMoCap devs and the SynthEyes dev should collaborate! The SynthEyes manual describes a multi-camera motion-capture function it has, but I have yet to see an example of anyone using it. But if I get this working, then I won't bother trying to use SynthEyes for motion capture. @@BracerJack
@@BrianHuether I wish I were there to conduct the experiment with you; this is interesting. I actually want to know the result.
@@BracerJack will report back here! Going to print out that calibration image and follow the calibration steps.
@@BrianHuether You... you use SynthEyes for BODY MOTION CAPTURE?!!!
Let me know when this is a real app that doesn't require command line installation and reliance on Python. Couldn't get it up and running on my Mac after the installation.
I am sorry to hear that, maybe they will create a precompiled binary someday.
Question, have you also tried walking towards the camera?
It should be fine AS LONG AS... more than one camera is capturing your movement.
What is the output in? Is it the application itself? Is it simple to get it into Blender? It would be good to have a wireframe background and floor to get an idea of how much drift you're getting. Apart from getting up from the crouch, it was pretty damned good.
1: The output is a Blender file; there is even a Blender add-on that auto-converts the file into a Rigify Blender file.
2: The sliding of the feet has a lot of room for improvement; hopefully, having another camera will help.
3: Yeah, maybe when the body is squeezed like that, the outline-recognition part of the A.I. simply gives up to some extent, but the fact that it works at all... wow.
Hi, where can I get that large background image?
That image is included in the GitHub repo :D
This is very helpful and informative. Thanks for the video.
You are welcome Sal Elder :D
How do I import the animation into Blender, please?
There is a plugin for it; it comes with the mocap program. You can also join their Discord to get the latest version of the add-on.
Great Job!
Thank you Ryan.
The application seems to be revolutionary, but it is hard to install...
I also wish there were a one-click install exe, but well, until then :D
Nice demo. Where can I get that add-on?
Go to their Discord; the person who makes the add-on is there, and you can get the latest version from them :D
Does this work for detecting two people?
I have no idea, I have never tested it with two people :D
Awesome, but can it dance? 😄
It can...when I can....someday ;p
May I know what cameras you used?
I used the Canon G7 X Mark II and the Sony ZV-1.
@@BracerJack OK, thanks... Great video. I am still having problems moving the data to Blender. I will try the Kinect with iPi Soft first.
@@raymondwood5496 Let me guess: when the program was attempting to export to Blender, it got stuck, right? That can be resolved. Back up and then delete your Blender configuration folder; that will do it.
This happened to me as well; once you have cleared the folder, you are good!
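For reference, a minimal sketch of that backup-then-delete fix, using Blender's standard per-user config locations (the "4.0" version folder is a placeholder assumption; check which version folder you actually have):

```python
# Hedged sketch of the backup-then-delete fix. The paths are Blender's
# standard per-user config locations; VERSION is a placeholder assumption.
import platform
import shutil
from pathlib import Path

VERSION = "4.0"   # placeholder: use the version folder you actually have
system = platform.system()
if system == "Windows":
    config = Path.home() / "AppData/Roaming/Blender Foundation/Blender" / VERSION
elif system == "Darwin":   # macOS
    config = Path.home() / "Library/Application Support/Blender" / VERSION
else:                      # Linux and friends
    config = Path.home() / ".config/blender" / VERSION

if config.exists():
    backup = config.with_name(config.name + "_backup")
    shutil.copytree(config, backup, dirs_exist_ok=True)  # back up first
    shutil.rmtree(config)                                # then clear it
    print(f"cleared {config}; backup kept at {backup}")
else:
    print(f"no Blender config folder found at {config}")
```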
@@BracerJack Thanks for the tips. Will try again tonight.
@@raymondwood5496 I never got the chance to try iPi Soft; the multi-cam option is really out of my budget.
Good luck!
No, it does not! It is totally horrible to use! It needs a complete UI with an FBX export option!
Try discussing your issue with the creator on Discord; he is very helpful :-)
The command-line interface is cool, though.
@@BracerJack I did already. He avoids any improvements and just points to the existing solution.
@@xyzonox9876 If you have only one animation to do, yeah, maybe. But it's 2024! And this is not a solution, especially if you have to manage hundreds of animations, select the ranges that need to be saved, organize and rewatch them quickly, maybe redo the capture, and so on!
@@The-Filter yikes.