there has to be a simpler way of doing this
i'm 100% sure there is
i have the feeling this tutorial is a bit of a troll :))) still interesting
Jonathan L well yes... buuuuuut this realm is set to hardmode
There is, and it's called object tracking. The issue is that you need at least 8 trackers on your object, which is ridiculous. This is really just a tutorial on manual object tracking. Technically you could try speeding this up by running trackers on the 2 markers, using their position data to create a simple 1-bone rig that moves the sphere, and changing scale with an expression and some drivers (a rough sketch of that follows this thread).
@@CGMatter Would it be possible/realistic to use three tracking dots to track position, scale and rotation?
I used a single dot on the ball and used the motion tracking tab to get a single tracking point on that dot, added a sphere, set the origin of the sphere to the empty from the track, and boom, location has been taken care of. Now I just have to manually set the rotation. It works really well tho
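A minimal sketch of the driver idea from CGMatter's reply above, assuming empties named "Track.L" and "Track.R" (hooked to the two marker tracks) and a "Sphere" object; all the names and the calibration factor are assumptions, not from the video:

```python
import bpy

# Hedged sketch, not CGMatter's actual setup: drive the sphere's scale
# from the distance between two empties that follow the ball's markers.
sphere = bpy.data.objects["Sphere"]

for axis in range(3):                      # X, Y, Z scale channels
    fcu = sphere.driver_add("scale", axis)
    drv = fcu.driver
    drv.type = 'SCRIPTED'
    var = drv.variables.new()
    var.name = "dist"
    var.type = 'LOC_DIFF'                  # value = distance between the two targets
    var.targets[0].id = bpy.data.objects["Track.L"]
    var.targets[1].id = bpy.data.objects["Track.R"]
    drv.expression = "dist * 10.0"         # calibration factor is a guess

# As the markers drift apart or together on screen, the driven scale
# approximates the ball moving toward or away from the camera.
```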
**THAT WAS... TOO INTENSE FOR A TUTORIAL**
the dude speaks like those AI bots on some youtube channels
>sponsored by squarespace
>shows a sphere
Salvato 😂😂😂
😂
default square
Spherespace
spherespace
tracking manually like a medieval peasant?! 😲
It's a good skill to know. In real production you don't always get footage with convenient tracking markers, so you have to wing it and do it the hard way.
yes.
Very ew
6 days late
🙄🙄🙄🙄🙄
I always figured you more of a Raid: Shadow Legends man rather than a Squarespace man.
Definitely Not Dan hmm..
Nah I’m more of a standing up man
Or maybe a Nord vpn
what about honey
More like bore ragnarok
Wow, 6 minutes! That's a full on documentary!
This is a nice quick overview of the process, particularly the compositing, but it suffers from one common error: you should not be changing scale unless the photographed object actually changed scale in the shot. It seems convenient, but you should always try to match the REALITY of the scene. The main reason is that you cannot properly match lighting, cast shadows, and environmental interaction if your object is not moving accurately through space, at the proper scale. Sure, it's more difficult to account for 3 more axes of motion, but that is the nature of the problem.

Also, you are correct to break the sequence into large intervals, but you should find the extremes of position and key those first. Then break the intervals into powers of 2, such as starting at 64 or 32, depending on the length of the shot. Then continue breaking the intervals down by a factor of 2 until you are down to a single frame between keys.

It's common to build a rig with the Z-depth axis oriented to the camera - that will account for your apparent change in scale. Then you can choose a point, track that, and lock the object's local X,Y position to that point. Then you are left with rotations, which are oriented around the single tracked point. The Z-depth axis passes through the tracked point as well.
In particular, a linear movement in the Z direction will require a non-linear movement along the "scale axis" (see the sketch at the end of this thread). Using scale can be considered quick and dirty for shots without much (or particularly erratic) depth change and with no intersection with other objects.
You would be right if he needed to re-render lighting to match the original scene, but his method more or less overlays the original lighting onto the new footage
He's not lighting
@@cynthetic4896 You need light to render. It's just the wrong way to do it.
@@aliensoup2420 the lighting comes from the scene and compositing, if CGmatter used lamps to approximate the lighting then the movement would be necessary
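To make the non-linearity point above concrete, here is a tiny numeric sketch under a pinhole-camera assumption (apparent size proportional to 1/Z); all numbers are made up:

```python
# Constant-speed motion in depth produces a hyperbolic, not linear,
# apparent-scale curve. Arbitrary example numbers throughout.
focal, z0, speed = 35.0, 2.0, 0.5

for frame in range(5):
    z = z0 + speed * frame                 # linear motion away from camera
    print(frame, round(focal / z, 3))      # 17.5, 14.0, 11.667, 10.0, 8.75
```

The uneven steps are why linearly interpolated scale keys drift against real depth motion.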
Senior roto artist here. It's easier if you analyse the footage first, figure out your key poses and mark those down, then fill in the ease-ins and ease-outs between your key poses. Then you've pretty much got a full track without having to go by 20s or 10s. It's much easier if you look at roto/matchmoving from an animator's perspective, as animation is and always will be a study of physics and movement.
I'm sure you knew that but you prob didn't want to spend 10 mins explaining how animation works LOL.
Also it feels bad when most big studios already have matchmoving software xDDD we're spoiled.
Heavy Metal this animation isn't smooth enough to define key positions
I wanted to say this too. Also, in Blender you've got the animation curves, which are easy to edit; with a few keyframes you can match the motion of your complete footage.
Can you do a tutorial or a forum or something 😅
Blender has its own planar tracker and camera solver
how crap it is is up for debate, but still
@@aronseptianto8142 try pftrack or syntheyes
pftrack has a 'better' interface and solves easily for the easy ones, but it's total shit when you have a muddy, shaky shot lol
I got lost at “Open up blender” now I’m toasting a frog. Send help
Free the frog
Professional compositor here, so I've done my fair share of roto in my time. Keying every 20 or so frames is great. Even better is to key the start frame, then the end frame, then the middle, and keep subdividing that way.
Even better than that, and what I do on every rotoshape or anything I'm tracking manually, is do the first and last frames, and then keyframe the key-poses. Say the ball starts at the bottom of frame, goes up, and then down again. You would put a key at the extents of each of those motions. Depending on the motion, I will then usually put a key a few frames after/before those initial keyframes, to nudge the object forward a bit while it is still slowly accelerating to its next pose. That usually captures the ease in/ease out that real natural motion has. Then I will start to subdivide the keys, and get down to frame by frame level if need be.
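For anyone who wants to automate the ordering described above, here is a small sketch of a binary-subdivision frame generator (plain Python, nothing Blender-specific assumed):

```python
def subdivision_order(first, last):
    """Yield frames in roto order: endpoints first, then midpoints
    of ever-smaller intervals until keys are one frame apart."""
    yield first
    yield last
    intervals = [(first, last)]
    while intervals:
        nxt = []
        for a, b in intervals:
            mid = (a + b) // 2
            if mid not in (a, b):
                yield mid
                nxt += [(a, mid), (mid, b)]
        intervals = nxt

print(list(subdivision_order(0, 64)))  # 0, 64, 32, 16, 48, 8, 24, 40, 56, ...
```

In practice you would still hand-place the key-pose frames first, as described above, and only use an ordering like this for the fill-in passes.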
The music was cool but the dude kept talking and talking in the background.
Ok, i will stick to microsoft paint after seeing this
lmao.
lmao.
lmao.
lmao.
oaml
Wow, this music is way too intense for a technically oriented-
*"This is some of my favorite background music"*
I can see we're growing apart, CGMatter... we're definitely growing apart.
No, I like it. It makes it suspenseful. And much more interesting.
I think this is the kind of thing that separates the CGMatter videos from the Default Cube videos. CGMatter is meant to create intrigue into various things you can do in Blender, Default Cube takes its time and breaks things down. At least that's my take on it.
@@Wander4P Now he's doing ASMR shit and I hate it
what's the name of the music?
Thank you for the comment, Penny Gadget.
We can animate Pokeballs, now!
YES
That's a lot more work than I had imagined! Btw have you made a tutorial on object tracking?
Shutter Authority, a 2-freaking-million-subscriber channel, comments and gets no likes or replies.
This shutter guy reminds me of a default cube
I think the "Go 20 frames" then subdivide the frame intervals is brilliant...
Seriously, how can he talk for like 4 minutes non-stop without breathing?
Something's wrong, I can feel it!
All women can do that.
@@REDxFROG I like how your comment somehow manages to be both insulting masculinity and insulting women through casual sexism.
@@flytrapYTP it's insulting when women have the skills and I state this as a fact?🤓
Cuts.
Why is this in my recommended? I don't even know what this means
Funny, I did something very similar for a short film I'm working on atm. I have one tip for you: you can use one tracker inside the motion tracking to match the position of the sphere, then import a tracked empty, and now you can keyframe the motion of the object you want to track in, as in this video. The thing I did another way was to keyframe the scale afterwards, which could also be tracked inside the motion tracking system. This pipeline made the location tracking very easy. (A rough script version is sketched below.)
I also got a nice moon mapping, check it out on my channel, you can have it for playing around, if you want.
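A rough script version of the pipeline in the comment above; it's a sketch, assuming a clip open in the tracking editor with a 2D track named "Track" and an object named "Sphere" (both hypothetical names):

```python
import bpy

obj = bpy.data.objects["Sphere"]

# Pin the sphere to the single 2D marker with a Follow Track constraint;
# this handles screen-space location, leaving rotation (and scale, if you
# go that route) to manual keyframes as in the video.
con = obj.constraints.new('FOLLOW_TRACK')
con.use_active_clip = True                 # the clip loaded in the tracking editor
con.track = "Track"                        # name of the 2D marker track
con.camera = bpy.context.scene.camera
con.use_3d_position = False                # follow the 2D marker, not a solved 3D bundle
```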
I don't even video edit but I find this style of beating stuff down people's throat very entertaining.
the first blender tutorial i followed and actually finished
wow something I'll never use, but it's finally in a nice compact video so I can save time. Revolutionary
I don't know what half of this means, but I've been recommended this at least four times now. So, I'm going to nod and pretend I understand, because I have to respect how much work this video took.
There's a video called The Octo-Bouncer where the ball position is tracked by a camera; basically the only thing left is to add rotation tracking
Me with my potato PC:
I'ma try doing this.
Me one mistake later:
I shouldn't have done that!
I should not have done that!!!
Best beginner tutorial for blender
I don't even have a sphere
I don't know for what purpose you'd need to track a sphere, but good job. Time-consuming, but a really amazing result!
Okay, another approach: take your entire image sequence and put it through a Hough transform to detect circular objects. You can even do this from within Blender by running a Python script that imports OpenCV and have it create/adapt a sphere that matches the state of the sphere in your sequence.
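A minimal sketch of that idea, assuming OpenCV has been installed into Blender's bundled Python (e.g. via pip) and the frames are files like ball_0001.png (hypothetical names; the Hough parameters are guesses that need tuning per shot):

```python
import cv2

def detect_ball(path):
    """Return (x, y, radius) in pixels for the strongest circle, or None."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)           # smooth noise before Hough
    circles = cv2.HoughCircles(
        img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
        param1=100, param2=30, minRadius=10, maxRadius=200)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return float(x), float(y), float(r)

# (x, y) would drive the sphere's screen position and r its apparent size,
# one keyframe per frame of the sequence.
```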
These are more than just tutorials, I watch all of your videos even though I can't make most of them
I'm watching your video at 3AM because of your commentary, I don't understand a single thing about blender but you make it very interesting
what if instead of scaling the sphere, you set the 3d cursor to the camera position, set the 3d cursor as the origin of transformations, and set it to only change location? Then you would have the sphere moving in 3d space at least.
Note: if your ball rotates too far for two dots to account for, don't simply add more dots. Add more dots *of different colors*. That way you'll know at a glance in what orientation relative to the starting position you are in, which is important because your eye will constantly fool you regarding how far a sphere has rotated.
Also, using shortcuts will save you time... just Alt + mouse scroll wheel or the arrow keys to scrub between frames, and hit the record button to automatically insert keyframes
Scaling instead of moving the ball's position in depth during the tracking process is a one-way door; you'll have to redo the work if you need Z-depth or simulation later on.
the compositing trick was really nice!
I just want a really long compilation of lots of amazing tracked 3D animations-
I Refuse To Believe That You Really Absolutely And Utterly Undeniably Have To Do The Tracking Manually!
Please do the explosion integration tutorial 🙏
Great tutorial, cool to see another tracking technique, thanks as usual, keep up all the awesome work you're doing! Does CGMatter? Oh yeah, it does
Now show us how to track camera footage perfectly aligned into a 3D LiDAR Scan of the scene in the footage (and render the two together with some transparency or something). I can provide data if you want...?
I thought it would be a whole bunch of masking, but this is much smarter. Thanks!
This was exactly what I needed before I knew I'd need it.
love your tracking video
which is rare on youtube for some reason
Bro you are not a cg artist, you are fully a magician!
This is just everyday bread for this mastermind, I wasn't sure what a sphere was before seeing this
love the split screen.
You’re amazing at editing, my guy!
I like how he purposefully doesn't want anyone to be able to follow along
I wish I had the equipment and skill to do this. One day I'll try, though. Hopefully it'll go well.
compositing node network be like:➡️⬇️⤴️↩️↙️⬇️↩️⬅️↗️⬅️⬅️
This is the guy who made Jesus walk on water
Amazing and entertaining video! Thank you. #blender #3dtracking #vfx #matchmoving ps: yes. The background track is awesome.
I'd better watch this on the Default Cube channel. Man, about 80 percent of this stuff went straight over my head.
Haha, overwhelmed? Naw! Bring it on! Thanks as always! Awesome tutorial.
if squarespace is the sponsor it should have been "default cube tracking"
This guy's comedic jokes make motion tracking more interesting
I wanted to cry watching this tutorial
I think it requires at least 4 points to determine orientation, but also the distance from the camera. I tried something similar, but after linking empties to a track, all the empties are in a single plane. Now, an important question: is there a way (maybe with Python programming) to move objects or empties based on the space between them? For example, if the object is closer, the empties are more spread apart; if it's far, then the empties are closer to each other. Four markers (and four empties) for four corners of the image, then some mathematical approach to reconstruct depth data? If the distances between the original points are known, then we should be able to use this data to reconstruct the distance between each point and the camera, not only from each to the other. (A rough sketch of the depth-from-spacing idea follows the reply below.)
What you are describing is auto-tracking software that already exists in Blender. Yes, if you have sufficient data, use an auto-tracking algorithm - but the point of this video is how to do something when you don't have sufficient data to use auto-tracking.
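For what it's worth, the depth-from-spacing part of the question is tractable under a pinhole-camera assumption: two markers a known real distance D apart (roughly parallel to the image plane) at pixel distance d sit at depth Z ≈ f_px · D / d. A tiny sketch with made-up numbers:

```python
from math import hypot

def depth_from_markers(p1, p2, real_dist, focal_px):
    """p1, p2: (x, y) pixel positions of two tracked markers."""
    d = hypot(p2[0] - p1[0], p2[1] - p1[1])
    return focal_px * real_dist / d        # pinhole model: Z = f * D / d

# Markers 0.05 m apart, 160 px apart on screen, 1500 px focal length:
print(depth_from_markers((400, 300), (560, 300), 0.05, 1500))  # ~0.47 m
```

It breaks down as the marker pair tilts away from the image plane, which is exactly why proper solvers want more points.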
Glad to see your face back!
i have a feeling this is gonna be recommended to a lot of random people very soon.
man i was not expecting that trickshot
I legit thought this was Adam Ragusea when the SquareSpace ad appeared
thank you for sharing the compositing process, i always feel alone when it's time to render a vfx :)
Wait you're saying the real meatballs were fake?!?! O:
So next tutorial you'll make an explosion using the default cube?
Me, at 4am who doesn’t even have blender and needs to go to sleep: oooOOOoh, sphere tracking???
But by just scaling the balls and not tracking them in 3D, you are limited in the interaction between them, aren't you? So if you use metaballs, put them together, and hold one ball behind the other in the video, what will happen is that they will melt together even though they are fairly far apart depth-wise in the video, since in Blender they are just a big and a small ball next to each other.
To all you "Ugh, couldn't we not do this manually" out there, note that CGMatter is an honest hard-working individual (who happens to make blender memes on the daily) and will not tolerate your laziness. You can go to channels like EWan Lazubert if you want to be lazy.
One after effects user disliked this video
ok
imagine paying for after effects
I've been doing a lot of experimentation with the new fluid sim explosions, it sure would be nice to get a tutorial on it.
Thanks. I am now a professional at whatever this is.
Is this one of those videos that will show up in my recommended 6 years later
Pair a vertex on the ball to an empty, track the marker on the ball, parent the empty to the tracker?
man can you give me a link for the background music
?
th-cam.com/video/Ax9VSC2h04Y/w-d-xo.html
PFTrack has a geometry tracker, and Cinema 4D has an object tracker
100k sub soon. Consistent uploads. 👏👏
spam
Wow, blew my mind
Majestic, but.. I guess I'll need a slower and a bit more detailed explanation) (yeah, newbie here ;) )
Yo I've been trying to work this exact thing out for YEARS in AE to no avail. Looks like I'm finally making the move to Blender.
I like that it says "was" at 1:08
6 minutes for cgmatter is like 12 hours for another video
Why would you use scale instead of depth to track? Using scale, you won’t be able to add proper shadows and shading, and your texture mapping will be inaccurate because how much of the front of the sphere you see vs. the peripheries changes with depth. In more extreme cases you will not be able to track rotation properly with this workflow. Is there a reason why you didn’t use software tracking, or is it just not available on your editing platform? Manual tracking is not great, not just because of all the work, but because it jitters too much most of the time.
I thought this was Default Cube since he was talking so slowly
Would you please explain how you mask your finger 🤔
Quick question: couldn't you object-track 3 points on the ball to create a triangle object whose rotation you can copy to the sphere object?
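Plausibly, yes; here is a sketch of how the triangle's orientation could be computed from three tracked points (generic numpy, not tied to Blender's tracker; it assumes the three points stay rigid on the ball and you somehow have 3D positions for them):

```python
import numpy as np

def rotation_from_triangle(p0, p1, p2):
    """Build a 3x3 rotation matrix from three non-collinear points."""
    x = p1 - p0
    x = x / np.linalg.norm(x)              # first edge -> X axis
    n = np.cross(x, p2 - p0)
    n = n / np.linalg.norm(n)              # triangle normal -> Z axis
    y = np.cross(n, x)                     # completes the right-handed basis
    return np.column_stack((x, y, n))

# The per-frame rotation to copy onto the sphere is the change of basis
# relative to the first frame: R_rel = R_t @ R_0.T
```

The catch in practice is getting 3D positions for the three dots in the first place; with only 2D tracks they all land in one plane, as noted elsewhere in the thread.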
that ending made me giggle
I feel nothing but squarespace
Maybe After Effects will be better for rotoscoping the fingers?
Dear friend! Show how to merge audio with an image sequence in Blender at the end of the main CG work please )
CGMatter: *0:20*
Me: *_*ElijahWoodLaugh.mp3*_*
i did not breathe through the whole video
Where are the controls for managing my life?
@liberP lovPrimeNumbers :)))
we need that explosion tutorial!
ant needed this a week ago
How do you mask out the ball so the finger stays in front of it?
u are right. still some of the dopest background music around #Skaler
this guy, so big brain
wow. easier than I thought
Godzilla had a seizure listening to this and died
change the title to something related to compositing in Blender
awesome compositing