Is this new Hand Tracking an improvement over visionOS 1? Is it better than Meta Quest hand tracking?
looks better. can you compare it to meta quest 3?
Thanks for the video :)
you're welcome. subscribe to help me get to 1000 and let me know what else you all want to see :)
Looks great
Should be better for gaming
Keep it up love it
TYVM!
swingbug!
Which api is this?
@@gibsonliketheguitar5507 my own test app I coded
@@hanleyleung does the visionOS 2 use a different api?
@@gibsonliketheguitar5507 if you are using AnchorEntities, it's just an extra argument. if you are getting the hand tracking data yourself, you have to call a different function to get the predicted hand tracking pose and pass in the timestamp you want it to predict to, every frame. which way have you been doing it?
@@hanleyleung thank you for the amazing review! What update function should I use if I’m not using anchor entities?
@@eliegebran4849 the API to get the hand pose can now take a timestamp, and it'll predict the pose at that time. I coded this test with Anchor Entities. I'll make another video showing the code with both ways to do it soon. what are you working on?
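For anyone following along, here's a rough sketch of the two ways mentioned above. This assumes visionOS 2's ARKit/RealityKit APIs (the `trackingMode: .predicted` argument on `AnchorEntity` and `HandTrackingProvider.handAnchors(at:)`) — double-check the exact names against Apple's docs:

```swift
import ARKit
import RealityKit

// Way 1: AnchorEntity — the "extra argument" is the tracking mode.
// .predicted asks RealityKit to anchor to the predicted hand pose.
let palm = AnchorEntity(.hand(.left, location: .palm), trackingMode: .predicted)

// Way 2: raw hand tracking data via HandTrackingProvider.
// After running an ARKitSession with the provider, call this every frame,
// passing the time you want the pose predicted for (e.g. the frame's
// expected presentation time).
let provider = HandTrackingProvider()

func updateHands(at renderTime: TimeInterval) {
    // handAnchors(at:) returns hand anchors predicted for the given
    // timestamp, rather than the last sensed pose.
    let (left, right) = provider.handAnchors(at: renderTime)
    if let left {
        // use left.originFromAnchorTransform to place your content
    }
    if let right {
        // same for the right hand
    }
}
```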
Music is too loud, taking away from your voiceover.
yeah thanks it's a bad habit of mine :) will tone it down