Hey, I'm one of the developers of XRI! Really happy to see you talk about our package in this video.
A few thoughts and answers to your questions:
- Why isn't XR Hands a part of XRI? XRI is meant to be an optional package that lets devs build rich interactions for their XR apps, but it's not a core input package, like the OpenXR package. The idea is that whether or not you use XRI, you should still be able to use hand tracking.
- Input readers: your example showed referencing interaction events, which was already supported before. The new readers are meant to replace the action-based controller, where you had to declare up front every input action you'd ever need, and the interactors had to read their transform and input values from that one component. That was problematic because, notably with hands, you'd often have different poses for different interactors, and you rarely needed all input actions for all interactors. The new setup is a lot more modular (see the first sketch after this list).
- Near-Far Interactor: a huge benefit of the new design is that casting is modular, so it's easy to implement new strategies for getting valid targets without having to override the whole interactor. We also pulled attach transform manipulation out into its own piece and added a push/pull gesture. It should be easier than ever to implement custom behavior, like, say, the gravity glove mechanics from Half-Life: Alyx, just by making a custom attach controller (second sketch below).
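Roughly what the reader pattern looks like in code. Typing from memory here, so double-check the reader type and method names (XRInputButtonReader, XRInputValueReader) against your installed package version; the component and field names are just for illustration:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit.Inputs.Readers;

// Sketch of the XRI 3.0 reader pattern: each behaviour declares only the
// inputs it actually needs, instead of pulling everything from one
// action-based controller.
public class GrabSqueezeLogger : MonoBehaviour
{
    // Bind these in the inspector to input actions (or another component's input).
    [SerializeField] XRInputButtonReader m_GrabInput;
    [SerializeField] XRInputValueReader<float> m_SqueezeInput;

    void OnEnable()
    {
        // If the readers use embedded ("direct") actions, those must be enabled.
        m_GrabInput?.EnableDirectActionIfModeUsed();
        m_SqueezeInput?.EnableDirectActionIfModeUsed();
    }

    void OnDisable()
    {
        m_GrabInput?.DisableDirectActionIfModeUsed();
        m_SqueezeInput?.DisableDirectActionIfModeUsed();
    }

    void Update()
    {
        if (m_GrabInput != null && m_GrabInput.ReadWasPerformedThisFrame())
            Debug.Log($"Grab pressed, squeeze = {m_SqueezeInput.ReadValue():0.00}");
    }
}
```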
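And here's the gravity-glove idea sketched standalone rather than against the actual attach controller interface, so I'm not misquoting its members from memory. The real implementation would plug into the Near-Far Interactor as a custom attach controller, but the core mechanic is just this: detect a flick of the hand back toward the body, then toss the highlighted object at the hand.

```csharp
using UnityEngine;

// Self-contained stand-in for a gravity-glove style pull. All field names
// and thresholds are illustrative.
public class GravityPullSketch : MonoBehaviour
{
    [SerializeField] Transform m_AttachPoint;   // e.g. the palm of the hand
    [SerializeField] Rigidbody m_Target;        // the highlighted far object
    [SerializeField] float m_FlickSpeed = 1.5f; // m/s toward the hand to trigger
    [SerializeField] float m_ArcHeight = 0.5f;  // upward boost for a nice arc

    Vector3 m_LastHandPos;

    void Start() => m_LastHandPos = m_AttachPoint.position;

    void Update()
    {
        Vector3 handVelocity = (m_AttachPoint.position - m_LastHandPos) / Time.deltaTime;
        m_LastHandPos = m_AttachPoint.position;

        // A flick is a quick pull of the hand away from the object, toward the body.
        Vector3 toHand = (m_AttachPoint.position - m_Target.position).normalized;
        if (Vector3.Dot(handVelocity, toHand) > m_FlickSpeed)
        {
            // Simple ballistic toss: aim at the hand, add a vertical arc.
            Vector3 toss = toHand * handVelocity.magnitude + Vector3.up * m_ArcHeight;
            m_Target.linearVelocity = toss; // use .velocity on pre-Unity 6 physics
        }
    }
}
```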
For extra info btw, we finally published our GDC talks online.
Here's a link to the playlist, including my talk about XRI 3.0.
th-cam.com/play/PLFg9suyZ1OnI2HreqpoBoXsrbpfDZES7L.html
I'll also be doing a livestream tomorrow, for anyone who wants to ask questions and watch me dive into XRI 3.0
th-cam.com/video/ZcXnoKTNsIg/w-d-xo.html
Nice ! Thank you for your work !
banger
Haha, just wanted to say those exact same things. Except the last point... the near-far interactors are actually cool. I've been missing this feature for quite some time already. Thanks 👌
Eric - appreciate so much that you're basically everywhere that the work appears to try and help out devs. So awesome - thank you!
Thanks for all your work Eric!
Adaptive probe volumes look sick, and I realized I'm not using nearly enough light probes. Thanks for the video!
I'm super excited for them. I'm sure there'll be issues starting out, but the potential for quicker iteration will be fantastic. Thanks for watching!!!
"Four-vation level". 10/10 pronunciation. Lol 😂
four-vetted
When all else fails just add an imaginary R and roll with it!
Up there with Cumberbatch's "Pangwang"
That foveation pronunciation was something else
I can't wait for your updated tutorial. After it, my project will stop yelling at me.
Any chance of some AR tutorials? I always find myself coming back to these tutorials. Always the best.
Hey, thanks for the hard work.
Love your content my dude, hands down one of the best unity XR explainers in the game.
Thank you for the kind words! I try my best as a little shrimp can 🍤 🍤 🍤
Thanks for this video, but can the Quest 3 handle APV, the GPU Resident Drawer, or GPU Occlusion Culling?
I am looking forward to it. The current interaction setup with inheritance is a bit too complex for my taste. If you wanted different behaviour, you needed multiple objects and had to turn a lot of stuff off and on to get the desired behaviour (at least that's how I got it to work..).
I hope it is getting better and more structured, so that making adjustments gets easier.
XR Hands is the OpenXR backend tech for hand tracking that any XR toolkit can use, not just Unity's XRIT. Keeping them separate is good.
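For anyone curious what using XR Hands directly, with no XRI at all, looks like: here's a minimal sketch. Type and member names (XRHandSubsystem, XRHandJointID) are from the XR Hands package as I remember them, so double-check against your installed version.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Reads joint poses straight from the XR Hands subsystem -- the point of
// keeping XR Hands separate is that this works whether or not XRI is used.
public class IndexTipLogger : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        if (m_Subsystem == null || !m_Subsystem.running)
        {
            // Grab the first running hand subsystem, if any.
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            m_Subsystem = subsystems.Count > 0 ? subsystems[0] : null;
            if (m_Subsystem == null)
                return;
        }

        var joint = m_Subsystem.rightHand.GetJoint(XRHandJointID.IndexTip);
        if (joint.TryGetPose(out Pose pose))
            Debug.Log($"Right index tip at {pose.position}");
    }
}
```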
This is great to know! Thank you so much for saying this.
Do you know if the XR Toolkit uses Meta's MultiModal, allowing a game to be created with both Hand Tracking AND controllers?
Unfortunately not. We mainly target OpenXR, and multimodal requires the Meta All-in-One SDK to use in Unity.
What is the colorful space game in the intro?
Thanks a lot!! Great help!! 😊
Could you do a video on game map creation??
What's the game you showed at 0:23?
This is from Unity's free URP sample project. You can download it and test this scene out!
@@FistFullofShrimp Wow it's so impressive looking I thought it was from some indie VR game I never heard of
🤣I keep wanting to & trying to use U6, but... it's new, so about 80% of my assets don't want to play nice with things. Ah well, it all looks promising at least.
Oh, and there's no R in foveated: "foe-vee-ate-ed".
Thanks for the update! 👏
It really does look like the new engine will be a ton of fun! I've also accepted my internet sin of mispronouncing words. The comments are roasting me 😆. Thanks for the correction!!!
@@FistFullofShrimp ROTFL! Nothing but love, my friend. You've got this ;)
I upgraded my existing project to XRI3 and everything stopped working. Now I'm gonna rebuild a new code foundation based on Unity 6 + XRI3 for the next installment of Cactus Cowboy.
I'm already sold on the name cactus cowboy 🤠 🌵
@@FistFullofShrimp The existing games still use device-based input, hence it's such a big task right now. But feel free to check out the series. Desert Warfare is a good example of custom interaction on top of XRI. No other VR input SDK is used. XRI and lots of protected override voids :D
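For anyone wondering what "protected override voids" means in practice, a minimal example of layering custom behaviour on an XRI interactable (the Interactables sub-namespace shown is the XRI 3.x layout; on older versions drop that using):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;
using UnityEngine.XR.Interaction.Toolkit.Interactables;

// XRI's interactables expose virtual hooks; overriding them lets you add
// game logic on top of the stock grab behaviour. Class name is made up.
public class HolsteredWeapon : XRGrabInteractable
{
    protected override void OnSelectEntered(SelectEnterEventArgs args)
    {
        base.OnSelectEntered(args);
        Debug.Log($"Drawn by {args.interactorObject.transform.name}");
    }

    protected override void OnSelectExited(SelectExitEventArgs args)
    {
        base.OnSelectExited(args);
        Debug.Log("Holstered");
    }
}
```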
what game is that? with the spaceship?
It's a sample scene from the URP demo template that Unity provides.
Can you make a tutorial on how to add physics hands?
I would love a series of 3 videos of max 10min each where you speedrun the creation of a veeeery simple Immersive Sim using the XRTK 3.
I love this idea! Thanks for the suggestion 🍤
"Foveated" is pronounced as foh-vee-ey-ted. Here's a breakdown of the pronunciation:
"Foh" sounds like "foe" (rhyming with "go").
"Vee" sounds like the letter "V".
"Ay" sounds like the letter "A".
"Ted" sounds like "ted," as in the name.
It's pronounced fo-vee-ated rendering.
... It's a buzzword, it's just FOV rendering.
I've brought shame upon my shrimp family. Forgive me and thank you for the correction 🍤 🦐 🍤
Foveated (foe-viated), not fore-vidded
1:00 It's pronounced "FOH - VEEH- ATE - ED"
Forgive the shrimp. English is second language and shrimp is first. Thank you for the correction 🍤
Foveated. Adj. Pronounced: "FOE-VEE-AYY-TED"
The amount of roasting I get on my terrible pronunciation of this would be enough to cook every Christmas dinner in 2 days 🤣
Your pronunciation needs some work but good video!