Thank you so much, everyone, for checking out our talk! This project has been a tremendous effort from many people across many teams. The R&D continues as we all work toward pushing the boundaries of realism in virtual reality. Watch this space, and cheers!
-Alex from Agile Lens
Man, Alex is like the only dude pushing VR in Unreal. Good on him for raising the bar
there are dozens of us. DOZENS!
@@ibrews Dozens pushing VR? Sure. Helping raise the tide for all boats (educating and sharing knowledge at the bleeding edge)? Very few. You can count them on one finger ;).
Cheers,
b
@@behrampatel4872 🫶
@@ibrews dozens of us, I tell you 😂 need to catch up soon Alex
@@bolayer yes yes !
Alex is a very, very talented guy. Generous with his knowledge too. Cheers
You’re kind! There’s a very large team who helped make this happen and I’m just fortunate to be one of the messengers 🎉
I got to experience this in-person and it absolutely blew my mind.
thank you for coming Nathie!!
How incredible it is to see these great works and every tiny detail they pay attention to in order to achieve something unique. Congratulations! It gives me a lot of motivation to keep pushing forward in VR
That was amazing!
This is pure gold. Thanks!!
Very informative, thanks.
Good. Let's push push push!
Such a great video and presentation, thanks heaps to everyone involved 🙌🏽
wow! awesome job!!
This guy is really talented, for real. I love VR and I cannot wait to push it
I'm not sure why Epic took away an important feature like HTML5 support. I think we need this feature back
Hi Jose, Alex and Neil, incredible work!
I don't quite understand the part about the mirrors: can ray tracing and sphere reflection captures work at the same time?
I thought that if you have ray-traced reflections enabled you cannot see screen space reflections.
Thank you very much, and excuse my English.
Hi! That’s correct: the project ONLY uses ray-traced reflections, but it changes the sample count and other levels of precision depending on context
@@ibrews Thank you so much 👍
The highest sample count I can apply is 2 with my mobile RTX 3080
@@arealvisionvideos this runs on a desktop RTX 4090
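For anyone who wants to experiment with the approach Alex describes above (ray-traced reflections only, with sample count dialed up or down by context), here is a minimal sketch assuming the stock UE 4.27 ray tracing console variables; the function, the trigger condition, and the specific values are my own illustration, not the project's actual code.

```cpp
// Illustrative only: swap ray-traced reflection quality at runtime
// depending on context (e.g. whether the user is standing near a mirror).
#include "Kismet/KismetSystemLibrary.h"

void SetReflectionQuality(UObject* WorldContext, bool bNearMirror)
{
    // Spend more samples/bounces only where mirror-like reflections matter,
    // and drop back down elsewhere to protect the VR frame budget.
    const int32 SamplesPerPixel = bNearMirror ? 2 : 1;
    const int32 MaxBounces      = bNearMirror ? 2 : 1;

    UKismetSystemLibrary::ExecuteConsoleCommand(
        WorldContext,
        FString::Printf(TEXT("r.RayTracing.Reflections.SamplesPerPixel %d"), SamplesPerPixel));
    UKismetSystemLibrary::ExecuteConsoleCommand(
        WorldContext,
        FString::Printf(TEXT("r.RayTracing.Reflections.MaxBounces %d"), MaxBounces));
}
```

Calling something like this from the overlap events of a trigger volume placed around each mirror is one simple way to drive the context switch.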
Incredible effort and attention to detail. OptiTrack is very accurate for mocap, but it's expensive and not an easily portable hardware solution. If you were to do it again with the Quest 2, would Meta's Shared Spatial Anchors have been good enough to be safe for multiple users in a shared space?
How about in the case of critically aligned IRL objects with their digital-twin counterparts? Repeatable?
We've been doing real-world alignment with objects for years with Vive Trackers, and even controllers sometimes (if the users don't need them).
Shared Spatial Anchors are still very bad and we could not rely on them. No improvement yet as far as we can see
🤞
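As a side note on the Vive Tracker workflow mentioned above: the core of that kind of real-world alignment is just composing the tracker's pose with a one-time calibration offset. A minimal sketch of that idea (my own illustration, not the project's code), assuming the tracker is exposed through a UMotionControllerComponent and that TrackerToObjectOffset has been measured once as the digital twin's pose relative to the tracker:

```cpp
// Illustrative only: snap a digital-twin actor onto a physical prop
// tracked by a Vive Tracker.
#include "GameFramework/Actor.h"
#include "MotionControllerComponent.h"

void AlignDigitalTwin(AActor* TwinActor,
                      const UMotionControllerComponent* TrackerComponent,
                      const FTransform& TrackerToObjectOffset)
{
    // World pose of the physical tracker as reported by the runtime.
    const FTransform TrackerWorld = TrackerComponent->GetComponentTransform();

    // Apply the calibrated offset so the virtual object lands on the real one.
    const FTransform TwinWorld = TrackerToObjectOffset * TrackerWorld;

    TwinActor->SetActorTransform(TwinWorld);
}
```

Re-running this each frame (or whenever the prop is moved) keeps the digital twin locked to its physical counterpart.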
This kind of problem would be so much easier to solve if people were leveraging Dynamic Foveated Rendering. It's such a shame that people don't recognize its potential; I hope the Vision Pro will change that.
Alex here! We did try dynamic foveated rendering using the eye tracking of the Meta Quest Pro, but a) in 4.27 it requires the Oculus branch of Unreal and b) the latency was perceivable.
Much, much better in 5.3 now!
@ibrews Roughly what percentage gain in performance did you see after you toggled DFR on?
@@brettcameratraveler at best 20% ? Wasn’t worth it for the artifacts
Do you think the tracking has improved with the new Meta Quest 3? I.e., is the special equipment you made for the Meta Quest Pro headsets no longer needed?
Also, I have personally been working on a VR scene and I am able to get absolutely amazing quality in PC VR, but I see these sleek white lines. They're really faded, but if you really try to look you can see them, usually on furniture like sofas or beds. Maybe it's the texture that is causing that, or maybe I am just missing a rendering setting?
I would really appreciate it if you could tell me what settings you used in the rendering tab to get the highest quality possible.
Hello there! If you are using ray tracing and the white lines you are seeing are on the edges of the furniture, it is related to ray tracing and the max roughness setting. Editing that value in the post-process volume, or the roughness in the material, should fix it!
@@juanipignatta414 seconding Juani ! :D
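To make the fix above concrete, here is a minimal sketch assuming the UE 4.27 ray-traced-reflection fields on FPostProcessSettings; the 0.6 value is only an illustrative starting point, not a recommendation from the talk.

```cpp
// Illustrative only: include rougher surfaces in the ray-traced reflection
// pass so furniture edges stop falling back to the bright "white line" look.
#include "Engine/PostProcessVolume.h"

void RaiseReflectionMaxRoughness(APostProcessVolume* Volume)
{
    // Mark the setting as overridden on this volume...
    Volume->Settings.bOverride_RayTracingReflectionsMaxRoughness = true;
    // ...then raise the roughness cutoff so rougher surfaces stay ray traced.
    Volume->Settings.RayTracingReflectionsMaxRoughness = 0.6f;
}
```

The same cutoff can also be adjusted globally with the r.RayTracing.Reflections.MaxRoughness console variable, or handled per material as Juani mentions.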
They always talk about the program and show a scale model, but they never talk about the hardware resources that were used. Make everything more transparent
👍
🤩
go Juani go!!
we are all allergic to UV unwrapping
I like how they can't sell rich people houses unless every single one of the houses in that ugly clump of development has the same paper-thin veneer of reality as the people trying to buy property there.
Lol you guys are not the first. I've built a capital just for VR. I'm one developer with no billions
Couldn't you use neural nets to add detail to the images without using more polygons? In fact, why use polygons and traditional rendering at all? In theory a neurally driven image could have infinite resolution: the closer you get to something, the more detail appears as the neural net keeps generating what should be there.
Probably, but that project started about two years ago, and the tech was pretty much unknown in the archviz industry. You would also probably need lots and lots of renderings of the original scenes.
Nanite works great and NeRFs aren't mature enough for production applications like this. The super-resolution model you're describing doesn't exist yet for rendering real-time 3D scenes... it might in 5-10 years, but right now it's sci-fi lol
Good suggestion. We did explore it at the time, but we could not get the results we wanted. Remember that in VR with full 6DOF, we needed to make sure that even if you get super close to any object, you still get a very realistic view. Neural nets are promising and we will continue our R&D on them, but they were just not the right fit for this project.