I asked in a VR subreddit about a year ago why nobody is making Async for computer games and people gave me shit about it like "wouldn't work that way, the idea is stupid, just not possible, etc." so I gave up. Glad I asked the right people
There are a lot of people who like to do the seemingly safe bet of saying "it won't work" without actually knowing, because they aren't the experts they want to pretend they are. If a person speaks in absolutes without even trying explain why, chances are, they are not truly experts. They might know some things and even genuinely think themselves to be experts, but in reality, they have much more to learn.
Maybe you got this answer because it was discussed and tested like a hundred times since John Carmack invented it in 2012. There are serious issues with this that are much less problematic in VR.
@@kristmadsen it was the same when the first computer mouse was invented. the higher ups said its useless, why would anyone need this? And bam! everyone has a mouse or a trackpad
I tried to build something like that demo a few years ago, but I was trying to use motion vectors + depth to reproject my rendered frame which I never got to work correctly. In my engine I rendered a viewport larger than the screen to handle the issue with the blackness on the edges and then was going to use tier 2 variable rate shading to lower the render cost of the parts beyond the screen bounds. But VRS was not supported in any way on my build of Monogame which is what my engine was build apon so that was another killer for the project. I am so glad that Phil popularised the idea and its awesome that someone else managed to get something like this working, how he did it in one day I will never know, I spent like 3 weeks on it and still failed to get it working correctly. I should find my old demo and see if I can get it compiling again.
Not mentioned in the video: you can render frames at slightly higher fov and resolution the the screen, so that there's some information "behind" the monitor corner. Won't save you from turning 180 degrees, but it will fix most of the popup for a very slight hit on performance
this is not what pc gaming is supposed to be about. using vr handmedown techs. and vr and pc scene shouldn't be segregated and minding their own scene either if yall sudden mindlessly hurrah at this crossover weirdofiesta
I am so happy that Philip managed to get the message THIS far out. I do fear that this tech might have issues with particles and moving objects and the like, but when you mentioned that we could use DLSS to ONLY FILL IN THE GAPS, my jaw dropped. Thast so genius! I really hope that this is one of those missed opportunity oversights in gaming, and there isnt like some major issue behind it not being adopted yet.
You don't need to worry about particles, just render them later. The whole idea behind this solution is to split rendering into two phases: 1. Render the scene (expensive 3d phase) 2. Render the final frame from pictures of the scene (cheap 2d rendering) Just move all particle and HUD rendering to phase 2 To be honest I would suggest to go even further and add phase `1.1` where you use DLSS to draw the less important background stuff, this way you can render the important objects in 4k and background objects (buildings, grass, trees etc) in 720p or lower and just upscale with DLSS. Or go even further and render each layer in different framerate. Background in 30fps, while objects in 60fps and final image in 120fps
@@hubertnnn I mean cheap is relative, if all animated and moving objects and particles have to be rendered later on it won't be that cheap. Especially if it means that transparent object have to be rendered after that. Also screen-space-reflections of animated objects will disappear if they are not part of an animated object itself. Not saying it's not interesting, but it's definitely not a solution without compromises.
I know philip will see this and I know he will feel awesome. You have come a long way Philip. I am proud to be part of your community since your first tutorial videos.
Just think about that: You can see the difference on a TH-cam video! Granted it's 60FPS but it's still compressed video streamed from TH-cam. I can only imagine how much of the difference you can see live running it yourself. This makes it even more amazing!
oh wait, time for another balls to the wall computer build! only the third this week. /s But for real, they've been doing a great job with not doing what I just said
This is great, it really explains some odd behavior I've noticed while playing VR games, and using it for flat games sounds like an awesome idea, especially for consoles.
Handhelds too. This would make any game on the Switch or Steam Deck run near perfectly without having to tap into too much hardware power. Why are we not funding this? GPUs are the size of a gaming console nowadays but they couldn't bother to solve those issues with much simpler and cheaper solutions?
Yes ! Sometimes when the game is stuttery, you can still move freely but you can see the black screen ! Such a cool tech. It works really well, input latency is really important.
@@nktslp3650 yeah, when he showed the black borders I had a strong feeling of "i have seen this before", but i couldn't put a finger on it, until he mentioned VR.
You might be able to hide a lot of the edge warping by basically implementing overscan where the game renders at a resolution that's like 5-10% higher than the display resolution, but crops the view to the display resolution. It should in theory be only a very minor frame rate hit since you're just adding a relatively thin border of extra resolution.
The size of border you would need to eliminate the edge warping would probably impact performance more than just using a higher refresh rate to lower the amount of warping in the first place.
The magic combo there would be foveated rendering alongside the async reproj with overscan. The games that would make sense for will inevitably be a case-by-case thing for but the performance gains would be massive.
@@carlo6953 That assumes you'd need same resolution for overscan. If game is rendered at 45deg FOV at 1440p, render an overscanned area between 45 and 90deg FOV at 360p. You don't need a lot of detail, just something to make valid guesstimates within that motion blur until proper frame fills up the screen.
I was amazed watching Philip's video when it came out. I'm happy that it has reached you now! Hopefully the game developpers will get the message, I'd be really happy to see this implemented in actual games, because at the moment unless you have the most recent hardware, you have to choose between high resolution and very high framerate...
I wanna see the video in question, 3klinkphilips is the channel right? what's the video? Im guessing around to find this guy/video, what's the title so I can show him some love?
Personally super excited to see 2klicksphilip's video referred to in a LTT video, a lot of Philip's content is really high quality, especially the ones where he covers DLSS and upscaling as mentioned earlier. Can't recommend checking it out enough!
One thing that Philip's video covers that this one does not, and which I'm personally really excited about: combining this with a low shading rate border around the viewport (the fully rendered frame). Since peripheral vision is more trained on movement than detail, this is fine quality wise, and it means the screen doesn't have to guess what's at the edges - the information is already there, just in lower quality than the main viewpoint would have. That would, if not eliminate, significantly reduce the stretching artifacts.
like doing actual FOVeated rendering, where the "sharp part" is the whole normal viewport, while the low resolution is just around it, like extra 5-10% or so
@@ffsireallydontcare what if the lower quality rendered parts are actually outside your screen? You would trade a bit of framerate for more accurate projection predictions which would recoup the lost performance and give you a better experience
@justathought No because it will be fixed in 1/30th of a second. It's obviously not perfect, but that's what this technique is about, compromises. Lower resolution fringes would be way better than stretching.
I think you should just render a full quality border. Way easier, only a minor performance hit, fewer failure points, etc... With how this idea works, I doubt there is a significant difference in perceived performance between 50fps and 60 fps. It could be an issue once you get down near 30, but this seems like it really shines at 60fps and above and rendering an extra 10% of resolution is likely all you need.
This explains the weirdest I've felt in VR. The game itself lagged for random reason but my head tracking and responsiveness of the control wasn't affected. I remember thinking if the head tracking lagged along with the other lag, I would have had severe motion sickness.
Y’all have done a stupid good job recently researching and explaining difficult concepts. Between this video and the recent windows sleep/battery video, my (already high) respect for LMG’s tech knowledge has gone through the roof! And y’all didn’t even discover this hack! Thanks for sharing (and explaining)
As a Hobby Game Engine/GFX developer, I developed this technique with some tweaks: static geometry would just be rendered every few frames, but characters, grass or particles get permanently rendered. With the depth sort and extended viewport, it feels like native rendered and one can really aim precisely on a target, since this is always up to date. As mentioned in the video, dlss uses motion vectors, but has to guess the motion and static geometry. With proper implementation, this guess is not required, but can be calculated by the same hardware as the AI
What happens if rendering your static geometry takes 20ms on the GPU? How do you schedule the reprojection to ensure it's executed in time? Also, which graphics API did you use to implement this?
What I'm wondering is, does the GPU in any way know what it wouldn't need to render, sections of the screen that can persist using this tech & only re-rendering additional frames for the sections that require more updates? This make sense?, its hard to put down.
@@Winston-1984 that's what I'm currently working on, since this is now a common technique for ray tracers. Currently I'm trying to create the formulas I need and proof them for small movements. But with this fixed splitting it works for first person shooter or smth like that with much static geometry. Static geometry is nowadays really fast to render.
Take this a compliment : I love how LTT has now transformed more into a Computer Science/Electronics for Beginners channel than just another "Hey we got a NEW GPU [REVIEW] " channel.
It's why I keep watching them, I got tired of watching reviews of hardware I can't afford/don't really need yet. Though my VR rig is getting very tired.
As per usual: John Carmack is the king of optimizing rendering in games. He first implemented this tech for the Oculus Rift and has a long history of coming up with awesome solutions for problems like this. This is the man that made Doom, he knows his stuff. He's probably laughing right now and having a big "I told you so" moment.
@@Felipemelazzi I thought JC was the one who had seen it somewhere and wanted to bring it to Oculus, but, I don't think he was responsible for its actual creation. Anyone know?
Having just started getting into VR, I just recently learned about what asynchronous reprojection. Really cool to see it getting mentioned, because when I heard about it it seemed like what DLSS 3 wanted to do, only it's been here already for quite some time. Your description about how it decouples player input with the rendering makes me think of rollback netcode for fighting games and how that also decouples player input and game logic, and I'm really excited for what that means for the player experience
Seems like interesting tech. Two immediate thoughts: 1. What about moving objects? Seems like the illusion falls apart there as this only really simulates fake frames of camera tilt, not any changes to things already in your FOV. 2. What if you just slightly over-rendered the FOV? Then you actually have some buffer when tilting the camera where you have an actual rendered image to display before you need to start stretching things at the edges of the screen. Now obviously since you're rendering more geometry, you are going to take a further FPS hit, but is there a point where the tradeoff is a net gain?
In the 2kliksphilips video he mentioned how interesting would be instead of stretching the borders or showing the void leaved by not yet rendered, if we rendered a bit extra of the display area (quasi like overscan) but in low resolution to impact as little as posible the performance, and as our peripheral vision is not great we barely noticed when moving fast that a small area in the corners is momentarily lower resolution. So yes, we would have plenty of ways to improve the illusion, for example you could boost the ondisplay area with DLSS or FSR, and maybe even the extra area (I don't think that always would a good idea, depending on your main resolution is not the same that the extra area is 480p and you're creating that image from 240p with DLSS that it to be 240p and get that from 120p, probably a bad idea at least for the ladder), maybe if the resolution on the extra area is not suitable for DLSS or FSR, you could use for on display area but the frame generation of DLSS 3.0 (and future FSR 3.0) but only the frame generation on the extra area to fill the gaps with mostly fake frames gotten of deducing what your movement would lead to showing and anticipating to it
moving objects still have poor framerates, that's how it is in vr as well, your hands are much more jittery feeling than the rest of the game when your fps drops... in my experience anyway.
1. Yes, moving objects are still noticably 30fps, but coming from someone who has spent time in VR reprojection in a game like Skyrim, with lots of moving actors, you don't notice that nearly as much when your actions are still so instant, as shown in the demo. It's crazy how much you can find yourself forgiving if your head and hand movement is still smooth as butter. 2. That is another technique that VR absolutely uses that works very well to solve that issue. Easily implementable and workable.
I feel like I just had my mind blown wide open at the possibilities. This is one of my favorite ltt videos. Its hard for me to find such a technical concept so well explained. well done.
Really cool stuff. When I’m in VR and the frames drop during loading or something, it does exactly what you showed in the 10fps demo. You can see the abyss behind the projected image on the edges, with the location of the image updating to return right in front of you with each new frame. I had no idea that that’s what it was for.
Oooohhh. You're absolutely right and not once did that occur to me! Imagine if that didn't happen and everywhere you looked was the same loading screen...
I knew about this because of my oculus rift, and as you mentioned, in racing games, asynchronous spacewarp (as oculus calls it) is quite noticeable, moving your head around while driving at 100 mph can be quite jarring but oculus updated the feature and the visual bugs weren't as noticeable, it's quite interesting to see how this works, excellent video guys
With the latest "Application Spacewarp" on Quest2 the games can now send motion vectors, so the extra-polated frames no longer have to rely on so much guesswork.
Yeah the visual artifacts can actually cause MORE issues in VR than not, at least in some very specific games. Its not super noticeable in VRChat but in the Vivecraft mod for minecraft the screen turns into a wavy, smeary mess. I actually hated that WORSE than running at 40fps natively which is what my system could do with my settings at that time
This is really cool! I play shooter games a lot and the most annoying thing about low fps in games is the input lag. Slow visual information is more of an annoyance as long as it's above 30, but the slow input response times at anything below 60 fps drives me insane.
This tech has been a part of VR for years and it's awful. They need to take a new approach and have developers actually implement it at the game level rather than it being an after effect because as it stands now it doesn't work worth a shit. Awful.
@@Thezuule1 On quest, they've built support for it in-engine, it's called SSW. It's actually better than ASW on PC because it has motion data for the image, so the interpolation is quite good. Sure, real frames are still better, but the tech is getting better
@@possamei you've got that a little twisted up but yeah. SSW is the Virtual Desktop version, AppSW is the native Quest version. It works better but still not well enough to have picked up support from any real number of devs. Step in the right direction though.
@@Thezuule1 But what if DLSS and FSR only had to correct the flaws of this instead of making whole frames. DLSS and FSR might get you even more performance.
This actually reminds me of the input delay reduction setting that Capcom added for Street Fighter 6. The game itself still runs at 60fps, but the refresh rate is 120Hz for the sake of decreasing input latency.
Good point. That's one of the added benefits of a high refresh rate monitor. Even though you might not reach a high fps, having a high refresh rate monitor can still benefit you from a reduced input latency.
What does that even mean... The async shown, as I understand it, is essentially shifting your point of view before the GPU producing a new frame. But for a fighting game, it would have to make the new frame no matter what to show your input turning into a move.
The *effect* (not reality, which is a bit different) also reminds me a little bit of QuakeWorld (and to a lesser extent, Quake and Doom). Even when the framerate is high the models use low-FPS animations, and with QuakeWorld I seem to recall objects in motion skipping frames based on your network settings. Meanwhile the movement was still buttery.
On the other side: this is a static scene, no animated textures, no characters moving around, no post process effects, no particles ect. Porting this to modern game would be similar to what assassins creed syndicate (or later one, I don’t remember) did to clothing physics: capped it at 30fps while game run at 60. Effect would look similarly to what modern games do to animations when characters are too far for game engine to update them as frequent as game current fps. So I’m skeptical Also, nice GPU you got there, can’t wait for the review ;)
Yeah, this isn't new tech. Pretty much every web browser does something similar when scrolling or zooming, where most content is static, and it looks terrible when a heavy webpage tries to do parallax scrolling on an under powered system. The whole "no body thought about it" angle in this video is strange and patronising.
Realistically, if you render outside the fov that you are doing by a percentage, you would have enough scene overshoot for it to not really be a problem unless you have extremely low frame rates and incredibly fast movements.
I have said for a very long time that when it comes to refresh rate, I don't mind lower frame rates from a visual standpoint, but the input delay is more what I love about high refresh rate gaming. I'm excited to see where this technology goes.
Cloud gaming would be great with this! You handle the reprojection locally and use the delayed frames as a source. It will basically eliminate the input lag.
You would still need to sent a depth buffer and probably other information to the computer playing the game. So that means more load on the internet connection, but it still sounds interesting.
not that will not do anything, but it will do less than what you think. even if it works, and I'm not sure it does, at least not in this form, because the time till you receive a frame that will fill the stretched gaps you just created by the camera around is so much higher, compared to a computer which will just fill that gap in the next 34ms if you are running at 30fps. then you might have a super smooth camera turning, but the time to shoot, jump, etc will still be the same. heck because of the higher disparity between camera and everything else. it might even be a worsen the experience instead of making it better.
@@khhnator You're right, but the latency for cloud gaming is already not that high. The most noticeable effect at latency is moving the mouse to look around.
This is already done with VR cloud gaming services, when you use oculus air link, you’re basically doing the same thing but over LAN. If you drop a frame, you can still move your head around and it’s perfectly playable all the way down to 30 fps for most games.
I'd imagine that a lot of the edge stretching could be mitigated by rendering slightly more than is displayed on the screen, so there's a bit extra to use when turning before having to guess
I think I remember 2kliksphilip talking / showing this in his video, just have the part just outside of your fov rendered at a lower resolution and use that instead of most of the stretching because you can't see the detail anyway
I just commented something similar. Just have it render out 10% extra which is cropped off by your display anyway, so whatever "stitching" it's doing is outside your view. Would love to see this. Combine that with foveated rendering, so the additional areas rendered outside view is lower resolution.
THIS IS INSANE, I use this already on Assetto Corsa in VR so I play at 120hz but it renders 60fps, Such a light bulb moment at the start, Really wish this can catch on because I've already seen first hand how great this is
The inventor already suggested using it for normal games in 2012. Then many people made experiments and demos over the last decade. This one finally got some traction, so kudos for that, but it's nothing new.
@@kazioo2 truth to be told, I'm a long time klik empire supporter, and I'm always happy if anything good happens to him like getting mentioned by another creator I like. The technology is interesting, and it needs traction to take off, but i actually care more about Philip than the tech.
I absolutely love the recognition that kliksphilip and his brothers have been getting. It really is an amazing idea and would make everything so much better!
This feels like a slightly hacky optimisation you would see in older games, and I personally find that really cool. I always admired hearing the cleaver ways game devs to over come the limitations of hardware. Where as these days it feels like we rely on an abundance of processing power. That abundance of processing power is generally a good thing but it feels like these sorts of optimisations are becoming a lost art
Something worth noting is that comrade stingers demo does not really do what they say it does (mostly due to issues with how unity is made being pretty incompatible with this sort of demo). The gpu draws the entire frame during the last frame, so work is NOT split up over several frames. Doing that would be a pretty complicated task in existing game engines like unity.
This! The demo only distorts the *simulated* bad framerate from the slider. If you ran the demo with an actual bad framerate, it would just lag like normal. To actually implement it properly is much harder than what I did, in unity's case might require some severe shenanigans, or straight up engine modification.
@@comradestinger Do you think that something like that could be a driver feature like the DLSS2 Stuff. Where the GPU gets some motion vectors and shifts existing objects more or less like sprites arround until a new real frame got created?
@@TrackmaniaKaiser I think both could work, though I lean towards it being done by the devs themeslves rather than by driver. Since games vary so much, different scenes and camera modes would benefit/suffer from the effect in different ways. to be honest It's all very complicated.
Wonder if using DOTS and scriptable render pipeline would allow for it, can't imagine figuring all that out in an evening though. I wouldn't trust a solution that leverages Unity's undersupported APIs to be that stable though...
Now that's a public interest video ! Raising awareness on this technique will certainly go a long way, especially in open source. I hope the constructors don't shy away from it from fear that it would diminish interest on their high-end GPUs.
I think one caveat here, which has not been mentioned, is that dynamic objects in the focus/center of the screen will also only be updated by whatever frame rate your GPU allows. I wonder how to handle those scenarios. Still a very worthwhile improvement for a lot of games, for sure!
From my experience with VR, although while looking around is perfectly smooth, animated characters on the screen update slower.. But that really isn’t much of a problem. There is no input lag from your HMD, you can look around perfectly fine and the 45fps the headset makes when reprojecting is still smooth enough to track targets. The only real caveat is the input lag from your controllers. Moving your hands will feel less responsive when reprojecting than when running native framerate. I wonder how this will carry over in desktop reprojection.
This is interesting. I'd love to see this feature compared to actual higher fps to see whether the perceived gain renders an actual competitive advantage.
I tried to build something like that demo a few years ago, but I was trying to use motion vectors + depth to reproject my rendered frame which I never got to work correctly. In my engine I rendered a viewport larger than the screen to handle the issue with the blackness on the edges and then was going to use tier 2 variable rate shading to lower the render cost of the parts beyond the screen bounds. But VRS was not supported in any way on my build of Monogame which is what my engine was build apon so that was another killer for the project. I am so glad that Phil popularised the idea and its awesome that someone else managed to get something like this working, how he did it in one day I will never know, I spent like 3 weeks on it and still failed to get it working correctly.
Your demo should really have included any animated object. That would have shown some serious limitations and also would be present in nearly all games.
I keep seeing this concern, and while it might be true at low FPS, I don’t think that’s really where it would be aimed. I’d imagine most would still aim for 60+ rendering. Keep in mind that VR headsets are already using this method and are they experiencing these problems? I legitimately don’t know.
It proboly wouldn't have been as obvious as you might think. At least not in 30fps. The reason why 60 or even 120fps is so obviously faster have to do with light retention in the eye when you move the mouse. But you can't se that with animation. Have you ever been Iin the cinema.... 24fps... yes.. 24.... do you think cinema is chopy? I-max+ have 48 fps
@@matsv201 Movies don’t look or feel choppy because there is natural motion blur to everything and you also don’t control anything in them. It’s an apples to oranges comparison. Also IMAX doesn’t “have” 48 fps, IMAX is simply a format. A movie has to be shot at that framerate to display in that framerate. If it’s shot at 24 then it will be 24 in IMAX.
@@MrPhillian Depends on the game and what sort of post processing is going on in the game from my experience. It can work well but if there's a bunch of fancy effects going on it can be very noticable the smoothness is being faked.
I actually was thinking about writing an injector to apply this to existing games a few years ago when I have seen the effect on the HoloLens. A few limitations though: camera movement with a static scene can look near perfect, however if an animated object moves depth reprojection cannot fix it properly, and you would need motion vectors to guess where objects will go, but that will cause artifact near object edges.
One thing I wondered about when I first saw that video is if the PERCIEVED improvement is good enough that you could lose a couple more frames in exchange for rendering a bit further outside the actual fov, but at a really low resolution. Basically like a really wide foveated rendering. It would mean the warp would have a little more wiggle room before things started having to stretch.
I’ve never understood why this hasn’t been done before. I’ve thought it should be done since 2016 when I got my VR headset. Like you said, extremely obvious!
This video is poorly researched. Timewarp was invented by John Carmack in 2012 and described in his post "Latency Mitigation Strategies", more than 10 years ago, in the early 2012. In his original article games other than VR were already mentioned. I remember seeing normal desktop demos many years ago, but it never gained traction despite that.
I remember watching 2klik's video last month and saying wow this is amazing and mind blowing, but thought I was just excited for it because I'm a programmer, guess not
I was also excited for it but thought nothing would come of it since I've only ever seen him talk about it. Now perhaps there's a chance of this actually becoming popular and coming into games.
The main issue with these workarounds is that they depend on the Z buffer, they break down pretty quickly whenever you have objects superimposed like something behind glass, volumetric effects or screen space effects
You technically only need the depth buffer for positional reprojection (eg. stepping side-to-side). Rotational reprojection (eg. turning your head while standing still) can be done just fine without depth, and this is how most VR reprojection works already, as well as electronic image stabilization features in phone cameras (they reproject the image to render it from a more steady perspective). It might sound like a major compromise but try doing both motions, and you'll notice that your perspective changes a lot more from the rotational movement than the positional one, which is why rotational reprojection is much more important (although having both is ideal).
Glad Linus mentioned the application of this to the Steam Deck at the end. One of the best things to do for a lot of games is to cap your FPS at 30-45 on the steam deck to get more battery life with a manageable amount of compromise. In games where using this technique makes sense, it would be amazing. I hope it's not something they have to wait until version 2 to implement though.
it was would be the biggest of ironies if the STEAM Deck featured native Reprojection for their games. Valve originally was against Asynchronous Reprojection tech back when Oculus released it with the help of John Carmack. At the time, Valve was against Reprojection tech and even wrote some tid bits about how 'fake frames are bad, and devs just need to optimize better'. Valve eventually wised up and adopted their own version of Reprojection for SteamVR (not the first time they were proven wrong, Valve initially only wanted teleport locomotion in VR, called free locomotion bad). The most recent iteration of Async Reprojection from Oculus/Meta meshes Reprojection + game motion vector data + Machine Learning; called Application Spacewarp (AppSW). A recent title that features of good example of it in action is Among Us VR on the Quest2.
It's huge for low/mid range setups to make games more responsive but it's also nice for high end machines because you'd completely negate the impact of 1%frames and feel like you're always at your average
This could have insane potential for handhelds. The steam deck can usually pull at least 30 fps on new triple A titles, which seems to be perfect for making the picture much smoother
This is one of the reasons why I love LTT. Making known new and innovative technologies that could revolutionize the industry. Not only handheld consoles but mobile games and even cloud gaming can get significant improvements from this
Asynchronous projection is great in VRChat (in PCVR on my relatively old PC) where I usually get 15 fps and often even far less than that! I don't really mind the black borders that much in that case especially since the view usually tends to go a bit farther than my FOV making them usually only appear if things are going _extremely_ slow, like, a stutter or any other time I'm getting over 0.25 seconds per frame. So, perhaps another way to make the black bars less obvious, would be to simply increase the FOV of the rendered frames a little bit so that there is more margin. Would make lower frame rates, but it might be worth it in any case where the frame rates would be terrible anyways.
The issue with asynchronous reprojection is that with complex scenes or fast action it creates visible artifacts and weirdness. This is where AI comes in, like DLSS 3 frame generation. By using deep learning, it can more accurately and realisticly insert additional frames. That's really the way I'd the future along with ai upscaling. It has to be, otherwise we're going to need a nuclear reactor to power the future rtx 7090 or whatever.
Even bigger issue is that in VR games (where the camera can just move through walls if you move your head in the wrong place) the only thing async timewarp has to do is take the latest headset position and reproject there. However, in a regular game you can't just take keyboard/controller input and reproject to a new position based on that. Or your character would go through the floor, walls or obstacles in the map. Instead you would have to run full collision detections and physics simulations in order to tell where the camera is supposed to be in the reprojected frame. This not only makes it massively harder to implement compared to VR where it can be automatic. But it also increases the chance of hitting a CPU bottleneck and not gaining that much performance anyway. Then when you combine visual artifacts and other issues you start seeing why game developers haven't used their development time to implement this before.
Oculus/Meta is already doing that with their 3rd generation reprojection tech - Application Space Warp (AppSW). It meshes Reprojection tech with game engine motion vector data w/ a splash of Machine Learning to generate the best looking VR Reprojection yet. The recently released Among Us VR on the Quest2 is a good example of a game using the latest AppSW techniques. All thx to John Carmack, the grand daddy of Asynchronous Time Warp (ATW; the first mainstream Asynchronous Reprojection)
@@meowmix705 I don't play enough quest 2 games to be able to speak on AppSW (I play pc through link, and admittedly almost all of always disable regular ASW because of the inherent visual ghosting). Carmack may be the GOAT... I also get a kick out of the fact that he's probably the most reluctant Meta employee ever, but he sticks around because he just loves VR that much
@@shawn2780 Yup, Carmack is the Goat. As to PC-ASW, the Rift mostly uses 1.0 of ASW (has the typical ghosting visual artifacts). ASW 2.0 is only used for a handful of games (greatly diminished the ghosting, but required depth data from the game; many games did not support it). ASW 3.0 is unfortunately a Quest exclusive (for now?), they've rebranded it as AppSW to not confuse it with PC-ASW.
This actually seems interesting to try out, some games I play, even on a 2060 may struggle to have even just a stable fps, so perhaps this kind of thing will help for those kind of situations
i honestly think this might be really great for ultra and super wide gaming as even more of th rendered frame is already in your peripharal vision, so the suboptimal edges will be even less noticible
As a dude with a 32:9 monitor, I'd settle for games actually supporting my monitor resolution AND aspect ratio. Most games I have to play windowed mode, because even if they allow me to go full screen (and don't add any black bars), usually the camera zoom is completely fucked and/or the UI elements are not properly positioned. Heck, even a newer game such as Elden Ring just give me two 25% black bars on each side, but with mods it actually supports 32:9. Wouldn't have been that much work to support it by default. A checkbox for disabling black bars and vignette, and an option to push the UI elements out to the sides and all would be fine. But for some reason, most developers don't even care to support newer monitors with weird resolutions.
@@Imevul I also run with a 32:9 display and feel you, but don't be surprised with elden ring from soft does not care about proper PC support. About the UI thing, I actually can't think of a game off the top of my head that supports 32:9 without also having the option to adjust UI elements as in my experience it's very common and has always been a thing even consoles 10 years ago
That really explains some quirks of the Quest and Streaming, specially on how when something’s loading you can move your head freely and the VR picture would stay still in the space as a single frame, just like you showed here.
This kind of thing is what I actually always think about since way back when motion interpolation becomes common in TV plus the fact that I'm familiar with 3D (I don't do real time 3D rendering, only non real time). What I'm thinking was the fact that you have this motion data, depth, etc should be good enough to have some kind of in game motion interpolation but not really interpolation but for future frame. Even without taking the control input into account, just creating that extra frame based on the previous frame data should be good enough to give that extra visual smoothness feel (basically you'll end up with somewhat the same latency as the original FPS). Since it already working directly within the game, we should be able to account for the controller input and the AI, physics, etc to create the fake frames with an actual lower latency benefit, so basically the game engine run at double the rendering FPS so the extra data can be used to generate the fake frames. For screen edge problem, the simple way to solve it is simply to overscan the rendering (or simply zoom the rendered image a bit) so the game have extra data to work with. Tied to this problem is actually the main problem with motion interpolation and this frame generation/fake frames thing, which is disocclusion. Disocclusion is something that was not in view in the previous frame becoming in view in the current frame. How can the game fill this gap because there is no data to fill the gaps. Nvidia I believe is using AI to fill those gaps which even with AI, it still looked terrible. But as it has been mentioned by people using DLSS3, you don't really see it, which is actually good for non AI solution, because if in motion people don't see that defects, then using non AI solution to fill the gaps (simple warp or something) should be good enough in most situation. Also doesn't need that optical flow accelerator because the reason why Nvidia use optical flow is to get motion data for elements that is not represented on the game motion data (like shadow movement) but in reality, that is not important, as in most probably won't notice when the shadow just move based on the surface motion (rather than the shadow motion itself) for that in between fake frames. For a more advanced application, what I'm thinking is a hybrid approach where most stuff are being rendered at like half the FPS and half of it will reuse the previous frame data to lessen the rendering burden. So unlike motion interpolation or frame generation, this approach will still render the in between frame, but render it less, like probably render the disoccluded part, maybe decouple the screen space stuff and also shadow so it rendered at normal FPS instead of half so what the game end up with is alternating between high cost and low cost frame. When I thought about that stuff, AI wasn't a thing thus I didn't think including any AI stuff in the process. Since AI is a thing right now, some stuff probably can be done better with AI like for example the disocclusion problem, rather than render the disoccluded part normally, probably it can just render the disoccluded part with flat texture as a simple guide for the AI to match that flat rendered look to the surrounding image which might be the faster way to do it.
And here it comes, 2 years later, Nvidia implement asynchronous reprojection in their Reflex 2 feature for their generated frame (now that they insert 3 AI generated frames in-between each computed frame) in order to reduce input lag perception.
As one of the top 250 vrchat players, I remember when async was a new tech in it's buggy years but new people take for granted HOW MUCH of a massive difference it makes nowadays. I was with VR since consumer conception and it's interesting to see how things have shaped up.
seen couple people "oh I need to turn it off in this" and then forget that motion smoothing is not the same and in steam VR it's quite a hidden option that resets every session.
I probably wouldn't rely on it for a higher FPS, but it looks like it has a great potential to smooth out FPS dips (say I'm playing at 60 that can sometimes drop to 50ish when a lot is going on).
Things like Particle FX and transparency is where the issues really show up not just on the edge. A spinning rocket for instance may warp the inner sections where the fins were in the previous frame especially if you pause the game.. since the predicted velocity remains the same but the object has stopped moving. They've solved this for oculus space warp by calculating velocities per object in order to have better prediction, however that requires the game engine to support this on an object basis for all materials and edge cases. We wish we could have used this on Mothergunship: Forge, but had to disable it due to the visible warping. It has the potential to be an absolute savior though since trying to hit 72 or 90 fps basically on a phone attached to your face is a huge challenge for peformance.
I'm curious if you'd still get the same results if you add moving objects into the scene. Since the objects update their position at the true frame rate, I bet they would look super choppy.
Yeah youre right If you maybe played the teardown lidar mod this is kinda the same (atleast the mod has the same downsides as this) Honestly tho it cant get any choppier because the framerate stays the same. The examples were with really low framerates but if you had your normal 120 fps and a 360 hz monitor this technoglogie makes a big difference
The thing is, you always want consistent input no matter what. The alternative in your scenario is the objects are still choppy, but your mouse movement is also sluggish
I'm actually surprise LTT got a demo version without the moving objects (or at least not showing it). Cus yes, animation still looks choppy according to the fps cap, but the perception of moving the camera is smoother than butter 0-0
I totally expected this when it was first implemented for the Oculus Rift... Really surprised that it has taken this long to even start being prototyped for flat games
Another part of the original solution's brilliance that you lose on this that the VR solution exploits, is the fact that the edges of the frame in VR, tend to suffer from some kind of distortion from the lens to begin with. Especially back when John Carmack came up with it in Oculus. It was perfect for that. It's still really interesting on a flat panel but I think the fact some people might notice it is part of why it never really go that far.
I'm very interested in using a form of overscan where you're viewing a cropped in frame of the whole rendered image, so when you're panning your screen around it doesn't have the issue of stretching, unless you pan outside of the rendered frame.
If Async Timewarp becomes popular on desktop, I wonder if it makes sense to spend the extra time render the z-frames behind some of the foreground layers, or special models that are say 'tagged' or earmarked to have render behind turned on. So for example you might want to render behind a fast-moving player model, or a narrow light pole, but you won't want to render behind a wall. Or maybe you only need to render behind a small pixel distance. I'm not sure how easy that is to add to game engines?
asynch reprojection and all other implementations of it such as motion smoothing for VR have always had one major flaw, and that is when rendering stationary objects against a moving background. The best examples are driving and flying titles such as MSFS and American truck sim. The cab/cockpit generally doesn't change distance from the camera/player view so when the scenery goes past the cockpit parts of the cockpit that are exposed to the moving background start rippling at the edges. This is one of the reasons AR it not used in VR anymore and also the reason why Motion smoothing is avoided as well. And besides we are talking two different technologies DLSS V Asynch Repro. One is designed to fill in the frames and the other is an upscaler. Not really an apple to apple comparison!
This has come up multiple times at our local VR dev meetup actually. The main issue with async timewarp you are missing is that, yes it makes the camera appear smooth, but that's all. Test it with something that isn't a purely static scene and it breaks pretty badly. VR has the same problem at low framerates. Sure turning your head won't make you feel sick, but everything else will still feel chunky. Really similar the interpolation effect people hate so much on TVs. So it's a good idea to offer it as an option, but it's not a silver bullet. It also breaks _really_ badly for non-first person views. Amusingly, most games already use a lot of prediction. Many games run at a fixed internal rate, and the graphics are interpolated or extrapolated from that to smooth it out. Many even use reprojection effects for TAA or other stochastic rendering.effects. It is a little weird that no-one offers async timewarp though.
100% agree on there being tons of edge cases and weird scenarios that may or may not work. I'd love to see more people explore this stuff. I think async timewarp is slightly different from reprojection like TAA, in the "asynchronous" bit. As far as I understand, for VR, the timewarp is usually (always?) handled outside of the game in the library/SDK used to control the headset. For example OpenXR does timewarp for you with no real developer effort needed. Draw the scene to a texture, hand it over to OpenXR and you're done. To do it "properly" in a non-vr game would require more deep integration into the engine, creating a "high priority" render-context that interrupts the main rendering before vblank, etc etc.
There are solutions for many of those issues that I was exploring. Using motion vectors can make the special reprojection much better than how its done in this sample, and its out of my scope but combining that with AI would take it even another step forward. Also rather than using a stretched border to cover the blackness my version used variable rate shading and rendered about 15% beyond the bounds of the viewport. It wasn't good enough for super fast mouse movements but again if that was further extended with AI it could be even more convincing.
@@comradestinger Yeah, I'm actually not particularly certain of how the compositor priority works. I thought that most GPUs only had a single graphics queue, so I'm not sure how you mix in multiple presentation jobs with a single long running render command stream. Maybe it uses async compute? No idea...
@@comradestinger Yeah, I don't mean async timewarp is exactly like other reprojection algorithms, just that games do have a few layers like this already to use previously computed stuff to make new frames. Pretty unclear to me too if it's possible to move the work of the compositor process to a local thread.
aaand 2kliksphilip was mentioned once again.. happy to see him getting featured in other channels for his contents. I find it amazing how much different his videos about tech and hardware can be from the more traditional Hardware/Tech youtube channels
I have been in VR for almost 3 years, and as soon as meta implemented ASW and ATW, I wanted this for PC Games... I have been and still waiting for this, for years
This can make a huge difference to desktop gaming for sure, mostly in the budget and aging mid-tier builds, but can you imagine what this could do for the Steam Deck and other handhelds? Not just make games smoother, but the amount of battery life this could save would be a huge advantage. Would love to see how far this tech can be pushed and one day even become as widely adopted as DLSS and FSR
Thanks to Ridge for sponsoring today's video! Save up to *40% off and get Free Worldwide Shipping until Dec. 22nd at www.ridge.com/LINUS
thank you ridge
ridge sucks!!!!!!!!
Why does it look like inside of a vagania
6:43 "Top 5 - 10%" , me whos top .1% & still havent become a pro player...
if you render a little bit off screen you might not need stretching
I asked in a VR subreddit about a year ago why nobody is making Async for computer games and people gave me shit about it like "wouldn't work that way, the idea is stupid, just not possible, etc." so I gave up. Glad I asked the right people
There are a lot of people who like to do the seemingly safe bet of saying "it won't work" without actually knowing, because they aren't the experts they want to pretend they are.
If a person speaks in absolutes without even trying explain why, chances are, they are not truly experts. They might know some things and even genuinely think themselves to be experts, but in reality, they have much more to learn.
People that reply to things on the internet tend to respond that way to new ideas.
Maybe you got this answer because it was discussed and tested like a hundred times since John Carmack invented it in 2012. There are serious issues with this that are much less problematic in VR.
The right idea is not asking redditors anything.
@@kristmadsen it was the same when the first computer mouse was invented. the higher ups said its useless, why would anyone need this? And bam! everyone has a mouse or a trackpad
I'm so happy Phil put a spotlight on this concept, and I'm even happier that a channel like LTT is carrying that torch forwards.
I tried to build something like that demo a few years ago, but I was trying to use motion vectors + depth to reproject my rendered frame which I never got to work correctly. In my engine I rendered a viewport larger than the screen to handle the issue with the blackness on the edges and then was going to use tier 2 variable rate shading to lower the render cost of the parts beyond the screen bounds. But VRS was not supported in any way on my build of Monogame which is what my engine was build apon so that was another killer for the project.
I am so glad that Phil popularised the idea and its awesome that someone else managed to get something like this working, how he did it in one day I will never know, I spent like 3 weeks on it and still failed to get it working correctly. I should find my old demo and see if I can get it compiling again.
Not mentioned in the video: you can render frames at slightly higher fov and resolution the the screen, so that there's some information "behind" the monitor corner.
Won't save you from turning 180 degrees, but it will fix most of the popup for a very slight hit on performance
this is not what pc gaming is supposed to be about. using vr handmedown techs. and vr and pc scene shouldn't be segregated and minding their own scene either if yall sudden mindlessly hurrah at this crossover weirdofiesta
@@lyrilljackson What do you mean?
@@martinkrauser4029 asmh
@@lyrilljackson Brainlet take
@@JustSomeDinosaurPerson Careful... thats a [Lvl. 163] PC Master-Supremacist, the bane of mobile, console, and vr gamers.....
Plouffe's "He owns a display" gag is always going to crack me up.
Don't all of them own displays? It's a tech media company, I'd hope they do.
There was a video a couple of weeks ago about his _display._
@@Lu-db1uf You know.. that's the joke
@@Lu-db1uf But his display is.... *special*
@@Lu-db1uf he bought the alienware miniled one and hes proud that he was one of the first to get it and now its a meme
"He owns a display" - that's gotta hunt him for ever like the "you're fired" for Colton 😂
Love it 😁
which video is "he owns a display" from again? having a hard time finding it
@@fsendventd He's been doing a lot of monitor unboxing videos on ShortCircuit, I think it's from one of the Alienware monitor videos
@@fsendventd it's from the 8k gaming video
@@Jeffrey_Wong yeah, it's also a direct quote, he says "I own a display" in the dlss 3.0 video
Yeah i got a really good laught out those 4 words under his name.
2kliksphilip and LTT is a crossover I never knew I needed. Make it happen.
Bump lol
Wonder if it can be used with CSGO, whether Valve allows or you brute force it.
They won't
They just use him and his ideas with not even 1 full second of credit
@@morfgo its not malicious
@@morfgo Dude, they literally credited him and his video in the description!
I am so happy that Philip managed to get the message THIS far out. I do fear that this tech might have issues with particles and moving objects and the like, but when you mentioned that we could use DLSS to ONLY FILL IN THE GAPS, my jaw dropped. Thast so genius! I really hope that this is one of those missed opportunity oversights in gaming, and there isnt like some major issue behind it not being adopted yet.
Exactly. On Linux this exact setup has been available for the last year. It makes a massive difference
You don't need to worry about particles, just render them later.
The whole idea behind this solution is to split rendering into two phases:
1. Render the scene (expensive 3d phase)
2. Render the final frame from pictures of the scene (cheap 2d rendering)
Just move all particle and HUD rendering to phase 2
To be honest I would suggest to go even further and add phase `1.1` where you use DLSS to draw the less important background stuff, this way you can render the important objects in 4k and background objects (buildings, grass, trees etc) in 720p or lower and just upscale with DLSS.
Or go even further and render each layer in different framerate.
Background in 30fps, while objects in 60fps and final image in 120fps
This was my first thought, too. Combining Async with DLSS/FSR could potentially be the actual magic bullet we're looking for.
@@hubertnnn
I mean cheap is relative, if all animated and moving objects and particles have to be rendered later on it won't be that cheap.
Especially if it means that transparent object have to be rendered after that.
Also screen-space-reflections of animated objects will disappear if they are not part of an animated object itself.
Not saying it's not interesting, but it's definitely not a solution without compromises.
@@antikz3731 ??? Explain. I use Linux and there's no such thing.
I know philip will see this and I know he will feel awesome.
You have come a long way Philip. I am proud to be part of your community since your first tutorial videos.
Here's to Philip, love his videos on all 3 of his channels
Love him. His tutorials layed the base for my environment artist gamedev job.
kliki boy i love you
@@charliegroves more like 14 lol
Just think about that: You can see the difference on a TH-cam video! Granted it's 60FPS but it's still compressed video streamed from TH-cam. I can only imagine how much of the difference you can see live running it yourself. This makes it even more amazing!
This is probably my favorite type of video from LTT. Highlighting and explaining interesting technology is fascinating.
oh wait, time for another balls to the wall computer build! only the third this week. /s
But for real, they've been doing a great job with not doing what I just said
it's up there for sure
This is great, it really explains some odd behavior I've noticed while playing VR games, and using it for flat games sounds like an awesome idea, especially for consoles.
Handhelds too. This would make any game on the Switch or Steam Deck run near perfectly without having to tap into too much hardware power.
Why are we not funding this? GPUs are the size of a gaming console nowadays but they couldn't bother to solve those issues with much simpler and cheaper solutions?
Yes ! Sometimes when the game is stuttery, you can still move freely but you can see the black screen ! Such a cool tech. It works really well, input latency is really important.
@@nktslp3650 yeah, when he showed the black borders I had a strong feeling of "i have seen this before", but i couldn't put a finger on it, until he mentioned VR.
You might be able to hide a lot of the edge warping by basically implementing overscan where the game renders at a resolution that's like 5-10% higher than the display resolution, but crops the view to the display resolution. It should in theory be only a very minor frame rate hit since you're just adding a relatively thin border of extra resolution.
The size of border you would need to eliminate the edge warping would probably impact performance more than just using a higher refresh rate to lower the amount of warping in the first place.
The magic combo there would be foveated rendering alongside the async reproj with overscan. The games that would make sense for will inevitably be a case-by-case thing for but the performance gains would be massive.
@@carlo6953 That assumes you'd need same resolution for overscan. If game is rendered at 45deg FOV at 1440p, render an overscanned area between 45 and 90deg FOV at 360p. You don't need a lot of detail, just something to make valid guesstimates within that motion blur until proper frame fills up the screen.
Yes, definitely. Surprised they don’t do this.
I’m glad I wasn’t the only one thinking this
I was amazed watching Philip's video when it came out. I'm happy that it has reached you now! Hopefully the game developpers will get the message, I'd be really happy to see this implemented in actual games, because at the moment unless you have the most recent hardware, you have to choose between high resolution and very high framerate...
I wanna see the video in question, 3klinkphilips is the channel right? what's the video? Im guessing around to find this guy/video, what's the title so I can show him some love?
@@SendFoodz it's linked in this video's description my man (if you haven't found it yet)
2kliksphilip is an unsung hero, his DLSS coverage is also one of his best content
His upscaling content is the best ;)
Never seen either of those but I agree
Personally super excited to see 2klicksphilip's video referred to in a LTT video, a lot of Philip's content is really high quality, especially the ones where he covers DLSS and upscaling as mentioned earlier. Can't recommend checking it out enough!
@@elise3455 3klicksphilip is just more work. Both will be _automatically_ obsolete when 0clicksphilip releases.
ouch, dont be so mean
As someone who plays VR constantly, it’s nice to see this brought up for non-VR stuff
One thing that Philip's video covers that this one does not, and which I'm personally really excited about: combining this with a low shading rate border around the viewport (the fully rendered frame). Since peripheral vision is more trained on movement than detail, this is fine quality wise, and it means the screen doesn't have to guess what's at the edges - the information is already there, just in lower quality than the main viewpoint would have. That would, if not eliminate, significantly reduce the stretching artifacts.
like doing actual FOVeated rendering, where the "sharp part" is the whole normal viewport, while the low resolution is just around it, like extra 5-10% or so
@@ffsireallydontcare what if the lower quality rendered parts are actually outside your screen? You would trade a bit of framerate for more accurate projection predictions which would recoup the lost performance and give you a better experience
@justathought No because it will be fixed in 1/30th of a second. It's obviously not perfect, but that's what this technique is about, compromises. Lower resolution fringes would be way better than stretching.
@@gabrielenitti3243 that's literally what they're saying lol
I think you should just render a full quality border. Way easier, only a minor performance hit, fewer failure points, etc... With how this idea works, I doubt there is a significant difference in perceived performance between 50fps and 60 fps. It could be an issue once you get down near 30, but this seems like it really shines at 60fps and above and rendering an extra 10% of resolution is likely all you need.
This explains the weirdest I've felt in VR. The game itself lagged for random reason but my head tracking and responsiveness of the control wasn't affected. I remember thinking if the head tracking lagged along with the other lag, I would have had severe motion sickness.
Yeah it was honestly a very amazing thing when my quest 2 froze, I was like "oh no please no motion sickness" for the first time but it was so normal
Yep, it's pretty standard and required for hmd based VR. (At least some sort of reproduction or time warp) There are a lot of different variations.
Y’all have done a stupid good job recently researching and explaining difficult concepts. Between this video and the recent windows sleep/battery video, my (already high) respect for LMG’s tech knowledge has gone through the roof! And y’all didn’t even discover this hack! Thanks for sharing (and explaining)
older videos were more technical now they suck up to the chump who doesnt know how to navigate a settings menu
3kliks and LTT collabing is what the INTERNET NEEDS!!!!
2kliks but either way Yes.
no it isnt
@@Qimchiy its the same guy
@@cora2887 Erm, if you paid attention you would know they were brothers 🙄
@@Qimchiy might as well get kliksphillip in here too ;)
As a Hobby Game Engine/GFX developer, I developed this technique with some tweaks: static geometry would just be rendered every few frames, but characters, grass or particles get permanently rendered. With the depth sort and extended viewport, it feels like native rendered and one can really aim precisely on a target, since this is always up to date. As mentioned in the video, dlss uses motion vectors, but has to guess the motion and static geometry. With proper implementation, this guess is not required, but can be calculated by the same hardware as the AI
Does this end up looking like motion blur?
What happens if rendering your static geometry takes 20ms on the GPU? How do you schedule the reprojection to ensure it's executed in time?
Also, which graphics API did you use to implement this?
What I'm wondering is, does the GPU in any way know what it wouldn't need to render, sections of the screen that can persist using this tech & only re-rendering additional frames for the sections that require more updates?
This make sense?, its hard to put down.
I want it, and i want it now
@@Winston-1984 that's what I'm currently working on, since this is now a common technique for ray tracers. Currently I'm trying to create the formulas I need and proof them for small movements. But with this fixed splitting it works for first person shooter or smth like that with much static geometry. Static geometry is nowadays really fast to render.
Take this a compliment : I love how LTT has now transformed more into a Computer Science/Electronics for Beginners channel than just another "Hey we got a NEW GPU [REVIEW] " channel.
Well... They had covered everything on that aisle...
It's why I keep watching them, I got tired of watching reviews of hardware I can't afford/don't really need yet. Though my VR rig is getting very tired.
As per usual: John Carmack is the king of optimizing rendering in games. He first implemented this tech for the Oculus Rift and has a long history of coming up with awesome solutions for problems like this. This is the man that made Doom, he knows his stuff.
He's probably laughing right now and having a big "I told you so" moment.
John Carmack is the responsible for asynchronous reprojection!?!?!?!?!?!
This living god never stops to amuse the world of technology!
@@Felipemelazzi I thought JC was the one who had seen it somewhere and wanted to bring it to Oculus, but, I don't think he was responsible for its actual creation. Anyone know?
he basically mimicked how your eye works in real life, I thought of this too but i thought it was already implemented.
Meta improved on this tech, now it is called Asynchronous Spacewarp and bundled with Oculus Quest 2. And let me tell you, it is really cool.
Having just started getting into VR, I just recently learned about what asynchronous reprojection. Really cool to see it getting mentioned, because when I heard about it it seemed like what DLSS 3 wanted to do, only it's been here already for quite some time.
Your description about how it decouples player input with the rendering makes me think of rollback netcode for fighting games and how that also decouples player input and game logic, and I'm really excited for what that means for the player experience
Philip is revolutionising the way we think about gaming and game dev just with common sense
what? nothing here is new
@@Neurotik51 using technology for VR with conventional monitors? I haven’t heard of that before
Seems like interesting tech. Two immediate thoughts:
1. What about moving objects? Seems like the illusion falls apart there as this only really simulates fake frames of camera tilt, not any changes to things already in your FOV.
2. What if you just slightly over-rendered the FOV? Then you actually have some buffer when tilting the camera where you have an actual rendered image to display before you need to start stretching things at the edges of the screen. Now obviously since you're rendering more geometry, you are going to take a further FPS hit, but is there a point where the tradeoff is a net gain?
2. You can do that I think.
1. They will move at the actual framerate. All asynchronous reprojection does is make the game feel more responsive.
I mean vr uses it and it runs pretty well
In the 2kliksphilips video he mentioned how interesting would be instead of stretching the borders or showing the void leaved by not yet rendered, if we rendered a bit extra of the display area (quasi like overscan) but in low resolution to impact as little as posible the performance, and as our peripheral vision is not great we barely noticed when moving fast that a small area in the corners is momentarily lower resolution.
So yes, we would have plenty of ways to improve the illusion, for example you could boost the ondisplay area with DLSS or FSR, and maybe even the extra area (I don't think that always would a good idea, depending on your main resolution is not the same that the extra area is 480p and you're creating that image from 240p with DLSS that it to be 240p and get that from 120p, probably a bad idea at least for the ladder), maybe if the resolution on the extra area is not suitable for DLSS or FSR, you could use for on display area but the frame generation of DLSS 3.0 (and future FSR 3.0) but only the frame generation on the extra area to fill the gaps with mostly fake frames gotten of deducing what your movement would lead to showing and anticipating to it
moving objects still have poor framerates, that's how it is in vr as well, your hands are much more jittery feeling than the rest of the game when your fps drops... in my experience anyway.
1. Yes, moving objects are still noticably 30fps, but coming from someone who has spent time in VR reprojection in a game like Skyrim, with lots of moving actors, you don't notice that nearly as much when your actions are still so instant, as shown in the demo. It's crazy how much you can find yourself forgiving if your head and hand movement is still smooth as butter.
2. That is another technique that VR absolutely uses that works very well to solve that issue. Easily implementable and workable.
I feel like I just had my mind blown wide open at the possibilities. This is one of my favorite ltt videos. Its hard for me to find such a technical concept so well explained. well done.
I love how much labs has instantly matured this channel. I have watched LTT for a long time but recently its really boosted its level.
But… they are not even done?!
It's been far from instantaneous, but we are starting to see the returns and it is definitely nice.
Really cool stuff. When I’m in VR and the frames drop during loading or something, it does exactly what you showed in the 10fps demo. You can see the abyss behind the projected image on the edges, with the location of the image updating to return right in front of you with each new frame. I had no idea that that’s what it was for.
ive been using vr for 3 years now- and I had no idea what it was until today either! Super cool to learn more about that stuff
Oooohhh. You're absolutely right and not once did that occur to me! Imagine if that didn't happen and everywhere you looked was the same loading screen...
@@MrScorpianwarrior how to get motion sickness lol.
I knew about this because of my oculus rift, and as you mentioned, in racing games, asynchronous spacewarp (as oculus calls it) is quite noticeable, moving your head around while driving at 100 mph can be quite jarring but oculus updated the feature and the visual bugs weren't as noticeable, it's quite interesting to see how this works, excellent video guys
With the latest "Application Spacewarp" on Quest2 the games can now send motion vectors, so the extra-polated frames no longer have to rely on so much guesswork.
Yeah the visual artifacts can actually cause MORE issues in VR than not, at least in some very specific games. Its not super noticeable in VRChat but in the Vivecraft mod for minecraft the screen turns into a wavy, smeary mess. I actually hated that WORSE than running at 40fps natively which is what my system could do with my settings at that time
This is really cool! I play shooter games a lot and the most annoying thing about low fps in games is the input lag. Slow visual information is more of an annoyance as long as it's above 30, but the slow input response times at anything below 60 fps drives me insane.
I always had a feeling that tech like this is actually the real future of gaming / VR performance. And not just raw rtx 4090 performance.
This tech has been a part of VR for years and it's awful. They need to take a new approach and have developers actually implement it at the game level rather than it being an after effect because as it stands now it doesn't work worth a shit. Awful.
@@Thezuule1 I use it in RE8 vr so I can run rtx while in vr and it doesn't feel great but it feels better than native.
@@Thezuule1 On quest, they've built support for it in-engine, it's called SSW. It's actually better than ASW on PC because it has motion data for the image, so the interpolation is quite good. Sure, real frames are still better, but the tech is getting better
@@possamei you've got that a little twisted up but yeah. SSW is the Virtual Desktop version, AppSW is the native Quest version. It works better but still not well enough to have picked up support from any real number of devs. Step in the right direction though.
@@Thezuule1 But what if DLSS and FSR only had to correct the flaws of this instead of making whole frames. DLSS and FSR might get you even more performance.
Glad seeing Philip getting bigger every day. He's amazing
2KP is such an amazing channel, he always has very interesting, out of the box ideas, and I love to see more of his wacky stuff being picked up!
This actually reminds me of the input delay reduction setting that Capcom added for Street Fighter 6. The game itself still runs at 60fps, but the refresh rate is 120Hz for the sake of decreasing input latency.
Good point. That's one of the added benefits of a high refresh rate monitor. Even though you might not reach a high fps, having a high refresh rate monitor can still benefit you from a reduced input latency.
that's not how any of this works...
this tech won't really be useful for fighting games specifically, and I think it would be more counterproductive tbh.
What does that even mean... The async shown, as I understand it, is essentially shifting your point of view before the GPU producing a new frame. But for a fighting game, it would have to make the new frame no matter what to show your input turning into a move.
The *effect* (not reality, which is a bit different) also reminds me a little bit of QuakeWorld (and to a lesser extent, Quake and Doom). Even when the framerate is high the models use low-FPS animations, and with QuakeWorld I seem to recall objects in motion skipping frames based on your network settings. Meanwhile the movement was still buttery.
On the other side:
this is a static scene, no animated textures, no characters moving around, no post process effects, no particles ect. Porting this to modern game would be similar to what assassins creed syndicate (or later one, I don’t remember) did to clothing physics: capped it at 30fps while game run at 60. Effect would look similarly to what modern games do to animations when characters are too far for game engine to update them as frequent as game current fps. So I’m skeptical
Also, nice GPU you got there, can’t wait for the review ;)
it is effective in VR games designed with it, so it will probably be effective on normal PC games if it's kept in mind.
Yeah, this isn't new tech. Pretty much every web browser does something similar when scrolling or zooming, where most content is static, and it looks terrible when a heavy webpage tries to do parallax scrolling on an under powered system. The whole "no body thought about it" angle in this video is strange and patronising.
Realistically, if you render outside the fov that you are doing by a percentage, you would have enough scene overshoot for it to not really be a problem unless you have extremely low frame rates and incredibly fast movements.
I have said for a very long time that when it comes to refresh rate, I don't mind lower frame rates from a visual standpoint, but the input delay is more what I love about high refresh rate gaming. I'm excited to see where this technology goes.
I'm so happy to see phillip reach this far out of the csgo bubble with this
'valve please fix'
A LTT vídeo at 60fps?! My god, the little animations they put like the outro card look so good 👍
yeah doesnt the dummy know 60fps is better than 8k uploads
Cloud gaming would be great with this!
You handle the reprojection locally and use the delayed frames as a source.
It will basically eliminate the input lag.
This is a sick idea
You would still need to sent a depth buffer and probably other information to the computer playing the game. So that means more load on the internet connection, but it still sounds interesting.
not that will not do anything, but it will do less than what you think.
even if it works, and I'm not sure it does, at least not in this form, because the time till you receive a frame that will fill the stretched gaps you just created by the camera around is so much higher, compared to a computer which will just fill that gap in the next 34ms if you are running at 30fps.
then you might have a super smooth camera turning, but the time to shoot, jump, etc will still be the same.
heck because of the higher disparity between camera and everything else. it might even be a worsen the experience instead of making it better.
@@khhnator You're right, but the latency for cloud gaming is already not that high. The most noticeable effect at latency is moving the mouse to look around.
This is already done with VR cloud gaming services, when you use oculus air link, you’re basically doing the same thing but over LAN. If you drop a frame, you can still move your head around and it’s perfectly playable all the way down to 30 fps for most games.
I'd imagine that a lot of the edge stretching could be mitigated by rendering slightly more than is displayed on the screen, so there's a bit extra to use when turning before having to guess
that's an interesting thought. this would bring us back to the age of overscan i feel lol
@@Blancdaddyreject dlss return to overscan
I think I remember 2kliksphilip talking / showing this in his video, just have the part just outside of your fov rendered at a lower resolution and use that instead of most of the stretching because you can't see the detail anyway
I just commented something similar. Just have it render out 10% extra which is cropped off by your display anyway, so whatever "stitching" it's doing is outside your view. Would love to see this. Combine that with foveated rendering, so the additional areas rendered outside view is lower resolution.
imagine the longevity of GPUs if this technology ever becomes standardized. man.
THIS IS INSANE, I use this already on Assetto Corsa in VR so I play at 120hz but it renders 60fps, Such a light bulb moment at the start, Really wish this can catch on because I've already seen first hand how great this is
Yay, 2kliksphilip and his brother 3kliksphilip finally get some well deserved attention!
The inventor already suggested using it for normal games in 2012. Then many people made experiments and demos over the last decade. This one finally got some traction, so kudos for that, but it's nothing new.
@@kazioo2 truth to be told, I'm a long time klik empire supporter, and I'm always happy if anything good happens to him like getting mentioned by another creator I like.
The technology is interesting, and it needs traction to take off, but i actually care more about Philip than the tech.
Brother?
@@semick4729 yeah he has two brothers kliksphilip and 3kliksphilip
@@MrALjo0oker Not sure if that is necessarily true, someone should get to the bottom of that. Valve, please fix.
I stumbled upon 2kliksphilip’s channels when I was researching how to make maps in Hammer. So glad you guys have mentioned him in multiple videos now!
I absolutely love the recognition that kliksphilip and his brothers have been getting. It really is an amazing idea and would make everything so much better!
Wow, not sure who wrote this one but such a good explanation. So clear and well presented, good job!
This feels like a slightly hacky optimisation you would see in older games, and I personally find that really cool. I always admired hearing the cleaver ways game devs to over come the limitations of hardware.
Where as these days it feels like we rely on an abundance of processing power. That abundance of processing power is generally a good thing but it feels like these sorts of optimisations are becoming a lost art
cough cough Gotham Knight
been watching 3kliks for years. I'm glad he's getting some recognition.
2kliks
@@TehF0cus kliks
@@TehF0cus same person
@@TehF0cus but don't confuse him with his evil brother kliksphilip
@@veganssuck2155 Its a joke....
Something worth noting is that comrade stingers demo does not really do what they say it does (mostly due to issues with how unity is made being pretty incompatible with this sort of demo). The gpu draws the entire frame during the last frame, so work is NOT split up over several frames. Doing that would be a pretty complicated task in existing game engines like unity.
This!
The demo only distorts the *simulated* bad framerate from the slider. If you ran the demo with an actual bad framerate, it would just lag like normal.
To actually implement it properly is much harder than what I did, in unity's case might require some severe shenanigans, or straight up engine modification.
@@comradestinger Do you think that something like that could be a driver feature like the DLSS2 Stuff. Where the GPU gets some motion vectors and shifts existing objects more or less like sprites arround until a new real frame got created?
@@TrackmaniaKaiser I think both could work, though I lean towards it being done by the devs themeslves rather than by driver. Since games vary so much, different scenes and camera modes would benefit/suffer from the effect in different ways.
to be honest It's all very complicated.
Wonder if using DOTS and scriptable render pipeline would allow for it, can't imagine figuring all that out in an evening though. I wouldn't trust a solution that leverages Unity's undersupported APIs to be that stable though...
@@comradestinger Good work man
Now that's a public interest video ! Raising awareness on this technique will certainly go a long way, especially in open source. I hope the constructors don't shy away from it from fear that it would diminish interest on their high-end GPUs.
High-end?
What r u talking abt, they can make AAA game 8k 240fps without 6slot GPU
I think one caveat here, which has not been mentioned, is that dynamic objects in the focus/center of the screen will also only be updated by whatever frame rate your GPU allows. I wonder how to handle those scenarios. Still a very worthwhile improvement for a lot of games, for sure!
From my experience with VR, although while looking around is perfectly smooth, animated characters on the screen update slower.. But that really isn’t much of a problem. There is no input lag from your HMD, you can look around perfectly fine and the 45fps the headset makes when reprojecting is still smooth enough to track targets.
The only real caveat is the input lag from your controllers. Moving your hands will feel less responsive when reprojecting than when running native framerate.
I wonder how this will carry over in desktop reprojection.
This is interesting. I'd love to see this feature compared to actual higher fps to see whether the perceived gain renders an actual competitive advantage.
I tried to build something like that demo a few years ago, but I was trying to use motion vectors + depth to reproject my rendered frame which I never got to work correctly. In my engine I rendered a viewport larger than the screen to handle the issue with the blackness on the edges and then was going to use tier 2 variable rate shading to lower the render cost of the parts beyond the screen bounds. But VRS was not supported in any way on my build of Monogame which is what my engine was build apon so that was another killer for the project.
I am so glad that Phil popularised the idea and its awesome that someone else managed to get something like this working, how he did it in one day I will never know, I spent like 3 weeks on it and still failed to get it working correctly.
I literally did a spit take at 6:01 Now I have coffee all over my keyboard 😂😂
Your demo should really have included any animated object. That would have shown some serious limitations and also would be present in nearly all games.
Seriously though.
I keep seeing this concern, and while it might be true at low FPS, I don’t think that’s really where it would be aimed. I’d imagine most would still aim for 60+ rendering.
Keep in mind that VR headsets are already using this method and are they experiencing these problems? I legitimately don’t know.
It proboly wouldn't have been as obvious as you might think. At least not in 30fps. The reason why 60 or even 120fps is so obviously faster have to do with light retention in the eye when you move the mouse. But you can't se that with animation. Have you ever been Iin the cinema.... 24fps... yes.. 24.... do you think cinema is chopy? I-max+ have 48 fps
@@matsv201 Movies don’t look or feel choppy because there is natural motion blur to everything and you also don’t control anything in them. It’s an apples to oranges comparison. Also IMAX doesn’t “have” 48 fps, IMAX is simply a format. A movie has to be shot at that framerate to display in that framerate. If it’s shot at 24 then it will be 24 in IMAX.
@@MrPhillian Depends on the game and what sort of post processing is going on in the game from my experience. It can work well but if there's a bunch of fancy effects going on it can be very noticable the smoothness is being faked.
The coffee guy, the tech news guy, hi, he owns a display, Mark. Such incredible descriptions of the people and their roles
Holy shit, Philip MADE IT
I actually was thinking about writing an injector to apply this to existing games a few years ago when I have seen the effect on the HoloLens. A few limitations though: camera movement with a static scene can look near perfect, however if an animated object moves depth reprojection cannot fix it properly, and you would need motion vectors to guess where objects will go, but that will cause artifact near object edges.
One thing I wondered about when I first saw that video is if the PERCIEVED improvement is good enough that you could lose a couple more frames in exchange for rendering a bit further outside the actual fov, but at a really low resolution. Basically like a really wide foveated rendering. It would mean the warp would have a little more wiggle room before things started having to stretch.
I’ve never understood why this hasn’t been done before. I’ve thought it should be done since 2016 when I got my VR headset. Like you said, extremely obvious!
I think we just witnessed one of those rare moments when an elegant solution clicks and starts a revolution
This video is poorly researched. Timewarp was invented by John Carmack in 2012 and described in his post "Latency Mitigation Strategies", more than 10 years ago, in the early 2012. In his original article games other than VR were already mentioned. I remember seeing normal desktop demos many years ago, but it never gained traction despite that.
Who's ever PC y'all used to show the games list at the start. Nice to see you as a man of culture. 0:55
I remember watching 2klik's video last month and saying wow this is amazing and mind blowing, but thought I was just excited for it because I'm a programmer, guess not
I was also excited for it but thought nothing would come of it since I've only ever seen him talk about it. Now perhaps there's a chance of this actually becoming popular and coming into games.
The main issue with these workarounds is that they depend on the Z buffer, they break down pretty quickly whenever you have objects superimposed like something behind glass, volumetric effects or screen space effects
Ya, that sounds like it could be a big issue...
You technically only need the depth buffer for positional reprojection (eg. stepping side-to-side). Rotational reprojection (eg. turning your head while standing still) can be done just fine without depth, and this is how most VR reprojection works already, as well as electronic image stabilization features in phone cameras (they reproject the image to render it from a more steady perspective).
It might sound like a major compromise but try doing both motions, and you'll notice that your perspective changes a lot more from the rotational movement than the positional one, which is why rotational reprojection is much more important (although having both is ideal).
Glad Linus mentioned the application of this to the Steam Deck at the end. One of the best things to do for a lot of games is to cap your FPS at 30-45 on the steam deck to get more battery life with a manageable amount of compromise. In games where using this technique makes sense, it would be amazing. I hope it's not something they have to wait until version 2 to implement though.
it was would be the biggest of ironies if the STEAM Deck featured native Reprojection for their games. Valve originally was against Asynchronous Reprojection tech back when Oculus released it with the help of John Carmack. At the time, Valve was against Reprojection tech and even wrote some tid bits about how 'fake frames are bad, and devs just need to optimize better'. Valve eventually wised up and adopted their own version of Reprojection for SteamVR (not the first time they were proven wrong, Valve initially only wanted teleport locomotion in VR, called free locomotion bad). The most recent iteration of Async Reprojection from Oculus/Meta meshes Reprojection + game motion vector data + Machine Learning; called Application Spacewarp (AppSW). A recent title that features of good example of it in action is Among Us VR on the Quest2.
It's huge for low/mid range setups to make games more responsive but it's also nice for high end machines because you'd completely negate the impact of 1%frames and feel like you're always at your average
This could have insane potential for handhelds. The steam deck can usually pull at least 30 fps on new triple A titles, which seems to be perfect for making the picture much smoother
This is one of the reasons why I love LTT. Making known new and innovative technologies that could revolutionize the industry.
Not only handheld consoles but mobile games and even cloud gaming can get significant improvements from this
Asynchronous projection is great in VRChat (in PCVR on my relatively old PC) where I usually get 15 fps and often even far less than that! I don't really mind the black borders that much in that case especially since the view usually tends to go a bit farther than my FOV making them usually only appear if things are going _extremely_ slow, like, a stutter or any other time I'm getting over 0.25 seconds per frame. So, perhaps another way to make the black bars less obvious, would be to simply increase the FOV of the rendered frames a little bit so that there is more margin. Would make lower frame rates, but it might be worth it in any case where the frame rates would be terrible anyways.
The issue with asynchronous reprojection is that with complex scenes or fast action it creates visible artifacts and weirdness. This is where AI comes in, like DLSS 3 frame generation. By using deep learning, it can more accurately and realisticly insert additional frames. That's really the way I'd the future along with ai upscaling. It has to be, otherwise we're going to need a nuclear reactor to power the future rtx 7090 or whatever.
Even bigger issue is that in VR games (where the camera can just move through walls if you move your head in the wrong place) the only thing async timewarp has to do is take the latest headset position and reproject there.
However, in a regular game you can't just take keyboard/controller input and reproject to a new position based on that. Or your character would go through the floor, walls or obstacles in the map. Instead you would have to run full collision detections and physics simulations in order to tell where the camera is supposed to be in the reprojected frame.
This not only makes it massively harder to implement compared to VR where it can be automatic. But it also increases the chance of hitting a CPU bottleneck and not gaining that much performance anyway. Then when you combine visual artifacts and other issues you start seeing why game developers haven't used their development time to implement this before.
Oculus/Meta is already doing that with their 3rd generation reprojection tech - Application Space Warp (AppSW). It meshes Reprojection tech with game engine motion vector data w/ a splash of Machine Learning to generate the best looking VR Reprojection yet. The recently released Among Us VR on the Quest2 is a good example of a game using the latest AppSW techniques. All thx to John Carmack, the grand daddy of Asynchronous Time Warp (ATW; the first mainstream Asynchronous Reprojection)
@@meowmix705 I don't play enough quest 2 games to be able to speak on AppSW (I play pc through link, and admittedly almost all of always disable regular ASW because of the inherent visual ghosting). Carmack may be the GOAT... I also get a kick out of the fact that he's probably the most reluctant Meta employee ever, but he sticks around because he just loves VR that much
@@shawn2780 Yup, Carmack is the Goat. As to PC-ASW, the Rift mostly uses 1.0 of ASW (has the typical ghosting visual artifacts). ASW 2.0 is only used for a handful of games (greatly diminished the ghosting, but required depth data from the game; many games did not support it). ASW 3.0 is unfortunately a Quest exclusive (for now?), they've rebranded it as AppSW to not confuse it with PC-ASW.
7:06 I love how editors put: "he owns a display"
This actually seems interesting to try out, some games I play, even on a 2060 may struggle to have even just a stable fps, so perhaps this kind of thing will help for those kind of situations
i honestly think this might be really great for ultra and super wide gaming as even more of th rendered frame is already in your peripharal vision, so the suboptimal edges will be even less noticible
As a dude with a 32:9 monitor, I'd settle for games actually supporting my monitor resolution AND aspect ratio. Most games I have to play windowed mode, because even if they allow me to go full screen (and don't add any black bars), usually the camera zoom is completely fucked and/or the UI elements are not properly positioned.
Heck, even a newer game such as Elden Ring just give me two 25% black bars on each side, but with mods it actually supports 32:9. Wouldn't have been that much work to support it by default. A checkbox for disabling black bars and vignette, and an option to push the UI elements out to the sides and all would be fine. But for some reason, most developers don't even care to support newer monitors with weird resolutions.
@@Imevul it did support it by default but fromsoftware disabled it intentionally because of "competitive advantage" or some bullshit
@@Imevul I also run with a 32:9 display and feel you, but don't be surprised with elden ring from soft does not care about proper PC support. About the UI thing, I actually can't think of a game off the top of my head that supports 32:9 without also having the option to adjust UI elements as in my experience it's very common and has always been a thing even consoles 10 years ago
That really explains some quirks of the Quest and Streaming, specially on how when something’s loading you can move your head freely and the VR picture would stay still in the space as a single frame, just like you showed here.
This kind of thing is what I actually always think about since way back when motion interpolation becomes common in TV plus the fact that I'm familiar with 3D (I don't do real time 3D rendering, only non real time). What I'm thinking was the fact that you have this motion data, depth, etc should be good enough to have some kind of in game motion interpolation but not really interpolation but for future frame. Even without taking the control input into account, just creating that extra frame based on the previous frame data should be good enough to give that extra visual smoothness feel (basically you'll end up with somewhat the same latency as the original FPS). Since it already working directly within the game, we should be able to account for the controller input and the AI, physics, etc to create the fake frames with an actual lower latency benefit, so basically the game engine run at double the rendering FPS so the extra data can be used to generate the fake frames.
For screen edge problem, the simple way to solve it is simply to overscan the rendering (or simply zoom the rendered image a bit) so the game have extra data to work with. Tied to this problem is actually the main problem with motion interpolation and this frame generation/fake frames thing, which is disocclusion. Disocclusion is something that was not in view in the previous frame becoming in view in the current frame. How can the game fill this gap because there is no data to fill the gaps. Nvidia I believe is using AI to fill those gaps which even with AI, it still looked terrible. But as it has been mentioned by people using DLSS3, you don't really see it, which is actually good for non AI solution, because if in motion people don't see that defects, then using non AI solution to fill the gaps (simple warp or something) should be good enough in most situation. Also doesn't need that optical flow accelerator because the reason why Nvidia use optical flow is to get motion data for elements that is not represented on the game motion data (like shadow movement) but in reality, that is not important, as in most probably won't notice when the shadow just move based on the surface motion (rather than the shadow motion itself) for that in between fake frames.
For a more advanced application, what I'm thinking is a hybrid approach where most stuff are being rendered at like half the FPS and half of it will reuse the previous frame data to lessen the rendering burden. So unlike motion interpolation or frame generation, this approach will still render the in between frame, but render it less, like probably render the disoccluded part, maybe decouple the screen space stuff and also shadow so it rendered at normal FPS instead of half so what the game end up with is alternating between high cost and low cost frame.
When I thought about that stuff, AI wasn't a thing thus I didn't think including any AI stuff in the process. Since AI is a thing right now, some stuff probably can be done better with AI like for example the disocclusion problem, rather than render the disoccluded part normally, probably it can just render the disoccluded part with flat texture as a simple guide for the AI to match that flat rendered look to the surrounding image which might be the faster way to do it.
Interpolation for the future is called extrapolation
And here it comes, 2 years later, Nvidia implement asynchronous reprojection in their Reflex 2 feature for their generated frame (now that they insert 3 AI generated frames in-between each computed frame) in order to reduce input lag perception.
This is so so incredible! I hope this will be the next-gen image helper in all upcoming and older games!
As one of the top 250 vrchat players, I remember when async was a new tech in it's buggy years but new people take for granted HOW MUCH of a massive difference it makes nowadays. I was with VR since consumer conception and it's interesting to see how things have shaped up.
This feels like a self-own
seen couple people "oh I need to turn it off in this" and then forget that motion smoothing is not the same and in steam VR it's quite a hidden option that resets every session.
I probably wouldn't rely on it for a higher FPS, but it looks like it has a great potential to smooth out FPS dips (say I'm playing at 60 that can sometimes drop to 50ish when a lot is going on).
This would be perfect for high resolution gaming. Because if you want you could play at the highest settings + 4k at a smoother feel.
Things like Particle FX and transparency is where the issues really show up not just on the edge. A spinning rocket for instance may warp the inner sections where the fins were in the previous frame especially if you pause the game.. since the predicted velocity remains the same but the object has stopped moving. They've solved this for oculus space warp by calculating velocities per object in order to have better prediction, however that requires the game engine to support this on an object basis for all materials and edge cases. We wish we could have used this on Mothergunship: Forge, but had to disable it due to the visible warping. It has the potential to be an absolute savior though since trying to hit 72 or 90 fps basically on a phone attached to your face is a huge challenge for peformance.
I'm curious if you'd still get the same results if you add moving objects into the scene. Since the objects update their position at the true frame rate, I bet they would look super choppy.
Yeah youre right
If you maybe played the teardown lidar mod this is kinda the same (atleast the mod has the same downsides as this)
Honestly tho it cant get any choppier because the framerate stays the same.
The examples were with really low framerates but if you had your normal 120 fps and a 360 hz monitor this technoglogie makes a big difference
Some games already separate physics fps from rendering fps.
The thing is, you always want consistent input no matter what. The alternative in your scenario is the objects are still choppy, but your mouse movement is also sluggish
I'm actually surprise LTT got a demo version without the moving objects (or at least not showing it). Cus yes, animation still looks choppy according to the fps cap, but the perception of moving the camera is smoother than butter 0-0
The latest version of the demo has moving objects, so you can see for yourself. (they look laggy, moving at the true framerate, as expected) x)
I totally expected this when it was first implemented for the Oculus Rift... Really surprised that it has taken this long to even start being prototyped for flat games
Yeah, it's brilliant for heavy games in VR (looking at you MSFS!).
The little descriptive gamer blurbs around 6:40 were hilarious.
"Tech News Man"
"He owns a display"
"Mark"
10/10 editing.
Another part of the original solution's brilliance that you lose on this that the VR solution exploits, is the fact that the edges of the frame in VR, tend to suffer from some kind of distortion from the lens to begin with. Especially back when John Carmack came up with it in Oculus. It was perfect for that. It's still really interesting on a flat panel but I think the fact some people might notice it is part of why it never really go that far.
I'm very interested in using a form of overscan where you're viewing a cropped in frame of the whole rendered image, so when you're panning your screen around it doesn't have the issue of stretching, unless you pan outside of the rendered frame.
Well, that's as simple as rendering a little more outside of the screen area.
Philip’s name coming up on the video took me by surprise 😂❤
If Async Timewarp becomes popular on desktop, I wonder if it makes sense to spend the extra time render the z-frames behind some of the foreground layers, or special models that are say 'tagged' or earmarked to have render behind turned on.
So for example you might want to render behind a fast-moving player model, or a narrow light pole, but you won't want to render behind a wall. Or maybe you only need to render behind a small pixel distance. I'm not sure how easy that is to add to game engines?
It is finally here! Nvidia just brought it with Frame Warp
asynch reprojection and all other implementations of it such as motion smoothing for VR have always had one major flaw, and that is when rendering stationary objects against a moving background. The best examples are driving and flying titles such as MSFS and American truck sim. The cab/cockpit generally doesn't change distance from the camera/player view so when the scenery goes past the cockpit parts of the cockpit that are exposed to the moving background start rippling at the edges.
This is one of the reasons AR it not used in VR anymore and also the reason why Motion smoothing is avoided as well. And besides we are talking two different technologies DLSS V Asynch Repro. One is designed to fill in the frames and the other is an upscaler. Not really an apple to apple comparison!
This has come up multiple times at our local VR dev meetup actually. The main issue with async timewarp you are missing is that, yes it makes the camera appear smooth, but that's all. Test it with something that isn't a purely static scene and it breaks pretty badly. VR has the same problem at low framerates. Sure turning your head won't make you feel sick, but everything else will still feel chunky. Really similar the interpolation effect people hate so much on TVs. So it's a good idea to offer it as an option, but it's not a silver bullet. It also breaks _really_ badly for non-first person views.
Amusingly, most games already use a lot of prediction. Many games run at a fixed internal rate, and the graphics are interpolated or extrapolated from that to smooth it out. Many even use reprojection effects for TAA or other stochastic rendering.effects. It is a little weird that no-one offers async timewarp though.
100% agree on there being tons of edge cases and weird scenarios that may or may not work. I'd love to see more people explore this stuff.
I think async timewarp is slightly different from reprojection like TAA, in the "asynchronous" bit. As far as I understand, for VR, the timewarp is usually (always?) handled outside of the game in the library/SDK used to control the headset. For example OpenXR does timewarp for you with no real developer effort needed. Draw the scene to a texture, hand it over to OpenXR and you're done.
To do it "properly" in a non-vr game would require more deep integration into the engine, creating a "high priority" render-context that interrupts the main rendering before vblank, etc etc.
There are solutions for many of those issues that I was exploring. Using motion vectors can make the special reprojection much better than how its done in this sample, and its out of my scope but combining that with AI would take it even another step forward. Also rather than using a stretched border to cover the blackness my version used variable rate shading and rendered about 15% beyond the bounds of the viewport. It wasn't good enough for super fast mouse movements but again if that was further extended with AI it could be even more convincing.
@@comradestinger Yeah, I'm actually not particularly certain of how the compositor priority works. I thought that most GPUs only had a single graphics queue, so I'm not sure how you mix in multiple presentation jobs with a single long running render command stream. Maybe it uses async compute? No idea...
@@comradestinger Yeah, I don't mean async timewarp is exactly like other reprojection algorithms, just that games do have a few layers like this already to use previously computed stuff to make new frames. Pretty unclear to me too if it's possible to move the work of the compositor process to a local thread.
A year later... wondering if this will ever get implemented by nvidia amd or hell even intel.
Omfg, ploufe is never living down that "I own a monitor" comment. Genuinely the best thing that's happened on LTT for a while.
aaand 2kliksphilip was mentioned once again..
happy to see him getting featured in other channels for his contents. I find it amazing how much different his videos about tech and hardware can be from the more traditional Hardware/Tech youtube channels
So 2 years later..Nvidia managed to do this with Reflex 2
thought the same thing
I have been in VR for almost 3 years, and as soon as meta implemented ASW and ATW, I wanted this for PC Games... I have been and still waiting for this, for years
Since you explored this topic, you should also do a video about foveated rendering.
I saw Philips original video and was really hoping other youtubers would spread the word
This can make a huge difference to desktop gaming for sure, mostly in the budget and aging mid-tier builds, but can you imagine what this could do for the Steam Deck and other handhelds? Not just make games smoother, but the amount of battery life this could save would be a huge advantage. Would love to see how far this tech can be pushed and one day even become as widely adopted as DLSS and FSR
This is why I love LTT, pure unadulterated excitement and honest love for gaming and tech