Rendering technology is so wild. It's hard to believe that just 30 years ago we had games at a resolution of 160x200, and now computers are inventing frames out of whole cloth.
30 years is an eternity in modern technology terms.
Hell 30 years is a significant portion of a human lifetime.
Man, insane how we went from the pinhole camera back in 1021 to what we have today in 2022. Absolutely incredible.
A Pentium from 30 years ago would have produced 10+ quadrillion cycles by now.
Today's 8-core CPUs can do the same number of cycles in under 30 minutes. That's how quickly technology has improved and evolved.
30 years ago was 1992, when we had the SNES with a much higher resolution, and only 4 years before the N64 and SM64.
As far as I'm aware, only pre-8-bit consoles and handhelds ran at resolutions that low. Both the NES and SNES ran at 256x240, although for NTSC regions the image was actually rendered at 256x224. The N64 was capable of a 480i image, although the vast majority of games used the 320x240 mode. The original Game Boy and Game Boy Color both rendered at 160x144. Much older video formats also rendered at 144p.
I mean, yes, the Atari 2600 had pixels you could count on screen and the PS5 and current PCs are rendering at 3840x2160, but also, time moves forward. (The naming was always a pet peeve of mine: "4K" suggests a resolution 4 times larger, but if 1080p is 1920x1080 then 4K is only double along each axis, the "4K" coming from the horizontal axis and not the vertical one. But that's neither here nor there.)
One thing to note about 24fps movies is that each frame is filled with motion blur, which helps our brain create detail from what is essentially a blurry mess.
A frame from a video game capped at 24fps would be essentially moments frozen in time with no motion blur, thus looking less smooth.
In fact, shutter angles are a consideration for DPs to vary the length of the motion blur depending on whether it's an action scene (making it look almost superhuman) or to mimic intoxication or being drugged by making the blur extra long.
Usually the shutter speed is double the fps you capture at, so you get smoother, more natural-looking motion blur.
The rave scene in Blade being a good example of no blur, if I remember correctly
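A quick sketch of the 180-degree shutter rule these comments are describing: exposure time per frame as a function of frame rate and shutter angle. The numbers are standard film conventions, not anything specific to this video.

```python
def exposure_time(fps: float, shutter_angle_deg: float = 180.0) -> float:
    """Seconds of light captured per frame for a given shutter angle."""
    return (shutter_angle_deg / 360.0) / fps

# 24fps at a 180-degree shutter -> 1/48s of blur per frame (the classic film look,
# and the "shutter speed is double the fps" rule from the comment above).
print(exposure_time(24))        # ~0.0208 s (1/48)
# A wider shutter angle lengthens the blur, e.g. for a drugged/dreamy look.
print(exposure_time(24, 270))   # ~0.0313 s (1/32)
# A narrow angle shortens it, leaving each frame closer to the frozen-in-time
# game frame described above.
print(exposure_time(24, 45))    # ~0.0052 s (1/192)
```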
I don't know if you've tried this already, but how does it look if you extract all the AI-generated frames from the 30->60fps video and play them back without any real frames (and then AI-generate again based on that)? Might be a cool experiment to see how many times you can generate until it looks bad or unplayable.
I really like this idea! It wouldn't necessarily be in context, but it'd be interesting to watch at the very least.
Cool idea
this has 3kliks written all over it, love it
Brilliant idea! Would make a perfect 3kliks video.
3KLIKS, 2KLIKS, KLIKS, ANYONE
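One way to try the experiment suggested above, as a minimal sketch. It assumes the interpolator writes real frames at even output indices and generated frames at odd ones, which holds for simple 2x interpolation but is worth verifying for your tool; the filenames are placeholders.

```python
import subprocess

def keep_odd_frames(src: str, dst: str, fps: int = 30) -> None:
    """Keep only the odd-numbered (i.e. generated) frames and retime them."""
    subprocess.run([
        "ffmpeg", "-i", src,
        # select odd frame indices, then rewrite timestamps so the survivors
        # play back as a continuous stream at the target fps
        "-vf", f"select='mod(n,2)',setpts=N/({fps}*TB)",
        "-an", dst,
    ], check=True)

# Round 1: strip the real frames out of the 60fps interpolated video...
keep_odd_frames("interpolated_60fps.mp4", "ai_frames_only.mp4")
# ...then feed ai_frames_only.mp4 back into the interpolator and repeat,
# counting how many generations it takes before it falls apart.
```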
Once again Philip is explaining this technology better than anybody else.
Somehow this CSGO YouTuber is my go-to guy when telling people about AI technology.
Great video man. Really shows you're willing to go the extra mile with the effort you put into this stuff. 💙
I enjoy his videos, but Digital Foundry is my go-to if you want in-depth analysis.
@@lolroflmaoization sure, I'd agree with some stuff. But with this very example of DLSS 3, Digital Foundry seems very restricted by Nvidia in what they can show, and they didn't dedicate their video to exploring AI frame generation in its entirety; they just showed some percentage increases when using DLSS 2 and DLSS 3 in a couple of Nvidia-approved games and scenarios.
So at least for now, I think Philip has them beat in terms of a full comprehensive look at this technology. So hats off to him I say.
you must be thinking of 3kliksphillip. different guy. brother. father?
@@WhizXGames haha true, the CSGO mapper, the AI expert and the jelly baby eater. A true family.
@@eniff2925 ultimately DLSS 3 isn't out yet, so this video isn't trying to be a deep dive and review of DLSS 3 and whether you should turn it on or not. It's meant as an overview of what frame interpolation is, how it works, the limitations of the technology and what artifacts to look for, what frame rates the technology works better at, and the fact that you see much better results when interpolating smoother-looking motion to begin with (such as 24fps+). It explained why with simple, easy-to-understand graphs of the time between each real frame and AI-generated frame, and that tells us what kind of use cases DLSS 3 is going to be best at (60fps and up).
Like, why so negative? This video is great for expanding on a basic understanding of the technology, and without early access from Nvidia it's the best we can hope for; it covers many things that weren't mentioned in Digital Foundry's video.
And you're negative towards DLSS as well, saying it's a huge sacrifice in either latency or resolution, whilst ignoring the benefits: much smoother motion to drive monitors like 240Hz, 360Hz and 480Hz at 4K, or allowing higher-demand rendering like ray tracing to become smooth looking.
The tech isn't out yet; we don't know how good it is. But even if you take Digital Foundry's latency numbers at their worst, latency is the same as native when you have Reflex on and are rendering at 1/4 the resolution, or presumably a little higher when rendering at native res with frame generation and Reflex on. Is that really a massive deal? It's milliseconds more latency for a much, much smoother image to drive these new 240Hz+ monitors. If you only focus on the downside of a few ms higher latency it sounds terrible, but you're just ignoring the positives. Which doesn't really matter at the moment, since neither of us can make a good conclusion about the tech yet. But it seems you are instinctively negative and biased against it for whatever reason, a sentiment I've seen a lot due to the anger with Nvidia over pricing and whatnot.
Just give the tech a chance, stop hating on a video for not being an in-depth look at DLSS 3's specific implementation of frame interpolation before the tech is even out, and stop being so negative. 👍
Just wanted to say that I really appreciated the random Oblivion OST towards the end of the video, absolutely wonderful
Same thought. I had to check if anyone else noticed.
@@gsedej_MB same
The biggest difference is that dlss has access to color, normal, depth and motion vectors straight from the engine.
I don't see why DLSS wouldn't have access to the vertex data too
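A rough sketch of why engine motion vectors matter here: with a per-pixel motion field you can warp the previous frame directly instead of guessing motion from pixel similarity. The `motion` array below is a hypothetical input (DLSS 3's actual pipeline is not public); this is just generic motion-compensated warping.

```python
import numpy as np
import cv2

def warp_with_motion_vectors(prev_frame: np.ndarray,
                             motion: np.ndarray,
                             t: float = 0.5) -> np.ndarray:
    """Warp prev_frame forward by a fraction t of a per-pixel motion field.

    motion[..., 0] / motion[..., 1] hold each pixel's x / y displacement
    (in pixels) between the previous and next frame.
    """
    h, w = prev_frame.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # For each output pixel, sample the previous frame at the position the
    # content came from (backward warping avoids leaving holes).
    map_x = (grid_x - t * motion[..., 0]).astype(np.float32)
    map_y = (grid_y - t * motion[..., 1]).astype(np.float32)
    return cv2.remap(prev_frame, map_x, map_y, cv2.INTER_LINEAR)
```

With exact vectors the hard part shifts from estimating motion to filling disoccluded regions, which is where the remaining artifacts would live.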
I really appreciate the amount of effort that has gone into this video. Straightforward and easy to understand, even for someone who isn't very tech savvy. Thanks for the all the hard work, very enjoyable!
Impressive stuff tho, I still remember when fading between frames to increase framerate was the state of the art
My old copy of Sony Vegas sure likes to remind me that resampling framerates was the future, when it makes even my hard cuts out of black blurry.
I've been pretty surprised that MPV seems to do that, while the Smooth Motion option in MPC-HC (video players) doesn't seem to do anything at all.
I can't believe that every single player in existence shouldn't be able to do better real time interpolation on PC and a 5600X and 3060Ti than my 10yo TV on some potato chip.
RIFE 4.0 via Flowframes is great and shockingly fast, compared to my previous experience 2 years ago with DAIN 0.36 and a GTX 960, but I'd still love to know about something that replicates that standard TV feature.
I'm pretty sure the motion vectors in use will make a massive difference. The motion vectors are 3D, so it knows whether things are moving towards or away from you as well.
I just wish games would also use the "time warp" style fps boost that VR uses. With VR you have the drivers doing stuff with the depth buffers, distorting the previous frame, guided by the depth buffer, using your headset's input.
So it allows you to move your head in a scene while the action in the scene isn't newly rendered yet, for low head-movement input latency?
I'd never heard of this, since I don't own a VR headset, but that's how I understand it.
@@Rainquack yes! I encourage you to look it up on youtube. There are videos demonstrating it. The idea is to never let your headset's movement suffer from any framerate changes. When a frame isn't ready in time, it takes the previous frame and warps it according to the geometry of the scene (stored in the depth buffer), so you can always move around freely in the space in the full framerate, and if things are chugging along you will see items and characters and stuff get choppy, but your own movement in the space is always smooth. It's brilliant!
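A toy version of the rotation-only part of the timewarp described above: re-project the last frame under the headset's newest orientation using the pure-rotation homography K·R·K⁻¹. Real implementations also use the depth buffer for positional correction; this sketch covers rotation only, and `K` / `delta_R` are assumed inputs from the tracking system.

```python
import numpy as np
import cv2

def rotational_timewarp(last_frame: np.ndarray,
                        K: np.ndarray,
                        delta_R: np.ndarray) -> np.ndarray:
    """Warp last_frame by the head-rotation delta_R (3x3 rotation matrix).

    K is the 3x3 camera intrinsics matrix of the rendered view. For a pure
    rotation, the image-space transform is the homography K @ R @ inv(K).
    """
    H = K @ delta_R @ np.linalg.inv(K)
    h, w = last_frame.shape[:2]
    return cv2.warpPerspective(last_frame, H.astype(np.float32), (w, h))
```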
One thing to keep in mind is that rife can see all the frames while nvidia can only see two, the present frame and the next one.
Edit: IM WRONG rife only uses two frames despite having access to all of them. Thanks Quaterions
@@dan_loeb not only Nvidia, others upscale too. It's not a special thing for Nvidia.
Unfortunately RIFE only uses two frames regardless of the fact it has access to all of them...
@@baka2907 What does this have to do with anything? This is not upscaling.
@@baka2907 ???
RIFE does not use more than 2 frames.
Any time I notice music from Oblivion in the background of a video I have to go back and watch parts again because the music is so good it's distracting.
incredible work with the explanations and demos! The filmed video is literally perfect
0:00 You may not like it, but this is what the peak male form looks like.
In my view this is a very beautiful, useful and important technology for filmmaking. With this technology, special visual effects like the Matrix effect or time-slice (which is made using many cameras), where time slows down or appears to stand still, can be accomplished very simply and quickly.
In any case, in this video you have done deep research, clear observation, insightful analysis and a very beautiful presentation.
You released the video just in time. Was rewatching your older ones
I would love to see a dlss 3 example that shows the input video on the left, the "upscaled" result in the middle, and JUST the newly generated frames on the right. Would make for a really neat comparison.
Wow, those morphing videos created by RIFE remind me of some rather spiritual experiences
?
@@HaxxorElite the RIFE merging looks really similar to an effect one can get while using psychedelics
@@logisticallogic8673 It has no connection
@@HaxxorElite ? I'm not quite sure what you mean, I'm only saying it looks similar to things I've personally seen before.
Do you have autism@@HaxxorElite
2:33 this effect could be used purposefully to make some amazing visual stylisation
DLSS 3 will be HUGE for older games that have physics tied to fps. A great example is Ocarina of Time. Trying to play at 20fps gives me a headache sometimes, but I power through it since it's one of my favorite games ever. If DLSS can properly "upscale" it to 60fps on Project64, I'd buy a 40 series card alone for that.
I almost didn't click on this video, because the title sounded boring, but I am very glad that I changed my mind. This video is even better than a lot of your recent ones.
I think this is the most I have ever critically analyzed and learned from a clip of a half naked man
Other than maybe that unlisted 4kliksphillip video from a while back
What looks REALLY good is already-high-fps footage upscaled even higher (say 240fps to 2x or 3x that), then RESAMPLED back down to 60fps with Sony Vegas's smart resample. It looks insanely smooth. Good video btw.
Would love to see this as well
Long time SVP user and RIFE user as of this year here. I would like to report that chain link fences are the bane of all things frame interpolation, and you should use it as a worst case scenario instead of grass!
I see what you tried to do here, but all of these technologies only give the AI images without motion vectors, whereas DLSS 3 has all of the motion vectors in the engine to work with in addition to the visual data. Early comparisons between existing techs show that DLSS is far more advanced than off-the-shelf interpolation AI programs. However, what you said about it looking smoother without feeling smoother is probably spot on. It will be interesting to see how it actually feels versus just watching DLSS 3 footage.
When using Rife with Flowframes, it does have an option to detect scene changes, so it will not interpolate between those 2 frames.
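A minimal sketch of the kind of scene-change check mentioned above, assuming a simple mean-absolute-difference heuristic (Flowframes' actual detector may work differently): if two frames differ too much, output a duplicate instead of an interpolated frame, so the AI never morphs across a hard cut.

```python
import numpy as np

def is_scene_change(a: np.ndarray, b: np.ndarray, threshold: float = 30.0) -> bool:
    """True if the mean per-pixel difference suggests a hard cut."""
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return float(diff.mean()) > threshold

def middle_frame(a, b, interpolate):
    # Across a cut, repeating frame a looks far better than a morphed a->b blend.
    return a.copy() if is_scene_change(a, b) else interpolate(a, b)
```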
Frame upscaling and the artifacts that come along with it look really similar to asynchronous space warp (frame interpolation for vr headsets)
This is not upscaling though, it's interpolation
@@theRPGmaster you could argue that its temporal upscaling 🤔
@@AngryApple You could argue it's temporal upscaling when plants grow
@@AngryApple no you cant because then it would be called temporal upscaling
Wait... is that Uncle Al? Oh god, he's broken into our physical dimension!
Aside from that, I did find this whole experiment/demonstration super fascinating. I've always been mesmerized with some plugins and software that can take 60fps and attempt to convert them into slow motion. The weird bending and smeary distortion can look really neat.
Don't undersell yourself - you are always quite mesmerizing to watch ;)
Doubling the replay speed to increase fps was the smartest thing I ever heard 😮
Cool video! Thanks for making it. For the 30->60fps examples, could you show the real skipped frame and the interpolated frame side by side or even the difference between them? I feel like that might be really cool to look at. Cheers.
I think it could be interesting to just show the 60 AI frames for the 120fps example, to judge what it is doing.
Seeing my own animation in your video caught me off guard. Very nice
your videos are dope. been binging everything since I paused watching you a few yrs ago, good stuff.
I enjoyed this video thank you for making it
Thank you for your research! I've really enjoyed this type of content from you over the last few years!
Ray tracing and DLSS are the technical computing miracle technologies of my time, and I now understand what a lot of older gamers talk about regarding what I take for granted. It's such an exciting time.
Great video. I've been waiting for real-time frame interpolation tech in games for years; it's very exciting to see it finally coming to life. Plus, the wise mystical music always gets me in a wise, mystical mood.
Idea: the artifacting in the 60 to 120 FPS conversion becomes fast enough to trigger a kind of survival mechanism in the human brain, related to danger response and reflex. Like if you see something unexpectedly move in the corner of your vision, that may be a danger you need to react to, so your brain distributes more resources to paying attention to it, dividing your attention between the flickering artifacts and whatever you're trying to concentrate on.
I'm glad you sorted out the city packs for flight sim! Sorry, you were probably flooded with comments on the video on that...
Saving this to my "watch later" pile as I'm going to sleep. 😂
see you in the morning my dude
It would be cool if there was an option that only activated the fps upscaling when your fps drops below a certain point.
For example, you would normally play at 60 fps, but when the game dropped to 40 it would generate new frames to hit 60 fps again.
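A minimal sketch of that idea, with `render_frame` / `interpolate` / `present` as hypothetical stand-ins for engine hooks (no real game loop works exactly like this). It also exposes the latency cost: the in-between frame can only be built after the *next* real frame already exists.

```python
import time

def frame_loop(render_frame, interpolate, present, target_fps: float = 60.0):
    """Present real frames as-is; when the renderer misses the target,
    slip one generated midpoint frame in between the last two real ones."""
    target_dt = 1.0 / target_fps
    prev = render_frame()
    prev_t = time.perf_counter()
    while True:
        cur = render_frame()
        cur_t = time.perf_counter()
        if cur_t - prev_t > 1.5 * target_dt:
            # Frame rate dipped (e.g. 60 -> 40): fill the gap with a fake frame.
            present(interpolate(prev, cur))
        present(cur)
        prev, prev_t = cur, cur_t
```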
that intro is gold.
Games tend to benefit from higher framerates but film and animation don't. Especially since double the framerate means double the render time which can make animation a particular hell to work with, and AI interpolation on animation... Tends to be extremely hit or miss. More isn't strictly better. I'm softer on this than most, I've seen interpolations that do actually look incredible and benefit the animation, but I also know the vast majority of it looks bad and changes the "feel" of the animation. Something that used to be snappy and punchy can become overly smooth. Something that was meticulously workshopped to look natural can look artificial. It very much is a right tool for the right job situation.
> Games tend to benefit from higher framerates but film and animation don't.
Hard disagree. It's choppy frame rates that look artificial. Ultra-high frame rates look natural because they're closer to life-like motion.
Woah, what is the name of the last song used in Conclusion chapter? It is so good, never heard it on another philip video before.
i also wanna know
Didn't expect to see David Bowie demonstrations today.
The Oblivion score in the background was doing my head in trying to work out if I had another tab open with someone streaming gameplay lol.
8:25 - It’s impressive technology, but I feel like those camera “jumps” would drive me insane after a while. It’s similar to the later examples where you can’t exactly look for the issues, but you can FEEL their presence.
Yea, the 5->60 CSGO scene broke my brain. I have never in my life experienced motion sickness, not even when I bought my VR set and instantly started with locomotion movement. But that scene broke me, jesus.
It's going to be quite interesting, for sure. With post-processing you can't really test what would happen with a game: 5FPS decimated from a 60FPS video is, as you presented it, an even decimation of the frame rate and thus of the frame times. But with a game, as I'm sure you know, you have variable frame times. I'm most interested in cases where you are getting a measurable "60fps" or "120fps" but your frame TIMES are wonky and uneven. THAT is where I'm sure DLSS 3 will shine.
EDIT: and yeah, DLSS 3 will also have the motion vectors and depth buffer, among other things, that can act as hints. This is going to be VERY VERY interesting to see.
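A tiny sketch of the frame-pacing point above: one generated frame per real pair lands halfway through each real interval, so the output spacing is only as even as the real frames were. A hitch in the real frames survives interpolation, just at half the size. The timestamps are made-up example numbers.

```python
def generated_timestamps(real_ts):
    """Midpoint timestamps for one generated frame per real frame pair."""
    return [(a + b) / 2 for a, b in zip(real_ts, real_ts[1:])]

real = [0.0, 16.7, 33.3, 80.0, 96.7]   # ms; note the 33.3 -> 80.0 hitch
print(generated_timestamps(real))       # [8.35, 25.0, 56.65, 88.35]
# The 56.65 midpoint still leaves ~23 ms gaps around the hitch, versus ~8 ms
# elsewhere - doubled frame rate, but the stutter is still visible.
```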
I'm fairly sure the gpu will utilise more data to interpolate muuuuuuuuch better. A gpu has a z-buffer (depth image), movement vectors for each pixel and more data, basically for free. This additional data can make the output almost perfect, even for the grass situation for example.
First 17 seconds of this video is the best news I've heard since the great war
Well there is a game realm where something like this is already implemented - ASW in VR games, and it gives very convincing results.
I really didn't understand DLSS as well as I thought till I discovered your channel, and since then I've learnt a lot. This technology is going to get pretty wild as time goes on.
Have you seen RTX Remix, phillip? Who am I kidding, of course you have. Feels like a technology you'll love. I can't wait to re-render so many old DX8 and DX9 games with new ray tracing capabilities; it'll be very neat to see them in a different light.
You made a great point at the end of the video. Past 60fps, traditionally I think the benefit of higher framerates has been the reduction in latency, not that it appears to move smoother. You can try looking over someone's shoulder as they play a video game and honestly anything above 50 looks reasonably smooth, but they can certainly feel that it's not perfect due to the latency. I'm really curious how DLSS 3 is going to feel on 30fps and 60fps games
It looks much nicer ngl.
Disagree. A CRT's motion resolution is equivalent to 1000fps; 60Hz on a sample-and-hold LCD or OLED screen looks like a smeary mess. In fact, resolution drops to 360 lines (equivalent to 360p) when panning the camera, which looks ultra terrible on modern 4K displays. Latency is only half the picture and is only really relevant for e-sports titles / twitchy games; Nvidia and Intel have both done research papers/tests that suggest past a certain point people don't care about latency. Also, this testing methodology is flawed, as DLSS 3 has access to the game's render pipeline and motion vectors, so all the artifacts he experiences in his CS:GO example are a limitation of RIFE and can't be compared 1:1 with DLSS.
@@theemulationportal what bargain bin OLED do you have that smears 60fps content?
@@theemulationportal You're right about the motion vectors, I expect DLSS to outperform the methodology shown in the video in terms of image quality.
My argument was more that the increase in framerate is only really meaningful as long as you can feel improvements in latency. If you can't feel improvements in latency, you probably aren't benefitting anymore from high FPS from a smoothness standpoint either. I suppose we could actually test this empirically by having a 240hz screen and injecting input lag until it has the same latency as if you were playing at 60fps, might be an interesting test to do.
@@mechanicalmonk2020 For 1000 pixels per second of motion tracked by the eye, the tracked object is perceived to have a 16.6 pixel wide symmetrical smear on a sample-and-hold 60Hz display. Being OLED doesn't change that. A theoretical instant pixel transition technology would not change that either.
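The arithmetic behind that claim, for anyone checking: on a sample-and-hold display, an eye-tracked object smears across (speed ÷ refresh rate) pixels, independent of pixel response time.

```python
def smear_width_px(speed_px_per_s: float, refresh_hz: float) -> float:
    """Perceived eye-tracking smear on a sample-and-hold display."""
    return speed_px_per_s / refresh_hz

print(smear_width_px(1000, 60))    # ~16.7 px, matching the figure above
print(smear_width_px(1000, 240))   # ~4.2 px - why higher Hz still helps motion clarity
```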
OOOOH, the bugs you mention about framerate reminded me of an old bug: when you ran the oldest Cossacks game on a 64-bit architecture, it couldn't help but speed the game up so much it was unplayable, like running a speedhack on the very fast setting. It was meant for 32-bit architecture. There were also some gameplay differences between different screen resolutions, iirc.
6:04 that "1" in "15 FPS" looks sus
artifacts do not worry me. what worries me is the input latency
5:01 This came out really nice with the Vsauce-like music
I still think the optimal solution in a game for this sort of tech would be to dynamically raise the "pure" framerate based on the frame-to-frame distance, essentially demanding more real frames for the AI to use as reference based on the current speed of the camera. After animating for many years, I can tell you that you can make things appear very smooth even at a low fps, assuming the image isn't changing a ton frame to frame.
excellent video as always, always a pleasure
love your work!
Loool about the keeping the money for camera comment, imagine in 20 years when cryptography has become so powerful we can run quantum apps on old pentium 3 CPUs that would be hilarious. I always thought the race to better specs was a linear scaling through centralized means of production for profits, rather than a smart scaling through data compression and restructuring.
I certainly trust that DLSS3 will do a way better job with the grass than RIFE can. I imagine having access to actual depth buffer + motion vectors would fix 90% of everything that RIFE got wrong in that example.
Two things I would say:
The low frame rate interpolation shimmering could be used for some really good music videos.
Hypothetically, for the corner warping: if the area it takes up is consistent enough, would rendering the image at a slightly higher resolution and basically cropping to the desired resolution work?
The 5 real frames example gave me a headache. Really impressed with 24 though.
the oblivion ost just made this video super
Most of the artifacts in this video are the result of naively estimating motion by finding the nearest patch of similar looking pixels. This is why high frequency detail like grass, rapidly moving objects, or repeating textures look the worst.
DLSS 3 should have access to motion vectors for both frames, so if done properly, most of these artifacts should be avoidable. The only hard part is generating detail that is occluded in input frames.
We have the technology to do this convincingly, but I doubt it is optimized enough to run in real time yet, so it looks like DLSS3 is just smudging surrounding colors to fill the gap. Maybe that’s convincing enough in practice.
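A toy version of the naive patch search described above, using a sum-of-absolute-differences block match. Real interpolators are far more sophisticated, but the failure mode is the same: with repeating textures (grass, chain-link fences) many candidate patches score almost identically, and a wrong match becomes a visible artifact.

```python
import numpy as np

def match_block(prev: np.ndarray, cur: np.ndarray,
                y: int, x: int, block: int = 8, radius: int = 7):
    """Find the (dy, dx) offset in cur best matching the block at (y, x) in prev."""
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + block <= cur.shape[0] and xx + block <= cur.shape[1]:
                cand = cur[yy:yy + block, xx:xx + block].astype(np.int32)
                cost = np.abs(cand - ref).sum()   # sum of absolute differences
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best
```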
Very interesting test and comparison! But I wonder: does DLSS 3 create its newly interpolated frames by interpolating between grainy DLSS-upscaled frames, or is this new frame interpolation an entirely separate tech which doesn't involve any upscaling at all? At the top of the video you mentioned that it doesn't allow for higher resolutions than before, so I assume it works like DLSS 2 plus interpolated extra frames. I am looking forward to a possible future video where you test it out to see how it looks and behaves in detail! Big love from Europe!
sneaky Oblivion music :)
I'm surprised that you, as an avid VR fan, made no comparison to the interpolation technology used in headsets! This kind of "Something is wrong, but you can't exactly tell what" happens ALL the time when you use interpolation to smooth out the framerates in VR. Things seem to "hang" on the edges of the screen sometimes and warp out of proportion. It is why I prefer to have a choppier framerate, as this kind of "wrong" feeling disappears, though I presume people who quickly get motion sick cannot enjoy the same luxury.
Specifically, I'm speaking of "Asynchronous Spacewarp", which adds frames when you move your head, to interpolate where you're about to look.
In the 5 -> 60 flight sim, sure there was a lot of funny business going on over much of the screen, but there was also a lot of surprisingly smooth motion on the more distant objects. Definitely not how I'd want to game, but still super impressive that it can take 5 frames and turn them into 60 with any degree of success even if it's not exactly a good result overall.
I am curious how this program does its thing without itself reducing performance. If the computer is not fast enough to produce 60 real frames, how is it fast enough to invent fake frames to make up the difference? I get that the fake frames aren't as complex to resolve as real ones, but they aren't free either, the computer has to do some calculating to produce them. Is there a point at which the computer is spending so much time inventing fake frames that the real frame rate starts to suffer?
The way you blocked your teammate who was trying to reload hurt more and more the more you see it and the culmination was the 50% speed. You could really see the distress in his eyes as he progressively understands what's happening to him
This may be great for VR where artifacts at the corner of your eyes may not be noticed
Is there currently anything better than rife? I need 60fps to 120fps conversion for editing purposes.
M2M_VFI
@@D-K-C How is it better? Is it faster or is it more accurate? Or both?
@@Neekzu both
I am distracted by Phil's exceedingly tight trousers
So Elden ring at 120 fps may be possible. That would be cool.
It's FROM software. They can't be arsed to make small fixes to their games let alone integrate dlss
I wonder if this would be adopted in VR, instead of the Asynchronous Space Warp
I find it really fascinating to see you break down all the new methods used in graphics processing, and your fascination with AI, content generation and upscaling. It's also a great source for learning about this stuff.
But I wouldn't ignore the fact that AI has a terrible tendency to ruin artistic intent, both by warping visual art and by ruining classical animation. So AI upscaling isn't a silver bullet, and sometimes when you upscale something, there's the aspect to consider that the product may be worse than the original.
Lower framerates and resolutions aren't always something that should be considered undesirable in all cases. Though this is mainly applicable to artistic works / games & hand animation.
I've seen countless examples of people using AI framerate upscaling on traditional animation and publishing the results as "superior" to the originals, ignoring the fact that the artifacts / smoothing contradict both the artistic intent of the creators and the core principles of the craft. Which makes me sad.
This is not something I expect, or even think, you should cover. But I still think the question of IF we should use AI upscaling in the first place is as important, if not more important, than how good said upscaling is.
Obviously no disrespect to your fascination with the scene. I've been watching you for years, and there are a lot of fascinating projects that stem from the field of AI generation.
@@2kliksphilip Hey, I don't want to start a discussion over this. But as a game developer myself, I really value artistic intent, and in artistic projects AI upscaling is only going to act as a filter that can distort pieces made with artistic intent. This is a general consensus within a large majority of artistic communities, not because of any sort of blanket hate. Thinking of it as that is simply oversimplifying a complex issue.
Yeah, a large amount of AI-modified content is guaranteed, and in most cases artists have control over how our art is displayed. But if everything is filtered through a layer of AI upscaling, you are taking the artistic power away from those whose passion it is to make these games in the first place.
Then again, AI upscaling for games can still be a great tool, especially for more realistic games like the ones you've displayed as examples, and for people using weaker computers. For games like these, it can be awesome.
But all I'm saying is that it's not just a straight-up improvement in every case; it's more a very useful tool in certain cases, rather than a straight-up upgrade to anything it's applied to.
@@PhilipLL Did anyone ever say it was "a straight up improvement"? Far as I can tell, that's just a strawman. Nobody who's eager about frame interpolation and/or image/video upscaling is saying it'll improve anything you put on it - quite the contrary, we're looking at the places it makes mistakes in and waiting to see how they'll be resolved. And as for games, would you rather your player "enjoy" your "artistic intent" via 5, 10, 15 fps?
Would you rather an AI upscale a rendered frame from your game, or someone either not play at all because of the low framerates or have to play at a lower resolution where your art is completely destroyed? Serious questions.. personally, as both a dev and a player, I'd much, MUCH rather have this sort of technology continue to evolve and am eager to see applications of it both in the now and in the future. Without this sort of stuff I wouldn't be able to play, say, Gmod in VR with my extremely underpowered MX110 and i5, or Arma 3 at native res.
It truly depends on how you look at "artist intent", especially when you use specific AIs for specific tasks.
As an example: my video interpolation AI TVP is able to understand the 12 principles of animation, and although it's in beta, it performs pretty well!
RIFE is a good test for live-action/real-life scenarios, but doesn't work so well for animation.
The importance of motion vectors cannot be overstated.
Getting these extra frames from DLSS is like buying a vowel in Wheel of Fortune.
Why are your videos so good?
I just hope that some form of it can be modded in or maybe even down the line officially supported on older cards, DLSS2 is great at what it does but fully raytraced cyberpunk still makes my gpu catch fire a bit even on Balanced
Even the 15 real FPS upscaled looked fairly impressive to me. I was thinking about how I played like 80 hours of Oblivion on my crap old PC at 15FPS, and then the Oblivion music kicked in around 9:30 and took me right back there 🕑🕒🕓
Something interesting is going to be minimum frame times: what if there are 10 generated frames between some of the "true" frames, right after stretches with only 1 frame between "true" frames? Something like a new form of stuttering...
I just want to say i appreciated the music from Oblivion which kicked in at about 9:22.
EDITED: TIMESTAMP IN THIS COMMENT WAS INCORRECT
I had to scroll this far to find the first comment mentioning this?! gg Gabriel (and it's 9:22)
@@stianhoiland Thanks for the correction. Have a nice day. :)
4:32 They tried to recover images from dreams using this technology. This is the best they could do.
2 and a half P. At 2 and a half FPS.
My Samsung smart TV already upscales frames; it's called the sharpness setting. Really, it's good as long as you don't go overboard with it (mine is set to 70/100). Especially during circular outer pans of the camera, that's when you notice all the gaps in spacetime it can't possibly fill :D
this video reminds me of how much i hated using twixtor for editing haha, could never get the extreme slow motion looking right and now I find out why
I doubt DLSS 3's artifacts will look anything like that; it works very differently.
I wonder if devs will be able to use dlss3 on specific elements instead of the whole screen. For example, games that have lower FPS things in the distance like particle effects or NPCs. Interpolation on just those distant elements could really help sell the scene.
Seems like first person is the true Achilles' heel of frame interpolation. Explains why Nvidia used a regular third-person drive-through instead of panning the camera around an interior like you're losing your mind, where you could actually see some artifacts.
I feel like taking LSD turns your brain's DLSS into Rife 5->60fps
Fricken great video!
2:40 that's the effect I got when playing Half-Life: Alyx on my old PC.
That Oblivion music tho
Dr. Eggman has shaved his moustache and grown hair.
my curiosity is satisfied
such an amazing video. holy shit..
7:57 Thank you! At least Avatar is getting an adaptive refresh rate, where action scenes will have more than 24 fps (which every scene with fast-ish movement should have, imo).