Kind of... but i wounder if it would work on security camarea photage. Someone should try I seen it work on old film video from late 19 centary, and that looks pretty good.
kinda? It makes the thing look better to our eyes in the grand scheme of things but actually it isn't creating what isn't there, it's just approximating (yes, with approximation you can actually pull out details but you can't take anything for sure as it's just an algorythm that tried to guess what's there and changing the algorythm by a bit would even result in different details)
These deep learning upscaling algorithms are essentially an AI artist's interpretation of what the frame would look like at a higher resolution. Upscaling has an easy training advantage for AI--Nvidia can train them on hundreds of hours of super high resolution footage, so it's like an artist who has only ever watched footage of that exact thing.
The dream has always been Ray Tracing. DLSS is really cool, but it is the second tier feature under support for DXR and Vulkan-RT. Those are more important by far.
I read this comment before watching the video and halfway through it I forgot you had said "next video" and I was wondering when he'd start manually counting 🤦♂️
I remember seeing this video when it came out and thinking "wow, this is the future! Now GPUs will have a much longer life cycle since older cards will be able to keep up thanks to upscaling. It sacrifices the image quality a bit in exchange for not having to upgrade so urgently". Its almost 2025 now and upscaling has become a crutch for optimization and most games are made with it in mind, with certain titles struggling to run natively even with the most powerful cards... This is so depressing to watch in retrospect.
Yeah, this video was my introduction to what DLSS is and how it works. I remember thinking "This is incredible! It's like magic!" and I thought this was the future to achieving 4K with really high fps and medium-tier cards. And nowadays there are some games that, even with a 4090, can't even achieve 4K 60 fps without DLSS and/or frame generation. DLSS has become a necesity to cover up bad performance in games, and that is sad af
Love when NVIDIA does this. There have been so many times over my 20+ years using NVIDIA that they add a new feature or make something new backwards compatible with older cars etc basically yeah downloading more fps like a free upgrade.
@@PanzerVII-df8hg When people say Moore's Law is dead or alive, these people don't seem to understand Moore's Law. It has absolutely nothing to do with what's available to consumers. It's about what is possible *in a vacuum*. Nvidia/AMD/Intel ABSOLUTELY CAN make 50% improvements every two years at a lower price. They DON'T because that's not a good business model and because there are outside factors that affect price, like parts shortages and greedy businesses.
@@PanzerVII-df8hg and transistors don't perfectly equate to performance, $200 being 50% faster in 2years time than $400 now? Ha. Half the price yet 1.5x the performance? Would that be closer to a 3x? 2080 has 18.9b transistors 3080 has 28.3b transistors which is 1.5x for 87.5% the price which equates to 1.7x the transistors price for price which fits into the accurate 1.8x (moores law was close but not perfect) For reference the 1080 has 7.2b which means the 2080 has 2.6x the transistors compared to a 1080! This literally takes less than 3minutes to fact check by a calculator and google to half fact check what you're saying 980 only has 5.8b which means a 1080 has 1.25x the transistors, its barely anything so it should be like 85%-90% the performance right? Why should anyone care about pascal when maxwell exists? That's an embarrassing improvement... Oh the 20series is here, pascal is dead, 4 years leap right there! Pascal wasn't even a 1year leap lol The 20series was the biggest transistor jump since the 900series
MSAA doesn't render at higher resolution and downscale, that's SSAA. MSAA does shading at normal resolution, it just takes more rasterization samples at geometry edges.
@@2kliksphilip imagine a switch pro with tensor cores... it could still run the same game as the base switch, but it would be able to output 4k docked if it so wished.
@@2kliksphilip The applications for competitive gaming where high frame times are important are potentially huge. Being able to play a game at 240 FPS and non-garbage visuals would be crazy.
Remember that TH-cam Compression algorithms are killing the visuals of the video itself, so in reality the image quality is ever so slightly better than presented here, especially when it comes to "1440p" and "2160p" parts of the video.
@@2kliksphilip btw youtube has a different bitrate for each video quality loaded into youtube so upscaled to 4k 1080p being watched in 1080p would look better than basic 1080p watched in 1080p
@@2kliksphilip Imagine if TH-cam had great bitrates like Vimeo, which even offers viewers a download button do get the ~17.000kBit/s original files, if you want. But there are just too many users here. I see why they're dropping bitrates more and more every year, to save their servers. When I want to feel sad, I redownload videos I saved in like 2013 (using the same software at highest settings) and compare the bitrates of the current copy to the old one (They used to constantly wreck old videos more and more, year after year and still do. This has improved a bit with VP9 though.) or compare the Vimeo and YT upload (if applicable): e.g. at 1080p in June 2019 "Don't Hug Me I'm Scared 5" had a bitrate of ~4500kBit/s AVC + 256kBit/s 48kHz AAC on Vimeo vs. ~700kBit/s AVC + 125kBit/s 44.1kHz AAC on YT. It's a day and night difference. I'm not sure if someone made a video about this sometime. Especially that they data degrade old videos that maybe don't have any other copy as time progresses.
and the only reason the compression of youtube loses quality its cause its not lossless they priority is to keep the bitrate at a same amount so internet connection with a specific speed can watch without issues like at 2/5mb connection you can watch 720p just fine on all videos, if youtube did a lossless compression we would have no quality loss and still smaller files, but may not be so stable and different videos would may need more/less internet.
While it wasn't intended for lower resolution, I feel like this could provide some benefit a few years from now when current graphics cards have aged, to squeeze a little bit extra performance out of them if you don't want to or can't upgrade yet at that point.
No, that’s the literally the point of DLSS. It was intended for lower resolution as it’s best suited in upscaling them. People tend to think DLSS and FSR are meant for upscaling (just upscaling from native) specifically but downscaling is the biggest reason it even exist (taking a lower resolution and upscaling it).
Unfortunately, this might be seen as a negative thing for marketing for the same reason that planned obsolescence is seen as a positive thing in marketing. It's great for the consumer because that means having more time to wait before upgrading to the latest tech, but that also means that the company selling it won't sell as many as they would if the feature didn't exist. It seems entirely possible that they may try to phase DLSS out over time or change what it can and can't do, or change it's compatibility with older cards in the future when the current ones are older. If they don't, they would be making less sales, which means less profits. It's irritating that this could become a factor, but it seems very likely to me based on the widespread nature of planned obsolescence in cars, appliances, phones, etc.
well if they're not going to make recycling electronics more accessible or even viable then it's the ethical thing to do. graphics cards are as common as vacuum cleaners, should I have to buy a new one every 5 years? @@Bowlfrog44
Their all too busy fawning over ray tracing and its batshit crazy implementation. Like how puddles can somehow have a cleaner reflection than a bathroom mirror and ALL WINDOWS ARE ONE WAY GLASS
This tech is the future - This is the only way we're gonna power next gen VR experiences without insane unavailable graphics technology. I may not like Nvidia, but holy hot damn do I respect them for this tech. It's a monumental achievement.
*Me having played VR games like PavlovVR and VRChat on a HD7950, RX 480 and RX 5500XT with my Acer WMR headset in 1440p90Hz.* What do you mean you need powerful graphics cards?
I remember seeings dlss come out thinking "Wow this is incredible". Now I hate it because companies just use it as an excuse to poorly optimise their game. Its no longer an optional feature and is required on a lot of new titles unless you have a jacked gpu.
Yeah, it seems like a great match. If needed they can disable the tensor cores when handheld to save battery power - the small screen size wouldn't benefit from DLSS as much as a giant monitor.
8:22 I feel like I'm watching Control run through a 1990s Parasite Eve/Silent Hill graphics filter. Which is, frankly, kinda awesome. DLSS apparently has a time machine built in.
The problem is that you can't just render the game at 240p, then stream the video and then upscale it. DLSS needs some extra information from the game, like motion vectors for example. In it's current implementation you can't "outsource" DLSS from the game.
@@NawarNrr I'm not sure if it would even work. Normally DLSS receives a full image every frame, if the video was encoded, it only contains partial images. Basically every few seconds the video has a full image and then the following frames only contain the parts of the image that have changed since the last frame; I'm not sure if that would work with DLSS. Just taking this video as an example, it's 10:30 long, the 240p30fps option of this video is 20.27MB in size. This means an average of 1.1KB per frame. I don't know how large the motion vector data is per frame, but I think it could be big enough to have an impact on the bandwidth
Funny to look back at this video, AI super sampling looked like a nice gimmick, but the devs literally use the damn thing as a crutch, a current game can't run at 1080p anymore
Unironically, I could see this as an aesthetic to use ina a game with an atmosphere that suits it. It has this weird, dreamy, fuzzy feeling to it that's just a little different from the standard type of blurry.
@Kenn Honson X Res is pretty confusing. 4K is literally 4 x (1920x1980px) = 8 294 400 = (3840x2160)p But 4K technically means 4000 horizontal resolution which special cinemas have (normal is 2K) So PC-4K is neither 4K, nor called 4X as it should be. But manufacturers are making money so who cares
Looking back at this now, I thought it was so revolutionary at the time. But it's become so commonplace that huge companies are using it along with frame gen as a crutch to get barely acceptable performance when they should be optimizing properly instead. I genuinely wish upscaling and frame generation were never invented.
That and pre-rendering reflection textures using spare resources for ray tracing. Would enable doing them on weaker hw, even lacking cores meant for it, when lightly loaded then use traditional means to load them on demand.
@Darth Wheezius I don't know why I suddenly remembered to wonder this. I wonder why files in a shader cache are not compressed by the GPU driver. Try turning on the compression flag for the folder and see the results. Decent space savings and you can't tell a difference in load time.
DLSS (and ai upscaling as a whole) is absolutely fascinating, and I love how many videos you make about it. more people should be bigging this stuff up, it's insanely cool
the problem is it only works in select titles so far, way less than 0,1% of games. If it worked for all games with every gpu, it would be a killer feature. Itll take a couple of years and hopefully it will get the neccessary compatibility upgrades
I just bought a 3060 laptop (max-Q) and honestly, playing Control and Cyberpunk with DLSS is really impressive. I manage 60fps with ultra settings(1080p), that's insane.Thanks for the video
@@MrMarkTheBeastMemes99 why? If DLSS is a 99.9/100 copy of if it wasn't a GT card then does it matter? Your basically paying for development of software, instead of hardware really. Though this evades multiple other facts like other settings that DLSS doesn't account for, the basis being they could software that shit too eventually.
@@MrMarkTheBeastMemes99 Who would that hurt?? That would actually benefit the budget gaming community. This would actually be the right way to go with it, make cheap gpus and use dlss to improve the performance so people can buy gpus for much less. I doubt it'd be releasing anytime soon if they were to do a thing since dlss isn't universal for most games yet but when the time comes I believe this would be a good thing to do
@@TripleLayerLemonCake DLSS requires special cores called Tensor Cores, that are only available in RTX cards. Tensor cores are used in training AIs from self driving cars to chatbots.
Worth pointing out that the text (and 2D HUD elements in general) aren't being scaled, those are always being rendered at the screen's resolution _after_ the 3D render has been scaled (assuming the game was coded by developers with a clue). Just in case some people thought DLSS included some CSI-level "zoom in and read a full page of text that's just three pixels wide" voodoo.
@@olivereisenberger7215 dlss doesnt need optimisation tho, it works preset based( each preset determines what variation of the motion vectors to use) the only reason why ui would artifact is with framegen because framegen doesn't apply on the 3d view but on the entire frame whereas when dlss is enabled, it first upscales the 3d view output and the ui is rendered afterwards so ui is still at native res
5:10 I realize now why I was SO CONFUSED when DLSS came out. I was following it when it was being developed as super-AA. When the 2000 series launched and there was all this talk of boosted frame rates at high resolution I had no idea what they were talking about
"To fill you in - DLSS is about rendering a game at a lower resolution, then using clever upscaling trickery to try and return it to the original resolution again" I see what you did there
Innovation 2007: We built the world's first touchscreen smartphone with cellualr and Wifi in your pocket Innovation 2023: You can play at 240p on an $800 graphics card
The funny thing is, I'm nearsighted, but also unequally between my right and left eyes! So the fact that the background was fuzzy and shimmery actually made my brain think it was higher resolution than it was! It's exactly how I see the world! Who could have imagined that ultra high graphics and ultra high DOF was actually less realistic!
Oh I'm the oppposite, long sighted but my left eye barely fucking works at all so often when I play games the image appears very flat and when I decide to out on my glasses it looks so much more 3d for some reason
I do find it interesting how we're moving from the days of solid, static resolutions to things like dynamic resolution-scaling and all sorts of interesting systems like that to ensure a high frame-rate
@@mechanicalmonk2020 true but personally I had missed out on most of xbox one/PS4 era games due to not having either console - and nowadays developers and the industry are talking way more about it and making a much bigger deal out of it as they've made more DSR-related advancements. as far as I remember, people didn't really talk about upscaling or resolution scaling much at all until things like DLSS and FSR arrived
Yeah, its not going to work like that. Once devs really latch onto the idea of upscaling covering all their performance issues, games are going to start running worse. Eventually, any gain you get from upscaling will be canceled out by poor optimization. Imagine having a 4090 and having to use upscaling to get decent frames. That day is coming.
@@niteowl17 I would really like to know more about this. What percentage of the world can't get better than that on youtube? Where? I'm in the US, with the cheapest option the biggest local internet provider has available, and I'm watching at 4k with no problems. Not flawless, but the hiccups aren't common. It costs more than it should, because of how much a monopoly comcast has, which should get fixed. Wow, only 59% of the global population are internet users? I was thinking maybe they should try applying something like DLSS to these videos to help with that problem, then realized that's basically exactly what lossy video compression does.
Since it was first released, I’ve been more excited about DLSS than any other current graphics advancement. The amount of performance headroom you get allows for more expensive effects actually being viable. Love it
@@Denomote This depends on use case. There already is automated resolution scaling that only lowers resolution when necessary on some games like Cyberpunk. On games that run generally ok but with less than targeted fps it's a quick +10fps for the quality dlss. There also are games where the game looks actually better with fsr or dlss enabled, for example Starfield is a lot more crisp with fsr 2.0 than completely without.
@@life_is_gr8 Because as Kliks said multiple times in the video, he is using it in a way that was completely unintended. It won't become a new standard or crutch because building games around technology like this is flat out stupid and would limit your userbase by something so arbitrary that it wouldn't make sense at all, but its definitely not "caca" when used properly imo.
Now that Dynamic Resolution Scaling and DLSS together exists, this Ultra Performance DLSS now has a actual use. It's the best DRS Floor ever now, as games can lock the output to 4k with DLSS, and Dynamically Scale the Internal Resolution to lock a Framerate, and more often than not (at least on PC) it will likely fall closer to the 1080p side rather than 720p. Imagine this being used on a Switch 2021, it would be revolutionary imho.
@ yeah COVID pushbacks are fun. Although we do legit know that a new SoC for Nintendo is in development thanks the to the guy who leaked literally all of NVIDIA's Ampere GPUs. Even have both the codename and model name (Dane/T239), and what the SoC is based on (Orin). So the system exists, just likely got pushed out of 2021 because of COVID and also production problems/qualitifcaitons on what Nintendo wants out of the SoC.
@@alejandro6070 FSR pales in comparison to DLSS especially at lower native resolutions and you know it. Although not knocking the Steam Deck for that, that should slay if it can use the open-source version of XeSS when it comes out and someone makes a version that works in any game like Lossless Scaling's NIS/FSR
This is why I bought a Acer Nitro 5 for $700. Comes with i5 9500h and rtx 2060 80W. Even tho it's not a refreshed mobile gpu, I can still add +220 on the clock and get 2000+ mhz on the clock. +1000 on memory. I can run Control 720p dlss to 1080p with ray tracing on and average 70+fps.
I can't wait for technologies like this to come to integrated graphics. Just imagine running AAA games at 360p on your mom's laptop and it actually looking good!
The only thing I fear is that game companies start being even lazier with optimizations because "you can just use dlss to fix it bro". This should be used to make even bigger and better games possible, not to allow for new games to be lazier optimizing.
It's technologies like DLSS that makes me feel like we're living in the future, not the faster and bigger graphics cards, which is just predictable. DLSS is exciting and magical.
Yep, it's the lowest standard going up that's amazing. I sold my 570 to a friend who had been using a 270x since its release and he was blown away. He now has double the core count, less power for cheaper than the 200s release price. Same goes for Intel's integrated graphics, it's waaaaay better compared to just a few years ago.
The machine learning stuff is amazing, check out 2 minute papers channels for cool stuff related to AI and graphics i really recommend it. And just wait until these deep learning things are applied to rendering 3d engines and even ray tracing which should yield huge performance increases, now instead of 9 extra pixels for the cost of 1, you could also get 9 extra polygons for the cost 1, or 9 extra rays traced for the cost of one. When its all said and done real hardware could go extinct thanks to AI, or just minimalist hardware to run the AI, sort of like how our brains operate, on like a couple of miliwatts? No 750w powersupply needed to render the universe in infinite rays traces in super high resolution, greater then the eye can even see. Maybe we should use this video card more often.
I'd love to see you do this video again with the latest version of DLSS dlls; they're significantly better at their upsampling rendering than what those games came out with originally.
Interestingly it is also a direction in which rendering in general will be heading in future, a sparse rendering of areas which need the detail spatially and temporally. We should see more reconstruction techniques aimed for temporal/framerates as well.
I just wanna thank you being as I am literally 6 months into my 1st gaming PC. I'm 40 year old man (FML) that has been gaming his entire life thank you for teaching me so much in such a quick video
@@2kliksphilip O absolutely! All my friends have been pressing me to get on the PC bandwagon and I've done a lot of research but your video just made it so easy. Looking forward to watching more of your stuff keep up the good work
Honestly, I know it isn't very likely, since it could radically shift the GPU market, but I really hope they make very mid-end, low-price GPU with this technology implemented, a TON of people would benefit of something like that happening. The opportunity for the PC gaming market to grow when they offer an affordable, quality, and convenient device to markets without much disposable income is insanely high, imagine places like LatinAmerica, the MiddleEast and North Africa, India and South East Asia, all seeing a dramatic rise in players because instead of needing to save up for 1 to 3 years (in many places it's a lot more!) to buy a decent gaming machine, they'd just need to do so for 3 to 6 months. Again, I don't think that'll happen, but it'd be really cool, gaming should be accessible to the less wealthy if it's at all possible.
I'm happy to inform you that such tech is in final testing atm. Tho it's AMD making it and it'll be for all cards, since they're included in base coding (DirectX i think). It just takes a few consumers that dare turn their back to greedy manufactures like Nvidia, to get such tech. Vote with your money next purchase ;)
@@caseyhall2320 Dude make that 3050 The 3060 is already on the level of 2070/2060 super+ 3050 would be a 1080p gaming beast and with DLSS2.0 it can even play at 1440p in modern titles. The future is bright for Nvidia card users. This is the first time we'll see a smooth $200~ 1440p machine.
Thank you Philip for being the seemingly the only one that shows the side of DLSS that is *actually* exciting - running games at low resolutions and letting AI do the rest
These results are incredible. I wish we get DLSS or similar technology in more budget friendly products. This would be at its best in lower end GPUs and anything used on laptops.
The upscaling costs more compute power than a 1000 n64s, its just not a big performance hit for modern hardware. edit: fxaa on the other hand vs MSAA on n64 would save alot of performance, considering you can't blur blurry af n64 textures XD
Think it was the same as fake borderless fullscreen (also the name of the software) I used to play MGSV:PP on an HD7450 set on high 1024x576 resolution then fullscreen borderless on 1080P can't remember much but its somewhat checkered
I didn't even know 8K displays were a thing yet. I remember reading a thing about meta materials maybe 15 years ago or so, and the potential to create incredible stuff with it, such as cameras capable of filming in 32K. It was some crazy resolution like you could take a non-zoomed photo from orbit and the picture would be so high def that if you zoomed in on the image, you could read a post card laying on a sidewalk. Like, a photo without using a telescopic lens. Some crazy shit about negative refraction, where it could make something appear to be floating above the material so detailed that it looks like it's actually there, not some semi-transparent hologram. Makes me wonder whatever happened to the meta material research. I was hoping to see the results by now. Then again, maybe it got bought up by the military, as one of the properties of meta material is the ability to bend light completely around an object and have the light resume the original course. In other words, invisibility cloak. They successfully made an object totally invisible to microwaves (the actual wave, not the cooking machine). Apparently, a display made of meta materials could create an image so high def, you can't distinguish it from reality. Probably where they got the idea in the movie, "Gamer." Anyway, technology is getting crazy. I nearly shit myself when real-time ray tracing was announced. Used to take me over 8 hours back in the day to render a 3D model in 800x600 resolution with ray tracing enabled. Like this Lego walker, which I think too 6 hours to render back when ATI was the dominant GPU maker (early 2000s): scontent-sea1-1.xx.fbcdn.net/v/t1.18169-9/150704_573756515970942_1040442011_n.jpg?_nc_cat=109&ccb=1-7&_nc_sid=cdbe9c&_nc_ohc=4Rz5uLHy7CcAX851lcy&_nc_ht=scontent-sea1-1.xx&oh=00_AfDIY_cXlizzw_l-JRYFf-z8wdhvAFXwlSvmmMUXZIDqOA&oe=637E5C2F What they're doing with A.I. is truly impressive. Check out Nvidia Remix, if you haven't. Utterly mind-blowing, as if real-time ray tracing wasn't mind-blowing enough.
8k displays have been around since about 2018 at least if not before then , it's just they were limited to the TV market not computer monitors , and they were/are mega expensive, 8k is usualy reserved for tv's that go beyond 65" such as 70-103 inch tv's where the pixel density (pixels per inch) really makes a difference from 4k. also smaller size displays make no real beneift from drastically higher resolutions either. there's a lost of return on gains involved. this is why most 20-22 inch monitors are never higher in res than 1080 p with 24's-27's being 1440p , you'll rarely see a 1440p monitor that is over 32 inches. most 40-60 inch displays are 4k , with many 65 inch tv's and beyond doing 4k as well since 8k is stil real expensive , you won't find any 8k tv's smaller than 55 inch though. however msot pc gamers wil use a monitor that is between 22-32 inches , rare are the ones taht use 40+ sized monitors , most pc gamers lookign for that size image just go with TV's because they are cheaper than monitors that big. like myself , (i'm gaming ona 49" 4k samsung tv) EDIT: i stand corrected , just did a quick search on amazon and walmart and target and best buy, the smallest 8k display i can find is a 65" TV , NO comptuer monitors yet but i'm sure one has to be out there , that said the cheapest 8k tv i could find is $1,500 dollars. add that price , the fact there's no actual media for it , and you'd have to own a $3500+ computer to push any games to 8k (and even then that computer would have to rely heavily on upscalers) ... i can see why it's been very slow to catch on. that all said the tv's i have looked at do all upscale images to 8k , , likely the best result would be from a 4k source such as a 4k blu ray player or 4k streamed media. i've seen dvd's upscaled by my 4k tv and they are hideous. but 1080p blu rays still look nice with the tv's upscaler. best image is of coruse , my 4k blu ray player or when i'm running a game native @4k.
This is definitely imperfect, but holy shit for something that is still basically in its infancy, this is absolutely insane. I can only imagine what this will be like in a few years.
I love DLSS. Before Fortnite added it, my game struggled to get to 60fps at 90% of 4k on medium settings, now everything is on maximum, game runs butter smooth and it looks amazing. It's pure magic, and I hope it gets added to as many games as possible.
Cyberpunk is the first game I'll play that has it and I'm stoked at the possibility of running 120hz on ultrawide 1440p stable. I bet the VR crowd is even more hyped for this tech...
If you're on this tech beat, I'd LOVE for someone to do a deep dive on how this DLSS system is trained. Is it specific to each game? Does it generalize? Could it work for live video? Great work.
DLSS is trained on a per game basis I believe (or at least Nvidia did this for first generation titles - see www.nvidia.com/en-gb/geforce/news/nvidia-dlss-your-questions-answered/). Essentially, this method creates a 'model' for each DLSS game that analyses various parts of the aesthetic, art style, gameplay feel, etc to essentially give the AI tools an 'intuition' on how the game environment would be most likely to perform/render so it can add in the extra information. It's not general enough for video yet (although the Nvidia toolkits for streaming and things like Optical Flow are promising) - video is pretty difficult to build a general model for as its not going to be as consistent as a game in terms of appearance. I'm not too familiar with what software is used for upscaling video - I understand most methods that aren't just straight scaling rely on tracking and replicating elements between frames so are fairly computationally intensive to do in real time, but if the quality of upscaling the likes of an Nvidia Shield is capable were possible on their GPUs too that would be brilliant.
DLSS 1.0 was trained on each game individually, while DLSS 2.0 is trained in general on various upscaling datasets. They certainly try for good representation among types of images, since they've decided to release it to a wider audience.
Nvidia released an article about it recently, quoting from their website: "During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images." and also "The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games." I also think (not 100% sure) Nvidia mentioned using their DGX A100 machines to train these models in-house at GTC May 2020.
CSGO Vulkan Multithread Optimized SLI Optimized running on Threadripper 3990WX RTX 3090 SLI RAM Disk, at 160p-480p DLSS at 36000fps on 360Hz monitor with PCIe direct keyboard and mouse for GHz polling and ns latency
Upscaling is a cancer. Some games now use it faking is in native resolution. Is rare to find a game sharp right now. Even for a gamer with a top tier PC.
Ah low FPS motionblur! At stable 60 motionblur kind of stops working as we all grown to love it..... If your rig handles 4k60, bump it up to 8k and motionblur starts working again. Love me those artificial nuclear chemtrails on every twist and turn.....
@@gisopolis77 Nah back when I had a Pentium 4 with 2GB ddr2 no GPU, in 2013, I could barely watch vids higher than 480p. Literally only upgraded the CPU to a core 2 quad and then 1080p streaming was lagless.
The irony is that most people who see this video will bee watching it on theire phone and once your past 180 I stoped beeing able to tell the diffrence
It's not irony. The cultural zeitgeist is slow because humanity would collapse if everyone caught up to everything (regardless of accuracy. Low risk is the way to go so -> slow and unintelligent)
"That's the setting you avoid on TH-cam because it sucks too much to use" *Me changing from my usual 144p to 240p so I can see the graphics settings* T_T
@@kinarast I actually understand the feeling quite well. I had to move in with my mother a while back, for the summer, and her wifi was literally running at an average of 200 to 300 kilobytes per second. Over the course of that summer, I got very used to 240p and 360p (if I was lucky enough to watch it in 360p, had to wait until no one else was using the wifi to get that luxury haha). So whenever I'd visit a friend and we'd watch anything at 480p or higher - usually 1080p to 4K - I'd feel really....weird, as if something was off. That, and I audibly exclaimed, like a child witnessing real life magic, "Woah! Holy shit! It's so high res!" Which my friends would then respond by looking at me as if I had lost my god damn mind, watching me going ape shit over what was practically black magic to me. To be fair, if someone said that to me, I'd be off-put and slightly worried too, lmao. But basically, to wrap this up, when I moved back in with my father and watched videos in 1080p, every day felt like I was in fairy land. As if I had come across something more amazing than a giant pile of gold. But that only lasted for a couple months and now I'm used to 1080p again.
I really appreciate it, how you keep the quality of your videos so high after all these years. Most youtubers fade away into mediocrity at some point, but you are still motivated to produce amazing videos. I really like that. Also thank you for including the songs in the description. :)
TH-cam 240p are not what youtubers uploaded 10-15 years ago -TH-cam has basicilly destroyed those videos with their algoritms -a 240p avi file can be upscaled to 1024p but youtubes 240 are like 32p movies
Honestly as unimpressed as I was with the original first pass DLSS implementation... I'm extremely impressed with DLSS 2.0. Honestly this, it's AMD counterpart (which will probably be more like DLSS 1.0, especially since it's software based if they attach it to their current line of GPUs) and future iterations will probably be *huge* for both console and PC gaming. I'm going to go ahead and guess that Nvidia will stay well-ahead of AMD due to their software+hardware implementations, but man, this is making me actually consider upgrading my monitor as I can use techs like these to run higher framerates without going completely blurred-out on a higher res monitor.
imagine GPU can now access DLSS in everything, programs, chrome, games or just youtube videos or watching 240p movies enhanced by DLSS. Less internet data to be used in near future.
The technology doesn't currently work in way in which that would be remotely plausible. The first most obvious issue is that it has to be trained for the content its working on. That work is being done via partnerships with game developers and Nvidia, on servers. Its unlikely Nvidia is going to work with every entity that produces something on the internet. If we get to the point in which it works dynamically on a local level, only streaming content is made and distributed in tiers. Largely everything else exists, and is distributed at a single quality level. DLSS is a thing that does what it does in specific market, to capture as much share of that market for a single entity.
@@blkspade23 "The technology doesn't currently work in way in which that would be remotely plausible" Eh, that's just not true. "he first most obvious issue is that it has to be trained for the content its working on." Then it is "remotely plausible", isn't it? You just need to supply the data for it. If only there were servers available with exabytes of videos with various content types to train NNs with.
@@blkspade23 Isn't DLSS 2.0 trained with all kinds of content and not just specific stuff. Also, Nvidia Shield TVs have an AI upscaling that seems linked to DLSS. I don't have one myself so I can't personally comment on the quality, but Linus made a video about it and it seems impressive
@@isodoubIet Not every game just works with DLSS. There is a collaboration between game devs and Nvidia. So just because tons of stuff exists doesn't resolve the issue. If far less work to implement something for a handful of AAA devs (you've made deals with) a year, than all the video out there. While my original comment was technically only accurate for DLSS 1.0, 2.0 still starts with the dev implementing tech Nvidia (and only Nvidia) provides into the render pipeline. Video processing is far easier than 3D rendering. Video upscaling isn't new, and as we've seen the result better when the input is better. The devices we own do it locally. Content shot on film has practically infinite resolution, but you have to work from the masters. With video games, the content is the master because its being made in real time (well textures are technically fixed), at what ever resolution you have the horse power for. To achieve the same kind of thing for video, without training, we would need a vector based video codec. The problem DLSS is solving is the performance hit associated with real time rendering of high resolution, interactive content. Better video compression tech is already the solution to streaming, the problem is unfortunately that data caps are largely non-sense cash grabs.
@@blkspade23 "Not every game just works with DLSS" Doesn't matter. DLSS is a specific implementation. You said the with the current technology this kind of thing is not "remotely" possible. But with the current state of the technology it absolutely is possible. It's a convnet, not string field theory. You grab the training data, of which there's oodles -- just find any random video at the resolution you want and downscale it, presto, and if you're doing this off youtube it's pretty much already done for you, and you run it through. There's no real technical issue here and Nvidia's engineers have made much more impressive advances in the past few years, particularly with e.g. reinforcement learning or GANs which are actually hard to train, unlike a boring encoder-decoder convolutional topology.
There's something fun about playing a game in the worst possible way - you have no idea how delighted I was when I found out that No Man's sky let's you play the game at a crispy 64x48p thank to being able to lower the resolution scale all the way down to 10%.
It's interesting how Nvidia originally touted raytracing as the killer feature of it's RTX cards, but DLSS has turned out to be the real killer feature. Another I'd like to see more about is how they're planning to use RTX IO - don't hear much about that but it looks like if you have super fast NVME PCIE 4.0 storage for your games then theoretically your low VRAM card might not matter as much because games can pretty much just stream assets from the drive.
could this be used to make games? like say an artist makes a low fidelity textured model of something, uses this to fill in the gaps, and then goes through and removes any artifacts. It could really speed up workflow. dunno much about the industry and the "pipeline" though so I might be talking out my ass
I don't think so. From my limited understanding of DLSS, it is basically an application of a neural network, which has to be trained for each game individually to produce desired results. If you created low textures to begin with, there wouldn't be the required information to train the network with, I believe. But again, my understanding is limited, so I might be wrong.
Not this DLSS, but a different neural network could (and already exists). You could even make a game without textures and it would "paint" it in style you tell it, check th-cam.com/video/u4HpryLU-VI/w-d-xo.html . Hardware is probably not fast enough today for real-time use in games, but in few years we'll be able to mess with this for sure (imagine modding old games with this).
@@qaulwart You are indeed wrong, or should I say not up to date. DLSS no longer needs to be trained for every game (since April or May if I'm not mistaken), it works out of the box with any title which is able to provide motion vector data (it still needs to be implemented by the developers, though), Unreal Engine already has a DLSS branch (available upon request for "real" projects) in which you just turn it on and it works. You can see it in action along with the RTXGI branch (and another branch with caustics) here: th-cam.com/video/ZefvmV1pdP8/w-d-xo.html
Nvidia has other models that they always present. Check Nvidia GameGAN One model named GauGAN creates better art from strokes of color One model I think in GameGAN finished Pacman by itself. I think this was used in their presentation about autonomous AI characters One model generated a 3D world without a game engine etc
There's various AI Upscale projects for old retro games that utilized pre-rendered images as backgrounds as stuff, like the classic Resident Evils or FF7. Since their BGs were rendered at mere 240p, upscaling them makes 'em look awful, thus leaving AI driven recreations the only sane option. And the results are pretty stunning
i remember playing pirated games at 800x600 or lower, hardly being able to see the difference. I was young and not yet spoiled by FHD+UHD graphics. Things are better when you don't know what you're missing.
@@OverbiteGames Wow i did not know satellite internet was still that bad i know 144p uses around 256kbps this may actually be worse than dali up internet from 1998.
@@Fastwalker27 No you need at least 1.5 mbps more if it its in 60fps for 720p according to a website i found i was wrong and 144p youtube only uses 100kbps.
@@belstar1128 TH-cam is different It compresses the video so that the bit rate is much lower , hence the famous "TH-cam compression" From experience : 144p requires only 40kbs 360p : 70 kbs 720p : 200 kbs ( about 300 if it's 60fps) I have no idea about 1080p , but it's probably 400~500 kbs.
@@Fastwalker27 I don't think so all sources say that even 144p needs more than that 40kbps video would look way worse 40kbps would be barely enough for the sound.
I'm quite confident that AI-completed graphics and videos will become an industry standard before long. It is an extremely efficient tool to improve visuals to close-enough levels compared to actual native visuals, with nowhere near the same processing power levels needed for the results. After all, this is not too different from how human memories work too; making larger images out of limited details.
There should be a "DLSS ECO" mode that runs your game at half its native resolution, upscales it back to native, and disables GPU Boost to save energy.
It's literally the enhance button from CSI we all laughed at.
Kind of... but i wounder if it would work on security camarea photage.
Someone should try
I seen it work on old film video from late 19 centary, and that looks pretty good.
Nvidia will have to backtrace pixels in real time with a Visual Basic GUI before I'm truly impressed.
Lmao so true!
kinda?
It makes the thing look better to our eyes in the grand scheme of things but actually it isn't creating what isn't there, it's just approximating (yes, with approximation you can actually pull out details but you can't take anything for sure as it's just an algorythm that tried to guess what's there and changing the algorythm by a bit would even result in different details)
These deep learning upscaling algorithms are essentially an AI artist's interpretation of what the frame would look like at a higher resolution. Upscaling has an easy training advantage for AI--Nvidia can train them on hundreds of hours of super high resolution footage, so it's like an artist who has only ever watched footage of that exact thing.
Watch this on 240p to be the most immersed.
I cant see anything anymore, so AUTHENTIC
I watch everything on 144p, my internet is just that bad
@@ak47tg93 damn dude that sucks
I chose 144p to get thebest experience
@@Muhluri the 500kb/s internet experience
I'm so glad that this tech is limited to these massive high performance graphics cards, they really need it
Rtx 2070 user here. Unfortunately we do :c
@@Franc1sc0 heh, Rtx 2060 here, this feature saves my wallet from upgrading
@@FatPandaCries heh think again, 1050ti user here, i hate all of you.
@@strayjames8751 we cant use dlss on 1050ti ? sad
@@FatPandaCries why would you need to upgrade your wallet
Raytracing is cool, but what dlss is doing here is incredible to me.
This is the biggest thing Nvidia made that's making me want to switch from AMD. The technology is surreal.
@@zixter4756 Let's hope Big Navi will bring a similar tech
What's cool about AI is that it took Nvidia like months on god knows how much GPU's to train the models we plebs can use in a single GPU.
@@2kliksphilip similar to the denoising step with RTX, I would imagine
The dream has always been Ray Tracing. DLSS is really cool, but it is the second tier feature under support for DXR and Vulkan-RT. Those are more important by far.
Next video "I can manually count the pixels, but is it playable"
10×5
I read this comment before watching the video and halfway through it I forgot you had said "next video" and I was wondering when he'd start manually counting 🤦♂️
Yeah its called minecraft
destruction dairus 2 be like
Lol
I remember seeing this video when it came out and thinking "wow, this is the future! Now GPUs will have a much longer life cycle since older cards will be able to keep up thanks to upscaling. It sacrifices the image quality a bit in exchange for not having to upgrade so urgently".
Its almost 2025 now and upscaling has become a crutch for optimization and most games are made with it in mind, with certain titles struggling to run natively even with the most powerful cards... This is so depressing to watch in retrospect.
You forgot frame generation, it's a crutch as well, when it was supposed to be for lower end PCs to run games better just like upscaling
Yeah, this video was my introduction to what DLSS is and how it works. I remember thinking "This is incredible! It's like magic!" and I thought this was the future to achieving 4K with really high fps and medium-tier cards.
And nowadays there are some games that, even with a 4090, can't even achieve 4K 60 fps without DLSS and/or frame generation.
DLSS has become a necesity to cover up bad performance in games, and that is sad af
This truly is sad
Finally... *”16 TIMES THE DETAIL!“*
Nice
I am worried Todd gonna release Skyrim again with 8k, 16 times the detail ...
@Intel i5 Dual Core all of this just works
When you realize that downloading the new DLSS drivers is the equivalent of downloading more fps
Finally, we can download a better graphics card.
@@TriggerHappyRC1 First Music, Cars, Bears, now GPUs. My body is ready.
Love when NVIDIA does this. There have been so many times over my 20+ years using NVIDIA that they add a new feature or make something new backwards compatible with older cars etc basically yeah downloading more fps like a free upgrade.
you right sir, YOU RIGHTTT
@@nugget6635 very correct bruh
I think its funny how we started rendering higher and higher and downscaling for quality (MSAA) and now we're doing the exact opposite..
Oh man - didn't realize that until I read your comment.
@@PanzerVII-df8hg except it actually isn't... a 50% GPU improvement every 2 years will be a thing for a long time to come
@@PanzerVII-df8hg When people say Moore's Law is dead or alive, these people don't seem to understand Moore's Law. It has absolutely nothing to do with what's available to consumers. It's about what is possible *in a vacuum*. Nvidia/AMD/Intel ABSOLUTELY CAN make 50% improvements every two years at a lower price. They DON'T because that's not a good business model and because there are outside factors that affect price, like parts shortages and greedy businesses.
@@PanzerVII-df8hg and transistors don't perfectly equate to performance, $200 being 50% faster in 2years time than $400 now? Ha. Half the price yet 1.5x the performance? Would that be closer to a 3x?
2080 has 18.9b transistors
3080 has 28.3b transistors which is 1.5x for 87.5% the price which equates to 1.7x the transistors price for price which fits into the accurate 1.8x (moores law was close but not perfect)
For reference the 1080 has 7.2b which means the 2080 has 2.6x the transistors compared to a 1080!
This literally takes less than 3minutes to fact check by a calculator and google to half fact check what you're saying
980 only has 5.8b which means a 1080 has 1.25x the transistors, its barely anything so it should be like 85%-90% the performance right? Why should anyone care about pascal when maxwell exists? That's an embarrassing improvement... Oh the 20series is here, pascal is dead, 4 years leap right there! Pascal wasn't even a 1year leap lol
The 20series was the biggest transistor jump since the 900series
MSAA doesn't render at higher resolution and downscale, that's SSAA. MSAA does shading at normal resolution, it just takes more rasterization samples at geometry edges.
For some reason this is a lot more impressive to me than 1080->4K etc.
Seeing it build passable images from such low resolutions is absolutely insane.
Maybe we could teach A.I. to upscale our money
@@psisis7423 lmao better looking counterfeits.
@@2kliksphilip
imagine a switch pro with tensor cores... it could still run the same game as the base switch, but it would be able to output 4k docked if it so wished.
@@thebravegallade731 Thank you for saying that. Nvidia buys Arm (Low power CPUs) those chips plus DLSS 3.0 in a switch🥺
@@2kliksphilip The applications for competitive gaming where high frame times are important are potentially huge. Being able to play a game at 240 FPS and non-garbage visuals would be crazy.
Remember that TH-cam Compression algorithms are killing the visuals of the video itself, so in reality the image quality is ever so slightly better than presented here, especially when it comes to "1440p" and "2160p" parts of the video.
To just add a bit, anyone that has run 1440p or 4k one the 20 series cards, and then watched the same gameplay will know the difference
@@2kliksphilip btw youtube has a different bitrate for each video quality loaded into youtube so upscaled to 4k 1080p being watched in 1080p would look better than basic 1080p watched in 1080p
@@2kliksphilip Me watching the video at 480p. I mean it looks about right for the footage being shown. Wait a minute..Dear god
@@2kliksphilip Imagine if TH-cam had great bitrates like Vimeo, which even offers viewers a download button do get the ~17.000kBit/s original files, if you want.
But there are just too many users here. I see why they're dropping bitrates more and more every year, to save their servers.
When I want to feel sad, I redownload videos I saved in like 2013 (using the same software at highest settings) and compare the bitrates of the current copy to the old one (They used to constantly wreck old videos more and more, year after year and still do. This has improved a bit with VP9 though.) or compare the Vimeo and YT upload (if applicable):
e.g. at 1080p in June 2019 "Don't Hug Me I'm Scared 5" had a bitrate of ~4500kBit/s AVC + 256kBit/s 48kHz AAC on Vimeo vs. ~700kBit/s AVC + 125kBit/s 44.1kHz AAC on YT.
It's a day and night difference.
I'm not sure if someone made a video about this sometime. Especially that they data degrade old videos that maybe don't have any other copy as time progresses.
and the only reason the compression of youtube loses quality its cause its not lossless they priority is to keep the bitrate at a same amount so internet connection with a specific speed can watch without issues like at 2/5mb connection you can watch 720p just fine on all videos, if youtube did a lossless compression we would have no quality loss and still smaller files, but may not be so stable and different videos would may need more/less internet.
While it wasn't intended for lower resolution, I feel like this could provide some benefit a few years from now when current graphics cards have aged, to squeeze a little bit extra performance out of them if you don't want to or can't upgrade yet at that point.
No, that’s the literally the point of DLSS. It was intended for lower resolution as it’s best suited in upscaling them. People tend to think DLSS and FSR are meant for upscaling (just upscaling from native) specifically but downscaling is the biggest reason it even exist (taking a lower resolution and upscaling it).
Unfortunately, this might be seen as a negative thing for marketing for the same reason that planned obsolescence is seen as a positive thing in marketing.
It's great for the consumer because that means having more time to wait before upgrading to the latest tech, but that also means that the company selling it won't sell as many as they would if the feature didn't exist.
It seems entirely possible that they may try to phase DLSS out over time or change what it can and can't do, or change it's compatibility with older cards in the future when the current ones are older. If they don't, they would be making less sales, which means less profits. It's irritating that this could become a factor, but it seems very likely to me based on the widespread nature of planned obsolescence in cars, appliances, phones, etc.
well if they're not going to make recycling electronics more accessible or even viable then it's the ethical thing to do. graphics cards are as common as vacuum cleaners, should I have to buy a new one every 5 years? @@Bowlfrog44
Yeah, so it's super convinient when Nvidia software locks all new dlss versions to the newest cards
Your DLSS coverage is always fascinating and I'm surprised other creators don't cover this tech in similar detail.
doooooode. totally.
Their all too busy fawning over ray tracing and its batshit crazy implementation. Like how puddles can somehow have a cleaner reflection than a bathroom mirror and ALL WINDOWS ARE ONE WAY GLASS
@@Snaffdude virgin crystal clear reflections on puddles fan vs chad shiny ultra-reflective mirror floors enjoyer
@@2kliksphilip Great analysis mate, on point. I wonder how this tech would translate to vr games. Does Alyx support it?
Digital Foundry and Philip got it covered
This tech is the future - This is the only way we're gonna power next gen VR experiences without insane unavailable graphics technology. I may not like Nvidia, but holy hot damn do I respect them for this tech. It's a monumental achievement.
This tech plus eye-tracking tech (to reduce pixel count in your peripheral vision) would do so much for VR. Oh man I can't wait!
@@MetalJody1990 "Foveated rendering" if anyone is wondering what this is called
*Me having played VR games like PavlovVR and VRChat on a HD7950, RX 480 and RX 5500XT with my Acer WMR headset in 1440p90Hz.*
What do you mean you need powerful graphics cards?
The future? It's the only way Flightsimulator 2020 can be played in VR without it dropping to 30 fps!
Why not like Nvidia? They're awesome and reliable af.
I remember seeings dlss come out thinking "Wow this is incredible". Now I hate it because companies just use it as an excuse to poorly optimise their game. Its no longer an optional feature and is required on a lot of new titles unless you have a jacked gpu.
the nintendo switch definitely needs this. maybe for a switch 2
it's a game changer for the Switch, and gaming laptops too
@@ROCKSTAR3291 ML algorithms need a good amount of processing power to actually begin with. so no.
@@That_Darned_S well that's why nvidia rtx cards have dedicated chips and cores for that, so you could build these in to a switch
@@elsyvien That would just up the power usage as well. Not really suited to a mobile device.
Yeah, it seems like a great match. If needed they can disable the tensor cores when handheld to save battery power - the small screen size wouldn't benefit from DLSS as much as a giant monitor.
Jokes on you because I'm watching this at 240p so all of them look the same
same, bad internet gang
Lol same i have 50gb on my phone plan
@@markdove5930 same shit i have 55gb
I just got 100/200kb a second. Be careful when buying farm property
122p on my 4k screen
This is going to be a game changer in the Switch’s successor.
8:22 I feel like I'm watching Control run through a 1990s Parasite Eve/Silent Hill graphics filter. Which is, frankly, kinda awesome. DLSS apparently has a time machine built in.
This combined with Geforce Now could deliver some extremely impressive low-bandwidth remote gaming
The problem is that you can't just render the game at 240p, then stream the video and then upscale it. DLSS needs some extra information from the game, like motion vectors for example. In it's current implementation you can't "outsource" DLSS from the game.
@@EVPointMaster Interesting. What if you send the information needed for each frame? Vector data is way smaller than pixel data
@@NawarNrr I'm not sure if it would even work. Normally DLSS receives a full image every frame, if the video was encoded, it only contains partial images. Basically every few seconds the video has a full image and then the following frames only contain the parts of the image that have changed since the last frame; I'm not sure if that would work with DLSS.
Just taking this video as an example, it's 10:30 long, the 240p30fps option of this video is 20.27MB in size. This means an average of 1.1KB per frame. I don't know how large the motion vector data is per frame, but I think it could be big enough to have an impact on the bandwidth
And why would you use Geforce Now if you have DLSS capable GPU ?
They need to do something like "Parsec" to be impressive..... latency is the issue also.
Funny to look back at this video, AI super sampling looked like a nice gimmick, but the devs literally use the damn thing as a crutch, a current game can't run at 1080p anymore
Unironically, I could see this as an aesthetic to use ina a game with an atmosphere that suits it. It has this weird, dreamy, fuzzy feeling to it that's just a little different from the standard type of blurry.
reminds me of The Order 1886, what an amazing graphics that game (still) has!
Like Silent Hill
What being nearsighted feels like.
In some kind of underwater game it could work well. Or for heavy rain that hit's your face.
@@MinecraftBlackWolf maybe thats why i dont like it. im nearsighted, and these graphics bothered the hell out of me, like ptsd from being half blind
Linus: Look at this $10K CAD tv to play 8k with!!!!
Phillip: well what happens if you go back to .04K?
Underrated comment about underrated youtuber
if its 0.04K then shouldnt it be like (som number)x40? or am i just an idiot
@@sussybaka6106 I think your over thinking a cheeky comment
@@sussybaka6106 Yeah. But it’s 40x(some number)
@Kenn Honson X Res is pretty confusing. 4K is literally 4 x (1920x1980px) = 8 294 400 = (3840x2160)p
But 4K technically means 4000 horizontal resolution which special cinemas have (normal is 2K)
So PC-4K is neither 4K, nor called 4X as it should be. But manufacturers are making money so who cares
Looking back at this now, I thought it was so revolutionary at the time. But it's become so commonplace that huge companies are using it along with frame gen as a crutch to get barely acceptable performance when they should be optimizing properly instead. I genuinely wish upscaling and frame generation were never invented.
360p to 1080p is actually dope. I would love this technology in low spec systems.
That and pre-rendering reflection textures using spare resources for ray tracing. Would enable doing them on weaker hw, even lacking cores meant for it, when lightly loaded then use traditional means to load them on demand.
@Kenn Honson X Someone got software ray tracing to work on the CPU part of an APU using the iGPU for rendering. It could be done in CUDA or OpenCL.
@Kenn Honson X that's a blatant lie.
@Darth Wheezius Imagine gaming on an RTX50 plugged into a TRX50 with games optimized for both.
@Darth Wheezius I don't know why I suddenly remembered to wonder this. I wonder why files in a shader cache are not compressed by the GPU driver. Try turning on the compression flag for the folder and see the results. Decent space savings and you can't tell a difference in load time.
DLSS (and ai upscaling as a whole) is absolutely fascinating, and I love how many videos you make about it. more people should be bigging this stuff up, it's insanely cool
the problem is it only works in select titles so far, way less than 0,1% of games. If it worked for all games with every gpu, it would be a killer feature. Itll take a couple of years and hopefully it will get the neccessary compatibility upgrades
I just bought a 3060 laptop (max-Q) and honestly, playing Control and Cyberpunk with DLSS is really impressive. I manage 60fps with ultra settings(1080p), that's insane.Thanks for the video
Ultra Settings yes, but without Ray / Path Tracing tho :P
@@Nebujin383 This is from 2 years ago man they dont know what path tracing is.
10 years from now:
"I Just Upscaled 1 Pixel to 8k and got more than 10000 frame"
PS: let's just hope NVIDIA doesn't miss use the technology, and start releasing cards that without DLSS is GT 710
@@MrMarkTheBeastMemes99 why? If DLSS is a 99.9/100 copy of if it wasn't a GT card then does it matter? Your basically paying for development of software, instead of hardware really. Though this evades multiple other facts like other settings that DLSS doesn't account for, the basis being they could software that shit too eventually.
@@MrMarkTheBeastMemes99 Who would that hurt?? That would actually benefit the budget gaming community. This would actually be the right way to go with it, make cheap gpus and use dlss to improve the performance so people can buy gpus for much less. I doubt it'd be releasing anytime soon if they were to do a thing since dlss isn't universal for most games yet but when the time comes I believe this would be a good thing to do
@@TripleLayerLemonCake DLSS requires special cores called Tensor Cores, that are only available in RTX cards. Tensor cores are used in training AIs from self driving cars to chatbots.
DLSS will probably be useless on lower end gpus once mesh shading becomes popular.
it looks like I forgot to put my glasses on lmao
@@2kliksphilip I've tried it and it doesn't work, I still notice the low resolution but it's blurry and my eyes are in constant pain.
imagine being far sighted
this reply was made by near sighted gang
@@dracula400 you made my day much funnier! thank you for the unique outlook on vision impairment XD
@@dracula400 and constant Depth of field blur ;-;
Worth pointing out that the text (and 2D HUD elements in general) aren't being scaled, those are always being rendered at the screen's resolution _after_ the 3D render has been scaled (assuming the game was coded by developers with a clue). Just in case some people thought DLSS included some CSI-level "zoom in and read a full page of text that's just three pixels wide" voodoo.
which is great , otherwise the ui would be an artifacting mess
@@avfx111And in some games with poorly optimized DLSS, it is
@@olivereisenberger7215 dlss doesnt need optimisation tho, it works preset based( each preset determines what variation of the motion vectors to use) the only reason why ui would artifact is with framegen because framegen doesn't apply on the 3d view but on the entire frame whereas when dlss is enabled, it first upscales the 3d view output and the ui is rendered afterwards so ui is still at native res
5:10 I realize now why I was SO CONFUSED when DLSS came out. I was following it when it was being developed as super-AA. When the 2000 series launched and there was all this talk of boosted frame rates at high resolution I had no idea what they were talking about
"To fill you in - DLSS is about rendering a game at a lower resolution, then using clever upscaling trickery to try and return it to the original resolution again" I see what you did there
Innovation 2007: We built the world's first touchscreen smartphone with cellualr and Wifi in your pocket
Innovation 2023: You can play at 240p on an $800 graphics card
The title made me think that this was a LowSpecGamer video, lol
same here
The funny thing is, I'm nearsighted, but also unequally between my right and left eyes!
So the fact that the background was fuzzy and shimmery actually made my brain think it was higher resolution than it was!
It's exactly how I see the world! Who could have imagined that ultra high graphics and ultra high DOF was actually less realistic!
Oh I'm the oppposite, long sighted but my left eye barely fucking works at all so often when I play games the image appears very flat and when I decide to out on my glasses it looks so much more 3d for some reason
I do find it interesting how we're moving from the days of solid, static resolutions to things like dynamic resolution-scaling and all sorts of interesting systems like that to ensure a high frame-rate
That's an odd thing to say considering DSR had been a thing for a decade when you left the comment.
@@mechanicalmonk2020 true but personally I had missed out on most of xbox one/PS4 era games due to not having either console - and nowadays developers and the industry are talking way more about it and making a much bigger deal out of it as they've made more DSR-related advancements.
as far as I remember, people didn't really talk about upscaling or resolution scaling much at all until things like DLSS and FSR arrived
Yeah, its not going to work like that. Once devs really latch onto the idea of upscaling covering all their performance issues, games are going to start running worse. Eventually, any gain you get from upscaling will be canceled out by poor optimization.
Imagine having a 4090 and having to use upscaling to get decent frames. That day is coming.
@@Damaged7 What kind of devs test without testing for non-dlss performance too?
@@Damaged7 Alan Wake 2 has entered the chat.
"Thats the Setting You Avoid On TH-cam Because it Sucks"
**Me Watching this video at 144p**
: Yeah True
Same
Really?
@@darxustech2883 3rd world standards babyyyy
Damn anything below 1080p is shit to me lol
@@niteowl17 I would really like to know more about this. What percentage of the world can't get better than that on youtube? Where? I'm in the US, with the cheapest option the biggest local internet provider has available, and I'm watching at 4k with no problems. Not flawless, but the hiccups aren't common.
It costs more than it should, because of how much a monopoly comcast has, which should get fixed.
Wow, only 59% of the global population are internet users?
I was thinking maybe they should try applying something like DLSS to these videos to help with that problem, then realized that's basically exactly what lossy video compression does.
Since it was first released, I’ve been more excited about DLSS than any other current graphics advancement. The amount of performance headroom you get allows for more expensive effects actually being viable. Love it
Yeah but it looks like caca, I don’t like this becoming a new standard/crutch in gaming
true
would be great if the game toggles it when necessary
@@Denomote This depends on use case. There already is automated resolution scaling that only lowers resolution when necessary on some games like Cyberpunk. On games that run generally ok but with less than targeted fps it's a quick +10fps for the quality dlss.
There also are games where the game looks actually better with fsr or dlss enabled, for example Starfield is a lot more crisp with fsr 2.0 than completely without.
@@life_is_gr8 Because as Kliks said multiple times in the video, he is using it in a way that was completely unintended. It won't become a new standard or crutch because building games around technology like this is flat out stupid and would limit your userbase by something so arbitrary that it wouldn't make sense at all, but its definitely not "caca" when used properly imo.
@@SafetyKitten I’m sorry meow
Now that Dynamic Resolution Scaling and DLSS together exists, this Ultra Performance DLSS now has a actual use.
It's the best DRS Floor ever now, as games can lock the output to 4k with DLSS, and Dynamically Scale the Internal Resolution to lock a Framerate, and more often than not (at least on PC) it will likely fall closer to the 1080p side rather than 720p.
Imagine this being used on a Switch 2021, it would be revolutionary imho.
@ yeah COVID pushbacks are fun.
Although we do legit know that a new SoC for Nintendo is in development thanks the to the guy who leaked literally all of NVIDIA's Ampere GPUs.
Even have both the codename and model name
(Dane/T239), and what the SoC is based on (Orin).
So the system exists, just likely got pushed out of 2021 because of COVID and also production problems/qualitifcaitons on what Nintendo wants out of the SoC.
Oh hey here's a Steam Deck with a similar technology from AMD lol
@@alejandro6070 FSR pales in comparison to DLSS especially at lower native resolutions and you know it.
Although not knocking the Steam Deck for that, that should slay if it can use the open-source version of XeSS when it comes out and someone makes a version that works in any game like Lossless Scaling's NIS/FSR
It would be cool if they implemented this technology on low powered laptops so they could play games at reasonable frame rates and resolutions.
Budget GPUs and APUs too.
@@PanzerVII-df8hg would be cool to see them implement the tensor cores themselves onto cheaper cards that can be put into a laptop
This is why I bought a Acer Nitro 5 for $700. Comes with i5 9500h and rtx 2060 80W. Even tho it's not a refreshed mobile gpu, I can still add +220 on the clock and get 2000+ mhz on the clock. +1000 on memory. I can run Control 720p dlss to 1080p with ray tracing on and average 70+fps.
@@PanzerVII-df8hg I enabled dlss in the config file for Control on my 1660 ti and it works like a charm lmao
@@runty6841 lmao shut up bruh
I can't wait for technologies like this to come to integrated graphics. Just imagine running AAA games at 360p on your mom's laptop and it actually looking good!
Integrated are actually pretty good with a new laptop, you can play new games at 720p in something like a thinkpad, even better if is ryzen
@@butifarras yeah, jus t think how well iGPUs could hold up if DLSS comes to Vega 3/8 or the new Intel Iris graphics!
@@butifarrasIdeaPad Chad here
my craptop can play early current gen games at 30-60fps depending on the game.
@@Kraafft
Maybe Nvidia Arm CPU?
@@Kraafft that being said DLSS is nvidia's tech. the only intergrated device/APU that would probably even use this is... the next nintendo console
The only thing I fear is that game companies start being even lazier with optimizations because "you can just use dlss to fix it bro".
This should be used to make even bigger and better games possible, not to allow for new games to be lazier optimizing.
It's technologies like DLSS that makes me feel like we're living in the future, not the faster and bigger graphics cards, which is just predictable. DLSS is exciting and magical.
Yep, it's the lowest standard going up that's amazing. I sold my 570 to a friend who had been using a 270x since its release and he was blown away. He now has double the core count, less power for cheaper than the 200s release price. Same goes for Intel's integrated graphics, it's waaaaay better compared to just a few years ago.
The machine learning stuff is amazing, check out 2 minute papers channels for cool stuff related to AI and graphics i really recommend it. And just wait until these deep learning things are applied to rendering 3d engines and even ray tracing which should yield huge performance increases, now instead of 9 extra pixels for the cost of 1, you could also get 9 extra polygons for the cost 1, or 9 extra rays traced for the cost of one. When its all said and done real hardware could go extinct thanks to AI, or just minimalist hardware to run the AI, sort of like how our brains operate, on like a couple of miliwatts? No 750w powersupply needed to render the universe in infinite rays traces in super high resolution, greater then the eye can even see. Maybe we should use this video card more often.
Somehow, Philip found a way to make another AI upscaling video
I'd love to see you do this video again with the latest version of DLSS dlls; they're significantly better at their upsampling rendering than what those games came out with originally.
I found this. Just DLSS 3 (not 3.5). 4k vs native 4k. Still impressive. th-cam.com/video/-qiGU0LEDiQ/w-d-xo.html
Judging by the video, DLSS on lower than 720p resolutions honestly just looks like watching a compressed video with low bitrate.
Interestingly it is also a direction in which rendering in general will be heading in future, a sparse rendering of areas which need the detail spatially and temporally.
We should see more reconstruction techniques aimed for temporal/framerates as well.
8:10
No Philip, please spare them the cucumber. THINK ABOUT THE CHILDREN!
oh, you listened.
thats a courgette lmao
This video was recommended to me right now. Guessing TH-cam is catching onto what the "Switch 2" (Nintendo NX) will look like?
Oh no, he's AI upscaling his games now too
been like almost two years since he did his earliest DLSS vid
“Quite literally blurring the boundaries”
GFY for that absolutely perfect writing
Thx for putting the songs used. It's greatly appreciated. Great content too !
Apparently youtube AI thinks that Death Stranding and Control are GTAV though
GTA V was the second strand type game actually so its easy to get them confused.
@@rigersmuco4970
What was the first ?
@@Fastwalker27 Mario
first time I read gta5 *ENOUGH*
@@Fastwalker27 Super mario bruddas babyyyyyyyy
Finally. Gaming will look incredible on the backup cam of my car.
I just wanna thank you being as I am literally 6 months into my 1st gaming PC. I'm 40 year old man (FML) that has been gaming his entire life thank you for teaching me so much in such a quick video
@@2kliksphilip O absolutely! All my friends have been pressing me to get on the PC bandwagon and I've done a lot of research but your video just made it so easy. Looking forward to watching more of your stuff keep up the good work
Honestly, I know it isn't very likely, since it could radically shift the GPU market, but I really hope they make very mid-end, low-price GPU with this technology implemented, a TON of people would benefit of something like that happening. The opportunity for the PC gaming market to grow when they offer an affordable, quality, and convenient device to markets without much disposable income is insanely high, imagine places like LatinAmerica, the MiddleEast and North Africa, India and South East Asia, all seeing a dramatic rise in players because instead of needing to save up for 1 to 3 years (in many places it's a lot more!) to buy a decent gaming machine, they'd just need to do so for 3 to 6 months.
Again, I don't think that'll happen, but it'd be really cool, gaming should be accessible to the less wealthy if it's at all possible.
I'm happy to inform you that such tech is in final testing atm. Tho it's AMD making it and it'll be for all cards, since they're included in base coding (DirectX i think). It just takes a few consumers that dare turn their back to greedy manufactures like Nvidia, to get such tech. Vote with your money next purchase ;)
I'd rather not have those players in my games
@@legeorgelewis3530 Did you wait 3 years to be an asshole?
@@legeorgelewis3530 aight george
How is it possible to be gamerist@@legeorgelewis3530
DLSS is such a cool bit of optimisation tech. I hope we see it in a lot more titles in the future
"I'm imagining the switch successor being able to use this" - now, 3 years later this seems very likely. Man's a prophet
Budget Gamers: "Heavy breathing*
Right?? Suddenly that 3060 card isn't looking so unattractive anymore.
@@caseyhall2320 Dude make that 3050
The 3060 is already on the level of 2070/2060 super+
3050 would be a 1080p gaming beast and with DLSS2.0 it can even play at 1440p in modern titles. The future is bright for Nvidia card users.
This is the first time we'll see a smooth $200~ 1440p machine.
@@themercifulguard3971 you're forgetting it will probably only have 4gb vram :/
@@thetechnocrat6388 Oh yeah I hope they make an 8gb variant like it is rumored to have or just make it more than 4gb
@@themercifulguard3971 ya
Thank you Philip for being the seemingly the only one that shows the side of DLSS that is *actually* exciting - running games at low resolutions and letting AI do the rest
These results are incredible. I wish we get DLSS or similar technology in more budget friendly products. This would be at its best in lower end GPUs and anything used on laptops.
we do have it for budget friendly products. its called FSR. the budget version of dlss for the budget versions of RTX cards
Imagine if consoles back 20 years ago had this technology! No matter the downsides, it would have been such an amazing improvement!
The upscaling costs more compute power than a 1000 n64s, its just not a big performance hit for modern hardware.
edit: fxaa on the other hand vs MSAA on n64 would save alot of performance, considering you can't blur blurry af n64 textures XD
Think it was the same as fake borderless fullscreen (also the name of the software) I used to play MGSV:PP on an HD7450 set on high 1024x576 resolution then fullscreen borderless on 1080P can't remember much but its somewhat checkered
@@R3TR0J4N "fake borderless fullscreen" wait what does this mean? Isn't fullscreen always borderless, and what's the difference between fake and real?
@@brethnew it's a term for windowed borderless fullscreen, e.g.
The game is windowed so you can tab out without giving it the fullscreen freeze.
@@f1l603 Ah okay I get it now, was just a misunderstanding. You're talking about borderless windowed mode
I keep forgetting how god damn good death stranding's scenery looks
yeah, that game is like a movie
It looks like an animated movie
I didn't even know 8K displays were a thing yet.
I remember reading a thing about meta materials maybe 15 years ago or so, and the potential to create incredible stuff with it, such as cameras capable of filming in 32K. It was some crazy resolution like you could take a non-zoomed photo from orbit and the picture would be so high def that if you zoomed in on the image, you could read a post card laying on a sidewalk. Like, a photo without using a telescopic lens. Some crazy shit about negative refraction, where it could make something appear to be floating above the material so detailed that it looks like it's actually there, not some semi-transparent hologram.
Makes me wonder whatever happened to the meta material research. I was hoping to see the results by now. Then again, maybe it got bought up by the military, as one of the properties of meta material is the ability to bend light completely around an object and have the light resume the original course. In other words, invisibility cloak. They successfully made an object totally invisible to microwaves (the actual wave, not the cooking machine). Apparently, a display made of meta materials could create an image so high def, you can't distinguish it from reality. Probably where they got the idea in the movie, "Gamer."
Anyway, technology is getting crazy. I nearly shit myself when real-time ray tracing was announced. Used to take me over 8 hours back in the day to render a 3D model in 800x600 resolution with ray tracing enabled. Like this Lego walker, which I think too 6 hours to render back when ATI was the dominant GPU maker (early 2000s): scontent-sea1-1.xx.fbcdn.net/v/t1.18169-9/150704_573756515970942_1040442011_n.jpg?_nc_cat=109&ccb=1-7&_nc_sid=cdbe9c&_nc_ohc=4Rz5uLHy7CcAX851lcy&_nc_ht=scontent-sea1-1.xx&oh=00_AfDIY_cXlizzw_l-JRYFf-z8wdhvAFXwlSvmmMUXZIDqOA&oe=637E5C2F
What they're doing with A.I. is truly impressive. Check out Nvidia Remix, if you haven't. Utterly mind-blowing, as if real-time ray tracing wasn't mind-blowing enough.
thats some mason high secret sht
someone sleeping with the fish
This was an interesting read. It's been awhile
8k displays have been around since about 2018 at least if not before then , it's just they were limited to the TV market not computer monitors , and they were/are mega expensive, 8k is usualy reserved for tv's that go beyond 65" such as 70-103 inch tv's where the pixel density (pixels per inch) really makes a difference from 4k. also smaller size displays make no real beneift from drastically higher resolutions either. there's a lost of return on gains involved. this is why most 20-22 inch monitors are never higher in res than 1080 p with 24's-27's being 1440p , you'll rarely see a 1440p monitor that is over 32 inches. most 40-60 inch displays are 4k , with many 65 inch tv's and beyond doing 4k as well since 8k is stil real expensive , you won't find any 8k tv's smaller than 55 inch though.
however msot pc gamers wil use a monitor that is between 22-32 inches , rare are the ones taht use 40+ sized monitors , most pc gamers lookign for that size image just go with TV's because they are cheaper than monitors that big. like myself , (i'm gaming ona 49" 4k samsung tv)
EDIT: i stand corrected , just did a quick search on amazon and walmart and target and best buy, the smallest 8k display i can find is a 65" TV , NO comptuer monitors yet but i'm sure one has to be out there , that said the cheapest 8k tv i could find is $1,500 dollars. add that price , the fact there's no actual media for it , and you'd have to own a $3500+ computer to push any games to 8k (and even then that computer would have to rely heavily on upscalers) ... i can see why it's been very slow to catch on. that all said the tv's i have looked at do all upscale images to 8k , , likely the best result would be from a 4k source such as a 4k blu ray player or 4k streamed media. i've seen dvd's upscaled by my 4k tv and they are hideous. but 1080p blu rays still look nice with the tv's upscaler. best image is of coruse , my 4k blu ray player or when i'm running a game native @4k.
"That's the setting you avoid on youtube because it sucks too much to use"
Me watching at 144p: what did you say, you blob of pixels?
This is definitely imperfect, but holy shit for something that is still basically in its infancy, this is absolutely insane. I can only imagine what this will be like in a few years.
True, just imagine this thing in 2028 or 2032.... damn.
@@_sky_3123 "So, here we can see 8k being upscaled to 32k with DLSS. It actually looks more detailed than real life."
@@PiousMoltar Yeah I think 32K is a limit for the human eye ( for 34" pc monitor). But I guess we will see ( in 10-20 years)
DLSS when it started: Use me to run games at 8K
DLSS now:
... welllllll.... we developed a game with DLSS in mind....
I love DLSS.
Before Fortnite added it, my game struggled to get to 60fps at 90% of 4k on medium settings, now everything is on maximum, game runs butter smooth and it looks amazing.
It's pure magic, and I hope it gets added to as many games as possible.
Cyberpunk is the first game I'll play that has it and I'm stoked at the possibility of running 120hz on ultrawide 1440p stable. I bet the VR crowd is even more hyped for this tech...
nVIDIA is love, nVIDIA is life.
If you're on this tech beat, I'd LOVE for someone to do a deep dive on how this DLSS system is trained. Is it specific to each game? Does it generalize? Could it work for live video? Great work.
DLSS is trained on a per game basis I believe (or at least Nvidia did this for first generation titles - see www.nvidia.com/en-gb/geforce/news/nvidia-dlss-your-questions-answered/).
Essentially, this method creates a 'model' for each DLSS game that analyses various parts of the aesthetic, art style, gameplay feel, etc to essentially give the AI tools an 'intuition' on how the game environment would be most likely to perform/render so it can add in the extra information. It's not general enough for video yet (although the Nvidia toolkits for streaming and things like Optical Flow are promising) - video is pretty difficult to build a general model for as its not going to be as consistent as a game in terms of appearance. I'm not too familiar with what software is used for upscaling video - I understand most methods that aren't just straight scaling rely on tracking and replicating elements between frames so are fairly computationally intensive to do in real time, but if the quality of upscaling the likes of an Nvidia Shield is capable were possible on their GPUs too that would be brilliant.
DLSS 1.0 was trained on each game individually, while DLSS 2.0 is trained in general on various upscaling datasets. They certainly try for good representation among types of images, since they've decided to release it to a wider audience.
Nvidia released an article about it recently, quoting from their website:
"During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images."
and also
"The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games."
I also think (not 100% sure) Nvidia mentioned using their DGX A100 machines to train these models in-house at GTC May 2020.
It works on video linus did a video on it the nvidia shields upscaling.
It was specific to each game using machine learning from Nvidia in DLSS 1.0 and now they have swapped to general purpose ML for DLSS 2.0
According to leaks, the Switch's successor will use this.
Him: Playing a game in 240p gives insanely high fps
*Csgo players have entered the chat*
3kliksphilip csgo at 240p when?
CSGO Vulkan Multithread Optimized SLI Optimized running on Threadripper 3990WX RTX 3090 SLI RAM Disk, at 160p-480p DLSS at 36000fps on 360Hz monitor with PCIe direct keyboard and mouse for GHz polling and ns latency
@@ambhaiji basically normal gaming pc in 15 years
@@ambhaiji You forgot to say optimised again
@@ambhaiji don't forget to plug it in directly to the brain, cause you know, our nervous system have lags too.
its funny cause this feature would be AMAZING on lower end budget cards lol
That's what made the RTX 2060 actually a pretty good value when it came out, if you could get it at a good price of course.
medigilclafouti lowest price for a new 2060 here in the UK is £270 which is pretty good considering the 1660ti is the same price
fisrt gen is always more expensive. I'm pretty sure Nvidia will make another GTX. So you can expect a 3050 and 3050 veriant with DLSS support
Bruh. The only ones that really need it 😂
Upscaling is a cancer. Some games now use it faking is in native resolution. Is rare to find a game sharp right now. Even for a gamer with a top tier PC.
you know you can disable it in settings, right? Have you tried that? No?
When is Anti DLSS coming out I want my games to run at 4k and use an AI to make the image worse.
So FXAA?
Ah low FPS motionblur! At stable 60 motionblur kind of stops working as we all grown to love it.....
If your rig handles 4k60, bump it up to 8k and motionblur starts working again. Love me those artificial nuclear chemtrails on every twist and turn.....
@@markayres6913 Not enough hardware accelerated AI.
You could always play more indie games. You wouldn't even need the AI part!
@@The_Mister_E You joke needs more context
"240, the resolution you want to avoid on youube' if only my computer ran that well
that's probably an issue with your internet, not your computer
@@gisopolis77 eh could be both tbh
@@gisopolis77 the computer that I used to run on accualy could only play games like hl2 at 240p 20fps it was terrible
@@gisopolis77 could be nic or GPU if he's on like Intel GMA
@@gisopolis77 Nah back when I had a Pentium 4 with 2GB ddr2 no GPU, in 2013, I could barely watch vids higher than 480p. Literally only upgraded the CPU to a core 2 quad and then 1080p streaming was lagless.
The irony is that most people who see this video will bee watching it on theire phone and once your past 180 I stoped beeing able to tell the diffrence
It's not irony. The cultural zeitgeist is slow because humanity would collapse if everyone caught up to everything (regardless of accuracy. Low risk is the way to go so -> slow and unintelligent)
"That's the setting you avoid on TH-cam because it sucks too much to use" *Me changing from my usual 144p to 240p so I can see the graphics settings* T_T
I'm used to 360p idk why but if I use the 1080p its okay too. Its not necessary for me
@@kinarast I actually understand the feeling quite well. I had to move in with my mother a while back, for the summer, and her wifi was literally running at an average of 200 to 300 kilobytes per second. Over the course of that summer, I got very used to 240p and 360p (if I was lucky enough to watch it in 360p, had to wait until no one else was using the wifi to get that luxury haha). So whenever I'd visit a friend and we'd watch anything at 480p or higher - usually 1080p to 4K - I'd feel really....weird, as if something was off. That, and I audibly exclaimed, like a child witnessing real life magic, "Woah! Holy shit! It's so high res!" Which my friends would then respond by looking at me as if I had lost my god damn mind, watching me going ape shit over what was practically black magic to me. To be fair, if someone said that to me, I'd be off-put and slightly worried too, lmao.
But basically, to wrap this up, when I moved back in with my father and watched videos in 1080p, every day felt like I was in fairy land. As if I had come across something more amazing than a giant pile of gold. But that only lasted for a couple months and now I'm used to 1080p again.
"240p is so bad it's unusable" you'd be surprised what becomes usable on mississippi internet when you can't use anything else
You are the First guy to ever pronounce Wolfenstein correctly. God bless you!
finally a video that looks like my everyday gaming experience
I really appreciate it, how you keep the quality of your videos so high after all these years. Most youtubers fade away into mediocrity at some point, but you are still motivated to produce amazing videos. I really like that. Also thank you for including the songs in the description. :)
dang, you could’ve told me the 240 p resolution with dlss was actually 1080p native and that would’ve fooled me
Funny thing is, I watched this video on 240p without switching
I watch TH-cam videos at 240p by default so it would load quicker
Install magic actions for youtube, then you can enable it for default.
On my mobile i always watch everything in 144p in case it randomly switches to 4g on without me knowing.
TH-cam 240p are not what youtubers uploaded 10-15 years ago -TH-cam has basicilly destroyed those videos with their algoritms -a 240p avi file can be upscaled to 1024p but youtubes 240 are like 32p movies
I watch everything at 1080p anything lower looks crap
If only more games would adopt DLSS 2.0
i think all future triple a will most important one is cyberpunk
Honestly as unimpressed as I was with the original first pass DLSS implementation... I'm extremely impressed with DLSS 2.0. Honestly this, it's AMD counterpart (which will probably be more like DLSS 1.0, especially since it's software based if they attach it to their current line of GPUs) and future iterations will probably be *huge* for both console and PC gaming. I'm going to go ahead and guess that Nvidia will stay well-ahead of AMD due to their software+hardware implementations, but man, this is making me actually consider upgrading my monitor as I can use techs like these to run higher framerates without going completely blurred-out on a higher res monitor.
imagine GPU can now access DLSS in everything, programs, chrome, games or just youtube videos or watching 240p movies enhanced by DLSS. Less internet data to be used in near future.
The technology doesn't currently work in way in which that would be remotely plausible. The first most obvious issue is that it has to be trained for the content its working on. That work is being done via partnerships with game developers and Nvidia, on servers. Its unlikely Nvidia is going to work with every entity that produces something on the internet. If we get to the point in which it works dynamically on a local level, only streaming content is made and distributed in tiers. Largely everything else exists, and is distributed at a single quality level. DLSS is a thing that does what it does in specific market, to capture as much share of that market for a single entity.
@@blkspade23 "The technology doesn't currently work in way in which that would be remotely plausible"
Eh, that's just not true.
"he first most obvious issue is that it has to be trained for the content its working on."
Then it is "remotely plausible", isn't it? You just need to supply the data for it. If only there were servers available with exabytes of videos with various content types to train NNs with.
@@blkspade23 Isn't DLSS 2.0 trained with all kinds of content and not just specific stuff. Also, Nvidia Shield TVs have an AI upscaling that seems linked to DLSS. I don't have one myself so I can't personally comment on the quality, but Linus made a video about it and it seems impressive
@@isodoubIet Not every game just works with DLSS. There is a collaboration between game devs and Nvidia. So just because tons of stuff exists doesn't resolve the issue. If far less work to implement something for a handful of AAA devs (you've made deals with) a year, than all the video out there. While my original comment was technically only accurate for DLSS 1.0, 2.0 still starts with the dev implementing tech Nvidia (and only Nvidia) provides into the render pipeline.
Video processing is far easier than 3D rendering. Video upscaling isn't new, and as we've seen the result better when the input is better. The devices we own do it locally. Content shot on film has practically infinite resolution, but you have to work from the masters. With video games, the content is the master because its being made in real time (well textures are technically fixed), at what ever resolution you have the horse power for. To achieve the same kind of thing for video, without training, we would need a vector based video codec.
The problem DLSS is solving is the performance hit associated with real time rendering of high resolution, interactive content. Better video compression tech is already the solution to streaming, the problem is unfortunately that data caps are largely non-sense cash grabs.
@@blkspade23 "Not every game just works with DLSS"
Doesn't matter. DLSS is a specific implementation. You said the with the current technology this kind of thing is not "remotely" possible. But with the current state of the technology it absolutely is possible. It's a convnet, not string field theory. You grab the training data, of which there's oodles -- just find any random video at the resolution you want and downscale it, presto, and if you're doing this off youtube it's pretty much already done for you, and you run it through. There's no real technical issue here and Nvidia's engineers have made much more impressive advances in the past few years, particularly with e.g. reinforcement learning or GANs which are actually hard to train, unlike a boring encoder-decoder convolutional topology.
I think DLSS will be great for portable consoles in the future.
There's something fun about playing a game in the worst possible way - you have no idea how delighted I was when I found out that No Man's sky let's you play the game at a crispy 64x48p thank to being able to lower the resolution scale all the way down to 10%.
Oh well the AI upscaling addiction is back
It's interesting how Nvidia originally touted raytracing as the killer feature of it's RTX cards, but DLSS has turned out to be the real killer feature.
Another I'd like to see more about is how they're planning to use RTX IO - don't hear much about that but it looks like if you have super fast NVME PCIE 4.0 storage for your games then theoretically your low VRAM card might not matter as much because games can pretty much just stream assets from the drive.
ive always said ray tracing is lazy bloat ware devs just put in. Super waste of power.
7:50 My man predicting future just there
dlss at 240p looks great for streaming. at 720p the video looks great for consumption.
8:48 WOW! Is that what Steve looks like in DLSS 2.0? I'll buy two mouse mats
I never thought I would hear a documentary's voice in a TH-cam channel for gaming.
I realized i couldn't figure out the difference because i was running on 240p. Thanks Australia, Thanks NBN.
could this be used to make games? like say an artist makes a low fidelity textured model of something, uses this to fill in the gaps, and then goes through and removes any artifacts. It could really speed up workflow. dunno much about the industry and the "pipeline" though so I might be talking out my ass
I don't think so. From my limited understanding of DLSS, it is basically an application of a neural network, which has to be trained for each game individually to produce desired results. If you created low textures to begin with, there wouldn't be the required information to train the network with, I believe. But again, my understanding is limited, so I might be wrong.
Not this DLSS, but a different neural network could (and already exists). You could even make a game without textures and it would "paint" it in style you tell it, check th-cam.com/video/u4HpryLU-VI/w-d-xo.html . Hardware is probably not fast enough today for real-time use in games, but in few years we'll be able to mess with this for sure (imagine modding old games with this).
@@qaulwart You are indeed wrong, or should I say not up to date. DLSS no longer needs to be trained for every game (since April or May if I'm not mistaken), it works out of the box with any title which is able to provide motion vector data (it still needs to be implemented by the developers, though), Unreal Engine already has a DLSS branch (available upon request for "real" projects) in which you just turn it on and it works. You can see it in action along with the RTXGI branch (and another branch with caustics) here: th-cam.com/video/ZefvmV1pdP8/w-d-xo.html
Nvidia has other models that they always present. Check Nvidia GameGAN
One model named GauGAN creates better art from strokes of color
One model I think in GameGAN finished Pacman by itself. I think this was used in their presentation about autonomous AI characters
One model generated a 3D world without a game engine
etc
There's various AI Upscale projects for old retro games that utilized pre-rendered images as backgrounds as stuff, like the classic Resident Evils or FF7.
Since their BGs were rendered at mere 240p, upscaling them makes 'em look awful, thus leaving AI driven recreations the only sane option. And the results are pretty stunning
i remember playing pirated games at 800x600 or lower, hardly being able to see the difference.
I was young and not yet spoiled by FHD+UHD graphics.
Things are better when you don't know what you're missing.
"240p that's the setting you don't use on youtube coz it sucks"
Me: *cries in 2G
@@OverbiteGames Wow i did not know satellite internet was still that bad i know 144p uses around 256kbps this may actually be worse than dali up internet from 1998.
@@belstar1128
On TH-cam ,256 kbs is enough to watch 720 p video without buffering
@@Fastwalker27 No you need at least 1.5 mbps more if it its in 60fps for 720p according to a website i found i was wrong and 144p youtube only uses 100kbps.
@@belstar1128
TH-cam is different
It compresses the video so that the bit rate is much lower , hence the famous "TH-cam compression"
From experience :
144p requires only 40kbs
360p : 70 kbs
720p : 200 kbs ( about 300 if it's 60fps)
I have no idea about 1080p , but it's probably 400~500 kbs.
@@Fastwalker27 I don't think so all sources say that even 144p needs more than that 40kbps video would look way worse 40kbps would be barely enough for the sound.
I'm quite confident that AI-completed graphics and videos will become an industry standard before long. It is an extremely efficient tool to improve visuals to close-enough levels compared to actual native visuals, with nowhere near the same processing power levels needed for the results. After all, this is not too different from how human memories work too; making larger images out of limited details.
That 160p to 480p looked very much like an old PS1/2 Game on an CRT-TV. Really gave me nostalgic feelings :D
There should be a "DLSS ECO" mode that runs your game at half its native resolution, upscales it back to native, and disables GPU Boost to save energy.
Sounds good for laptops