Never forget that in Digital Foundry's original video for Gears of War Ultimate Edition, Jon put on some weird accent that we never heard again
Had to go check…what was that!
I did not know this😂
wait... what? I gotta check that later 🤣
Liquid Jon
LOL! We need this as a viewer question on df direct.
“Ring the bell for allegedly instant notifications” - killed me lol
3:25 spitting bars dayum
LOL yeah hahahahaha
🔥🔥🔥
God damn Rich and his pudding
@@kraithaywire lol have you guys noticed that Google is having trouble with our spelling or slang and giving the option to 'translate to English' even though it is English? lol. Both of you have 'translate' under your comments.
Drake has been real quiet since this dropped
This reminds me of those old Xzibit memes. "We heard you like upscaling. So we put an upscaler for your upscaler so you can upscale while you upscale."
God, this upscaling and frame generation tech keeps getting better. I'm looking forward to how it will be implemented in portable gaming: render a 720p 30fps game and present it at 1080p 60fps with a minimal latency penalty.
Inject all of it directly into SteamOS please!
@@Belfoxy If Lossless Scaling got a Linux port, we wouldn't need it.
I'll take that u. You don't need it.
For what it is, it looks great. The more options there are, the better. I hope MS improves Arm gaming performance either through software tweaks or the second gen of these types of processors. The idea of the NPU doing the work is an interesting one and I look forward to all the DF coverage of it. Also, I've done no research, but why is Auto SR limited to 700-900p? Is the NPU not up to the task, or is there some other explanation?
Probably the NPU is not able to do more.
The poor gaming performance of the Snapdragon X Elite may be partly due to Microsoft's x86-to-Arm emulation layer. However, it seems likely that the poor performance is mostly caused by Qualcomm's GPU driver, which doesn't contain two decades of game optimizations like Nvidia's and AMD's drivers do. Intel had similar issues with the driver for its then-new GPUs.
"And the pudding in question is actual gaming". There aren't enough t-shirts in the world for Rich quotes.
Man, those HUD elements remind me of AMD's Morphological Anti-Aliasing from back in the day, that post-processing stuff.
This looks a lot better. Morphological filtering just blurred everything; this actually upscales and filters 2D elements.
Within the limited scope AutoSR is shooting to cover, I'd say it does the job pretty well.
This almost seems like we're soon back to dedicated cards for certain processing. In the past PhysX, today AI.
Well I'd imagine they stick with it on the CPU or add it to the GPU
NPUs are already major CPU features. Dedicated chips for esoteric functions are a very hard sell these days in the consumer sector. Even back when PhysX appeared it was never expected to be a big seller. My write-up of the CES where it was first publicly promoted concluded that it was plainly an acquisition target for a larger company that would subsume the functionality into its own products.
GPUs already fit AI acceleration really well, and recent generations include dedicated silicon for it (e.g. the Tensor cores in RTX cards are effectively an NPU).
The point of an NPU as part of the CPU is optimizing for power on laptops (not having to wake up the dGPU, if the laptop even has one to start with).
@niter43 So the things which Nvidia calls "tensor" cores are approximately the same thing as an NPU? Except that the NPU is part of the CPU silicon and the "tensor" cores are part of the GPU silicon? So could GPUs with enough of these cores add AutoSR support to a Windows system? Seems unlikely.
I also think for major AI applications like LLMs, people don't even use Nvidia's "tensor" cores (or any NPUs) but rather their "CUDA" cores.
Needless to say, AMD uses different names for all these things.
@@cube2fox There was just no industry-agreed term (and probably still isn't, to this day) back when Nvidia introduced Tensor cores. Nvidia named them with a nod to the TPU acronym, which was quite popular back then thanks to Google calling its own new accelerator chips TPUs. Just like the industry didn't immediately come up with the term "GPU".
Nvidia's own LLM inference ("execution") engine uses Tensor cores, as seen in their "Chat with RTX" app and the open-source TensorRT-LLM library it's based on. Annoying to use, but it seems to be the fastest engine for Nvidia hardware. llama.cpp/vLLM and others seem to use CUDA mostly for compatibility with older generations.
10:10 AutoSR was way sharper here, which makes me think the stability issue was from Control's TAA. What if you had 720p DLAA + AutoSR vs 2160p DLSS Ultra Performance? That would've been a fairer test, since we'd be comparing images with the same anti-aliasing.
You can't get DLSS on machines with AutoSR. DLSS requires an Nvidia GPU, while AutoSR is currently only on laptops with Snapdragon processors from Qualcomm.
@@Dinjoralo. Then using FSR3 Native vs Ultra Performance, XeSS or TAAU would've been a better comparison. Surprised they didn't do this.
It does, but it also looks like it has more halos and artifacts. I'm loving this AI upscaler competition though. The more the better.
@@Hybred None of these technologies have a usable ultra performance mode, outside of the XeSS versions on Intel GPUs, which runs into the same problem they have with DLSS.
The point, I think, is that the comparison is unfair. It's not to say "Microsoft did a bad job here", it's to say "tame your expectations, this is not going to fix anti-aliasing problems the way the other upscalers do".
@@Dinjoralo. Why does it only work on a single processor? That wasn't made clear. I thought AutoSR came on all Copilot+ devices. Does it actually need dedicated hardware to do that?
I hope they implement something like this in their consoles or at least their next console!
Nah, they're definitely spending lots of money on a technology that's mostly just for gaming only to skip it on their next console.
@@Chunnibyoubaka With the recent business decisions made by Microsoft's Xbox gaming division, I wouldn't be surprised.
A comparison between RSR and Auto SR would be great, since both technologies work in post-process and claim to run on all games.
It’s time for the Frankenstein test. Let’s test an image fed through every upscaler at once! 8k gaming? 😂
We already have 8K gaming. Nvidia added the Ultra Performance mode to DLSS when they launched the GeForce 3090, specifically to market being able to run DOOM (2016) at "8K" resolution and acceptable FPS.
8k soup
This is such a good idea
Too much gaming
@@Nicholas_Steel You can't say you are playing in 8K if that thing is on. Not even 4K.
Stop being delusional
Like them or not, these are wonderful tools.
I've always been disappointed that they debuted and largely remained on high-end devices when it's the underpowered (or low-power) devices where they make the most sense.
I’m hoping some of these really powerful APUs we are seeing will be able to leverage the NPUs AMD is adding to their chips for a ML-upscaler & framegen.
No idea if the architecture allows it, but they are all just matrix multiplication accelerators (as I understand it), so it's the right math at least.
My only peeve is that every company is remaking the same tech over & over, some standards & coordination would be good for consumers.
They make sense at every level. They actually work even better at higher resolutions with higher-end hardware. I'm enjoying 4K gaming on a 4070 thanks to upscaling tech. Sure, it's not quite as good as native 4K, but it's still better than native 1440p for the most part. There's still plenty of older and less demanding games I can run at native too.
So, why can't this run on the GPU and be applied to browsers so we get upscaled videos (YouTube, Twitch, etc.)? I know Nvidia and AMD both have features to do that, but neither seems as good as AutoSR. Imagine upscaling 720p and 1080p videos to whatever resolution AutoSR upscales to and then using frame generation (like Lossless Scaling) to make them 120 fps.
I'm also wondering if there's any reason it couldn't work on video. Or... on streamed games...
Why couldn't it? Probably no reason; the power of modern GPUs likely runs rings around these NPUs.
How? Someone needs to actually implement it, probably targeting CUDA for Nvidia (or something special) and maybe ROCm for AMD.
Microsoft Edge literally has this feature with supported hardware
@@simonedeiana2696 Yeah, people just like hating on Edge, so they're never aware of all the features it has.
@@ZedDevStuff And I use Edge, since the Xbox One X and now the Series X, and order some things off Amazon!!
Almost every video I watch on YouTube, I notice 8-bit color banding artifacts. For example at 15:16, when you're saying thanks for watching, there are 8-bit color banding stripes on the black computer case to your left. I see around 13 separate stripes. Starting from the upper right, the second stripe tends towards magenta, and each stripe is made of single-bit channel variances, i.e. from 1F1F24 to 1F1F23 to 1E1E22. So when is somebody at Google/Microsoft going to use AI to upsample YouTube's color space to 10-bit HDR? YouTube rolled out "Ambient Mode" and it's a nightmare to look at because the entire "ambience" is made up of color banding artifacts. The "Ambient Mode" feature flopped. Maybe because the 8-bit looks horrendous.
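For anyone wondering why those near-black stripes are so visible: a quick, purely illustrative bit of arithmetic (my own numbers, nothing from the video) shows how few code values 8-bit actually has across a narrow dark range compared to 10-bit.

```python
# Illustrative only: count the distinct code values available across a narrow
# dark luminance range at 8-bit vs 10-bit. Fewer steps = more visible banding.

def code_values(low: float, high: float, bits: int) -> int:
    """Distinct integer code values between two normalized (0..1) levels."""
    top = 2 ** bits - 1
    return int(round(high * top)) - int(round(low * top)) + 1

# A dark background spanning roughly 10%-14% of full brightness:
print(code_values(0.10, 0.14, bits=8))   # 11 steps -> visible stripes
print(code_values(0.10, 0.14, bits=10))  # 42 steps -> a much smoother gradient
```

That step-count gap is basically the whole argument for a 10-bit pipeline on dark gradients.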
Whoa there! Let's put Windows into dark mode before you show those bright system dialogs. Love the content!
Since this is post-processing, does this mean it'll work on pre-rendered things? I'm thinking specifically of poor-quality FMVs and pre-rendered backgrounds.
It probably works on everything whatsoever if it's really post-processing, similar to the upscaling a TV can do. But the machine learning model is very likely trained for upscaling 3D games, so I assume it won't work as well on, say, YouTube videos.
You can try Lossless Scaling; it works on any window.
If it is really post-processing, tech like this could be made into an HDMI dongle to upscale anything (old consoles, the Switch, etc.), some HDMI dongle with an ARM processor.
There are already some that do this, applying traditional filters for 4K upscaling. Normally people use them for old consoles that don't have 4K output, but the idea of making one with an NPU focused on this is really interesting. It would definitely need an extra power supply, though.
This looks interesting, useful for devices/games that need it. But some of the failures are surprising, for example the horrible noise around Jesse's shoulders at 8:52, or the complete failure to reconstruct thin power lines in CP2077, e.g. at 10:32. I wouldn't be surprised if Radeon's in-driver upscaler (RSR, based on FSR 1.0) were very close, given that ML upscaling is apparently very limited without temporal data and other inputs.
I admit I know nothing of this black magic, but I'd love to see it implemented into all Xbox consoles. Most Xbox One and Xbox 360 games run in the ranges of acceptable resolutions for this.
No neural processing unit in any current consoles, so I don’t think you can
That’ll likely be next gen
That'll probably come from mid-gen refresh/next gen
Or imagine an AI upscaler for old consoles, with component and HDMI input and no lag 🤔
No, but Microsoft could allow FSR if AMD would implement it in an update for Xboxes. I mean, it's an x86 AMD chip, it could be done.
I'm interested in Auto SR for emulators, as DLSS/XeSS is probably never going to be usable for them due to all the motion vectors needed. FSR is maybe usable, but Auto SR may be the obvious choice.
Lossless Scaling works on emulators
There's already RSR if you have an AMD card
@@SogonD.Zunatsu All I can find for Lossless Scaling in emulators is frame-gen type stuff, taking a 30fps game to 60.
Also, Auto SR will be far better than RSR or any other non-AI scaler since, unlike them, it runs through a neural net, which will recover more fidelity and fine detail.
@@Gravy1255 Lossless Scaling works on anything; you can do upscaling with various algorithms including FSR, frame generation, or both together. All you have to do is configure the settings, select Scale, then focus the window the emulator is running in. Make sure the window size matches the resolution the emulator is currently rendering at, and then it'll upscale that window to fullscreen borderless at your native output resolution.
You can also set it to autoscale games with certain configs based on executable or window names
It would have been interesting to compare Auto SR to NIS (Nvidia Image Scaling); these are both post-process upscalers.
For Control you can use the modded version released by a Dev that includes upgraded DLSS and higher quality effects.
Your videos are a breath of fresh air, always enjoy watching
The Series X also has 97 TOPS of INT8 ML inference. Just enough to run Copilot and Auto SR! Interesting......🤔
It would probably come with an update when the PS5 Pro releases. But it will be fully supported in all games when the next gen releases. This always happens.
I'm so curious with stuff like this how much the NPU (or whatever it's called) is actually needed, or how much more efficient it is than running it on a CPU. Like, could they reasonably support it on non-Copilot+ machines / machines without a dedicated neural processor, or nah?
I don't believe so; neural processing units are pretty specialized to perform low-precision arithmetic and feature novel architectures. I assume it'd be like using a CPU to render graphics.
Even with a relatively beefy NPU (Microsoft requires at least 40 TOPS), Auto SR only supports upscaling from inputs up to 900p, which suggests it isn't a particularly light task. So I guess something like an NPU really is necessary.
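As a rough sanity check on that (back-of-envelope only; the layer and channel counts below are assumptions, since AutoSR's actual network isn't public), a small convolutional upscaler running per output pixel at 60fps already lands in the tens of TOPS:

```python
# Hypothetical small conv-net upscaler, 720p -> 1440p at 60fps.
# Layer count, channel width and kernel size are assumptions for illustration.

layers       = 8            # assumed conv layers
channels     = 32           # assumed feature channels per layer
kernel_taps  = 3 * 3        # 3x3 convolutions
out_w, out_h = 2560, 1440   # 2x upscale of a 720p frame
fps          = 60

ops_per_pixel  = kernel_taps * channels * channels * 2 * layers  # multiply + add per tap
ops_per_second = ops_per_pixel * out_w * out_h * fps
print(f"~{ops_per_second / 1e12:.0f} TOPS sustained")  # ~33 TOPS with these assumptions
```

With numbers like that, a 40 TOPS floor and a sub-900p input cap both look plausible rather than arbitrary.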
Has anyone invented something like DLDSR for older games? The thing about older games is that they're not very demanding on modern PC hardware, and their engines often aren't designed for high framerates because consoles aren't either. Pushing fidelity is the logical way to make use of my excess compute in these games, but I want to do it a little more efficiently than with pure render scaling.
DLDSR is handy for games that don't support SGSSAA. Otherwise SGSSAA produces slightly more stable results (or you can do 8x SGSSAA for super overkill internal resolution if your GPU can handle it).
I wonder if this could be used for video like RTX Super Resolution. It's a post-process effect, and seeing as many players already use DX11, it may be possible to implement.
I don't see why an upscaler like this couldn't work, and if it did it could be a game-changer. Whilst it lacks the compression artefact reduction of RTX SR, it certainly seems to be leagues ahead in terms of sharpness and clarity, and it's real-time unlike Topaz, so for high-bitrate 1080p content something like this could be incredible.
Rich your Microsoft Store banner @2:11 is legit authenticated hairworks, no glitches bruz ;D Fellow baldy here ☺
It would be interesting to see a comparison with the LS1 upscaler in Lossless Scaling.
LS1 is also a machine learning upscaler
If they can get this working on Xbox Series X before November 7th, they will decimate PS5 Pro sales. Microsoft can just be like: why pay $700 for an update that we will give you for free on the Xbox you already own?
*Adding Microsoft Auto SR to DLSS on Dragon's Dogma 2 should render a slightly better image while sparing some extra resources.* 👍
This tech reminds me of Radeon Super Resolution in how it can be applied to any software, though it's more restrictive due to the input resolution limitation. I wonder if RSR has improved since launch. I know it's still not as clean a resolve as FSR, but I wonder if AMD has improved it since its introduction.
They didn't.
Honestly... that looks really good. Like, for 720p upscaled to a clearer image, it's a very notable improvement. I appreciate the extra legwork of ensuring a clean image first to give the post-process a fair shot.
If this type of Auto SR were used on the Switch 2 for Switch 1 games, that would be amazing!
This "locked" 720p internal resolution makes me quite suspect of this being utilised for Microsoft's rumoured 4k handheld. If you had a powerful enough NPU I imagine you could reduce that 12ms latency somewhat, allowing them to push a 720p console with the same featureset as the home console, ray tracing and nice post-processing, and upscaling back to 4k.
No one ever said their handheld would be or target 4K at all though.
No handheld can game well at 4K in the slightest
After having three Vitas, then moving to the Switch, and now the Steam Deck, I have yet to see how a Microsoft handheld can compete at this point.
@@TheL1arL1ar It would just be a means to have Game Pass, and thus COD, in a portable form factor.
@@ZeitGeist_TV I have a gaming laptop with Game Pass, and COD I haven't played since Black Ops 2. It's better to purchase off Steam since you are getting the complete PC version, and not a console port.
Isn't this the same as the Lossless Scaling program, but running on a chip dedicated to neural networks?
Both run on chips, both use NN
I believe Lossless Scaling doesn't have strict NPU requirements like Copilot+. I guess Lossless Scaling doesn't require an NPU, and may not even be AI-based.
Lossless Scaling tries to scale low-res images without the usual blur we get with other upscaling techs; it doesn't add new detail or try to make the images look higher res. AutoSR does try to get a higher-res image and adds new detail (or fake pixels, if you prefer the term).
@@ghost085 Unless you're using bilinear upscaling, you're always adding new details by upscaling.
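To make that distinction concrete, here's a minimal bilinear resize sketch (plain NumPy, purely illustrative; it has nothing to do with how Lossless Scaling or AutoSR are actually implemented). Every output pixel is a weighted average of at most four input pixels, so it can never produce a value outside the local input range; an ML upscaler, by contrast, is free to output "detail" that wasn't in the input at all.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Naive bilinear resize of a 2D grayscale image.
    Each output pixel blends at most four input pixels, so the result can
    only smooth existing information, never invent new detail."""
    h, w = img.shape
    out = np.zeros((h * factor, w * factor), dtype=np.float32)
    for y in range(h * factor):
        for x in range(w * factor):
            sy, sx = y / factor, x / factor          # source coordinates
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0                # blend weights
            top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
            bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
            out[y, x] = top * (1 - fy) + bot * fy
    return out
```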
I wonder how XeSS/FSR 3.1 Native (720p) + AutoSR would look vs DLSS 3.7 Ultra Performance at 4K.
I think it would be interesting (assuming you can get them to run) to try out Auto SR on games that are too old for the other upscalers. Maybe some Unreal Engine 3 games? Some stuff from 2010-2015 ish maybe?
You should try this in a VR environment, where that extra detail is going to make a huge difference.
Holy crap! Yes!
I hope that Auto SR will be released on Xbox.
Microsoft doesn't even like Xbox.
Xbone is dead brother
@@stratoshd9043 Yes, you're right, the Xbox One line of hardware is no longer manufactured.
Xbox and PS5 don't have SoCs with neural processing parts. The best they can get is FSR...
@@RedPillAlways 💀 but true, they were trying to dissolve it in the Xbox One days, but this gen they just decided to burn it with gasoline.
When I heard the marketing lead criticise Microsoft over having no funds for advertising, and shareholders getting spooked, I knew it was over. It has been slowly bled by Microsoft executives into a service.
I love it. Sharp image.
Seeing how the PSSR launch went, it's clear who's at the forefront of this for the next consoles, and the fact that it's native to the system makes me excited for the future of gaming on Windows.
I really like the approach of cleaning up the signal vs force-boosting to a fixed resolution. I think the quality of the image is very good, especially considering it has to work with so much less information vs DLSS. DLSS and FSR sometimes have the bad habit of adding shimmer and an overly sharp, artificial-looking image; Auto SR seems to do the exact opposite, it cleans the image up and smooths it.
New Richard videos are always appreciated.
Mr. Leadbetter, you are a very intelligent man. Much respect to you and thank you for all that you do. Great video 👏🏼.
My big hope for NPUs showing up in commodity hardware was exactly this: low-compromise, vendor-agnostic AI upscaling. I can only hope that what seems to be a generic NPU-accelerated upscaler gets integrated directly into games and given the temporal treatment. Of course, that all relies on this shaking out to be vendor-agnostic once x86 NPUs have rolled out.
Why the hell can't we feed it a 1080p image to upscale it to 4k?!
It's more of a traditional form of upscaling, the way I see it, like what you'd use for video playback. It's not a huge difference, but it's a way to upscale lower resolutions to higher resolutions without the usual problems when the render and output resolutions aren't an even multiplier like 2x. So you can more easily upscale 1440p/1080p to 4K without it being a blurry mess. At least that is my understanding. I guess they just need to fix the artifacts.
Rich might have said it, and I apologize in advance: can you use DLSS and Auto SR simultaneously? He showed the differences at 9:04, but I'm not sure if you can use them at the same time. Sorry for the noob question.
If they can make this work from higher-res inputs and fix HDR support, this could be awesome. I use Lossless Scaling whenever 2160p DLSS isn't giving the performance I want, upscaling from 1800p or 1620p. But this seems leagues ahead of any of the algorithms in there, so it could be useful.
It could also be a cheap way for Xbox to compete with the PS5 Pro. I doubt they'll do this, but imagine a Series X/S refresh with an added NPU. Games using FSR 2 could upscale to slightly reduced resolutions to limit artifacts, then use AutoSR to go the rest of the way. I.e. instead of 1080p to 2160p, it could be 1080p to 1440p with FSR 2, then AutoSR to 2160p (rough pixel math below); the image would probably be similar (though less artifact-ridden than something like PSSR) whilst freeing up a bit of performance. And for the increasing number of games missing 2160p and even 1440p, this would be game-changing.
Again, I know this won't happen, but I think it hints at the future of upscaling: a mix of heavier temporal techniques with this new AI-based spatial method. This will be especially crucial if 8K does take off in the next generation, as a technique like this is the only way a decent 8K image is happening (and at 8K the difference between this and a native render would be invisible on any screen smaller than about 120").
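The rough pixel math behind that two-stage idea (illustrative only, and note that AutoSR currently caps inputs below 900p, so this exact chain isn't possible today):

```python
# Pixel counts for the hypothetical FSR 2 -> AutoSR chain described above.

def pixels(w: int, h: int) -> int:
    return w * h

native  = pixels(3840, 2160)   # 4K output
fsr_in  = pixels(1920, 1080)   # internal render resolution
fsr_out = pixels(2560, 1440)   # FSR 2 target in the two-stage idea

print(f"single stage: {native / fsr_in:.2f}x pixels reconstructed at once")                    # 4.00x
print(f"two stage:    {fsr_out / fsr_in:.2f}x (FSR 2) then {native / fsr_out:.2f}x (AutoSR)")  # 1.78x then 2.25x
```

Each stage handles roughly a 2x factor instead of one 4x jump, which is exactly where spatial artifacts tend to come from.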
9:54
AutoSR has ghosting around the edges of the running character. That should indicate it's using history, right?
Anyway, the NPU does have access to previously rendered frames, so there's no reason to believe they couldn't use historical frames to improve the upscaling quality.
That indicates that the image that it is fed has ghosting
I wouldn't mind seeing a head to head against FSR1 as upscalers that only require the final image will always have a use case. Curious to see how these two compare and if using neural processing has the potential to take final image upscaling further than what FSR1 currently does.
Of course it would be better. Have you ever tried FSR 1? 720p upscaled to 1080p looks like shit. And this thing is doing 4K.
If Auto SR produces its own anti-aliasing, wouldn't it be beneficial to turn off in-game anti-aliasing in these games? Otherwise the algorithm has to fix the blur caused by TAA solutions.
Also, if it's possible to create a neural network that can do this, what's stopping anyone from making a neural network that can conjure up its own guessed motion vectors?
What about good old games with no UI scaling like Icewind Dale 2 or Fallout 2 meant to be played at 600p or 768p?
Lossless Scaling or Magpie
Any idea why the 900p limitation?
Is it constrained by the NPU compute?
It will be interesting to see the differences if enabled on x86+NPU chips
must be that
Technology has also invented "a pair of glasses" as a preprocessing filter for your eyes to see a sharper and more detailed image. Auto SR feels like an additional pair of glasses.
AutoSR, according to Microsoft's presentation, would've unified DLSS/FSR/etc. when available... is this the case?
Microsoft should produce an mClassic-like device with this upscaling tech on it.
The almost fixed 12ms per frame makes me curious if and by how much the latency can be improved by future hardware. Cool tech, overall!
Do you think they would incorporate history in the future?
Seems pretty cool, tbh. I would love it if the range of resolutions it worked on was increased.
In my own use case, I play PC games on a 4K 55-inch LG C1. 4K is getting quite demanding for my 3080 on new games (especially the 10GB of VRAM). So maybe doing a 1440p DLSS Balanced mode, with AutoSR scaling that to 4K, could be nice.
Very promising tech. I think it's fascinating that the cost here is a little bit of latency rather than performance.
Exciting. The benefit of having this be at the MS level and widely accessible, is that game creators can start to program for the strengths of this platform. Just as studios have started to consider how to implement DLSS, standardizing around a core upscaler maybe gets them to better TAA solutions and such that maximize the benefit of this. Perhaps.
That looks surprisingly good, and when you showed the 4-way performance comparison there, I couldn't believe it.
This will be a godsend for all lower-powered mobile devices in our AI-powered future.
I launched Witcher 3 on my Surface Pro 11 and was surprised by the clarity. It was running at 50-60 FPS inside the city and the image looked really quite good on the 13-inch screen. The downside is that the UI elements looked quite blurry, but that's not that annoying really.
It’s impressive, but I do notice the footage is a bit choppier with the tool
So can you use Auto SR on normal x86 desktops, or only on Qualcomm laptops?
Only Qualcomm, because NPUs.
You can probably use it on any pc with an NPU, but currently only Qualcomm devices have one.
@@SterkeYerke5555 The horribly named new AMD laptops have them, I believe.
@@mystic3309 Strange that you think only Qualcomm has NPUs.
Microsoft Copilot+ (which I guess includes Auto SR) is supported on all CPUs (SoCs, "APUs", etc.) that have an NPU with at least 40 TOPS. AMD and Intel have already announced chips with NPUs similar to the one in Qualcomm's chip.
Is there a forecast for when Auto SR will be available for Windows on x86/x64?
I really hope we can get this working with retro gaming... that would be amazing.
If Windows could use SR/NIS/FSR and let you select between them, that would be quite cool, even more so if it kept all the config in Windows settings.
Auto SR looks pretty damn amazing.
I'd love to see how it compares to Sony's Reality Creation on their TVs.
It looks good, but it still has that typical "blurry, wormy" effect of upscalers. They could do a better job with the budget they have.
Wtf is it supposed to be? All the comparisons look the same to me.
It looks like total shit lmao
Would you be able to show Auto SR with an older game, or maybe a game where the textures in general are lacking (low settings, etc.)?
"allegedly instant notifications" 😆 thanks for the video on the topic 👍
How about using Auto SR with an older game that supports MSAA to eliminate flickering?
This AutoSR stuff would be perfect for the docked experience of these handheld PCs!
Playing at native 720p on the go, then kicking in AutoSR to upscale the image once it's plugged into the wall and docked, outputting to a 4K TV.
Every modern TV has options like Auto SR and AFMF, they're just called something different.
@@eSKAone- but does it look that good?
@@andremalerba5281 Nah. Nothing can beat an NPU. Not those shitty TVs.
Why no comparisons with native res over 720p?
Any chance of getting an external GPU connected to one of these AutoSR laptops to feed it more horsepower?
This kinda reminds me of vector tracing. Very curious to see where it goes from here.
A future where DirectSR allows consoles and PCs to ship games that support whatever AI scaler the end user wants would be nice.
What about using the snapdragon laptop as an AutoSR filter device using video captured from another PC?
They need to provide a standardised way to access motion vectors in DX11/12 that any upscaler can use, including Auto SR. That way proprietary solution vendors would still be happy. Never mind, that seems to be the DirectSR you mention at the end.
I don't fully get this stuff, but it makes me wonder if this kind of tech could help with solutions like xCloud? I'm assuming there would be much less data to stream.
The problem with xCloud is the artifacting. It could help, but idk by how much. Also, it'll add more lag.
I'm missing a comparison to native 4k. Does AutoSR actually resolve additional detail accurately to the source material or is it mostly returning a sharper image with incorrect detail that looks nice?
Shame there was no comparison for power consumption. How much does using the NPU with Auto SR shorten battery life?
Does AutoSR really need over 40 TOPS or will it work on older SQ1 or SQ3?
What about an FSR comparison?
It's not a machine learning technique
I would like to see an RSR comparison: both are post processing filters that can be applied to any game. Once those new Ryzens are out, owners of those devices will probably have a choice between either AutoSR via Windows or RSR on a driver level, fitting a similar use case.
@@Nat-yf6ff Why would that be a problem? Both are solutions to the same problem; the fact that they take different approaches only makes the comparison more interesting, really.
@@danarseptiyanto4066 Not really. FSR is still implemented on a per-game basis and has access to more data than Auto SR does. Rich even explained in the video why it's not a fair comparison, but I suppose it's asking too much for people to actually watch the video before rushing to the comments section to bless everyone with their awesome hot take.
@@CaptainKenway There is Radeon Super Resolution (RSR), which is a universal driver implementation of FSR 1 that is also strictly post-process with no input from the game's rendering pipeline. As for a comparison, I dunno how well the Lossless Scaling program works on an ARM CPU, but it also implements FSR 1 and Nvidia's NIS and has its own post-process upscaling method.
Since this is post-process, I wonder how well it works on video. I'd like to upscale some old DVDs that will never get a Blu-ray upgrade, like ST:DS9.
Have you tried an ffmpeg Lanczos upscale?
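In case it helps, a minimal sketch of that (the input filename and the 1440p target are placeholders; the relevant bit is the lanczos flag on ffmpeg's scale filter):

```python
import subprocess

# Offline (not real-time) upscale of a video using ffmpeg's Lanczos scaler.
# "dvd_rip.mkv" and the 2560x1440 target are placeholder values.
subprocess.run([
    "ffmpeg", "-i", "dvd_rip.mkv",
    "-vf", "scale=2560:1440:flags=lanczos",  # Lanczos resampling
    "-c:a", "copy",                          # keep the original audio track
    "upscaled.mkv",
], check=True)
```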
Can Auto SR be run on Tensor cores, or only on an NPU? Though I guess if you have access to the superior DLSS, this would be kind of pointless.
I have a question: can it keep up? Can you use some old game that gets 500fps and test if it can output those frames? For science!
AutoSR sounds interesting but DirectSR sounds like the solution we really need. Looking forward to when that releases and the big 3 make their SR solutions work with it.
So, has this update rolled out?
If it just looks at the frame and nothing else, can it upscale video also?
Not just that, but generate in-between frames too.
@@mclarenf1gtr99 Yeah, but that usually looks terrible. Good upscalers for video are always welcome though, especially at the OS level.
This is precisely what I've been saying Apple should do with the M4 and MetalFX. So far they haven't used the CUs on Apple silicon at all for hardware-based upscaling. The M4 is supposed to add way more for Apple Intelligence; hopefully they also add it to MetalFX.
Man, the future is amazing.
Will it work on an M.2 NPU, or does it need to be done on the main SoC to keep latency down?
this can be amazing for handhelds
We seem to be getting to an interesting spot. There will come a time when resolution will be a meaningless property of performance with this level of "works everywhere" upscaling.
Sounds interesting; maybe some lower-power-consumption PC gaming is coming. Perhaps a DirectSR + AutoSR combo is possible. Thank you!
How does this compare to DLSS 1.0? I never had the chance to experience that in person. Is there any game left that still supports it?
It would be interesting comparing this to NIS (enabled in the control panel), which also works in a similar way, minus the AI.