DLSS isn't hardware based, it's hardware accelerated. It could run on any hardware (if not always very fast), and could run just as fast on any hardware that has matrix solvers (which Nvidia markets as Tensor cores) if Nvidia would allow it.
Also worth noting is that Nvidia "hardware accelerated" features tend to use both the Tensor cores AND additional host CPU cycles, at least on my 4080 Super running the same benchmark (Cyberpunk 2077) with features like RT or DLSS enabled.
It has been done: DLSS can run on GTX cards, but due to the lack of hardware acceleration it runs much worse than native rendering. As far as I remember, I saw it in a video about Uniscaler mods or something; the guy was trying out FSR 3.1 in older unsupported games.
So you mean it's not viable unless hardware accelerated? Just like any other hardware accelerator that does something that could be done in software? It's hardware based. It doesn't work correctly without the hardware.
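To illustrate the distinction this thread is drawing, here's a minimal PyTorch sketch (my own illustration, not anything DLSS actually runs): the exact same matrix multiply executes on any backend; dedicated matrix hardware just makes it dramatically faster.

```python
# Minimal sketch: "hardware accelerated" means the same math runs anywhere,
# just faster where matrix units exist. Assumes PyTorch is installed.
import time
import torch

def bench(device: str, dtype: torch.dtype) -> float:
    a = torch.randn(4096, 4096, device=device, dtype=dtype)
    b = torch.randn(4096, 4096, device=device, dtype=dtype)
    start = time.perf_counter()
    for _ in range(10):
        _ = a @ b  # identical math on every backend
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels are async; wait before timing
    return time.perf_counter() - start

print("cpu  fp32:", bench("cpu", torch.float32))
if torch.cuda.is_available():
    # fp16 matmuls get routed to the matrix units (tensor cores) on RTX GPUs
    print("cuda fp16:", bench("cuda", torch.float16))
```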
Got a 6950 XT and a 7900 GRE from XFX, and I was surprised by the design and overall quality given the lower price. Man, both coolers were able to easily keep temps in check even when pushing 300W+, which also means that with some UV and a 240W limit both cards ended up really quiet!
The RX 6000 series was amazing; the 7000 is mid. Not sure what they did with the cooler, but even if you tune down that out-of-the-box turbine, the TUF or Nitro is still a good 10 degrees cooler at the same noise level (XT/XTX).
13:00 Always find it somewhat funny that a lot of people in the west think Gigabyte, MSI or Asus are the "big boys" in GPU manufacturing and everyone else is small time, while in reality the real GPU giant is Palit Microsystems. They overtook Asus back in 2013 to become the largest GPU AIB. As for the other players in the market which are not that well known by western audiences and reviewers but are still very large: Colorful (the largest in China, which moves massive volume) and PC Partner Group (Manli, Zotac, InnoVISION), which also does a lot of contract manufacturing.
@@あなた以外の誰でもない Model is also relevant. Palit and Zotac make some very underbuilt cards as their base models for midrange cards. Much worse thermals than you'd get with a base model like an MSI Ventus 2X, Asus Dual or Gigabyte Windforce.
Hot take: I think FSR targeted universal card support so that it wouldn't be ignored. Sure, there was probably a hope some games would choose FSR over DLSS, but really I think the main goal was just to have it included at all. If a game dev often chose not to include DLSS despite it being in ~90% of new cards, why would they bother with FSR if it only worked on 5-10% of new cards? By working on all cards, including Nvidia cards that don't support DLSS, it evened the playing field on whether a developer would include it or not.
Yeah, at the time (and it may even still be the case), there were more people with Nvidia GPUs that couldn't use DLSS than there were people with AMD GPUs. AMD leveraged those neglected Nvidia owners to get FSR into games (because, if devs wouldn't add it for the sake of AMD owners, they'd at least do it for all of the 1080/1660 owners) and, now, it's an established technology that people expect to see in new games.
It was open for much the same reason Java was open: to stop the competitor closing the market (Nvidia graphics or Windows OS). If it hadn't been for Java being open source, Microsoft would have closed the internet completely. Same for FSR: if AMD hadn't open sourced it, Nvidia would have closed off the market and everyone would have to license DLSS from Nvidia, who can refuse.
@@Osprey850 I use FSR because my only other option is NIS, which already shows why my next card is going to be AMD again after a long time. Yeah, FSR was made to give people with cards that can't use DLSS an option.
Machine Learning Engineer here: the reason (I believe) that FSR is not AI based is that AMD's ROCm (the equivalent to CUDA) isn't supported on Windows, after years of promises that "it will come soon". It means that on Windows systems, AMD cards have no way of utilizing the hardware to efficiently run AI tasks. One way around this is to use DirectX 12 as the execution provider for AI tasks, but it is slower than running them bare metal as ROCm would allow. Nvidia not only has CUDA as an execution provider but also TensorRT, which is even more optimized and fully utilizes the Tensor cores. If AMD uses DirectX to do this, it means the upscaling model will have to share the same resources with the game engine that renders the game. Without proper ROCm support for Windows, I am very skeptical about whether FSR4 will run properly. Let's see.
Dude, this is an awesome explanation for someone still tryna rub rocks together for more FPS haha, super interesting and really clear on what's going on here.
@@keashaneverts4452 Glad I could help 😌 Another point: even with proper ROCm support on Windows, AMD still lacks dedicated hardware (Tensor cores), which means the model will still need to share resources with the game rendering. So, in the current state, AMD has two options: either A) train a very simple model which would run fast but lack visual quality, or B) train a proper model, but enabling FSR4 would absolutely tank FPS. I am guessing they will go with option B) to showcase at least visual quality parity with DLSS, and then provide both ROCm support and dedicated hardware on future cards. Their best option would be to launch FSR4 alongside proper new cards that can handle it.
I don't think AMD has to implement the whole ROCm stack for something like this to work. They'd "just" have to implement the specific calls that this AI-based model uses into their Adrenalin drivers. And they could totally do that, since they have the know-how.
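For what this thread means by "execution provider", here's a hedged ONNX Runtime sketch: the same model file can be dispatched to TensorRT/CUDA, ROCm (Linux builds only), or DirectML on Windows, with a CPU fallback. `model.onnx` is a placeholder path, not a real file.

```python
# Sketch of execution-provider selection in ONNX Runtime; provider names
# are real, but which ones are available depends on your onnxruntime build.
import onnxruntime as ort

preferred = [
    "TensorrtExecutionProvider",  # Nvidia, most optimized
    "CUDAExecutionProvider",      # Nvidia
    "ROCMExecutionProvider",      # AMD, Linux-only builds
    "DmlExecutionProvider",       # DirectML: any DX12 GPU on Windows
    "CPUExecutionProvider",       # universal fallback
]
available = ort.get_available_providers()
session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=[p for p in preferred if p in available],
)
print("running on:", session.get_providers()[0])
```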
PNY has been an Nvidia OEM since before GeForce was a thing; I think they started with the RIVA TNT. They have always made the lowest-end reference designs.
I worked at Staples back in the stone age, and we carried PNY RAM. This was the SIMM/DIMM SDRAM era, but the return rate was abysmal. I've never got the impression their consumer line-up has improved to any notable extent.
I own a PNY 4090 XLR8 OC... It's a lovely card. Cool and quiet. Full frame. Solid performance. Vapour Chamber, 8 pipes (two 8mm). Triple circle enclosed fans. Compact design. It's not for benchmarking. But I get an excellent undervolt/OC at 2760MHz @ 975mV with +1200MHz (12%) VRAM OC. It doesn't look like a giant maxipad or candy bar so I'm quite happy.
Not only data center cards, they also do all the expensive Quadro workstation cards. I have two Quadro RTX 5000s in my engineering workstation and their radial fan design is better thought out than some of the consumer brands' axial designs. I hold them to the same standard as EVGA. Their fan bearings are of quality, which many other board partners cheap out on. My sister-in-law bought an RTX 3060 Ti Eagle OC from Gigabyte. Loudest card ever, even more so than a GTX 480, and rubbish cooling; the flow-through area was completely covered by a brittle plastic backplate. She sent it back and replaced it with a PNY 3060 Ti, and the PC is super quiet. Very big flow-through and quality fan bearings.
A few topics here are odd. 1. VRAM usage: like RAM, it will use as much as it can get so it doesn't need to hit the disk. As long as you are not running out of memory, it's fine. You want it to fill up, since you pay the electricity cost whether it's full or only 1 bit is used, so it's better to avoid having to go to slower memory (/disk). 2. FSR being open source does not stop them from optimizing it for their own hardware (or overriding/hooking at the driver level so another implementation is used), but it does increase the chance that the game ships with the correct API calls so they can use it.
Yeah, I came here to say point 2... Supporting your competitors' hardware is not part of open sourcing the software. I agree that maybe AMD doesn't need to focus as much on supporting old Nvidia cards, but at least for me, them open sourcing their software is a big selling point. It makes them more of a good-faith player than Nvidia.
When the RX 7600 first launched, I think I saw a laptop review where the AMD card often performed significantly worse than the RTX 4060 because it was using just slightly more VRAM, and they were testing games that all teetered on the edge of 8GB. AMD went over a lot more often and suffered hard. Even if it was just using like 300MB more, that was sending it over the edge more often and choking it up.
This is true for memory in general. As long as you are able to run everything you need, you want your game and operating system to use as much memory as possible
I imagine their odd take on point 2 is because they looked at it from a gamer point of view rather than the developer point of view. If you're trailing, OSS is a good way to keep your technology incorporated in as many products as possible, and so you keep good support despite your position.
I think what they meant by open source isn't actually anything to do with it being open source, but that they are spending limited dev time making FSR compatible with RTX and GTX cards as well as older Radeon designs like Polaris, instead of spending that time making it work better on modern Radeon cards. That's nice for the people with a 1660, 1070 or a 580, but it's not doing anything to help sell Radeons.
I have an XFX Merc 319 6800 XT and it's great. Runs quiet, cool and fast. I also really like the XFX style: no RGB puke or other weird light garbage, just a lit-up XFX logo.
Have their 7900 XTX Merc 310, very clean white lit-up logo. Both the 4090 FE and their XTX just have a clean, professional-looking white light. Our 7800 XT Gigabyte OC has the RGB logo going on, very niche, matches with RGB RAM. My main PC has a blacked-out theme in a white case; it's elegant having fewer flashy elements sometimes.
@@MaryannLynch-z9c Exactly the same! Had the XFX 6900 XT, very decent style but a bright "Radeon" RGB logo, not really matching the black-white aesthetic of my build. Now I have an XFX 7900 XT, and it is just perfect.
XFX used to be one of the most recommended manufacturers out there back when they also made Nvidia cards. I remember cross-shopping between EVGA and XFX for my 8800 GT when I was building a PC to play the original Crysis. And from what I've gathered over the years, XFX is very much still considered one of the "big 3" of the red team. I ended up getting ahold of an MSI Ventus 3080 10GB back in December 2020 (for actual MSRP too), and though it has served me very well, it has a plastic backplate. I was used to the build quality of EVGA and Sapphire cards, and holy crap, the shrouds on MSI's newer graphics cards feel ridiculously cheap!
@@alrecks619 Yeah, they had a big turnaround. I had a Thicc III Ultra 5700 XT and you could drop temps by like 6 degrees by popping the back panel off and removing some of the excess shrouding.
FSR shines on old GPUs. Probably the best-case scenarios would be the 5700 XT, 1070 and 1080 Ti: affordable on the first-hand market (5700 XT) and on the second-hand market for Nvidia's older-generation GPUs. Having said that, I found a lot of value in the 5700 XT with its pricing below $200.
Around here I see way more "repaired" Zotac cards than any other brand. Might tell you something. They get hotter than other brands too, and cheaper. On their website they don't even tell you directly where their service centres are: you need to mail them with your invoice, and only if that's OK will they mail you the location. Very annoying experience.
@@me-df9re "repaired" should also be compare to sells % of user. If something has 100 000 users and other 50k obviously the first one will have more repairs also, what % to the whole number is
It's not just that there are people that prefer XFX over ASUS, it's that XFX, PowerColor, and Sapphire make the best AMD GPUs and the big three generally don't spend much money to make sure their Nvidia designs work on an AMD board.
I wish they could compete with CUDA too for developing AI models, but there is just too much support and momentum behind Nvidia from the network effect, I can't even imagine AMD competing with nvidia on this front.
Technically the models aren't written in CUDA; the stuff that runs them is PyTorch, and that has a backend for AMD GPUs, though it only runs on Linux (for image generators specifically). Part of the functionality has been ported over to Windows, and that runs language models fine.
@@anonapache With the same effort that Linux lovers want everyone else to use Linux with. If Linux is easy for everyone to use, running CUDA on AMD shouldn't be any different. It's a matter of wanting it. Those who say they want to use AMD are full of bullshit. They should be honest about what they really want.
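A small sketch of the point above: models target PyTorch rather than CUDA directly, so the same script runs on AMD when a ROCm build of PyTorch is installed (ROCm builds reuse the "cuda" device name, and `torch.version.hip` is set instead of `torch.version.cuda`).

```python
# Same model code on either vendor; only the installed PyTorch build differs.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
is_rocm = getattr(torch.version, "hip", None) is not None
print(f"device: {device}, ROCm build: {is_rocm}")

x = torch.randn(8, 8, device=device)
print((x @ x).sum().item())
```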
FSR can be good; just look at GoW Ragnarok and Ghost of Tsushima. The thing is, when comparing one of the worst implementations of FSR, in Cyberpunk, to DLSS, DLSS will always look way better. On the other hand, there's a lot said about FSR 3.1 ghosting; it's almost non-existent in GoW Ragnarok (snowflakes do leave a trail), but compared to how uncanny ghosting with DLSS in Wukong can be, I'd take FSR every day of the week.
@@sias.2569 TechPowerUp uploaded a comparison of XeSS, FSR and DLSS in the boat part 3 days ago; you can check that 2-min video. I think the only thing FSR does to water in that game is make puddles slightly shimmer on the edges, nothing you will notice if you don't stare at it. As for reflections in that video, all looks good, but there's no native to compare.
DLSS in Wukong is buggy; it looks very grainy compared to Alan Wake 2 or Cyberpunk. If you want to compare FSR to DLSS, use the DLSS in Cyberpunk: stable, not grainy, barely any ghosting or artifacts. In fact, it looks almost as good as native even at 1080p. FSR looks good so far, but it's still having a hard time recreating small fine details, especially those in the distance: Ghost of Tsushima has falling leaves disappear then suddenly reappear, and trees in the distance look blurry compared to native. I imagine FSR 4 might fix this with AI, but right now DLSS is still better, especially at lower resolutions, since people like me with a 3050 laptop can use DLSS at 1080p and it looks almost as good as native 1080p.
@@camdustin9164 *Ahem* That's an Nvidia-sponsored title, and when an AMD-sponsored title has problems I hear about it from every channel. My main point was that most comparisons are not with peak FSR performance; it's like setting up a race between Usain Bolt and the second-fastest runner, but the second runner has an iron ball chained to his leg, so there's no competition. Using Cyberpunk is Nvidia shilling at its best, since FSR 2.1 somehow looks worse and worse with every patch they release, and FSR 2.2 with frame gen is somehow even worse. 1080p is also a very low base resolution for FSR. Raindrops have that problem too, probably because AI can take more things into account, while a hand-written algorithm is more likely to miss them when comparing neighboring pixels.
For Radeon, Sapphire and PowerColor are kings, XFX is a good affordable option, and the rest is... the rest. In short, it's like EVGA: the best-performing, best-quality products don't come from the biggest corporations, since those are doing too much at once.
@@El_Deen They did a lot of custom stuff before Nvidia restricted it. They were also overclockers, so a lot of their stuff was made strictly with overclocking in mind, which made it easier to prolong product life. But yeah, looking at repair shop content and their gripes about EVGA, there were a lot of problems.
Here in France, the Sapphire Pulse versions of AMD cards are often the cheapest models available, while being a very solid choice (but Nitro+ is king). Got myself a 7900 GRE Pulse and I'm so happy with it.
@@hectorvivis3651 The same is true here in Poland: Pure > Nitro+ = Red Devil > Hellhound > Speedster > Pulse = Merc. So Sapphire has the Pulse at MSRP and the Pure as a binned Nitro+; PowerColor does great things, but I don't think they have an MSRP model; and XFX has an MSRP one plus middle-of-the-pack options. Everything is still affected by the silicon lottery, so one card can be better than the diagram suggests. The funny thing was when a repair shop got its hands on a Nitro+ and couldn't melt the solder, since it had no additives that help with melting. It was a tank, and the 8-pin had melted, which is quite a rare problem.
Unfortunately AMD didn't specify anything about whether that will be FSR 4 upscaling or just FSR 3.1 with some AI frame generation; there's also no information on whether it will be supported by modern GPUs such as the 7000 series and above. And I can easily recommend the Kryosheet, as I have this gorgeous little thing in my 7900 XT from Gigabyte. It dropped my temps from 90-95C with fans at 2000rpm+, all the way down to 82C at 1400-1500rpm at stock, and 88C max at 1700-1800rpm with almost 400 watts after unlocking the power limit. But there are literally no FPS increases beyond a 2-5% power limit raise, so I run 3% over stock and get like 84C at 1500-1600rpm max under full load. So yeah, I truly recommend this sheet if you've had enough of the pump-out effect on your GPU. Just use thermal electric tape (0.1mm yellow) around the die to secure the transistors and you're good to go.
The RX 6700 has nearly the same specs as the PS5, with a 160-bit memory bus and a slightly higher clock speed, and both deliver similar performance according to Digital Foundry's tests. So the idea that a PS5 Pro will surpass the 7800 XT makes zero sense. The 7800 XT is about 70% more powerful than the RX 6700, and Sony itself claimed the PS5 Pro will be around 45% faster than the PS5. Do you really think the 7800 XT is only 45% faster than a PS5? Of course not. The numbers don't lie: believing the PS5 Pro can match that level of performance is pure delusion.
Yeah, currently for unoptimised third-party titles the PS5 sits around the 2070 Super to 3060 Ti/6700 XT level, but for PlayStation's god-tier optimised first-party titles, the PS5 can definitely blow the 6700/3060 Ti away, like God of War 5, The Last of Us Part 1, and Horizon Forbidden West: Burning Shores. The main reason to have a PS5/Pro is to play those god-tier first-party titles. By the way, can you use an 8GB 3070 to match the PS5's 60 FPS 1872p performance mode in God of War 5?
@@thepaintbrushonlymodeller9858 RTX 3070 seems to be faster than PS5 in God of War Ragnarök. You can compare to the high frame rate mode because it’s 1440p with unlocked frame rate. But there’s no dynamic resolution on PC so idk how you would replicate the 60 fps mode.
@@Rachit0904 I checked: the PS5's 60 FPS mode resolution is between 2688x1512 and native 4K, but it's uncommon to reach the highest or lowest res; it's usually around 1800-1872p. An 8GB 3070 cannot do this res with the highest textures. Yeah, at 1440p a 3070 can outperform the PS5 with high textures and maybe high settings. This also depends on what CPU you use, because the PS5's CPU is kinda weak and unable to provide high frame rates.
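Putting the napkin math from this thread in one place (the 70% and 45% figures come from the comments above, not from benchmarks):

```python
# PS5 (~RX 6700) normalized to 1.0; figures are the commenters' claims.
ps5 = 1.0
rx_7800xt = ps5 * 1.70  # "about 70% more powerful than the RX 6700"
ps5_pro = ps5 * 1.45    # Sony's claimed ~45% uplift
print(f"7800 XT ~ {rx_7800xt:.2f}, PS5 Pro ~ {ps5_pro:.2f}")
# 1.45 < 1.70, so by these numbers the Pro lands below a 7800 XT
```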
PNY is really big in workstations in the US; now, because of the CUDA monopoly, most of these are Quadro cards, but I can't recall seeing anything other than PNY in the enterprise space. I mean, HP and Dell make cards like the A100 and H100, but outside of those you don't really buy an HP or Dell workstation card unless it comes inside one of their workstations.

When talking about the PS5 Pro, my concern is that it is supposedly still Zen 2. That means it may be just as slow as my 4700S/BC250 with a 7800 XT. Zen 2 APUs, including the PS5's, have only 4MB of L3 available to any one core; Zen 3, on the other hand, has 16MB available to any one core due to having double the L3, along with a unified CCX/CCD. I would genuinely be surprised if they didn't at least make a unified-CCX version of Zen 2; maybe they keep the 8MB of cache instead of bumping up to the 16MB in Zen 3, but I think that would be a bad idea. APUs share a memory bus: the more cache the CPU has, the less it uses the RAM, and the less it uses the RAM, the more cycle time the GPU has to work with. Even with Zen 3 I suspect it will be slower than an RX 6800 in raw performance even if it's RDNA 3 based; console optimizations can hopefully make the thing perform better than a 7800 XT, and I'd imagine that if they went with 16MB Zen 3+ it might perform similar to a 7800X3D + 7800 XT.
Regarding the Nvidia color thing: I believe you are referring to how Nvidia cards default to a limited dynamic range when using HDMI while AMD defaults to full. You can change the setting in the Nvidia control panel to use 'full' over HDMI; DP defaults to full. I think they work under the assumption that HDMI means TV, and TVs operate best with limited since they would compress the dynamic range anyway.
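For reference, this is the standard 8-bit video-levels math behind "limited" vs "full" (illustrative only, not Nvidia's actual driver code): limited range maps black to 16 and white to 235, so if the display expects full range, an unconverted limited signal looks washed out.

```python
# Expanding limited-range (16-235) video levels to full range (0-255).
def limited_to_full(y: int) -> int:
    return round((y - 16) * 255 / 219)

print(limited_to_full(16))   # 0   -> video black becomes full black
print(limited_to_full(235))  # 255 -> video white becomes full white
```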
As for the CPU test with the 7900 XTX: I've seen with my own eyes how a 9700X at 1080p performed 10% faster than with a 4090. One of the technicians at my local dealer was playing with the newly arrived CPUs; he noticed it and was showing it to us all. But since their job was basically to sell those CPUs, I took those results with a grain of salt. The explanation was basically that the better branch prediction of the 9000 series CPUs, combined with the increased IPC, was improving communication with the GPU through SAM, and wasn't working the same way with Nvidia GPUs, which DO have Re-BAR, but it isn't exactly the same as AMD's Smart Access Memory. Whether it's true, idk, but it kinda makes sense for AMD to implement something to boost performance on an all-AMD system. I've been expecting them to do that for 5 years; after all, they've been talking about boosted performance in their own ecosystem ever since 1st gen Ryzen came out.
My PNY 4090 XLR8 OC is an excellent card. The most you'll get out of a way more expensive card is about +5%. ImWateringPSUs only had 2% better performance from his card.
20:55 Not really; there is no magical "it's better because it's a console-optimized GPU". The PS5 GPU is basically RX 6700 hardware and it performs around the same as a PC RX 6700: sometimes a little bit slower (when the game wants more compute power) and sometimes a little bit faster (when the game wants faster memory). The PS5 Pro will most likely be a little bit below the 7800 XT (if it's RDNA 3) because it has both slower memory and lower compute performance.
But if PSSR is decent, and it does have RT from RDNA4, then the ps5 pro can output a better image from a lower input resolution, and will have better performance than any AMD equivalent GPU. Just like RTX cards in the same performance tier.
@@ArchieBunker11 According to digital foundry it's like an underclocked 4070. Which means it has close to 6800xt/7800xt levels of raster performance but much better RT performance and a better upscaler.
@@enmanuel1950 As far as raster is concerned, the base 4070 is slightly worse than a 3080 and 7800 XT, and marginally better than a 6800 XT (speaking in averages). An undervolted 4070 would be closer to a 6800 non-XT. Also, Sony themselves say "45% better than a PS5", and the 6800 is 45% faster than a 6700. Although I'd grant you the image quality and RT.
@@thepaintbrushonlymodeller9858 You're wrong about things you can look up yourself. The 3060 Ti can play native 1440p 60 in Horizon and GoW maxed out at just under 60; at PS5 settings it's considerably higher. It can also run native 4K 40 at the same settings, which is objectively better than the PS5. But to be fair, the 3060 Ti is generally more of a 6700 XT competitor. It certainly beats the PS5 more often than not, and frankly, with DLSS it can run those games at 4K DLSS Performance and get better image stability and performance at higher settings.
I think console cost + online cost over the lifespan of the console makes it equal to or more expensive than a same-tier PC; it's just that the cost is not upfront. I also feel the PS5 Pro will be more like a 7700 XT rather than a 7800.
6:20 MLID mentioned that one of his contacts at AMD says they've been working on it for a while now; I think he said over a year. TBH, I'm glad they aren't rushing it out. Hopefully it gives DLSS and XeSS a run for their money out of the gate.
The problem with waiting is that DLSS is already included in hundreds of games, while AMD users are stuck with various FSR implementations, which generally lack in quality. AMD needs to get it out to start building a software library with great FSR implementations, and that will take years.
With Intel claiming 0x129 fixed the core issue of the Intel 13th & 14th gen stability problems, and yet having released another microcode update, 0x12B, which they again claim improves the stability of those CPUs, what do you think they've done this time, and is it time to do another Intel CPU testing update?
Tim mentioning AMD playing catch-up being a good thing... I still think this points at the heart of the problem: letting Nvidia dictate which features are the standouts and must-haves. Doing so guarantees that Nvidia will always have a head start on that feature, one that AMD realistically can't ever match or exceed. We saw it happen with ray tracing and again with DLSS. Then, every single time AMD updates FSR or ray tracing, reviewers are disappointed that AMD hasn't magically caught up to Nvidia, like it could back in the days when it was just about raster performance. What were you realistically expecting to happen when they're constantly playing catch-up? Nvidia has more resources to throw at it, and because they're dictating features, much more dev time as well. These aren't things you can simply overcome just by trying harder. The better path for AMD, at least for me, is to find something distinct to add to the mix, get it running well to where they have the head start in development, then release it at a price that isn't losing them money but isn't as overly optimistic as their recent releases have been.
“It was interesting to see amd talk about this… normally with their future looking stuff they kind of hide it away” …. FSR3 frame generation would like a word
I think AMD should continue to make FSR an open standard but because they know in advance how the features will be implemented, they should design their hardware to be more efficient for it
Yes... but no. Because Nvidia didn't plan Tensor cores for DLSS; they were made for enterprise and AI-oriented workloads. Nvidia just saw them doing nothing in gaming and gave them a use, just like NVENC and the new optical flow accelerator, which was also designed for computer vision but was used for frame gen. That's the cool part of hardware-accelerated stuff: it occupies space on the die, but its use mainly occupies power draw budget. That said, if AMD wants the same recognition, they just need to do like Nvidia and build things for whoever is paying for them.
@@pedro.alcatra The idea is to adapt FSR to maximize efficiency on AMD hardware (in this case, make full use of their tensor-core implementation). Unless both AMD and Nvidia hardware work the same way, it stands to reason that the most optimized software implementation for one does not work as well on the other. You absolutely still need FSR to be open, as AMD is the underdog, so game developers need an incentive to add support. If they can get some improvement for all users (an open FSR) vs a lot of improvement for a small number of users (a closed FSR) vs a lot of improvement for most users (closed DLSS), they will probably rank those choices as either 1-3-2 or 3-1-2, but it would be insane to pick a closed FSR as the primary path forward.
My understanding is DLSS 'AI' is merely AI tuned rather than 'run in AI.' I think leveraging this level of 'AI' is fine, as it streamlines tuning the various upscaling logic to closest match the original target resolution. This is something that could work with previous versions of FSR ... although would require tuned profiles on a per game basis
Asking Tesla to test its self-driving tech on a VW because it's more reliable is like asking AMD to test on Nvidia. What's next, Hardware Unboxed just becoming a Gamers Nexus clips channel because they did better testing or had better equipment on certain topics? Nice for the consumer, but it wouldn't be a great look for you guys. It's just not a smart decision to push a rival's card; however, if the percentage uplifts are the same then it doesn't really matter.
@20:32 The PS5 came out around the same time as the 30 series, but it was equivalent to a 2070 Super/6650 XT/1080 Ti, NOT a 3070, which is 35% faster and closer to a PS5 Pro GPU.
Later the ports run like crap, even the native ports; that's a myth. You need more than a 3070: you need a 3070 and all the components the PS5 already has. You are full of memes and myths.
Nvidia has quite a few years on AMD when it comes to AMD catching up, but as long as AMD puts the effort in, I don't see why it couldn't be just as good, maybe better if they figure out something Nvidia has not. But that's gonna be hard, since Nvidia is a money-printing factory.
XFX and PowerColor are almost always great value around the EU and have been recommended very often. When you see XFX Merc models priced similarly to the ASUS Dual, the recommendations are super easy.
I honestly think people are giving AMD too hard a time over FSR. As a technology, it's actually REALLY good: incredibly efficient (not needing matrix acceleration) while the performance benefit and quality are still close to DLSS (which should be celebrated). The downside of FSR is that it requires a lot of R&D and has more manual overhead for game developers to yield the best results. That's why we see such wide variability in quality depending on which game it has been implemented in. Look at CDPR: their FSR implementation has literally become worse over time, and the latest 3.0 update is laughable. I mean, heck! They solved ghosting on cars using a "reactive mask" in Cyberpunk v1.61, but once the DLSS3 patch came out (1.62, I believe) they just re-introduced ghosting into the FSR pipeline, never to fix it again. Now they have two versions of FSR because they realize they are bad in different ways, with 3.0 suffering immensely when it comes to semi-transparent textures... (FSR literally has a built-in configuration to fix this). So what can you do? Well, today I run an FSR 3.1 community mod that runs over the DLSS pipeline; it's actually GREAT and gets rid of most problems that CDPR seems to be having.
If they want to compete with Nvidia and Intel, they need to use AI upscaling. DLSS and XeSS are usable at 1080p while FSR looks blurrier and pixelated. I tried using DLSS Balanced in Cyberpunk, and with FSR I could see jagged edges around the eyes of the characters, while with DLSS it looked more antialiased and the image was a little softer in general. Heck, if you have an AMD or old Nvidia card you could use XeSS 1.3 and it would look better than FSR at similar FPS.
@@joxplay2441 To be honest, these upscalers were designed to make 4K viable with modern graphics, not to be used at 540p to reach 1080p. However, if I compare the FSR 3.1 mod in Cyberpunk vs XeSS 1.3, playing at 1080p "Performance", FSR 3.1 is by far the better solution. It's sharper and temporally more stable. When looking at distant faces, they wobble with XeSS; with FSR they are stable and sharp. Road textures at narrow angles flicker more with XeSS, and vegetation is muddy and blurry. So AI in itself isn't a solution; ultimately it's all about how efficient and well made the algorithm is. Keep in mind, FSR wins because it's a mod and a better implementation than the official FSR implementation... I would choose XeSS over CDPR's official FSR 3.0 implementation.

To note, just because an algorithm was made with the help of "AI" doesn't necessarily make it better. It makes it cheaper to develop, since computers are doing a lot of the heavy lifting (DLSS, XeSS) instead of graphics engineers painstakingly designing the algorithm themselves (FSR). Personally, I would be interested in how efficient FSR would be if it used matrix accelerators like the Tensor cores, because ultimately, when you execute these algorithms (no matter if they are "AI" or not), they are just instructions for the GPU. So this whole thing with "AI algorithms" is marketing bullshit, because Nvidia wants to be perceived as an "AI company" by investors, and AMD is following suit due to the massive amount of money Nvidia is making. AI is an investment bubble, because investors have started to believe in its prospects... But to put it simply, matrix accelerators are great at comparing a lot of values simultaneously (that's literally what AI/machine learning does at a low level). Know what also needs to compare millions of color values on screen? Temporal upscalers, AA, etc. So no matter if it is AI or not, accelerators help these algorithms perform better, and DLSS is a far slower algorithm than FSR and XeSS, but due to hardware acceleration it pulls ahead.
I have a Sapphire pulse 5600XT and I love how silent it is when using the silent BIOS even though it's a two fan card, it's better than the big brands because first of all they don't even offer dual BIOS for their cheap models, and they are so loud, I'd have to spend more money just to get the same experience as with this card.
@@jemborg The SSD is literally the only thing they designed, and it's just the speed of an NVMe 4.0 drive but with hbar-type support for direct memory access instead of having to use the CPU as a pass-through.
PSSR = FSR 4. Why would Sony (who have outsourced all graphics development to AMD for years) suddenly decide to develop their own in-house upscaling? Makes no sense from a financial or expertise point of view. The logical move would be to get AMD to develop it for them. It's no coincidence AMD announced AI-powered FSR 4 seven days after PSSR was announced. Not sure why everyone keeps talking about them as if they're different technologies.
Will note, on the PS5 Pro comparison: the 7800 XT is way faster, primarily due to there being no Infinity Cache on the PS5 Pro, and that has historically been a thing that cripples an RDNA card versus its IC-bearing counterparts (680M/780M vs 6400, 890M vs 6500 XT, etc). So the 7700 XT is a fair raster comparison for the PS5 Pro due to the missing Infinity Cache. As for AI/RT performance, that is more variable, but feature-wise, rumors state RDNA4 (which Cerny has said the RT is actually coming from) will at least match Ampere's RT feature set. So we can probably look for GPUs in the Ampere gen that are around 7700 XT level in raster and go from there, which would lead us to an RTX 3070/3070 Ti for a lowball or a downclocked 4070 for a highball.
The VRAM is shared with system memory though; not exactly the same, but the PS5 GPU was roughly a 6700 non-XT and performed close to one, and console-only optimizations would bridge the gap. Infinity Cache helps with bandwidth issues that aren't 100% present on consoles due to the unified memory config. That's also why modern games tend to require loads of VRAM: consoles don't have to worry about transfers between CPU and GPU like a PC typically does.
@@PelonixYT no, not quite. The 6700 OEM is the card that is closest to PS5 and even then outside of wacky cases (poor optimization) on PC, often outperforms PS5 exponentially at some resolutions. Again, due to Infinity Cache.
WRT VRAM usage: framebuffer compression doesn't actually save memory (it might actually use very slightly *more*), as it can't know whether the texture is completely random and simply cannot be compressed; it just helps memory bandwidth, as "most" blocks are smaller to transfer. That's different to "lossy" texture compression, like the DXT/BC formats, but that's handled by the game itself, since it knows which textures it can lose precision on.
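A toy run-length encoder makes the point concrete: lossless compression cannot guarantee savings, because incompressible (random) blocks actually grow, which is why such schemes keep an uncompressed fallback per block. (Real framebuffer compression is far more sophisticated; this is only an illustration.)

```python
# Toy RLE: a flat block compresses well, a random block gets *bigger*.
import os

def rle(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])  # (count, value) pairs
        i += run
    return bytes(out)

flat = bytes(64)        # flat block: one long run
noise = os.urandom(64)  # random block: almost every run has length 1
print(len(rle(flat)), len(rle(noise)))  # 2 vs ~128 bytes
```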
The PS5 itself is identical in performance to an RX 6700; there is no doubt that it matches both raw and RT performance. So, accounting for everything announced for the PS5 Pro, the only GPU that matches the performance gain, the "next level RT processor" and so on is the RX 7700 XT: it is 42% faster than the RX 6700 and also has a next-level RT processor. It can't be the 7900 GRE, because that is an 89% uplift, and the 7800 XT is a 72% uplift over standard PS5 performance. The Nvidia equivalent would be an RTX 3070 Ti, although the raw performance is a little bit lower and the RT performance a little bit higher.
@@Kukajin That is too much of a performance improvement over the PS5. The amount of VRAM isn't an issue; the GPU is always custom made, usually with shared memory that they allocate as needed. The 7700 XT has 12GB of VRAM, but the PS5 uses a custom GPU with 13.7GB; that's not a reason to call it a mismatch, just add more memory modules to the 7700 XT. Also, Intel XeSS is an AI-based upscaler and is compatible with AMD GPUs... PSSR isn't FSR, and "FSR" doesn't need to be AI based for the GPU to match. PSSR is its own upscaler, like XeSS is, and TSR and FSR too, all of which can run on the PS5, because they aren't GPU-specific technologies like DLSS (maybe PSSR will be, who knows). Focus on the performance increase: anything beyond +50% of an RX 6700 is already better than what the PS5 Pro is capable of.
@@BrunoRafaBR Yeah, currently for unoptimised third-party titles the PS5 sits around the 2070 Super to 3060 Ti/6700 XT level, but for PlayStation's god-tier optimised first-party titles, the PS5 can definitely blow the 6700/3060 Ti away, like God of War 5, The Last of Us Part 1, and Horizon Forbidden West: Burning Shores. The main reason to have a PS5/Pro is to play those god-tier first-party titles. Can you use an 8GB 3070 to match the PS5's 60 FPS 1872p performance mode in God of War 5?
@@thepaintbrushonlymodeller9858 I actually have a 3070 and just finished GoW Ragnarok on PC at 4K60 with a mix of high and medium and DLSS Quality... So... yes? It is not a heavy game for PCs; it even came out on PS4. I just had to stick with medium textures because of VRAM, but it ran smooth as a baby's butt.
@@BrunoRafaBR That's still below the PS5. But the base PS4's GPU is so, so weak that it's like an HD 7850 with a lot of VRAM. The 1060 and RX 580 are over 2.5 times stronger than the base PS4 on paper, but they cannot even beat it. So direct comparison between PC and console specs gets more naive as time goes on.
Another option for AMD: they could keep FSR open source, but make the AI models proprietary. I would actually welcome that! It could allow the community to train and create their own AI models, which might compete well with AMD's offering. Ideally it would be possible for users to choose which model they want. Also, there could be models optimized for various kinds of games, and even -- gasp! -- trained by the community for a specific game with specific quality settings. That could be amazing.
I feel you overestimate consoles a bit. PS5 is more between 3060 and 3060 Ti, and Pro according to what Sony said is 45% faster which would land it around 7700XT, not above 7800XT.
In first-world countries the big 3 GPU brands are also sales leaders, but in third-world countries Palit, Inno3D, Zotac, XFX, PowerColor, Galax, Sparkle (and Sapphire to some extent), etc. are the kings of sales for entry-level to mid-range to even high-end GPUs. Chinese brands such as Soyo, Aisurix and many others are quickly catching up, easily beating the big 3 on entry-level GPUs. In Japan, a first-world country, the most famous GPU brand is Kuroutoshikou, which sells both Nvidia & AMD GPUs. ASRock, Biostar, Acer etc. are also joining in the fun after the legendary EVGA left the scene...
I don't understand what open source has to do with tying it to hardware... Do you think you need to have closed source to tie it more closely to the hardware?
I upgraded from a Gigabyte 3060 TI to a PNY 4080. The PNY has been faultless - quiet and good software for adjusting fan curves. The Gigabyte was a nightmare - every time it woke from 'sleep' mode, it would make this horrendous grinding noise that I was told was normal. The only thing I could do was adjust the fan to run all the time. The PNY looks like a brick but it is solid and doesn't have all the gaming logos and bs associated with other brands. I'll happily buy another PNY.
XeSS is the correct direction: you create a super resolution technique that not only uses a dedicated hardware accelerator that only your GPUs provide (XMX), but can also be used on other brands' GPUs in an inferior form (DP4a).
The memory used by Nvidia is different from the one used by AMD: there is GDDR6X vs GDDR6. But the chips don't even need to differ in memory type; you can have two different GDDR6 speeds, different signal amplitudes, two or four signal levels, and many other spec differences. Even switching between chip manufacturers can cause differences in a game's need for and usage of available memory.
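As a worked example of how signalling affects bandwidth (speeds here are typical retail figures, used only for illustration): GDDR6 uses two-level NRZ signalling while GDDR6X uses four-level PAM4, which carries 2 bits per symbol, and total bandwidth is just the per-pin rate times the bus width.

```python
# Bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 -> GB/s.
def bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    return per_pin_gbps * bus_width_bits / 8

print(bandwidth_gbs(20, 256))  # GDDR6  @ 20 Gbps, 256-bit bus ->  640 GB/s
print(bandwidth_gbs(21, 384))  # GDDR6X @ 21 Gbps, 384-bit bus -> 1008 GB/s
```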
We don't know, but AMD has AI cores that right now aren't being used, so it would make sense, but it would also be against what they have been doing lately, so idk. I have Nvidia so idc @@Hi_Im_o2
FSR being open-source has helped AMD on the console and on their laptops with relatively strong iGPUs. As well as budget APU builds. Besides whatever small amount of gaming PCs using standalone Radeon cards. And then looking beyond AMD, FSR is viable on mobile (and can be tweaked to run better there, due to its open-source nature). Reviewers may pixel-peep and say DLSS is better, and indeed DLSS may be better, but many times, gamers (and game devs!) are simply glad to have *something* to be able to increase framerates, since an unplayably choppy game is actually unplayable and the customer closes the game and never looks back. That's a gaming experience either ruined or saved by framerate, potentially. I actually do agree after all that, that if Radeon GPUs can support "the best version" of FSR, AMD should do that. Existing FSR will be open-source forever, per its license, which as far as I understand is effectively irrevocable. I believe they should open-source *some* version of their upscaler going forward as well. But if they have a special sauce that makes it work better on their architecture, much like Intel has done, I think it's fair game at this point and understandable to remain competitive. I want to re-emphasize that FSR ever being open-sourced has been a game-changer and made upscaling real in the long tail of platforms that aren't a standalone Nvidia GPU (or the small marketshare of Arc GPUs) for the past several years. That's awesome, and AMD deserve huge props for that. Even if the visuals aren't really what I'd like them to be, for systems where it means being able to play at all or not, it's a literal game-changer. So, kudos to AMD for the open-source stuff, it's the rising tide lifting all boats sort of situation and I respect the heck out of it.
21:30 People comparing console to PC on cost forget that you must add in the console's subscription cost, because they are largely useless without that monthly subscription.
Not really "you must add in console subscription cost". You are talking about the group that competitively games, as free to play multiplayer games/single player don't require a subscription. If you are a single player gamer, or multiplayer for free to play games, a subscription is not a "must". Of course you could pay for a subscription for COD, and/or have access to their game pass subscriptions as well which have a huge library. If I had to pick between my PC and my PS5 for multiplayer, I'd probably pick PS5 for way less cheating in multiplayer, and the games aren't graphically heavy anyways usually. Also parity in hardware, so no advantages due to the almighty $$$, and ability to pay for advantages. PC vs console is stupid, they both have their place, and even though I primarily game on my 4090 PC, when the AIO fails or the CPU, it's nice to have a PS5 while waiting for an RMA. Haven't paid for a subscription at all on PS5 and have played many many games.
The GPU in the regular PS5 is about equivalent to a 6700 XT. The PS5 Pro is said to have about 45% more rasterisation performance and about double the ray tracing performance of the regular PS5. So, roughly as fast as a 6800 XT in rasterisation and a 7900 XT in ray tracing, using back-of-the-napkin estimation.
In reality the GPU uplift now makes it equal to a 6800 non-XT, and the CPU of the Pro is roughly equal to an actual Ryzen 7 3700X instead of a downclocked one that runs at the performance of the Ryzen 5 3600 (like the regular PS5).
They're gonna need more than AI FSR: they need a DLDSR alternative and Ray Reconstruction, and to actually improve their terrible features like Video Upscale and Noise Suppression.
@sammiller6631 I dislike how RR makes everything waxy in cyberpunk. If Nvidia has a way to implement RR without a hit to image quality, then there's no good reason to avoid it.
@@mikelay5360 you jumped to the conclusion really fast that I'm an AMD fanboy. I'm for whatever's best for me as a buyer. And Nvidia's plan to get me to buy a 4090 will never work lmao. My 3070 still going strong.
@@Steel0079 The pricing is probably not changing. None of us like spending a thousand on a GPU, but there is not too much at the higher end that is remotely close to prior pricing. Even AMD.
PNY is the manufacturer of almost all Quadro cards, so of course they are pretty large as a manufacturer. Quadro cards perhaps even have higher margins, because they are very expensive. I have two Quadro RTX 5000s in my engineering workstation.
I’d argue this channel gives FSR too much credit. I’ve had way too many instances where DLSS performance produces a more stable image than any setting of FSR. They gotta get the shimmer sorted out.
@@ArchieBunker11 ikr, with DLSS you can upgrade to the latest version in any game with a simple DLL swap. FSR doesn't have this, and even if it did, it still lags years behind DLSS in quality.
@@sammiller6631 It's not a crutch, it's increased the life of my card. People like you act like broken games just started coming out, and are blaming DLSS for it. I use it 100% of the time, even if I can play the game at high framerates, because it stops your card from running full blast, and the games look virtually the same.
@@ArchieBunker11 DLSS is a crutch. You don't understand the difference between "broken" and "bleeding edge". There's a reason why pushing the tech forward isn't nice, neat or polished.
Got a second-hand 6700 XT and will upgrade if FSR4 seems actually worth it. I was reluctant to buy a high-end card until I remembered that the new generations will launch soon.
There are also others, like Palit (Taiwanese manufacturer that makes pretty good cards), Leadtek and Gainward (both popular value options in Asian markets).
I gotta say, FSR 3 doesn't look as bad as people say. When people do comparisons, it's hard for me to tell the difference. Plus AMD has such a smaller team for their Radeon division vs Nvidia.
For the 3rd question: PowerColor on AMD is amazing. For example, the 7800 XT Hellhound has the best thermals and the lowest noise at the same time. There's a "rumor" that if you go AMD you go with a specific brand, and if you go Nvidia you do the same; people say companies that make cards for both vendors aren't good, especially on the lower-end models.
Whether or not AMD beats DLSS is irrelevant to me. Will they beat native resolution? No. Therefore this technology is useless to me. Turn off raytracing and enjoy crisp artifact free native resolution instead.
You'll still have the game engine's built in TAA, which I can't really say is "artifact free." AI based upscaling actually reduces some of those. Ultimately it comes down to personal preference.
Zotac is good for SFF cards. While MSI/Gigabyte/Asus would occasionally make single fan ITX cards, Zotac has always made small dual fan cards for pretty much every generation. I had a 2070 and have a 4070 of theirs. At 226mm and 2-slot it's one of the most compact 4070s you can get. Also AMD only has 12% of the GPU market, so for developers to support FSR it has to work on all cards.
@Sal3600 What I mean is that Sony wants a better upscaler to replace FSR. The result is that they forced AMD to make new hardware that is capable of AI upscaling.
Nvidia has way better memory compression algorithms; that's why it uses less VRAM than AMD. To match 8GB of Nvidia VRAM you need a 10GB AMD alternative, so there is a gap of around 20%, which is quite high. In the meantime, Nvidia uses dynamic load balancing to feed shaders more effectively at the hardware level; it's not software tricks, but it adds extra die size in Ada Lovelace.
Yeah that's why their 8GB cards can't play games like Hogwarts Legacy and Resident Evil 4 Remake. Better compression flat out loses against simply adding more vram. Their high end 10GB cards get bottlenecked while AMD's mid range cards sail by. Nvidia goes 8GB, AMD goes 12GB. Nvidia goes 10GB, AMD goes 16GB. You can never close that gap with compression. You're delusional.
@@kentaronagame7529 Nvidia simply puts less VRAM on to push people to buy their cards that have more VRAM. But technology-wise, there are reasons why AMD wants more physical VRAM on the card: when a VRAM bottleneck happens, an AMD card suffers much worse consequences than an Nvidia one. There are videos comparing the RX 570 4GB vs the GTX 1050 Ti in Horizon Zero Dawn; in that game the RX 570 is pretty much unable to load the entire game's textures, unlike the GTX 1050 Ti.
Anybody who thinks FSR4 will beat DLSS right off the bat is setting themselves up for disappointment. Not because I think AMD is incapable, but because Nvidia has years of a head start to fiddle, tinker and fidget with it. Remember when DLSS came out? It was pretty terrible. This comes from somebody who has a full AMD system and doesn't plan on buying Nvidia.
I don't expect it to beat it, but almost pull even, with maybe Ray Reconstruction being the main advantage left for DLSS. Then, very soon after, Nvidia will introduce DLSS4 and pull ahead further, leaving AMD to spend the next year trying to catch up to that.
You guys have featured a PNY 4090 in your b-roll. Did you just toss it lol? Jayz2cents just bought one. Kingpin is going with them after EVGA's demise... They're American. They were a big seller in Australia last couple of years. I bought a PNY 4090 XLR8 OC myself! Excellent card! Paid AU$2900 total. Cheap... sort of.
@@Sal3600 If you know what you are doing, then no, DLSS is worse. But I understand: DLSS is for the general public, who have almost zero knowledge of rendering options. Just press a button and done; it serves its purpose for the simple.
PNY might actually be Nvidia's largest AIB - they make all the OEM-branded cards for companies like Dell, HP, and Lenovo, and are also the primary manufacturer of Quadro cards I believe.
35:30 the answer is yes, and 99% of AM5 CPUs should be able to run 2067MHz FCLK and 6200MT/s memory which is guaranteed to be faster than 6000. From what I’ve seen, 2133MHz FCLK and 6400MT/s memory are also pretty easy to run now which is the fastest combination for most people if you can’t run DDR5-8000 stably. I advise people to check out Buildzoid’s reactions to RAM timings for examples and ideas.
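A quick check of why those FCLK/memory pairings go together (assuming, as the comment implies, that you're keeping UCLK and FCLK at a clean 3:2 ratio; UCLK is half the MT/s figure because DDR transfers twice per clock):

```python
# The three AM5 pairings from the comment all land at the same ~3:2 ratio.
pairs = [(6000, 2000), (6200, 2067), (6400, 2133)]
for mt_s, fclk in pairs:
    uclk = mt_s / 2  # DDR: two transfers per memory clock
    print(f"DDR5-{mt_s}: UCLK {uclk:.0f} MHz, FCLK {fclk} MHz, "
          f"ratio {uclk / fclk:.3f}")
```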
The 5700X3D is closely priced to a 7600, and AM4 users can probably find a 5800X3D for that price on the used market. Even a 5700X is priced better. (Prices in India, Amazon etc.)
Went with the Zotac 4070 two-fan version and I'm happy with how quiet and cool the card is. I went with that model just because a good sale made it a lot cheaper than the 7800 XT.
AMD needs to be a better "software company". They have the hardware, but no good software pipeline for third-party software to properly take advantage of it. For example, AMD still doesn't have a QuickSync equivalent. In fact, even though my ROG Ally's 7840U APU supports hardware-accelerated AV1 encoding, the driver software doesn't even include a Shadowplay-like screen recorder; I have to use OBS. AMD's NPU usage is also limited, even though the Zen 5 APUs have the fastest NPUs among consumer laptop CPUs. Let's not even talk about there being no equivalent to RTX Broadcast, which includes the excellent RTX Voice. There is no alternative to RTX Video HDR or RTX HDR, which are killer features.
The RTX suite is underrated. Obviously DLSS and Remix are well regarded, for good reason, but Broadcast/Video/HDR don't get near enough love, especially with how common having no HDR option is in games.
1. QuickSync is hardware based....so how does that relate to your point about software? 2. Radeon suite does in fact have a screen recorder.....what are you on about? I have been using it since the RX 480 in 2016. 3. NPU usage is limited because other companies aren't even bothering with AI? How is that on AMD? 4. AMD does have an RTX voice equivalent called AMD Noise Suppression in the AMD software suite. Are you insane? So far you literally just made shit up because you have never used an AMD product, and your ally is completely dependent on ASUS to optimize the software lmfao. Your argument has zero ground to stand on.
@@singular9 he meant good though. AMD voice suppression is absolutely ass compared to broadcast. Also, dont stop there, where’s their RTX Video, or HDR competitor? How about a tool like Remix? I had an AMD GPU prior to my current card. His argument has Quality as a ground to stand on.
@@singular9 AFAIK, AMD doesn't support the screen recorder, aka ReLive, on APUs. At least on the desktops that I've tried, but I bet it's probably the same with laptop/mobile APUs. Last time I tried Noise Suppression (around March, I think, with a 7900 XTX), it was basically useless; it made my voice sound like a robot speaking underwater. And the problem definitely wasn't on my side: my hardware wasn't faulty, I tried every troubleshooting method, and it had worked like a charm before, using RTX Voice, but I couldn't get Noise Suppression to work properly. At that time, they had just released the FSR Video Upscaling stuff, which I was pretty excited about (cause I loved RTX VSR too), only to realize it wasn't even working. Like, it was there in the Radeon Software, but I couldn't turn it on, because...? I wasn't the only one with that issue, that's for sure; I've seen quite a few comments about it on AMD's support forum. After ~1 month, it still wasn't fixed. Also, I used to encode videos using Handbrake, so I tried VCE instead of NVENC. That was quite a disappointment: the end product was larger in size but also had much worse quality. So my journey with the 7900 XTX ended pretty quickly. I really want to like AMD again, my all-time favorite cards are still the HD 7970 and R9 290X, but there's a lot to improve, especially if they want to compete at the higher end (at least for my use case and preferences, though I'm probably not alone in this).
@@singular9 1. QuickSync is just branding for Intel's media engine, which actually works really well with Premiere Pro and Resolve. AMD also have their VCE or whatever it's called for media processing, but poorly supported in NLEs. Also H264 encoding is still bad. 2. No that screen recorder is not available for APU drivers. 3. Fair enough. But Intel's weaker NPU yields better result due to OpenVino. I'm not an expert here. 4.Insert meme. AMD's NVIDIA alternative features are just for namesake.
Yeah if FSR4 could run on NPUs this would be a big win for laptops and handhelds
@@TheYuppiejr I'm almost 100% sure this is true, because I noticed that when I enable ray tracing my CPU usage skyrockets.
11:00 XFX has a pretty big advantage in Germany. It's often one of the cheapest options while having good coolers.
yeah i got a 6800 from them, and it's a really good card.
XFX has been consistently very good after they completely fumbled the bag with the RX 5000 series.
got 6950xt and 7900gre from XFX, I was surprized by the design and overall quality given the lower price. Man both coolers were able to easily keep temps in check even when pushing 300W+, which also means that with some UV and 240W limit both cards ended up really quiet!
The RX 6000 was amazing, 7000 is mid not sure what they did with the cooler but even if you tune down that out of the box turbine, then the TUF or Nitro is still a good 10 degrees cooler on the same noise level. (XT/XTX)
Same here in the Czech Republic, XFX usually has the best Radeon deals; both the 6750 and 6800 are still available here for a pretty good price
PNY is the only brand that Nvidia trusted to produce their workstation GPUs... and PNY is a US-based company.
Some time ago, Leadtek was making tons and tons of Quadros. No idea if that is still the case
Also, rumors lately that Kingpin is working with PNY. Take my fucking money PNY, give me a 5080 FTW
@@dtectatl1 that's not a rumor, he said it in his YouTube video
PNY is a Palit, which is main Nvidia vendor
PNY is Chinese company
13:00 Always find it somewhat funny that a lot of people in the west think Gigabyte, MSI or Asus are the "big boys" in GPU manufacturing and everyone else is small time, while in reality the real GPU giant is actually Palit Microsystems. They overtook Asus back in 2013 already to become the largest GPU AIB.
As for the others players in the market which are not that well know by western audiences and reviewers but still very large.
Colorful(Largest in China which moves massive volume)
PC Partner Group(Manili,Zotac,InnoVISION) who also also does a lot of contract manufacturing
Are Palit cards legit? Cuz I remember seeing them a few years ago at a way lower price than other brands and it instantly gave me scammer vibes 😂
wait the frog gpu guys?
@@あなた以外の誰でもない Yes, they have been around a long time, I bought a Palit GTX 780 jetstream way back when
EVGA is the only big boy to me 😢
@@あなた以外の誰でもない Model is also relevant. Palit and zotac make some very underbuilt cards as their base models for midrange cards. Much worse thermals than you'd get with like a base model like an msi ventus 2x, asus dual or gigabyte windforce.
Hot take
I think FSR targeted universal card support so that it wouldnt be ignored, sure there was probably a hope some games would choose FSR over DLSS, but really i think the main goal was just to have it included at all.
If a game dev often chose not to include DLSS despite it being in ~90% of new cards, why would they bother with FSR if it only worked on 5-10% of new cards?
By working on all cards, including Nvidia cards that dont support DLSS, it evened the playing field on weather a developer would include it or not.
I just wonder how the AI upscaling will work on arc and rtx
Yeah, at the time (and it may even still be the case), there were more people with Nvidia GPUs that couldn't use DLSS than there were people with AMD GPUs. AMD leveraged those neglected Nvidia owners to get FSR into games (because, if devs wouldn't add it for the sake of AMD owners, they'd at least do it for all of the 1080/1660 owners) and, now, it's an established technology that people expect to see in new games.
It was open, for much the same reason Java was Open, to stop the competitor closing the market (NVidia graphics or Windows OS). If it hadn't been for Java being Open Source, Microsoft would have closed the internet completely. Same for FSF. If AMD hadn't open sourced it, Nvidia would have closed off the market and everyone would have to license DLSS from NVidia, who can refuse.
@@Osprey850 I use FSR because the only other option is NIS; that already shows why my next card is going to be AMD again after a long time. Yeah, FSR was made to give an option to people with cards that can't use DLSS
the issue is that FSR sucks balls and I prefer to not use it because it's genuinely unbearable...
Machine Learning Engineer here: The reason (I believe) that FSR is not AI based is that AMD's ROCm (the equivalent of CUDA) isn't supported on Windows, after years of promises that "it will come soon". It means that on Windows systems, AMD cards have no way of utilizing the hardware to efficiently run AI tasks. One way around this is to use DirectX 12 as the execution provider for AI tasks, but it is slower than running them bare metal as ROCm would allow. Nvidia not only has CUDA as an execution provider but also TensorRT, which is even more optimized and fully utilizes the Tensor Cores. If AMD uses DirectX to do this, it means that the upscaling model will have to share the same resources with the game engine that renders the game. Without proper ROCm support for Windows, I am very skeptical about whether FSR4 will run properly. Let's see.
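To make the execution-provider point concrete, here's a rough ONNX Runtime sketch; the model file name is a made-up placeholder, and which providers actually load depends on your onnxruntime build and drivers:

```python
# Hedged sketch: the same model, different execution providers.
# "upscaler.onnx" is a hypothetical model file, not a real FSR/DLSS asset.
import onnxruntime as ort

# AMD on Windows: no ROCm, so the GPU path goes through DirectML and
# shares the same GPU queue as the game's rendering work.
amd_session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Nvidia: TensorRT/CUDA providers run the model on dedicated paths
# (including Tensor Cores) without going through DirectX.
nv_session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)
```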
Dude, this is an awesome explanation for someone still tryna rub rocks together for more FPS haha, super interesting and really clear on what's going on here.
@@keashaneverts4452 Glad I could help 😌 Another point: even with proper ROCm support on Windows, AMD still lacks dedicated hardware (Tensor Cores), which means the model will still need to share resources with the game rendering. So, in the current state, AMD has 2 options: either A) train a very simple model which would run fast but lack visual quality, or B) train a proper model, but enabling FSR4 would absolutely tank FPS. I am guessing they will go with option B) to showcase at least visual-quality parity with DLSS, and then provide both ROCm support and dedicated hardware on future cards. Their best option would be to launch FSR4 alongside proper new cards that can handle it.
Hello Giorgos, you legend. I didn't understand a word of what you said, but you're saying it well.
I don't think AMD has to implement the whole ROCm stack for something like this to work. They'd "just" have to implement the specific calls that this AI-based model uses into their Adrenalin drivers. And they could totally do that, since they have the know-how.
"Haven't any roasts prepared or anything?"
"I wish, but its not worth my time."
*Boom* 💥 🔥🔥🔥
PNY is an American company that does a TON of data center GPUs and has kinda started dabbling in consumer GPUs a bit more with the 4000 series
PNY has been an NV OEM since before GeForce was a thing for NVIDIA; I think they started with the RIVA TNT. They have always been the lowest-end reference designs.
I worked at Staples back in the stone age, and we carried PNY RAM. This was the SIMM/DIMM SDRAM era, and the return rate was abysmal. I've never gotten the impression their consumer line-up has improved to any notable extent.
I own a PNY 4090 XLR8 OC...
It's a lovely card. Cool and quiet. Full frame. Solid performance. Vapour Chamber, 8 pipes (two 8mm). Triple circle enclosed fans. Compact design. It's not for benchmarking. But I get an excellent undervolt/OC at 2760MHz @ 975mV with +1200MHz (12%) VRAM OC. It doesn't look like a giant maxipad or candy bar so I'm quite happy.
Not only data center cards, they also do all the expensive Quadro workstation cards. I have two Quadro RTX 5000's in my engineering workstation and their radial fan design is better thought out than some of the consumer brands axial designs. I hold them to the same standard as EVGA was. Their fan bearings are of quality, which many other board partners cheap out on.
My sister-in-law bought an RTX 3060 Ti Eagle OC from Gigabyte. Loudest card ever, even more so than a GTX 480, and rubbish cooling; the flow-through area was completely covered by a brittle plastic backplate. She sent it back and replaced it with a PNY 3060 Ti, and the PC is super quiet. Very big flow-through and quality fan bearings.
PNY has been doing GPUs for ages
A few topics here are odd.
1. VRAM usage -> Like RAM, it will use as much as it can get so it doesn't need to hit the disk. As long as you are not actually running out of memory, it's fine. You want it to fill up, since you pay the electricity cost whether it's full or only 1 bit is used, so better to avoid having to go to slower memory (/disk). (See the sketch after point 2.)
2. FSR being open source does not stop AMD from optimizing it for their own hardware (or overriding/hooking it at the driver level so another implementation is used), but it does increase the chance that the game ships with the correct API calls so they can use it.
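To illustrate point 1, a minimal sketch of the "fill it up, evict only under pressure" idea; the budget and sizes are made-up numbers, not how any real driver is implemented:

```python
# Toy texture cache: VRAM stays as full as possible, and eviction only
# happens when a new allocation would not fit. Unused VRAM buys nothing.
from collections import OrderedDict

VRAM_BUDGET_MB = 8_000  # assumed budget for illustration

class TextureCache:
    def __init__(self):
        self.resident = OrderedDict()   # name -> size in MB, in LRU order
        self.used = 0

    def request(self, name, size_mb):
        if name in self.resident:                 # hit: free, stays warm
            self.resident.move_to_end(name)
            return "hit"
        # Evict least-recently-used entries only under actual pressure;
        # until then, keeping VRAM full costs nothing and saves disk trips.
        while self.resident and self.used + size_mb > VRAM_BUDGET_MB:
            _, freed = self.resident.popitem(last=False)
            self.used -= freed
        self.resident[name] = size_mb             # miss: slow fetch from RAM/disk
        self.used += size_mb
        return "miss"

cache = TextureCache()
print(cache.request("rock_albedo", 512))   # miss
print(cache.request("rock_albedo", 512))   # hit
```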
Yeah, I came here to say point 2...
Supporting your competitors' hardware is not part of open sourcing the software. I agree that maybe AMD doesn't need to focus as much on supporting old Nvidia cards, but at least for me, them open sourcing their software is a big selling point. It makes them more of a good-faith player than Nvidia.
When the RX 7600 first launched, I think I saw a laptop review where the AMD card often performed significantly worse than the RTX 4060 because it was using just slightly more VRAM, and they were testing games that all teetered on the edge of 8GB. AMD went over a lot more often and suffered hard. Even if it was just using like 300MB more, it was sending it over the edge more often and choking up.
This is true for memory in general. As long as you are able to run everything you need, you want your game and operating system to use as much memory as possible
I imagine their odd take on point 2 is because they looked at it from a gamer point of view rather than the developer point of view. If you're trailing, OSS is a good way to keep your technology incorporated in as many products as possible, and so you keep good support despite your position.
I think what they meant by open source isn't actually anything to do with it being open source, but that they are spending limited dev time making FSR compatible with RTX and GTX cards as well as older Radeon designs like Polaris, instead of spending that time making it work better on modern Radeon cards. That's nice for the people with a 1660, 1070 or a 580, but it's not doing anything to help sell Radeons.
I have an XFX Merc 319 6800 XT and it's great. Runs quiet, cool and fast. I also really like the XFX style: no RGB puke or other weird light garbage, just a lit-up XFX logo.
Have their 7900XTX Merc 310, very clean white lit up logo. Both the 4090 FE and their XTX just have clean professional looking white light. Our 7800XT Gigabyte OC has the RGB logo going on, very niche, matches with RGB ram. My main pc has a black out theme in a white case, elegant having less flashy elements sometimes.
Yeah Xfx cards look great and run great. I have 4 in my house all bought within the last few years. Build quality and no rgb is what I wanted.
@@MaryannLynch-z9c Exactly the same! Had the XFX 6900 XT, very decent style but a bright "Radeon" RGB logo, not really matching my black-white aesthetic of the build. Now, having an XFX 7900 XT, and it is just perfect.
6950xt here, it does everything I want at 4k
XFX used to be one of the most recommended manufacturers out there back when they also made Nvidia cards. I remember cross-shopping between EVGA and XFX for my 8800GT when I was building a PC to play the original Crysis. And from what I’ve gathered over the years, XFX is very much still considered one of the “big 3” of the red team.
I ended up getting ahold of an MSI Ventus 3080 10GB back in December 2020 (for actual MSRP, too), and though it has served me very well, it has a plastic backplate. I was used to the build quality of EVGA and Sapphire cards, and holy crap, the shrouds on MSI's newer graphics cards feel ridiculously cheap!
Simple: Never Buy MSI.
XFX is doing better after being called out for those plasticky "THICC" series GPUs with the 5700 XTs.
@@alrecks619 Yeah they had a big turn around, I had a Thicc 3 ultra 5700xt and you could drop temps by like 6 degrees by popping the back panel and removing some of the excess shrouding
MSI makes good mobos, but I'd never buy anything else from them
I opened an MSI 3070 and compared it to a Colorful and a Gigabyte Aero 3080; I even saw the 3070 version online. Yeah, MSI's GPU PCBs are so cheap
11:00 Honestly, I've had far better experiences with the smaller brands than with the big boys to the point where I'll now actually prefer them.
FSR shines on old GPUs. Probably the best-case scenarios would be the 5700 XT, 1070, and 1080 Ti: an affordable first-hand option (5700 XT) and the second-hand market for Nvidia's old-generation GPUs.
Having said that, I found a lot of value in the 5700 XT with its pricing below $200.
Nvidia introduced very efficient frame buffer lossless compression back in the GTX 1000 series. In fact, they were specific about that at the time
Don't throw your wallets up here
Zotac is very popular in India as it provides 5(2+3) years of warranty
Bought my first GeForce card from them. It gives me 😊 peace of mind.
Here I see way more "repaired" Zotac cards than any other brand. Might tell you something. They get hotter than other brands too, and they're cheaper.
On their website they don't even tell you directly where their service centres are. You need to mail them with your invoice, and only if it's OK will they mail you the location. Very annoying experience.
zotac is also junk
If you look at used GPUs for sale, nearly every Zotac card has fans that are broken, or loud and wobbly... according to the description by the seller
@@me-df9re "Repaired" counts should also be compared to sales volume. If one brand has 100,000 users and the other 50k, obviously the first one will show more repairs; what matters is the % of the whole number
Awesome q&a, great questions and answers. Thanks for taking the time guys
It's not just that there are people who prefer XFX over ASUS; it's that XFX, PowerColor, and Sapphire make the best AMD GPUs, and the big three generally don't spend much money making sure their Nvidia-oriented designs work well on an AMD board.
I wish they could compete with CUDA too for developing AI models, but there is just too much support and momentum behind Nvidia from the network effect, I can't even imagine AMD competing with nvidia on this front.
Technically the models aren't written in CUDA; the stuff that runs them is PyTorch, and that has a backend for AMD GPUs, though it only runs on Linux (for image generators specifically). Part of the functionality has been ported over to Windows, and that runs language models fine.
You can run CUDA on AMD, but people don't like change. It's easier to just repeat the same old things even when the world has moved on.
@@sammiller6631 ZLUDA isn't a thing anymore for AMD, and the HIP conversion requires manual effort. So how, exactly?
@@anonapache With the same effort that Linux lovers want everyone else to use Linux with. If Linux is easy for everyone to use, running CUDA on AMD shouldn't be any different.
It's a matter of wanting it. Those who say they want to use AMD are full of bullshit. They should be honest about what they really want.
@@sammiller6631 Wtf, what's wrong with you?
FSR can be good; just look at GoW Ragnarok and Ghost of Tsushima. The thing is, when you compare one of the worst implementations of FSR (Cyberpunk) to DLSS, DLSS will always look way better
On the other hand, there's a lot of talk about FSR 3.1 ghosting; it's almost non-existent in GoW Ragnarok (snowflakes do leave a trail), but compared to how uncanny ghosting with DLSS in Wukong can be, I'd take FSR every day of the week
Does FSR break water reflections like DLSS does? Look at the water whenever you are in a boat and moving.
@@sias.2569 TechPowerUp uploaded a comparison of XeSS, FSR and DLSS in the boat part 3 days ago; you can check that 2-min video
I think the only thing FSR does to water in that game is make puddles slightly shimmer on the edges, nothing you will notice if you don't stare at it. As for reflections in that video, all looks good, but there's no native to compare
DLSS in Wukong is buggy; it looks very grainy compared to Alan Wake 2 or Cyberpunk. If you want to compare FSR to DLSS, use the DLSS in Cyberpunk: stable, not grainy, barely any ghosting or artifacts. In fact, it looks almost as good as native even at 1080p.
FSR looks good so far, but it's still having a hard time recreating small fine details, especially those in the distance. Ghost of Tsushima has falling leaves disappear then suddenly reappear, and trees in the distance look blurry compared to native. I imagine FSR 4 might fix this with AI, but right now DLSS is still better, especially at lower resolutions, since people like me with a 3050 laptop can use DLSS at 1080p and it looks almost as good as native 1080p
I actually had strong ghosting with FSR in Wukong and switched to XESS.
@@camdustin9164 *ahem* That's an Nvidia-sponsored title, and when an AMD-sponsored title has problems I hear about it from every channel. My main point was that most comparisons are not with peak FSR performance; it's like setting up a race between Usain Bolt and the 2nd-fastest runner, but the 2nd runner has an iron ball chained to his leg - there's no competition... Using Cyberpunk is Nvidia shilling at its best, since FSR 2.1 there looks worse and worse with every patch they release, somehow, I don't know how, and FSR 2.2 with frame gen is even worse, somehow
1080p is a very low base for FSR. Raindrops also have that problem, probably because AI can take more things into account, while a hand-written algorithm is more likely to miss them while comparing neighboring pixels
For Radeon, Sapphire and PowerColor are kings, XFX is a good affordable option, and the rest is... the rest. In short, it's like EVGA: the best-performing and best-quality products are not from the biggest corporations, as they are doing too much at once
@1Grainer1 I don't get why evga is so praised tho. They made quite some f ups
@@El_Deen they did a lot of custom stuff before Nvidia restricted it, and they were overclockers, so a lot of their stuff was made strictly with overclocking in mind, which made it easier to prolong product life. But yeah, looking at repair-shop content and their gripes about EVGA, there were a lot of problems
Here in France, the Sapphire Pulse versions of AMD cards are often the cheapest available models, while being a very solid choice (but Nitro+ is king)
Got myself a 7900 GRE Pulse and I'm so happy with it.
@@hectorvivis3651 Same here in Poland: Pure > Nitro+ = Red Devil > Hellhound > Speedster > Pulse = Merc
So Sapphire has the Pulse at MSRP and the Pure as a binned Nitro+; PowerColor does great things, but I don't think they have an MSRP model; and XFX is the MSRP option + middle of the pack... Everything is still affected by the silicon lottery, so one card can be better than the diagram suggests. The funny thing was when a repair shop got their hands on a Nitro+ and couldn't melt the solder, since it had no additives that help with melting; it was a tank, and the 8-pin was melted, so quite a rare problem
Unfortunately AMD didn't specify anything about whether that will be FSR 4 upscaling or just FSR 3.1 with some AI frame generation, and there's also no information on whether it will be supported by modern GPUs such as the 7000 series and above.
And I can easily recommend the Kryosheet, as I have this gorgeous little thing in my 7900 XT from Gigabyte. It dropped my temps from 90-95C with fans at 2000rpm+, all the way down to 82C at 1400-1500rpm at stock, and 88C max at 1700-1800rpm with almost 400 watts after unlocking the power limit. But there are literally no FPS increases beyond a 2-5% power limit bump, so I run 3% over stock and get like 84C at 1500-1600rpm max under full load.
So yeah, I truly recommend this sheet if you've had enough of the pump-out effect on your GPU. Just use thermal electrical tape (0.1mm yellow) around the die to secure the transistors and you're good to go.
Nice to see the boys together.
The RX 6700 has nearly the same specs as the PS5, with a 160-bit memory bus and a slightly higher clock speed, and both deliver similar performance according to Digital Foundry's tests. So the idea that a PS5 Pro will surpass the 7800 XT makes zero sense. The 7800 XT is about 70% more powerful than the RX 6700, and Sony itself claimed the PS5 Pro will be around 45% faster than the PS5. Do you really think the 7800 XT is only 45% faster than a PS5? Of course not. The numbers don't lie; believing the PS5 Pro can match that level of performance is pure delusion.
Yeah, Tim mentioned 256-bit bus but PS5 Pro has to use that for the CPU too. And I think it has a smaller on-die cache so it needs more bandwidth too.
Yeah, currently for unoptimised third-party titles the PS5 sits around 2070 Super to 3060 Ti/6700 XT level, but for PlayStation's god-tier optimised first-party games, the PS5 can definitely blow the 6700/3060 Ti away, like God of War 5, The Last of Us Part 1, and Horizon Forbidden West: Burning Shores. The main reason to have a PS5/Pro is to play those god-tier first-party titles. By the way, can you use an 8GB 3070 to match the PS5's 60 FPS 1872p performance mode in God of War 5?
@@thepaintbrushonlymodeller9858 RTX 3070 seems to be faster than PS5 in God of War Ragnarök. You can compare to the high frame rate mode because it’s 1440p with unlocked frame rate. But there’s no dynamic resolution on PC so idk how you would replicate the 60 fps mode.
@@Rachit0904 I checked ps5 60FPS mode res is between 2688X1512 and native 4K, but it's uncommon to reach highest and lowest res, usually at around 1800-1872P. 8G 3070 cannot do this res with highest textures. Yeah 1440P 3070 can outperform PS5 quite with high textures, high settings maybe. This also depends what CPU you use coz PS5 CPU is kinda weak that is unable to provide you with high frame rates.
@@thepaintbrushonlymodeller9858 good point about VRAM and PS5 possibly being CPU-limited in high frame rate mode
For AMD GPUs, I specifically only buy XFX. Their customer service is next level and the quality of their cards are awesome.
PNY is really big in workstations in the US; because of the CUDA monopoly most of these are Quadro cards, but I can't recall seeing anything other than PNY in the enterprise space.
I mean, HP and Dell make cards like the A100 and H100, but outside of those you don't really buy an HP or Dell workstation card unless it comes inside one of their workstations.
When talking about the PS5 Pro, my concern is that it is supposedly still Zen 2.
That means it may be just as slow as my 4700S/BC250 paired with a 7800 XT.
Zen2 APUs, including PS5, have only 4MB of L3 available to any one core, Zen3 on the other hand has 16MB available to any one core due to having double the L3, along with a unified CCX/CCD
I would genuinely be surprised if they didn't at least make a unified-CCX version of Zen 2; maybe they keep the 8MB of cache instead of bumping up to the 16MB in Zen 3, but I think that would be a bad idea.
APUs share a memory bus, the more cache the CPU has, the less it uses the RAM, the less it uses the RAM, the more cycle time the GPU has to work with.
Even with Zen 3 I suspect it will be slower than an RX 6800 in raw performance even if it's RDNA3-based. Console optimizations can hopefully make the thing perform better than a 7800 XT, and I'd imagine that if they went with 16MB Zen 3+ it might perform similar to a 7800X3D + 7800 XT.
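To put rough numbers on that shared-bus point (all values here are invented for illustration, not real console specs): every CPU L3 miss goes out to the same RAM the GPU feeds from, so a bigger cache directly frees up bus cycles for graphics.

```python
# Toy arithmetic: CPU traffic on a shared memory bus at two L3 hit rates.
accesses_per_s = 2_000_000_000   # CPU memory accesses per second (assumed)
line_bytes = 64                  # bytes fetched per cache miss

for l3_hit_rate in (0.80, 0.95):  # small split L3 vs bigger unified L3
    miss_traffic_gbs = accesses_per_s * (1 - l3_hit_rate) * line_bytes / 1e9
    print(f"hit rate {l3_hit_rate:.0%}: ~{miss_traffic_gbs:.1f} GB/s of the "
          f"shared bus consumed by the CPU")
```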
MSI GTX 1080 to a XFX 7900XT both have been wonderful. Still using the 1080 in the living room rig.
Regarding the Nvidia color thing: I believe you are referring to how Nvidia cards default to a limited dynamic range when using HDMI while AMD defaults to full. You can change the setting in the Nvidia control panel to use 'full' over HDMI; DP defaults to full. I think they work under the assumption that HDMI means TV, and TVs operate best with limited since they would compress the dynamic range anyway.
As for the CPU test with the 7900 XTX: I've seen with my own eyes how the 9700X at 1080p performed 10% faster than with a 4090. One of the technicians at my local dealer was playing with the newly arrived CPUs, noticed it, and was showing it to us all. But since their job was basically to sell those CPUs, I took those results with a grain of salt. The explanation was basically that the better branch prediction of the 9000 series CPUs, combined with the increased IPC, was improving communication with the GPU through SAM, and it wasn't working the same way with Nvidia GPUs, which DO have Re-BAR, but it isn't exactly the same as AMD's Smart Access Memory. Whether it's true, idk, but it kinda makes sense for AMD to implement something and boost performance on an all-AMD system. I've been expecting them to do that for 5 years. After all, they've been talking about boosted performance in their own ecosystem ever since 1st-gen Ryzen came out.
PNY cards are the only nvidia GPUs sitting at msrp in my country, the rest are marked up 20-30% sometimes 50%
My PNY 4090 XLR8 OC is an excellent card. You most you'll get out of a way more expensive card is about +5%. ImWateringPSUs only had 2% better performance from his card.
12:40 you're 100% right. I'm in the middle east and I have a pny 4070 and most of the GPUs here are pnys and zotacs and XFXs
I bought my PNY 4090 XLR8 OC in Australia so I don't understand these guys here. They sold quite well. It's a great card.
amd gpu gang
I thought you all changed your name to broke boys gang?
@@tomgreene5388 #ROASTED those broke 🅱️ois amirite champ? LOL! You've won the internet for today.
@@tomgreene5388 Nice troll but it doesn't really land. Everything is expensive and overpriced these days, including Nvidia and AMD GPUs.
@@tomgreene5388since i’m broke, can you please buy me an rx 7900XTX for me since you are so rich?
@@tomgreene5388 7900 xtx is 1000 $ what are you on
Great FSR analysis 💯👌
20:55 Not really; there is no magical "it's better because it's a console-optimized GPU". The PS5 GPU is basically RX 6700 hardware and it performs around the same as a PC RX 6700 - sometimes a little slower (when the game wants more compute power) and sometimes a little faster (when the game wants faster memory). The PS5 Pro will most likely land a little below the 7800 XT (if it's RDNA 3), because it has both slower memory and lower compute performance.
But if PSSR is decent, and it does have RT from RDNA4, then the ps5 pro can output a better image from a lower input resolution, and will have better performance than any AMD equivalent GPU. Just like RTX cards in the same performance tier.
@@ArchieBunker11 According to digital foundry it's like an underclocked 4070. Which means it has close to 6800xt/7800xt levels of raster performance but much better RT performance and a better upscaler.
@@enmanuel1950 As far as raster is concerned, the base 4070 is slightly worse than a 3080 and 7800 XT, and marginally better than a 6800 XT (speaking in averages). An underclocked 4070 would be closer to a 6800 non-XT. Also, Sony themselves say "45% better than a PS5", and the 6800 is 45% faster than the 6700.
Although I'd grant you the image quality and RT.
Yeah, currently for unoptimised third-party titles the PS5 sits around 2070 Super to 3060 Ti/6700 XT level, but for PlayStation's god-tier optimised first-party games, the PS5 can definitely blow the 6700/3060 Ti away, like God of War 5, The Last of Us Part 1, and Horizon Forbidden West: Burning Shores. The main reason to have a PS5/Pro is to play those god-tier first-party titles. By the way, can you use an 8GB 3070 to match the PS5's 60 FPS 1872p performance mode in God of War 5?
@@thepaintbrushonlymodeller9858 You're wrong about things you can look up yourself. The 3060 Ti can play native 1440p60 in Horizon and GoW maxed out at just under 60; at PS5 settings it's considerably higher. It can also run native 4K40 at the same settings, which is objectively better than the PS5. But to be fair, the 3060 Ti is generally more of a 6700 XT competitor. It certainly beats the PS5 more often than not, and frankly, with DLSS it can run those games at 4K DLSS Performance and get better image stability and performance at higher settings
Let's go! Nice timing on this one :)
I think console cost + online cost over the lifespan of the console makes it equal to or more expensive than a same-tier PC; it's just that the cost is not upfront. I also feel the PS5 Pro will be more like a 7700 XT rather than a 7800
6:20 MLID mentioned that one of his contacts at AMD says they've been working on it for a while now; I think he said over a year. TBH, I'm glad they aren't rushing it out. Hopefully it gives DLSS and XeSS a run for their money out of the gate.
The problem with waiting is that DLSS is already included in hundreds of games, while AMD users are stuck with various FSR implementations, which generally lack in quality. AMD needs to get it out to start building a software library with great FSR implementations; this will take years.
With Intel claiming 0x129 fixed the core issue of the Intel 13th & 14th gen stability problems, and yet having released another microcode update, 0x12B, which they again claim improves the stability of those CPUs, what do you think they've done this time, and is it time to do another Intel CPU testing update?
The first microcode update didn't cover all motherboards.
We bought a 3070 from Zotac about 3-4 years ago and it's stellar and no problems all this time.
I see an HU video, I click instantly. Glad to see you guys are not standing; give your legs some rest
Tim mentioning AMD playing catch-up being a good thing... I still think this points at the heart of the problem: letting nVidia dictate what features are the standouts and must-haves.
Doing so guarantees that nVidia will always have a head start on that feature that AMD realistically can't ever match or exceed. We saw it happen with Raytracing and again with DLSS.
Then, every single time AMD updates FSR or Raytracing, reviewers are disappointed that AMD hasn't magically caught up to nVidia, like it could back in the days when it was just about raster performance. What were you realistically expecting to happen when they're constantly playing catchup?
nVidia has more resources to throw at it, and because they're dictating features, much more dev time as well. These aren't things you can simply overcome just by trying harder.
The better path for AMD, at least for me, is to find something distinct to add to the mix, get it running well so they have the head start in development, then release it at a price that isn't losing them money but isn't as overly optimistic as their recent releases have been.
Sapphire and Powercolor make the best GPUs
Can confirm I have Sapphire Pulse RX 7600 Overclocked edition
@@davidbooth1634 they are also the most expensive(AMD)
@@El_Deen I got a Sapphire pulse 7900xt last year for $720. That deal was very good so I pulled the trigger on a new PC.
Early PowerColor cards weren't so great. I had two HD 4830 512MB cards die within 2 weeks.
“It was interesting to see amd talk about this… normally with their future looking stuff they kind of hide it away” …. FSR3 frame generation would like a word
I think AMD should continue to make FSR an open standard but because they know in advance how the features will be implemented, they should design their hardware to be more efficient for it
Yes... but no... because Nvidia didn't plan Tensor Cores for DLSS; they were made for enterprise and AI-oriented workloads. They just saw them doing nothing in gaming and gave them a use, just like NVENC and the new optical flow accelerator that was also designed for computer vision but got used for frame gen. That's the cool part of hardware-accelerated stuff: it occupies space on the die, but its use mainly occupies power-draw budget.
That said, if AMD wants the same recognition, they just need to do it like Nvidia and build things for whoever is paying for them...
@@pedro.alcatra the idea is to adapt FSR to maximize efficiency on AMD hardware (in this case, make full use of their tensor core implementation). Unless AMD and Nvidia hardware work the same way, it stands to reason that the most optimized software implementation for one does not work as well on the other.
You absolutely still need FSR to be open, as AMD is the underdog, so game developers need an incentive to add support. If they can get some improvement for all users (an open FSR) vs a lot of improvement for a small number of users (a closed FSR) vs a lot of improvement for most users (closed DLSS), they will probably rank those choices as either 1-3-2 or 3-1-2, but they would be insane to pick a closed FSR as the primary path forward.
My understanding is that DLSS 'AI' is merely AI-tuned rather than 'run in AI.' I think leveraging this level of 'AI' is fine, as it streamlines tuning the various upscaling logic to most closely match the original target resolution. This is something that could work with previous versions of FSR... although it would require tuned profiles on a per-game basis.
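A toy sketch of that distinction, with entirely made-up names and values: "AI-tuned" means training happens offline and only fixed constants ship, while "run in AI" means trained weights ship and a network is evaluated for every frame.

```python
import numpy as np

TUNED_SHARPNESS = 0.42  # hypothetical constant found by offline optimization

def upscale_tuned(frame):
    # Hand-written sharpen filter; only its strength came from "AI".
    blurred = (frame + np.roll(frame, 1, 0) + np.roll(frame, 1, 1)) / 3.0
    return frame + TUNED_SHARPNESS * (frame - blurred)

W1 = np.random.rand(9, 16).astype(np.float32)  # stand-ins for trained weights
W2 = np.random.rand(16, 1).astype(np.float32)

def upscale_inference(patches):
    # patches: (N, 9) pixel neighborhoods; a tiny network runs every frame,
    # which is the part that benefits from dedicated matrix hardware.
    return np.maximum(patches @ W1, 0) @ W2

frame = np.random.rand(8, 8).astype(np.float32)
print(upscale_tuned(frame).shape)  # (8, 8): same cost as any shader filter
```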
Asking Tesla to test drive self driving tech on a VW as it's more reliable is like asking AMD to test on Nvidia.
What's next, HW Unboxed just doing a Gamers Nexus clips channel because they did better testing or have better equipment on certain topics? Nice for the consumer, but it wouldn't be a great look for you guys.
It's just not a smart decision to push a rival's card; however, if the percentage uplifts are the same, then it doesn't really matter
When I was looking to buy my 7800XT my first option was Sapphire, but I got a great deal on an ASRock Steel Legend model, very quiet and cool card.
@20:32 The PS5 came out around the same time as the 30 series, but it was equivalent to a 2070 Super/6650 XT/1080 Ti, NOT a 3070, which is 35% faster and closer to a PS5 Pro GPU.
Later ports run like crap, even the native ones; that's a myth. You need more than a 3070: you need a 3070 plus all the components the PS5 already has.
You are full of memes and myths.
The PS5 has a 6700 non-XT, so slightly below 3060 Ti / 6700 XT performance
@@Jackson-bh1jw Yes, if you look at Death Stranding on PS5, it performs very close to a 3070. The RT performance creates a gap.
Yeah, currently for unoptimised third-party titles the PS5 sits around 2070 Super to 3060 Ti/6700 XT level, but for PlayStation's god-tier optimised first-party games, the PS5 can definitely blow the 6700/3060 Ti away, like God of War 5, The Last of Us Part 1, and Horizon Forbidden West: Burning Shores. The main reason to have a PS5/Pro is to play those god-tier first-party titles. By the way, can you use an 8GB 3070 to match the PS5's 60 FPS 1872p God of War 5 performance mode?
Another myth: unoptimized ports... Well, even the native ports run like crap, e.g. Warzone. Full of memes and myths.
I'm primarily a Radeon guy, and I rarely (if ever) buy from the big 3. It's typically Sapphire > PowerColor or XFX
Nvidia has quite a few wins over AMD when it comes to AMD catching up... but as long as AMD puts the effort in, I don't see why it couldn't be just as good, maybe better if they figure something out Nvidia has not. But that's gonna be hard, since Nvidia is a money-printing factory
XFX and Powercolor almost always have a great value around the EU and have been recommended very often. When you see XFX Merc models being similarly priced as ASUS Dual the recommendations are super easy.
I honestly think people are giving AMD too bad a time regarding FSR. As a technology, it's actually REALLY good: incredibly efficient (not needing matrix acceleration), and the performance benefit and quality are still close to DLSS (which should be celebrated). The downside of FSR is that it requires a lot of R&D and has more manual overhead for game developers to yield the best results. That's why we see such wide variability in quality depending on which game it has been implemented in.
Look at CDPR. Their FSR implementation has literally become worse over time, and the latest 3.0 update is laughable. I mean, heck! They solved ghosting on cars using a "reactive mask" in Cyberpunk v1.61, but once the DLSS3 patch came out (1.62, I believe) they just re-introduced ghosting into the FSR pipeline, never to fix it again. Now they have two versions of FSR because they realize they are bad in different ways, with 3.0 suffering immensely when it comes to semi-transparent textures... (FSR literally has a built-in configuration to fix this). So what can you do? Well, today I run an FSR 3.1 community mod that runs over the DLSS pipeline; it's actually GREAT and gets rid of most of the problems CDPR seems to be having.
If they want to compete with Nvidia and Intel they need to use AI upscaling. DLSS and XeSS are usable at 1080p while FSR looks blurrier and pixelated. I tried using DLSS Balanced in Cyberpunk: with FSR I could see jagged edges around the eyes of the characters, while with DLSS it looked more antialiased and the image was a little softer in general. Heck, if you have an AMD or old Nvidia card you could use XeSS 1.3 and it would look better than FSR at similar FPS
@@joxplay2441 To be honest, these upscalers were designed to make 4K viable with modern graphics, not to be used at 540p to reach 1080p. However, if I compare the FSR 3.1 mod in Cyberpunk vs XeSS 1.3, playing at 1080p "Performance", FSR 3.1 is by far the better solution. It's sharper and temporally more stable. When looking at distant faces, they wobble with XeSS; with FSR they are stable and sharp. Road textures at narrow angles flicker more with XeSS, and vegetation is muddy and blurry. So AI in itself isn't a solution; ultimately it's all about how efficient and well made the algorithm is. Keep in mind, FSR wins here because it's a mod and a better implementation than the official one... I would choose XeSS over CDPR's official FSR 3.0 implementation.
To note, just because an algorithm was made with the help of "AI" doesn't necessarily make it better; it makes it cheaper to develop, since computers are doing a lot of the heavy lifting (DLSS, XeSS) instead of graphics engineers painstakingly designing the algorithm themselves (FSR). Personally, I would be interested in how efficient FSR would be if it used matrix accelerators like the Tensor Cores, because ultimately, when you execute these algorithms (no matter if they are "AI" or not), they are just instructions for the GPU. So this whole thing with "AI algorithms" is marketing bullshit, because Nvidia wants to be perceived as an "AI company" by investors, and AMD is following suit due to the massive amount of money Nvidia is making. AI is an investment bubble, because investors have started to believe in its prospects...
But to put it simply, matrix accelerators are great at comparing a lot of values simultaneously (that's literally what AI/machine learning does at a low level). Know what else needs to compare millions of color values on screen? Temporal upscalers, AA, etc. So whether it is AI or not, accelerators help these algorithms perform better; DLSS is a far heavier algorithm than FSR and XeSS, but due to hardware acceleration it pulls ahead.
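In code, the connection looks roughly like this: a per-pixel filter over a frame can be rewritten as one big matrix product (the "im2col" trick), and large matrix products are exactly the shape of work tensor/XMX-style units run fast. Sizes and the kernel below are arbitrary toy values, not any real upscaler's weights.

```python
import numpy as np

frame = np.random.rand(6, 6).astype(np.float32)    # toy single-channel frame
kernel = np.random.rand(3, 3).astype(np.float32)   # stand-in for a tuned filter

# Gather every 3x3 neighborhood into one row each (16 valid positions).
patches = np.stack([frame[y:y+3, x:x+3].ravel()
                    for y in range(4) for x in range(4)])  # shape (16, 9)

filtered = (patches @ kernel.ravel()).reshape(4, 4)  # one matrix-vector product
print(filtered.shape)  # (4, 4): the whole filter pass became a matmul
```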
I have a Sapphire Pulse 5600 XT and I love how silent it is when using the silent BIOS, even though it's a two-fan card. It's better than the big brands because, first of all, they don't even offer dual BIOS on their cheap models, and they are so loud; I'd have to spend more money just to get the same experience as with this card.
Fun fact: the PS5's motherboard, RAM, and APU cost Sony only $115 lol.
In parts maybe, not in research and development.
No. They cost nothing.
The few grams of sand metal and plastic used are practically for free.
@@jemborg but they literally didn't design it AMD did lol
@@FBAS2 not all of it Chuckles.
@@jemborg the SSD is literally the only thing they designed, and it is just the speed of an NVMe 4.0 drive but with BAR-type support for direct memory access instead of having to use the CPU as a pass-through.
I bought a PNY 4080S and am thus far happy with it. I don't get the "easy overclocking" with power limits, but it runs great and cool
PSSR = FSR 4. Why would Sony (who have outsourced all graphics development to AMD for years) suddenly decide to develop their own in-house upscaling? Makes no sense from a financial or expertise point of view. The logical move would be to get AMD to develop it for them. It's no coincidence AMD announced AI-powered FSR 4 seven days after PSSR was announced. Not sure why everyone keeps talking about them as if they're different technologies.
I bought the missus an XFX 6750 for about £300, it performs as expected and i was really impressed by the build quality for the money
Will note on the PS5 Pro comparison: the 7800 XT is way faster, primarily due to no Infinity Cache on the PS5 Pro.
And that has historically been a thing that cripples an RDNA card versus its IC-bearing counterparts (680M/780M vs 6400, 890M vs 6500 XT, etc.).
So the 7700 XT is a fair raster comparison for the PS5 Pro, given the Pro's lack of Infinity Cache.
As for AI/RT performance, that is more variable, but feature-wise rumors state RDNA4 (which Cerny has said the RT is actually coming from) will at least match Ampere's RT feature set. So we can probably look for GPUs in the Ampere gen that are around 7700 XT level in raster and go from there.
Which would lead us to an RTX 3070/3070 Ti for a lowball or a downclocked 4070 for a highball
The VRAM is shared with system memory though, so it's not exactly the same. The PS5 GPU was roughly a 6700 non-XT and performed close to one, and console-only optimizations would bridge the gap. Infinity Cache helps with bandwidth issues that aren't 100% present on consoles due to the unified memory config. That's also why modern games tend to require loads of VRAM: consoles don't have to worry about transfers between CPU and GPU like a PC typically does
@@PelonixYT no, not quite. The 6700 OEM is the card closest to the PS5, and even then, outside of wacky cases (poor optimization) on PC, it often significantly outperforms the PS5 at some resolutions. Again, due to Infinity Cache.
@@Alovon Usually it's the same performance, as shown side by side by Digital Foundry
@@puffyips Yeah, that's when matched to PS5 Settings at higher outputs. At lower input resolutions though Infinity Cache's benefits become apparent.
The PS5 Pro GPU is estimated to be about 50% faster than the PS5, which puts it at 6% faster than an RX 6800 in raster but at a 4070 Ti in RT
WRT VRAM usage: framebuffer compression doesn't actually save memory (it might actually use very slightly *more*), since the hardware can't know in advance whether a block is completely random and simply cannot be compressed, so the full-size allocation has to stay reserved. It just helps memory bandwidth, as "most" blocks are smaller to transfer. That's different from "lossy" texture compression, like the DXT/BC formats, but that's handled by the game itself, as it knows which textures it can lose precision on.
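A minimal sketch of why that is, with made-up block sizes: an incompressible block has to fall back to raw storage, so the full-size allocation stays reserved either way and only the amount moved over the bus shrinks.

```python
# Toy lossless block compression: saves bus traffic, never capacity.
def compress_block(pixels):
    base = pixels[0]
    deltas = [p - base for p in pixels]
    if all(-128 <= d <= 127 for d in deltas):   # deltas fit in one byte each
        return ("delta", base, deltas)          # cheaper to transfer
    return ("raw", list(pixels))                # worst case: full cost (+ a tag)

def decompress_block(block):
    if block[0] == "delta":
        _, base, deltas = block
        return [base + d for d in deltas]
    return block[1]

tile = [500, 501, 499, 502]                     # smooth gradient: compresses
assert decompress_block(compress_block(tile)) == tile
```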
The PS5 itself is identical in performance to an RX 6700; there is no doubt that it matches both raw and RT performance. So, accounting for everything announced for the PS5 Pro, the only GPU that matches the claimed performance gain and the "next level RT processor" and so on is the RX 7700 XT: it has 42% more performance than the RX 6700 and also a next-level RT processor.
It can't be the 7900 GRE, because that is an 89% uplift, and the 7800 XT is a 72% uplift from standard PS5 performance.
The Nvidia equivalent would be an RTX 3070 Ti, although its raw performance is a little bit lower and its RT performance a little bit higher.
PS5 Pro has a machine learning-based upscaler and more than 10 GB of VRAM, making it equivalent to 3080 and 4070.
@@Kukajin That is too much of a performance improvement over the PS5. The amount of VRAM isn't an issue; the GPU is always custom made, the memory is usually shared, and they allocate it the way it's needed.
The 7700 XT has 12GB of VRAM, but the PS5 uses a custom GPU with 13.7GB... not exactly a reason to call it a mismatch, just add more memory modules to the 7700 XT.
Also, Intel XeSS is an AI-based upscaler and is compatible with AMD GPUs... PSSR isn't FSR, and "FSR" doesn't need to be AI based for the GPU to match. PSSR is its own upscaler, like XeSS is, and TSR, and FSR too.
All of which can run on the PS5, because they aren't GPU-specific technologies like DLSS (maybe PSSR will be, who knows).
Focus on the performance increase: if anything goes beyond +50% of an RX 6700, it's already better than what the PS5 Pro is capable of.
@@BrunoRafaBR Yeah, currently for unoptimised third-party titles the PS5 sits around 2070 Super to 3060 Ti/6700 XT level, but for PlayStation's god-tier optimised first-party games, the PS5 can definitely blow the 6700/3060 Ti away, like God of War 5, The Last of Us Part 1, and Horizon Forbidden West: Burning Shores. The main reason to have a PS5/Pro is to play those god-tier first-party titles. Can you use an 8GB 3070 to match the PS5's 60 FPS 1872p performance mode in God of War 5?
@@thepaintbrushonlymodeller9858 I actually have a 3070 and just finished GoW Ragnarok on PC at 4K60 with a mix of high and medium and DLSS Quality... So... yes? It is not a heavy game for PCs; it even came out on PS4. I just had to stick with medium textures because of VRAM, but it ran smooth as a baby's butt.
@@BrunoRafaBR That's still below the PS5. But the base bitch PS4's GPU is so weak that it's like an HD 7850 with a lot of VRAM. The 1060 and RX 580 are over 2.5 times stronger than the base PS4 on paper, but they cannot even beat it. So direct comparison between PC and console specs gets too naive as time goes on.
Another option for AMD: they could keep FSR open source, but make the AI models proprietary. I would actually welcome that! It could allow the community to train and create their own AI models, which might compete well with AMD's offering. Ideally it would be possible for users to choose which model they want. Also, there could be models optimized for various kinds of games, and even -- gasp! -- trained by the community for a specific game with specific quality settings. That could be amazing.
I feel you overestimate consoles a bit. The PS5 is more between a 3060 and 3060 Ti, and the Pro, according to what Sony said, is 45% faster, which would land it around a 7700 XT, not above a 7800 XT.
7700xt with better ray tracing basically
In first-world countries the big 3 GPU brands are also the sales leaders, but in third-world countries Palit, Inno3D, Zotac, XFX, PowerColor, Galax, Sparkle, (Sapphire to some extent) etc. are the kings of sales for entry-level to mid-range and even high-end GPUs. Chinese brands such as Soyo, Aisurix and many others are quickly catching up, easily beating the big 3 on entry-level GPUs. In Japan, a first-world country, the most famous GPU brand is Kuroutoshikou, which sells both Nvidia & AMD GPUs. ASRock, Biostar, Acer etc. are also joining in the fun after the legendary EVGA left the scene...
I don't understand what open source has to do with tying it to hardware... Do you think you need closed source to tie something more closely to the hardware?
I upgraded from a Gigabyte 3060 TI to a PNY 4080. The PNY has been faultless - quiet and good software for adjusting fan curves. The Gigabyte was a nightmare - every time it woke from 'sleep' mode, it would make this horrendous grinding noise that I was told was normal. The only thing I could do was adjust the fan to run all the time. The PNY looks like a brick but it is solid and doesn't have all the gaming logos and bs associated with other brands. I'll happily buy another PNY.
I don't get why AMD didn't partner with Intel to make an Upscaler to compete with DLSS.
XeSS is the correct direction: you create a super-resolution technique that not only uses a dedicated hardware accelerator only your GPUs provide (XMX), but can also be used on other brands' GPUs, just in an inferior form (DP4a).
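Roughly, the dual-path idea looks like this; the capability flags and kernel names here are hypothetical stand-ins, not Intel's actual API:

```python
# Sketch: one upscaling feature, kernel picked by hardware capability.
from dataclasses import dataclass

@dataclass
class Device:
    has_matrix_units: bool    # e.g. XMX on Arc
    supports_dp4a: bool       # generic 4-wide int8 dot product

def run_model_xmx(frame):        return f"XMX path: {frame}"
def run_model_dp4a(frame):       return f"DP4a path (simpler model): {frame}"
def run_spatial_fallback(frame): return f"non-ML spatial path: {frame}"

def upscale(device, frame):
    if device.has_matrix_units:       # best quality/speed, vendor hardware only
        return run_model_xmx(frame)
    if device.supports_dp4a:          # works on other brands, inferior
        return run_model_dp4a(frame)
    return run_spatial_fallback(frame)

print(upscale(Device(has_matrix_units=False, supports_dp4a=True), "frame_0"))
```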
The memory used by Nvidia can differ from the one used by AMD - there is GDDR6X vs GDDR6 - but it doesn't even need to differ in the type of memory chip. You can have two different GDDR6 speeds, different signal amplitudes, two or four signal levels (NRZ vs PAM4), and many other spec differences. Even switching between chip manufacturers can change a game's need for and usage of available memory.
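Back-of-the-envelope on the "two or four signal levels" part (NRZ vs PAM4); the per-pin symbol rate below is an assumed toy number, not any real part's spec:

```python
# Toy bandwidth math: PAM4 carries 2 bits per symbol vs NRZ's 1.
symbol_rate_gbaud = 10.0          # symbols/s per pin (assumed)
bus_width = 256                   # pins

for name, bits_per_symbol in (("NRZ (GDDR6-style)", 1), ("PAM4 (GDDR6X-style)", 2)):
    gbps_per_pin = symbol_rate_gbaud * bits_per_symbol
    print(f"{name}: {gbps_per_pin} Gbps/pin, "
          f"{gbps_per_pin * bus_width / 8:.0f} GB/s on a {bus_width}-bit bus")
```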
FSR 4 aka DLSS for everyone
"everyone" FSR4 will use AI cores, so not everyone has them, and the people on Nvidia who has them will use DLSS.
@@SweetFlexZ wait, it needs AI cores? Then wouldn't that just be useless???? I thought they were doing it like XeSS DP4a
Funny how frame gen is now accepted 😂. NVIDIA was slaughtered initially for "fake frames", yet the critics went silent.
@@Hi_Im_o2 i mean, the whole point is it's AI driven?
We don't know, but AMD has AI cores that rn aren't being used, so it would make sense, but it would also go against what they have been doing lately, so idk. I have Nvidia so idc @@Hi_Im_o2
FSR being open-source has helped AMD on consoles and on their laptops with relatively strong iGPUs, as well as on budget APU builds, besides whatever small number of gaming PCs use standalone Radeon cards.
And then looking beyond AMD, FSR is viable on mobile (and can be tweaked to run better there, due to its open-source nature). Reviewers may pixel-peep and say DLSS is better, and indeed DLSS may be better, but many times, gamers (and game devs!) are simply glad to have *something* to be able to increase framerates, since an unplayably choppy game is actually unplayable and the customer closes the game and never looks back. That's a gaming experience either ruined or saved by framerate, potentially.
I actually do agree after all that, that if Radeon GPUs can support "the best version" of FSR, AMD should do that. Existing FSR will be open-source forever, per its license, which as far as I understand is effectively irrevocable. I believe they should open-source *some* version of their upscaler going forward as well. But if they have a special sauce that makes it work better on their architecture, much like Intel has done, I think it's fair game at this point and understandable to remain competitive.
I want to re-emphasize that FSR ever being open-sourced has been a game-changer and made upscaling real in the long tail of platforms that aren't a standalone Nvidia GPU (or the small marketshare of Arc GPUs) for the past several years. That's awesome, and AMD deserve huge props for that. Even if the visuals aren't really what I'd like them to be, for systems where it means being able to play at all or not, it's a literal game-changer. So, kudos to AMD for the open-source stuff, it's the rising tide lifting all boats sort of situation and I respect the heck out of it.
21:30 People comparing console to PC on cost forget that you must add in the console's subscription cost, because consoles are largely useless without that monthly subscription
Not really "you must add in console subscription cost". You are talking about the group that competitively games, as free to play multiplayer games/single player don't require a subscription. If you are a single player gamer, or multiplayer for free to play games, a subscription is not a "must". Of course you could pay for a subscription for COD, and/or have access to their game pass subscriptions as well which have a huge library. If I had to pick between my PC and my PS5 for multiplayer, I'd probably pick PS5 for way less cheating in multiplayer, and the games aren't graphically heavy anyways usually. Also parity in hardware, so no advantages due to the almighty $$$, and ability to pay for advantages. PC vs console is stupid, they both have their place, and even though I primarily game on my 4090 PC, when the AIO fails or the CPU, it's nice to have a PS5 while waiting for an RMA. Haven't paid for a subscription at all on PS5 and have played many many games.
Was seriously expecting Steve to look at the phone and then at Tim and say "Read it, boy" :P
The GPU in the regular PS5 is about equivalent to a 6700 XT
The PS5 Pro is said to have about 45% more rasterisation performance, and about double the ray tracing performance over the regular PS5
So, roughly as fast as a 6800 XT in rasterisation and a 7900 XT in ray tracing, using back of the napkin estimation
6700 NON-XT^, and it's actually comparable to the 7800 XT instead, but it still relies on the same CPU (Ryzen 4700 non-G)
In reality the GPU uplift now equals a 6800 non-XT, and the CPU of the Pro is roughly equal to an actual Ryzen 7 3700, instead of a downclocked one that runs at the performance of a Ryzen 5 3600 (like the regular PS5)
I have a 3080 PNY card I bought four years ago. So far, it has worked without a hitch.
They're gonna need more than AI FSR; they're gonna need a DLDSR alternative and Ray Reconstruction, and to actually make their terrible features better, like Video Upscale or Noise Suppression.
Ray Reconstruction is a gimmick. No one needs it.
@@sammiller6631 That's what I thought until I used it.
@@sammiller6631 lol all your comments here are just trying to cope and keep defending AMD oh my poor Sam miller being AMD's meatshield 😂
@sammiller6631 I dislike how RR makes everything waxy in cyberpunk. If Nvidia has a way to implement RR without a hit to image quality, then there's no good reason to avoid it.
@@Eleganttf2 Do you think Leather Jacket Man will notice you if you make excuses for Nvidia? Nvidia has given up on gamers for AI data center cards.
I bought a 6900 XT and have had a great time with it over the past few years
Almost everyone bashed NVIDIA for dlss😂 now they're all in line.
we moved onto bashing the pathetic 4000 series.
@@Steel0079 I have seen NVIDIA gaming revenue this quarter 😂 and compared to AMD. 💀 Clearly you won🙃
@@mikelay5360 you jumped to the conclusion really fast that I'm an AMD fanboy. I'm for whatever's best for me as a buyer. And Nvidia's plan to get me to buy a 4090 will never work lmao. My 3070 still going strong.
@@Steel0079 The pricing is probably not changing. None of us like spending a thousand on a GPU, but there is not too much at the higher end that is remotely close to prior pricing. Even AMD.
PNY is the manufacturer of almost all Quadro cards; they are of course a pretty large manufacturer. Quadro cards perhaps even have higher margins, because they are very expensive. I have two Quadro RTX 5000's in my engineering workstation.
Nice.
Oh boy, the DLSS debate. FSR is good right now; not DLSS good, but far better than this channel gives it credit for.
I’d argue this channel gives FSR too much credit. I’ve had way too many instances where DLSS performance produces a more stable image than any setting of FSR. They gotta get the shimmer sorted out.
@@ArchieBunker11 DLSS is a crutch. People should buy a higher tier card if they want higher tier performance.
@@ArchieBunker11 ikr, with DLSS you can upgrade to the latest version in any game with a simple dll swap. Fsr doesn't have this and even if it did, ut still lags years behind dlss in quality.
@@sammiller6631 its not a crutch, its increased the life of my card. People like you act like broken games just started coming out, and are blaming DLSS for it. I use it 100% of the time, even if I can play the game at high framerates, because it stops your card from running full blast, and the games look virtually the same.
@@ArchieBunker11 DLSS is a crutch. You don't understand the difference between "broken" and "bleeding edge". There's a reason why pushing the tech forward isn't nice, neat or polished.
Got a second-hand 6700 XT and will upgrade if FSR4 seems actually worth it.
Was reluctant to buy a high-end card until I remembered that the new generations will launch soon.
With the power of AI, we can hallucinate a reality where amd is competitive in gpus.
Nvidia fanboy detected
At this point I just want them to be competitive at least at mid-range prices. EU GPU prices are beyond absurd right now.
Or a reality where Nvidia doesn't gatekeep its customers from the newest tech as soon as a new generation of GPUs pops up.
Unfortunately, even with the incredible capabilities of AI, imagining a world where Nvidia isn't scamming its customers is impossible.
@GroteGlon They've never beaten the highest end, so they've always been slower.
What type of FPS counter are you guys using?
It's Q4 of 2024. The best GPU (with 16GB of VRAM, which is the minimum) is a 2020 Radeon model, the RX 6800.
How is that possible, what went wrong?!?
Got a used one for about $270 this year.
There are also others, like Palit (Taiwanese manufacturer that makes pretty good cards), Leadtek and Gainward (both popular value options in Asian markets).
I gotta say FSR 3 doesn't look as bad as people say. When people do comparisons it's hard for me to tell. Plus, AMD has a much smaller team for their Radeon division vs Nvidia.
For the 3rd question, PowerColor on AMD is amazing. For example, the 7800 XT Hellhound has the best thermals and the lowest noise at the same time. There's a "rumor" that if you go AMD you go with a specific brand, and if you go Nvidia you do the same; people say companies that make cards for both aren't good, especially on the lower-end models.
That rumor is half true
Whether or not AMD beats DLSS is irrelevant to me. Will they beat native resolution? No. Therefore this technology is useless to me. Turn off ray tracing and enjoy crisp, artifact-free native resolution instead.
You'll still have the game engine's built in TAA, which I can't really say is "artifact free." AI based upscaling actually reduces some of those. Ultimately it comes down to personal preference.
what resolution are you playing at?
At 1440p or higher, there are almost no situations where native will look better than DLSS Quality.
Native TAA at 4K looks worse than DLSS Quality 50-60% of the time. Stay living in the stone age.
Zotac is good for SFF cards. While MSI/Gigabyte/Asus would occasionally make single-fan ITX cards, Zotac has always made small dual-fan cards for pretty much every generation. I had a 2070 and have a 4070 of theirs. At 226mm and 2-slot, it's one of the most compact 4070s you can get.
Also AMD only has 12% of the GPU market, so for developers to support FSR it has to work on all cards.
Sony is the reason why FSR is finally going AI.
It is laughable how useless the Radeon department is.
Nvidia is the reason you mean. AMD is still chasing.
@Sal3600 What I mean is that Sony wants a better upscaler to replace FSR.
The result is that they forced AMD to make new hardware that is capable of AI upscaling.
Your comment is laughable.
@@FoxSky-md1ul you're welcome.
@@張彥暉-v8p no thanks
Good show.
I'm very interested in how FSR 4 and AMD 8000 series GPUs turn out, in laptops especially.
Nvidia has way better memory compression algorithms; that's why it uses less VRAM than AMD. To match an Nvidia 8GB card you need a 10GB AMD alternative, so there is a gap of about 20%, which is quite high. In the meantime, Nvidia also uses dynamic load balancing to feed the shaders more effectively at the hardware level; it's not a software trick, but it adds extra die size in Ada Lovelace.
The gap is actually around 5-10%. You are exaggerating and projecting.
Yeah that's why their 8GB cards can't play games like Hogwarts Legacy and Resident Evil 4 Remake. Better compression flat out loses against simply adding more vram. Their high end 10GB cards get bottlenecked while AMD's mid range cards sail by. Nvidia goes 8GB, AMD goes 12GB. Nvidia goes 10GB, AMD goes 16GB. You can never close that gap with compression. You're delusional.
@@singular9 I'm not exaggerating anything
@@kentaronagame7529 Nvidia simply puts less VRAM on cards to push people to buy their cards that have more VRAM. But technology-wise, there are reasons why AMD wants more physical VRAM on the card: for AMD GPUs, when a VRAM bottleneck happens, their cards suffer much worse consequences than Nvidia's. There are videos comparing the RX 570 4GB vs the GTX 1050 Ti in Horizon Zero Dawn; in that game the RX 570 is pretty much unable to load all of the game's textures, unlike the GTX 1050 Ti.
@@singular9 On average they use at least 1GB more VRAM; that isn't an exaggeration....
35:32, question about memory: if one uses AMD APUs, then going with 8000 MT/s memory on, for example, an 8700G to boost iGPU performance makes sense.
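For rough intuition on why that helps, here's a back-of-the-envelope sketch (a hypothetical illustration, not a measurement of the 8700G; the formula is just peak dual-channel DDR5 bandwidth):

```python
# Peak dual-channel DDR5 bandwidth: transfers/s x channels x 8 bytes per transfer.
def ddr5_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 1e6 * channels * 8 / 1e9

for speed in (6000, 8000):
    print(f"DDR5-{speed}: ~{ddr5_bandwidth_gbs(speed):.0f} GB/s shared between CPU and iGPU")
# DDR5-6000 -> ~96 GB/s, DDR5-8000 -> ~128 GB/s: about a third more
# bandwidth headroom, which is what a bandwidth-starved iGPU mostly wants.
```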
Anybody who thinks FSR 4 will beat DLSS right off the bat is setting themselves up for disappointment.
Not because I think AMD is incapable, but because Nvidia has years of a head start to fiddle, tinker and fidget with it. Remember when DLSS came out? It was pretty terrible.
This, coming from somebody who has a full AMD system and doesn't plan on buying Nvidia.
From everything we've seen up to this point, either AMD is incapable or incompetent lmao
I don't expect it to beat it, but almost pull even, with maybe Ray Reconstruction being the main advantage left for DLSS. Then, very soon after, Nvidia will introduce DLSS4 and pull ahead further, leaving AMD to spend the next year trying to catch up to that.
You guys have featured a PNY 4090 in your b-roll. Did you just toss it lol?
Jayz2cents just bought one. Kingpin is going with them after EVGA's demise... They're American. They were a big seller in Australia the last couple of years. I bought a PNY 4090 XLR8 OC myself! Excellent card! Paid AU$2900 total. Cheap... sort of.
Stick with native, fk DLSS and FSR.
Buy the card that matches your resolution, simple.
And use DLSS when games become demanding at your resolution. Stop being so backwards.
@@Sal3600 Or knock down a setting or two.
@@goose5462 Correct. DLSS is also knocking down a setting or two, but with less of a loss in quality.
@@Sal3600 If you know what you are doing, then no, DLSS is worse. But I understand, DLSS is for the general public that has almost zero knowledge of rendering options. Just press a button and done; it serves its purpose for the simple.
@@goose5462 I personally don't like using upscalers. I don't use TAA either. I just do not like the look.
PNY might actually be Nvidia's largest AIB - they make all the OEM-branded cards for companies like Dell, HP, and Lenovo, and are also the primary manufacturer of Quadro cards I believe.
No, since FSR 4 was designed for mobile gaming. AMD lost.
35:30, the answer is yes, and 99% of AM5 CPUs should be able to run 2067 MHz FCLK and 6200 MT/s memory, which is guaranteed to be faster than 6000. From what I've seen, 2133 MHz FCLK and 6400 MT/s memory are also pretty easy to run now, which is the fastest combination for most people if you can't run DDR5-8000 stably. I advise people to check out Buildzoid's reactions to RAM timings for examples and ideas.
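For anyone wondering how those FCLK and memory-speed pairings line up, here's a minimal sketch of the arithmetic, assuming the commonly recommended 3:2 MCLK:FCLK ratio on AM5 (actual stable clocks vary chip to chip):

```python
# DDR5's memory clock (MCLK) is half the transfer rate; a 3:2 MCLK:FCLK
# ratio then gives the Infinity Fabric clock (FCLK) as two thirds of MCLK.
def am5_clocks(mt_per_s: int) -> tuple[int, int]:
    mclk = mt_per_s // 2        # e.g. DDR5-6200 -> 3100 MHz MCLK
    fclk = round(mclk * 2 / 3)  # 3:2 ratio -> ~2067 MHz FCLK
    return mclk, fclk

for speed in (6000, 6200, 6400):
    mclk, fclk = am5_clocks(speed)
    print(f"DDR5-{speed}: MCLK {mclk} MHz, FCLK ~{fclk} MHz")
# Reproduces the 6200/2067 and 6400/2133 pairings mentioned above.
```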
AMD needs ultra-budget GPUs and better-priced CPUs.
AM4 is still strong though; someone jumping into PC gaming right now would be satisfied with a 5700X3D.
Their CPUs are already very well priced; the X3D ones are just in high demand.
The 5700X3D is priced close to a 7600, and AM4 users can probably find a 5800X3D for that price on the used market. Even a 5700X is priced better. (Prices in India, Amazon etc.)
We're getting a bit greedy if US$160 or $180 is too much for a 7500f or 7600
@@MisterFoxton PPP would like to know your location.
Went with the Zotac 4070 two-fan version and I'm happy with how quiet and cool the card is. I went with that model just because a good sale made it a lot cheaper than the 7800 XT.
AMD needs to be a better "software company".
They have the hardware but no good software pipeline for third-party software to properly take advantage of it.
For example AMD still doesn't have a QuickSync equivalent.
In fact, even though my ROG Ally's 7840U APU supports hardware-accelerated AV1 encoding, the driver software doesn't even include a ShadowPlay-like screen recorder. I have to use OBS (see the sketch after this comment for one workaround).
AMD's NPU usage is also limited even though the Zen 5 APUs have the fastest NPUs among consumer laptop CPUs.
Let's not even talk about there being no equivalent to RTX Broadcast, which includes the excellent RTX Voice. There is no alternative to RTX Video HDR or RTX HDR, which are killer features.
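On the AV1 point above: a minimal sketch of one possible workaround, driving ffmpeg's desktop capture into the APU's AV1 encoder from a script. This is a hypothetical example assuming a Windows machine and an ffmpeg build with AMF support; gdigrab and av1_amf are real ffmpeg components, but the frame rate, duration, and file name are placeholders:

```python
import subprocess

# Capture the Windows desktop with ffmpeg's gdigrab device and encode it
# with AMD's hardware AV1 encoder (av1_amf) for 30 seconds.
subprocess.run(
    [
        "ffmpeg",
        "-f", "gdigrab", "-framerate", "60", "-i", "desktop",  # screen capture input
        "-c:v", "av1_amf",  # AMD AMF AV1 hardware encoder
        "-t", "30",         # stop after 30 seconds
        "capture.mkv",
    ],
    check=True,
)
```

Not a ShadowPlay replacement by any means, but it shows the hardware itself is usable even where the driver suite doesn't expose it.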
The RTX suite is underrated. Obviously DLSS and Remix are well regarded, for good reason. But Broadcast/Video/HDR don't get nearly enough love, especially with how common it is for games to have no HDR option.
1. QuickSync is hardware based... so how does that relate to your point about software?
2. The Radeon suite does in fact have a screen recorder... what are you on about? I have been using it since the RX 480 in 2016.
3. NPU usage is limited because other companies aren't even bothering with AI? How is that on AMD?
4. AMD does have an RTX Voice equivalent called AMD Noise Suppression in the AMD software suite. Are you insane?
So far you have literally just made shit up because you have never used an AMD product, and your Ally is completely dependent on ASUS to optimize the software lmfao.
Your argument has zero ground to stand on.
@@singular9 He meant a good one, though. AMD's voice suppression is absolutely ass compared to Broadcast. Also, don't stop there: where's their RTX Video or HDR competitor? How about a tool like Remix?
I had an AMD GPU prior to my current card. His argument has quality as ground to stand on.
@@singular9 AFAIK, AMD doesn't support the screen recorder, aka ReLive, on APUs. At least not on the desktop APUs I've tried, but I bet it's probably the same case with laptop/mobile APUs.
The last time I tried Noise Suppression (around March, I think, with a 7900 XTX), it was basically useless; it made my voice sound like a robot speaking underwater. And the problem definitely wasn't on my side: my hardware wasn't faulty and I tried every troubleshooting method. RTX Voice had also worked like a charm for me before, but I couldn't get Noise Suppression to work properly.
At that time they had just released the FSR Video Upscaling stuff, which I was pretty excited about (because I loved RTX VSR too), only to realize that it wasn't even working. It was there in the Radeon Software, but I couldn't turn it on, because...? I wasn't the only one with that issue, that's for sure; I've seen quite a few comments about it on AMD's support forum. After ~1 month, it still wasn't fixed.
Also, I used to encode videos using Handbrake, so I tried VCE instead of NVENC. That was quite a disappointment: the end product was larger in size but also had much worse quality (see the sketch below for the kind of comparison I mean). So my journey with the 7900 XTX ended pretty quickly.
I really want to like AMD again; my all-time favorite cards are still the HD 7970 and R9 290X, but there's a lot to improve, especially if they want to compete at the higher end (at least for my use case and preferences, though I'm probably not alone in this).
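The kind of side-by-side encode described above could be scripted roughly like this. A hypothetical sketch: nvenc_h265 and vce_h265 are HandBrake's actual encoder names, but the file names and quality value are placeholders, and each run of course needs the corresponding GPU installed:

```python
import subprocess

# Transcode the same source with Nvidia's and AMD's hardware HEVC encoders
# at the same constant-quality target, then compare file size and quality.
for encoder, out_file in [("nvenc_h265", "out_nvenc.mp4"),
                          ("vce_h265", "out_vce.mp4")]:
    subprocess.run(
        ["HandBrakeCLI", "-i", "input.mkv", "-o", out_file,
         "-e", encoder, "-q", "28"],  # -q sets the constant-quality level
        check=True,
    )
```

Holding the quality setting constant is what makes the resulting file sizes (and any visual differences) a fair comparison between the two encoders.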
@@singular9
1. QuickSync is just branding for Intel's media engine, which actually works really well with Premiere Pro and Resolve. AMD also has VCE, or whatever it's called, for media processing, but it's poorly supported in NLEs. Also, its H.264 encoding is still bad.
2. No, that screen recorder is not available in the APU drivers.
3. Fair enough. But Intel's weaker NPU yields better results due to OpenVINO. I'm not an expert here.
4. Insert meme. AMD's Nvidia-alternative features exist in name only.