[SPON: Use "brokensilicon" at CDKeyOffer to get Win 11 Pro for $23: www.cdkeyoffer.com/cko/Moore11 ] [SPON: Get 10% off Tasty Vite Ramen with code BROKENSILICON: bit.ly/3wKx6v1 ] [SPON: Tell COSWHEEL MLID sent you when you check out the CT20: www.coswheelebike-eu.com/allot/transfer/1000226?redirectPath=%2Fproducts%2Fct20-ebike ]
Tom, what node is the PS5 Pro on? Your earlier leaks said N4P, but I haven't seen the node mentioned specifically in recent videos. Is that still accurate?
it all happened in a nearby grocery store, his father wanted to cut in line but he was unfortunate enough to encounter Lisa Su, who did not let him cut in and mentally traumatized him with the "wee wee ass haircut" quote, which hit him so hard that his entire lineage now boils in anger at the mere thought of AMD
@@waynejr.6763 True, he is young... but I mostly see him adhering to click-bait YouTube practices. Hope he becomes more insightful and meaningful in terms of content as well.
Tim's been on before: th-cam.com/video/ycXkvVfc2yw/w-d-xo.html th-cam.com/video/0DleBSV_-2c/w-d-xo.html th-cam.com/video/LyLg-r5ZTWc/w-d-xo.html th-cam.com/video/4LVuX6FvDXg/w-d-xo.html th-cam.com/video/fLslDE27XxY/w-d-xo.html So has Steve: th-cam.com/video/lO22Ny_41nc/w-d-xo.html
I've been enjoying your content quite a bit. I think I've been overly critical of your videos in the past and I wanted to say that I find your content to be rather interesting and informative, and despite some of my skepticism and concerns about bias, it has in large part proven to be accurate. Thanks for the hard work and best of luck going forward. I'll be watching... and probably commenting, for better or worse.
I've been thinking AMD's GPU decisions have some or all of these in mind: 1) R&D budget - right now NVIDIA has a massive budget, so try to keep up even if that means being a generation or so late. One day a window may open up, like what happened during Intel's 10nm issues. (Note on ray tracing: word is that RDNA4 and RDNA5 are being designed to make big moves. I'm hoping this is true; the PS5 Pro leak is starting to reveal this is coming to fruition.) 2) In the meantime, focus on the special dies for consoles and their own CPUs. Continue to take advantage of these cores in Ryzen to gain more of a CPU edge. 3) Accept the momentary flux of low R&D budgeting and being behind feature-wise, and take wins where they can be had now. 4) While playing keep-up, start to push new features that would destabilize NVIDIA's edge. Maybe right now isn't the time, but at least get ready for it. 5) Keep up FSR development even if it is third out of the three. Work with Microsoft and really push optimization within the OS. Push specialized core development to take advantage of these features (both DX and FSR), which should greatly close gaps or even surpass NVIDIA at some point. Regarding PSSR vs FSR - Sony continuously produces elite scaling hardware/software solutions in their TVs. They are the premium brand. I'm not at all surprised they were able to produce such a quality upscaler on AMD hardware. AMD is still pretty new to this; they will get there.
Ultimately only one thing matters: chip die size. They have to pack what they want into a ~300mm² die, and smaller is better. The larger the die, the higher the probability of a defect, and the higher the cost.
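The die-size/defect/cost tradeoff in that comment is often described with a simple Poisson yield model. A minimal sketch of that idea; the defect density here is an assumed illustrative number, not a real foundry figure:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: the fraction of dice expected to have
    zero defects, given a uniform random defect density."""
    die_area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Illustrative defect density (assumed, not a quoted TSMC number)
d0 = 0.1  # defects per cm^2
small = poisson_yield(200, d0)  # ~0.82 -> ~82% of 200mm² dice are clean
large = poisson_yield(400, d0)  # ~0.67 -> doubling area drops yield fast
```

Since yield falls exponentially with area, a die that is twice as large loses more than twice the revenue per wafer, which is exactly why "smaller is better."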
Traded my 3070 for a 6700 XT + $100 (I got the $100 along with the GPU) when I started noticing VRAM being an issue over a year ago. Very glad I did now; even at 1080p I use 8-10GB in many games, and rarely need more GPU grunt. NVIDIA is ridiculous for still selling mid-to-high-end cards with 8GB.
The 3070 Ti is easily fast enough to do 4K with DLSS Quality, but there's no way it has the VRAM for that. The cost of VRAM is so low... we all know that the 8GB frame buffer exists to upsell you to the 3080, or to make you upgrade soon because of the lack of VRAM. Crazy how shameless Nvidia is with these tactics. And people still buy them! I am for AMD simply because I cannot bring myself to give my hard-earned cash to a company as deceitful and anti-consumer as Nvidia. I've been running Alan Wake 2, hopping back and forth with path tracing on vs off, and... I don't know if it is just a poor implementation, but I don't see much of a difference, and certainly not enough to make me support that company.
AMD gpus stutter non stop in Alan Wake 2, crash constantly in Helldivers 2, are unstable af in Lords of the Fallen, and aren't even able to launch some of the Fallout games. I would NEVER buy a gpu that suffers so many issues.
@@jjlw2378 Fortunately for me... I have never had an issue with an AMD GPU, so I get to pay far less for the same experience and not have texture loading issues due to lack of VRAM. I am happy I went with a new $420 6950 XT. You get 8GB from Nvidia for that price, lmao. Enjoy.
@@jjlw2378 Where are you getting this information? I have seen demos with frame time graphs in all of those games using AMD GPUs, all running well. My close friend runs a 7900 XT for Alan Wake 2 and it runs like butter. Who has the better 1% lows is a per-game toss-up between Nvidia and AMD. Are you attempting to run these new games on an RX 480?
@@jjlw2378 Helldivers is well known for crashing on just about every system but my 7900XT hasn't crashed since last month's driver update, and I've played every fallout since 3 on either that or my old 5700XT just fine aside from the universal issues in 3 and NV, which are also famous for crashing on all modern systems. Haven't played the other two, but you don't seem like a very good source here.
Regarding VRAM longevity: memory usage is climbing at a fast rate compared to before, when RT wasn't a thing and games were tailored to utilize a low VRAM buffer (particularly the ~4-6GB usable on PS4 as a reference). Now everything is brute-forced instead of using the smoke-and-mirrors rasterization techniques of the previous gen. An 8GB VRAM size today will have a hard time keeping up with the new technologies and demanding techniques alongside large texture sizes. 12GB should be the minimum for now, and 16GB the minimum right before the PS6/Xbox A-Z Series 2 releases.
@@TheGuruStud The competence of developers has massively decreased as games have become more popular and game studios got flooded with university "game development" graduates. It's almost comical if you go explore the game files. Things that were industry standards for 30+ years have been simply forgotten, and devs just try to brute force instead of optimize. There's still some game studios that know what they're doing, like ID software.
I open my phone, see I have a notification from MLID, click on it, see that HUB is back, like the video... wait, that was an hour ago; the podcast is halfway over.
Mentioning FXAA... not all of it was made equal. I used Radeon Pro with my old HD6870 back in the day and that had a variety of quality settings for FXAA. The higher settings didn't blur the image at all and were still more performant than MSAA while having better coverage of things like transparencies. I think it got a bad name because the implementations of it were poor or low quality.
FXAA is post processing AA. It's pretty much the same as SMAA (not to be confused with MSAA) and the reason why it sucks isn't because of blurriness or ghosting like in TAA but the fact that because it's post processing, it can only AA certain parts of the image to a limited degree. It works ok on forward rendering games but sucks badly on deferred rendering games, much like SMAA, and MSAA won't even run unless you implement it for EVERY single shader separately, hence why TAA exists.
It got a bad name because it is not MSAA or SSAA, simple as that. Same as why some people won't accept any form of upscaling, no matter how good it is, because it is not native rendering.
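For readers unfamiliar with what "post-processing AA" means in the comments above: it operates on the finished frame, finding high-contrast edges in the final pixels and blending them. Here is a deliberately simplified, FXAA-inspired sketch (real FXAA does directional edge searches and sub-pixel blending; the function name and threshold here are made up purely for illustration):

```python
import numpy as np

def minimal_postprocess_aa(img, threshold=0.1):
    """Toy post-process AA: detect high-contrast luma edges in the
    finished frame and blend those pixels toward their neighborhood
    mean. `img` is an HxWx3 float array in [0, 1]."""
    # Rec. 709 luma approximation
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    out = img.copy()
    h, w = luma.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = luma[y - 1:y + 2, x - 1:x + 2]
            if neigh.max() - neigh.min() > threshold:
                # Edge found in the final image: soften it
                out[y, x] = 0.5 * img[y, x] + \
                    0.5 * img[y - 1:y + 2, x - 1:x + 2].mean(axis=(0, 1))
    return out
```

Because it only ever sees the final pixels, it can soften edges it can detect but cannot recover detail the renderer never produced, which is the limitation the comment above describes for deferred-rendering games.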
AM4 owner here: yes, platform longevity matters to me... and give me more PCIe 5.0 lanes for my GPU/high-end SSD, no lane-splitting BS. Show me the value!
Looking at the PS5 Pro spec, it is clear the selling point this time around is ray tracing and upscaling image stability. It makes sense if you are trying to give the most bang for the buck. In every scenario, whether viewing the game on a small screen, a large screen, or from different distances, the player can see the ray tracing benefit every time, whereas higher resolution or higher frame rate is only noticeable in certain circumstances and certain types of game. You can see the quality of ray tracing on a tiny screen from across the room, or on a large screen covering a whole wall, in every type of game. The other point is pixel stability. When a game is upscaled, you get temporal sub-pixel shimmering. Again, that can be noticed on a large screen at any distance, because the effect gets amplified by neighbouring pixels to form a shimmering group. The focus on a good AI upscaling solution that can stabilise pixel shimmering with anti-aliasing is another big bang for the buck. Sony seem to be focusing on these two issues as their selling point, because you can see the effect all the time, rather than just occasionally under certain conditions. Even a novice can spot the difference; you don't need a trained eye. Time will tell if they have been successful, but Sony have a good track record of punching above their weight in console design thanks to Mark Cerny.
19:55 people are forgetting games are doing 100x more work per-pixel than older games, most ps4/xbox one games had very limited dynamic lighting and most of it was baked, nowadays almost all lighting is dynamic either with lumen or raytracing, with PBR materials bordering offline rendering models.
Fake news. Look at PC. 10 yr old games are vastly superior in IQ including lighting, reflections, etc. Devs are 100% coasting due to hardware power aka lazy. All of these new PS5 games look like garbage. It's no better than PS4 with a res bump and AA.
Is that true though? That almost all lighting is RT? "Almost all lighting" means actual global illumination. Even in nvidia sponsored titles this is certainly not the case.
It's hard to forget something that's simply not true. Ps4 era is RDR2, The Witcher 3, Battlefront, Cyberpunk 2077. ALL of these games have dynamic lighting, even on the PS4. They were just much better optimized for the hardware they had to run on.
2:35:52 Glad to see Tim bringing up self-emitting QDs and microLEDs, because I would definitely consider those the two technologies that have been publicly announced that have even a hope of dethroning OLED.
Always a pleasure to listen to Tim. I wish there had been some discussion of ULMB 2; after using ULMB on my Asus PG279Q monitor I can never go back, it's a must-have feature for me. I look forward to the advancements of this technology.
Well, the 14th gen RPL-R was essentially a rebranded 13th gen RPL, so no it can't really be considered an "Additional gen" for the platform. It's just 13th, again.
As for the weight of gaming laptops: show me a single person who regularly carries their desktop to a cafe or onto a plane and uses it there. People like you always talk about getting a desktop instead of a laptop, and yet none of those people carry their desktop around. It's almost like people need a laptop for portability and you can't replace it with a desktop or something. But I guess it's just me. As for price: if someone really needs a laptop for office/school work but also wants to game, then in addition to a desktop they would also need to buy an office laptop. Good luck getting one with a display just as good, battery life just as good (for many gaming laptops it ranges from 7 to 11 hours these days), a keyboard just as good, upgradeable RAM and storage, and that isn't super sluggish in basic tasks like browsing. Yeah right, "cheaper". I'm not talking about absurdly priced gaming laptops at 4-5 thousand dollars; I'm talking about reasonably priced ones costing 1200-1600 dollars. Using one as a desktop plus an office laptop (oh, and they support Type-C charging these days) beats a desktop + office laptop in pure value.
Honestly, what interests me about Strix Halo is we're talking about something that can deliver 1660 Ti+ performance in a thin-and-light. I don't do heavy gaming on my laptop, but being able to play older and AA games when I'm on the road is very nice in something I can also use as an ultrabook.
Always love to see more Tim or Steve from HUB. They both straddle the line between deep knowledge of the industry and tech and a customer-centered mentality that focuses on value. Still kinda shocked that you get guests of this caliber, but I guess next you'll pull in Steve from Gamers Nexus or something...
Laptop 1070 was 5-10% behind desktop 1070, laptop 4070 is about 50% behind the desktop 4070. Meanwhile Tim: "laptops always lag behind desktops by roughly the same amount" Deep knowledge, right.
2:30:01 You have to sit at about 1.03m to tell the difference between 1080p and 4K on a 55" TV; that's why they only make them in 32", it's pointless to go any lower apart from pixel density. Most are made by LG or Hisense using patents from Sharp Display and Samsung, with a few CSOT panels as well, which is TCL and Samsung combined.
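That ~1m figure comes from basic angular-resolution math: 20/20 vision resolves roughly one arcminute, so you can compute the distance beyond which a single pixel is no longer resolvable (the exact number depends on the acuity assumption, which is why published charts differ slightly):

```python
import math

def max_resolving_distance_m(diag_inches, horiz_px, aspect=(16, 9),
                             acuity_arcmin=1.0):
    """Distance beyond which one pixel subtends less than
    `acuity_arcmin` arcminutes, i.e. extra resolution stops
    being visible to a viewer with that acuity."""
    aw, ah = aspect
    width_in = diag_inches * aw / math.hypot(aw, ah)
    pixel_pitch_m = width_in * 0.0254 / horiz_px
    return pixel_pitch_m / math.tan(math.radians(acuity_arcmin / 60.0))

# 55" 4K: ~1.1m; 55" 1080p: ~2.2m. Between those two distances a
# 20/20 viewer can tell 4K apart from 1080p on that panel size.
d_4k = max_resolving_distance_m(55, 3840)
d_1080 = max_resolving_distance_m(55, 1920)
```

Halving the panel size halves both distances, which is why 4K pays off far more readily on a 32" monitor at desk distance than on a TV across the room.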
On the TAA side of things: A lot of modern day rendering techniques take a lot of work to be compatible with things like MSAA, and that's the reason why it's so expensive to run in modern games. It's basically not worth the dev time to even implement anymore AFAIK.
I think something that gets missed in the upscaling debate is that I feel it's MORE important on low-end cards than high-end cards. You can't look at it from an elitist point of view where you're unforgiving of less-than-perfect image quality; if you're shopping for a $300 GPU you can't afford to be a pixel-peeping snob. I have GPUs all the way from the high end to "can barely play it" levels through my family. Sure, if you have a 4090 you're going to think upscaling is useless at 1080p, but for the millions who still own older 60- and 70-class cards, upscaling is a _necessity_ if you want to play any modern game, and you're going to be FAR more forgiving of image quality if your only option is to put up with sub-30 fps or not play at all. DLSS is great to push my high-end system to 165fps, but it's not the difference between _playable_ or not, whereas FSR is a godsend for our 970, 5700 XT, 1660 Super, 1070 and 1080 Ti. "Just buy a more powerful GPU" simply isn't an option for many people, and 1080p upscaling is _VERY_ relevant to those people. When I have to try to keep 5-6 PCs updated, it's pretty obvious why I care about extending the life of my hardware as far as possible. When you're getting to the point where games dip below 30-40fps, you get REALLY tolerant of less-than-perfect image quality, and FSR is still more than usable at 1080p. I just wish it looked as good as DLSS at 1080p, and it would absolutely affect my buying decision if I were shopping for a new 60-class card.
Love Tim's insights! Two comments. On potential future upgrades for framegen: Intel is still working on frame extrapolation, which would get rid of the latency penalty. If that pans out it might be a great argument for Battlemage. On the future of OLED monitors: I'm highly hopeful for tech like G-Sync Pulsar, or anything else that improves motion clarity. You need something like 360Hz (and fps!) on even IPS and OLED monitors (not to mention VA) to get motion clarity that matches 100Hz on a CRT. Motion clarity matters so much for the perception of smoothness and quality, yet we keep spending more and more hardware performance to reach 240+ and other absurd framerates when the monitor itself is where the issue could (partly) be solved at the source. On that note, I wonder if we'll get better versions of black frame insertion or similar tech in the future as well.
I think what Tim doesn't understand about this discussion is that supersampling isn't the only alternative to TAA; there are a ton of different low-cost AA techniques, and while they have their own flaws, so does TAA. None of them are perfect, so until we get something that is, gamers should be allowed to pick their poison. Anti-aliasing is a balance between effective anti-aliasing, low perf cost, and clarity: you can only have two at a time. TAA has also caused a massive reduction in image quality; if someone feels differently, I'd assume it's because they're either less sensitive to the cons of TAA or they're gaming at 4K. TAA causes a lot of motion blurring, and my motion-sensitive eyes notice it so much I get eye fatigue playing modern games now. This is an accessibility issue just as much as it is about enhancing graphics. TAA is also a catch-22. Games without TAA aren't that aliased, while games that ship with TAA tend to be extremely aliased once it's turned off. Why? Because TAA lets developers be lazy and make their games as aliased as possible, since TAA will just clean it up at the end; so we get aggressive TAA, smudgy graphics, and other techniques become ineffective. If a game is built around TAA, TAA will look better most of the time. That does not mean it is better, though. If you built your game around not using it, or treated it as an additive and nothing else, it would look better, because then you'd have your cake and eat it too (clarity and low aliasing at the same time).
17:51: I would like to see supersampling added back along with TAA. The problem with doing supersampling via the driver route is that at least on the Nvidia side, DSR factors are blocked on monitors using DSC. Supersampling can still be quite relevant for 1440p monitors and it is quite annoying that it is not available for some high refresh / ultrawide 1440p monitors.
Your guest mentioned that prices of old low-end GPUs were cheaper, the main one being the 1060 6GB, but I did a little research looking back over the last three generations from Nvidia, and prices were the same as or more than the 4060's $299, and this is before adjusting for inflation. From what I can gather, the 1060's MSRP was $299, the 2060's $350, and the 3060's $329. Correct me if I'm wrong.
To be fair, the 4060 is about 5-10% slower than the 3060 Ti, uses almost 100 watts less power, and cost almost 25% less at launch MSRP. Prices were at least 10% higher across the economy when the 4060 released, and yet it was still 25% cheaper, 35% if you take that into account. I'm not a fan of Nvidia and own a 6800 XT along with a 4060, but you're just whining about something that's not even true while accusing someone based on the victim mentality your story's based on. And the 4060 has frame generation, which is actually not that bad and is going to improve over the years for another 30-40% performance boost. @@IDKOKIDK
@@mikehawk6918 The problem is that AD107 is built on a $17,000 5nm wafer. That's why AMD wasn't willing to use 5nm for the 7600 XT; instead they use 6nm, which is a variant of 7nm.
@2:06:10 Tim is correct that inflation accounts for low and mid-range price increases. However, it also sounds like he's saying this is true for the high end; others have said this for sure, and it's patently untrue. The 4080 should have launched at ~$730 (call it $750), not $1200. Instead of 21.9% inflation, they went up 100%; they doubled the price. High-end prices have become utterly absurd. Trouble is, we're stuck buying them if we want good picture quality and playability. Let's also not forget that Nuhvidia has shifted the tiers and their respective performance levels/dies.
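The arithmetic behind these inflation comparisons is easy to check. Taking the 21.9% cumulative-inflation figure from the comment above and the 3080's $699 MSRP as a baseline (both are assumptions for illustration, not an official CPI calculation, and commenters disagree on which older card is the right baseline):

```python
def inflation_adjusted(msrp, cumulative_inflation):
    """Scale a launch MSRP by cumulative inflation since launch."""
    return msrp * (1.0 + cumulative_inflation)

# A 3080 launched at $699; with ~21.9% cumulative inflation an
# inflation-neutral successor would be ~$852, versus the 4080's $1200.
adjusted = inflation_adjusted(699, 0.219)  # ~852.08
markup = 1200 / adjusted - 1.0             # ~0.41, i.e. ~41% real increase
```

Whatever baseline you pick, the gap between an inflation-adjusted price and the actual $1200 launch price is what the comment calls the real increase.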
Is there some kind of rule on this podcast where you can't ask Tom questions? Because there were openings for some insightful questions during this that just fell flat :( I really love the content btw; my criticism is minor and comes from a place of love ❤
It's always about planned obsolescence. The 3070 with 12GB, or especially 16GB, would be a nightmare for Jensen, because he wants every single person to buy a new card every year and a half to two years, at the $500 to $600 level minimum... In other words, he wants a minimum investment from individuals of $250 to $300 per year, adjusted for inflation, and not adjusted for either economies of scale or the death of Moore's Law.
I know the discussion is more fidelity-focused, but I would like to see more focus on making 60 fps the standard while keeping fidelity on par. Smoothness and low input lag paired with good upscaling seems like a decent way to go.
I have always preferred no AA over any post-processing AA. We're now at a point where TAA is making games lose depth and look like 720p all because developers don't want to fix lighting glitches. Please stop forcing TAA on games. Bring back MSAA. It doesn't matter if most people will only use 2x or off, it's an option we still want.
@@IDKOKIDK I compared normal internal res scaling vs DLSS in LotF at 1080p to 4K and found there isn't really a noticeable difference. DLSS and other AI upscalers are overrated because they came out at a time when games changed the output resolution (TV upscaling) rather than the internal resolution. What games really need is just a good internal scaling option, as opposed to games like Cyberpunk 2077, which forces you to choose between output resolution or DLSS, obviously making DLSS look better. Surely this was a marketing ploy for DLSS, considering most new games in 2020 already used internal scaling when changing resolution. There really isn't a better explanation for the biggest AAA game of 2020 not having what was already a standard feature at the time. At least games are catching on now, but we could've had good upscaling fifteen years ago.
Regarding image quality, one thing I noticed that nobody has ever mentioned: I use DLDSR 2.25x from native 4K on my TUF 4080 Super, giving a 6K resolution. At this level I can use DLSS Ultra Performance, turn off anti-aliasing, and still not notice any jaggy or shimmering edges on my 32" 4K monitor. The 6K foundation resolution used for DLSS is so high and detailed that it completely eliminates the need for additional anti-aliasing (FXAA/SMAA etc.), freeing up GPU resources for other settings. This is what I noticed anyway in games like Outcast: A New Beginning, Avatar: Frontiers of Pandora, and Jedi: Survivor. I can run DLSS Performance for higher FPS and still have sharper graphics with DLDSR 2.25x. It's just a win-win; the only loss is my wallet, RIP.
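For anyone trying to reproduce that setup: DSR/DLDSR factors multiply pixel *area*, so 2.25x means 1.5x per axis, which is how a 4K panel becomes the "6K" render target the comment mentions. A quick sketch of that arithmetic:

```python
import math

def dsr_resolution(base_w, base_h, area_factor):
    """DSR/DLDSR factors scale pixel area, so each axis is
    multiplied by the square root of the factor."""
    scale = math.sqrt(area_factor)
    return round(base_w * scale), round(base_h * scale)

w, h = dsr_resolution(3840, 2160, 2.25)  # (5760, 3240), marketed as "6K"
```

The same math explains why 4.0x DSR on a 1080p display renders at exactly 4K (2x per axis), the only factor with a perfect integer pixel mapping.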
@@Navi_xoo Well, obviously you have no clue what you're talking about. I already tried 4K native with DLSS; the image doesn't look as crisp as 6K DLDSR. Once you've seen 6K, going back to 4K is like returning to 1080p.
Multisampling has an even higher performance hit these days, as we tend to have much smaller (and more numerous) triangles. Good MSAA only does additional sampling on triangle edges, but if your triangles are tiny, that means you're doing a lot more sampling across the entire display. And on PC GPUs you also have a much higher frame buffer size and bandwidth hit.
29:30 The upscaling technique used in Fortnite UE5 is better than DLSS: 1440p upscaled to 4K (Nanite and Lumen enabled). 44:30 Other than CPU/SSD optimization, the "easy wins" for PC gaming are Dolby Vision and 32-bit 384kHz audio mixing (7.1 surround and simulated 3D headphone audio).
50:00 I still think AMD intended the 7900 XTX to compete with the 4090 and originally planned to price it at $1199. When it fell short of performance expectations they cut the price to $999, and then didn't think to cut the price of the 7900 XT, which was originally meant to compete with the 4080 at $899.
Hey can someone explain to me what 4k native games are? For example on the PS5, people have the tendency to say the games are not native but always upscaled. I find that hard to believe.
@@arenzricodexd4409 Didn't that just become TrueAudio Next? The solution that uses 25% of the GPU cores to process sound instead of a dedicated processor for Audio.
@@kingkrrrraaaaaaaaaaaaaaaaa4527 Originally it was a feature on Hawaii; they even had dedicated hardware inside the GPU for it. TrueAudio Next no longer has that dedicated hardware (from Polaris onwards). Technically even AMD's latest cards can do TrueAudio, but the feature was only ever used in one game, Thief 4, and then we never heard about it again.
Could AMD add a 32GB SSD caching system to their GPUs for Unreal Engine stuttering issues? I don't know, it probably wouldn't work; just spitballing, I'm a noob.
I agree with Tim, that AMD could make some sort of geometry/texture feature. Where they have some hardware accelerated Nanite competitor. They could even be like "Sure nvidia can use Nanite, but our feature is 4 times as fast and uses half the energy." If nvidia goes for lighting with RTX, AMD can go hard on geometry.
It feels like a lot of people working at AMD aren't qualified for their jobs if they're struggling this hard to keep a brand consistent. Automobile manufacturers don't struggle as much as AMD does with having a consistent product generation and lineup.
I know it's way too early to hear anything, but what about laptops? Will there be an RTX 5090 laptop version? What would the specs be? I know it will be cut down for power management, but cut down by how much?
Most likely they'll do the same as this gen: call a desktop 5080 a "5090" on laptops, and nobody will complain about it because "laptops are always cut down". Those people forget the 1000 and 2000 series existed, where laptop and desktop parts were both named and spec'd the same.
I managed to buy a great Dell Precision 7760 with an i9, 64GB of RAM, and a Quadro A5000 with 16GB of VRAM for £1500 UK inc. tax - that's around $1900. While predominantly for work, it games surprisingly well, with some games at max settings at 1440p.
@@christophermullins7163 It's a mobile workstation so not as powerful as that - more like RTX3080 mobile, so still reasonable for most games at 1080P and 1440P.
Listening to Tim talk about how Radeon is more focused on short-term gain rather than a long-term strategy made me wonder if this could actually be an AMD issue, not a Radeon one. There is only so much TSMC supply available across CPUs, consoles, datacenter chips, and GPUs, so what if they need to reach the highest possible margins to even justify allocating silicon resources to their products?
Talking about AA: some games, like Gears of War 5, have stunningly good AA. It's built into the engine so we can't mess around with it, but it looks stunningly good. The visual clarity...
@@dante19890 So it's the best implementation of it I have seen so far. In other games it usually brings blur to the textures, as even RE4 Remake does.
Why did you invite Tim? To trash gaming laptops again publicly? "Beefy big high performance laptop"... how is 2.5kg a chunky laptop? Over 10 years ago, my 2.5kg laptop pushed out just 40 watts of heat. My new Legion pushes out 4.5 times the heat at the same weight and same noise levels (but usually MUCH quieter), while having a smaller case and a larger display, more ports, and so on. 6:53 As for "they always seem to lag behind by roughly the same amount", OK, let's see a few comparisons: -the desktop 970 was about 50-60% faster than the laptop 970M, then the desktop 1070 was about 5-10% faster than the laptop 1070 (yes, Pascal was super close and Tim never talks about it), -then, according to Notebookcheck comparisons (Tim used a stock laptop vs an overclocked desktop version), the 2070 was about 13% faster than the laptop version, -in Ampere, Nvidia continued to cut them down, so the difference was 30%, -and now, since the 4070 mobile is severely cut down, the desktop version is 50% faster. Roughly the same difference, huh, Tim? I guess 5-10% and 50% are roughly the same according to this guy. I highly suggest Tim do his research next time before talking like that publicly.
Tim has a good point about laptops. I have a G14 because it's lightweight with good battery life and has enough power for some 1080p gaming or Unreal development, but if I want a 4K max-spec machine, I have my desktop for that. I'm looking forward to Strix Halo: a 1.5-1.7kg machine with great battery life that can scale up and down with only one GPU is exciting, although I hope they start at 32GB of RAM, as any less would be pointless since you have to share the RAM between the CPU and GPU.
I do wonder how much the delay of an AI FSR has to do with also wanting to use the NPUs on the APUs (probably not on the desktop CPUs) to accelerate it as well. Another thing might be wanting a way to incorporate it into their HYPR-RX one-button software solution, which might be able to do what you guys were talking about with NVIDIA if they get all the pieces to work together right. Unfortunately, Strix Halo is probably going to be out late enough that I would probably wait to see what a hypothetical Zen 6/RDNA5 successor would look like, especially since it might be out in time to be compatible with LPDDR6(X).
I run an absolutely top tier am5 system with a 7900xtx and I can't complain about anything! I get around 1000 fps on every game I run. 2000+ fps in VR. I don't know what else is needed.
For the anti-aliasing discussion: as someone who was always infuriated by jaggies and the shimmering left by MSAA, I think that at 1440p TXAA is a very solid compromise between image quality, no shimmering, and a low perf hit. FXAA was definitely blurry, washed-out-colors garbage not worth using in most of the games where it's available, and with the hamstrung perf we are getting from cards these days, supersampling is totally out of the question if you aren't rocking a 4090, sadly...
17:41 It hasn't. The only area where MSAA is weak is transparent textures, but you can enable an adaptive MSAA mode which uses SSAA on transparent textures and MSAA everywhere else. Better than TAA across the board. It's worth pointing out that the original question you're discussing here had an error in terminology, no doubt caused by nVidia's dishonest naming of their upscaling technology. DLSS is not super sampling; it's sub-sampling. Super sampling increases image quality, while the questioner thought it reduced quality. He clearly meant DLSS, and he is, of course, correct that it reduces quality. To the extent that DLSS or FSR ever look better than "native", it is entirely due to replacing a particularly bad TAA implementation.
It is sad that the option for the highest image quality via supersampling isn't even provided. Yes, it can be done driver-side (DSR), but there it is blocked on some monitors because Nvidia apparently uses a different display head for DSC.
@@Navi_xoo I go back to 20+ year old classics sometimes these days; with today's games, am I supposed to look at them in mediocre image quality on future hardware 20 years from now?
@@Navi_xoo I've been playing games on computers for over 40 years now. On PC's for over 30. I assure you, there's zero chance I don't know what I'm talking about on an issue like this. The very notion that 4xMSAA looks worse than TAA is beyond laughable.
Assuming they keep the bus width for every tier and 3GB memory modules aren't coming, they would have to do 128-bit 16GB for the 5060, 192-bit 24GB for the 5070, 256-bit 32GB for the 5080, and 384-bit 48GB for the 5090.
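Those capacities follow directly from GDDR layout: each memory module sits on a 32-bit slice of the bus, and a clamshell board mounts two modules per slice. A sketch of that arithmetic (the clamshell assumption is mine; it's the only way 2GB modules reach the capacities in the comment above):

```python
def vram_gb(bus_width_bits, module_gb=2, clamshell=False):
    """GDDR capacity: one module per 32-bit channel slice,
    doubled when modules are mounted clamshell (both board sides)."""
    modules = bus_width_bits // 32
    return modules * module_gb * (2 if clamshell else 1)

# With 2GB modules:   single-sided      clamshell
# 128-bit          ->  8GB              16GB
# 192-bit          -> 12GB              24GB
# 256-bit          -> 16GB              32GB
# 384-bit          -> 24GB              48GB
```

This is also why 3GB modules matter so much to the speculation: they would let a 128-bit card reach 12GB without the cost of a clamshell layout.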
They did it on purpose. The thing with nvidia is if they compete evenly with AMD on VRAM they can kill AMD sales wise even if their card is a bit more expensive in general.
I'm a bit surprised that the GPU upgrade is only 45 percent; it was 100 percent going from PS4 to PS4 Pro. With many games running at 720p in performance mode, the extra GPU grunt is very much needed.
Look at the leaps from RDNA2 to RDNA3 (lack thereof). 45% is still good though considering other things like bandwidth are being increased. Goal is probably to get everything to a minimum 1080p to be upscaled with PSSR.
@@mikem2253 The PSSR upscaling tech will be the saving grace: jumping to 4K native is a massive drain on the GPU, but using FSR3/PSSR you can upscale to 4K and hit 60fps in most games. It's so smooth you really wouldn't know it's not native 4K.
Talk about how mipmapping destroys visual quality... With it, textures in the near-to-far distance are blurred (AA not necessary). Without it, everything shimmers in high-contrast pixelation (AA does very little to stop it). Creating a road shader with high visual quality requires actually doing the work to fade textures to grey and reflective at the horizon, like in the real world...
I'm surprised Tim said he needed to run The Talos Principle at x-many times supersampling. I found that I could not get Serious Sam 3 or 4 to alias, no matter what angle I looked at anything from.
I think with the 3.85GHz CPU, the PS5 Pro will be able to hit AI-upscaled Dragon's Dogma 2 & GTA 6 at 60fps. 1080p upscaled to 1440p or something like that.
I got double burned with Ampere and its miserable amounts of VRAM. My wife had a 3070 and I had a 3080, and we both ran into issues at 1440p 😡😡 fuck nVidia, we both run 7900XTXs now, no issues with VRAM there 😁
@Pikkoroo He's saying that Steve at Gamers Nexus won't come on Tom's Broken Silicon because he thinks he's too good for it. I wouldn't agree that he is, but it could be why he hasn't been a guest yet.
@@puffyips I was pointing out what the first comment was saying. He hasn't done a lot of collaborations but he has been on with PC World (Gordon) and Wendell. I enjoyed those videos, so I would be happy to see him on Broken Silicon. I won't pretend to know why he hasn't been on before. He might want to know Tom better first or he might be too busy. I'm not saying his ego is too big but I don't think he's too humble, either.
The other reason why Sony is developing PSSR is because Microsoft's console is also based on AMD hardware and also relies on FSR for upscaling. So if they come up with their own method that beats AMD's standard upscaler that's something they can push forward to make the PS5 Pro look like a more advanced product than the Xbox.
Yeah, and looks like there won't be a pro version of the Series X and they will release a new console in 2026 and stagger their releases. But without PSSR the Series X2 might not be better than the PS5 Pro. So what if it runs games at 80fps instead of 60fps for 120hz displays? It'll be a useless feature as frame-gen will do that for the PS5 Pro and it will look better. Let's also not forget that Microsoft will surely release the Series S2 at 12 tflops or less which will quash any attempt of the Series X2 to beat the PS5 Pro. Things are really, really bleak for Microsoft in the gaming industry but they have cornered themselves with Series S and Starfield running at 30fps and looking worse than Mass Effect.
[SPON: Use "brokensilicon“ at CDKeyOffer to get Win 11 Pro for $23: www.cdkeyoffer.com/cko/Moore11 ]
[SPON: Get 10% off Tasty Vite Ramen with code BROKENSILICON: bit.ly/3wKx6v1 ]
[SPON: Tell COSWHEEL MLID sent you when you check out the CT20: www.coswheelebike-eu.com/allot/transfer/1000226?redirectPath=%2Fproducts%2Fct20-ebike ]
What about Hardware Busters some month? Aris knows his shit. And it can sometimes sound like he's saying "Hardware B@stards", which is something!
Can I request the next special guest to be the founder of Userbenchmark? We need to figure out what AMD did to his parents.
That's hilarious
It all happened in a nearby grocery store. His father wanted to cut in line, but he was unfortunate enough to encounter Lisa Su, who did not let him cut in and mentally traumatized him with the "wee wee ass haircut" quote, which hit him so hard that his entire lineage now boils in anger at the mere thought of AMD.
His wife's boyfriend has an AMD pc and it drives him mad that they play fortnite together.
Underrated comment for sure! Lol iykyk
or to his ass for that matter.
Super episode, I am glad to see Tim back so soon!
He's always great. *_Monitors Unboxed_* is one of the best display channels going. I'm sure he's saved me lots of time and frustration over the years
Ancient Gameplays, Vex, and now Tim from Hardware Unboxed?
Man this podcast has been on a roll lately when it comes to guests.
Agreed 💯
Vex doesn't really have anything interesting to say or add to in a conversation regarding tech. Ancient Gameplays and Tim though are excellent guests!
@@ofon2000 Have to agree, but I pray that Vex keeps growing and produces content with more depth and insight in the future.
@@waynejr.6763 True he is young...but I mostly see him adhering to click-bait youtube practices. Hope he does become more insightful and meaningful in terms of content as well.
Tim's been on before:
th-cam.com/video/ycXkvVfc2yw/w-d-xo.html
th-cam.com/video/0DleBSV_-2c/w-d-xo.html
th-cam.com/video/LyLg-r5ZTWc/w-d-xo.html
th-cam.com/video/4LVuX6FvDXg/w-d-xo.html
th-cam.com/video/fLslDE27XxY/w-d-xo.html
So has Steve:
th-cam.com/video/lO22Ny_41nc/w-d-xo.html
Tim is really engaged in this one. Good discussion points and arguments. This is the most enjoyable Broken Silicon in a while for me. Cheers!
Yoo it's awesome to see 2 different creators share their perspectives and show how close the community really is at large.
Tim and Steve will always be the best at benchmarks; it's always cool to see them on podcasts and other shows.
Always happy to see Tim or Steve swing by. HU is a staple for me.
I've been enjoying your content quite a bit. I think I've been overly critical of your videos in the past, and I wanted to say that I find your content to be rather interesting and informative, and despite some of my skepticism and concerns about bias, it has in large part proven to be accurate. Thanks for the hard work and best of luck going forward. I'll be watching... and probably commenting, for better or worse.
Glad to see you've escaped from the Reddit hivemind! They all just bash Tom nonstop for the dumbest reasons.
I've been thinking AMDs GPU decisions have some or all these in mind:
1) R&D budget - Right now, NVIDIA has a massive budget, so try to keep up even if that means a generation or so late. One day, a window may open up like with what happened during Intel's 10nm issues.
- A note on ray tracing: word is that RDNA4 and RDNA5 are being designed to make big moves. I'm hoping this is true; the PS5 Pro leak is starting to reveal this coming to fruition.
2) In the meantime, focus on the special dies for consoles and their own CPUs. Continue to take advantage of these cores in Ryzen to gain more of a CPU edge.
3) Accept the momentary flux of low R&D budgeting, being behind feature-wise and take wins where they can be done now.
4) While playing keep-up, start to push new features that would destabilize NVIDIA's edge. Maybe right now isn't the time, but at least get ready for it.
5) Keep up the FSR development even if it is third out of the three. Work with Microsoft and really push optimization within the OS. Push specialized core development to take advantage of these features (both DX and FSR) which should greatly close gaps or even surpass NVIDIA at some point.
Regarding PSSR vs FSR - Sony continuously produces elite scaling hardware/software solutions in their TVs.
They are the premium brand. I'm not at all surprised they were able to produce such a quality upscaler on AMD hardware. AMD is still pretty new to this; they will get there.
I just want good competition. I will only buy nvidia anyway
Ultimately only one thing matters: chip die size. They have to pack what they want into a 300mm²-ish die, and smaller is better. The larger it is, the higher the probability of a defect and the higher the cost.
Didn't pore through everything you said, but I saw enough to see you've got a great analysis of the realities.
@@nathsabari97 Contradiction. You're voting to hurt yourself.
@@mimimimeow Once they can make GPU chiplets, this would be less of an issue.
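The die-size/defect relationship raised above is commonly approximated with a Poisson yield model, Y = exp(-A·D0). A minimal sketch; the 0.1 defects/cm² defect density is an illustrative assumption, not a real foundry figure:

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: fraction of defect-free dies, Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

# A die twice as large loses disproportionately more yield:
y_300 = poisson_yield(300, 0.1)   # ~0.74
y_600 = poisson_yield(600, 0.1)   # ~0.55
```

This is why a 300mm²-ish target matters: doubling the area here drops yield from roughly 74% to 55% of dies, on top of fitting fewer candidates per wafer.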
Always a great ride when I can listen to mlid podcast. Tim is a legend!
Traded my 3070 for a 6700XT + $100 (I got the $100 with the GPU) when I started noticing VRAM being an issue over a year ago. Very glad I did now, even at 1080P I used over 8-10GB in many games, and rarely need more GPU grunt. NVIDIA is ridiculous selling mid-high end cards with 8gb still.
The 3070 Ti is easily fast enough to do 4K with DLSS Quality, but there's no way it has the VRAM for that. The cost of VRAM is so low... we all know that the 8GB frame buffer exists to upsell you to the 3080, or to make you upgrade soon because of the lack of VRAM. Crazy how Nvidia is shameless with these tactics. And people still buy them! I am for AMD simply because I cannot bring myself to give my hard-earned cash to a company as deceitful and anti-consumer as Nvidia. I've been running Alan Wake 2, hopping back and forth with path tracing on vs off, and... I don't know if it's just a poor implementation, but I don't see much of a difference, and certainly not enough to make me support that company.
AMD gpus stutter non stop in Alan Wake 2, crash constantly in Helldivers 2, are unstable af in Lords of the Fallen, and aren't even able to launch some of the Fallout games. I would NEVER buy a gpu that suffers so many issues.
@@jjlw2378 Fortunately for me, I have never had an issue with an AMD GPU, so I get to pay far less for the same experience and not have texture loading issues due to lack of VRAM. I am happy I went with a new $420 6950 XT. You get 8GB from Nvidia for that price, lmao. Enjoy.
@@jjlw2378 Where are you getting this information? I have seen demos with frame time graphs in all of those games using AMD GPUs, all running well. My close friend runs a 7900 XT for Alan Wake 2 and it runs like butter. Who has the better 1% lows is a per-game toss-up between Nvidia and AMD. Are you attempting to run these new games on an RX 480?
@@jjlw2378 Helldivers is well known for crashing on just about every system but my 7900XT hasn't crashed since last month's driver update, and I've played every fallout since 3 on either that or my old 5700XT just fine aside from the universal issues in 3 and NV, which are also famous for crashing on all modern systems. Haven't played the other two, but you don't seem like a very good source here.
Regarding VRAM longevity: memory usage is going up at a fast rate compared to before, when RT wasn't a thing and games were tailored to utilize a low VRAM buffer (particularly the roughly 4-6GB usable on PS4, as a reference). Now everything is brute-forced by rasterization instead of the smoke-and-mirrors techniques used in the previous gen. An 8GB VRAM buffer today will have a hard time keeping up with the new technologies and demanding techniques, alongside large texture sizes.
12GB should be the minimum for now, and 16GB the minimum right before the PS6/Xbox A-Z Series 2 releases.
10 yr old games running great on 2GB vram are far superior to brand new games needing 8+... Hmmm, wonder what the common denominator is....
@@TheGuruStud The competence of developers has massively decreased as games have become more popular and game studios got flooded with university "game development" graduates. It's almost comical if you go explore the game files. Things that were industry standards for 30+ years have been simply forgotten, and devs just try to brute force instead of optimize.
There are still some game studios that know what they're doing, like id Software.
Fantastic episode, thank you for bringing Tim 🎉❤
41:43 quote unquote "RTX 6090"
the meme card
Love to see Tim, bought a new OLED monitor because Tim!!!
Same
Always enjoy when you bring someone to the show. Tim is always good for info and opinions.
Feels like Christmas 😍
I open my phone, see I have a notification from MLID, click on it, see that HUB is back. Like the video.
Wait, that was an hour ago; the podcast is halfway over.
really good feedback on radeon group's decision making
Mentioning FXAA... not all of it was made equal. I used Radeon Pro with my old HD6870 back in the day and that had a variety of quality settings for FXAA. The higher settings didn't blur the image at all and were still more performant than MSAA while having better coverage of things like transparencies. I think it got a bad name because the implementations of it were poor or low quality.
I remember it being very good in max Payne 3. Had a clean image at 1080p whilst being able to run the game maxed out. On a HD7950
FXAA is post processing AA. It's pretty much the same as SMAA (not to be confused with MSAA) and the reason why it sucks isn't because of blurriness or ghosting like in TAA but the fact that because it's post processing, it can only AA certain parts of the image to a limited degree. It works ok on forward rendering games but sucks badly on deferred rendering games, much like SMAA, and MSAA won't even run unless you implement it for EVERY single shader separately, hence why TAA exists.
It got a bad name because it is not MSAA or SSAA, simple as that. Same as why some people would not accept any form of upscaling no matter how good it is: because it is not native rendering.
Tim is back so soon? Lets goooo!!
AM4 owner here: yes, platform longevity matters to me... and give me more PCIe 5.0 lanes for my GPU/high-end SSD, no lane-splitting BS. Show me the value!
Looking at the PS5 Pro specs, it is clear the selling point this time around is ray tracing and upscaled image stability. It makes sense if you are trying to give the most bang for the buck. In every scenario, whether viewing the game on a small screen, a large screen, or from different distances, the player can see the ray tracing benefit every time. Whereas higher resolution or higher frame rate is only noticeable in certain circumstances and certain types of game. You can see the quality of ray tracing on a tiny screen from across the room, or on a large screen covering a whole wall, in every type of game.
The other point is pixel stability. When a game is upscaled, you get temporal sub pixel shimmering. Again, that can be noticed on a large screen at any distance, because the effect gets amplified by neighbouring pixels to form a shimmering group. The focus on a good AI upscaling solution that can stabilise pixel shimmering with anti aliasing is another big bang for the buck. Sony seem to be focussing on these two issues as their selling point, because you can see the effect all the time, rather than just occasionally under certain conditions.
Even a novice can spot the difference. You don't need a trained eye. Time will tell if they have been successful, but Sony have a good track record for punching above their weight in console design thanks to Mark Cerny.
Not to mention ghosting for temporal upscaling, though I guess that's part of pixel stability
You need guests like these who are very engaging and actually have thoughts to put in the table.
19:55 People are forgetting that games are doing 100x more work per pixel than older games. Most PS4/Xbox One games had very limited dynamic lighting, and most of it was baked; nowadays almost all lighting is dynamic, either with Lumen or ray tracing, with PBR materials bordering on offline rendering models.
Fake news. Look at PC. 10 yr old games are vastly superior in IQ including lighting, reflections, etc. Devs are 100% coasting due to hardware power aka lazy. All of these new PS5 games look like garbage. It's no better than PS4 with a res bump and AA.
Is that true though? That almost all lighting is RT? "Almost all lighting" means actual global illumination. Even in nvidia sponsored titles this is certainly not the case.
It's hard to forget something that's simply not true. Ps4 era is RDR2, The Witcher 3, Battlefront, Cyberpunk 2077. ALL of these games have dynamic lighting, even on the PS4. They were just much better optimized for the hardware they had to run on.
GTA 5 had a day/night cycle on a huge map in the PS3 era
this is an absolute dogshit take.
2:35:52 Glad to see Tim bringing up self-emitting QDs and microLEDs, because I would definitely consider those the two technologies that have been publicly announced that have even a hope of dethroning OLED.
It might.
01:13:40 FSR supports dynamic resolution scaling, so that wouldn't be something new with the Sony upscaler.
Nice guest selection recently. I wish you could schedule one with Jarrod from Jarrod's Tech at one point.
@2:05:00 such good points about price creep
Great crossover episode
Always a pleasure to listen to Tim. I wish there had been some discussion of ULMB 2; after using ULMB on my Asus PG279Q monitor, I can never go back, as it's a must-have feature for me. I look forward to the advancements of this technology.
Well, the 14th gen RPL-R was essentially a rebranded 13th gen RPL, so no it can't really be considered an "Additional gen" for the platform. It's just 13th, again.
39:15 I think homeboy question/opinion here is spot on. summed up quite nicely
As for the weight of gaming laptops: show me a single person who regularly carries their desktop with them to a cafe, onto a plane, and so on, and uses it there. People like you always talk about getting a desktop instead of a laptop, and yet none of those people carry their desktop around. It's almost like people need a laptop for portability and you can't replace it with a desktop or something. But I guess it's just me.
As for price: if someone really needs a laptop for office/school work but also wants to game, then in addition to a desktop one would also need to get an office laptop. Good luck getting a laptop with a display just as good, battery life just as good (for many gaming laptops it ranges from 7 to 11 hours these days), a keyboard just as good, upgradeable RAM and storage, and one that isn't super sluggish in basic tasks like browsing.
Yeah right, "cheaper". I'm not talking about absurdly priced gaming laptops of 4-5 thousand dollars, I'm talking about reasonably priced ones costing 1200-1600 dollars. Using one as a desktop plus an office laptop (oh, and they support Type-C charging these days) beats a desktop + office laptop in pure value.
Honestly, what interests me about Strix Halo is we're talking something that can be 1660 Ti+ performance that fits in a thin-and-light.
I don't do heavy gaming on my laptop, but being able to play older and AA games on my laptop when I'm on the road is very nice in something that I can also use as an ultrabook.
Always love to see more Tim or Steve from HUB. They both straddle the line between deep knowledge of the industry and tech and a customer-centered mentality that focuses on value. Still kinda shocked that you have guests of this caliber appearing, but I guess next you'll pull Steve from Gamers Nexus or something...
Laptop 1070 was 5-10% behind desktop 1070, laptop 4070 is about 50% behind the desktop 4070.
Meanwhile Tim: "laptops always lag behind desktops by roughly the same amount"
Deep knowledge, right.
2:30:01 You have to sit within 1.03m to tell the difference between 1080p and 4K on a 55" TV; that's why they only make them in 32", as it's pointless to go any lower apart from pixel density.
Most are made by LG or Hisense using patents from Sharp Display and Samsung, with a few CSOT panels as well, which is TCL and Samsung combined.
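The ~1m figure above is consistent with a simple visual-acuity estimate: assume 20/20 vision resolves about 1 arcminute, and find the distance at which one pixel subtends that angle. A rough sketch under those assumptions; the `resolvable_distance_m` helper is hypothetical:

```python
import math

def resolvable_distance_m(diagonal_in: float, horizontal_px: int,
                          aspect=(16, 9)) -> float:
    """Distance at which one pixel subtends 1 arcminute (20/20 acuity limit).
    Sit closer than this and individual pixels become resolvable."""
    width_frac = aspect[0] / math.hypot(*aspect)       # width share of diagonal
    width_m = diagonal_in * 0.0254 * width_frac        # inches -> metres
    pixel_pitch_m = width_m / horizontal_px
    one_arcmin_rad = math.radians(1 / 60)
    return pixel_pitch_m / math.tan(one_arcmin_rad)

d_4k = resolvable_distance_m(55, 3840)     # ~1.09 m
d_1080 = resolvable_distance_m(55, 1920)   # ~2.18 m
```

Under this model, beyond roughly 1.1m a 55" 4K panel's pixels are below the acuity limit, which lines up with the ~1.03m claim; 1080p pixels stay resolvable out to about 2.2m.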
On the TAA side of things: A lot of modern day rendering techniques take a lot of work to be compatible with things like MSAA, and that's the reason why it's so expensive to run in modern games. It's basically not worth the dev time to even implement anymore AFAIK.
I think something that gets missed on the upscaling debate is I feel it's MORE important on low end cards than high end cards. You can't look at it from an elitist point of view where you're unforgiving of less than perfect image quality. If you're shopping for a $300 GPU you can't afford to be a pixel-peeping snob. I have GPUs all the way from the high end to "can barely play it" levels through my family. Sure, if you have a 4090 you're going to think upscaling is useless at 1080p, but for the millions that still own older 60 and 70 class cards upscaling is a _necessity_ if you want to play any modern game and you're going to be FAR more forgiving of image quality if your only option is to put up with sub-30 fps or not play at all.
DLSS is great to push my high-end system to 165fps, but it's not the difference between being _playable_ or not. Whereas FSR is a godsend for our 970, 5700 XT, 1660 Super, 1070, and 1080 Ti. "Just buy a more powerful GPU" simply isn't an option for many people, and 1080p upscaling is _VERY_ relevant to those people. When I have to try and keep 5-6 PCs updated, it's pretty obvious why I care about extending the life of my hardware as far as possible. When you're getting to the point where games dip below 30-40fps, you get REALLY tolerant of less-than-perfect image quality, and FSR is still more than usable at 1080p. I just wish it looked as good as DLSS at 1080p, and it would absolutely affect my buying decision if I were shopping for a new 60-class card.
Love Tim's Insights!
Two comments:
About potential future upgrades for Framegen: Intel are still working on frame extrapolation, which would get rid of the latency penalty. If that pans out it might be a great argument for battlemage.
About the future of OLED monitors: I'm highly hopeful for tech like G-Sync Pulsar, or anything else that improves motion clarity. You need something like 360Hz (and fps!) on even IPS and OLED monitors (not to mention VA) to get motion clarity that matches 100Hz on a CRT. Motion clarity matters so much for the perception of smoothness and quality, yet we spend more and more hardware performance getting to 240+ fps and other stupid framerates, when the monitor is what could (partly) solve the issue at the source.
On that note I wonder if we'll get better versions of black frame insertion or similar tech in the future as well.
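The CRT-vs-sample-and-hold comparison above can be put in back-of-envelope numbers: perceived blur is roughly the on-screen speed times the time each frame stays lit. A sketch under that assumption; the ~2ms CRT persistence and the 960 px/s pan speed are illustrative values, not measurements:

```python
# Perceived smear (in pixels) from eye tracking across a moving image:
# the image holds still for `persistence_s` while the eye keeps moving.
def blur_px(speed_px_per_s: float, persistence_s: float) -> float:
    return speed_px_per_s * persistence_s

speed = 960.0  # a UFO-test-style pan, in pixels per second

sample_hold_100hz = blur_px(speed, 1 / 100)   # 9.6 px of smear
sample_hold_360hz = blur_px(speed, 1 / 360)   # ~2.7 px
crt_like_2ms = blur_px(speed, 0.002)          # ~1.9 px
```

This is why strobing (ULMB, BFI, Pulsar) helps: shortening the lit interval cuts blur directly, whereas sample-and-hold displays only get there by raising the refresh rate (and fps) until 1/Hz is comparably short.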
I think what Tim doesn't understand about this discussion is that super-sampling isn't the only alternative to TAA; there's a ton of different low-cost AA techniques, and while they have their own flaws, so does TAA. None of them are perfect, so until we get something that is, gamers should be allowed to pick their poison. Anti-aliasing is a balance between effective anti-aliasing, low perf cost, and clarity: you can only have two at a time.
Also, TAA has caused a massive reduction in image quality. If someone feels differently, I'd assume it's because they're either less sensitive to the cons of TAA or they're gaming at 4K. TAA causes a lot of motion blurring, and my motion-sensitive eyes notice it so much I get eye fatigue playing modern games now. This is just as much an accessibility issue as it is about enhancing graphics.
Also, TAA is a catch-22. Games without TAA aren't that aliased. Games that have TAA with it turned off tend to be extremely aliased. Why? Because TAA allows developers to be lazy and make their games as aliased as possible, since TAA will just clean it up at the end, which means we get aggressive TAA, smudgy graphics, and other techniques become ineffective.
If a game is built around TAA, TAA will look better most of the time. That does not mean it is better, though. If you built your game around not using it, or treated it as an additive and nothing else, it would look better, because then you'd have your cake and eat it too (clarity and low aliasing at the same time).
Sick collab
17:51: I would like to see supersampling added back along with TAA. The problem with doing supersampling via the driver route is that at least on the Nvidia side, DSR factors are blocked on monitors using DSC. Supersampling can still be quite relevant for 1440p monitors and it is quite annoying that it is not available for some high refresh / ultrawide 1440p monitors.
Your guest mentioned that prices of old low-end GPUs were cheaper, the main one he mentioned being the 1060 6GB, but I did a little research looking back at the last 3 generations from Nvidia, and prices were the same as or more than the 4060, which is $299, and this is before adjusting for inflation. From what I can gather, the 1060 MSRP was $299, the 2060 $350, and the 3060 $329. Correct me if wrong.
The problem is that the 4060 is actually a 4050, and should be priced as such.
Nvidia were hoping people were all like you: make every model perform like the model beneath it and nobody would notice. But we all did. Just not you.
To be fair, the 4060 is about 5-10% slower than the 3060 Ti, uses almost 100 watts less power, and cost almost 25% less MSRP at release. Prices were at least 10% higher across the economy when the 4060 released, and yet it was still 25% cheaper, 35% if you take that into account. I'm not a fan of Nvidia and own a 6800 XT along with a 4060, but you're just whining about something that's not even true, while accusing someone based on the victim mentality your story is built on. And the 4060 has frame generation, which is actually not that bad and is going to improve over the years for another 30-40% performance boost. @@IDKOKIDK
Inflation doesn't matter (as much) if it doesn't affect the price of the components; e.g. VRAM is the cheapest it's ever been
@@mikehawk6918 The problem is that AD107 is built on a $17,000 5nm wafer. That's why AMD wasn't willing to use 5nm on the 7600 XT; instead they use 6nm, which is another variant of 7nm.
@2:06:10 Tim is correct that inflation accounts for low and mid-range price increases. However, it also sounds like he's saying this is true for the high end; others have said this for sure, and it's patently untrue. The 4080 should have launched at ~$730 (call it $750), not $1200. Instead of the 21.9% inflation would justify, they went up 100%: they doubled the price. High-end prices have become utterly absurd. Trouble is, we're screwed into buying them if we want good picture quality and playability. Let's also not forget that Nuhvidia has shifted the tiers and their respective performance levels/dies.
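The arithmetic in the comment above works out if the implied baseline MSRP is ~$599; that baseline is inferred here, and the 21.9% cumulative inflation figure is the commenter's, so neither is verified data. A quick sketch:

```python
# Scale a launch MSRP by a cumulative inflation factor.
def inflation_adjusted(msrp: float, cumulative_inflation: float) -> float:
    return msrp * (1 + cumulative_inflation)

# Commenter's figures: 21.9% cumulative inflation, ~$599 implied baseline.
justified_price = inflation_adjusted(599, 0.219)   # ~$730
nominal_increase = 1199 / 599 - 1                  # ~1.0, i.e. the price doubled
```

So inflation alone would put the tier at roughly $730, versus the $1199 the 4080 actually launched at, which is the gap the comment is pointing to.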
The lack of unified memory architecture on PCs seems to be the problem
Is there some kind of rule on this podcast where you can't ask Tom questions? Coz there were some times for some insightful questions during this that just fell flat :(
I really love the content btw, my criticism is minor and comes from a place of love ❤
It's always about planned obsolescence. The 3070 with 12GB, or especially 16GB, would be a nightmare for Jensen, because he wants every single person to buy a new card every year and a half to two years, at the $500 to $600 level minimum... In other words, he wants a minimum investment of $250 to $300 per year from individuals, adjusted for inflation, and not adjusted by either economies of scale or the dead Moore's law.
Oh capitalism 🙄
I know the discussion is more fidelity-focused, but I would like to see more focus on making 60fps the standard while keeping fidelity on par. Smoothness and low input lag paired with good upscaling seems like a decent way to go.
I thought it was going to be Steve, but Tim is a great consolation prize.
I have always preferred no AA over any post-processing AA. We're now at a point where TAA is making games lose depth and look like 720p all because developers don't want to fix lighting glitches.
Please stop forcing TAA on games. Bring back MSAA. It doesn't matter if most people will only use 2x or off, it's an option we still want.
If you have 1440P and higher monitor DLSS Quality is much better
@@IDKOKIDK I compared normal internal res scaling vs DLSS in LotF at 1080p to 4K and found there isn't really a noticeable difference. DLSS and other AI upscalers are overrated because they came out at a time when games changed the output resolution (TV upscaling) rather than the internal resolution.
What games really need is just a good internal scaling option, as opposed to games like Cyberpunk 2077, which forces you to choose between output resolution or DLSS, obviously making DLSS look better. Surely this was a marketing ploy for DLSS, considering most new games in 2020 already used internal scaling when changing resolution. There really isn't a better explanation for the biggest AAA game of 2020 not having what at the time was already a standard feature.
At least games are catching on now, but we could've had good upscaling fifteen years ago.
No AA just looks like shit: jaggies galore, with image instability in motion.
Regarding image quality, one thing I noticed that nobody has ever mentioned: I use DLDSR 2.25x from native 4K on my TUF 4080 Super, up to a 6K resolution. At this level, I can use DLSS Ultra Performance, turn off anti-aliasing, and still not notice any jaggy or shimmering edges on my 32" 4K monitor. The 6K foundation resolution used for DLSS is so high and detailed, it completely eliminated the need for additional anti-aliasing (FXAA/SMAA etc.), freeing up GPU resources for other settings. This is what I noticed anyway in games like Outcast: A New Beginning, Avatar: Frontiers of Pandora, and Jedi: Survivor. I can run DLSS Performance for higher FPS and still have sharper graphics with DLDSR 2.25x. It's just a win-win; the only loss is my wallet, RIP.
At this point, hardware AA is almost “free”. Doesn’t cost much performance to have it. Not like back in the day of 3dfx cards
@@codycast No, it isn't free; DLSS is an AA technique.
It's not 4K 240Hz, right? No silly DSC issues...
@@Navi_xoo You don't need AA at 4K.
What is wrong with you people.
@@Navi_xoo Well, obviously you have no clue what you're talking about. I already tried 4K native with DLSS; the image doesn't look as crisp as 6K DLDSR. Once you've seen 6K, going back to 4K is like returning to 1080p.
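The "6K" figure in this thread follows from how DSR/DLDSR factors work: the factor scales the total pixel count, so each axis scales by √2.25 = 1.5. A small sketch (the `dldsr_resolution` helper is hypothetical):

```python
import math

def dldsr_resolution(width: int, height: int, factor: float) -> tuple[int, int]:
    """Render resolution for a DSR/DLDSR pixel-count factor applied to a
    native resolution: each axis scales by the square root of the factor."""
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

# 2.25x on native 4K lands on the "6K" resolution mentioned above:
assert dldsr_resolution(3840, 2160, 2.25) == (5760, 3240)
```

The same factor on a 1440p native display would land on exactly 4K (3840×2160), which is why 2.25x is such a popular setting.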
PCIE6 accompanies CXL3.0 on the data center chips.
Multisampling has an even higher performance hit these days, as we tend to have much smaller (and more) triangles. Good MSAA only does additional sampling on triangle edges, but if your triangles are tiny, that means you're doing a lot more sampling across the entire display. And on PC GPUs you also take a much higher frame buffer size and bandwidth hit.
Raptor Lake refresh ... Thunderbolt 5, Wifi7, 6400MT/s DDR5, 6.2GHz boost, 16 PCIE5 lanes with bifurcation. DLVR
Tom, what node is the PS5 Pro on? Your earlier leaks said N4P but have not seen the node mentioned specifically in recent videos. Is that still accurate?
I don't believe him, because why hasn't Sony made it official yet if it's coming out this year?
As a GTX 1060 3GB buyer, I went to a used RTX 3070 for 235€ shipped.
29:30 The upscaling technique used in Fortnite UE5 is better than DLSS: 1440p upscaled to 4K (Nanite and Lumen enabled). 44:30 Other than CPU/SSD optimization, the "easy wins" for PC gaming are Dolby Vision and 32-bit 384kHz audio mixing (7.1 surround and simulated 3D headphone audio).
Am I the only one who sees just a big blur between 1:10:00 and 1:17:00? Was it mentioned somewhere why it's blurred?
50:00 I still think that AMD intended the 7900 XTX to compete with the 4090 and originally planned to price it at $1199, and when it fell short of the performance expectations they cut the price to $999, and then didn't consider cutting the price of the 7900 XT, which was originally meant to compete with the 4080 at $899.
Hi, I have a question: is there any benefit to running a second monitor off the CPU's integrated graphics rather than the GPU?
Hey, can someone explain to me what 4K native games are? For example on the PS5, people have a tendency to say the games are not native but always upscaled. I find that hard to believe.
Can you do a video on the implications of PSSR in a PlayStation handheld based on the potential specs you've heard for the unit?
Looking forward to listening to this one when I'm fully awake after a coffee.
AMD has had 'ray traced' audio since Polaris if I recall correctly.
TrueAudio. Another tech that AMD abandoned.
@@arenzricodexd4409 Didn't that just become TrueAudio Next?
The solution that uses 25% of the GPU cores to process sound instead of a dedicated processor for Audio.
@@kingkrrrraaaaaaaaaaaaaaaaa4527 Originally it was a feature on Hawaii; they even had dedicated hardware inside the GPU for it. TrueAudio Next no longer has that dedicated hardware (from Polaris onwards). Technically even AMD's latest cards can do TrueAudio, but the feature was only ever used in one game: Thief 4. Then we never heard about it again.
Monitors are what you actually look at and give you the experience. Makes sense to focus on what materially matters most.
Could AMD add a 32GB SSD caching system to their GPUs for Unreal Engine stuttering issues? I don't know, it probably wouldn't work, just spitballing, I'm a noob.
That Coswheel ebike looks really cool, but the specs don't look legal in Norway 😅
I agree with Tim that AMD could make some sort of geometry/texture feature, where they have a hardware-accelerated Nanite competitor. They could even be like "Sure, Nvidia can use Nanite, but our feature is 4 times as fast and uses half the energy." If Nvidia goes for lighting with RTX, AMD can go hard on geometry.
It feels like a lot of people working at AMD aren't qualified for their jobs if they're struggling this hard to keep a brand consistent. Automobile manufacturers don't struggle as much as AMD does with having a consistent product generation and lineup.
I know it's way too early to hear anything, but what about laptops? Will there be an RTX 5090 laptop version? What would the specs be? I know it will be cut down for power management, but cut down by how much?
Most likely they'll do the same as this gen: call a desktop 5080 a "5090" on laptops, and nobody will complain because "laptops are always cut down". Those people forget the 1000 and 2000 series existed, where laptop and desktop parts were both named and spec'd the same.
Are we going to see stacked chiplets from AMD soon?
I managed to buy a great Dell Precision 7760 with an i9, 64GB of RAM, and a Quadro A5000 with 16GB of VRAM for £1500 UK inc. tax - that's around $1900. While predominantly for work, it games surprisingly well, with some games at max settings at 1440p.
Guessing that's similar to a 4080 in performance. If so, that is brilliant. Got my 6950 XT open box for $420.
@@christophermullins7163 It's a mobile workstation so not as powerful as that - more like an RTX 3080 mobile, so still reasonable for most games at 1080p and 1440p.
@@robsshedoftech6457 well I'm sure it's perfect for work. I use my GBs for Alan Wake and Hogwarts and stuff.
I need the PS5 Pro running at 1440@120 and that would make my OLED sing.
1440p 60 maybe
@@dante19890 The PS5 already does 1440p@60; maybe, just maybe, the PS5 Pro can do a lil more
Listening to Tim talk about how Radeon is more focused on short-term gains than a long-term strategy made me wonder if this could actually be an AMD issue, not a Radeon one. There is only so much TSMC supply available across CPUs, consoles, datacenter chips, and GPUs, so what if they need to reach the highest possible margins just to justify allocating silicon resources to their products?
Nearly 3 hours! Noice 😎
Talking about AA: some games like Gears of War 5 have stunningly good AA. It's built into the engine so we can't mess around with it, but it looks stunning. The visual clarity...
It's just TAA
@@dante19890 Then it's the best implementation of it I have seen so far. In other games it usually blurs the textures, like even RE4 Remake does
In my opinion not being able to mess around with it is bad. Turning it off would be preferable to many.
@@GeneralS1mba I doubt it. That implementation is perfect in my sole minuscule opinion
@@pedro.alcatra Many prefer native though. Some people can't stand any blur.
Why did you invite Tim? To trash gaming laptops again publicly?
"beefy big high performance laptop" How is 2.5 kg a chunky laptop? Over 10 years ago, my 2.5 kg laptop pushed out just 40 watts of heat. My new Legion pushes out 4.5 times the heat at the same weight and noise levels (but is usually MUCH quieter), while having a smaller case, a larger display, more ports, and so on.
6:53 As for "they always seem to lag behind by roughly the same amount" - OK, let's look at a few comparisons:
-the 970 was about 50-60% faster than the laptop 970M. Then the desktop 1070 was only about 5-10% faster than the laptop 1070 (yes, Pascal was super close and Tim never talks about it),
-then for the 2070, according to Notebookcheck comparisons (Tim used a stock laptop vs an overclocked desktop version), the desktop card was about 13% faster than the laptop version,
-in Ampere, Nvidia continued to cut the laptop parts down, so the difference was 30%,
-and now, since the 4070 mobile is severely cut down, the desktop version is 50% faster. Roughly the same difference, huh, Tim? I guess 5-10% and 50% are roughly the same according to this guy. I highly suggest Tim do his research next time before talking like that publicly.
You have to give tim a break here. Wheelchairs don't have laptop support
Tim has a good point about laptops. I have a G14 because it's lightweight with good battery life and has enough power for some 1080p gaming or Unreal development, but if I want a 4K max-spec machine, I have my desktop for that.
I'm looking forward to Strix Halo: a 1.5-1.7 kg machine with great battery life that can scale up and down with only one GPU is exciting. Although I hope they start at 32GB of RAM, as any less would be pointless since you have to share the RAM between the CPU and GPU.
Can't wait for you to have Bryan from TechYesCity on again. He's by far my favorite guest.
I do wonder how much the delay of an AI FSR has to do with also wanting to use the NPUs on the APUs (probably not on the desktop CPUs) to accelerate it as well. Another factor might be wanting a way to incorporate it into their one-button HYPR-RX software solution, which might be able to do what you guys were talking about with NVIDIA if they get all the pieces to work together right.
Unfortunately Strix Halo is probably going to be out late enough that I would wait to see what a hypothetical Zen 6/RDNA5 successor looks like, especially since it might arrive in time to support LPDDR6(X).
I run an absolutely top-tier AM5 system with a 7900 XTX and I can't complain about anything! I get around 1000 fps in every game I run, 2000+ fps in VR. I don't know what else is needed.
What is this talk about CPUs and 30 frames? How many games are 30fps only? And how many games are 30fps because of the CPU?
I like to apply both TAA and FXAA on top of each other; I really like the result. You can judge me.
For the anti-aliasing discussion: as someone who was always infuriated by jaggies and the shimmering left by MSAA, I think that at 1440p, TXAA is a very solid compromise between image quality, no shimmering, and a low perf hit.
FXAA was definitely blurry, washed-out-colors garbage not worth using in most of the games where it's available, and with the hamstrung perf we are getting with cards these days, supersampling is totally out of the question if you aren't rocking a 4090, sadly...
I like the super sampling option in games
17:41 It hasn't. The only area where MSAA is weak is transparent textures, but you can enable an adaptive MSAA mode which uses SSAA on transparent textures and MSAA everywhere else.
Better than TAA across the board.
It's worth pointing out that the original question you're discussing here had an error in terminology, no doubt caused by nVidia's dishonest naming of their upscaling technology. DLSS is not super sampling. It's sub-sampling. Super sampling increases image quality, while the questioner thought it reduced quality. He clearly meant DLSS, and he is, of course, correct that it reduces quality. To the extent that DLSS or FSR ever look better than "native", it is entirely due to replacing a particularly bad TAA implementation.
It's sad that the option for the highest image quality via supersampling isn't at least provided. And yes, it can be done driver-side (DSR), but it's blocked on some monitors because Nvidia apparently uses a different display head for DSC.
@@Navi_xoo I go back to 20+ year old classics sometimes these days, and with today's games, in 20 years am I supposed to look at them in mediocre image quality on future hardware?
@@Navi_xoo I've been playing games on computers for over 40 years now. On PC's for over 30.
I assure you, there's zero chance I don't know what I'm talking about on an issue like this.
The very notion that 4xMSAA looks worse than TAA is beyond laughable.
Isn't DLSS super sampling, because it increases the quality beyond the render resolution?
@@GeneralS1mba Super sampling is when you render at a higher-than-native resolution; DLAA would be a form of supersampling.
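To make the terminology in this thread concrete: supersampling renders more pixels than the output and downsamples, while DLSS-style upscaling renders fewer pixels and reconstructs the rest. A back-of-envelope pixel-count sketch (the scale factors are illustrative, not official figures):

```python
def rendered_pixels(out_w, out_h, scale):
    """scale > 1.0 = supersampling (e.g. 2.0 per axis = 4x SSAA),
    scale < 1.0 = a reduced internal render resolution that is
    then upscaled to the output (DLSS/FSR style)."""
    return int(out_w * scale) * int(out_h * scale)

native = rendered_pixels(3840, 2160, 1.0)      # 4K output, rendered 1:1
ssaa_4x = rendered_pixels(3840, 2160, 2.0)     # 4x the shading work
upscaled = rendered_pixels(3840, 2160, 2 / 3)  # ~44% of native work

assert ssaa_4x == 4 * native   # supersampling costs extra pixels
assert upscaled < native       # upscaling renders fewer, reconstructs more
```

This is why "DLSS is super sampling" is a naming quirk rather than a description: by pixel count it is the opposite direction of SSAA.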
I'm guessing that the PS5 Pro will upscale PS4 games to 4K as well.
Back compat like that would be massive
It probably needs to be in the game rather than a universal upscale toggle. If you have a good TV, just set your PS5 to 1080p and let the TV do the upscaling.
@@mikem2253 That's the only way I can see Sony being able to market it
What if a graphics card had a small fast onboard SSD just for caching shaders so they don't need to be completely recompiled every time?
Nvidia SEEMS to be recognizing that they dropped the ball on VRAM. If the 5060 doesn't have at least 12GB it will be DOA. The 5070 had better be 16GB.
Assuming they keep the bus width for every tier and 3GB memory modules aren't coming, they would have to do 128-bit 16GB for the 5060, 192-bit 24GB for the 5070, 256-bit 32GB for the 5080, and 384-bit 48GB for the 5090.
They did it on purpose. The thing with nvidia is if they compete evenly with AMD on VRAM they can kill AMD sales wise even if their card is a bit more expensive in general.
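The capacities in the bus-width comment above follow from simple GDDR math: each 32-bit channel carries one memory module, and a clamshell layout (two modules per channel) doubles capacity. A minimal sketch, assuming 2GB modules in clamshell mode, which is what those figures imply:

```python
def vram_gb(bus_width_bits, module_gb, clamshell=False):
    """GDDR capacity: one module per 32-bit channel,
    doubled if modules are mounted clamshell (both board sides)."""
    channels = bus_width_bits // 32
    return channels * module_gb * (2 if clamshell else 1)

# The comment's assumed configs: 2GB modules, clamshell on every tier
assert vram_gb(128, 2, clamshell=True) == 16   # 5060
assert vram_gb(192, 2, clamshell=True) == 24   # 5070
assert vram_gb(256, 2, clamshell=True) == 32   # 5080
assert vram_gb(384, 2, clamshell=True) == 48   # 5090
```

Without clamshell (the cheaper, more common configuration), the same buses give 8/12/16/24GB, which is why 3GB modules would matter so much.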
In terms of endgame monitors, it would take a looooong time to need an upgrade from 4K OLED at 500Hz+ to 8K or 16K.
I'm a bit surprised that the GPU upgrade is only 45 percent. It was 100 percent going from the PS4 to the PS4 Pro.
When many games run at 720p in performance mode, the extra GPU grunt is very much needed.
Look at the leap from RDNA2 to RDNA3 (or lack thereof). 45% is still good though, considering other things like bandwidth are also being increased. The goal is probably to get everything to a minimum of 1080p to be upscaled with PSSR.
@@mikem2253 The PSSR upscaling tech will be the saving grace, because jumping to native 4K is a massive drain on the GPU, but with FSR3/PSSR you would upscale to 4K and hit 60fps in most games. It's so smooth you really wouldn't know it's not native 4K.
The extra 1GB of memory on the PS4 Pro was for the UI, so it could run at 4K.
Talk about how mipmapping destroys visual quality: with it, textures in the near-to-far distance are blurred (AA not necessary); without it, everything shimmers in high-contrast pixelation (AA does very little to stop it).
Creating a road shader with high visual quality requires actually doing the work to fade textures to grey and reflective at the horizon, like in the real world...
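For context on the blur-vs-shimmer trade-off described above: a GPU picks a mip level from roughly log2 of the screen-space texel footprint, and a negative LOD bias is the usual knob for trading that blur for shimmer. A simplified sketch (illustrative only, not any specific graphics API):

```python
import math

def mip_level(texels_per_pixel, lod_bias=0.0, max_level=10.0):
    """Approximate GPU mip selection: log2 of how many texels
    one screen pixel covers, plus an optional LOD bias
    (negative = sharper distant texels, but more shimmer)."""
    level = math.log2(max(texels_per_pixel, 1.0)) + lod_bias
    return min(max(level, 0.0), max_level)

# A pixel covering a 4x4 texel footprint selects mip 2 (quarter resolution)
assert mip_level(4.0) == 2.0
# Biasing by -1 keeps a sharper mip at the cost of aliasing
assert mip_level(4.0, lod_bias=-1.0) == 1.0
```

Real hardware interpolates between the two nearest levels (trilinear) and uses anisotropic filtering to reduce the blur on surfaces seen at grazing angles, like roads at the horizon.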
I'm surprised Tim said he needed to put The Talos Principle on x-many times supersampling.
I found that I could not get Serious Sam 3 or 4 to alias, no matter what angle I looked at anything.
I think with the 3.85GHz CPU, the PS5 Pro will be able to hit AI-upscaled Dragon's Dogma 2 & GTA 6 at 60fps. 1080p upscaled to 1440p or something like that.
I'm fairly certain it'll hit 4GHz, if not 4.2GHz
I got double burned by Ampere and its miserable amounts of VRAM. My wife had a 3070 and I had a 3080, and we both ran into issues at 1440p 😡😡 fuck nVidia, we both run 7900 XTXs now, no VRAM issues there 😁
What about Steve from Gamers Nexus? My guess is his ego wouldn't let him "prop up" the MLID podcast. 😂🤣
huh
@Pikkoroo He's saying that Steve at Gamers Nexus won't come on Tom's Broken Silicon because he thinks he's too good for it. I wouldn't agree that he is, but it could be why he hasn't been a guest yet.
@@frommatorav1 Too good? Maybe too busy, or too humble. I heavily doubt it's ego
@@puffyips I was pointing out what the first comment was saying. He hasn't done a lot of collaborations but he has been on with PC World (Gordon) and Wendell. I enjoyed those videos, so I would be happy to see him on Broken Silicon. I won't pretend to know why he hasn't been on before. He might want to know Tom better first or he might be too busy. I'm not saying his ego is too big but I don't think he's too humble, either.
The other reason why Sony is developing PSSR is because Microsoft's console is also based on AMD hardware and also relies on FSR for upscaling. So if they come up with their own method that beats AMD's standard upscaler that's something they can push forward to make the PS5 Pro look like a more advanced product than the Xbox.
Yeah, and it looks like there won't be a pro version of the Series X; they will release a new console in 2026 and stagger their releases. But without PSSR the Series X2 might not be better than the PS5 Pro. So what if it runs games at 80fps instead of 60fps for 120Hz displays? It'll be a useless feature, as frame-gen will do that for the PS5 Pro and it will look better. Let's also not forget that Microsoft will surely release a Series S2 at 12 TFLOPS or less, which will quash any attempt by the Series X2 to beat the PS5 Pro.
Things are really, really bleak for Microsoft in the gaming industry, but they have cornered themselves with the Series S and with Starfield running at 30fps and looking worse than Mass Effect.
The PS5 Pro will be a more advanced product than the Series X either way.