@@zdenkakoren6660 Yeah, I saw that. The 7800 XT with only 60 CUs should get demolished by the 6800 XT then. Game clock is around 2.6GHz, I think. I can get that with my 6800 XT and an overclock.
@zdenkakoren6660 ComputerBase tested one as well and found it to trade blows with a 6950 XT. I'm not sure why HWU's scored lower; maybe their 6950 XT was an OC'd AIB card and ComputerBase's wasn't? Or maybe differently branded MBA cards have different V/F targets.
I will say that it's a bit unfair to assume that the 54% performance-per-watt claim applies to ALL RDNA3 GPUs. Whenever a GPU maker uses a graph like that, it's the new flagship SKU vs the old top SKU. As you go down the stack the configuration changes. In general, the more expensive cards tend to be more efficient because their performance comes more from a larger die with more CUs than from high clock speeds. For example, if you cap the framerate of a game to something a 4070 Ti can hit, the 4080 and 4090 will consume less power. Of course, some generations have the top SKU go overkill and sacrifice that efficiency. What you're doing is sort of like saying Nvidia is lying about Ada's efficiency because the 4060 Ti only uses like 18-22% less power than the 3060 Ti and is only 5% faster, meanwhile the 4090 is like 50-70+% more efficient than the 3090, and those are the numbers Nvidia gave. For one thing, you're testing a card that doesn't even have RDNA3's main structural change (chiplets). It's like saying Turing has unusable RT after testing a GTX 1660 Super. The 7900 GRE, on the other hand, is basically a 6950 XT (according to ComputerBase) but uses 270W instead of 335W. That being said, RDNA3 is disappointing.
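Taking that commenter's GRE figures at face value (equal performance to a 6950 XT, 270W vs 335W board power), the implied efficiency gain is easy to compute, and it still lands well short of 54%:

```python
# Implied perf/W gain if performance is equal and only power differs
# (270 W and 335 W are the commenter's cited board powers, not measurements).
power_6950xt, power_7900gre = 335.0, 270.0
gain = power_6950xt / power_7900gre - 1
print(f"~{gain * 100:.0f}% better perf/W")  # ~24%, not 54%
```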
As rumoured, next gen won't have a high-end GPU, which is fair enough if what they do release is competitive. But the point is AMD doesn't need to be merely competitive; they have to be much more than that. People aren't even buying Nvidia at this point, so if AMD just does similar things to stay competitive, why not buy Nvidia instead? They have to offer what a real next gen should be if they really want market share. It looks like they are more focused on servers and AI, as they've already disclosed that MI400 GPUs are in the works.
It's too bad AMD execs don't understand this. What we don't understand is why AMD doesn't care about market share. We expect them to care, but their actions say they don't.
You kinda should have compared the RDNA2 OC with stock RDNA3, and the RDNA3 UC with stock RDNA2. The way you compared them doesn't achieve what you wanted to do... instead, you compared an OC to a UC, which is comparing higher clocks to lower clocks, just like the stock-to-stock comparison is.
The way I see it, RDNA3 is a much better alternative than RDNA2 if you don't have a GPU or have an old one, since RDNA3 has much better and more stable prices at the moment while also providing newer software, and it could get better performance with future updates, as AMD has proven before.
I hate when people compare newer-gen cards to older hardware only by performance. These cards are getting more expensive because they have more features... (DLSS 3 or AV1 encoding, for example).
The RX 7600 with the same specs is just 3% faster than the 6650 XT, simply because it has 3% faster clocks, not to mention that the 7600 has 20% more transistors. Now the best part... the 7900 GRE with the same specs as the 6900 XT is only as fast, and is slower than the 6950 XT. And the funny thing is that the 7900 GRE has twice the transistors. RDNA3 is a total failure in terms of performance and performance per watt after 3 years of development and on a smaller TSMC process.
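A quick sanity check on those transistor ratios, using commonly cited (approximate) die figures; treat these as ballpark numbers rather than official measurements:

```python
# Rough transistor-ratio check using commonly cited die figures
# (approximate values, not official measurements).
navi23 = 11.06e9   # RX 6600 XT / 6650 XT
navi33 = 13.3e9    # RX 7600
navi21 = 26.8e9    # RX 6900 XT / 6950 XT
navi31 = 57.7e9    # RX 7900 GRE/XT/XTX (GCD + MCDs combined)

print(f"Navi 33 vs Navi 23: {navi33 / navi23:.2f}x transistors")  # ~1.20x
print(f"Navi 31 vs Navi 21: {navi31 / navi21:.2f}x transistors")  # ~2.15x
```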
Yeah, I'm building them to sell. I've switched to using the 7600 on budget-to-mid builds because it's only $10 more than a 6650 XT. But on entry level it's still the 12100F/6600.
Let's make a hypothetical example: a GPU upgrade costs $300 and the new GPU saves 70W vs the old one. The most expensive electricity is in Belgium, at $0.39 per kWh. Such conditions call for a Gold-rated PSU at roughly 90% efficiency, so that 70W comes out to about 0.078 kW at the outlet for 1h of full load (gaming), which is 0.078 × $0.39 ≈ $0.03/h in savings. The point of return on investment is at 300/0.03 ≈ 10,000 hours of gaming. If we consider this a generational 2-year upgrade, it makes the upgrader play roughly 13-14 hours per day. A nice second-shift job to recoup your investment in a greedy GPU company 😅.
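Here is that payback arithmetic as a minimal sketch; the $300 upgrade cost, 70W saving, 90% PSU efficiency, and $0.39/kWh rate are the commenter's hypothetical inputs, not measured data:

```python
# Payback-time sketch for a GPU upgrade, using the commenter's
# hypothetical numbers.
upgrade_cost_usd = 300.0
gpu_savings_w = 70.0        # DC-side power saved by the new GPU
psu_efficiency = 0.90       # Gold-rated PSU, roughly
price_per_kwh = 0.39        # Belgium, roughly the priciest market

wall_savings_kw = gpu_savings_w / psu_efficiency / 1000.0   # ~0.078 kW
savings_per_hour = wall_savings_kw * price_per_kwh          # ~$0.030/h
breakeven_hours = upgrade_cost_usd / savings_per_hour       # ~10,000 h

hours_per_day = breakeven_hours / (2 * 365)                 # 2-year cycle
print(f"Break-even after {breakeven_hours:,.0f} h "
      f"(~{hours_per_day:.1f} h/day over two years)")
```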
So the 3 to 5% improvement is because of the increase in wattage from 132 to 165 watts. I think AMD wasted 2 years trying to make chiplets and Infinity Fabric work in RDNA3 instead of working on performance improvements. RDNA3 is just RDNA2 relabelled and built on a smaller fab process. Hope AMD finds their answers on chiplet integration with RDNA4, at least.
What if RDNA3 is indeed CDNA3 with video output? 🙃 The processing power of the RDNA3 7900 XTX in big compute loads is faster than the CDNA2 MI250 and almost twice as fast as the 6900 XT, according to AMD's results published in the GPUOpen AMD lab notes (finite difference docs, Laplacian part 4). A 6650 XT vs 7600 comparison may be interesting to see if compute and math workloads make a difference... 1% to 10% more FPS is not an expected result for a card with +100% TFLOPs on official specs...
In technological terms, chiplet GPUs (RDNA3) for gamers make for the "World's Most Advanced Gaming Architecture", but nowhere does that line explicitly mention real-world performance. That's marketing. AMD tried to lie their way out of this hype trainwreck and failed miserably.
AMD buried PhysX, G-Sync (remember your old god?), HairWorks, "TressFX"... tell me who is technologically superior, with your mountain of dead, unmaintained tech.
I have RDNA2, an XFX Merc Black RX 6750 XT, and I won't buy anything until at least RDNA5. RDNA4 probably won't have any high-end SKUs, which is sad because they are wasting their opportunity to bring us a good RX 8800. As for RDNA5, if it's priced correctly and gets us a 9800 die with 4090 raw performance (without upscalers on) at €550, I will consider it. Edit: Intel Arc 3rd gen might also be the way to go, and Nvidia is already dead to me. I was a long-time green team owner, but they became the Apple of PC components, so I must buy something else 😂
Just a technical note - the 6nm Navi 33 is based on a version of RDNA3 which lacks the 50% increase in vector register capacity as seen in N31 and N32. So it might be a slightly better outcome for RDNA3 if you compare the 6900xt vs 7900xt (if you could account for the minor difference in compute units i.e. 80 vs 84). But it seems N33 brings little to the table from architecture alone. Great analysis!
Great technical note. Thanks for sharing!
How did you know? Thank you.
The RX 7900 GRE has a CU count of 80, so that will definitely be more accurate.
From memory, my Sapphire 6900 XT can hit 54FPS (Cyberpunk 2077, 4K HDR, High, no RT) on a 5900X CPU clocked at 4.4GHz. I recently got the XFX 7900 XTX and I can get 68FPS (High, no RT) on the same CPU. That's a 25% boost in rasterization performance. For RT performance, from memory it's a 50% boost on the same setup. I expected this because, architecture-wise, Infinity Fabric is going to introduce latency and reduce performance.
But I still got the 7900 XTX because Cyberpunk 2077 is >60FPS and MSFS is close to 60FPS. (Yes, I know MSFS works better on Intel CPUs, but the 6900 XT can only run at 48-50FPS while the 7900 XTX manages 58FPS.)
I also recently got the Minisforum UM790 Pro with the 7940HS APU, and its RDNA3 GPU is definitely much faster than the 6800HS in my Asus X13 Flow 2022: the 780M can actually run Hogwarts Legacy and Cyberpunk 2077 at playable FPS while the 680M cannot.
@@erictayet Sounds like you know your stuff, but comparing an XT with an XTX ? 🤨
This is the definition of a "skip-it" generation for both AMD and Nvidia users.
I don't think skipping the 4090 is wise if you have the budget.
The thing is that Lovelace is technically impressive, but Nvidia decided to offer poor value products for all but the 4090
It's more of a case of buying last-gen stuff or waiting for this gen to drop harder in price. My mate got an RTX 3090 for a similar price to a 4070, still with 1 year of ASUS warranty. On the used market, of course, though that GPU wasn't used much: it hadn't been torn apart, had no dust, and the temps are really good.
@@jal.ajeera debatable, had no price cap.
The 4090 and 4080 are amazing though. It's the price of the 4080 that makes it a bad deal.
This was why I was so confused when the 7600 came out. I saw the small boost over the 6600 xt with the same core count and thought "what were they doing for the past 2 years?" The 7800 xt is going to be embarrassing.
It's much cheaper for similar specs and higher clocks; this comparison is dumb. It ignores that a newer architecture's chief advantage is being more efficient, and comparing overclocked to underclocked ignores that the underclocked card could also be overclocked. Apples to apples would be looking at thermals and power draw, which are the real limiting factors for clock speeds, not the clocks themselves. So RDNA3 is more efficient and can run at higher clocks with lower power, and he's trying to take that out of the comparison to make Nvidia look better. He's an obvious shill.
@@doltBmB Much cheaper? The 6650 XT is cheaper than the 7600, and they have the same performance. If the 7600 can run much higher clocks, why doesn't it? Why does it perform like a 6650 XT?
Boys, boys, you guys are comparing the 7600 with the 6600 XT/6650 XT cards and not the 6600. Coming from an RX 6600, I'd honestly say the 7600 is a decent jump, but people expected it to be like a 6700 XT.
@@Joscraft_05 Bro, wtf, they are the same price! Of course you have to compare the 7600 with the 6650 XT and not the 6600!
@@doltBmB The 6650 XT is cheaper while drawing less power than the 7600, my guy...
Excellent and very informative video sir! Well done! Keep up the good work. 👍
Thanks, will do!
My worries about RDNA3 started when the W7900 showed only a small performance increase over the W6900X despite having 16 more compute units, a nearly 50% wider memory bus, and slightly higher clocks.
RDNA3 underachieved but RTX 4000 is purposely nerfed. There is a difference.
Technologically Lovelace is far more advanced than RDNA3.
With the last generation let's not forget that Nvidia had a huge node disadvantage compared to AMD.
@@mrwpg6286 The 7900 XTX is the full N31 die; the 4090 is a cut-down AD102 and has it completely beaten, while N31 is just edging out the 4080, itself a cut-down AD103 die, in raster but loses in RT.
Price dictates which GPU is best for the task, and N31 is overall a really fast chip, but it simply doesn't come close to the highest-end RTX 4000 chips.
At the lower end it's a bit harder to show, but the 4060 and 4060 Ti chips are what the 4050 and 4050 Ti should've been; it's the reason they use so little power, while the 7600 uses more energy than the 6600.
@@mrwpg6286 dude said purposely nerf, lol. Thought I heard it all from the fanboys.
@@mrwpg6286 he's pretty much stating the facts. Full fat AD102 has 144SMs where the 4090 has 128. If a 4090ti released with the full die, AMD would be left in the dust completely.
@@ishiddddd4783 Wrong, the 7900 XTX isn't full N31.
@@OinWorkingway Plum Bonito is the full N31 die with 6144 SPs, and the alleged 7990 XT and 7950 XT, if those cards ever come to life, would still use the same die. So it is the same silicon as the W7900.
Again, they are good GPUs, but Nvidia didn't lag behind at the high end this generation and it shows; the 7900 XTX barely surpasses the 4080, which is a cut-down AD103 die.
Please keep making videos like this; this is the proper analysis that real gaming consumers are looking for 👍
Thanks, will do!
There are reports that RDNA4 has been giving AMD problems; it's why their next release will just be mid to low end. So basically they STILL haven't worked out the issues RDNA3 has, but oddly they are releasing RDNA 3.5 in the new laptops.
RDNA 3.5 is just to fix the efficiency issues of RDNA 3; they can't ship next-gen APUs with a broken IP again. There's a reason you've barely seen any U versions of Phoenix this gen. And no, it won't come to desktop. It is what it is.
@@thevaultsup Isn't RDNA 3.5 in notebooks still monolithic?
@@aviatedviewssound4798 Most of it, yeah; only Strix Halo is chiplet-based.
@@thevaultsup Great! AMD's chiplet design is really suffering from latency, which I don't think Meteor Lake's tiles suffer from. But Meteor Lake's integrated GPU is technically more like a dedicated GPU than an integrated one, since it sits on its own tile.
Good to see your channel growing and heading towards the 10k mark. Discovered your channel around the 2.4k. I always look forward to your analysis.
Thanks so much!
Could you include some light-moderate RT results (doesn't need to be Ultra settings) at 1080p? I am interested to see if the Ray Accelerators upgrade between RDNA2 and 3 is visible in practice.
I'm not surprised at the RDNA3 results. Marketing BS aside, and the weakness of a duopoly marketplace, I always assumed that the first gen of a new architecture would never realise its full potential due to unforeseen issues and missed market analysis. That last point really is historic and still makes me wonder what the hell is going through the heads of those in charge. However, looking to the future, AMD will likely have a great product based on the hard work now. The chiplet design is the current issue, while the architecture looks like it can deliver really good results in a low power envelope. It's also modular, so once AMD can bring the architecture and chiplet design together it will start to deliver much better performance. Of course, we have to see that happen.
Till then it's more BS from the tech companies that only the dedicated take on and highlight. Thanks for the hard work in benchmarking and highlighting the BS :)
Thanks!
True, the real benefit of RDNA3 might be its renewed scalability. Pay no attention to the gimping of the 7600 that the AMD marketing pinheads demand; the real issue is, "Where is the ceiling on RDNA3?" The 6950 XT was certainly at the limit of RDNA2. Overclockers have managed to drive clocks 35% above boost clocks on the 7900 XTX, exceeding 4090 performance.
@@dgillies5420 That sounds interesting. Do you have a link to these OCs where it's explained? At the moment most GPU news concentrates on the negative, with many believing that OCing is dead as GPUs are already pushed to the max.
I would say that the marketing around architectures is a bit misleading, because what really matters is the node the GPU was built on. If there is no node improvement or a software one, there will be no gain between different architectures from the *same company* if they're using the same node; the process node is what determines the improvement between GPUs. **Great video by the way, keep uploading**
Well done, your methods are about as good as one can get for this type of testing. Subbing because I respect your research. Thanks for this.
Very interesting benchmarks. I was curious about this analysis - thank you. Horizon Zero Dawn is probably a driver issue. It looks like RDNA3 was focused primarily on higher clock speeds - the 7600 can (over)clock a lot higher than the 6600/6600 XT/6650 XT despite only being on a 'slightly' better node (TSMC N6 is a derivative of TSMC N7). TSMC N6 probably allows 5-10% higher clocks than N7, but it looks like RDNA3 can clock 25-30% higher than RDNA2.
Getting so tired of AMD's and Nvidia's tomfoolery. We truly need Intel to jump on their toes, hard and heavy. Crushing bones, and performance per dollar.
The only thing they have left to do is compete on price, which you don't really want to do, but that's where we're at.
What will probably happen is they'll take a note from Nvidia's book and give FSR3 frame generation that's only usable on the 7000 series, with a major boost in FPS over other gens. If we're lucky, they'll also take a note from Intel's upscaler tech and make it usable across all or most cards, with the 7000 series having the hardware to run frame gen, like how XeSS runs best on Intel's cards but works across all brands. That's the best-case scenario. We would then see those AI cores put to work, and reviewers would then see the generational leap. Look at Nvidia's cards: no real improvement over last gen when you take away upscaling, etc. AMD is just doing it later.
Wouldn't a better test be a 7800 XT vs a 6800, matching their core and memory clocks (both are 60 CUs)? That would be the closest way to measure architectural differences.
And if I were guessing, it's like this: RDNA3 wins in compute; RDNA2 matches or exceeds it in pure raster gaming.
For RDNA 3, AMD didn't increase TMUs per CU while doubling the stream processor count. Need to test ray tracing and mesh shaders. The extra shaders only support the wave32 instruction set.
Navi 33 is using TSMC 6nm, which is just a performance-optimised 7nm, so the fact that the 7600 matches or barely beats the 6600 XT at the same clocks is no surprise to me, since RDNA3 was focused on upgrading the path tracing (PT) performance of the architecture to catch up to Nvidia, who had one extra generation of time to optimise their RT cores. I don't expect another bump of more than 20% in the next 3 gens, as over 3/4 of performance is down to node advancements.
Nvidia managed to get 30-50% from Kepler to Maxwell while both were on TSMC 28nm.
When comparing the 6600 XT and 7600, remember that the 7600 is on an improved version of the same 7nm node, but even the ~15% performance boost 5nm would offer is not going to make up for this lacking architecture.
The $269 MSRP is somewhat of a consolation, but it's for a card that technically costs less to manufacture than a 5nm card with faster RAM would.
What I'd have liked to see is 10GB of VRAM on a 160-bit bus, with either a slightly larger 40CU die or a 5nm die (edit: still with 40CU), at a $299-$329 MSRP.
Or this card, $40 cheaper.
I like your thinking as we should be getting something closer to the RX 6700 non-XT for that price this generation. Seems like that 10GB RX 6700 non-XT came back to haunt them in the end.
RDNA3 is an embarrassment 😭😔
What are u talking about
We are all hoping AMD hasn't just started faking stuff, having reached their performance limit with RDNA2... I loved RDNA2, have two RDNA2 GPUs, and was holding my breath for RDNA3. Then I was horrified when I saw the prices relative to the performance. Sometimes, if you relax and let some time pass, AMD's value starts to shine through. There has been a bit of that with RDNA3, but not enough. AMD should really do a refresh of RDNA3. Good video.
Thanks. I agree with you on RDNA2. Love those RDNA2 GPUs. Maybe that's why we're disappointed with RDNA3. We were all expecting a repeat of RDNA2.
It's possible this new architecture will scale well as they refine the design in future products, but the first generation of an all-new design is difficult to optimize to be significantly better than the previous gen. Think Nvidia's RTX 2000 series. If only all these tech companies would focus more on delivering good products rather than overhyping future ones.
The thing is, the RX 7600 was basically a 6600 XT except on a slightly improved node (6nm) vs the 7nm of the entire RDNA 2 lineup. Looking it up, that only allows for about a 6% improvement in power efficiency, which is not exactly noticeable.
People should never have expected this card to be any good based on what we see on paper.
Damn, I mean, I knew it wasn't that much of an improvement, but only 1-3%? That's just cruel lol
Wow, just wow. Thanks for your time to find this out. Thanks for your hard work, sir.
Sure thing!
There is one problem: the 6600 XT/6650 XT has half the TFLOPs of the 7600, and where does that 2x TFLOPs come from? The dual-issue capability. But RDNA1/2 have wider shaders, versus RDNA3, which has narrower ones with dual-issue capability plus one RT core and dual AI cores.
A 5700 XT vs a 6700 XT at the same clocks gets the same FPS.
One good test would be a 5800X3D vs a 7800X3D at the same 4.0GHz with RAM at 4600MHz...
It also means that if you consider the RDNA 3 CU to be 64-wide (but actually capable of double the ops), it will appear to have improved performance per clock compared to a truly 64-wide RDNA 2 CU. Basically, it will look like the architecture has improved "IPC" (see the sketch below).
From this, it seems that there are no particularly special matrix units dedicated to AI acceleration as in the CDNA and CDNA 2 architectures. The question is whether to talk about AI units at all, even though they are on the CU diagram.
The ray accelerators have already been mentioned, i.e. the units responsible for hardware acceleration of raytracing effects, analogous to Nvidia's RT cores, which were first used in RDNA 2 GPUs. In RDNA 3 they also have an improved architecture: their performance is supposed to be 50% higher per CU and they are supposed to handle 50% more rays at once. Ray accelerators have various improvements in sorting and analyzing the "boxes" used during raytracing calculations with the BVH (Bounding Volume Hierarchy) method used in today's raytracing games. But even the generic shaders, which do some of the work in these calculations not relegated to dedicated RT units, have some new instructions to improve raytracing performance, according to AMD.
You bring up some interesting points. AMD did add AI acceleration into silicon this gen; however, it has no impact on gaming at this point. Maybe with FSR3?
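To put numbers on the dual-issue point above, here is a minimal sketch of theoretical FP32 throughput. The shader counts and boost clocks are approximate spec-sheet figures, and the pairing rate is a hypothetical illustrative parameter, not a measured value:

```python
# Back-of-envelope FP32 throughput for the 6650 XT vs the 7600.
# Spec-sheet style inputs (approximate); pair_rate is a hypothetical
# fraction of instructions the compiler manages to dual-issue.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0  # FMA counts as 2 ops

base_6650xt = tflops(2048, 2.635)   # ~10.8 TFLOPs (single-issue)
base_7600 = tflops(2048, 2.655)     # same width; 2x only when ops pair

for pair_rate in (0.0, 0.25, 0.5, 1.0):
    effective = base_7600 * (1 + pair_rate)
    print(f"pair rate {pair_rate:.0%}: ~{effective:.1f} TFLOPs effective")
# 100% pairing gives the ~21.7 TFLOPs marketing number; low real-world
# pairing rates land much closer to the 6650 XT's ~10.8 TFLOPs.
```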
AMD tries to compare the 7600 to the launch price of the 6600 and make it seem like a good deal, but what people need to realize here is that the 6600 series (and the 6700 XT, for that matter) were released in the middle of the mining craze, when GPU prices were at an all-time high. AMD adjusted the launch prices to better reflect the market conditions at the time. Just look at the 3060 12GB, which launched in January 2021 at $329, for example. It handily outperforms the 6600 ($329 MSRP) and even surpasses the 6600 XT ($379 MSRP) at 1440p and above, while having 12GB of VRAM.
Great video. This channel is superb.
Thank you so much 😀
this video is so underrated
Max frequency is part of the architecture, as is power efficiency (how long it can boost frequency).
So there is no point in equalizing frequencies to compare both, it will only drag the better one down.
I think the 7600 is simply an entry-level part to replace the 6000 series when it runs out of stock. It should be cheaper, though. It most likely will be once the 6600 XT is sold out.
We need the same comparison for Nvidia between Ampere and Lovelace (same clocks, same memory speed), to isolate the architectural difference as much as possible.
That kind of comparison isn't really apt for describing Lovelace vs Ampere, as, like with Pascal, a large bulk of its performance gains come from the implementation of a superior memory system (the massive L2$ increase) and its massive boost in clock speed. A better comparison would be taking two cards of comparable TFLOPs/pixel fillrates (so clocking the 3090 Ti to 2GHz and the 4070 Ti to 2.8GHz) and seeing how the two stack up against one another. The 4070 Ti would have no issue achieving comparable perf/TFLOP at 1080p or 1440p in most titles, but at 4K it would completely crap itself, and in certain titles even 1440p proves to be too much for the little card.
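For what it's worth, those suggested clock targets do line up on paper. A rough check using the public CUDA core counts (a spec-sheet calculation, not a benchmark):

```python
# Check that 3090 Ti @ 2.0 GHz and 4070 Ti @ 2.8 GHz land at comparable
# theoretical FP32 throughput (spec-sheet shader counts).
def fp32_tflops(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz / 1000.0   # FMA = 2 ops

print(f"3090 Ti @ 2.0 GHz: {fp32_tflops(10752, 2.0):.1f} TFLOPs")  # ~43.0
print(f"4070 Ti @ 2.8 GHz: {fp32_tflops(7680, 2.8):.1f} TFLOPs")   # ~43.0
```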
AMD really should have just done 5nm RDNA2 with faster memory and sold it cheap. Imagine a 5nm 6900 XT with 20Gbps memory for $699: everybody would have been thrilled, and honestly it probably would at least match the 7900 XT.
You'd probably be better off spending the extra $15-$25 for the 6700. It's slightly faster and has 10GB of VRAM.
I love this review. Well done!
Thank you. Glad you enjoyed it!
Nice and surprisingly informative video.
But I want to add something.
While it might barely matter in this case, achievable (performance-scaling) clock frequencies and power efficiency can also depend (slightly) on the architecture.
Equalizing performance stats when one die might consume significantly more power (depending on the operating point - edge cases?) wouldn't automatically make it a fair comparison either.
However, in this case I assume that, at your tested settings, they were close in power consumption, and the 7600 even has a small node advantage anyway.
Thank you. I agree. The focus for this study was on equalizing the clock speeds to understand the benefit of the new architecture and when overclocking the 6600 XT, the power consumption did increase by about 20W.
My best bet is that the 7600 is heavily nerfed on the vector register side, like the top comment suggested. But it's also not just a new architecture: it features a design with two separate process-node chips working together, which leaves a boatload of performance potential from optimizing them. Just like Intel's good old 14nm approach up until 12th gen.
RDOA3
This aged very badly. AMD not only beat Nvidia, it also matched it in workstation workloads. It beats Nvidia's 4080 Super and is cheaper.
Why didn't he just take the RX 6650 XT to compare?
It's crazy given the rumors of "double performance".
Thank you, man. Finally someone put in a video what I've been saying for months.
RDNA3 is a failed architecture that brings nothing to the table over RDNA2.
I would love to know what AMD's design goals were for this architecture. Are they changing the architecture just so it will be better for AI in the future? It certainly didn't help much for gaming.
For entry level, 24 percent is more than enough of a generational uplift. The issue is the price to performance. The 7600 was already in the top 5 last I checked.
I am absolutely not surprised.
But I would check the N31 and N33 uplifts separately, because N31 is an MCM design with different specs, while N33, like N23, is monolithic.
The RX 7600 was leaked as the RX 7600 XT; it was only at AMD's announcement that we learned it's just the RX 7600. That was AMD's first step to make it look good; cutting the price by $30 during the review embargo was the second step to make it acceptable value.
But we know new products usually launch at worse value by some margin. It would not be smart to let the retail chain bleed, if they live on low margins (they got very good ones lately, I guess... xd), and leave stock of old GPUs to be sold at a loss. If there are any RX 6600 XT/6650 XT cards around, I would not be worried about bad budget GPU value. When those dry up on the shelves, I would check again whether the RX 7600 has become a decent replacement.
(Nvidia leaves Ampere at a good value point too, by other methods... renaming, price increases.)
Good to remember that the 6nm node brings no performance, just some efficiency, over 7nm. AMD added just ~2 billion transistors to N23 (the AI units), maybe some cache tweaks, and the specs also list 0.5Gbps faster memory... which is not much, and RDNA probably doesn't benefit that much from faster memory anyway (see the quick bandwidth math below).
I think going 6nm was mainly about cost savings (smaller die size vs N23!!!) and also about using the 6nm node for other products... So AMD is probably going for the best production cost and only that, getting quantity discounts.
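On that memory point, the bandwidth delta really is small. A quick check, assuming the 128-bit bus both cards use and the commonly listed 17.5 vs 18Gbps effective speeds for the 6650 XT and 7600 (spec-sheet figures, worth double-checking):

```python
# Memory bandwidth delta from 0.5 Gbps faster GDDR6 on a 128-bit bus.
bus_bits = 128

def bandwidth_gbs(gbps_per_pin):
    return gbps_per_pin * bus_bits / 8   # GB/s

bw_6650xt = bandwidth_gbs(17.5)   # 280 GB/s
bw_7600 = bandwidth_gbs(18.0)     # 288 GB/s
print(f"{bw_6650xt:.0f} -> {bw_7600:.0f} GB/s "
      f"(+{(bw_7600 / bw_6650xt - 1) * 100:.1f}%)")  # ~+2.9%
```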
Idk if it allows it, but you could try using MorePowerTool to lower the 7600's memory speed. Idk if it can actually do that, though, since I haven't used it.
You cannot compare compute units of RDNA 2 vs RDNA 3, as they are totally different. This is simply a flawed comparison, as CUs in RDNA 3 are "dual issue", i.e. they can process two instructions simultaneously, which means a CU-for-CU comparison against RDNA2 doesn't hold up. It's a very big headache to utilize for every game/app out there.
You can compare by die size to check what they did with the same area (regardless of naming/segmentation),
even if the process node is slightly different, 6nm vs 7nm. But for Nvidia the process node difference was huge, Samsung 8nm vs TSMC 4nm, and you can see how they got their huge efficiency gains.
If they were completely different, wouldn't AMD have done a better job of making that distinction this generation? What do you suggest for an unflawed comparison?
@@ImaMac-PC As I said: a same-die-size comparison.
At the micro-architectural level, RDNA3 is a direct evolution of RDNA2. Dual-issue was a cheap approach by AMD to double the FLOPs rate that, ironically, works better in Wave64 mode than in the more economical Wave32. Compilers are terrible at vectorizing/packing operations together, so manual optimization will be the order of the day for RDNA3.
@@Ivan-pr7ku Thx for tech details.
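To give a feel for why compilers struggle with this, here is a toy sketch of greedy instruction pairing, where two adjacent ops can only dual-issue if the second doesn't depend on the first. Real VOPD encoding has far stricter operand and register-bank rules; this models only the data-dependence part:

```python
# Toy illustration of dual-issue packing: pair adjacent independent ops.
# Real RDNA3 VOPD has much stricter operand/register-bank rules; this
# only models the data-dependence part of the problem.
ops = [
    ("v0", {"a", "b"}),    # v0 = f(a, b)
    ("v1", {"v0"}),        # v1 = f(v0)      depends on previous -> no pair
    ("v2", {"c"}),         # v2 = f(c)       independent of v1   -> pairs
    ("v3", {"v1", "v2"}),  # v3 = f(v1, v2)  depends on both     -> no pair
]

issued, i = [], 0
while i < len(ops):
    if i + 1 < len(ops) and ops[i][0] not in ops[i + 1][1]:
        issued.append((ops[i][0], ops[i + 1][0]))  # dual-issued pair
        i += 2
    else:
        issued.append((ops[i][0],))                # single issue
        i += 1

print(issued)  # [('v0',), ('v1', 'v2'), ('v3',)] -> 3 slots, not the ideal 2
```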
Interesting followup. Thanks.
Thank you!
Could you do a similar comparison but for Ada Lovelace?
This is the first intelligent analysis of performance I have seen yet. Most reviewers have no clue. The performance difference between generations is small because design changes are incremental; most gains come from node density. The smaller the transistor, the lower the power needed for it to operate, allowing higher clocks at the same TDP. But scaling up cooling allows for higher TDP as well, so if they can't scale smaller they can just increase power and use bigger coolers. It's a race to the bottom now, as scaling has hit the wall and acceptable power limits have been reached. Even complete redesigns will bring little benefit from now on.
The best thing AMD can do is reduce power spikes through more even distribution of compute, but that must be a challenge with chiplets. Nvidia may fall behind in the next generation if they can't scale, but I believe they have been working on chiplet designs for some time, and they already have a much more energy-efficient die. AMD is constantly playing catch-up at this point. But I think they are happy to be in second place for now.
So, N33 is an updated RDNA2 6650 XT with less power draw that's cheaper to make. Only N31 and N32 have the new architecture.
obviously
Hi, I know I'm super late, but can you share the most efficient undervolt settings you've achieved on the RX 7600? So we can copy them in AMD Adrenalin.
Excellent analysis!
Thanks!
RDNA3's smaller-but-faster cache is the right direction. For GPUs, increasing cache doesn't help as much as it does for CPUs. Think of it this way: a CPU frequently uses only 1-10% of available RAM, which can be kept in cache for a speedup, while a GPU scans most of its memory multiple times during a frame because the data is much bigger, i.e. textures, vertex data, lighting data, etc. So cache only helps if the prefetching algorithms are super accurate, which is not an easy problem and is the domain of CPU architects, which AMD is also good at.
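As a rough illustration of that working-set argument, even a simple 4K G-buffer pass touches more data per frame than Navi 33's 32MB Infinity Cache can hold. The render-target layout below is a hypothetical example, not any specific engine:

```python
# Rough per-frame working-set estimate for a 4K deferred renderer.
# The G-buffer layout below is a made-up illustrative example.
width, height = 3840, 2160
render_targets_bytes_per_px = [
    8,   # albedo + roughness (e.g. packed RGBA16F)
    8,   # normals
    4,   # depth
    8,   # HDR color
]
gbuffer_mb = width * height * sum(render_targets_bytes_per_px) / 2**20
infinity_cache_mb = 32   # RX 7600 / Navi 33

print(f"G-buffer alone: ~{gbuffer_mb:.0f} MB vs {infinity_cache_mb} MB cache")
# ~221 MB before any textures or vertex data -> the cache can only
# capture reuse within a pass, not hold a whole frame's data.
```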
I feel like RDNA3 is a Zen 1 moment: a little lackluster, but when they get the chiplets worked out in a few gens, like with Zen 3, it'll be pretty good. Hopefully. Time will tell.
Would agree. It's the fine wine effect.
Agreed. Modular chiplets that stack into whatever combo is needed would be interesting indeed. A guest on MLID a while back was talking about how Nvidia would have to use chiplets eventually; they would hit a wall with monolithic dies and the price of silicon.
Except Zen 1 still managed to offer value good enough to get people's interest, not this "slot in just under the competition" trash.
@@WesternHypernormalization If it wasn't for AMD and their innovation, you'd still be on a four-core Intel chip lol. You miss the point, and your comment is BAD value.
@@Djent_Lover Improve your reading comprehension first, fanboy. I wasn't trashing Zen 1; it was good. RDNA3 is trash and nothing close to Zen 1.
I think most of the comparison is unfair to RDNA 3, because of one very important technology that could boost the performance of the RDNA 3 architecture. I hope you will review RDNA 3 again when FSR 3 comes out and check its performance when it uses WMMA instructions to access its AI Accelerators. FSR 3 will have a feature that supports temporal upscaling for generations older than RDNA 3, plus a WMMA (RDNA 3, BF16 data type) upscaling path that uses the AI accelerators to process the upscaling; that approach would be similar to DLSS.
RDNA 3 is really just an RDNA 2.5... the entire RX 7000 series lineup is a joke.
I don't think even AMD was expecting more than a couple percent in IPC gains. What they were expecting was 3000-3200MHz clocks at this power usage.
What about the power consumption? 🤔
Full breakdown in my last video.
I don't get where all the increased transistor budget went. Dual-issue CUs, but for what? RT not improved. The 54% perf per watt is nonexistent; it's closer to 0-5%. They claimed (not me) on an official slide a 15% clock-for-clock improvement per CU, which is nonexistent. So far, the RDNA3 lineup is horrible.
From what I read, software needs to be specifically optimized/designed around dual-issue CUs to use them. One game in GN's test was actually nearly 20% faster on the 7600 than the 6650 XT.
@@shanez1215 Could be the case, I don't know. Even if it's 20% faster there, that's still way off from the 54% perf/watt.
@@shanez1215 Nobody will recompile old games to do AMD a favor.
Thx for testing!
It seems RDNA3 doesn't really scale with clock speed either. The only reason the 7900 series is faster is the much higher CU count. Imagine if Nvidia had named and priced Ada appropriately: a 7900 XT would be competing against a $399 4060 Ti (the former 4070 Ti), and the XTX would be on par with a 4070 (Ti) (the former 4080) for $599. The only reason AMD came close last gen was because Nvidia was one node behind on Samsung 8nm. Now they are on equal terms, and Nvidia is crushing AMD once again. Hopefully Intel will be competitive one day.
Thanks. Yes, only the higher CU count helps AMD this generation. If Intel can execute and release Battlemage next year, it may be a wake-up call for AMD Radeon.
AMD was more focused on the chiplet strategy for RDNA3 than on the actual micro-architecture. By the way, Navi33 doesn't implement the extra 50% increase in vector registers found in Navi31. This further constrains the "IPC" boost compared to the previous generation. Ray tracing is particularly affected by this, since RDNA3 doesn't include any hardware acceleration for BVH traversal.
What is your prediction for Navi 32 vs the 6800/6800 XT? Will they call it the 7800 or 7800 XT?
I'll have more to say about Navi 32 in an upcoming video. As for the naming, AMD (and NVIDIA) has taught us that their naming convention is completely arbitrary and designed to extract either maximum dollars or minimum embarrassment.
I'm currently so unhappy with my RX 7600. The game I'm playing keeps freezing even with reinstalled drivers and stuff, and restarting multiple times doesn't do anything (World War Z).
I see the point you are making. I would say the lower end is not a big jump, but I had a 6950 XT, and with my OC it would hit 2.9GHz and was a beast; now I have a 7900 XTX. I will say my 7900 XTX in most cases destroys my older 6950 XT. Yeah, it has more cores and VRAM, but I don't care why it's faster; all I care about is that it is. For example, in Remnant 2 (I know it's not optimized well), my 6950 XT runs at a lower framerate at 1440p than my 7900 XTX does at 4K, both using upscaling.
I think this is the 1st channel I've ever seen criticizing AMD products. Finally, someone with principles.
"game clock is not well defined"
uh, yes it is? It's the clock that the card runs at under a "3D" workload. Typically there are three clock speeds: "idle" for desktop and browsing, "2D" for video, and "3D" for graphics. Very well defined. Compare that to boost clocks, which only kick in under certain circumstances for vanishingly short periods of time; those are the ones that are hard to define.
So game clock is defined for a 3D workload? Not a game? I would love a reference for that definition if you have one. Thank you!
@@ImaMac-PC Any program which accesses the 3D functions will trigger the 3D clocks; it's not complex.
thx for the vid!
Why not compare the high-end cards? I think the RDNA3 cards come with some sort of AI cores which aren't there on the RDNA2 cards. These AI cores may help in the future for upscaling… maybeeee
Got a 6700 XT, overclocked it to 2750MHz at 1070mV :) power limit 110%... I am happy... only needed to cook the card on FurMark to get rid of the coil whine. 48 hrs later, it's dead silent.
Very nice! And with that 12GB of VRAM, you should be set for a while.
This should spell total disaster for the RX 7800 XT then, with only 60 compute units vs the 72 compute units in the 6800 XT.
HWUB did test the 7900 GRE (80 CUs), and it's like a good 6800 XT OC or a stock 6900 XT lol
@@zdenkakoren6660 Yeah, I've seen that. The 7800 XT with only 60 CUs should get demolished by the 6800 XT then. Game clock is like 2.6GHz, I think. I can get that with my 6800 XT and an overclock.
@zdenkakoren6660 ComputerBase tested one as well and found it trades blows with a 6950 XT.
I'm not sure why HWU's scored lower; maybe their 6950 XT was an OC'd AIB card and ComputerBase's wasn't? Or maybe differently branded MBA cards have different v/f targets.
I will say that it's a bit unfair to assume that the 54% performance-per-watt figure applies to ALL RDNA3 GPUs.
Whenever a GPU maker uses a graph like that, it's the new flagship SKU vs the old top SKU. As you go down the stack, the configuration changes.
In general, the more expensive cards tend to be more efficient because their performance comes more from a larger die with more CUs than from high clock speeds. For example, if you cap the framerate of a game to something a 4070 Ti can hit, the 4080 and 4090 will consume less power.
Of course, some generations have the top SKU go overkill and sacrifice that efficiency.
What you're doing is sort of like saying Nvidia is lying about Ada's efficiency because the 4060 Ti only uses 18-22% less power than the 3060 Ti while being only 5% faster. Meanwhile the 4090 is something like 50-70+% more efficient than the 3090, and those are the numbers Nvidia gave.
For one thing, you're testing a card that doesn't even have RDNA3's main structural change (chiplets). It's like saying Turing has unusable RT after testing a GTX 1660 Super. The 7900 GRE, on the other hand, is basically a 6950 XT (according to ComputerBase) but uses 270W instead of 335W.
That being said, RDNA3 is disappointing.
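To make the perf-per-watt point concrete, here is a minimal Python sketch using the rough ratios quoted in this thread; treat them as assumptions, not measurements.

# Quick check of the perf/watt arithmetic, using the rough figures quoted
# above (assumed, not measured).

def perf_per_watt_gain(perf_ratio: float, power_ratio: float) -> float:
    """Relative efficiency change: (new perf / old perf) / (new power / old power) - 1."""
    return perf_ratio / power_ratio - 1.0

# 4060 Ti vs 3060 Ti per the comment: ~1.05x perf at ~0.80x power
print(f"{perf_per_watt_gain(1.05, 0.80):.0%}")   # ~31% better perf/watt

# 7600 vs 6650 XT per this thread: ~1.03x perf at similar power
print(f"{perf_per_watt_gain(1.03, 1.00):.0%}")   # ~3%, nowhere near the claimed 54%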
My AMD card just died (HD 7850 1 GB xD). Thanks for a great introduction to the world of current cards.
8:37 Shadow of the benchmarks - LOL
As rumoured, next gen won't have a high-end GPU, which is fair enough if the rest is competitive. But the point is AMD doesn't need to be just competitive; they have to be much more than that. People aren't even buying Nvidia at this point, so if AMD does similar things just to remain competitive, then why not just buy Nvidia? They have to offer what a real next gen should be if they really want market share. It looks like they are more focused on servers and AI, as they have already disclosed the MI400 GPUs in the works.
It's too bad AMD execs don't understand this. I think what we don't understand is why AMD doesn't care about market share. We expect them to care, but their actions say they don't.
To be fair, you need to look at the silicon budget. The 7900 XTX is nowhere near the 4090's amount of silicon. That doesn't mean RDNA3 is a bad architecture.
You kinda should have compared the RDNA2 OC with stock RDNA3, and the RDNA3 UC with stock RDNA2. The way you compared them doesn't make sense for what you wanted to show... instead, you compared an OC to a UC, which is comparing higher clocks to lower clocks, just like the stock-to-stock comparison is.
10:25 30% faster at 720p? The difference is 48%.
i love your videos!
Thanks!
The way I see it, RDNA 3 is a much better alternative than RDNA 2 if you don't have a GPU or have an old one, since RDNA3 has much better and more stable prices at the moment while also providing newer software, and it could gain performance with future updates, as AMD has proven before.
Yep, exactly my case. A week ago my 11-year-old card died, so it makes sense for me to buy the 7600, especially for AV1 support.
I hate when people compare newer-gen cards to older hardware only by performance; these cards are getting more expensive because they have more features (DLSS 3 or AV1 encoding, for example).
The RX 7600 with the same specs is just 3% faster than the 6650 XT, just because it has 3% faster clocks, not to mention that the 7600 has 20% more transistors.
Now the best part... the 7900 GRE, with the same specs as the 6900 XT, is about as fast, and is slower than the 6950 XT. And the funny thing is that the 7900 GRE has twice as many transistors.
RDNA 3 is a total failure in terms of performance and performance per watt after 3 years of development and on a smaller TSMC process.
With an R5 5600, I'm not seeing a huge difference in comparison to a 6650 XT, on a B550 board with 32GB of 4000MHz DDR4, running on a 2TB NVMe 4.0 drive.
And sadly, you won't. This was not the upgrade generation.
Yeah, I'm building them to sell. I've switched to using the 7600 on budget-to-mid builds because it's only $10 more than a 6650 XT. But on entry-level it's still the 12100F/6600.
The 7900 isn't having its AI-enhanced features used by any ray tracing method or upscaling yet.
Why is that?
@@GreyDeathVaccine Apparently AMD is working on software to leverage them for ray tracing.
The clock speed is a feature of the architecture. You are comparing apples to oranges
Let's make a hypothetical example: a GPU upgrade costs 300 USD and the new GPU saves 70W over the old one. The most expensive electricity country is Belgium, at 0.39 USD per kWh.
Such conditions call for using at least a Gold-rated PSU with 90 percent efficiency.
So 70W comes out to about 0.078kW at the outlet for 1 hour of full load (gaming), which is 0.078 x 0.39 USD ≈ 0.030 USD/h in savings.
The point of investment return is at 300/0.030 ≈ 10,000 hours of gaming.
If we consider this a generational 2-year upgrade, the upgrader has to play almost 14 hours per day. A nice two-shift duty to recoup the investment in a greedy GPU company 😅.
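A minimal Python sketch of that payback math, using the same assumptions (300 USD upgrade, 70W saved, 0.39 USD/kWh, 90% PSU efficiency):

upgrade_cost_usd = 300.0
watts_saved = 70.0
price_per_kwh = 0.39
psu_efficiency = 0.90

kw_at_outlet = watts_saved / psu_efficiency / 1000     # ~0.078 kW at the wall
savings_per_hour = kw_at_outlet * price_per_kwh        # ~0.030 USD/h
hours_to_break_even = upgrade_cost_usd / savings_per_hour

print(f"{hours_to_break_even:,.0f} h to break even")               # ~9,890 hours
print(f"{hours_to_break_even / 730:.1f} h/day over 2 years")       # ~13.5 h/day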
If electricity costs were that serious for gaming, then I would be investing in solar panels and batteries and just buy a laptop.
So the 3 to 5% improvement is because of the increase in wattage from 132 to 165 watts. I think AMD wasted 2 years trying to make chiplets and Infinity Fabric work in RDNA3 instead of working on performance improvements. RDNA3 is just RDNA2 relabelled and built on a smaller fab process. Hope AMD finds its answers on chiplet integration with RDNA4, at least.
5:51 Please just use MSI Afterburner
good content
Thanks!
I see what happened: they are giving you an XT-class card for less and releasing a $300 XT card.
What if RDNA3 is indeed CDNA3 with video output?🙃
The processing power of the RDNA3 7900 XTX in big workloads is faster than the CDNA2 MI250 and almost twice as fast as the 6900 XT, according to AMD's results published in the GPUOpen "AMD lab notes" finite difference docs (Laplacian, part 4). A comparison of the 6650 XT vs the 7600 may be interesting, to see if compute and maths make a difference... 1% to 10% more FPS is not an expected result for a +100% TFLOPS card, according to the official specs...
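A rough roofline sketch shows why +100% peak TFLOPS can yield almost no extra FPS when the workload is bandwidth-bound. The specs below are approximate published figures and the arithmetic intensity is an assumed, illustrative value for a finite-difference stencil, so treat this as a sketch, not a measurement.

def attainable_tflops(peak_tflops: float, bandwidth_gbs: float, flops_per_byte: float) -> float:
    """Classic roofline: min(compute roof, memory roof)."""
    return min(peak_tflops, bandwidth_gbs * flops_per_byte / 1000)

bandwidth = 288.0   # GB/s, roughly both the RX 6650 XT and RX 7600 (128-bit GDDR6)
intensity = 4.0     # FLOP/byte, assumed for a finite-difference stencil

print(attainable_tflops(10.8, bandwidth, intensity))   # 6650 XT: ~1.15 TFLOPS
print(attainable_tflops(21.5, bandwidth, intensity))   # 7600:    ~1.15 TFLOPS
# Both land on the memory roof, so the 7600's doubled (dual-issue) peak FLOPS barely shows up.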
In games like Portal RTX there's a major difference: the RX 7600 gets 30fps at 1080p with performance upscaling; RDNA2 couldn't do that.
I've been thinking this for months, that all of AMD's claims of efficiency in RDNA 3 are bogus. Why isn't this covered more by the big reviewers?
That's a great question.
$110 less than the 6600 XT. I don't see your argument.
I love it: 2 RAM chips, 2 cache chips for a 7700 XT / 1 RAM chip, 1 cache chip for a 7500 XT
AMD should have upped the memory speed to GDDR6X.
0:57 "most advanced gaming architecture" LMAO
In technological terms, chiplet GPUs (RDNA 3) for gamers are the "World's Most Advanced Gaming Architecture," but nowhere does that line explicitly mention real-world performance. That's marketing. AMD tried to lie their way out of this hype-train wreck and failed miserably.
AMD buried PhysX, G-Sync (remember your old god?), HairWorks, "dressFX"... tell me who is technologically superior, with your mountain of dead and unupdated techs.
It looks like RDNA3 is the Bulldozer of GPUs.
RDNA3 is such a disappointment. AMD really needs to get their GPU house in order or lose relevance totally.
I have RDNA2, an XFX Merc Black RX 6750 XT, and I won't buy anything until at least RDNA5.
RDNA4 probably won't have any high-end SKUs, which is sad because they are wasting their opportunity to bring us a good RX 8800.
If RDNA5 is priced correctly and gets us a 9800 die with 4090 raw performance (without upscalers on) at 550€, I will consider it.
Edit: Intel Arc 3rd gen might also be the way to go, and Nvidia is already dead to me. I was a long-time green team owner, but they became the Apple of PC components, so I must buy something else 😂
I can't stand monopolists as well. Fcuk Nvidia.