Buddy in all the years I've been watching you, you've always been getting over a cold/sickness. Are you okay?
Yeah, there've been a few people who've mentioned this in the comments over the years, but he never responds. Always looks as rough as a badger's arse
@@chrisgrimes325 Roids
Agreed, highly concerning.
People's immune systems are weird. I'm super healthy in the sense that I get colds etc. extremely infrequently, maybe once in five years or even less often. The last time I was really sick was 2016 or something. There have been a few cases where I've thrown up, run a slightly elevated temperature or had headaches, but these have passed within 24 hours.
On the other hand, I have hypertension (elevated blood pressure) that I'm in the process of finding the proper medication for with my doctor.
@@Raivo_K yeah same, maybe I feel shitty now and then but not ill like Mr badger's arse here lol, he's always ill, looking drastic or generally unhealthy
After AMD's RDNA 3 claims, I'll believe it when I see it.
AMD is making new GPUs with AI upscaling in mind. PS5 Pro is somewhat of a precursor to their upcoming architecture. Mark Cerny hinted at this during his presentation.
I think Nvidia's decision to only supply the 5070 with 12gb of memory might be AMD's saving grace. They will need a combination of both releasing the 8800XT in early January (to give them a one month lead on the 5070), and a low price (500 max) to do any sort of damage sales-wise. The success of RDNA 4 depends on how Nvidia prices its 5070 and whether or not the rumours of a 16gb model are true.
That's impossible. The 7900 GRE is already higher than $500, and a $500 8800 XT is not gonna happen. At best it will be $600, and you'll need to wait a few months for discounts.
Given the memory bus, the 5070 will either come out with 12GB of VRAM, or 18GB if the 3GB modules are available before launch.
This is a perfect example of Nvidia and AMD working together...in case anyone was in any doubt. Nvidia could very, very easily close all doors for AMD if they wanted. Remember too, AMD is only (cough) 'competing' in the low to medium end - Nvidia needs to leave them space ;)
@@igm1571 The 7900 GRE is irrelevant. It's a last-gen card with limited stock left. If AMD want to be competitive, they'll need to price the 8800XT competitively. $600+ isn't competitive, as even with less memory most people will opt for a 5070 instead. $550 is probably the sweet spot for AMD, but $500-525 would give them a huge edge on the 5070, which I expect will go for about $600-650.
@@AnGhaeilge $500 would be a must-buy for the 8800 XT
FSR4/AI upscaling, about damn time. Now AMD needs equivalent tech to Nvidia's Ray Reconstruction and RTX HDR. Hopefully, it won't take years.
I use AMD, but I'll say it straight: AMD will never be as good as Nvidia in ray tracing until they choose to go hardware-based ray tracing. That's the truth.
Except nobody cares about upscaling and rt.
@@XeqtrM1 The 7000 series already is hardware-based, it's just about as bad as Nvidia's 2000 series GPUs.
@@igm1571 True that. I'm not happy with my 7900 XTX, so I'm moving to team green and getting a 5090.
@@igm1571 They're too far behind, so they probably won't catch up.
I am SUPER pumped for RDNA 4. If it's anything like Polaris it could be HUGEEEE. I also hope they have an unlocked BIOS like RDNA 2 and not a locked-down one like RDNA 3.
Gotta take anything RDNA4 with a massive grain of salt, based on what we already know about it not being able to compete with the higher-end RTX 50 series cards.
You should take any leak with a grain of salt. Just, period.
But in this case not because of the reasons you mentioned. RDNA4 focusing on the midrange and RDNA4 having big boosts in RT and FSR performance are not mutually exclusive. Saying the leaks about its RT/FSR performance should be taken with a massive grain of salt (which implies they are most likely not true) just because RDNA4 focuses on the midrange is a logical fallacy.
My GPU can be decent at 1440p but absolute shit at 4k simply because it does not have enough raw power. It can also be really good in RT scenarios in 1440p. Multiple things can be true at once.
Focusing on the midrange leaves no room for innovation. How can a company innovate when they don't push themselves to fight the competition? Let's see what will come from it...
@@ivofixzone6410 It's not about that, not like you said. They will focus on what sells. We all know MOST Nvidia users buy the 4060 and 4070, and that's what AMD is targeting.
AND by not doing big chips, they reduce the project time and the number of different "blueprints", so they need FEWER CONTRACTS with TSMC for fewer kinds of chips. Since every chip you print has a MINIMUM QUANTITY for the project to be profitable for both TSMC AND AMD, reducing the number of chip designs reduces the money needed to put a GPU on the market, so they can order MORE of the same few chips and REDUCE THE COST of those chips even further.
We all know what they need to show: better dedicated ray tracing cores + some AI cores, with FSR4 + frame generation all done by dedicated AI cores so it doesn't take performance away from raster, leading to less of a performance drop when ray tracing and running FSR.
The RDNA strategy was to make a BEAST GPU with HUGE FP16 and FP32 capabilities (and it is), ELIMINATING the need for specific AI CORES, BUT the shader cores had to do so many things that even those monster numbers can't cope without a huge performance drop. It was enough to keep up with the RTX 3000 series, but Lovelace brought a new level of game.
Lovelace was weak in rasterization but a MONSTER when it comes to AI/tensor cores and RT cores, plus top-notch DLSS + FRAME GEN software supported by hardware.
So even a weak 4060 could do A LOT when the other cores come into action.
@@ivofixzone6410 Even if they innovate and push, people will go for "tried and true", so they'll again be left with unsold stock because "DLSS and Nvidia RT performance are worth the premium". We can also go with the old classic "AMD drivers bad", which happens about as often as it does with Nvidia... There's no reason to push for the high end when they own less than 10% of the market; everyone will either go for the halo Nvidia product, or even if they managed to get ahead, there would be a lot of speculation about what's wrong.
First their reputation needs to improve and they need to be visible, and they need to get devs to do a proper job of implementing their tech. Look at Cyberpunk and the atrocious FSR that's in there... only the PS5 ports have good FSR implementations.
@@ivofixzone6410 Koenigsegg cars may be some of the fastest in the world, but car innovation exists independently of their existence. What most engineers actually focus on isn't the absolute highest-performing bleeding-edge, but making cars more efficient for a price that real-world people will buy them for. That's innovation in its own right.
If AMD can implement a deep learning AA in FSR 4 that's anything like Nvidia DLAA, I will be incredibly happy. DLAA in the two games I've used it in (Deep Rock Galactic and Diablo 4) is PERFECT. The shimmering of diagonal edges is gone and the sharpness is basically perfect, with no temporal ghosting. It should be the gold standard target for AA.
AMD already has FSR Native AA, but it's not AI accelerated. Still, I actually prefer it over DLAA because of the built-in sharpening filter. DLAA by default is too soft and this annoys me.
ROCK AND STONE!
I used DLSS Tweaks to get DLAA working in Nioh 2, and boy, what an improvement!
Yeah, current FSR is absolutely terrible.
Will the 7900 XTX get FSR4?
When we finally get an upgrade that touts that it's so powerful it doesn't need upscaling, I'll be impressed. Until then they need to calm down on the arrogance and the prices.
WMMA = Wave Matrix Multiply Accumulate. It's a hardware-accelerated matrix multiplication feature that can be exploited on RDNA3 for machine learning purposes. In that area BF16 is extensively used, and RDNA2 and older generations can't perform BF16 operations because they have no AI accelerators (you can see this at 7:32). Since AI upscaling is basically a machine learning application, it will be heavily dependent on BF16. If FSR4 requires hardware such as RDNA3 and newer, this is the reason why: RDNA2 and older can't do machine learning because they have no AI accelerators.
All I want is for Navi 48 laptop chips to come out as soon as possible, and for Asus to put the 8800/8900M XT, or whatever it's gonna be called, with 16GB VRAM in their AMD Zen 5 version of the Zephyrus G16 OLED, so I can finally retire my aging Zephyrus M (i7 8750H/GTX 1070).
Exactly. The laptop 4070 is dogshit with just 8GB VRAM, and we can't even get the new G16 with anything better. The laptop 4080 is overpriced for only having 12GB VRAM, and the 4090 is even more overpriced; Nvidia probably charges desktop-4090 money for it when it's just an underclocked 4080 (not even a Super). If N48 does end up performing similar to a 4080 at just 250-260W, it would be smart for AMD to go for the laptop market; a 125-175W 8800/8900M XT should be faster than a laptop 4090, since the desktop 4080 uses over 300W and should be less efficient than N48. With the 5080 using so much more power to achieve higher performance, the laptop 5080 and 5090 probably won't be much more efficient (other than GDDR7), because they can just clock lower than desktop to reduce power while still being faster than last gen.
Unless RDNA4 finds a way to be astonishingly disappointing, I’m essentially certain to be getting whatever the “top” GPU is to replace the 7800XT in my mini-ITX build. Preferably N48 is at least close to even with the 7900XT in raster.
7900XT raster is the bare minimum the top Navi 48 has to achieve, or it's a total flop, but I don't believe it will underperform.
No news on whether the 7900 series will get FSR4 support?
I would like to know as well.
Paul, you gotta hold off on “basically”. Love the show.
Crossing fingers AMD delivers this time around. I'm holding back on upgrading until Jan.
Sub-$400 next-gen 256-bit, like they actually do a Polaris 2.0, and I'd definitely buy.
If RDNA4 is just a few months away doesn't that mean Strix Halo should be RDNA4 and not RDNA3.5?
I am not sure if they will call it FSR 4 now that AI is on board.
I think it would be a great time for a rebranding.
I'm gonna be cautious and hope that the 8800XT manages at least 7900XT performance.
If the 8800XT is 7900XT raster, 4080 ray tracing, and $600, I'll be buying it. If it's 7900GRE raster, 4070 Super ray tracing, and $800, why the heck would you?
The best comparison will be ray tracing: 7900 XT vs 3080 vs 4080 vs whatever AMD's highest RDNA 4 GPU is.
IF AMD prices the 8800 XT at $499, it's right around the performance of the 4080, they've actually caught up in ray tracing, and FSR 4 turns out to be a worthy competitor to DLSS, then it could be a good card. BUT... will AMD price it right? They say they're focusing on midrange, and anything over $499 would be more than midrange pricing. I just don't think AMD will price it right, and the performance won't be as good as we hope. What's the bet it comes out at $599, right around the 5070's pricing? It will just be another repeat; AMD keeps fumbling when it comes to GPU pricing.
Is it stupid that I want to upgrade from the 7900 XTX to the new one for the ray tracing performance, but I don't want Nvidia because they're greedy MFs?
No, it's not.
Yes. Personally I'll be extremely surprised if there are any meaningful RT performance improvements over the 7900 XTX, considering how lackluster all the announced PS5 Pro game upgrades have been so far and the fact that RDNA4 will be generally significantly slower.
Yes you would be. AMD said themselves they are going to focus more on the midrange instead of a flagship card. They are focusing more on sales through volume and leaving the best to NVIDIA because there were issues with the 8900XTX they were attempting to make. I don't believe anything this coming generation in Q1 2025 will top the XTX in terms of raw performance.
If you care about ray tracing, you shouldn't have chosen AMD in the first place. Maybe the 8000 series will be better, but I'm not 100% sure.
That depends on how good RDNA 4 RT performance is, and to a lesser extent what your target resolution is.
RDNA 4 would need about a 50%-100% improvement in RT to match the 4080 (depending on the game and the level of RT/PT implemented). That would be an impressive and "significant" increase, but even 4080 RT performance isn't always sufficient for smooth gameplay. I have a 4090, and newer titles with demanding RT (Wukong, Alan Wake 2) can't sustain 30fps at 4k native. In these titles I have to use DLSS Performance (1080p internal render; the render-resolution math is sketched below) to hit 60fps, which means on AMD you'd also have to factor in the reduced image quality of FSR (particularly when upscaling to 4k from a sub-1440p base render resolution).
In a best case scenario where AMD pulls off a massive gen on gen improvement, it's still almost a full gen behind. I think it's highly unlikely that RDNA 4 will match the 5070 in RT at lower resolutions, and at 4k the 40 series is already feeling inadequate.
Really the only game in town for 4k RT is Nvidia, and you have to pay up for a seat at the table. Maybe AMD pulls out a 150%+ rabbit out of a hat, but I wouldn't hold my breath.
That said, I hope RDNA 4 delivers great value at the sub $500 tier, AMD can claw back some market share, and RDNA 5 can truly compete with Nvidia.
GTX 1080 will do you for many years to come! And has done so already!
GTX 1080 baby!
It should be max $600 for the flagship and $300 for the low end.
I'm jumping back to team green. Old GDDR6, plus it'll take 6 years to get FSR 4 into 3 games.
I hope for a huge increase in raster as well.
Look at the HW Unboxed ray tracing video and maybe you'll understand what I mean.
How does anybody get sick that often?
A.I.D.S......probably A.I.D.S.
Poor diet
I think he's vegan, probably low iron.
@@stevensmith6445 lol that's why he's so white, his white blood cell count is through the roof from HIV.
Sounds like we need a 12GB N44. Perhaps they could pair N44 with the 192-bit IO/12GB from the N48. 8GB is for the extreme low end, and better RT also uses some VRAM, so it could take the mantle from the 3060 12GB. 12GB made the 3060 a classic.
AMD needs to bundle their CPUs and GPUs.
This has already been mentioned a couple weeks ago - looks like a go
It's called Strix Halo and it will appear in 6 months.
@@dgillies5420 Strix halo is an APU, it looks like what they are going to do is give discounts to laptop OEMs who want to pair Strix Point or Fire Range with N44 and N48.
Also, is FSR4 going to be released at the launch of the new cards, or just promised and released a year later?
Some YT channel did some tests using the Core Ultra 9 285 and CUDIMMs running at 8400 MT/s, and the gaming performance was pretty good. Now, that requires you to buy memory that right now is rather expensive, and you have to win the silicon lottery and get a processor and motherboard that work at 8400 MT/s or better. This is interesting, as the max speed specified by Intel for the Core Ultra processors is 6400 MT/s, if I remember correctly. Any processor that doesn't work above that speed they won't replace, as it still works at the specified speed. So I can't call that a real solution to the gaming performance of these processors.
AMD my love 😎
Unless AMD dropped TSMC and I never heard about it, that chip's a fake. It clearly says 'Made in Malaysia' on the side.
Diffused in Taiwan, made in Malaysia. The back of the AMD box states: "AMD processors are diffused and/or made in one or more of the following countries and/or regions: USA, Germany, Singapore, China, Malaysia or Taiwan."
Navi 48 is just mass-production testing of a chiplet GPU without much cost. The next one will be full size.
You are probably correct.
From what I remember it's supposed to be monolithic and not chiplet.
RX 7000 has AI cores...
Looks like AMD is also getting neural ray reconstruction tech as well.
RDNA4 might be the new RX580.
Time to save money for RDNA 5.
Would be strange for it to be chiplet, but the 8900xtx was supposed to be chiplet. Maybe they figured out that it doesn't scale well past a certain point? Unlikely but just an idea
My last information was that it didn't work out for RDNA4 like they intended, so they moved the chiplet design to RDNA5, wasn't it?
IF I were forced to guess, AMD could probably have figured out the HW aspect of things (give or take bottlenecks, that is), but I suspect that getting the drivers to work as desired was the real gotcha.
So what will be the highest-end GPU of the AMD RDNA 4 lineup?
Huge RT boosts!
Says AMD for the past 3 years
Why did they take the 5800X3D?
Do you not think Strix Halo will be upgraded from 40 compute units later on, if they can get it to run as people say? Why not try to go for 1440p with a chip like that? What do you think?
The most cool and beefed-up tech channel on YT.
Great time to be a PC gamer, when more and more games are requiring an always-online internet connection?
I wish AMD and Intel would try to compete with Nvidia's most powerful GPUs. Just competing in the low end will make them regress every generation. There needs to be stiff competition. I think to lower costs they should focus on 3 cards initially: a high end to compete with the RTX 5090, a mid tier to compete with the RTX 5070, and a low tier to compete with the RTX 5060. With the low end, if need be, make it less powerful than the RTX 5060 but also A LOT cheaper, so it will entice those who can't afford a lot but want a reasonable card compared to the previous gen. Then after some time goes by, maybe they can release other versions from chips of higher tiers that may have some flaws but can still be repurposed as mid tiers.
Hard to get excited for RDNA 4.
It would have to be good price-to-performance.
I was hoping Intel's Battlemage would be out by tax return time next year; if not, I'll get a midrange RDNA 4 AMD GPU.
I never buy nvidia.
Whatever AMD has up its sleeve, it's got Nvidia worried for a change, hence the earlier-than-normal mid and low-end RTX 50 cards. I still think AMD is using RDNA4 as a stopgap, with high-end RDNA5 coming mid-to-late 2026 and mid-low following in 2027. Then, if chiplet GPUs finally work as expected, Nvidia will need to look at chiplet gaming GPUs too.
I'd like to see a video where you comb your hair!
*I am very happy for AMD video card fans. This is great news for them. I love AMD, but for an extra $50-100 I prefer top performance per tier. So, I will stick to Nvidia.*
An extra $50-$100 😂
@@Lee-SR71ikr? 😂😂😂😂😂
@@Lee-SR71it depends which price tier it is lol
These GPUs will not compete with the RTX 5000 series, as AMD said. They will compete with Nvidia's RTX 4000 series in RT and FPS.
The new lineup should be good for content creation; AMD lost out here too.
Yay, he can hopefully focus on the AMD-side things that interest me instead of boring Ngreedia. People are gonna buy their cards no matter how rubbish they are, due to the brainwashing and all that nonsense.
Ditch OpenGL and Vulkan and move to OpenCL for all types of generic compute; the drivers are much closer to the metal. Use it for both very low-level graphics and other compute in games, like physics, AI, other simulations, and environment management. OpenGL, DirectX, and Vulkan are only for the GPU display output's special use case. All CPU/GPU/NPU/TPU/RTX/FPGA devices can be used through the OpenCL generic compute pathway: run the kernel, fetch the output buffer to the CPU, then push it to the display or whatnot.
OpenCL is the best (shader-type) compiled kernel program API there is. You just have to make sure the pipeline from the OpenCL compute device to the CPU image buffer copy is fast; it's at least 30fps at 4K currently, and most of the time is spent fetching the data from the OpenCL device and pushing it to the OpenGL/Vulkan/DirectX display API. OpenCL drivers are much easier to make than OpenGL, Vulkan, or DirectX drivers, since OpenCL is almost bare-metal C code; you could even build OpenGL drivers on top of OpenCL, mostly. OpenCL is also open source.
Generic compute avoids OpenGL/Vulkan/graphics API complications. The key issue is that the OpenCL pipeline operates completely independently from the graphics API: any DMA image buffer transfers must happen without any OpenGL interop with OpenCL, because you don't know if the device doing the OpenCL rendering is the OpenGL display device that would share memory, and devices other than the display device (including the CPU) will be doing the generic render compute too. The GPU has tons of array/matrix number-crunching capability, i.e. turbo AVX, and OpenCL runs on all devices that have OpenCL drivers, including CPUs and FPGAs, not only Vulkan/OpenGL/DirectX devices. Graphics-only APIs are just in the way, causing confusion.
Code the graphics by hand; you don't need any outside libs. Ray tracing isn't complicated either if you don't stare at the formulas but think it through on your own: you have an amount of light and need to decide what to do with it. Use at least two rays per pixel per reflection/refraction bounce, say 3 bounces, so you have 1+2+4+8 = 15 rays per pixel per frame static, with async on-demand lighting updates using ray tracing. No denoising required.
Tbh if Nvidia is downgrading all the gpu segments once again, that's the final straw for me. I'll go for rdna 4, considering they would finally have up to par ray tracing perf for productivity tasks compared to Nvidia. Rdna 3 is not bad, but it's too far behind in ray tracing
I'll take one as a short-term upgrade from my 6800XT, waiting for RDNA5!
Unless you're "pretty unhappy" with the 6800XT I'd just wait for RDNA 5 and leave out the mid-tier upgrade. Then again if the 8800XT does something important that your 6800XT can't then upgrade away ...
RDNA 4 FSR improvements sound good - getting a 16GB 8800XT for 4k gaming is sounding more and more feasible.
Ray tracing still doesn't seem very interesting, but it's probably time to get a card that has ray tracing similar to what a PS6 will have.
Everything else from the next gen below a 5080 seems like it will run into known VRAM issues.
8GB is just not enough for 1080p with ray tracing and 12GB isn't good for ray traced 1440p native or 4k DLSS/FSR quality.
Flagship 8000? Hmm, guessing $500-550.
Basically basic
Personally I'm not holding my breath for RDNA4's RT improvements, considering how lackluster so many PS5 Pro upgrades are.
Lackluster? You are a clown.
What does that have to do with rdna4?
You are a clown
@@VoldoronGaming Because the PS5 Pro's RT improvements are supposed to be ported over from RDNA4? Of course it's not certain that it's EXACTLY the same, but it seems likely, and I really have no confidence in AMD actually delivering on any goals or promises with GPUs anymore.
4070 perf with $400 cost
SwolGamingTech
You know how I see the future?
By the time AMD finally does something good with GPUs, they'll have been passed, and even Nvidia too, by a Chinese GPU company.
Huawei is now competing with Nvidia on the AI side.
They aren't as good as Nvidia now.
But wait a few years. I think after 3 generations Huawei can have a better product than Nvidia.
And then they'll go into the GPU market, or another company will.
And then they'll beat all the other companies.
3 generations of Chinese GPUs can go by very fast; in 3 years, I think.
Only interested in the CPU. As for the GPU, I'm getting a 5090!
Jesus/Trump/Crypto 2024!
Ok, but is it efficient? Seeing the power consumption turned me away from RDNA3
Any way of turning off ray tracing? I don't need it.
You simply keep it off in the settings of each game that you play. There is no setting on the card itself to turn it off.
What are those awesome games you are referring to? PC gaming is at its worst atm.
With these prices? How is it great?😂
good news
I would "hope" that the flagship RDNA 4 part if only 400$ or so but I think that the cost will be closer to the 475$-575$ range with it expected to skew towards the higher end of the range. I hope to be proven wrong -- in a good way :)
Oh man, sick again huh 🤔 lol
What and who’s rdnafree ?
So I hear so much conflicting news...
Is the next AMD GPU going to compete with the 5080 and 5090? Or will it only be a bunch of middle-of-the-pack shitty GPUs?
Also, focusing on ray tracing is the dumbest shit; it's been out for years, is barely supported in any games, and it EATS the performance of any card.
It's a waste of time AND an increase in cost for not much visual improvement.
AMD really needs to improve the stability of their drivers and Adrenalin software. This was the main reason I switched to NVIDIA. While Team Green has its own issues, I found their problems to be far less frequent than those with AMD's drivers. If AMD can fix their driver issues, I would definitely consider going back, especially since I can't afford NVIDIA anymore.
*I think the NEW Radeon "flagship" will have 24GB of GDDR6X or 7, be 4 to 8% faster in raster (than the 7900XTX) and 250% better in Gay Tracing, for a price around $649 to $699.*
And the one below that: 20GB of GDDR6X or 7, 2 to 6% faster in raster (than the 7900XT) and also 250% better in Gay Tracing, for a price of $549 to $599.
Nice attempt at trolling. It failed tho.
I don't find ray tracing important, since almost all games don't support it.
what
Intel needs to get out of the Discrete Graphics market, they can’t afford to compete and Intel is not the company that it once was. They have failed before with discrete graphics, just stick with integrated and get your CPUs to be competitive again.
Best case scenario - 7900xtx raster, 3 times raytracing performance with new upscaling. $400
Worst case scenario - 7900gre with better raytracing $500
Worst case scenario but price $899
@@AndyViant That's probably what Nvidia's targeting their RTX 5070, which is really a RTX 5060 Ti.
@@kevinerbs2778 agree the 5070 sounds like a 5060 Ti.
First thing, is that it all depends on the price. it always does.
12gb of higher spec ram for something with performance between a 4070 Ti and a 4070 Ti Super is *just* enough in today's market.
Secondly, are we comparing it to a 3060 Ti or a 4060 Ti? Because the two beasts are very different.
@@AndyViant Oh no. No you did not go there with that card. Without the DLSS & frame generation crap it's a miserable 10 to 15% above the RTX 3060 Ti. It's stagnation trash.
@@kevinerbs2778 That's exactly my point. The 3060 Ti was a good card. A huge jump from the previous generation's equivalent, the 2060 Super. (as there were no Ti's in the 2000 series except the 2080 Ti)
The 4060 Ti was trash. Pure and simple.
So the question is, if the 5070 is really a 60ti class card, will it be a meaningful generational leap, like the 3060 Ti (38 SM/4864 cores) was over the 2070 (36/2304) and 2070 Super (40/2560)?
That kind of generational jump would see the 5070, even if it's only really a 5060 Ti-like config, be MUCH faster than a 4070 Ti as a minimum.
89% more cores than the 4070 Super would be incredible: over 13,500 cores.
Even if it only had 111% more cores than the original 4070, that's over 12,400 cores.
That's by spec sheet. The real world tells us a different story, because the architecture changes matter too.
If it gave us the same 15% jump over the 4070 Super that the 3060 Ti gave us over the 2070 Super for $500 or even $600 I'd be happy with that, as that would be between 4070 Ti Super and 4080 performance.
First?
second?
Third?
@@solaireastora5394 fourth?
4 absolute sausageparty bangers 😂
I'd sell my 4090 if the RX 8800 XT turns out great 😃👍🏼