THANK YOU for doing non-gaming benchmarks.
Indeed. I wanted to see some Blender benchmarks with the 5080 and 4080 GPUs. Thanks to Hardware Canucks for that.
Agreed. Love the DaVinci stuff.
I wish more would do this
@@Chuyuck 100%! Weird that many content creators don't do content creation benchmarks. 🤷
@ The only one I'm aware of is Tech Notice.
One of the very few channels that actually _reads and responds_ to its viewers' comments. I can testify. Believe it or not, it makes a BIG difference. Some channels got big and no longer engage with their viewers. You want to engage your viewers, not disengage them! More eyeballs, more dollars!
I don't care about framerates in the games I play so long as they're higher than 60 in 4K.
What I care about is Resolve, Premiere, Unreal 5.x, Blender, and other video transcoding benchmarks. So few channels give us meaningful benchmarks in these programs, so thanks for trying to address this.
On the LLM benchmarks, I feel like the models are so small relative to what the cards are capable of that the reason you aren't seeing meaningful performance differences is simply that you need bigger models. 14B Q4 is something like 8GB, and Llama 3.1 8B is below 5 depending on the context length. A 4K context is also super tiny and not what most people would use IMO; I can use 30K+ on a baseline M1 16GB with 3.1 8B, for example. For all 16GB cards, you should consider using bigger models and fine-tuning the context length to fully utilize the VRAM, and maybe you'll start to see more differentiation (and use big prompts). Also, this workload is extremely bandwidth-sensitive, so you won't see major differences here even across generations, with the exception of the 5090, which should start to pull away, especially if you load up larger models.
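For a rough sense of those sizes, here's a back-of-the-envelope sketch. The architecture numbers (32 layers, 8 KV heads, head dim 128 for an 8B-class model) are assumptions pulled from published Llama 3.1 specs, and the ~4.5 bits/weight for Q4 is an approximation that includes quantization overhead:

```python
# Rough VRAM math for local LLM inference: quantized weights + KV cache.
# All figures are approximations for illustration, not measured values.

def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate size of quantized weights in GB."""
    return params_b * bits_per_weight / 8

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV cache size in GB (keys + values, fp16)."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# ~14B model at Q4 (~4.5 bits/weight with overhead): roughly 8 GB of weights.
print(f"14B Q4 weights: ~{weights_gb(14, 4.5):.1f} GB")

# Llama-3.1-8B-style KV cache at a small vs. a large context:
for ctx in (4_096, 32_768):
    print(f"KV cache at {ctx:>6} ctx: ~{kv_cache_gb(32, 8, 128, ctx):.2f} GB")
```

The numbers match the comment's point: on a 16GB card, an ~8GB model with a 4K context leaves roughly half the VRAM idle.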
I think even the 5090 is limited by memory bandwidth. Nvidia keeps adding more tensor cores for AI workloads, but those cores are starving for data on most workloads. Would love to see memory bandwidth on one axis of a chart and inference score on the other. My guess is the results would be perfectly linear.
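That hypothesis is easy to sanity-check with a first-order model: in the bandwidth-bound regime, every generated token streams the full weights from VRAM once, so tokens/s is roughly bandwidth divided by model size. A sketch (the bandwidths are published spec numbers; the 5 GB model size is an assumed Q4-quantized 8B model):

```python
# First-order estimate of bandwidth-bound LLM decode speed:
# tokens/s ~= memory bandwidth / bytes of weights read per token.
# Ignores compute, caching, and KV-cache traffic, so treat as an upper bound.

cards_gb_per_s = {
    "RTX 4080": 717,
    "RTX 5080": 960,
    "RTX 5090": 1792,
}
model_gb = 5.0  # assumed ~Q4 8B model

for name, bw in cards_gb_per_s.items():
    print(f"{name}: ~{bw / model_gb:.0f} tok/s ceiling")
```

If measured results track those ratios card for card, the bandwidth-bound hypothesis holds.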
Thanks for the feedback. We will absolutely look into this and we already are in some cases.
@@HardwareCanucks BTW, you should look at Ollama; it serves all the models you want in the simplest way.
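For anyone who wants to turn that into a repeatable benchmark, here's a minimal sketch against Ollama's local REST API. It assumes a default install listening on localhost:11434 and that the model was already pulled:

```python
# Time one prompt against a local Ollama server and report decode speed.
# Assumes Ollama is running with default settings and the model is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Explain why memory bandwidth matters for LLM inference.",
        "stream": False,
    },
    timeout=300,
)
data = resp.json()
# The API reports eval_count (tokens generated) and eval_duration (ns).
tok_per_s = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"~{tok_per_s:.1f} tokens/s")
```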
I'm sure the scalpers enjoyed these 5000 series reviews, because the average-as-fuck Joe isn't gonna be able to buy one.
lol love that seemingly unnecessary "as fuck" addition
Welp, this doesn't feel like a next gen at all. Looking forward to seeing how AMD will perform with the RX 9070 XT.
Best reviews of 50-series online IMO
Considering how difficult it will be to get a 5080 for the foreseeable future, and that the partner models will probably start at $1,200 and above, I feel pretty good about getting an original 4080 at a discount (for $1k) in late 2023... More than a year later, I can sit pretty knowing I've already been enjoying basically the same level of performance as a brand new "80 tier" card all this time.
The only real saving grace for the 5080 (at least the FE model) is its reasonable size compared to the ridiculous 3- and 4-slot models of the 4000 series, which is a real advantage for people looking to build smaller form factor systems.
Wow, this might be one of the best compact yet in-depth reviews of the 5090! An advantage of posting later than everyone else is that you can rule out the issues other reviews ran into.
Respect, Hardware Canucks team.
It will hurt views-wise but I think we made the right decision here. We will push the RTX 5070 Ti content out AT LAUNCH since the process is now ingrained in our systems.
I really enjoyed the more contemplative perspective of this review. You touched on some of the newer features and covered use cases that tend to get overlooked by most day one reviews, but were also able to more realistically frame it from a performance standpoint. Thank you for also mentioning the Gen4 riser aspect, something that a lot of other outlets didn't point out.
This really is a quality review, and more informative in a lot of ways than others I've seen. Hardware Canucks in general has really improved over the years to offer quality that's simply unmatched.
Outstanding work! Great and complete review! Thank you for the creator benchmarks! Cheers!
Worth the wait. Especially big thank you for running creators and non gaming workflows in depth as you did.
I really appreciate you doing the extra work to help people understand how a GPU has uses far beyond gaming. Bravo.
That's one of the most cleverly presented reviews. Even before you said it, I was like, "hey, these guys might have shares in NVIDIA," but before concluding I was like, "hold on, I can see the objectivity here."
Starting with the benefit of the doubt and finishing with the objective verdict really made me appreciate the quality of this presentation.
Good on y'all, team!
Excellent reviews, especially including the AI Workload Benchmarks!
Your self-reflection on your review is very Canadian :D ... I love it. This review really helps.
Thank you so much for CAD benchmarks 😊
What a great review! My fave on youtube - way to go!
"AMD have rolled over like a dead fish."
Clearly this guy has never been fishing. :)
Great review. I typically don't hit the like button, but this review deserves one
It's great to see benchmarks for video production and AI development.
Coming from a 3080, the 5080 is a nice upgrade. Thanks for making these great benchmarks.
4080 Super Super
4080 super ti
@@andrewyeoh2612 Not even.
@@andrewyeoh2612 I don't think it even has the performance a Ti-level card would have, it's that bad.
4080 super mini
My 4090 and I are indeed sitting here laughing our asses off. The decision to be patient and get that card at MSRP has aged like a fine wine *sip* 😂
For video editors, we might have to get the 50 gen for its decoders and encoders. By the way, you're laughing, but you forget who made this gen a joke, and who's forcing you to believe that your 4090 at MSRP is worth it: Nvidia.
@ I don’t have to “believe it’s worth it” because at the end of the day I bought it simply due to the fact I wanted it and could afford it. It’s really pretty simple. If I couldn’t afford it or didn’t want it, this would be a different conversation.
Very good video Bro! Congrats!
GREAT VIDEO! Thanks for calling them out. 5080 = 5070. Same Nvidia shenanigans they tried with the double 4080s. The proper 5080 will come later this year in some kind of Ti/Super form.
A sobering summary, thanks for the level-headed take.
Thank you for only guy who is doing non-gaming benchmark with rtx 5080
Love your down to earth approach! For the love of God, please include Apple Silicon figures in creator tests when the same testing method is available across both platforms!
Great review. For the AI benchmarks, it would be good to see a training workload too. The increased memory bandwidth should be helpful there.
Do you have a specific training workload in mind?
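One illustrative possibility: a fine-tuning-style micro-benchmark that times forward+backward passes over a transformer block. The shapes and step count below are arbitrary assumptions for a sketch, not an established suite:

```python
# Sketch of a training micro-benchmark: steps/s for forward+backward+update
# on a single transformer encoder layer in fp16. Shapes are illustrative.
import time
import torch
import torch.nn as nn

device = "cuda"
block = nn.TransformerEncoderLayer(
    d_model=1024, nhead=16, dim_feedforward=4096, batch_first=True
).to(device).half()
opt = torch.optim.AdamW(block.parameters(), lr=1e-4)
x = torch.randn(8, 512, 1024, device=device, dtype=torch.half)

torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(50):
    opt.zero_grad(set_to_none=True)
    loss = block(x).float().pow(2).mean()  # dummy loss, keeps it self-contained
    loss.backward()
    opt.step()
torch.cuda.synchronize()
print(f"{50 / (time.perf_counter() - start):.1f} steps/s")
```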
Thanks a lot for the AI benchmarks!!!
As per NVIDIA's published specs, there seems to be a 2 to 3x uplift in AI TOPS... but the real performance jump in AI seems to be as bad as the performance jump in gaming...
I didn't think my 4090 at msrp and 7900xtx under msrp were going to be such good "investments" 😂
Btw the review quality you have been putting out has been amazing. A reviewer who listens to helpful feedback and implements good ideas is rare.
So much work in one video. Well done guys! 💪
Thank you for including the previous generations. Yes, we get it, it's not an upgrade over the 4xxx series. But not every one of us upgrades every gen, and we can't even get our hands on a 4xxx series card that's not 2x over MSRP.
Pros and creators are in for a surprise: its support for 10bpc color is lacking as of this writing. 😄
Thank you for this video.
Just so that everyone's language is not restricted by modern verbal margins: before people adopted the word "evolve" for everything, they used the more precise words "develop, change, progress, advance," and so on.
As someone who doesn't game as much anymore, I appreciate you including AI benchmarks. I guess I'll wait for the 5080 Ti 24GB since the 5080's performance is so underwhelming.
Awesome review. Please do a separate video for DLSS 4 RR/MFG.
Nice review. I'm a DaVinci Resolve video editor sporting a 3070 Ti, so I guess I should try to get one of these when the resupply arrives.
HC, I absolutely love the new chart format; having the 1440p results on the left and the 4K on the right is genius! I also really appreciate the application and AI benchmarks; not many do them. While I'm in the camp of "laughing my ??? off," I'm not really laughing; it's sad, really, that Nvidia thinks they deserve a premium for this work. The 5080 should be better or it should cost less!
It's about time someone did serious content on the AI capabilities of modern GPUs and not just gaming benchmarks!
You can’t find 5000 series cards anywhere now. I want one though. Very engaging video by the way. I enjoyed every minute of it.
Very useful info for that one guy who managed to get one 😁 Or, to be fair, for the people who will buy between later batches and when Nvidia rolls out Super versions which "fix" the value proposition.
Kidding aside, I do appreciate the testing. Thanks for doing the machine learning testing now, too, which does interest me along with gaming and other use cases.
Based on what we saw from retailers, there were hundreds available at least on Newegg and OCUK.
Nvidia's datacenter SMs are tuned more toward compute workloads than the desktop GPU SMs. Nvidia added new API functions to help make RT faster, all supported from the 20 series onwards. However, the 50 series RT cores are designed around the new functions.
Thanks for the non-gaming testing and benchmarks! I only have a 3080 Ti, so the 5080 is a good unobtainium upgrade for me. For GenAI, you should try generating 100 images using Stable Diffusion. Cheers!
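A minimal sketch of what that could look like with the Hugging Face diffusers library; the model ID, step count, and prompt are illustrative choices, not a fixed methodology:

```python
# Sketch of a batch image-generation benchmark with diffusers.
# Model ID and settings are illustrative; adjust to the intended methodology.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

start = time.perf_counter()
for i in range(100):
    image = pipe("a photo of a red fox in snow",
                 num_inference_steps=25).images[0]
    image.save(f"out_{i:03d}.png")
print(f"{(time.perf_counter() - start) / 100:.2f} s per image")
```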
As a professional CAD designer, thank you for the new non-gaming benchmarks. It would be great to see this in CPU benchmarks in the future too. CPU reviews these days pretty much come down to gaming FPS.
How did you get the beta version of DaVinci Resolve Studio, though? I looked around everywhere and found nothing.
Blackmagic provided it to us.
@HardwareCanucks I asked BMD and they told me to contact NVIDIA for review purposes, somehow 🫠
If you compare the 3090 Ti (same CUDA core count) and the 5080... you get around 28% more performance! 28% more performance over two generations at the same specs is bad!
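Spread over two generations, that compounds to a modest per-generation gain; a quick check:

```python
# 28% over two generations implies this per-generation uplift:
per_gen = 1.28 ** 0.5
print(f"~{(per_gen - 1) * 100:.0f}% per generation")  # ~13%
```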
You can do one thing to make the videos even better... put percentages on all the benchmark data!
WOW! 84 Blackwell SMs, when overclocked, coming close to 128 Ada SMs; that's only about 65% as many. Blackwell is pretty impressive.
Thank you for doing AI benchmarks
Great review, thanks. Most nuanced review of the RTX 4080 so far.
The best part about this comment is that it works whether you meant to type 4080 or not. :)
@@AK-Brian 🤣😂 You've got a point!
How dare you, Mike! I don't watch any other channels except this one.
My last gaming rig, a pre-made AMD Ryzen 2700X and GTX 1070 Ti computer with Windows, cost the same as the retail price of the 5080 alone. SICK
Is it possible that the 50 series needs further driver optimisations to improve performance?
I was thinking the same thing
New drivers just released today. I wonder how/if it impacts results.
Other channels have shown it has good overclocking results.
Enjoying my 7900xtx even more right now
This is what ive been waiting for!
I upgraded from a GTX 1080 to an RTX 4080 for a 230% increase in performance.
How many generations will I have to wait to see that uplift in performance again?
I wonder if they'll even make gaming GPUs anymore. With every release, Nvidia GPUs creep a bit more into the workstation space. Half-assedly, but steadily.
It would be superb to just build 5080s with 24GB of VRAM and call it what it is; as nice as competition like the 7900 XTX is, Nvidia's performance stability and feature set still make it more valuable IMO.
But of course that'd cripple the 5090, which they can barely sell anyway. And card sales are so guaranteed, they can afford to leave plenty of space to "improve".
Man, the 40 series cards were a HUGE improvement
Since a 4070 performs similarly to a 3080, would it be a good upgrade to go from it to a 5080?
As a 4080 owner, I couldn't care less. But I'm very much interested in AMD's PyTorch performance and excited about the improvements they are making.
We need another graph: VRAM utilization at 1440p and 4K, to see how future-proof the card really is.
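Logging that is straightforward with NVIDIA's NVML bindings. A minimal sketch, assuming the nvidia-ml-py package (imported as pynvml) and the GPU at index 0:

```python
# Log VRAM usage once per second while a game or benchmark runs.
# Assumes pip install nvidia-ml-py and an NVIDIA GPU at index 0.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"{mem.used / 2**30:.2f} GiB used / {mem.total / 2**30:.2f} GiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```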
Thanks for non-gaming tests! Though, while it would add work I'm sure, I'd love to see some Apple Silicon comparisons side by side. Very niche, and perhaps an argument against it, but just a suggestion!
I have waited for some AI benchmark testing. Looking to maybe upgrade my 3090 and I wanted to see how these cards would stack up.
Ditto, but the 3090 is left out of all these tests. GN includes it, but that's about it.
Isn't the smaller VRAM a no-go anyways?
@@JensAllerlei For the 5080 yes but not for the 5090, I wanted to see both of them, hopefully.
today's "launch" was straight garbage. 2nd most valuable company in the world, and they can't even get order page working on their own site. Microcenter had something like 500 5090s between all their stores.
this artificial scarcity garbage by them is complete nosense.
This is just a marketing BS strategy. After this review, nobody should be in a rush to buy a 5080 at inflated prices.
@imagine_84 Especially now that they've raised prices on them by $200 to $400 more than last week.
Cards that were $1,200 are now $1,400+.
Hey, I have a question. I know you're at a company, but I don't know who else to ask: do you think more 5080/5090 options will come in white?
Nvidia launched their cards and they're basically nowhere to be found, and the 5080 barely provides any upgrade over the previous gen. Meanwhile, AMD is busy shooting themselves in the foot. I'm sitting on the sidelines with my 3080, which still does the job for what I'm doing.
100% same situation here.
I'm actually looking for price decreases on 4080s/4090s, not the 50 series 🤦♂️
The scalpers can keep their 5080s. I wonder if they regret buying them with their bots. I feel sorry for whoever pays double to get one; just a stupid decision if they do…
Thanks for this.
Yooo, you mentioned something about performance issues with AMD Ryzen 5000 chips, but never went into detail about it.
It was more in relation to setting your PCIe bandwidth accordingly.
This is wild. I was kinda upset when I first heard the announcement of the new 50 series, 'cause I spent an arm and a leg on my new rig. But now I'm glad, lol, this is really bad.
Which one should I pick for productivity, especially Blender? 3080 Ti or 4070? I am getting them at the same price pre-owned! I don't have the budget for a 5080, and I don't play heavy AAA titles.
It would be nice to see some Stable Diffusion testing as well... or any other locally run image generators.
So is it worth buying a 4070 laptop NOW, or waiting for a 5070 laptop?
For benchmarking, could you also show FSR enabled? Otherwise it's an injustice to the AMD comparisons.
We don't show DLSS in mainline reviews, so no.
Intrigued to know the impact of PCIe 4.0 x16, or even x8 (equivalent to PCIe 3.0 x16), in non-gaming usage.
thanks
So good to see proper non-gaming testing. And going properly deep. Nice one, guys.
An overclocked 5080 and a 4090 are equal. I will keep my 4080 for one or two more years. I would like to see a card from Intel that is on par with a 5080/5090. They can build solid cards for sure.
Yeah, well... I'm coming from a 1080 and got one at MSRP, so I'm happy, lol. I was planning on a 4080 or 4090 after the 50 series release, but man, the resale market is a mess.
How about using different colors in the charts, not just different shades?
Already showing signs of trouble at 3.0 x16. I'd be concerned getting a 12 GB 5070 running at that configuration.
My 4070 Ti draws about 270W at full load, so a little over 300W for an 80-class card sounds OK.
For the AI benches, you might be better off researching how they run and what they do, then replicating them; they should all be fairly simple to automate. Apart from some people dicking about making pictures of cats with huge boobies using Automatic1111, most serious usage happens on Linux. Most workloads can be split into training or inference. Training is where you go rent a fleet of NVIDIA GPUs in the cloud and let them crunch away for a few days. Inference is not nearly as heavy on the tensor cores, but is very heavy on memory. A lot of the interesting inference tasks, like running larger models (bigger than Qwen 14B or Llama 8B), will likely run (far) better on a system with a large pool of unified RAM. The Mac Studio is an excellent example of this, and exactly what Project Digits is there to counter.
My 7900 XTX isn't massively slower than a 3090 or 4090 for inference using ROCm on Linux, FWIW (this is almost all memory capacity and throughput). Other tasks will absolutely favour the NVIDIA cards.
I can't wait to swap my old 4090 for the new 5080 next week ❤
Thank you for doing the productivity testing, but please don't use the 9800X3D for this! It's not good for multicore workloads.
Also, others have shown that it has good overclocking headroom, so there is more gas in the tank.
So it's a creator workstation GPU and not a gaming one.
Seems to me most of this underperformance is due to software (drivers). Core processing power isn't the be-all and end-all without driver/software optimization.
Please add Stable Diffusion to the AI benchmarks.
We will look into that. Thanks!
I wish you could compare Mac vs PC on H.265 4:2:2.
Shout out to AMD's 7900XTX actually battling in quite a few productivity tasks!
Did you get a memo from SpaceX and were informed that the R6 (£11,000) is underperforming perilously, and needed a team of rocket engineers to rescue its astronomical price tag? 🚀💸
I'm still perfectly happy with my 7900XT
We've all hit the 4nm wall; going into the future, no card will greatly outperform these numbers.
I've used my 4090 for 2.5 years and it's still holding very strong. Would I love a 5090? Sure... but I can wait, see what bugs come up in the next few months while companies make proper water blocks, and when all is said and done, in about 6 months they should be more accessible. I'm in no rush, as the 4090 is still a champion.
❤
It will be a long time till they shrink the node again; until then, we only get software updates.
The Emperor protects, brother.