RTX 5090 Chip Deep-Dive

  • Published Feb 7, 2025

Comments • 448

  • @HighYield  7 days ago +112

    Did you try to buy an RTX 50 card, and if so, did you get one?
    It was impossible to get an FE card here in Germany; scalper bots bought the official NVIDIA supply before launch. I think the low stock is most likely due to the limited GDDR7 supply: currently, only Samsung manufactures these chips.

    • @pf100andahalf  6 days ago +17

      I'm sticking with my 4090 for many more years.

    • @PorinaAdventures  6 days ago +2

      Nvidia's sale site died shortly before it went live, and by the time I got connected it was far too late. I did see AIB stock elsewhere, but that went near-instantly too.

    • @unvergebeneid  6 days ago +8

      But apparently they also just started producing these cards, so they didn't have time to build up a stockpile for launch.

    • @b1lleman  6 days ago +5

      I tried getting one, saw the prices, and saved myself 5000+ euros since I didn't need to buy a completely new system around that 50 series. My 3080 will have to do :P

    • @RobBCactive  6 days ago +12

      The joke is that AMD changed their numbering system expecting the 5070 class to land around the 4080, pushing price/performance up, and the 5080 to again be priced around $1,200, like the original 4080 rather than the Super.
      What exactly did using GDDR7 gain Nvidia?
      Hot air at CES, cold water poured on it at review, and the RTX 50 is vapour at retail.

  • @xL3thalTuRdZz  6 days ago +122

    New High Yield video out. Time to stick the kettle on and watch. I wish more tech YouTube channels were like this.

    • @christophermullins7163  6 days ago +6

      I agree. I have been learning about nodes, lithography and computer hardware since I was a kid. I learn nothing from the average channel but so much from this one. Incredible information here

    • @Ignisan_66  6 days ago +1

      Cringe

    • @pf100andahalf  6 days ago +2

      @Ignisan_66 Yes, because knowledge is bad, what?!?

    • @franzpleurmann2585  5 days ago

      @christophermullins7163 Can you recommend some more sources of information?

  • @lharsay  6 days ago +294

    For the cost of the silicon, you should also mention that the worse dies will end up as the RTX 5090 while the better dies will be used for B6000 workstation cards, which somewhat lowers the effective cost of the 5090 dies (the sketch after this thread puts rough numbers on this).

    • @raw_000  6 days ago +36

      And worse chips could still go into lower-tiered cards (though I am not familiar with this GPU gen).

    • @roanbrand7358  6 days ago +5

      Lowers, lol

    • @fVNzO  6 days ago +15

      @raw_000 That usually happens; typically they go into datacenter or region-specific cards after a while, once they've stockpiled enough chips.

    • @VADemon  6 days ago +3

      The 5600X3D was a Micro Center exclusive initially. The RX 570D and RX 580D? Those were cut-down versions AMD made for the Chinese market.
      nVidia is being forced to export only cut-down chips to China.
      The fact that we don't see multiple tiers of cards out of one die on the mass market means the current down-binning is enough in most cases, such that a "sub RTX 5090" isn't needed. Did last generation's Ti and Super cards get old binned chips or new ones?

    • @lharsay  6 days ago +11

      Depends on yields. For the 3090 (95% enabled die), most dies ended up as the lower-tier 3080 instead (Nvidia wanted to sue Samsung over the poor yields, then settled for a discount on the manufacturing of that die). For the 4090 (85% enabled die), only a few were turned into the more cut-down RTX 5000 Ada; basically, the 4090 was the lowest-tier model on that die.
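
A rough cost model, to put numbers on the binning argument in this thread: a minimal sketch assuming a simple Poisson defect-yield model. The wafer price and defect density below are illustrative guesses, not figures from the video; only the 761 mm² die size comes from the measurement discussed here.

```python
import math

# Illustrative assumptions, not figures from the video:
WAFER_PRICE_USD = 16000   # assumed price of a 4nm-class 300 mm wafer
D0 = 0.09                 # assumed defect density, defects per cm^2
DIE_AREA_MM2 = 761        # measured GB202 die size

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Gross die per wafer: wafer area / die area, minus a standard edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(active_area_mm2: float, d0: float = D0) -> float:
    """Poisson model: probability of zero defects in the active area, exp(-A * D0)."""
    return math.exp(-(active_area_mm2 / 100) * d0)

gross = dies_per_wafer(DIE_AREA_MM2)
y_full = poisson_yield(DIE_AREA_MM2)          # fully clean dies -> workstation bin
y_5090 = poisson_yield(DIE_AREA_MM2 * 0.89)   # ~11% of the die may be disabled

print(f"gross dies per wafer:            {gross}")
print(f"fully working (workstation bin): {y_full:.1%}, ~${WAFER_PRICE_USD / (gross * y_full):,.0f}/die")
print(f"5090 bin (89% must be clean):    {y_5090:.1%}, ~${WAFER_PRICE_USD / (gross * y_5090):,.0f}/die")
```

Selling the cleanest dies as workstation cards and the defective-but-salvageable ones as 5090s is exactly how the wafer cost gets spread, as the thread suggests.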

  • @Ivan-pr7ku  6 days ago +55

    Fun fact: Nvidia is the largest vendor of RISC-V tech in the world. They have been using this ISA for various purposes (e.g., command and state processor logic) since at least Volta and Turing.

    • @HighYield  5 days ago +13

      Interesting information, especially since NVIDIA tried to buy Arm.

  • @Jerico64  7 days ago +67

    Thank you; no one in the mainstream coverage of the 50 series really mentions chip yield. Approaching the limit of the current node makes me keen on the next generation; maybe we'll see another big jump.

    • @aetheriality  5 days ago +3

      6090 HAS to be a big jump

    • @𩛗  3 days ago +1

      @aetheriality Funny graphics card, amirite.

    • @syncmonism  15 hours ago

      @@aetheriality They will likely use the TSMC 2nm node, skipping the 3nm node entirely, as the 2nm node is just a lot better, and I don't think Nvidia wants to leave the door open to AMD being the only one to use a 2nm node for a next gen high-end GPU.

  • @organichand-pickedfree-ran1463  6 days ago +15

    I love the detailed mm² breakdown. Would you consider looking at how large NVLink was in Turing and Ampere, just to see how much area was saved when it was removed in Ada, or perhaps how it compares to PCIe? Just an idea, no need to do it.

    • @HighYield  6 days ago +8

      The NVLink comparison is a great idea. It's actually part of the US restrictions: the H800, for example, has the same compute as the H100 but less NVLink bandwidth.

    • @مقاطعمترجمة-ش8ث  4 days ago

      Hello Linus Blurry webcam capture, I really appreciate what you did out there.

  • @aayankhan942  20 hours ago +1

    This is what quality content looks like. Very impressive 👏🏻

  • @maxmustsleep  7 days ago +15

    super impressive analysis once again!

  • @b1lleman  7 days ago +12

    Super interesting as always. Thank you !!

  • @kopasz777  6 days ago +48

    0:44 From the available info I could find, the 2080 Ti was their biggest chip so far (754 vs 744 mm²).

    • @christophermullins7163  6 days ago +13

      Look at the wattage difference... these dense cores are sucking massive power these days.

    • @cant_Comment  6 days ago

      @christophermullins7163 There's a lot of them.

    • @Raivo_K  6 days ago +4

      TPU has updated their GB202 die size to 750mm² now. Still smaller than TU102 but effectively the same.

    • @AlpineTheHusky  6 days ago

      This video's measurement includes scribe lines, though, so that's probably adding a bit.

    • @cl4ster17  5 days ago +5

      GV100 was even bigger at 815 mm², though not a GeForce.

  • @hiepchu6028  4 days ago +3

    I have waited so long to watch your new video; please continue making more videos.

  • @Spewyspews  7 days ago +245

    A shame it’s not on a new node

    • @HighYield  7 days ago +149

      Agree. It would have been a much more convincing chip on N3E.

    • @Spewyspews  7 days ago +41

      @HighYield Back-to-back ~4090 uplifts would have been incredible.

    • @tommihommi1  7 days ago +45

      also a shame there's no big architectural improvement

    • @_TT90  6 days ago +31

      I would assume yield rates and price are the reasons it’s not on N3E

    • @yenmartin9115  6 days ago

      @HighYield TSMC N3 yield is actually very bad. Don't trust the media. The yield rate for the Apple M4 is less than 50%.

  • @balthazarbulau4095  6 days ago +6

    As soon as I saw this channel had new activity, my heart leapt with joy. Such an underrated channel.

  • @rainerzufall9881  6 days ago +3

    Thank you Max, I really appreciate your approach to high-quality content!

  • @Destructificial  6 days ago +28

    I think the "memory bus size determines chip area" argument is a bit more nuanced in practice, because the exact layout of the memory interface on the die isn't fixed. With a large chip it obviously makes sense to arrange it in a wide and shallow configuration, but that doesn't mean they couldn't have gone for a narrow and deep configuration instead! Arrange the compute first, then place the memory interface like a donut-shaped ring around the compute. The thickness of the donut is determined by its circumference (the total die area for the memory interface is fixed), which in turn is a direct result of the size of the compute part. This would avoid the empty space issue (see the geometry sketch after this thread).

    • @sinnwalker  4 days ago

      So why would they not do that?

    • @bariole  3 days ago

      True. And it is easy to visualize: just rotate each 32-bit controller by 90°. Same controllers, same layout, same 512-bit interface, but you end up with a much smaller circumference and thus a smaller die.

    • @bariole  3 days ago +1

      @sinnwalker Because they are in the business of delivering 20,000+ CUDA cores.

    • @musaran2  3 days ago

      No. Chip perimeter connect density is saturating too.

    • @bariole  2 days ago

      @musaran2 Nothing that a few layers of wires wouldn't fix.
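
To visualize the ring argument from the comment above: a minimal geometry sketch with made-up round numbers (none of these are real GB202 figures), wrapping a fixed PHY area as a uniform ring around a square compute block and solving for the ring thickness.

```python
import math

# Made-up round numbers for illustration, not GB202 measurements:
PHY_AREA_MM2 = 80.0       # total memory-PHY silicon, fixed by the interface width
COMPUTE_AREA_MM2 = 600.0  # compute + cache block in the middle

def ring_thickness(compute_area: float, phy_area: float) -> float:
    """Solve (c + 2t)^2 = compute_area + phy_area, where c is the side of a
    square compute block and t the thickness of a uniform PHY ring around it."""
    c = math.sqrt(compute_area)
    outer = math.sqrt(compute_area + phy_area)
    return (outer - c) / 2

c = math.sqrt(COMPUTE_AREA_MM2)
t = ring_thickness(COMPUTE_AREA_MM2, PHY_AREA_MM2)
print(f"compute block: {c:.1f} x {c:.1f} mm, PHY ring thickness: {t:.2f} mm")
# A bigger compute block means a longer perimeter and a thinner ring:
# if PHY depth is a free variable, shoreline alone need not fix die area.
```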

  • @CasualGamers  6 days ago +5

    Great stuff, thanks!

  • @lachlanlau  6 days ago +11

    Interesting to see this technical perspective on things like the 5090 max power draw. Mainstream reviewers don’t talk about the why, they just talk about the basic features.

  • @chrisrothstein6157  4 days ago +1

    Thanks for the discussion of costs and yields. That section was especially valuable and insightful!

    • @HighYield  4 days ago +1

      Glad you enjoyed it!

  • @halmyrach  6 days ago +2

    Always happy to see a video from you, and great timing with this one. Despite failing to beat the bots for a 5080, I can at least learn more about Blackwell until more drops arrive and I get one myself :)

  • @Tr1pp1n5  4 days ago +1

    The fact that you really put in all this work to analyze it, and can still explain it so understandably, has earned my full respect. I'm happy every time a new video comes out and I can learn something new. 🤟💌

    • @HighYield  4 days ago

      Thank you for your comment. I can explain it so well because I first have to laboriously understand it myself. And if even I get it, anyone can understand it :D

  • @justinsullivan5063  1 day ago

    Nice, logical walkthrough of the chip. Thanks. I have a system on order with a 5090 - can't wait! (won't see it until towards the end of March, but hey..)

  • @MalcolmREBORN  3 days ago +1

    That was an excellent analysis. I learned a bunch. Thank you.

  • @andromeda8418  6 days ago +27

    Not sure what I was expecting from the RTX 5090. On paper it looked very impressive, especially that jump in memory bandwidth, but in gaming benchmarks it's rather disappointing: more performance for more power, with efficiency improvements within the margin of error.
    AMD ditching high-end products for this gen hints that they were having issues with consumer multi-GCD GPUs. I just want to see Nvidia going into panic mode when they don't have a ready answer to AMD's ability to glue chips together.

    • @Hahahanoyes  6 days ago +6

      It’s disappointing in older games but in heavy ray tracing games, it is getting 40-50% uplift over the 4090. Going forward as more games come out, I do believe the 5090 will get a chance to stretch its legs.

    • @apersonontheinternet8006  5 days ago +2

      It is far from a disappointment. Gamers like to move the goalposts around a lot: when the card price goes up and efficiency is higher, gamers cry about prices; when prices come down but efficiency gains aren't the greatest, gamers cry about that.
      I'm curious what you think AMD's ability to glue chips together is going to achieve. Do you think it is going to break the laws of physics or something?

    • @andromeda8418  5 days ago

      @apersonontheinternet8006 It does not necessarily break physics, but GPU market prices, I hope. The cost per chip is exponentially lower for smaller chips due to more dies per wafer and the higher yield rate. The multi-GCD design gives AMD the potential for a few things: 1) focus their design and production on 1-2 GCDs used across all their product tiers, 2) undercut Nvidia prices even more, 3) perhaps undercut Intel GPUs, 4) glue enough GCDs together for a super-chip that can brute-force past Nvidia's xx90 cards.
      AMD already has multi-GCD products for data centers (High Yield has a video about the MI300, go check it out if interested), so it's only a question of time before multi-GCD lands on the consumer market.

    • @jaylapointe1654  4 days ago +5

      There was no meaningful manufacturing process node shrink this generation. The 40 and 50 series are on the same process node.
      People forget the 30-to-40-series jump was such a massive performance leap due to the move from Samsung 8nm to TSMC 5nm.

    • @Hahahanoyes  4 days ago +2

      I think it's actually impressive how big the jump is in some games on the same node. The 5080 OC is 5% behind the 4090 with 6,000 fewer cores.

  • @couldntfindafreename  5 days ago +6

    14:00 Don't forget, the scalper is also making 100% on it...

  • @SubhadipSen  6 days ago +15

    Titan V had a larger die, but you're right - forgot it wasn't branded GeForce. Still, Nvidia's largest gaming product.

    • @Steamrick  6 days ago +14

      The TU102 die (RTX 2080 Ti) was within touching distance at 754 mm². That's >99% the size of GB202.
      GB202 is certainly the most expensive to manufacture, though. TSMC 12nm was cheap compared to TSMC 4nm - something on the order of a quarter or a fifth of the price per wafer.

    • @HighYield  5 days ago +7

      According to NVIDIA, TU102 was 754 mm², while they say 750 mm² for GB202, which would make Turing larger. But the measured GB202 is 761 mm². Anyway, both chips seem pretty close. Turing, though, was on an inferior node, even for its time.

    • @SubhadipSen  5 days ago +2

      @HighYield Titan V was GV100 though, which was 800+ mm².

    • @Swiss4.2  3 days ago

      @HighYield Blackwell 2.0 also isn't on the best node: still 5nm, just like Ada, and no more efficient. Blackwell honestly should have been 3nm.

    • @inf11  8 hours ago

      @SubhadipSen That is with HBM memory, chiplet.

  • @azero79  1 day ago

    High quality content, thanks!

  • @Teste-gp7bm  6 days ago +10

    Can you explain why you think the PHYs couldn't be wider and take up less shoreline?
    The Radeon 290X is a 512-bit card on a 448 mm² die.
    Even being more complex, surely the PHYs could be made wider and take up less space around the die.

    • @HighYield  5 days ago +7

      I’m sure GDDR7 PHYs are more complex and thus larger than GDDR5 PHYs. But that’s actually a great question! The HD 2900 XT also had a 512-bit bus iirc.

    • @cl4ster17  5 days ago

      @HighYield And the GTX 280/285.

    • @joeAK7.62  5 days ago

      I had the Radeon 290, and it was a real beast and fun to play games with.
      It was better than Nvidia's cards in every way.
      They even made models with 6 GB VRAM, and I think some 8 GB ones too.
      I had an XFX DD Black 290, and that thing was great for a 24/7 overclock:
      it ran at 1140 MHz on the GPU and 2650 MHz on the VRAM.
      The following R9 390 and 580 series were great too.
      The Radeon 5700 XT held up until recently; AMD GPUs really age like fine wine, unlike Nvidia...

  • @tonnylins  6 days ago +1

    Impressive! Thanks, as always. 😸

  • @andikunar7183  6 days ago +4

    Great video thanks!!! I totally agree with your assessment re. the 5090 RAM-bandwidth need for AI. Non-batched token-generation during LLM-inference is mostly limited by memory-bandwidth. Pumping all the many billions of AI-model parameters through the compute-units for each and every token generated (as well as the KV-cache values, etc.) is the bottleneck. E.g. even on modern arm64v8.4 and upwards, the CPU's matrix/vector-compute support alone is able to saturate its 130+GB/s 128-bit LPDDR4X bus (it does not even need any GPU/NPU to saturate it - see llama.cpp Q4_0 GEMM/GEMV acceleration on Snapdragon X). The GB202 has nearly infinitely more compute-power, so having a 2TB/s bus will still be its (very high) token-generation bottleneck. Prompt-processing, training,... is a different story, there its compute-horsepower will shine.
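
The bandwidth-bound claim in the comment above boils down to one division: in non-batched generation, every token has to stream roughly all model weights through the compute units once, so tokens/s is capped near bandwidth / model size. A back-of-envelope sketch; the model sizes and quantization factor are assumptions for illustration:

```python
# Upper bound for non-batched LLM token generation: every token reads all
# weights once (KV-cache and activation traffic ignored for simplicity).
def max_tokens_per_s(bandwidth_gb_s: float, params_billion: float,
                     bytes_per_param: float) -> float:
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

# ~1792 GB/s for the 5090's 512-bit GDDR7 bus; 0.55 bytes/param ~ 4-bit quant.
print(f"70B model on the 5090: ~{max_tokens_per_s(1792, 70, 0.55):.0f} tokens/s cap")
print(f"8B model on a 135 GB/s laptop bus: ~{max_tokens_per_s(135, 8, 0.55):.0f} tokens/s cap")
```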

  • @yasserbenabou213  6 days ago +14

    Watching this while still rocking my 750 mm² 2080 Ti

    • @Raivo_K  6 days ago +2

      754mm². Same card here.

    • @mimimimeow  1 day ago

      I'm running the giant Arctic Accelero Xtreme cooler on one and the base literally doesn't cover the entire die 🤣🤣

    • @Raivo_K  1 day ago

      @mimimimeow I'm running the even bigger Raijintek Morpheus II Core edition, and since it covers the equally massive TU102 die on my 2080 Ti, I would assume it also covers the GB202 die.

  • @marce.fa28  7 days ago +7

    You have an impressive clarity and precision in your explanations! Your channel is truly unique. I admire your mind!

    • @HighYield  6 days ago +1

      🦝

  • @DavidsKanal  6 days ago +23

    I see High Yield, I click

    • @christophermullins7163  6 days ago +3

      If I suddenly found that this channel had thousands of videos, I would have to take next week off work to watch them all. Truly a wonderful treat for those who are interested in the subject.

  • @Sheaksa  6 days ago +1

    Love this in depth analysis

  • @OfSheikah  7 days ago +22

    Thank you for the very in-depth deep dives as usual! Highly anticipated yields from your channel!
    "GB202 is so MASSIVE"
    Y'know what else is massive

    • @samserious1337  6 days ago +3

      An RTX 4090 die? jk

    • @josuad6890  6 days ago +7

      Definitely not the stock.
      Maybe the price tag, yeah.

    • @ohmygodineedhelp  6 days ago +2

      LOOOOOOOOOOOW
      TAPER
      FAAAAAAAAAAAADE

    • @BaBaNaNaBa  6 days ago

      the price tag is truly massive

  • @nekony3563  6 days ago +3

    Games are now an AI workload: mega geometry, neural compression, neural radiance cache, and others.

  • @tedjohansen1634  5 days ago

    Great video 10/10 - Subbed!

  • @theworddoner  6 days ago +4

    I appreciate the cost estimate you've provided for the 5090.
    At MSRP, they should still make a gross profit of approximately $1,000 per unit, excluding R&D costs.
    That being said, I'm perplexed why they have reduced the supply of this card. Wouldn't it make sense to amortize the R&D costs over a larger number of units sold? It would be more profitable for them. The R&D cost of the cooler alone is not negligible.

    • @christophermullins7163  6 days ago +2

      The R&D was funded by and for AI; that's already paid for, lol, and not by gamers. These ridiculously overpriced GPUs are pure profit (not at MSRP, but AIB cards at $2,800 etc.).

    • @krazownik3139  6 days ago

      Plus, there is also a chance that they want to ship at least some cards before tariffs hit. Then they could claim that they wanted to sell at MSRP, but tariffs pushed those prices even higher.

    • @apersonontheinternet8006  5 days ago

      @christophermullins7163 You know how I can tell you've never been in charge of anything or never run your own business? I really wish you w4 types would stick to w4 things, because calling all of your ideas half-baked is being very generous.

    • @Jalinto  2 days ago

      @apersonontheinternet8006 What does w4 mean?

  • @StefanKamer  5 days ago

    Absolutely incredible insight! Thank you so much for this video! I'm also curious about the idea of underclocking the memory now when I get the 5090.

    • @HighYield  5 days ago +1

      I wasn't able to get a 5090 so I can't test it myself. Maybe we'll get lucky and one of the usual channels does a test.

    • @StefanKamer  5 days ago +1

      ​@@HighYield I'll try to suggest it in the SFF community where they're always looking for ways to improve efficiency.

  • @JoseMolins  6 days ago +1

    The video is very good; thank you for your contributions, High Yield. It's a pity you couldn't locate the AMP (likely an accelerator/processing module) implemented with a RISC-V core, as I assume it must take up very little space and is impossible to find without help from Nvidia's engineers. There's one thing I don't quite understand (forgive my lack of knowledge): when Nvidia disables certain functional units on the die to increase the probability of a die meeting its commercial specifications, is this deactivation analyzed and performed on a per-unit basis, or are the same units always disabled across all manufactured dies? Thank you so much again.

    • @StephanAhonen  2 days ago +1

      It happens in the QC stage of manufacturing: every chip is tested individually, and sections with defects are identified and disabled.

  • @Slavolko  5 days ago +1

    Fantastic analysis, as always! It's interesting to see how much die space is disabled for the sake of yields, as well as the reason behind the bump to a 512-bit memory interface. Considering how tiny the raster units are, I wonder if they could just double the number of them to dramatically increase raster performance in games, or if the raster performance wouldn't scale properly. Just curious.

  • @sinnwalker  4 days ago +2

    Seems the real prize is going to be the workstation cards this gen, especially that new 96 GB 6000.

  • @ostrov11  6 days ago +1

    Thank you, excellent content.

  • @DigitalJedi  6 days ago +2

    Always love your die-shot videos. I would love to see you take a look at both Arrow Lake and Strix Halo. In my opinion, having done a good bit of packaging design myself, this die looks ripe for splitting down the middle in a future MCM design. The crossbar placement right through the exact center of the GPCs and cache is what gives me that feeling, combined with the fact that GB203 appears to be basically one half of this die layout. Speaking of which, I would love to see some GB203 (5080) die shots at similar resolutions to see if it really is built like that.

    • @apersonontheinternet8006  5 days ago

      Good observation, but let me let you in on a little secret: it basically already is split down the middle. The 90 cards were always 2x the hardware of the 80s, but in the past we ran two separate dies on a single PCB in SLI. The 90s went away with the GTX 690 because power and heat management were getting out of control with the ever-increasing core counts. The 3090 and 4090 were not true 90 cards, as they only had roughly 1.3x-1.6x the hardware, so this is our first true 90 card since about 2012.
      With the launch of the 50 series, I'd say it makes a lot more sense why nvidia ended support for SLI. It relieves pressure on 80-card sales while still providing that SLI-like power to those who need it, without all the pitfalls of SLI.
      I get annoyed with the likes of GN and HU talking about how the 5080 die has been cut, or how the big gap between the 5080 and 5090 is just so nvidia can stuff in Supers and Tis. The 5080 wasn't "cut down"; the 3090 and 4090 were the ones that were "cut down", and that was very likely due to heat-management concerns. I think nvidia's cooler redesign is evidence of this: such a clever yet simple and elegant solution for dealing with all of the power required for these insane core counts.

  • @karim1485  6 days ago +1

    Awesome! Really makes you wonder why HBM didn't become mainstream. For CPUs, it makes sense that the end customer can choose how much RAM they want to add, but GPUs always come with a fixed memory capacity. I wonder if they could even shrink the die size by switching to HBM vs external chips while largely improving memory performance.

    • @hasnihossainsami8375  6 days ago +3

      Yield issues, coupled with degradation over time at sustained high temps, and it's prohibitively expensive.

  • @nguyenminh7780  6 days ago +10

    Can you please do a video explaining why some components, like the GDDR PHY connections on GPUs, don't shrink with node shrinks?

    • @pedro.alcatra  6 days ago +3

      That's a really good idea.
      It would be good if he also covered why cache memory and video engines suffer from the same problem.

    • @guyg.8529  6 days ago +5

      GDDR PHY connections are metal, not transistors, and it's mostly transistors that are reduced in size with node shrinks.

    • @nguyenminh7780  6 days ago

      @pedro.alcatra Oh yeah, those and PCIe connections too; they don't shrink that much.

    • @DigitalJedi  6 days ago +4

      @nguyenminh7780 Certain components, such as SRAM for cache, stopped scaling due to limitations in the actual device physics. Others, such as PHYs, are limited by needing to be built up in the metal layers of the chip. Those layers get progressively larger as you move away from the transistor layer. Since the PHYs have to interface with the outside world, they end up limited, at least in part, by how you intend to connect to them.

  • @angelost1467  5 days ago

    Great analysis.

  • @egalanos  6 days ago +1

    Those area % breakdowns show that there's a large future opportunity for 3D stacking: a compute die on top of an I/O + cache die.

  • @nicknorthcutt7680  6 days ago

    Absolutely amazing deep dive! It is incredible to see how massive GB202 really is.

  • @cloudcyclone  2 days ago

    very good video!!

  • @Strykenine  6 days ago +1

    Learned a lot in this talk, particularly about the memory interface. I would never pick up one of these just for gaming. This seems like a card for a 3D or AI professional who is making $$ with their hardware.

  • @lucas.cogrossi  3 days ago

    Good stuff, thanks

  • @Col_Panic  6 days ago

    Yay!! New video! Btw, it's awesome to see how many more subs you have gotten! I changed my YT name, so I'm sure you have no idea who I am now, lol. No worries, just happy you are back!

  • @Steamrick  6 days ago +3

    Calling it the biggest GeForce chip ever is slightly disingenuous. Technically correct, yes, but the TU102 (RTX 2080 Ti) at 754 mm² is 99.1% of the size. That's just shy of the margin of error.
    GB202 is certainly the most expensive to manufacture, though. TSMC 12nm was cheap compared to TSMC 4nm - something on the order of a quarter or a fifth of the price per wafer.
    edit: Excellent analysis, though. Thank you very much for sharing your thoughts.

    • @HighYield  6 days ago +2

      According to NVIDIA, TU102 is even slightly bigger, as you stated. But this measurement is pretty precise at 761 mm². Plus, it's on a competitive node.

  • @dylanjastle  5 days ago

    Amazing work. I’m excited to see what the smaller chip on the 5080 looks like.

    • @HighYield  5 days ago

      I've posted a quick GB203 analysis on X → x.com/highyieldYT/status/1884656106972053926

  • @faust-cr3jk  6 days ago +1

    This is not a deep dive; this is only scratching the surface. You could talk about each and every building block of this chip for hours, if not days. And you know this much better than I do :)

  • @LouisDuran  6 days ago +1

    2:10 I dare you to say "Shoreline" 3 times fast!
    I kid... Excellent analysis!

  • @maksroma  7 days ago +7

    Isn't it correct that the raster engine is a separate block from the ROPs? I read a white paper from Nvidia about the Blackwell architecture, and in the block diagram they are separated from each other.

    • @HighYield  7 days ago +8

      The Raster Engines have two blocks with ROPs. That’s how I understood it. I might be wrong. Let’s check again.

    • @organichand-pickedfree-ran1463  6 days ago +3

      That's correct. Historically, ROPs were part of the L2 cache/memory controller. Raster includes the culling, ..., and rasterization hardware (turning triangles into fragments/pixels before feeding them to the SM for pixel/fragment shading), whereas the ROP handles "render output" at the very end of the process. It stores only pixels/fragments (maybe "primitives", e.g. planes) iirc and is in charge of blending colors and Z (ZROP and CROP).

    • @guyg.8529  6 days ago +5

      Raster and ROP are separate but very close together. The reason is that the ROP does the z-test to eliminate hidden triangles, hidden faces, and so on. And in some cases, the z-test can be done immediately after rasterization: in the absence of transparent textures, for example, or when no shader messes with the z coordinate. The driver can configure the graphics pipeline to do the z-test either early or at the end of the pipeline, after analyzing the shaders and some other things. So it's better if they are close together for the early-test case. Separate, but packed together, with a direct connection between the two.

    • @maksroma  5 days ago

      @guyg.8529 Thank you so much for the detailed explanation.

  • @bfbunny  6 days ago +7

    As far as the reviews go, the integer-calculation upgrades this generation don't benefit gaming workloads that much, making this sound like a very compute-centric design.
    What I do find interesting is that undervolting results for the 5080 suggest this new architecture can hit 2.7+ GHz at 0.875 V, while the last generation requires more voltage to do so. Even more interesting is how, despite the low voltage, these new cards still consume a lot more power core-for-core compared to the 40 series at a similar voltage; even accounting for the extra performance the higher clocks bring, the 50 series is less efficient for gaming core-for-core at the same voltage and much higher clocks.
    Seems like Blackwell's V/F behavior is quite different from Ada's, and I'm definitely looking forward to seeing some efficiency curves.
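
For anyone wanting to sanity-check the undervolting observations above: to first order, dynamic CMOS power scales as P ∝ C·V²·f, so two voltage/frequency points give a quick ratio. The numbers below are illustrative, not measured 40/50-series data; the 1.00 V reference is an assumption.

```python
def dynamic_power_ratio(v_ref: float, f_ref: float, v: float, f: float) -> float:
    """First-order CMOS dynamic power ratio, P ~ V^2 * f, assuming equal switched capacitance."""
    return (v * v * f) / (v_ref * v_ref * f_ref)

# Same 2.7 GHz clock at 0.875 V instead of an assumed 1.00 V reference:
r = dynamic_power_ratio(1.00, 2.7, 0.875, 2.7)
print(f"expected dynamic power: {r:.0%} of the reference")  # ~77%
# If measured power is still higher core-for-core despite the lower voltage,
# the difference has to come from more active silicon, leakage, or the
# memory system, which fits the comment's efficiency observation.
```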

  • @zachb1706  6 days ago +1

    With the massive AI power the 5090 has, I’m surprised Nvidia doesn’t release multiple DLSS models across their stack. The 5090 should be able to run a much higher parameter count model

  • @Atlas_Enderium  5 days ago +1

    So, a 10.6% cut-down GB202 (involving cache, encoders, and compute) and the same process node as the AD102…
    Either they're planning a 5090 Ti to give all the early 5090 adopters buyer's remorse later down the line, OR they cut it down to account for the higher power demands of the 5090, leaving headroom within the standard 1800 W budget of the traditional North American 15 A outlet. I doubt it's the latter, but I'd love to be surprised.

  • @T33K3SS3LCH3N  6 days ago

    Having some education in graphics programming but not being a specialist, this is really cool to see.
    From my perspective, things like the raster engines are parts that I have only ever configured; I have only a rudimentary overview of how they actually function. My typical point of contact would be to configure them for depth testing, i.e. checking if the tested triangle at the specific pixel has a lower depth value than the depth buffer, thereby indicating that the triangle is the closest object to the camera/unobstructed and should therefore be rendered onto the output image.
    I do know enough about these processes to do some more interesting stuff with them (like abusing the rasteriser for custom alpha masking), but that's about it.
    It's satisfying how it physically sits in the same position as it does in the logical render pipeline: between the caches from which all that data will be loaded, and the GPCs that will do the actual shading if the depth test passes.

  • @henrikmikaelkristensen4784  6 days ago +3

    I saw no mention of the transistor type or thoughts about switching to GAAFET for future NVidia GPUs. Is it realistic that GPUs of this size will switch to GAAFET within the next 1-2 generations?

    • @egalanos  6 days ago +3

      TSMC doesn't have any GAAFET in production yet. N2 is meant to switch to it.

    • @apersonontheinternet8006  5 days ago +1

      Probably 6 years or 3 gens.

  • @TheBackyardChemist  6 days ago +1

    2:30 Are we really sure about that? AMD Hawaii had a 512-bit GDDR5 PHY with only ~440 mm^2 of area. OK, they likely need more die area to deal with PAM3, but it still feels like an incomplete explanation.

  • @Tyuwhebsg  6 days ago

    great video

  • @kaeota  6 days ago +3

    Around 12:00 you mention GB202 is more like a beefed-up AD102. This strikes on a nomenclature oddity I noticed very early on: the 200 prefix instead of the usual 100. It's widely understood that early-stage designs are often developed in parallel before a final design is settled on and proceeded with. Seeing the results, is there any chance that Blackwell is actually an early Ada Lovelace design, perhaps a more accelerator-oriented design, that Nvidia went "back" to for the current gen due to some limitation or another?
    This would also add further reason to stay on the N5 family, as PDK work was already partially completed. It could signal troubles (in design, manufacturing, yield, or returns), or simply a keeping-an-ace-up-the-sleeve approach.

    • @DigitalJedi  6 days ago +3

      It's possible that Blackwell and Ada were developed as sibling architectures. I could see them giving the Blackwell team more time than Ada to iron out the twin dual-mode ALUs and the AMP setup, as well as waiting on GDDR7. Basically, this is what Ada would look like with more time in the oven.
      I know that Intel used to do this quite a bit, for example with Raptor Cove and Redwood Cove, where different sub-teams both took Golden Cove and worked towards different ultimate goals (consumer vs server) and targeted different nodes with their designs. Those cores are twins, and they look the part in benchmarks, trading wins and losses but with very similar designs from their shared direct predecessor. I would not be surprised to hear that a version of Lion Cove with SMT was developed next to the one we got on consumer chips, or that a smaller 6-wide Skymont was also tried.

    • @ironicdivinemandatestan4262  5 days ago

      B100 is used for datacenter; B20x is used for consumer GPUs.

  • @LatelierdArmand  1 day ago

    good video, thank you :)

  • @Col_Panic  6 days ago +1

    I feel like this design would make it hard to bin down defective chips. Maybe that explains the large gap between the 5090 and the 5080. It looks like gamers are getting the scraps. I have had the feeling that gaming has been low on the concern shelf for some time, and that DLSS was more of a "what can we use this stuff for when playing games instead of mining" etc. I am cynical when it comes to Nvidia after a couple of decades of building. I gave them multiple chances, and every time I regretted it in short order, usually because of their misleading/"embellished" marketing.
    Anyway, great breakdown as per usual!

    • @apersonontheinternet8006  5 days ago +1

      No, the large gap is because this is the first real 90 card since the GTX 690. The rage-bait influencers won't tell you this because they know bad press gets more clicks, but the 90s were always 2x the hardware specs of the 80s. The 3090 and 4090 were the exception, likely because nVidia was working out power and heat management, which is very obvious with the 5090's new cooler design.
      90 cards are literally our replacement for running 2x 80 cards in SLI. Once you understand that, it all makes more sense; just don't expect Beave Sturke and the other loser influencers to tell you this, because they make a lot of money off of their Intel and nVidia hate-watchers. The 3090 and 4090 were cut; the 5080 is not.

  • @bibithebunny2628  2 days ago

    Love your stuff; would looove to see a deep dive into AMD's new Strix Halo APUs.

    • @HighYield  2 days ago

      Strix Halo will come for sure 👍

    • @bibithebunny2628  1 day ago

      Thank you! Please take your time; your videos are a gift, not an obligation.

  • @connorharris1900  6 days ago +2

    Sufficient analysis, though I'm not sure how accurate it is; no one has them. Chips will be getting much bigger in 20 years, hopefully 5. You may see the manufacturing process change from starting with large circular wafers to starting with a long silicon bar, allowing them to keep etching the bed until an error is detected, then slicing the bar at its error terminal. This would allow a chip to be many inches long, 3 in to 12 in. Imagine having a computational die 10 inches long: you could fit the CPU, RAM, GPU and everything else all on one monolithic die. Couple that with etching water pathways directly through the top bits of the chip and palladium infusion to prevent degradation, and the power gain would be a moonshot step into the future. And it's all possible with the current equipment. I hope this information helps us get the Artificial Intelligence we deserve. Change the ATX standard to an all-PCIe insertion standard and boom: now you have a central circuit board with 5+ PCIe slots filled with water-cooled GPU, CPU, RAM, NVMe, NPU, etc., all communicating directly with one another by PCB trace. There is a lot of improvement to be made in computer science, but I fear it is just trickling out so they can squeeze every cent from our pockets each year. Bye.

    • @apersonontheinternet8006  5 days ago +1

      I don't think we will see sweeping changes to standards like that until we move on to graphene.

    • @connorharris1900  5 days ago +1

      Sure, graphene has been promising for the past 20 years, but a supermaterial like that won't get made before more advanced AI is born. AI will make that material possible to produce. There's no need for it yet; the current materials are sufficient.

  • @danieloberhofer9035  6 days ago +1

    Considering the sheer amount of silicon real estate that gets essentially wasted even with a ~11% cut-down, I wouldn't be surprised at all if in about 9-12 months we see a GB202-based, 384-bit 5080 Ti 24 GB with somewhere around 15,000 CUDA cores for ~$1,500, smack in the middle between GB202 and 203. I'd wager they're setting aside dies for something like that already.

  • @nielsdaemen  12 hours ago

    6:55 *NOW is it GPC or GCP?* I think you switched things up...🤯

  • @tomaszwojtkowski2759  6 days ago +2

    If the ALUs are about 50% of the die, couldn't they go with a 3D stack?
    Compute on the top die, and cache, memory controller, and I/O on the bottom die.
    I think AMD will try this design first in their UDNA architecture.

  • @whistl034  3 days ago

    I am excited by Nvidia's Project Digits and the GB10-based "super-mini" desktop, which they say is coming in May for around US$3,000.

  • @MacGuyver85  6 days ago

    Excellent, thank you!

  • @anonimowyanonimowy516  7 days ago +2

    The 5090 is the successor to the crown chips of Nvidia's enthusiast line, and among those chips it isn't the biggest: there were the Titan V, the Titan RTX and, most importantly, the GeForce RTX 2080 Ti, with bigger or similar chip sizes.

  • @jouniosmala9921  6 days ago

    The cache cut-down probably isn't like that. The more probable case is that 1/4 of each block is disabled, keeping the standard GPU way of handling cache: there's a crossbar between the GPCs and the cache, plus a direct one-to-one link to the memory controllers, and each cache block caches the addresses handled by its memory controller.

  • @stefansynths  1 day ago

    This video is real hard to read on a phone, with all that tiny text. It would be helpful to zoom into the parts you're talking about more often. Also, the die yield calc section would really benefit from larger text. I'm sure it looks great on your big 4k monitor, but that doesn't reflect most viewers experience.
    Yes, I know I can zoom the video, but I shouldn't have to all the time.

  • @MrTurbo_  1 day ago

    Why would the die's shape be set in stone? A rectangle of fixed area can have any perimeter above 4·√(area).
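
The inequality in the comment above follows from AM-GM: for a w × h rectangle with fixed area A = wh,

$$P = 2(w + h) \ge 2 \cdot 2\sqrt{wh} = 4\sqrt{A},$$

with equality only for a square (w = h = √A). So the perimeter, i.e. the "shoreline" available for PHYs, can be stretched by making the die long and narrow; the practical limits are reticle and packaging constraints, not the area itself.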

  • @uhohwhy  6 days ago +2

    Chips out for High Yield!

  • @TheJohdu  5 days ago

    Very enlightening video, thanks a lot. Can the "raster engine" be understood as the equivalent of a shader array in AMD GPUs?

  • @luis15499  6 days ago

    Hey, great video!
    Can you explain how each part is designed to do a certain task? How some cores multiply matrices and others do encoding... I don't get how that's a hardware-specific thing.

  • @OfSheikah  7 days ago +1

    Now I get a deep dive into a GPU that makes use of the olden 512-bit memory bus, packaged as a modern, recent reference card.

  • @borisdg  6 days ago +1

    GB202 is 750 mm² vs 754 mm² for TU102, i.e. it's not the biggest. Plus, we had GV100 on the Titan V with a size of 815 mm².

    • @BaBaNaNaBa  6 days ago

      This, @HighYield.

  • @untseac  6 days ago +1

    GB200 is barely a chiplet design. It has two big Blackwell GPUs connected to each other in a single package, so technically they are chiplets, even at a giant size, but it doesn't scale (the max is two chiplets), nor is it cheaper to produce, nor does it reduce power draw or share memory. This is unlike the AMD MI300, which has up to 8 much smaller XCDs vertically stacked, sharing memory, and can even have a mix of XCDs and CCDs. For comparison, each XCD has 40 CUs, while one Blackwell GPU has 132 SMs. I expect GB200 to cost a lot more than an MI350. There are no real advantages to GB200's interconnected GPUs; might as well have separate chips. It's a similar architecture to the MI250X: two chiplets glued together.

    • @apersonontheinternet8006  5 days ago

      You're wrong; the last card that did that was the GTX 690, which had to be run in SLI, and that comes with its own efficiency issues. While you are correct that it doesn't scale, it absolutely scales better than SLI, which was the entire point of ending support for SLI.
      People grossly misunderstand what the 90 cards are. They think the 90 cards replaced the 80 cards as the flagship, when in reality it is the flagship 80 card in modern "SLI". I think once people do a little research on the old 90 cards and realize what nvidia is doing, they will have a greater appreciation for it.

  • @pravardhanus  6 days ago

    You did not show us the power pins and power bus on the die 😢

  • @aayankhan942  20 hours ago

    It's obvious that nvidia used the same node, because Ada and Blackwell were developed on the same timeline. Blackwell in the beginning was for AI data-centre use.

  • @ProjectPhysX  6 days ago +3

    It's great to see a 512-bit memory bus again; the last one was 10 years ago on the AMD R9 390X (see the quick bandwidth check after this thread). It performs as expected in lattice Boltzmann. And great that it is only 2-slot.
    But the $2k price tag and 575 W TDP are a bummer.

    • @cl4ster17  5 days ago +1

      I think 2-slot was a mistake. Sure, it's an extremely impressive cooler but if they just kept it 3-slot it could have been so much quieter.

    • @HighYield  5 days ago +1

      The HD 2900 XT also had a 512-bit bus iirc. A long time ago.
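
A one-line check of the raw numbers behind the 512-bit nostalgia in this thread; the per-pin data rates are the published memory speeds for each card, taken here as assumptions:

```python
# Peak DRAM bandwidth: bus width in bits / 8 * per-pin data rate in Gbps.
def peak_bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(peak_bandwidth_gb_s(512, 28))  # RTX 5090, 28 Gbps GDDR7 -> 1792.0 GB/s
print(peak_bandwidth_gb_s(512, 6))   # R9 390X,   6 Gbps GDDR5 ->  384.0 GB/s
```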

  • @VADemon  6 days ago +1

    Well, the RTX 5090 is a server-grade chip at its core, no? What's its pro-segment marketed counterpart?

  • @geoffseyon3264  6 days ago

    Looks like there is potential for a 5090 Ti if they find enough yield on chips with fewer defects.
    Also, can you please do an analysis of Apple's M4/Pro/Max chips?

  • @headlesschickenfarm  6 days ago

    I would be interested in how much bandwidth you think is actually appropriate for games.
    I can appreciate that PCIe is slow and has very limited bandwidth, so you will not properly be able to update things in real time. But saying that memory can't be updated in real time because games don't need it is a bit weird.

  • @sai_co5337  1 day ago

    How are these images made? My best guess is X-ray, but idk if it's this precise. Just curious.

  • @WarThunderGerald  4 days ago

    I'll admit when I'm in over my head, and this is that time. With so many mixed views and a lot of disappointment about the gaming improvements, and while it does interest me, I'd really like to know, as an RTX 4090 FE owner. (The latest Nvidia app updates were amazing! Idk how, but War Thunder was using 80-98% GPU without even streaming and is now at like 12% GPU usage max; idk what they did, fully.)
    What I can never find online is anyone discussing the architecture improvements for creators: streaming while gaming. Could this help me not need a dual-computer setup, possibly with multiple 4K outputs? I mean, are there any major improvements for streaming and for video-editing work in Adobe Premiere Pro and Photoshop/Lightroom? Mostly, is there anything worthwhile over the 4090 worth trying to upgrade for? I feel the gains must be there given these massive jumps up to a 512-bit bus and the memory speed and amount. And I heard you mention the band in the middle being for video encode and decode. What I can't find is anyone discussing or doing real testing of the new architecture for this (heck, even for the more controversial 5080s; I can't get past the memory amount, though I will say the ProArt series dual 4080 was something I considered, with a ProArt motherboard; I love the aesthetic, with so few good-looking professional cards). If you're working and streaming and doing regular 4K video editing WITH dual 4K monitors (soon to be 3x 32" 4K), with a lot of the GPU just sitting there being used without any real work being done... would you expect a large, worthwhile increase in professional applications from this card over the 4090, in your opinion?
    And does anyone know of anyone with better information or resources on this for professional work and/or streaming purposes?

    • @WarThunderGerald  4 days ago

      Also, are there any changes regarding the dying Intel CPUs (I have a 13900K*, and there's the 14900K)? I've always hated that when filming in 10-bit color the GPU can't assist the encoding, or whatever it takes to move through the timeline, and in a way I almost want to shoot in lesser quality just to avoid this, like my a6400 does at max, vs the Sony A7IV's 10-bit. If a GPU could or would natively support the 10-bit codec so it's not on the CPU only... I mean, that would be A MASSIVE BIT OF NEWS TO SHARE! But I can't find anyone talking about anything like this. Are there any upgrades that may allow for a change like that with this generation? Or ever? For native support of a 10-bit color codec in Premiere Pro or any program?

  • @dare2liv_nlove  6 days ago +2

    Strongly dislike Nvidia's anti-consumer habits, & totally uninterested in anything _GeForce_ in the near term...
    BUT A HIGH YIELD VIDEO ABOUT THIS... INSTANT WATCH!! 😂

  • @Violet-ui  6 days ago +1

    Why does the video decoding area look so big? Isn't decoding supposed to be a lot easier than encoding?

  • @kanezhang5813  3 days ago

    Super interesting video; it very precisely highlights nvidia's motives for this generation: no thought for the average gaming consumer, but marketed right at them. 32 GB of GDDR7 + a 512-bit bus is way too much, and clocks are somehow lower because the cores are underpowered. I wonder what they could have done if they had done what you said and focused on raw performance gains rather than just memory/INT32-enabled cores. The 4090 was memory-starved, but I feel like 28 GB of GDDR7 on a 448-bit bus running at even 28 Gbps would have been enough not to throttle the cores, while being way cheaper and drawing less power. It definitely could have been priced around or under the 4090's MSRP, but they got overzealous or greedy or both. What a shame. If only nvidia listened to people like you :(

  • @Techn9cian123  5 days ago +1

    Insane that normal people can just buy these special rocks inscribed with invisible magic runes

  • @TheMcSebi  5 days ago

    Haven't bought a new card in over 10 years; currently on a 3090 from eBay for 1000€ and not planning on switching any time soon.

  • @TestarossaF110  6 days ago

    When will they begin to layer all the non-compute on top of the rest... and maybe below it, too?

  • @QuinnCsVideos  9 hours ago

    I don't care what anyone says this is earth-shattering

  • @robbay8610  6 days ago +1

    Gamers went from being consumers to prosumers.

    • @apersonontheinternet8006  5 days ago +1

      People forget about the old 90 cards; the last true one we had was the GTX 690.
      But yes, most gamers need to forget that the 5090 exists.