RDNA3 - AMD's Zen Graphics Moment

  • Published on Jan 18, 2025

Comments • 1K

  • @ThorsShadow
    @ThorsShadow 2 years ago +1206

    When the world needed him most, he returned.

    • @JK-zx3go
      @JK-zx3go 2 years ago +25

      Spot on

    • @grimlock8369
      @grimlock8369 2 years ago +6

      This

    • @IBZORK
      @IBZORK 2 years ago +32

      Arrrritght guys howse it goin

    • @ekinteko
      @ekinteko 2 years ago +7

      This was obviously coming.
      Doing chiplets on a CPU is very difficult and problematic. AMD have accomplished that; they're gliding through the advancements and easily dominating Intel.
      Doing chiplets on a GPU is much easier and less problematic than on CPUs. Since AMD have mastered chiplets, it was obvious this was the next step.
      The last piece would be an APU, or mixed processor with CPU and GPU, based on a chiplet design. But I don't think that's where they're going; AMD will likely stick with a monolithic design here for better efficiency on x86 mobile devices. I would like to see this CPU+GPU chiplet design with the ARMv9 architecture, as that's very interesting for HEDT, servers, and supercomputers.

    • @TestarossaF110
      @TestarossaF110 2 years ago +4

      @@ekinteko Wild APUs are coming, but afaik it's all still for mobile. Wonder how much FPGAs (Xilinx) will influence the future of AMD.

  • @ferdievanschalkwyk1669
    @ferdievanschalkwyk1669 2 years ago +290

    The wafer calculator is still available on the Internet Archive's Wayback Machine.
    Also, the calculations are done in JavaScript, so it may be possible to save it locally for future use. (A sketch of the dies-per-wafer arithmetic such calculators implement follows this thread.)

    • @ArtisChronicles
      @ArtisChronicles 2 years ago +16

      It would be great if it could be saved locally.

    • @N0N0111
      @N0N0111 2 years ago +11

      @@ArtisChronicles I bet you could download the whole website and emulate it :D

    • @teaser6089
      @teaser6089 2 years ago

      Yeah, just open it on the Wayback Machine, then look at the JavaScript and copy-paste it.

    • @teaser6089
      @teaser6089 2 years ago +10

      @@N0N0111 You don't emulate it, you can just run it. JavaScript is a supported language on most computers now, isn't it?

    • @dondraper4438
      @dondraper4438 2 years ago +13

      @@teaser6089 "JavaScript is a supported language on all browsers* ...."
      Fixed it for you.
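
For reference, a minimal sketch in Python of the dies-per-wafer arithmetic such calculators implement. This uses the standard approximation; the Poisson defect model and the default values are assumptions for illustration, not the original site's exact code:

```python
import math

def dies_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300.0):
    """Gross dies per wafer, ignoring scribe lines and edge exclusion."""
    area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2 * area))

def poisson_yield(area_mm2, d0_per_cm2=0.07):
    """Poisson defect model; D0 = 0.07/cm^2 is an assumed defect density."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

# Example: a ~300mm^2 die (17.3mm x 17.3mm) on a 300mm wafer.
gross = dies_per_wafer(17.3, 17.3)
good = gross * poisson_yield(17.3 * 17.3)
print(f"{gross} gross dies, ~{good:.0f} good dies per wafer")
```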

  • @aggamerytttv
    @aggamerytttv 2 years ago +196

    Man, this is so fascinating! I know you had issues with this content, but your coverage is great because no one else goes this deep into CPU and GPU architectures. It’s very interesting and educational. Please keep this up.

    • @CrackaSlapYa
      @CrackaSlapYa 2 years ago +1

      @@randomguydoes2901 LOLOLOL. You SOLELY get your info from this numbnut? Dude, GD. That's really stupid. The guy is wrong 90% of the time, gets buttrage and vanishes for months, then reappears to get whack-a-moled again.

  • @orangeduck474
    @orangeduck474 2 years ago +137

    Always look forward to your videos, been watching them for the last 6 years now. Thanks Jim!

  • @TrueThanny
    @TrueThanny 2 years ago +194

    Something you didn't cover, which also makes a big difference, is that AMD isn't limited to a single GCD. They didn't do it this generation, probably because it wasn't quite up to expectations, but you can be sure they're working on that.
    The big question I still have is how the MCDs are connected to the GCD. I don't think it can be Infinity Fabric. The fastest current IF link is 800Gb/s, or 100GB/s. SRAM is more than an order of magnitude faster than that, and a 64-bit GDDR6 link at 2.5GHz is 160GB/s - more than that single IF link (a quick check of these figures follows this thread). If you increase the number of IF links, you're just adding more and more space to the die, and increasing power usage. So I think the links have to be direct, with no logical communication in place. So the MCDs will effectively be extensions of the GCD. I don't know what that link looks like, but I'm guessing that's what AMD means by "advanced chiplet packaging" in the slide they have describing it.
    Which has implications for how much memory throughput they can have. Each MCD needs a wide connection point along the edge of the GCD. Which makes multiple GCDs even more important, as you can greatly increase edge area with two rectangular GCDs connected together on their short ends (using similar "advanced chiplet packaging", I'd wager).
    Hopefully we'll get some details on the 3rd, but I wouldn't be too surprised if that level of information doesn't get out until the review samples and review guides are distributed.

    • @jimmyjiang3413
      @jimmyjiang3413 2 years ago +13

      While RDNA 3 is yet to be presented, development of RDNA 4 has already started. I am sure it will further improve the chiplet packaging, probably more like the EPYC Milan package: several GCDs with up to 16 WGPs (32 CUs) per chiplet, and one physically separate IO die consisting of PCIe, the VCN codec, and the display output engine, all connected with 4th-gen Infinity Architecture. Maybe this would also be the base for chiplet-based APUs, probably suitable for future Nintendo console needs, as well as the Radeon Pro series.

    • @GrimpakTheMook
      @GrimpakTheMook 2 years ago +2

      How about doing it kinda like they did with HBM? An interposer connecting underneath instead of sideways? Dunno if SRAM allows for this tho.

    • @jimmyjiang3413
      @jimmyjiang3413 2 years ago +1

      @@GrimpakTheMook It could be using InFO-LSI with 3rd-gen Infinity Fabric to link the MCDs and the GCD.

    • @halbouma6720
      @halbouma6720 2 years ago

      Yeah, but they can also increase the die size for now, until it's ready.

    • @defeqel6537
      @defeqel6537 2 years ago +5

      Indeed. It might take until RDNA4, but I would wager that we will see multi-GCD RDNA3 next year already.
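
A quick sanity check of the bandwidth figures quoted above. All numbers are taken from the comment, not official specs; the 8x effective data rate per pin for GDDR6 is the assumption that makes the quoted 160GB/s work out:

```python
# Fastest current Infinity Fabric link, per the comment above: 800 Gb/s.
if_link_GBps = 800 / 8                  # = 100.0 GB/s

# One 64-bit GDDR6 channel at 2.5 GHz, assuming 20 Gbps/pin effective (2.5 GHz x 8).
gddr6_GBps = 64 * (2.5 * 8) / 8         # = 160.0 GB/s

print(if_link_GBps, gddr6_GBps)         # a single IF link is slower than a single MCD's memory
```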

  • @N0N0111
    @N0N0111 2 years ago +53

    No one can do it like Jim.
    He just walks us through how the great checkmate in the chip industry was started.

  • @benjaminoechsli1941
    @benjaminoechsli1941 2 years ago +141

    We've been waiting for years for Radeon to have its "Ryzen moment". It's finally here.
    And as Jim said, Zen 1 was just the generation that made people sit up and take notice of AMD again. Zen 2 is when Intel started to feel the pain. Exciting!

    • @ArtisChronicles
      @ArtisChronicles 2 years ago +10

      I was definitely hoping/expecting this a bit sooner. The RX 5xxx cards were the generation I was really hoping this would show up in.

    • @MaddoScientisto-fb3kb
      @MaddoScientisto-fb3kb 2 years ago +30

      @Transistor Jump That's mostly an issue of pricing, not technological inferiority. Zen 3 alone is still dwarfing Intel in retail sales, precisely because of its value and price reduction after the Zen 4 launch. AMD's strategy seems to be aimed at more profit margin with Zen 4 until Zen 3 is sold out; then they can both lower prices for regular CPUs and offer 3D variants of Zen 4, which will outcompete Intel in gaming (the only sphere Intel has an advantage in), while Intel itself has very limited options to counter the 3Ds.

    • @ithkul
      @ithkul 2 years ago +15

      @Transistor Jump Nah, they're about dead even if you're running the silicon at sane power targets. Raptor Lake is meh at best, just a minor refresh of Alder Lake. Intel just boosts the silicon further and adds more L2 cache/E-cores to keep slightly ahead in some synthetic benchmarks, at the expense of power consumption. The platform cost gap will narrow once B-series motherboards on the AMD front come out.

    • @dondraper4438
      @dondraper4438 2 years ago +5

      @Transistor Jump Gaming performance wise they are about the same. Multi threaded performance is where the 13600k eats the 7600x's lunch. However, it's also AMD, and I assume they have a response to Intel's E cores with Zen 5.

    • @olaole8315
      @olaole8315 2 years ago

      I want to upvote, but you have 69 upvotes....

  • @CraigieBee
    @CraigieBee 2 years ago +46

    Oh boy Jim, been waiting for something to pique your interest again. Marvelous video, goes in my list of the great ones. I still go back to your other great ones from time to time. Whether you are right or wrong about this or any of the other topics, it doesn't really matter to me; it's that passion and emotional grip I get from your videos that is really exciting. Great job, again!
    Would be great if AMD levels the playing field though, at least as far as they got with Intel on the CPU side. Also, I'd like a high-end graphics card that costs less than a kidney!

    • @israellewis5484
      @israellewis5484 2 years ago +3

      Jim would be amazing as a lecturer in Computer Systems Architecture.

  • @newkick100
    @newkick100 2 years ago +15

    Ian Cutress from TechTechPotato (formerly of AnandTech) was also complaining about the Caly Tech wafer calculator. His workaround was to get the page from the Wayback Machine.

  • @apefu
    @apefu 2 years ago +70

    Beautifully laid out story, Jim! Absolutely loved it.
    I am really excited to see if they can get 3D-stacked memory working for this. It might save Nvidia for a generation, but the thermals... AMD at least have experience here.
    The future looks interesting 😎

  • @Fakeman
    @Fakeman 2 years ago +25

    Missed these tech analysis videos! Glad to see these pop up with more frequency.

  • @Think666_
    @Think666_ 2 years ago +15

    Thanks for taking the time to pull all of this together and share it with us.

  • @nikmabc
    @nikmabc 2 years ago +142

    Nvidia's Intel-like moment. To upstage AMD, Intel used a phase-change cooler and pushed their old CPU design.

    • @mtunayucer
      @mtunayucer 2 years ago +10

      The thing is, Lovelace is actually very efficient. Nvidia just pushed it too far.

    • @MrVohveli
      @MrVohveli 2 years ago +9

      Not to be that guy, but AMD doesn't have the fastest CPU currently.
      Also, Adored unfortunately misses a key point: the 40-series maxes out game engines at 1440p. If there's no way up, you need to have a selling point:
      Ray tracing is exactly that.
      DLSS, containing DLAA and 3.0, are features AMD does not have, and FSR has severe image quality defects DLSS simply does not have.
      The resurrection of Cyberpunk and its new path-traced mode... I'm sorry, but AMD is (probably) behind where it counts. You can be sure Nvidia's R&D department has had chiplets in the works for years.
      Nvidia isn't Intel, and you make a severe misjudgement if you think so. AI-leveraged performance is the future, as diminishing returns will kick in for shader counts very soon.
      I hope to be wrong, but as witnessed by Intel 13th gen:
      Unlimited resources do help, and Nvidia doesn't have the corrupt, toxic corporate culture Intel does... so... my money is on Nvidia _maybe_ losing this gen, but never again. RT will be it, because now AMD fanboys can have it too, so a ton of FOMO/brand blindness will shift to wanting more RT, and raster will have reached its limits as Nvidia predicted.

    • @elon6131
      @elon6131 2 years ago +3

      @@MrVohveli Yep. The 4090 is easily >85% faster in pretty much any *pure raster* VR game. 1440p/4K games just don't have the pixel count to utilise these cards right now. AMD's theoretical 2x performance is going to crash and burn in exactly the same way it happened to Nvidia; there's just no way around the CPU bottleneck, and increasingly wide GPUs just don't have enough pixels to push.

    • @amr.c1650
      @amr.c1650 2 years ago +2

      Commenting just so I can go back and see who got what right.

    • @MrVohveli
      @MrVohveli 2 years ago +3

      @@amr.c1650 Aside from a few things, I got it exactly as it is:
      1.5x in Watch Dogs and 1.7x in Cyberpunk still come out to ~10% slower performance. Exactly around where I said it would be.
      Doubled the core count, got 1.5-1.7x performance, so there's your diminishing returns kicking in for AMD too. Got that right.
      AMD is pushing FSR with ray tracing exactly as Nvidia does, so got that one too.
      They came out with hardware support for FSR, which was expected, and FSR 3.0 to boot, so didn't get that one quite right, thankfully.
      Not the Intel moment people hoped for, but brilliant stuff for a whole lot less money. Waiting on benchmarks though, as AMD didn't give out any without FSR.

  • @kojack57
    @kojack57 2 years ago +10

    That's insane. AMD could become an absolute powerhouse. Imagine trying to keep your ...glee under control for five, six years once you've had the realisation of what you can do first to the CPU, while also knowing that you can carry it over into your GPU company. You gotta love modularity. Nice video as always.

  • @enrac
    @enrac 2 years ago +14

    Hey! New video!! Thanks to your videos I actually work at AMD now!!

  • @catalystguitarguy
    @catalystguitarguy 2 years ago +82

    I'm sure the reason for the shift to calling them Graphics Processing UNITS, over the formerly more common "card" moniker, is that they are literally attempting to become, or becoming, absolute Units.

    • @klobiforpresident2254
      @klobiforpresident2254 2 years ago

      Nvidia, inventors of the GPU and the GP Unit.

    • @jotunheim5302
      @jotunheim5302 2 years ago +6

      @@klobiforpresident2254 Sure, just like how Nvidia "invented" PhysX ...

    • @teaser6089
      @teaser6089 2 years ago +7

      @@klobiforpresident2254 Ah yes, an Nvidia fanboy in the wild...
      The first GPUs can be credited to PowerVR and 3dfx, not Nvidia.
      Nvidia was only the first to market a GPU specifically for gaming, but GPUs are much more than gaming accelerators; if they were just gaming accelerators they would be called Gaming Processing Units, not Graphics Processing Units...

    • @klobiforpresident2254
      @klobiforpresident2254 2 years ago +4

      @@jotunheim5302
      They didn't invent graphics processing and only invented the GPU insofar as they were the first to make up the word (for a term that surely purely by coincidence described their and only their product), which is to say they didn't. My comment was supposed to be humorous, not a factual recounting of graphics computing history.

    • @billschauer2240
      @billschauer2240 2 years ago +1

      @@klobiforpresident2254 For humor, a smiley face is required.

  • @BalázsBessenyei
    @BalázsBessenyei 2 years ago +42

    There is one implication of moving the cache and memory controller off-die to chiplets: it will improve binning drastically. With a monolithic die, if the shaders came out perfectly but there was an error in one of the memory controller or cache sections, the otherwise perfect compute die had to be binned down to a lower-end model. With chiplets, if such an error occurs, the bad chiplet can be replaced with a good one, and the compute die can be saved. (A rough yield illustration follows this thread.)
    What is even more drastic: if the quality can be tested, they can bin together parts that operate at roughly the same speeds. Meaning, if a memory controller works but can't reach the target operating frequencies, it can be replaced with one that can.
    Regarding the reticle limit: on the current 0.33 NA machines it is around 858mm², but on the newer 0.55 NA (High-NA) machines that limit gets halved to about 429mm². Meaning AMD can still manufacture their CPUs and GPUs, while Nvidia and possibly Intel can't if they remain monolithic.

    • @MetroidChild
      @MetroidChild 2 years ago +2

      One thing to note: the main benefit of the smaller reticle is skipping a single mask step on each layer, which is nice if your chip fits the reticle, but nothing game-changing compared to the number of steps you save by going EUV.
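
A rough sketch of the binning/yield argument above, using a Poisson yield model. The defect density and the Navi 31-like area split below are assumed values, for illustration only:

```python
import math

D0 = 0.07 / 100                         # assumed defect density: 0.07/cm^2, as defects per mm^2
gcd_mm2, mcd_mm2, n_mcd = 300, 37, 6    # rough Navi 31-like areas

def poisson_yield(area_mm2):
    return math.exp(-D0 * area_mm2)

# Monolithic: a defect anywhere (shaders, cache, or memory controllers) bins the die down.
monolithic = poisson_yield(gcd_mm2 + n_mcd * mcd_mm2)
# Chiplet: a bad MCD is swapped for a known-good one, so only the GCD's yield matters.
chiplet = poisson_yield(gcd_mm2)
print(f"monolithic {monolithic:.1%} vs GCD-only {chiplet:.1%}")
```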

  • @InvadersDie
    @InvadersDie 2 years ago +11

    I saw Ian on his TechTechPotato channel using the Wayback Machine on the Internet Archive. It let him use the old wafer calculator.

  • @JJ20OL
    @JJ20OL 2 years ago +11

    Love your breakdown videos

  • @averageuser8757
    @averageuser8757 2 years ago

    Thanks!

  • @DerIchBinDa
    @DerIchBinDa 2 years ago +5

    Oh boy oh boy! Adored video? Stopped everything, grabbed some snacks and here we go!
    Missed you, Jim!

  • @spacecommanderbear
    @spacecommanderbear 2 years ago +1

    Thank you buddy, great work. Glad to see your content back up, and you've still got your scuba gear on.

  • @asjdiaosdjasd
    @asjdiaosdjasd 2 years ago +14

    Ah, 2017, when no other YouTube reviewer believed your analysis; yet I believed you, and it turns out you were correct.

  • @jimmahT
    @jimmahT 2 years ago +107

    I hope you are right, Jim, but I hope it's not at the cost of AMD becoming more greedy than Nvidia currently are.

    • @Downstars
      @Downstars 2 years ago +20

      Bro, do you know how capitalism works?

    • @Ludak021
      @Ludak021 2 years ago +37

      You mean like, AMD being more expensive than Intel now? That kind of greedy? Noo... AMD would never...

    • @kintustis
      @kintustis 2 years ago +39

      @SoundwaveSinus9 Exactly right.
      Undersupply? Increase prices.
      Oversupply? Slow R&D and production, and increase prices.
      Have a good architecture? Increase prices to match performance. Bad architecture? Increase prices to widen margins.

    • @ArtisChronicles
      @ArtisChronicles 2 years ago +9

      @SoundwaveSinus9 yup, been noticing that too.

    • @rickflare6893
      @rickflare6893 2 years ago

      @@Ludak021 😊

  • @Diglo1
    @Diglo1 2 years ago +124

    Nvidia has been hitting theoretical limits with die sizes for a while now, and it is indeed true that Nvidia can't get those huge increases anymore unless it is a gimmick or a massive discovery in graphics architecture.
    And yes, it is exactly like you said: AMD is nowhere close to the maximum die size with Navi 31, which gives them room to increase their horsepower that Nvidia doesn't have.
    This is why Nvidia is introducing gimmicks like DLSS 3.0, in order to mask less significant gains.
    When Intel couldn't keep up with Zen, it was obvious that chiplets or MCM was the way to go, and it was superior in almost every way: it would scale with no issues and keep efficiency high.
    Personally I think RDNA3 will be more efficient and use less power than Lovelace, cost slightly less to manufacture, and is likely to outperform Nvidia in rasterization. It is still likely to lose in ray tracing, but due to raw horsepower it could get very close. We will just have to wait and see.

    • @Ludak021
      @Ludak021 2 years ago +17

      Ah yes, I remember when the die couldn't get any bigger than the one in the 1080 Ti. Good old times, repeating themselves... when random internet people know what's possible better than the greatest engineering minds.

    • @TheVanillatech
      @TheVanillatech 2 years ago +25

      Saw a really interesting video on DLSS 3.0 on Hardware Unboxed the other day. They showed the "generated" frames when slowing the video down to 5fps etc., and you could clearly see huge artifacting around text (from the GUI) and also around very thin objects or ones that didn't move uniformly in the direction the screen action was heading. Admittedly, they said that at 120fps or higher it is hardly noticeable when using Quality DLSS, but they said that south of 120fps, and certainly around 60fps (or lower), it is glaringly obvious and ugly. The lower the quality setting of DLSS, the worse it gets. So it seems to be a feature only suited to people with 144Hz (or higher) screens, with GPUs that can achieve those numbers in games. The lower-tier 40-series cards will get nowhere near those performance numbers unless running very old games, so it will be interesting to see what Nvidia does to try and address this. Especially the GUI factor.

    • @BonktYT
      @BonktYT 2 years ago +5

      "gimmicks like DLSS 3.0 " Sure, root for AMD, but don't make a fool of yourself

    • @VoldoronGaming
      @VoldoronGaming 2 years ago +13

      @@BonktYT DLSS is hot garbage. It benefits no one with a refresh rate lower than 144 or 160 Hz.

    • @KibitoAkuya
      @KibitoAkuya 2 years ago +22

      @@Ludak021 People were being skeptical because it seemed excessive in size.
      The difference now is that they are literally skirting the reticle limit. It's not "it can't get bigger because it's already ridiculously big", it's "it can't get bigger because it's physically impossible for the fab machines to make anything bigger".

  • @ZAGAN-OZ
    @ZAGAN-OZ 2 ปีที่แล้ว +196

    If Jim gets excited, AMD never fails to disappoint.

    • @Lennox032
      @Lennox032 2 years ago +6

      Please don't say that! I hope they don't disappoint this time.

    • @jaromor8808
      @jaromor8808 2 years ago +24

      Imagine all the people pulling their hair out for not buying AMD stock back when he made his first Zen video 😂🤣
      Yes, that includes me 💥😀😢

    • @ZAGAN-OZ
      @ZAGAN-OZ 2 years ago

      @@Lennox032 The 4090 is just too fast. Chiplets also have a performance penalty. AMD needs to win on price, which should be easy even if they can't match the 4090.

    • @ArtisChronicles
      @ArtisChronicles 2 years ago +2

      @SoundwaveSinus9 only 3D cache chips? For workstation? Really?

    • @ArtisChronicles
      @ArtisChronicles 2 years ago +2

      @@jaromor8808 I'd have done it if I could. Heck, I would've done it sooner than that, and would've enjoyed bigger returns if I had.

  • @WickedRibbon
    @WickedRibbon 2 years ago +13

    The YouTube notification bell was invented so we'd never miss one of this man's videos 🙏

  • @tcdnbm
    @tcdnbm 2 years ago +2

    Really enjoyed this, thanks for putting in all the hard work and doing the research

  • @RobBCactive
    @RobBCactive 2 years ago +4

    The Caly Tech DPW calculator still works using the web archive; I saw it used.
    It might have been Chips and Cheese or Angstronomics that had an article using it, if it wasn't Dr Ian Cutress of TechTechPotato.

    • @vorlon123
      @vorlon123 2 years ago +2

      Yeah, it was TechTechPotato, in the "AMD Ryzen 9 7950X: Cost to Manufacture?" video; he has it linked in the description.

    • @earthtaurus5515
      @earthtaurus5515 2 years ago +1

      Think it was Dr Ian Cutress? 🤔

  • @mikehuston2132
    @mikehuston2132 2 years ago +1

    Good to see you back. I look forward to your analysis every time!!!

  • @exioncore
    @exioncore 2 years ago +6

    Unlisted? 2 views? I just saw your tweet about a 15-hour-long editing shift today. "Video soon." That was 25 minutes ago, you speedy legend!

    • @exioncore
      @exioncore 2 years ago +2

      Not sure you meant this to be watchable yet? It's currently listed in your "Latest Tech Upload" playlist.

  • @guyva_unito_sree3
    @guyva_unito_sree3 2 years ago +2

    the little red-orange dragon that could

  • @AltoXn
    @AltoXn 2 years ago +4

    Loving the new content Jim, and your insight into the new technologies

  • @eranraz
    @eranraz 2 years ago

    I love these videos of pure knowledge; this is why I keep coming back to your channel. Please continue releasing them - I enjoy them tremendously!

  • @Xearin
    @Xearin 2 years ago +6

    Great stuff as always Jim. The first obvious split between RDNA and CDNA? Looks promising.

  • @sickbailey21
    @sickbailey21 2 years ago +1

    It's always good to see you grace us with another upload, brother. Much appreciated.

  • @Zorro33313
    @Zorro33313 2 years ago +29

    When you said "Nvidia's going to get Inteled", I realized that was Lisa's strategy in a finance-restricted environment all along. When everyone was saying "oh, AMD should sell its graphics division cuz it only sucks money from the CPU division", Lisa was actually using the gained time, money and advantage gap to slow down on the CPU side a bit and put more resources into GPUs. To Intel Nvidia right after she Inteled Intel.

    • @noahflare6825
      @noahflare6825 2 years ago +2

      Lisa Su for President!

    • @andersjjensen
      @andersjjensen 2 years ago +18

      The real money isn't in gaming GPUs. The real money is in data center GPUs. Apply everything Jim said about reticle size there first. If AMD goes to an 852mm² GCD with, say, 32 stacked MCDs, they can deliver a 64GB compute accelerator with an effective bandwidth that surpasses HBM2e and is so brutally faster than anything Nvidia could ever dream of that it's not even funny. You can charge $20,000 for such an accelerator without even blinking.
      Desktop CPUs are a byproduct of Epyc/Threadripper. And next gen, their midrange and high-end laptops are going to be based on the standard Zen 4 chiplets; only ultra-portable laptops get their own dedicated chips. GPUs are heading in a similar direction: gaming GPUs will get various GCDs, but MCDs will be a byproduct of compute accelerators.

    • @Zorro33313
      @Zorro33313 2 years ago

      @@andersjjensen andres, beratna! mi pensa data center/compute gpus also have little to no scaling problems with GCDs, so AMD can make a 2 GCDs unit and pashang da Novideo nakangepensa still sitting on monolithic dead end.
      tho mi xalte ere gova Novideo roadmap showing chiplet GPUs right about now so maybe data-center ADA will bring smth but mi nada fosho it's gonna happen.

    • @defeqel6537
      @defeqel6537 2 years ago +2

      @@andersjjensen AMD already has a multi-chip data center design with Instinct, i.e. CDNA; they aren't going to use RDNA for it.

    • @Fractal_32
      @Fractal_32 2 years ago +2

      @@defeqel6537 Well, they could carry this technology over to CDNA, or maybe we are seeing it backwards and it's CDNA work put into RDNA.
      Like how Ryzen and Threadripper are based off EPYC chips. I always found it cool, and was curious why Zen had a level of ECC support and gained CCDs, until I realized it's because they're the same chips they put in their server products (EPYC). Presumably they chose this due to the economics of binning "defective" dies for lower-end platforms (Threadripper and Ryzen).
      I could totally be wrong with this CDNA-ported-to-RDNA idea, and I'm fine with that. I find the idea interesting because I see the possibility of Ryzen-like features/performance scaling over time.

  • @granthartley
    @granthartley 2 years ago +1

    Wow, so glad someone mentioned this channel in the comments of another video on the day of their launch! That was super informative, interesting and exciting to watch. Thank you for the insight! Subscribed and looking forward to more of this good stuff in the future. 😁👍 Just one more channel to turn me even more into an introverted nerd... and I love it. 😂

  • @Abu_Shawarib
    @Abu_Shawarib 2 years ago +10

    Pretty interesting stuff, regardless of who "wins" this generation, I'm fascinated by the challenges imposed by the divergence of scaling of different components in silicon.

  • @daledude66
    @daledude66 2 years ago +1

    The Caly calculator is still available via the Wayback Machine.

  • @Zorro33313
    @Zorro33313 2 years ago +67

    ATI's doing powerful, efficient and cheap GPUs, while Nvidia's GPUs burn, explode and have useless gimmicks. Like it was yesterday...
    _Someone once told me "Time is a flat circle"..._
    - True Enthusiast

    • @Haldjas_
      @Haldjas_ 2 years ago +10

      @@MadLustEnvy Ray tracing was around for much longer already, and Nvidia just made special cores to run it, barely, in real time for the first RTX generation. Any card can run some form of ray tracing though, and as we have seen with Unreal Engine's Lumen technology, you don't need any RTX hardware at all to get to the same level with software ray tracing; other engines will have to follow suit on those advancements, which means the Nvidia RT cores will probably lose their meaning in the next 5 years or so.
      EDIT
      And as far as I am concerned, the new DLSS version sucks huge D
      So what really is there to like about the gimmicks?

    • @eleventy-seven
      @eleventy-seven 2 years ago

      I love my useless gimmicks.

    • @MaddJakd
      @MaddJakd 2 years ago +8

      @@MadLustEnvy Considering there are normal folk with the card dissing DLSS 3, yeah, your entire argument is moot.
      Once you get away from the ignorant, frame-chasing common folk, the nerds on either side are calling out the blatant as well as the technical pitfalls.
      Anything else is either blind fanboyism, or literally having no eye for nor knowledge of what's in front of them.

    • @MaddJakd
      @MaddJakd 2 years ago

      @@MadLustEnvy You know damn well I'm not talking about YouTubers.
      Just because you don't know real info when it's in front of you doesn't mean you're on to something. Your twisting of my words and reality is quite telling.
      Your crew of tech illiterates and untrained eyes is why we keep getting hoodwinked by Nvidia. Go find some actual technical dives and even Reddit posts and learn something, for heaven's sake.

    • @Haldjas_
      @Haldjas_ 2 years ago

      @@MadLustEnvy After your little mad rant here, I am fairly certain that you are just trying to troll, so I'm not gonna feed you. But if you aren't... well, then you're pretty butthurt about someone not liking your precious Nvidia gimmicks, I guess.
      Either way, I'm gonna save my time talking to you any further and go somewhere with real conversations, where people won't throw a fit for no real reason.

  • @MIK33EY
    @MIK33EY 2 years ago +1

    This is the first use of the square root function that I’ve seen in absolutely ages - usually people divide by two.
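
Presumably this refers to deriving linear die dimensions from area; a quick illustration of why the square root is the right tool there (that reading of the comment is an assumption):

```python
# If die area is halved, each side scales by sqrt(1/2), not by 1/2.
ratio = 0.5 ** 0.5
print(f"side scales by ~{ratio:.3f}")   # ~0.707: dividing the sides by two would overshoot
```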

  • @Ben-ry1py
    @Ben-ry1py 2 years ago +19

    Hell yeah Jim, it's really good to hear from you! I love the Zen parallel: they won't spank Nvidia this gen, but they have room to undercut them to get more market share, and they can use that market share to bring out bricks that will beat the competition in performance and price-to-performance... then they will raise prices quite a bit, but still be ahead. It almost feels like a fantasy to think that Nvidia could get schooled, but it felt the same with Intel before Zen.
    Cheers
    Ben

  • @tstokemb
    @tstokemb 2 years ago +1

    Thank you for the video Jim. I really enjoy your content and how you break it down!

  • @ygny1116
    @ygny1116 2 years ago +5

    The full AD102 is around 40% faster than Navi 31 - talk about being a generation behind.

  • @N0N0111
    @N0N0111 2 years ago +2

    OMG he has done it again!
    This is what we wanted to know, and it is clear as day now.

  • @chasecrappel9480
    @chasecrappel9480 2 years ago +21

    I always thought that if AMD tweeted "Game Over" on the day RTX Beyond was held, they would have INSANE hype around them. It fits the gamer theme, and if they came out with a card with 4090-level raster for like $1k, it would be "game over" and they would win. But that was a dream; hoping for the best.

    • @Salabar_
      @Salabar_ 2 years ago +15

      Being cheeky never worked out for AMD in the past tbh.

    • @ShawFujikawa
      @ShawFujikawa 2 years ago +7

      That would be cringeworthy marketing if they did that, which probably means they will do it.

    • @jayclarke777
      @jayclarke777 2 years ago +2

      @@Salabar_ "Poor Volta"... Yep

  • @gabrielhenriquez1700
    @gabrielhenriquez1700 2 years ago +1

    We missed you Jim! Good to see you making videos more often again.

  • @billy65bob
    @billy65bob 2 years ago +4

    Yay, our Cassandra is back! ❤
    EDIT: I'd really like to see some sort of consumer grade SR-IOV solution with graphics chiplets, but I guess that's still a ways away.

  • @sacamentobob
    @sacamentobob 2 years ago +2

    This is 2016 Jim 2.0. Will make sense to long-time subs.

  • @XenZenSen
    @XenZenSen 2 years ago +3

    When the world needed him most, he returned

  • @t3h51d3w1nd3r
    @t3h51d3w1nd3r 2 years ago

    MAN, I absolutely love your videos. Listening to your breakdowns of the current industry situation is always fascinating, and the bit at the end makes the future look so promising. Superb video!!

  • @OldManBryan
    @OldManBryan 2 years ago +3

    Oh man, just about to head out for a nightshift and this pops up. What luck, something to listen to while I'm out. :D

  • @osgrov
    @osgrov 2 years ago +3

    Superb analysis Jim, I'm very grateful for your insights.
    Yes, I believe you're correct: this may very well be the Zen moment. It is incredibly exciting!
    I've run Nvidia cards ever since the old Radeon 5870 drivers, which never really worked, made me rage-quit ATI. Now though, it seems the good old team red are back. Very excited to see what RDNA3 brings.

  • @philscomputerlab
    @philscomputerlab 2 years ago

    Welcome back!

  • @worldtownfc
    @worldtownfc 2 years ago +4

    Awesome video. The cache wars will be epic. I'm hoping AMD can fix their 1% lows, but the chiplet latency will hurt unless AMD figures out a solution. When you mentioned how AMD is cutting the fat from their architecture, I wondered how much compute performance will be lost.

  • @skaltura
    @skaltura 2 years ago

    Amazing analysis as usual! :)
    Been hoping to see more vids from you :)

  • @bella_ciao4608
    @bella_ciao4608 2 years ago +3

    Seems very interesting. I’m curious Jim, what do you think is going to happen to GPU and CPU innovation once we hit the physical limitation of node shrinks from quantum tunneling? Isn’t it supposed to be in the area of 3nm?

    • @killingtimeitself
      @killingtimeitself 2 years ago +1

      I suspect we'll probably start shifting to a completely new base structure. Either that, or really funny massive chiplet spam.

    • @tranquil14738
      @tranquil14738 2 years ago

      Software will become smarter, I think. Upscaling tech like DLSS is gonna continually get better, and we'll see some die area sacrificed for better dedicated ASIC hardware for it, because an insane scaler is better than going from 10k to 11,000 SPs when they perform effectively the same.

    • @killingtimeitself
      @killingtimeitself 2 years ago

      @@tranquil14738 ultimately though hardware will need to march on.

    • @bella_ciao4608
      @bella_ciao4608 2 years ago

      @@killingtimeitself you mean like ditching x86 and moving to arm or risc-v?

    • @skirata3144
      @skirata3144 2 years ago +1

      Node names are just pure marketing, and every single part of a transistor is a lot larger than the node name would suggest. We are still a long way from hitting those physical limitations, and ways around them could be switching materials (e.g. gallium nitride or graphene) or entirely switching to another principle (e.g. photonic computing).

  • @Strykenine
    @Strykenine 2 years ago

    So good to hear from you again, Jim.

  • @kaseyboles30
    @kaseyboles30 2 years ago +5

    If AMD doubles both raster and RT perf, then the 4090 winds up between the 7800 and 7800 XT in raster and just barely (1%-2%) behind the 7900 XTX. They do, however, have room for a 4090 Ti/Titan, and then AMD likely has some room and/or 3D cache to add in. The real benefit of chiplets isn't primarily cost, but being able to build bigger than the reticle limit.

    • @TheXev
      @TheXev 2 years ago +1

      AMD still has room for a 2-GCD ultra-behemoth GPU SKU... a flagship that would bury Nvidia this generation and any of its gamer mind share. That is what I want to see, if AMD can make it happen this generation.

    • @kaseyboles30
      @kaseyboles30 2 years ago

      @@TheXev Possibly. I suspect they have work to do to optimize for that configuration. It's not the same degree as SLI/Crossfire, but it still has some of the same issues to overcome, and they may not have fully solved them in RDNA3. They almost certainly will by RDNA4, and perhaps even for a refresh; though I'd be very happy to be very wrong on this one. The inter-die latency is what worries me here; it's much more problematic on a GPU than on a multi-core CPU. I strongly suspect they have it good enough for pro-grade cards, where it's less about frame times and more about high-poly models and total render times.

    • @tranquil14738
      @tranquil14738 2 years ago

      @@TheXev They've definitely been tinkering with the idea for a long, long time. Vega had Infinity Fabric support for multi-GPU with incredible scaling, just not in games.

  • @ZuneGuy1118
    @ZuneGuy1118 2 years ago +1

    Man, I saw your two-part videos from 2016, and you were off the beam with many of your predictions.

  • @pvalpha
    @pvalpha 2 years ago +4

    Absolutely amazing analysis Jim. I greatly appreciate it, and it's good to hear you again. I'm more than a bit worried now though - I want competition in the space. I am hoping that Nvidia starts their own migration to MCM architecture to compete. But I also know that if AMD could have done this last generation, we'd have seen it, because I know they have been angling for it since Zen launched and proved itself capable. That means it took a lot of brain-scraping work to get to this - which is a Zen 1-like structure - and Nvidia undoubtedly is doing some of that... but the question is, are they doing enough of it? Nvidia stumbled in the 30-series generation a lot, going to Samsung. That was not a design advance, it was a side-step. What we're seeing now from Nvidia is the design we should have seen last generation (IMO, as a casual observer). And that means if they have an MCM target... that got sidestepped too, maybe 1-2 generations. And just like Intel... Nvidia is going to get burned by that mistake... which I don't know is 'hubris' as much as with Intel, or more profit-taking and a lack of clarity. Multi-billion dollar tech firms are not something people should worship like their favorite football team (also, don't worship football teams, or anything else with lots of money behind it really) - what we should be doing is looking at the tech and demanding with our power coupons (money) value for our exchange. I don't think we're going to get something like that available this generation. And if AMD has the lead you think they do, they might price this like competition doesn't exist at all - which would be horrifically depressing. I lament the death of affordable equipment. :(

  • @danield.7359
    @danield.7359 2 years ago +1

    Videos like this are the reason why I've watched AdoredTV since 2014! Well explained!

  • @Saturn2888
    @Saturn2888 2 years ago +3

    You were right about chiplets and a reduced CU count compared to where they could be. 3-4GHz clocks were a lie though. I'm interested in the full-size package rather than the half-size one we have now. I wanna see a decent 4090 contender.

  • @johnpaulbacon8320
    @johnpaulbacon8320 2 years ago +1

    Thanks for this very well done and informative video.

  • @BakiYuku
    @BakiYuku 1 year ago +9

    That did not age well...

  • @conza1989
    @conza1989 2 years ago

    Jim! Thanks for the content as always.

  • @nuclearpcs2139
    @nuclearpcs2139 2 years ago +8

    SO NO NVIDIA KILLER .... AS USUAL AND RT PERFORMANCE 1 GENERATION BEHIND YIKES

  • @devwadehra9896
    @devwadehra9896 2 years ago

    Please keep making more videos and more often. I love the detail, your accent, it's great to have you back making these kinds of videos again! :)

  • @AdamS-nd5hi
    @AdamS-nd5hi 2 years ago +33

    The chiplets are definitely a cost-cutting feature when you factor in how much less often defects will destroy a perfectly good chip (esp. with chips this size), which you're not including in your calculations. Overall, very solid analysis as always. Tempting to break out the hopium. I'd eat my own shit to watch Nvidia be humbled like Intel has been; swish it around with my tongue and between my teeth and everything. MAKE MORE VIDEOS, FUNNY ACCENT GUY!
    - love, fat gun-loving American

    • @ntomnia585
      @ntomnia585 2 years ago

      Looking forward to justifying your fetish? 😁

    • @Senzorei
      @Senzorei 2 years ago +3

      I think it's the yield figure in the second calculator he used, although the old calculator that no longer works definitely did that and they even had a neat visualization for it.

    • @tringuyen7519
      @tringuyen7519 2 years ago +1

      Yes, Jim was being overly generous to Nvidia on the die-per-wafer calculator. There's no way the TSMC 4N process is yielding 90% right now. Also, Nvidia using the reticle limit on TSMC's 3nm will be seriously expensive!

    • @AdamS-nd5hi
      @AdamS-nd5hi 2 years ago +1

      @@tringuyen7519 Nvidia is also likely paying more for their wafers, having jumped ship to Samsung for a couple of gens. I think it's generally better form to be overly generous to those you're making an argument against, but to be clear about it.

  • @Tehzii
    @Tehzii 2 years ago

    So happy to have you back!
    Thank you

  • @TestarossaF110
    @TestarossaF110 2 years ago +3

    Zen2 moment incoming, wow! I'm so excited!!!!

  • @JuanGarcia-lh1gv
    @JuanGarcia-lh1gv 2 years ago +1

    Great video as always! Things are getting exciting.

  • @TheXev
    @TheXev 2 years ago +7

    AMD has been making CDNA2 GPUs using this chiplet process for a generation already. AMD is ready for chiplets on the consumer side; Nvidia likely won't have any usable chiplets for a few more years. Nvidia is in serious trouble, especially if AMD hits hard with a newer video encoder this generation and continues to work with major productivity software vendors to improve their software stacks. AMD is starting to fire on all cylinders in the software department, and Nvidia needs to be worried.

    • @andersjjensen
      @andersjjensen 2 years ago +1

      CDNA2 accelerators are MCM. I'm fairly certain CDNA3 will be chiplet though.

    • @TrueThanny
      @TrueThanny 2 years ago +2

      Not quite. The MI250 uses two full GPUs on the same package, making it an MCM design, not a chiplet design. The kinds of workloads that card is designed for don't care about multiple GPUs. It's different with a gaming card, where the software has to see it as a single GPU, because developers haven't wanted to support multi-GPU functionality since DX12 was released (effectively killing SLI and Crossfire).

  • @user-zh9kc7tw4n
    @user-zh9kc7tw4n 2 years ago

    Excellent to see another of your fantastic videos! Thank you!

  • @giglioflex
    @giglioflex 2 years ago +3

    Typically, the larger the cache, the higher the latency. AMD's design may incur a latency penalty due to being on a separate chiplet, but because each chiplet has a separate cache, each should have low latency. In addition, AMD's cache is spread out along the edge of the shader chiplet, which should prevent shaders closer to the edge of the chip from having to fetch data from caches farther away. AMD's design seems to ensure that shaders have the resources they need as close as they can be with an MCM design.
    In addition, this isn't mentioned in the video, but a big advantage of chiplets is bandwidth. The Infinity Fabric connecting the chiplets could very well provide a major advantage. Even if the latency does end up being higher, that could be offset by an increase in bandwidth.

    • @TestarossaF110
      @TestarossaF110 2 years ago

      Yeah, I hope AMD can make it all work way better within their own ecosystem. I wonder how (/if) an X3D V-Cache 7000-series CPU, Gen 5 storage and DDR5 will make RDNA3 come alive.

    • @TrueThanny
      @TrueThanny 2 years ago +1

      IF is too slow to connect those chiplets. The fastest IF link currently in use is 800Gb/s, which is 100GB/s. That's slower than even the GDDR6 memory connected to each MCD, without even mentioning the SRAM speed of the cache. It would also require having IF controllers on each end, taking up GCD die space and creating power usage that doesn't translate into performance. They have to be doing something different.

    • @Salabar_
      @Salabar_ 2 years ago

      GPUs are already designed around high latency memory accesses regardless.

  • @TestarossaF110
    @TestarossaF110 2 years ago

    I saw your tweets but didn't expect it now, wow thanks!!

  • @kamachi
    @kamachi 2 years ago +3

    I see a video from Jim, I watch it. Hope you're keeping well pal.

  • @aaronpingle9839
    @aaronpingle9839 2 years ago

    Omg, Jim, welcome back! You were the only YouTuber I ever sponsored. It's good to hear you every once in a while.

  • @MarvoloRiddle
    @MarvoloRiddle 2 years ago +20

    The Hero we need but don't deserve, returns when the world needed him the most.

    • @keralius
      @keralius 2 years ago

      Mate, nothing would be different if he hadn’t returned. I love his content but this is an incredibly cringe comment.

    • @MarvoloRiddle
      @MarvoloRiddle 2 years ago

      @@keralius you need to calm down and take life more lightly sometimes buddy.

    • @keralius
      @keralius 2 years ago

      @@MarvoloRiddle This is also true

  • @Paddzr
    @Paddzr 2 years ago

    I'm glad you're still uploading!

  • @maugre316
    @maugre316 2 years ago +4

    I don't think Nvidia is in trouble, TBH. AMD could have the faster card and people would still buy Nvidia for DLSS, because consumers are that stupid. Plus, a lot of compute workloads have locked themselves into CUDA. I'm looking forward to RDNA3 and think they'll perform and sell well, but Nvidia will continue to dominate the GPU market regardless.

  • @RNeeko
    @RNeeko 2 years ago +1

    It's good to hear your voice again Jim. Hope you're doing good.

  • @christophermullins7163
    @christophermullins7163 2 years ago +3

    I think this guy doesn't realize that Nvidia has chiplets all figured out.

    • @MrNova39X
      @MrNova39X 2 years ago

      Yeah right; if Nvidia had chiplets all figured out, you would have seen it on day 1. It seems they are not even close to AMD on that front.

  • @phenomanII
    @phenomanII 2 years ago

    Thank you Jim for these wonderful deep dives on architecture.
    I hope that ButterDonut makes an appearance one day!
    I am so glad that there are innovations to look forward to in the future.
    I remember the launch of Ryzen 5000 extremely well because it was before I finally sought help and medication for my mental health. That Wednesday (bloody timezones) was a very dark one, but I remembered that I planned to watch the unveiling. After watching it, I just *had* to wait for the reviews. And sometimes the most important victories are the smallest ones - just hanging on for another day (or even several weeks).

  • @Vonklieve
    @Vonklieve 2 years ago +3

    Nvidia deserve everything they have coming: the price fixing, the selling to crypto miners on the side, etc.

    • @starkistuna
      @starkistuna 2 years ago +1

      Crypto mining was inevitable; it's also what drove the cards to these compute levels of performance. The way the 4090 was designed was catering to miners. Thank god mining collapsed.

  • @marcw205
    @marcw205 2 years ago

    Wow, what a great video. I'd been following the leakers and reviewers without learning the economics of chip making. Thank you. Subscribed.

  • @TheHeadhunter85
    @TheHeadhunter85 2 years ago +11

    Fantastic video Jim! Mark my words, AMD will go ballistic against Nvidia on pricing, meaning AMD will keep the same MSRP for the 7000 series as the 6000 series had at launch. Same strategy as the Zen 4 launch: "correcting" prices with no price bumps. Looking forward to the November 3 announcement.

    • @nate6908
      @nate6908 2 years ago +8

      x to doubt

    • @andersjjensen
      @andersjjensen 2 years ago +1

      @SoundwaveSinus9 Yeah, me too. But if it beats the 4090 handily in raster while coming in like a 4080 in ray tracing they'll have it perfectly nailed. Setting the price at parity with their lowest performing feature makes all the rest something you get "for free". That's hard to argue with.

    • @TheHeadhunter85
      @TheHeadhunter85 2 years ago

      @SoundwaveSinus9 Probably at most as expensive. AMD is on a slightly cheaper node and memory than Nvidia. It's up to them to price it right.

  • @kenhopkins1132
    @kenhopkins1132 2 years ago

    Jim, love having you back!!!!!!

  • @lilyounggamer
    @lilyounggamer 2 years ago +10

    RDNA 3 is a game changer with its SLI-like MCM chiplet-based design; they're gonna either take the crown or be very, very close for a lot less money. AMD GPUs like the RX 500 and Vega have aged way better than Nvidia GPUs - just great value. The RTX 5000 series needs a completely new design; too much power is burning people's PCs, and the Nvidia Reddit censored people for talking about it.

    • @TheMaxstpau
      @TheMaxstpau 2 years ago +2

      Still rocking a Vega64

    • @lilyounggamer
      @lilyounggamer 2 years ago +2

      @@TheMaxstpau How good is it? I've got an RX 590.

    • @maydaygoingdown5602
      @maydaygoingdown5602 2 years ago +2

      AMD haven't charged a lot less money for their GPUs in years and years. They certainly aren't going to start now.

    • @corok12
      @corok12 2 years ago +1

      @@maydaygoingdown5602 Seriously, people are acting like they will do anything more than price match. They'll produce for much cheaper and take a larger profit margin, that's all.

    • @SweatyFeetGirl
      @SweatyFeetGirl 2 years ago +2

      @@corok12 They COULD gain a lot of market share if they priced them well, but I believe AMD will sadly be greedy.

  • @ChrisMcFadyen
    @ChrisMcFadyen 2 years ago

    I quickly scanned the comments to see if anyone else brought this up, but didn't see it: concerning the chiplet sizes, if the MCDs were incorporated into the GCD, different libraries would have to be used for the lithography - ones where SRAM takes up noticeably more space than on separate dies, because of the mixed transistors. That would increase the size of the single chip some amount beyond your estimate.

  • @nuclearpcs2139
    @nuclearpcs2139 2 years ago +5

    every year there is a new nvidia killer and you know what ....
    IT NEVER HAPPENS LOL

    • @nossy232323
      @nossy232323 2 years ago +1

      Yeah, it's always "the next RDNA GPU" that will beat Nvidia. And they are also always late.

    • @nuclearpcs2139
      @nuclearpcs2139 2 years ago +1

      @@nossy232323 Wow, it became true again lol. 62 fps in Cyberpunk at 4K with FSR... lol, I had that performance years ago with the 3090 XD

  • @LichuStar64
    @LichuStar64 2 years ago +2

    I adore Adored's videos.

  • @RobBCactive
    @RobBCactive 2 years ago +12

    Certainly it's going to be fascinating to see how the architecture pans out. Fun if Navi31 Ryzenises Lovelace.
    The latest power leaks suggest Tom of MLiD's reports were accurate: power consumption is in line with the highest end of RDNA2, while efficiency is increased significantly.
    I suspect there are Infinity Fabric improvements to transfer data quickly and efficiently from the L2 caches to the MCDs.
    Chiplet GPUs were always going to require high bandwidth, but the pro dual-GCD compute cards don't need the globally shared state that gaming GPUs producing tightly coupled frames do.

  • @storm-sf5rj
    @storm-sf5rj 2 years ago

    Great to see you back Jim

  • @Keiktu
    @Keiktu 4 months ago +4

    Aged like milk

  • @Spikeypup
    @Spikeypup 2 years ago

    All right... how's it going, Jim? ;) Nice to see you as always, my friend. Glad to see more videos lately... thanks for coming back and giving us that wondrous, melodious voice of yours.

  • @thecooletompie
    @thecooletompie 2 years ago +4

    AMD could be limited by the bandwidth of the interconnect; this could also explain why doubling the cache doesn't give any noticeable improvement - the interconnect is already saturated handling the other data. Or it's power limited: moving data off-chip certainly cannot be a cheap operation, and in general it holds that the further the data has to travel, the more power you spend. If it were shader limited, what would've stopped AMD from developing a larger die? Maybe I'm missing something here; haven't really thought about it long yet.

    • @supremeboy
      @supremeboy 2 years ago

      AMD IS watching power consumption. They have said many, many times, with CPUs too, that they want efficient products as the main focus. That's the main reason they can't go crazy with cache size now. First they need to bring out the first GCD design; then the next one, as Jim said, will be crazy fast due to the changes in logic size as density gets even higher. AMD could build a monster GPU now, but that would cost high power consumption, which is not their wish.

  • @TheDidiwolf34
    @TheDidiwolf34 2 years ago +1

    That 2017 vibe... FeelsGoodMan

  • @tomstech4390
    @tomstech4390 2 years ago +3

    Can't wait for Nvidia to say AMD cards are "glued together" and monolithic designs are better. Then 18 months later they'll release GPUs using "tiles", talking about how revolutionary their new architectures are....