AMD ZEN 6 - Next-gen Chiplets & Packaging

  • Published on May 5, 2024
  • AMD's Infinity Fabric On-Package has been in use since Zen 2 in 2019; it's time for a change! In this video we discuss the next-gen interconnect and packaging rumored to be introduced with Zen 6, including silicon interposer, silicon bridge and organic RDL technologies.
    Support the channel on Patreon: www.patreon.com/user?u=46978634
    Follow me on Twitter/X: / highyieldyt
    MLID Zen 6 leak video: • AMD Zen 6 Venice Leak:...
    0:00 Intro
    1:21 Infinity Fabric On-Package / Zen 2
    3:22 Next-gen interconnects
    3:46 Silicon Interposer Technology
    6:12 Silicon Bridge Technology
    9:22 Organic RDL Technology / AMD Infinity Links
    13:11 Zen 6 Layout & Packaging
    15:14 Conclusion
  • Science & Technology

Comments • 303

  • @alpha007org
    @alpha007org 2 หลายเดือนก่อน +429

    You're skilled at illustrating how industry buzzwords translate into real-world scenarios. Much appreciated.

    • @LabiaLicker
      @LabiaLicker 2 หลายเดือนก่อน

      yes

    • @filipenicoli_
      @filipenicoli_ 2 หลายเดือนก่อน

      But are they really buzzwords? I mean, most packaging acronyms are somewhat descriptive.

    • @BirnieMac1
      @BirnieMac1 หลายเดือนก่อน +4

      @@filipenicoli_ The difference is whether the person using it understands what it means and whether it's being used to inform or to advertise.
      Like the word "modular", for example: it means something very different in circuitry and engineering as a whole, but in a commercial context it's a painful buzzword.

    • @theminer49erz
      @theminer49erz 26 วันที่ผ่านมา

      Sure is! Makes you wonder why the manufacturers can't just explain it that way, but I trust him more anyway.

  • @HighYield
    @HighYield  2 หลายเดือนก่อน +242

    2:56 The IF protocol isn't PCIe, that's what the physical layer is using (basically the wires). The protocol is based on HyperTransport. Please disregard my mistake.
    And, as someone on Patreon pointed out, Navi 31/32 might already use ASE FOCoS packaging; we don't know for sure if it's InFO_oS/R from TSMC.

    • @bradmorri
      @bradmorri 2 หลายเดือนก่อน +14

      Latency is not the bogeyman that everyone seems to think it is. The larger caches in the Zen CPUs help mitigate memory latency and actually benefit SMT by giving the second thread a little more room to perform while thread 1 waits for data.
      Your description of Zen 2's Infinity Fabric is correct: each interconnect is a single point-to-point connection, each of which can be saturated by the data from a dual-channel memory implementation. Zen 3 changed things, replacing the single bidirectional point-to-point connections with a loop that provides dual bidirectional interconnects, doubling the data transfer bandwidth and eliminating the saturation bottlenecks when memory reads and GPU writes were competing with each other, which caused the slow gaming performance on the 1000-3000 series Ryzen CPUs.
      Zen 4 changed things up a little to reduce power consumption, limiting the IF to 2000 MHz instead of the frequency of the memory. Dual connections at 2000 MHz don't double the memory bandwidth requirements of DDR5-6000, but still provide enough bandwidth not to be bottlenecked by the dual channels of memory and the GPU competing for IF bandwidth.
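      [Editor's note: a rough back-of-the-envelope sketch of the bandwidth balance described in this comment. The per-FCLK payload widths (32 B read / 16 B write per CCD link) and the DDR5-6000 dual-channel figure are assumptions based on commonly cited numbers, not confirmed AMD specs.]
```python
# Rough Infinity Fabric link vs. dual-channel DDR5 bandwidth estimate.
# All per-clock payload widths are assumptions, not official AMD specifications.

FCLK_HZ = 2.0e9              # assumed Infinity Fabric clock (2000 MHz)
IF_READ_BYTES_PER_CLK = 32   # assumed read width per CCD<->IOD link
IF_WRITE_BYTES_PER_CLK = 16  # assumed write width per CCD<->IOD link

if_read_gbps = FCLK_HZ * IF_READ_BYTES_PER_CLK / 1e9
if_write_gbps = FCLK_HZ * IF_WRITE_BYTES_PER_CLK / 1e9

# Dual-channel DDR5-6000: 2 channels x 64-bit x 6000 MT/s
dram_gbps = 2 * 8 * 6000e6 / 1e9

print(f"IF link read bandwidth:  {if_read_gbps:.0f} GB/s")   # ~64 GB/s
print(f"IF link write bandwidth: {if_write_gbps:.0f} GB/s")  # ~32 GB/s
print(f"Dual-channel DDR5-6000:  {dram_gbps:.0f} GB/s")      # ~96 GB/s
```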

    • @johndoh5182
      @johndoh5182 2 หลายเดือนก่อน +1

      I already knew what you meant, because you didn't say that it uses PCIe; you said it's based on PCIe.

    • @johndoh5182
      @johndoh5182 2 หลายเดือนก่อน +8

      @@bradmorri Since there's no way to compare, your statement is baseless as there's no data to back it up. I have to disagree when you run apps that are heavily threaded and data always needs to be passed between threads AND you have a 2-CCD part. That latency is going to add up. And in fact this is why Intel has been able to compete in different areas against Zen 4.
      The next issue is, and I'm sorry, you're just GOING to be wrong here: as cores get faster, whether through IPC uplifts or clock speed improvements, EVERY bit of latency will matter more and more, and to say otherwise would be wild.
      Cache is NOT read-ahead, and cache ONLY provides a benefit in certain applications, mostly when you need to keep REUSING the same data/code over and over, and in the world of PC this happens more in gaming than anything else, which is why X3D parts are better for gaming. But if you do a render task, sorry, but that cache is almost worthless because you are CONTINUALLY using new data from streams, and then creating new streams. These are read-from-memory or read-from-disk, then write-back-to-disk operations. Now, for code there's a lot more benefit from bigger L1 and L2 caches, especially when you keep running some function over and over again, as in that render task. You don't get the benefit for the data with a larger cache, but you do get the benefit for the code.
      I mean, really, go check out any of the myriad CPU reviews and look at which applications have benefited from larger L1, L2 and L3 caches. Latency in core-to-core data transfers, which once again will happen in heavily threaded apps WHEN you have to pass data from a core on one CCD to another, is impacted by that latency, and it's part of the reason why moving to 2-CCD parts doesn't scale as nicely as most people would want. But that latency also affects other operations, as I said.

    • @D0x1511af
      @D0x1511af 2 หลายเดือนก่อน +2

      Can you make a video about Intel's backside power delivery technology?

    • @shanemshort
      @shanemshort 2 หลายเดือนก่อน +4

      @@bradmorri Your comment is at odds with itself.
      There's a reason why we don't have monster L1/L2 caches on CPUs, which is latency. The physically bigger the cache becomes, the higher the latency and the less efficient the core becomes. That's why AMD made such a huge deal about V-Cache and 3D stacking: it allowed them to make the caches larger without moving them physically further away from the logic that needs them.
      And why do we need big caches? Latency. Having to wait to go out to memory is slow, and if we can avoid doing so, we should. DMA exists to cut latency; it's one of the main drivers for CXL, etc. Some things are more latency-bound than others, obviously, but poor latency hurts everything.
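      [Editor's note: a minimal sketch of the cache-size vs. latency trade-off argued in this thread, using the standard average memory access time (AMAT) formula. The latency and hit-rate numbers are illustrative assumptions, not measurements of any specific Zen part.]
```python
# Average Memory Access Time: AMAT = hit_time + miss_rate * miss_penalty.
# Numbers below are illustrative assumptions, not real Zen figures.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average access latency for one cache level backed by a slower level."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Smaller, faster L3 vs. a bigger, slightly slower L3 (e.g. stacked V-Cache).
small_l3 = amat(hit_time_ns=10.0, miss_rate=0.30, miss_penalty_ns=80.0)  # -> 34 ns
big_l3 = amat(hit_time_ns=12.0, miss_rate=0.15, miss_penalty_ns=80.0)    # -> 24 ns

print(f"small L3 AMAT: {small_l3:.1f} ns")
print(f"large L3 AMAT: {big_l3:.1f} ns")
# A larger cache can win overall despite a higher hit latency,
# as long as the reduced miss rate outweighs the extra hit time.
```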

  • @fVNzO
    @fVNzO 2 หลายเดือนก่อน +138

    I remember Adored's video from years and years ago talking about new interposer technology. As always, there's a 5-10 year gap from research to application.

    • @slimjimjimslim5923
      @slimjimjimslim5923 2 หลายเดือนก่อน +1

      Intel has already used silicon interposers since the Alder Lake era (2020-2021). Their tiles use Foveros. Intel EMIB is like within-package communication, which is what AMD does with chiplets. Pretty cool Intel tech. If Intel has already been doing it since 2021, I think it won't be long until AMD catches up and starts using silicon interposers for their chiplets. Maybe in 2-3 years.

    • @hammerheadcorvette4
      @hammerheadcorvette4 2 หลายเดือนก่อน +9

      He covered the "Buttered Donut" and many laughed. Turns out, he was right and Zen3 did have the "Buttered donut" tech. It was the foundation for the 3D caches

    • @fVNzO
      @fVNzO 2 หลายเดือนก่อน +1

      @@hammerheadcorvette4 That sounds very dramatic. Last time I checked, AMD never specified their chiplet topology, but that was back when Zen 3 launched.

    • @cbrunhaver
      @cbrunhaver 2 หลายเดือนก่อน

      @@slimjimjimslim5923 AMD did it in GPUs prior to that with Fiji (Radeon Fury).

    • @christophermullins7163
      @christophermullins7163 2 หลายเดือนก่อน

      Substrate manufacturing can be incredibly difficult, similar to the silicon itself. Future CPUs will be staggeringly complex beyond the transistors themselves.

  • @EyesOfByes
    @EyesOfByes 2 หลายเดือนก่อน +108

    I guess Apple makes back their Ultra SoC interposer investment by charging 200 dollars for one additional 256GB generic SSD NAND flash chip.

    • @aylim3088
      @aylim3088 2 หลายเดือนก่อน +35

      £200 for 8GB of ram..

    • @ps3guy22
      @ps3guy22 2 หลายเดือนก่อน +12

      lol never thought of it like that

    • @alesksander
      @alesksander 2 หลายเดือนก่อน +9

      Yeah, Apple tax. But in terms of the tech implementation, IMO Apple did wonders with their own chips. I can only applaud the performance. Especially the GPU section is very, very impressive per watt.

    • @dmtd2388
      @dmtd2388 2 หลายเดือนก่อน

      Apple = trash retail marketing for non-technical crowds of sheep

    • @robblincoln2152
      @robblincoln2152 2 หลายเดือนก่อน +4

      Apple is planning on using a silicon interposer for the construction of individual chips, meaning each processing core will be cut and glued into place, as will each cache, GPU, neural processor, analog processor and whatever specialty chunks they decide to add next. Shouldn't expect to see it before M7 though (~2028ish).

  • @megamasterbloc
    @megamasterbloc 2 หลายเดือนก่อน +41

    Si interposers are costly because they are generally huge, but on a consumer platform like Zen 6 they can make sense if AMD manages to shrink both the logic and IO die and put them right next to each other. At that point, though, they might as well use EMIB.

    • @thevaultsup
      @thevaultsup 2 หลายเดือนก่อน +11

      Still too expensive for a consumer platform; InFO is what will be used, like on RDNA 3.

    • @keylanoslokj1806
      @keylanoslokj1806 2 หลายเดือนก่อน

      @@thevaultsup What do those terms mean?

    • @thevaultsup
      @thevaultsup 2 หลายเดือนก่อน +4

      @@keylanoslokj1806 Consumer platform = Ryzen; InFO = new ways of packaging chiplets. I mean, I don't even know what you actually mean; the video already explains most of it.

  • @cat81702
    @cat81702 28 วันที่ผ่านมา +1

    I really love how you used the green ambient light for an AMD video.

  • @PaulDodd
    @PaulDodd 2 หลายเดือนก่อน +17

    Fantastic video. Really enjoyed the breakdown and elaboration on the various interconnect options and their cost/benefit implications. Subbed for more!

  • @Subut
    @Subut 2 หลายเดือนก่อน +5

    Interesting, nice insight into complicated topics. Thanks for simplifying.

  • @Tithulta1
    @Tithulta1 2 หลายเดือนก่อน +2

    Wow, this was sooo above my understanding, but you made it understandable, and the hows and whys!

  • @OfSheikah
    @OfSheikah 2 หลายเดือนก่อน +8

    Here's to more insight and knowledge from none other than High Yield! Thank you for the lessons!

  • @stennan
    @stennan 2 หลายเดือนก่อน +5

    These are excellent illustrations and explanations for those of us who are nerds but not experts!

  • @SupraSav
    @SupraSav 2 หลายเดือนก่อน +5

    The best deep dives on the tube. Cheers🍻

  • @danieloberhofer9035
    @danieloberhofer9035 2 หลายเดือนก่อน +12

    I concur with your analysis and your conclusion that Infinity Link (meaning InFO in the end) will be AMD's next interconnect technology. Moreover, I'm not only seeing AMD go with the same interconnect technology for client and server, I see the mere potential to do so as a very compelling argument in favor of Infinity Link.
    Also, it would be a very AMD thing to do. Ever since their return to competitiveness, AMD has capitalized on implementing the most cost-effective solution to an engineering problem that just gets the job done without overextending themselves and further iterating on proven solutions afterwards.
    Other than that, a very well thought out video that I enjoyed a lot! Your presentation skills have improved significantly. Hard to believe you're still doing all this in your spare time. Should you ever come somewhere around the middle of Germany, I'd very much like to have a beer together. 😉
    Oh - and Tom has tainted you. You've obviously converted to the "mm-squared" crowd. 😂

  • @Allyouknow5820
    @Allyouknow5820 2 หลายเดือนก่อน +8

    Very good, sensible, credible video! Discovered your channel with the Steam Deck OLED video you did, and this one is excellent as well!
    One question about Infinity Links:
    Since it improves latency and bandwidth, would that be a major advantage for the iGPU as well? Even by Zen 6 it's highly unlikely that we'll have on-package GDDR or HBM, but if the local cache were shared between CPU and iGPU, that would already alleviate somewhat the enormous bottleneck of the iGPU, right? While benefitting from the energy efficiency, and only going out to main memory when needed.

  • @jonathanwagner9279
    @jonathanwagner9279 2 หลายเดือนก่อน +2

    Great info and well presented as always! Thanks for the explanation, and organic interposer technology definitely seems like the best solution for Zen 6. I also am glad to hear you talk about how power and heat of data transmission is becoming a performance bottleneck, and the role of advanced packaging in solving it. I'd love to see you do a video on in memory/near memory computing and how that may filter down to consumer products.

  • @salmiakki5638
    @salmiakki5638 2 หลายเดือนก่อน +32

    What packaging technology is utilized on Apple's Max chips? Are they silicon bridges?
    Anyhow, thank you for these explainers and compendiums. Really appreciated.

    • @unvergebeneid
      @unvergebeneid 2 หลายเดือนก่อน +18

      Do you mean the Ultra versions, which is two Max chips glued together? Yes, that uses silicon bridges. Even the Max chips that don't end up in an Ultra still have a relatively large portion of their area dedicated to the interconnect they don't use.

    • @HighYield
      @HighYield  2 หลายเดือนก่อน +19

      There's no official information, but Apple is using a silicon bridge. It's either CoWoS-L or InFO_LSI.

    • @salmiakki5638
      @salmiakki5638 2 หลายเดือนก่อน +1

      Thank you both for the answer.
      Yes, I meant the Ultra version. I had the lineup mixed up in my mind.
      This is interesting then, since it means that the vias that connect to the bridge are vertical with respect to the plane of the die, and not stacked on the side as per Apple's marketing material. I was wondering how that was possible through lithography.

    • @unvergebeneid
      @unvergebeneid 2 หลายเดือนก่อน +4

      @@salmiakki5638 It is a very confusing lineup. Pro, Max, Ultra... you can be forgiven for thinking the Max would be... like... the max and not the mid :D

  • @TheGunngir
    @TheGunngir 2 หลายเดือนก่อน +3

    Hi, just subscribed - your channel rang a bell from a MooresLawIsDead podcast you participated in!

  • @Pbenter
    @Pbenter 2 หลายเดือนก่อน +1

    I was just thinking about this while I was changing the thermal compound on my 7900 XTX the other day. Thanks for sharing more about what is going on under those chips!

  • @DestinyAdvisor
    @DestinyAdvisor 2 หลายเดือนก่อน +11

    5:33 who else thought at first he was showing the headquarter of some random silicon tech company

    • @jktech2117
      @jktech2117 2 หลายเดือนก่อน

      now i cant unsee

  • @mitnetrom4422
    @mitnetrom4422 2 หลายเดือนก่อน +1

    Wow^^ super interesting! Many thanks for your efforts!

  • @anonymouscommentator
    @anonymouscommentator 2 หลายเดือนก่อน

    i always love your videos, thank you very much!

  • @Faisal1504
    @Faisal1504 2 หลายเดือนก่อน

    Very well explained thank you.

  • @damonvanopdorp2905
    @damonvanopdorp2905 2 หลายเดือนก่อน

    I hadn't understood the "advanced packaging" topic mentioned over recent years, but now I feel all caught up :)

  • @snowdog993
    @snowdog993 2 หลายเดือนก่อน +1

    Motherboard redesign to allow more than 128 PCIe lanes at full speed. Imagine having at least 4 PCI Express slots with x16 lanes at full speed and having workstation/server capability in a moderately priced package. Imagine the improvement. The only thing is, one can't work without the other.
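    [Editor's note: for scale, a rough lane-count and bandwidth estimate for the configuration imagined above. The ~4 GB/s-per-lane figure assumes PCIe 5.0 and is approximate.]
```python
# Rough bandwidth math for four full x16 PCIe slots (assuming PCIe 5.0).
# ~32 GT/s per lane with 128b/130b encoding is roughly 4 GB/s per direction.

GBPS_PER_LANE = 4.0   # approximate PCIe 5.0 per-lane bandwidth, one direction
SLOTS = 4
LANES_PER_SLOT = 16

total_lanes = SLOTS * LANES_PER_SLOT
total_gbps = total_lanes * GBPS_PER_LANE

print(f"{total_lanes} lanes across {SLOTS} x16 slots")           # 64 lanes
print(f"~{total_gbps:.0f} GB/s aggregate slot bandwidth per direction")
# 64 lanes just for the slots; well over 128 once NVMe, USB and chipset
# uplinks are included, which is why the comment above asks for so many lanes.
```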

  • @Vuhhin
    @Vuhhin 2 หลายเดือนก่อน +1

    These animations are so cool, awesome video!

  • @MrEddieLomax
    @MrEddieLomax 2 หลายเดือนก่อน

    A great video. One metric that could be the most important is thermal efficiency: if we overheat, then performance is limited. Does one connector dissipate heat better than the others?

  • @Mio1570
    @Mio1570 2 หลายเดือนก่อน +2

    You make a good case for what AMD might do for Zen 6 consumer products. I also think we might get a preview of it with Zen 5 based Strix Halo if it is multi-die as has been rumored. InFO_oS seems like the natural method of going about since we've already seen it work with GPU IP in RDNA 3 and it would need a more efficient interconnect for mobile and APU use. Odds are probably still good that for at least some enterprise products they will use silicon bridges or interposers, especially since like Intel, AMD will also be looking to use HBM on Epyc if the rumors pan out.

  • @PhilfreezeCH
    @PhilfreezeCH 2 หลายเดือนก่อน +14

    5:20 I don't really see why the EUV reticle matters. As long as you don't place any (or at least not too much) logic in the interposer and it really is just an interposer, then you can use way less advanced nodes with larger reticles, or where mask stitching is easier to do.
    In fact, the upper metals in newer process nodes are still done using immersion machines, not EUV (at least that was true the last time I looked at it, and it would be weird if it had changed). You need to go through the upper metals anyway, so it's not like you can achieve a higher bump density, so using the same expensive process nodes for an interposer makes little sense in my opinion (as long as you don't put serious logic in it, that is).
    I just looked it up: our last research chip, Occamy, had chiplets fabbed in 12nm and the interposer used 65nm.
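    [Editor's note: a hedged cost sketch of why interposer area, rather than node cost, tends to be the concern. The wafer prices and the defect-free assumption are illustrative guesses, not foundry figures.]
```python
import math

# Illustrative dies-per-wafer estimate for a 300 mm wafer.
# Wafer prices below are assumptions, not quotes.
WAFER_DIAMETER_MM = 300
WAFER_AREA_MM2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,700 mm^2

def gross_dies(die_area_mm2: float) -> int:
    """Very rough dies-per-wafer; ignores edge loss and scribe lines."""
    return int(WAFER_AREA_MM2 / die_area_mm2)

OLD_NODE_WAFER_COST = 2_000       # assumed USD, mature 65nm-class wafer
LEADING_EDGE_WAFER_COST = 17_000  # assumed USD, leading-edge wafer

interposer_area = 800  # mm^2, near the reticle limit
chiplet_area = 70      # mm^2, roughly CCD-sized

print(f"Interposer (old node): ~${OLD_NODE_WAFER_COST / gross_dies(interposer_area):.0f} each")
print(f"CCD (leading edge):    ~${LEADING_EDGE_WAFER_COST / gross_dies(chiplet_area):.0f} each")
# Even on a cheap, mature node, a reticle-sized interposer eats a large slice
# of the wafer; its area, not its node, is where the cost comes from.
```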

    • @JorenVaes
      @JorenVaes 2 หลายเดือนก่อน

      You're at ETH?
      I agree, it makes little sense for the EUV reticle to matter. As far as I am aware, CoWoS uses passive silicon interposers. I know ST has done quite a bit of work on active interposers, moving things like power management, clock management and power-gating hardware into the interposer, with the interposer based on something like a 65nm technology. It could even offset the higher cost of the interposer, as you save area on the 2/3nm CPU die because you move that stuff onto the interposer.

    • @BobHannent
      @BobHannent 2 หลายเดือนก่อน +2

      That was exactly what I was thinking, older process node fabs would be delighted to have the opportunity to fabricate a relatively high value interposer die and you would get great yields on such a simple layout.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 2 หลายเดือนก่อน

    Thanks for providing us these insights. Probably you are right in your conclusion. They need to feed the increasing number of cores with more memory bandwidth. Didn't know they left so much power consumption on the table 😮, good that they have so much room for improvement.

  • @ramr7051
    @ramr7051 2 หลายเดือนก่อน

    Very interesting video! Micro (nano?) electronics are insane.

  • @christophermullins7163
    @christophermullins7163 2 หลายเดือนก่อน

    Damn, I love this stuff... right up there with the best on YouTube. No one else is delivering such digestible info on these topics.

  • @marvinranaldson3457
    @marvinranaldson3457 2 หลายเดือนก่อน +1

    I wonder how this changes the X3D implementation?
    I can see them carrying over the existing design where the memory chiplet is stacked on the CCD. I could also see them stacking the CCD on top of the memory chiplet, removing the need for TSVs on the CCD, enabling increased logic density and a smaller die area on the cutting-edge node. This would improve thermals as well.
    That being said, AMD could use a Si interposer with integrated memory for the X3D variants and an organic interposer for the non-X3D variants.

  • @mika2666
    @mika2666 2 หลายเดือนก่อน +1

    For Epyc I think the silicon interposer makes a lot of sense as the margins are higher there, for Ryzen, AMD's Z6 solution makes a lot of sense

  • @kmalnasef1512
    @kmalnasef1512 2 หลายเดือนก่อน

    great explanation 👍👍👍
    thank you

  • @Mario211DE
    @Mario211DE 2 หลายเดือนก่อน

    Thanks! I can hardly wait for Zen 5.
    Thanks for your amazing content. Can't wait for Zen 5.

  • @kingkrrrraaaaaaaaaaaaaaaaa4527
    @kingkrrrraaaaaaaaaaaaaaaaa4527 2 หลายเดือนก่อน +14

    I wonder what comes next after silicon bridges.
    Stacking dies on top of the I/O die?

    • @benjaminoechsli1941
      @benjaminoechsli1941 2 หลายเดือนก่อน +6

      ​@@brainletmong6302Yeah. AMD claims it's no problem with two layers (normal CPU + V-cache), but three layers? More? We're bound to see issues.

    • @DigitalJedi
      @DigitalJedi 2 หลายเดือนก่อน +11

      From what I've seen the next step is to get latency and bandwidth of existing interconnects back to monolithic levels, but far future I would expect to see stacked logic and I/O like you've said. I could see CCDs stacked on top of an active interposer that houses the I/O functions.

    • @pham3383
      @pham3383 2 หลายเดือนก่อน

      How do they manage the heat layer in X3D variants? High IPC, low latency and high-cache CPUs are AMD's future, if they don't mess up.

    • @marshallb5210
      @marshallb5210 2 หลายเดือนก่อน

      the current x3d stacking is fine, it's just not rated for the same high temperature as the CCD below

    • @maynardburger
      @maynardburger 2 หลายเดือนก่อน +4

      @@pham3383 The obvious and more sensible option is to just put the cache chip underneath instead of on top. The drawback is that you basically can't do 'optional' cache chip additions anymore and have to use it as standard, unless you want to make a different chiplet without the same connections underneath. But the benefits are great. You can use more cache, you can get better cooling on the compute that needs it, and you can put more cores in a CCD since you don't need to put any large L3 on the main compute die anymore (or alternatively use a smaller die for the compute chiplet).

  • @jrherita
    @jrherita 2 หลายเดือนก่อน +5

    Curious why the ‘double die’ method used with early Pentium D’s was abandoned. It seemed like a good solution for two dies communicating with each other and yields. If both dies were good, you have really low power intercommunications already in place. If one die is good and one is bad, you can create a lower end chip by slicing in half. If the yields are mixed, then you have an intermediate product (i.e. 7900X instead of 7950X).

    • @unvergebeneid
      @unvergebeneid 2 หลายเดือนก่อน +1

      I think the Pro vs. Max version of Apple Silicon chips use this approach.

    • @looncraz
      @looncraz 2 หลายเดือนก่อน +4

      Data rates are much higher these days. First gen Ryzen used the dual die method, but it had all sorts of problems moving memory around efficiently when scaled out.
      The IO die method allowed keeping the cores all fed evenly and kept costs down. But, DDR5 is outpacing the bandwidth AMD can get with the current link density, so a solution to that issue is required.

    • @looncraz
      @looncraz 2 หลายเดือนก่อน

      @@unvergebeneid Sort of, yes, but they use the area of the die that would normally be cut to transfer data, so it's just one piece of silicon.

    • @TrueThanny
      @TrueThanny 2 หลายเดือนก่อน +3

      The Pentium D was a terrible solution. There was no interconnect at all. They were two separate processors communicating over the front side bus through the socket. Same situation later with the Core 2 Quad, which was two separate Core 2 Duo chips that could only communicate over the FSB through the socket - no interconnect between the dies at all on the package.

    • @unvergebeneid
      @unvergebeneid 2 หลายเดือนก่อน +1

      @@TrueThanny wasn't the FSB famously slow and bottlenecky anyway?

  • @ortegagonza1
    @ortegagonza1 2 หลายเดือนก่อน

    Best explanation ever!

  • @awdrifter3394
    @awdrifter3394 2 หลายเดือนก่อน

    16:22 I would like to see photonic interconnects. There were articles about them for years, but so far no actual mass market products use them.

  • @j.rumbleseed
    @j.rumbleseed 27 วันที่ผ่านมา

    If they don't take advantage of the optical properties of the silicon, they will be left behind. As a note...It was the backside power delivery breakthrough, and the optical silicon switching and outboarding the I/O that enabled Moore's law to scale to 6x.

  • @samdovakin2977
    @samdovakin2977 2 หลายเดือนก่อน

    love your videos, thank you very much

  • @anthonylipke7754
    @anthonylipke7754 2 หลายเดือนก่อน

    I don't know how hard the assembly is, but I do think the bridging of smaller parts gets around the big upscaling problem. It seems like assembly has a longer future path. Can the organic cost benefits be added to bridges?

  • @spambot7110
    @spambot7110 2 หลายเดือนก่อน

    5:20 why the EUV reticle limit in particular? I would've assumed you could use a much older node since that would still be way denser than a PCB

    • @HighYield
      @HighYield  2 หลายเดือนก่อน +2

      I misspoke, it’s not about the EUV reticle limit since most interposers are produced in older nodes. But DUV has the same 858mm2 reticle limit.

  • @GeoStreber
    @GeoStreber 2 หลายเดือนก่อน

    How about connecting the individual chiplets via edge-on connectors?

  • @flex_ible
    @flex_ible 2 หลายเดือนก่อน +12

    While watching a video I got an ad for an advanced packaging company... They make custom cardboard boxes.

    • @benjaminoechsli1941
      @benjaminoechsli1941 2 หลายเดือนก่อน

      Since I've started watching videos like this, I've started getting ads for help with advanced macular degeneration.
      Wrong AMD, advertisers. 😂

  • @reptilespantoso
    @reptilespantoso 2 หลายเดือนก่อน +1

    Love this breakdown. Rocking an undervolted 7700X and it's a VERY capable CPU.

  • @olnnn
    @olnnn 2 หลายเดือนก่อน

    Memory bandwidth is also very relevant on the desktop platform for the graphics side of APUs, if that's affected by this.

  • @gehtgoogle_nixan
    @gehtgoogle_nixan 25 วันที่ผ่านมา

    With the new Infinity Link, would it be easier to place smaller chiplets? I think it would be more efficient to surround the IO die with the CPU dies to spread the heat more evenly and reduce the paths.

  • @johnpaulbacon8320
    @johnpaulbacon8320 2 หลายเดือนก่อน

    They all sound like good options. Wonder how costly it would be to make technology providers for each option.

  • @shintsu01
    @shintsu01 2 หลายเดือนก่อน

    Looks like i will go and update to Zen6 3D model once it comes out :)

  • @filipenicoli_
    @filipenicoli_ 2 หลายเดือนก่อน +1

    I was amazed to see package-on-package solutions being used on earlier versions of Raspberry Pis some years ago. Would thermal management be the major technical challenge for these kinds of platforms?

    • @PaulSpades
      @PaulSpades 29 วันที่ผ่านมา

      Mobile SoCs have been using memory on package for a very long time; it's tech based on Toshiba's DRAM package stacking from two decades ago. RPis have always used Broadcom SoCs for embedded solutions, which do have stacked DRAM over logic, and they were somewhat early in implementing the technology for the mobile market.
      The main limitation is memory size, although you can stack 10 high right now. Some of the cooling efficiency loss is gained back as memory transfers are more efficient with less metal in the way. But yes, the more you stack, the more you alternate silicon and packaging material and silicon etc., creating a heat barrier.

  • @michaelhackl8358
    @michaelhackl8358 2 หลายเดือนก่อน +2

    How is the EUV reticle size affecting the interposer? They would use EUV for the core chiplets and increasingly older processes for the bigger silicon. Wouldn't be surprised if the interposer was 28nm or above.

    • @HighYield
      @HighYield  2 หลายเดือนก่อน +5

      DUV has the same reticle size. And yes, you are right, interposers are mostly older, non EUV nodes.

  • @buckstarchaser2376
    @buckstarchaser2376 2 หลายเดือนก่อน

    I sincerely hope they will produce a breakthrough in bandwidth capacity so as to make commonplace the real-time speech to text to speech, synchronizing, and blending so as to remove muddling accents without completely removing speakers' otherwise natural speech character and tonality. Such a breakthrough would be equivalent to inventing the babelfish, and probably result in a Nobel Prize.

  • @bobbyboygaming2157
    @bobbyboygaming2157 2 หลายเดือนก่อน +1

    Cool Channel, finally, I get a good recommendation from youtube...

  • @tomcatvrkalisz820
    @tomcatvrkalisz820 2 หลายเดือนก่อน

    Would a new socket be needed, AM6?

  • @shieldtablet942
    @shieldtablet942 10 วันที่ผ่านมา

    On desktop and server it seems overall better to increase cache than fabric performance, if it comes at the expense of cost.
    One must not forget that the IF design was developed due to cost. It was the cheaper way to increase core counts beyond what was possible.
    Is the fabric power even an issue right now? From everything I see, the issue is that power is not scaling as well as density on new nodes. So we end up with 95C cores and 200W+ CPUs.
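    [Editor's note: for a sense of scale on the fabric-power question above, a minimal sketch using an energy-per-bit figure. The ~2 pJ/bit number for substrate-routed links and ~0.5 pJ/bit for denser fan-out/RDL links are assumed ballpark packaging figures, not AMD data.]
```python
# Interconnect power = energy per bit * bits moved per second.
# The pJ/bit values below are assumed ballpark figures, not AMD numbers.

def link_power_watts(pj_per_bit: float, gigabytes_per_s: float) -> float:
    bits_per_s = gigabytes_per_s * 1e9 * 8
    return pj_per_bit * 1e-12 * bits_per_s

traffic_gbs = 64  # assumed sustained CCD<->IOD traffic in GB/s

organic_substrate = link_power_watts(pj_per_bit=2.0, gigabytes_per_s=traffic_gbs)
dense_fanout = link_power_watts(pj_per_bit=0.5, gigabytes_per_s=traffic_gbs)

print(f"~{organic_substrate:.2f} W at 2.0 pJ/bit (substrate-routed link)")  # ~1.0 W
print(f"~{dense_fanout:.2f} W at 0.5 pJ/bit (denser fan-out/RDL link)")     # ~0.26 W
# Around 1 W vs. 0.25 W per 64 GB/s of traffic: small next to a 200 W CPU,
# but relevant for idle power and for mobile parts.
```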

  • @beardoe6874
    @beardoe6874 2 หลายเดือนก่อน

    There is no EUV reticle limit for interposers because the feature size should be well above the minimum for 193nm wavelength lithography.
    No one will make a huge interposer on a fab that could be making 28nm planar semiconductors instead. It's all going to be coming from depreciated fabs, probably from the 90nm days or earlier.
    The only question I have is if manufacturers will start putting active or passive components on interposers. Resistors seem like an easy thing to do. Inductors are typically done near top metal so that would also be easy. Capacitors would need poly, gate oxide and doping so that's a lot more process but it gets you close to doing CMOS or bipolar devices.

    • @HighYield
      @HighYield  2 หลายเดือนก่อน

      The DUV reticle limit is the same.

  • @dgo4490
    @dgo4490 2 หลายเดือนก่อน

    I don't expect the cheap and efficient on-substrate interconnect to go away; that's still good enough for way too many use cases. No need to invite the extra cost and production capacity constraints of advanced packaging for products that don't really need it.

  • @davidgunther8428
    @davidgunther8428 2 หลายเดือนก่อน

    I think AMD will again use the dense organic interconnect that they used on the RX 7900 XTX. A silicon interposer for Epyc would be difficult and expensive due to its size. So if servers aren't using interposers, consumer products won't either. I think the RDL material is a possibility for server too. Actually, since the compute chiplets are the same, server and consumer packages have to use the same fan-out packaging technology. So it will be RDL.

  • @youz382
    @youz382 2 หลายเดือนก่อน

    AMD could have just enlarged the area needed for silicon bridges if possible, but it's worth a look for an even layout and the concentrated heat dissipation needs of Zen 4. Should be coming, hopefully.

  • @medallish
    @medallish 2 หลายเดือนก่อน +1

    Didn't Vega also use an organic interposer? I think AMD only used silicon interposer once, in their Fury series card.

  • @nathangamble125
    @nathangamble125 2 หลายเดือนก่อน

    I honestly think that beyond Zen 6, CPU performance won't matter much in the consumer market, it will all be about GPU and AI acceleration.
    The best interconnect for CPUs will be the one that is the cheapest while not significantly limiting CPU performance.
    The best interconnect overall is the one that works best for GPUs and AI accelerators.

  • @systemBuilder
    @systemBuilder 2 หลายเดือนก่อน

    Isn't Zen 1 Threadripper the actual first occurrence of chiplets? Zen 2 was later, wasn't it?

  • @anthonyblacker8471
    @anthonyblacker8471 2 หลายเดือนก่อน

    I feel as though in the near future these connections will become even more thorough by using almost an organic material. I realize with silicon we are getting there, and this coming technology is amazing, imagine 'growing' all the interposers and 'wires' for a CPU.
    edit: oh. Well.. yeah then ok you explain this (and I didn't even realize they used this already, man I'm behind the times!!)

  • @Poctyk
    @Poctyk 2 หลายเดือนก่อน

    I wonder if this switch will somehow affect next iterations of MI chips, or will they stay on SI interposers because of the size of a chip

  • @Fractal_32
    @Fractal_32 2 หลายเดือนก่อน

    Whatever interconnect technology AMD does choose, I assume it will be the same between consumer-grade chips (Ryzen) and enterprise-grade chips (Epyc). The reason I believe this is that Epyc and Ryzen already share many of the same chiplets, and they could use the consumer-grade chips as a testing ground for their enterprise-grade chips and vice versa. (Like how 3D V-Cache was an Epyc-first technology that they brought to Ryzen.)

  • @50H3i1
    @50H3i1 2 หลายเดือนก่อน

    Brilliant

  • @drwho9437
    @drwho9437 2 หลายเดือนก่อน +1

    The era of band-aids and patches. It was the norm to bring as much as possible onto one die for decades, of course, because it saved power. But the cost per transistor has largely stopped scaling at 28nm-class nodes, so now this makes economic sense. Chiplets with just a PCB are hobbled. We can call these things anything we want, but they are all just fancy versions of wiring boards with more or less capacitance depending on the feature size. Intel seems to be betting on glass (not mentioned here). Although it is put forward that silicon is not economical, that seems to be what Intel is doing for now until they move to glass. It will be interesting to see who uses IFS technologies in the next few years.

  • @AlfaPro1337
    @AlfaPro1337 2 หลายเดือนก่อน +1

    So much gluing.

  • @Psychx_
    @Psychx_ 2 หลายเดือนก่อน

    5:19 Does an interposer have to be manufactured on the most bleeding-edge nodes though? Wouldn't something like 40/28nm be enough?

    • @HighYield
      @HighYield  2 หลายเดือนก่อน +2

      No, and they are not. Still, due to their size and mask stitching, they are expensive.

    • @DigitalJedi
      @DigitalJedi 2 หลายเดือนก่อน +2

      Intel is making MTL's on a modified version of an old node. 40nm might not offer the interconnect density, but 28-20nm would be plenty. I don't know what TSMC makes their silicon packages on, though; they might be using 16nm as they've had that node for ages.

  • @elfo7918
    @elfo7918 2 หลายเดือนก่อน

    I think it's great that the CPU manufacturer still pushes their ideas to improve the CPUs, even though AMD is now on top in many cases and way superior in the server market, and still has a bunch of points on which to improve Zen.
    Unlike 12 years ago, when Bulldozer came and Piledriver/Excavator had no real changes, just pushed the clock speed. Intel was superior in every measurement and also just pushed the clock speed and made nearly no changes.

  • @El.Duder-ino
    @El.Duder-ino 2 หลายเดือนก่อน

    Excellent prediction analysis, thank you once again for a very educative vid. Chip packaging will most certainly play a very important role in the future. As you elaborated, it's going to be about finding the right optimum between performance and cost. AMD's approach is clearly to be as performant per $ and as efficient per watt as they can be, to keep their competitive edge against the competition.
    Can you open a "can of worms" in future vids about Intel's approach of using glass substrates, as well as other competing technologies besides organic?
    I also wonder what Nvidia will deliver with their Arm custom cores and what kind of SoCs/APUs we are going to see from them. With the failed acquisition of Arm, Nvidia clearly showed their future plan to not be just a GPU gaming/compute accelerator company, but a complete CPU + GPU & all kinds of other accelerators company, like AMD and Intel.

  • @EntekCoffee
    @EntekCoffee 2 หลายเดือนก่อน

    Don't Meteor Lake CPUs use silicon interposers?
    If they change Infinity Fabric, I think we should expect an additional "interconnect tier" that is decoupled from Infinity Fabric: faster than Infinity Fabric, but slower than L3 links.
    Chips and Cheese also showed that the L3 latency penalty from N21 to N31 was surprisingly small.

  • @sinom
    @sinom 2 หลายเดือนก่อน

    While the locations of the chiplets on the CPU are the same, actually looking at their layouts they are very different. The IO die between Zen 2 and Zen 3 is basically the same, of course, but the core chiplets are all very different from one another. And the Zen 4 IO die also looks really different from the previous one.

  • @user-hl5zx2ce2q
    @user-hl5zx2ce2q 2 หลายเดือนก่อน +4

    Excellent video, I'm both intrigued and confused.
    AMD's chiplet design seems to be much, much more rudimentary than Intel's EMIB and Foveros. Yet it worked much better. Why?

    • @spankeyfish
      @spankeyfish 2 หลายเดือนก่อน +10

      I think it worked because it's simpler. It didn't require as big a leap in technology as Intel's strategy.

    • @maou5025
      @maou5025 2 หลายเดือนก่อน +1

      Intel uses monolithic for desktop, chiplets for laptop. AMD uses monolithic for laptop, chiplets for desktop. Weird, but each has their own advantages.

    • @dex6316
      @dex6316 2 หลายเดือนก่อน +9

      Packaging is only one part of CPU performance and efficiency. Monolithic is the most performant and most efficient. More advanced packaging will not compensate for deficiencies in the node, microarchitecture, or core layouts/counts. You can use chiplets to increase performance by having more silicon than a monolithic design, or reduce costs by using dies so much smaller that the higher yields and simpler design pay for the packaging and then some.
      AMD uses a tiny chiplet to house their CPU cores and uses this chiplet in most of their server, workstation, and mainstream CPUs. With such scale AMD can easily bin their products to have high frequencies for desktop and excellent efficiency for server and workstation; they are also very cheap with excellent yields. AMD's usage of a separate IO die allows them to use a separate node for IO, and AMD can save a lot of money. The negatives of using such primitive packaging are higher in server and workstation. AMD remains more efficient than Intel by using more advanced nodes and having higher core counts. The higher core counts also allow AMD to maintain a performance lead. All together, AMD's chiplet philosophy reduces costs for the company.
      Intel is less efficient than AMD primarily due to less advanced nodes. The 13900K at ISO power is more efficient than the 5950X, but is competing with the 7950X, which has a node advantage. Meteor Lake is actually comparable in efficiency to Phoenix in spite of using tiles, but the silicon interposer causes it to be much more expensive than Phoenix. On server, Intel uses massive tiles over 5x as large as AMD's CPU chiplet. Intel is limited by yields, causing many of their server products to be delayed. Intel can't go bigger due to poor yields, so their server products have a core count disadvantage compared to AMD. Delays cause Intel to launch on poorer nodes, and having fewer cores further hurts performance and efficiency compared to AMD.
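      [Editor's note: a minimal yield-model sketch of the chiplet cost argument above, using a simple Poisson defect model. The defect density and die areas are illustrative assumptions.]
```python
import math

# Poisson yield model: Y = exp(-defect_density * die_area).
# Defect density and die sizes below are illustrative assumptions.

DEFECTS_PER_MM2 = 0.001  # assumed ~0.1 defects per cm^2

def yield_fraction(die_area_mm2: float) -> float:
    return math.exp(-DEFECTS_PER_MM2 * die_area_mm2)

small_ccd = 70  # mm^2, roughly CCD-sized
big_tile = 400  # mm^2, a large server-class tile

print(f"~{yield_fraction(small_ccd):.0%} of 70 mm^2 chiplets are defect-free")   # ~93%
print(f"~{yield_fraction(big_tile):.0%} of 400 mm^2 tiles are defect-free")      # ~67%
# Small chiplets keep yields high and let one die design serve desktop,
# workstation and server, which is the cost argument made above.
```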

    • @DigitalJedi
      @DigitalJedi 2 หลายเดือนก่อน

      AMD got into it first on consumer platforms. We've only seen EMIB in server chips from Intel so far. Up until very recently with Meteor Lake, which uses Foveros on a full interposer, all Intel consumer CPUs have been functionally monolithic. AMD actually loses power everywhere with their current interconnects compared to a monolithic chip, and you can see this in idle power draw where a 14900K can get into the single digit watts while the 7950X sits above 20W most of the time.
      Where they got ahead was with a process node advantage, sometimes multiple steps ahead. Intel's 14nm was impressive for how much they extracted from it and how much power the dies can take before exploding, but it was never a highly efficient node aside from the year or so it was brand new and everything else was just worse, making it look good in comparison.

  • @Pegaroo_
    @Pegaroo_ 2 หลายเดือนก่อน

    What technology do they use to connect the 3D V-cache chiplets? And could that be used to connect the chiplets to the IO die?

    • @kingkrrrraaaaaaaaaaaaaaaaa4527
      @kingkrrrraaaaaaaaaaaaaaaaa4527 2 หลายเดือนก่อน

      It uses TSVs.

    • @Pegaroo_
      @Pegaroo_ 2 หลายเดือนก่อน

      @@kingkrrrraaaaaaaaaaaaaaaaa4527 Cool, could that be used to stack the CPU dies directly on the IO die?

    • @kingkrrrraaaaaaaaaaaaaaaaa4527
      @kingkrrrraaaaaaaaaaaaaaaaa4527 2 หลายเดือนก่อน

      @@Pegaroo_ It could. Just a matter of cost, as that's the main reason why HBM is so expensive compared to GDDR.

  • @OneAngrehCat
    @OneAngrehCat 2 หลายเดือนก่อน

    My guess would be that Ryzen will take Organic RDL/Infinity Links for sure, but for server CPUs, I'd bet everything on Silicon Bridges. There's just no point in going cheap there, especially if they're reworking the entire packaging.

  • @Tech2C
    @Tech2C 2 หลายเดือนก่อน +5

    I read somewhere Zen6 would have 3D cache underneath

    • @jamegumb7298
      @jamegumb7298 2 หลายเดือนก่อน

      I am not sure AMD has the specs on Zen 6 finalized yet.
      Though putting it underneath makes for easier transport of heat away from the CPU.

    • @MaxIronsThird
      @MaxIronsThird 2 หลายเดือนก่อน

      @@jamegumb7298 Zen6 is over 2 years away, they're still in the simulation phase

    • @hyperion8382
      @hyperion8382 2 หลายเดือนก่อน

      Lower CPU temps are good, but is the 3D V-Cache itself sensitive to heat? ​@@jamegumb7298

  • @joechang8696
    @joechang8696 2 หลายเดือนก่อน

    the IOD should have a snoop filter or directory for cache coherence.

    • @jamegumb7298
      @jamegumb7298 2 หลายเดือนก่อน

      They should add the dual data back in and let the chipset be a glorified USB controller.
      Hopefully that allows much better routing and avoids retimers pushing the signal all over the mobo and back again.

  • @joelandersson3474
    @joelandersson3474 2 หลายเดือนก่อน

    Would it be possible to use older-generation silicon tech for the interconnect silicon stuff? Like, this might not be true, but my assumption would be that we only have a very limited capacity to make sub-5nm stuff but a much larger capacity to make 20nm-class stuff. And I was thinking: couldn't we just use the 20nm factories to make the interconnects cheaply and then the 5nm factories for the actual important chips? Surely the interconnects don't need to have the fanciest, smallest gates possible?

  • @HablaConOwens
    @HablaConOwens 2 หลายเดือนก่อน

    I wonder what applications zen 6 will do better considering the design change.

    • @kingkrrrraaaaaaaaaaaaaaaaa4527
      @kingkrrrraaaaaaaaaaaaaaaaa4527 2 หลายเดือนก่อน

      Frame pacing for games and latency-sensitive applications will be the biggest beneficiaries of Zen 6.
      This also solves energy consumption when communicating between different dies, so idle power consumption should go way down as well.
      So chiplets become a more viable technology for mobile applications like laptops and maybe even handhelds.

  • @mirkomeschini80
    @mirkomeschini80 2 หลายเดือนก่อน +1

    Why not 3D stacking, with chiplets on top of i/o die and L3 inside the i/o die? And Infinity links to connect more i/o dies on Epyc...

    • @DigitalJedi
      @DigitalJedi 2 หลายเดือนก่อน

      Think about how you would cool that stack. The I/O die can run fairly hot itself. L3 inside it could be interesting, but keeping as much cache close to the cores is better, which is what V-cache is for. Long term I suspect an active interposer handling I/O could happen, but bridges between dies would also do mostly the same job.

    • @mirkomeschini80
      @mirkomeschini80 2 หลายเดือนก่อน +1

      @@DigitalJedi Cooling-wise, it's the same as the current X3D parts, but with the hotter chips on top instead of on the bottom...

  • @m_sedziwoj
    @m_sedziwoj 2 หลายเดือนก่อน

    I think the same as you, and Intel is doing it with what they have: they don't have experience with organic RDL, and I don't even know if they have any demo of this technology.
    And RDL looks great because, from what I know, even if the package doesn't work, they can remove the RDL and put on a new one.

  • @user-qs7gk3bp1v
    @user-qs7gk3bp1v 2 หลายเดือนก่อน

    There were some reports from some builders about AMD CPU reliability problems compared to Intel. What do you think about that? Thanks.

  • @AFistfulOf4K
    @AFistfulOf4K 2 หลายเดือนก่อน

    Silicon interposers or bridges should be manufacturable on obsolete manufacturing processes like >28nm, no? And they wouldn't be nearly as sensitive to minor defects as a CPU or GPU. So they'd increase cost, but not anywhere near as much as you might guess based on the size of the "chip" compared to a modern CPU or GPU. Those old foundries are probably very cheap to place orders with, as well as very reliable.

  • @JohnWilliams-gy5yc
    @JohnWilliams-gy5yc 2 หลายเดือนก่อน

    TSMC made Ryzen : In-package interconnect wiring, performance and latency take a backseat.
    Intel 14nm & 10nm : A +100 MHz per year " *_Refreshed_* " backseat.

  • @JorenVaes
    @JorenVaes 2 หลายเดือนก่อน

    With technologies like bridge dies, I see a cut in latency by reducing the SERDES overhead. When your channel is sufficiently short, I think it is likely we see a switch back from lots of unsynced serial links to massive parallel buses with 'dumb' transceivers. This can cut out a lot of protocol overhead, saving power and latency.
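    [Editor's note: a toy comparison of the serial-vs-parallel trade-off described above for moving one 64-byte cache line. The lane counts, data rates and the SERDES latency adder are illustrative assumptions only.]
```python
# Time to move one 64-byte cache line: narrow, fast SERDES link vs. a wide,
# slow parallel bus enabled by a short bridge/fan-out channel.
# All figures are illustrative assumptions.

CACHE_LINE_BITS = 64 * 8

def serial_link_ns(lanes=16, gbit_per_lane=32.0, serdes_latency_ns=5.0):
    transfer_ns = CACHE_LINE_BITS / (lanes * gbit_per_lane)  # Gbit/s == bits/ns
    return transfer_ns + serdes_latency_ns

def parallel_bus_ns(wires=512, gbit_per_wire=2.0, phy_latency_ns=0.5):
    transfer_ns = CACHE_LINE_BITS / (wires * gbit_per_wire)
    return transfer_ns + phy_latency_ns

print(f"serial SERDES link: ~{serial_link_ns():.1f} ns per cache line")  # ~6 ns
print(f"wide parallel bus:  ~{parallel_bus_ns():.1f} ns per cache line") # ~1 ns
# The dense wiring of a bridge or RDL fan-out is what makes a
# 512-wire "dumb" bus physically feasible in the first place.
```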

  • @spuchoa
    @spuchoa หลายเดือนก่อน

    I really like your videos! I think Intel has a bumpy road ahead (in the short term) but can become the best fab in 5 years.

  • @Maisonier
    @Maisonier 2 หลายเดือนก่อน

    When are we going to have a 32GB HBM3 chiplet with the CPU?

  • @user-jv3zj1yr1v
    @user-jv3zj1yr1v หลายเดือนก่อน

    Why not place the RAM close to the chips on the same socket?

  • @yeagyeag
    @yeagyeag 2 หลายเดือนก่อน +4

    Any chance we'll see the ability for quad channel ram on consumer motherboards in the future? That bandwidth on the apple m chips looks so nice...

    • @jamegumb7298
      @jamegumb7298 2 หลายเดือนก่อน +1

      If CAMM2 modules take off, we first have to deal with those. CKD might become an optional part of the spec in the DDR6 we see this year.
      A single CAMM2 offers dual channel, and since they use less space, getting more "sticks" in takes less room. Much easier to stack 2-3 of those modules.

    • @marshallb5210
      @marshallb5210 2 หลายเดือนก่อน

      ​@@jamegumb7298i don't think you can stack more than two

  • @Great_America
    @Great_America 2 หลายเดือนก่อน

    AMD also needs to further enhance the 3D V-cache and provision all cores to have it in the higher end product line, and not just half.

    • @marshallb5210
      @marshallb5210 2 หลายเดือนก่อน +1

      both dies with v-cache isn't that much better from what i've read

  • @syndan9245
    @syndan9245 2 หลายเดือนก่อน +1

    Silicon interposers also kind of defeat the point of splitting the chips in the first place, no? The point was to lower manufacturing losses and inaccuracies while being more thermally, and thus energy, efficient by spreading the heat load. Not sure I'm following your logic, as it would undo all of the benefits.

    • @Poctyk
      @Poctyk 2 หลายเดือนก่อน +1

      You still keep the benefit of manufacturing different things on different nodes, not to mention easier semi-custom designs, which are AMD's forte, or at least an important part of their business.

  • @TerminalWorld
    @TerminalWorld 2 หลายเดือนก่อน +2

    The logo/intro sound is way too loud compared to the speech volume.

    • @HighYield
      @HighYield  2 หลายเดือนก่อน

      I'm sorry, the voice recording was at a very low volume for some reason and I had to bring the volume up by quite a bit. I tried to lower the volume of the intro, but apparently it wasn't enough. Will double-check next time!

  • @theminer49erz
    @theminer49erz 26 วันที่ผ่านมา

    Sorry for the delay, it's been a weird few months. Anyway, I was wondering how you think this compares to what Nvidia is doing with their "stacked dies"? It's the foundry's tech that Nvidia is using, as far as I understand it, so I don't think it's proprietary to Nvidia, but I haven't heard much besides their marketing releases, which I don't trust.
    It seems similar, but without a lot more time to look into it, I can't tell. I WOULD trust what you think though. Mainly because if you were not 100% sure, you would say so, while still providing what you do know/suspect. I have been watching your videos long enough now to KNOW any speculation you may engage in is extremely well calculated, well informed, carefully considered, and based on what facts you have plus your overall understanding/experience. I would take your speculation over any keynote hyperbole any day!
    I will completely understand if you miss this completely (which would not be the case if you are reading this lol), or don't see it for a long time, considering I'm much later to comment than usual. Plus, you have always taken the time to reply in the past, which I appreciate immensely, so please don't get bummed if you don't see this until 2+ years from now 😁 Great video as per usual! Thanks!

  • @RiddledEnigma93
    @RiddledEnigma93 11 วันที่ผ่านมา

    I see a future where the CPU will also house an SSD chip that will be utilized for the OS only, and all the programs will be stored on another SSD in the PC, thus reducing boot time!~

  • @shuginubi
    @shuginubi 2 หลายเดือนก่อน

    I always wondered why Bulldozer (and every subsequent architecture for AM3+) didn't have a north bridge chip alongside the CPU... it would have been so much better.

    • @dylf14
      @dylf14 2 หลายเดือนก่อน +1

      The north and south bridge are integrated into the IOD.

  • @BryantWhite
    @BryantWhite 2 หลายเดือนก่อน

    I'd also rather call them PWBs (printed wiring boards), as a better description of what they actually do.

  • @FougaFrancois
    @FougaFrancois 2 หลายเดือนก่อน

    You mixed substrate and PCB ... 😜

  • @butterflyfilms939
    @butterflyfilms939 2 หลายเดือนก่อน

    So is there still a need for a Zen6C or is Zen6 already C?