The R9 Fury was the first and only AMD GPU that I ever wanted. Tiny form factor and integrated water cooling. But of course I didn't have the money. Is it a good idea to find a used one today just for collection purposes? I assume the water cooler must not be working after such a long time.
No. This card was way too popular among miners. Also, the innovative design came at the price of less reliable memory, and you can't swap a memory chip at a local repair shop. So at the end of the day, while the technology is quite exciting and important for the future, I don't recommend buying old cards.
I remember the launch and the reviews from JayzTwoCents, Linus, GN, Bitwit, and Paul's Hardware. The cost of that HBM was what made the R9 forgotten, around $200 higher than the 980 Ti. It just didn't make sense paying more for a card with the same performance and 2GB less VRAM. I'm still rocking a Vega 56 I got second hand in 2019. It's slowly showing its age, and its next life is going to be running my couch gaming setup once I get my hands on a newer card.
I used to have an R9 Fury (which died during OC) and a Vega FE onto which I had to mod a Raijintek Morpheus cooler. Cool products, held back by the GCN architecture and, in the case of the Vega, the cooler. Seriously, with the stock blower it wouldn't run its full boost, as it would quickly hit 95 on core, 105 on the HBM2, and like 115 junction. Tuning and undervolting made them much more usable.
The HBM on Vega (or the card's IMC) was temperature limited. I got a ref. V56 from the first production run that hit European shelves, put it under water and flashed the Vega 64 BIOS. That HBM has been doing 1045MHz no problem ever since. With the stock cooler, this just wasn't possible to get stable. Almost all people who also did water cooling, put a Morpheus cooler on their card, or got an AIB model with sufficient cooling will tell you that they could crank their HBM clocks up by quite a bit. Later in the series this worked even with Hynix memory, once live VRAM timings editing became a thing.
Heeeeey, I got one of these in my backup PC. These things still sell for up to $250 on eBay and are sometimes not available at all. It's worth more than my primary 5700 XT! I love my little Nano, I wish HBM was still a thing in GPUs. The Nano was the neatest GPU I have ever owned.
Man, I remember wanting one of these so bad back in 2016. Constantly checking eBay to see if I could find one. They were so cool. Then I remember when the prices of Fiji crashed in... was it late 2016 or early 2017? The R9 Fury was going for like $300 or $400. I can't remember exactly, but my memory tells me it was something like half off. I remember wanting one so bad, but in hindsight it was an innovative lemon. The 4GB of memory really was quite limiting, although I think most people would have survived at the time. There were so many calls of "but 4K gaming!!" at the time, but the reality is that we were far from conquering 4K back then. Really, 4K was a pipe dream until the 2080 Ti in my opinion, and it has yet to be democratized still 5 years later.
I had one. It required an undervolt to work reliably due to thermal constraints. The memory could be overclocked. I shipped it off to a friend in Canada when I upgraded, and unfortunately the drivers are too old for him to use it.
High Yield: you forgot to mention that HBM technology is proprietary technology owned by AMD with patented status. Nvidia uses it on Tesla cards, but first used it experimentally on the Nvidia GTX Titan V 12GB HBM2 back in 2017. You also forgot to mention that any time Nvidia manufactures a GPU with HBM, they have to pay AMD for the rights to use it, and they have to buy the HBM from AMD :).
Actually, the Titan V wasn't Nvidia's first HMB2 card. It was implemented in their GP100 chip a year earlier, which made it into several Tesla models and a Quadro.
I wouldn't be so sure that AMD 'owns' HBM, as it sounds like multiple firms own patents related to HBM. Titan V was a really weird card all things considered, since basically every server card since then uses HBM but not consumer cards.
@@No-mq5lw do your history fact-checking right... there is no room for your "what you feel that you think you know". AMD developed HBM memory. They only brought in Hynix partially so that they could get access to large amounts of mass-production equipment for the actual memory. Yes, many server-grade GPUs have HBM, because there you need data-gobbling performance and not texture-crunching performance. For every HBM memory module used, Nvidia has to pay quite a hefty amount of royalties to AMD. And that is the right way if you as a manufacturer have no technology while your competition has vastly superior tech. Nvidia should be happy to even get any, given how disrespectful they have been towards AMD for the last 20 years, using anti-competitive bribing tactics almost all of the time.
For AI, what you do is distribute the memory amongst the processing silicon that does the hardware multiplies. You need to store millions of weight values, so you can use memristors that store them as analogue values. Tsinghua University has a working prototype. It's far more energy efficient.
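To make the idea concrete, here is a toy numpy sketch of analogue compute-in-memory; purely illustrative, assuming nothing about Tsinghua's actual prototype, with arbitrary array sizes:

```python
import numpy as np

# Toy model of analogue compute-in-memory: weights live inside the array as
# conductances, so each multiply-accumulate happens where the data is stored
# instead of shuttling millions of weights over a memory bus.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(256, 128))  # memristor conductances = stored weights
v = rng.uniform(0.0, 1.0, size=256)         # input activations, applied as voltages

# Ohm's law + Kirchhoff's current law: the current on output column j is
# I_j = sum_i v[i] * G[i, j] -- a full dot product in a single analogue "read".
i_out = v @ G
print(i_out.shape)  # (128,) -> one layer of MACs without ever moving the weights
```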
You're right about GCN. It was a good architecture at the start, but it just didn't scale with compute capacity. Just compare the Radeon VII with the 5700 XT. Both are very close to the same performance in games, but the Radeon VII has 50% more compute capacity, and even more memory throughput (it has four stacks of HBM2 delivering 1TB/sec of throughput). RDNA is massively better for gaming than GCN.
I have a Radeon VII that I picked up recently, mostly for the cool and hard-to-find factor (and the price was good: I got it for 161 US dollars from a Micro Center with a warranty as a refurbished GPU), and run it in my main system from time to time. Was just playing Star Citizen last night with it. This explains why my Radeon VII has the monstrous 4096-bit bus that it has with its 16GB of HBM2: because it's on 4 stacks. I kinda thought the reason it had that much memory bandwidth was that it's really a compute GPU, or just "because it had HBM2" (I didn't have a specific reason beyond that). It's kinda cool to know how connected the bus width is to the memory stacks.
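The bus math checks out, by the way. A back-of-the-envelope sketch (the per-stack width is the standard HBM2 interface; ~2 Gbps per pin is the Radeon VII's rated HBM2 speed):

```python
# Radeon VII memory bandwidth from its HBM2 configuration.
stacks = 4
bits_per_stack = 1024                  # standard HBM/HBM2 interface width per stack
bus_width = stacks * bits_per_stack    # = 4096-bit total bus
pin_speed_gbps = 2.0                   # Radeon VII's HBM2 runs ~2 Gbit/s per pin

bandwidth_gbs = bus_width * pin_speed_gbps / 8  # bits -> bytes
print(f"{bus_width}-bit bus @ {pin_speed_gbps} Gbps/pin = {bandwidth_gbs:.0f} GB/s")
# -> 4096-bit bus @ 2.0 Gbps/pin = 1024 GB/s, the ~1 TB/s the card is known for
```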
AMD has introduced a lot of new ideas to tech in general. It's really too bad their driver and software quality held them back for so long, and that reputation has continued to cause them problems long after they've mostly solved it.
AMD's biggest problem is people spreading unsupported nonsense like yours, and of course smear campaigns from Nvidia. I'm not saying there can't be errata, but it's an industry-wide issue; nothing is perfect, especially software. Most problems seem to originate with Windows and its easily corrupted Registry. Have you seen the mess gamers' systems are in? I own both AMD GPUs and CPUs, Intel CPUs, and Nvidia GPUs and never have any substantial issues. AMD needs to put more effort into software integration though.
@@maxwellsmart3156 You obviously didn't deal with AMD's drivers back during the first 5 years after they took over ATI. Those were horrible times. Their drivers didn't get all that much better until 2018. Buggy, crashing, missing features, etc. I know because I directly experienced it.
@@dangingerich2559 I don't dispute your experience but I think when AMD's "reputation" is bandied about frivolously then you start to get a lemming effect and what the consumer is left with is a de facto monopoly and then neither AMD nor Nvidia really have to be responsive. People are going to buy the green box because, and AMD can't win for trying. Also, the level of technical capability is pretty low so there's isn't going to be an easy recovery, except for word of mouth.
@@maxwellsmart3156 Not true, AMD was able to push out Intel easily when their products were more performant. If AMD releases a top tier GPU that beats Nvidia's in RT and raster, gamers will buy AMD.
@@maxwellsmart3156 Going back, ATI, and then AMD afterward, made a LOT of missteps with their drivers, and only gained stability in their drivers in the last 4 years or so. I've had ATI/AMD cards in the past, both with promises not kept, like the Rage Fury MAXX, and major stability problems, like the Radeon 9700 Pro and Radeon HD 4870X2. Those soured me on ATI/AMD, and I wouldn't even try using their products for a long time, until I got my current card, a RX 6800XT, and I only got that because Nvidia had begun misbehaving so badly, and people said the driver situation had improved. So far, my experience with the 6800XT has been stable, but it's always been in the back of my mind to watch out for issues. I also built a few computers for my parents based on AMD products, and they've had issues with them. They were even hesitant to accept the laptops I gave them three years ago because they were based on AMD's APUs. They've had a couple reliability issues, and are quick to judge based on their previous experiences. My mom's 5700U laptop has had major issues with recognizing some USB ports, to the point my Dad has mentioned he doesn't want AMD products in the next replacement. I'm not the only one who experienced this. There are a LOT of people who have expressed similar situations, and many of them are not so easy to forgive as I am. It's going to take significant time for them to overcome that, and there are a lot of people who simply will not forgive it. My parents are among them. Their reputation problems are not unearned.
I've used AMD (AMD + ATI) exclusively from 2004 till 2021 (when I had to buy a laptop, AMD models were not widely available in my country). They are a great company, very affordable too.
Clock speed is a factor as well, and latency is a big factor. HBM's biggest Achilles heel was its increased latency compared to GDDR, but the bandwidth was certainly there, even though two HBM2 stacks were still not enough for the Vega cards, which was proven by the Radeon VII. Fiji was not fast enough to use the large bandwidth of its 4 HBM stacks, but the latency was what dragged down the performance of the Fiji cards. Even the Vega cards would see a healthy performance increase with overclocked HBM2, even though the theoretical bandwidth was far above what they needed.
I owned an R9 Fury from Sapphire... great card, a lot better than online reviews led you to believe. I swapped it for an RX 580 8GB just because the Fury was a 2.2-slot card and I needed a 2-slot GPU... still, it was a slightly better performer than the 580, never mind the only 4GB of VRAM. That was not such a big issue in 2018, at least not for a casual gamer such as me... it was cool having HBM in my computer :)
The best and worst thing about the Nano is that it was greatly power-starved, and nothing made that more obvious than it being the only card of its time that could easily run Furmark without any risk of thermal throttling, because it just couldn't ramp up enough to overheat in that application.
I was looking to upgrade my graphics card back in 2019, I think, and I was looking mostly on the used market for Vega 56s as well as GTX 1070 Tis. I found a good price for a 1070 Ti first, so I never got to mess around with a Vega card, which would have been interesting. The Fury cards, with their HBM memory, kinda blew my mind at the time. It got me excited, but then AMD pivoted back to regular memory again. I have since upgraded again to a 6800 XT that I got on the used market, but I am still curious how the Vega cards are aging these days against the Pascal and Turing series.
I bought an Arez Vega only for that tech marvel, using it in an X79 platform with PCIe passthrough to a Win10 VM; also, the ability to cache system memory to extend video memory is another benefit on 4-channel RAM workstation boards.
Having lived through AMD's Fiji generation, I felt like HBM was the start of something new despite the negative press coverage. It didn't help that the Titan V also showed a glimmer of promise that team green could soon join the party AMD started, but obviously that was not to be, now that the Titan V is a footnote in history, a stepping stone to the Turing architecture and beyond, with both sides really only equipping server GPUs with this memory technology. With the introduction of AMD's multi-chip designs, and everyone and their dog seemingly following that pattern and treating it like a permanent fixture of computing, I've felt that someday this memory technology could come back into the limelight without the chains of the past dragging it into the pits of failure. Even if it's reintroduced as some weird level 4 caching strategy, the fire in my heart that was lit long ago by these cards, which always believed this technology could be part of the future, would be validated. Now that cards are unironically launching with gimped memory bus widths for their size and a laughably small amount of VRAM for their cost on either side, it feels like we've officially entered the wrong timeline.
I recently upgraded computers for a few of my family members with used Vega 56s. I still run a Vega 56 in my own PC (I actually downgraded from a 5700 XT because I don't really need that much performance), and for fun got one of those weird Chinese versions. The PCB looks like it was made with some odd parts, but hey, it works. It took me a while to tune it just right so it doesn't draw too much power but still does its job.
I still have both the Fury X and Radeon VII in working computers. While the Fury X still performs decently today, that 4GB of HBM is a real bottleneck, one the Radeon VII doesn't suffer from.
The problem with HBM is that you can't repair it when it's gone, while reballing or replacing DDR modules is relatively easy. But I do believe HBM will be the way forward as integrated graphics become more powerful and more power efficient than traditional PCs.
Now they just have to add more cores per CCD, improve the Infinity Fabric bandwidth and latency (it's already a bottleneck, and with more cores per CCD it's just going to get worse), and improve the memory controllers on the IO die to support faster DDR5, since the additional cores will need more memory bandwidth. To be honest, even currently with 16 cores you're often bound by memory bandwidth, especially with Zen 5 and its wider cores and faster AVX-512. The controllers should also run just as fast with all 4 memory slots filled, and a GB or two of eDRAM stacked on the IO die as L4 cache would again provide more bandwidth for the extra cores. SMT4 support would be nice too, especially now that the cores are wider (preferably with the option to switch between SMT4/SMT2/no SMT per CCD on the fly), as would extra L3 cache per core (so with 16 cores per CCD, 128MB of L3 per CCD, not counting 3D V-Cache) and more L2 cache. Hopefully we'll get at least some of that with Zen 6. And once they move to AM6, they really should add an extra memory channel, DDR6 support, extra PCIe lanes for the CPU-chipset connection (going from 4 to 8), and PCIe 6 support at least for that link. With all the extra bandwidth between the CPU and the chipset, they could put a memory controller or two on the chipset (at least on the high-end ones) using DDR5, so you could reuse your old RAM and get more memory capacity/bandwidth out of it. That memory would be slower than the memory connected directly to the CPU, so hopefully the OS would be smart enough to be aware of that and use it for less critical things (the filesystem cache perhaps, or let the user decide per process which memory to use when CPU-attached memory is full), and/or only use it after the CPU-attached memory is full. So good luck to anyone using Windows, since even after Zen has been around for a long time it still keeps bouncing processes/threads from CCD to CCD, still doesn't let you pick which apps use the CCD with 3D V-Cache (you have to use things like Process Lasso for that), and doesn't let you choose whether to fill the cores/threads on one CCD first and only wake the other CCD if necessary (for lower power consumption) or use both CCDs from the start (for extra performance).
It's worth saying that AMD started developing Infinity Cache around the same time. It was originally meant for APUs, but that turned out to be a bit too hard. Especially now that AMD has made sort of a compromise solution, with cache at the memory controller, getting sort of the best of both worlds.
Been a fan of AMD since I bought a Sempron, which was a Barton core like the Athlon XP. I fixed all the laser-cut cache back and used a pin mod with wires in the socket to get 2.50V. It was an insanely clockable chip: a 1.6GHz part that clocked to 2.9GHz on air. Boosted performance massively, not bad for 40 quid. Sought out the certain Samsung DDR1 memory on Brainpower PCBs that could hit 500MHz. It was a killer rig that barely cost me 250 quid with a 6600 GT, which was a really good clocking chip too, almost as good as a stock 6800.
I think that the von Neumann architecture, where memory is separated from the CPU, will one day be replaced with something where memory is part of the calculation path, like neurons do it. My bet, let's see.
In my personal collection I have one Vega 56 and one Vega 64, both ROG Strix versions. I also have two Radeon VIIs, which are rare cards with such a beautiful cooler design. I think in the future those cards will be in very high demand due to their design and rarity and will be worth way more. Hunting to buy more Radeon VIIs, but those are like looking for a needle in the haystack.
@@lucasrem Why are you triggered? I like tech and I keep rare, interesting architecture cards. If to you that's trash, fair enough. I don't really care what you think of the Vega architecture; I don't keep them for you or anyone else, I keep them for my pleasure. So anyone collecting vintage cars has a collection of trash? If you had a bad day today it's not my fault. Have a great day bro.
I got one of those. It competes handily against the RX 580 performance-wise. I probably need to replace the thermal paste, but I'm terrified of damaging those chips.
I wonder if they are cooking up something similar to this: making a product with more forward-thinking technologies that is probably way ahead of its time.
GPUs I have used: GeForce 256 => GeForce 2 Ti => GeForce 6600 GT => Radeon HD 5850 => GTX 560 => GTX 580 => GTX 970 (good GPU) => Radeon RX 580 (great GPU) => RTX 3060 Ti (great GPU) => Radeon RX 6600 (great GPU for its price) => Radeon RX 6800 XT (super happy with this GPU, picked up used for $420 about 8 months ago). A lot of the Radeon driver issues were blown out of proportion by fanboys/Nvidia plants. They all feel more or less the same to me, except the HD 5850, which had its cooling fan melt under the heat generated by the GPU lol. Those were the days man. 😂
I always thought, since HBM's launch, that this tech shouldn't be used as the memory itself but as a cache for a much larger, slower memory... That 4GB of VRAM, even for 2015, was amazing because of its speed and terrible because of its size. At least AMD learned a lot from this, and today we have Infinity Cache, which on the RX 7000 series even looks like HBM chips.
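A minimal sketch of what that tiering buys on paper, using the classic average-memory-access-time formula; the latency numbers are made-up placeholders, not measurements:

```python
# Average memory access time (AMAT) if a small HBM pool cached a big, slower memory.
t_hbm_ns = 100.0    # hypothetical HBM access time
t_slow_ns = 400.0   # hypothetical miss penalty to the larger, slower memory
hit_rate = 0.95     # fraction of accesses served from the HBM tier

amat_ns = t_hbm_ns + (1.0 - hit_rate) * t_slow_ns
print(f"AMAT: {amat_ns:.0f} ns vs {t_hbm_ns + t_slow_ns:.0f} ns on every miss")
# -> 120 ns on average: near-HBM speed with far more capacity sitting behind it
```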
Aaah yes, I only stopped using my R9 Fury X about a year ago. HBM was so cool I kept the card even while GPUs were selling for insane profits. HBM is not only cool. It looks cool.
For the current memory problem something like HBM is a good solution. For AI you might actually look more at something like an FPGA, but also at some new form of memory built directly into the compute logic itself. AI often uses many weights, and right now they are stored in RAM; instead they could be stored in a more analogue or parallel way, actually having logic weights buffering memory, where you can even buffer weights directly into logic and use that as RAM. This could allow insane speed increases. With an FPGA this could be simulated early on, on a small scale, to test it. Done right, it is ideal for AI, since the hardware no longer really needs to compute the answer; instead it just knows it. It is kind of like analogue, but abstract, and perhaps still digital, a bit like a quantum computer where you insert values/changes somewhere in the logic itself, perhaps through a special core-like unit which decides/manages where input needs to go. This essentially means the RAM no longer has a speed and is just direct, and instructions are not issued but written into the logic, which is also the RAM. The main bottlenecks become the clock speed/settle time and the speed at which instructions can be rewritten, probably mostly the latter, which right now tends not to be considered a problem.
I wish their new high-end models used HBM. A 1024- or 2048-bit bus is insane throughput for memory. Now we're seeing 7800 XTs using what is technically one SKU lower (an RX 6800).
So I had an R9 Nano and made a ghetto R9 Fury X out of it. The Nano is a full-fat Fury X, just clocked down. I slapped a water cooler on with an adapter from NZXT and OC'd it to Fury speeds. It outran my buddy's Nitro+ R9 Fury 56.
This, and learning about the history of AMD64, made me realize that they are an amazingly innovative company. Their stock price was around $2-3 at the time.
And I invested $300 back then in 2009. AMD is very innovative; glad people are starting to realize this today. Mantle lives on in DX12 and Vulkan now, plus chiplets, monolithic designs, 3D V-Cache, HBM, first to reach 1 teraflop, first to 1 TB/s of bandwidth, and the list goes on.
@@OrlandoCaba I wish their finances reflected that, but Nvidia is greedy, and still most people own Nvidia even if just for gaming. Imagine why.
@@OrlandoCaba If I remember correctly, they were also the first to break the 1GHz barrier for desktop CPU...
Yup I told everyone to invest in them and everyone that did is very happy!
@@yeshuayahushua4338 Nvidia does have the superior product, even if certain parts of the experience are inferior.
This, Zen chiplets, and RDNA 3 MCMs really make it clear that AMD is a company that bets on advanced technology with long lead times compared to its competition. All of these developments were pioneered by them, often when they had significantly less revenue and cash than others, and didn't see a true benefit until multiple generations of refinement on top of the long R&D phases. It's crazy to think that AMD was close to bankruptcy so many times while they had so many advanced projects in development that are now being adopted by their competitors. If they had gone under, it's hard to imagine we'd have seen anywhere close to the level of advancement in performance and efficiency that we have now. It makes me wonder why Intel, which had much more revenue to spend on R&D, doesn't seem to have been developing similar technology until AMD started to see their investments pay off.
You answered your own question. No company wants to get close to bankruptcy. AMD has many times been put in a position where it has to take the risky gambles.
@compilationhub54 Yet the H100 can't beat the MI300X
@@MrSandvich03 Same story with GK210 vs Hawaii before. Spec-wise, Hawaii would eat Nvidia's GK210, but the majority of players in the industry still picked the Nvidia solution because of its more developed and robust ecosystem.
@compilationhub54 Unless you're willing to pay $1500 for a card just to play 1080p, sure buddy.
@compilationhub54 it's called sarcasm.
I still remember how surprised I was that such a small form factor GPU had the power to catch up with the GTX 980.
It's not that surprising really; you can easily get a small form factor, but heat management suffers since you can't fit a larger heatsink or space out components.
@@SilvaDreams It is surprising because it's almost half the length of traditional GPUs due to the advanced stacked HBM
I remember doing repairs on a few of these. I had to do a core transplant on one once. It was easier to tell the customer to buy a new one and swap their painted shroud onto it, with their permission of course. I salvaged the dead card for its other components. I eventually tore the core off to see what killed it, and found that the interposer had cracked right below the main GPU die.
Bad design, why only support HBM ????
big fail it was !
Just let board builders do what ever mem they need on the card !
@@lucasremThat's not how these designs work at all. Including support for both GDDR and HBM would take up a ton of die area, die area they don't have room for. You can see this in diagrams of Sapphire Rapids, just how much room DDR5 and HBM controllers take up combined. GDDR controllers are even larger. Doing that on Fiji would have meant cannibalizing the graphics core. Even SPR had to sacrifice a CPU core on each tile to do both with smaller DDR5 controllers and was not constrained by the reticle limit in a traditional sense.
@DigitalJedi I've got several bad cards from owning a computer company. How do I learn to fix them? What do you charge?
@@Josef-K The worst parts of the repairs are BGA soldering and PCB repair. I'd suggest looking up resources on those. I charged parts & labor + $22/hr.
@@lucasrem The HBM die is made at the factory and is highly complicated/expensive. There is zero practicality in supporting GDDR.
I bought the R9 nano in 2015 and still use it today. I love this little piece of hardware.
Allegedly UMC uses Canon steppers, and Canon makes some "wide-field" steppers which are meant to expose a full-frame CMOS image sensor without stitching and consequently have a 33 x 42 mm reticle. The Fiji interposer would fit into that reticle with relative ease (it seems to be around 30 x 36).
For some reason, I read that as "US Marine Corps" and was very confused.
I love seeing these old interposer designs. I recently finished my PhD on interposer design and silicon packaging, and have worked with Intel on Foveros throughout it. Lakefield was their first product to have my name on some papers, which I'm rather sad about given its performance, but it was a testbed for so many things. We learned a lot about how to handle multiple architectures in one chip, how to handle caches for those cores, refined earlier Foveros processes, and set the stage for its spiritual successors, Meteor Lake and Arrow Lake. Alder Lake and Raptor Lake are a different concept, as they have monolithic dies, but they also took some lessons from Lakefield as they are hybrids as well, with the RPL-C0 die being a hybrid of hybrids, having Alder Lake's L2 cache and Raptor Lake's L3.
I really hope we start seeing more HBM packages in the future. When I saw the first Navi31 pictures from a pal in team Radeon, I was stoked to see what I assumed at the time to be HBM3 stacks, and 6 of them! I was so excited for another HBM card, and I still bought one when I found out those maniacs moved the memory controllers onto those dies instead. It retired a Radeon VII as my primary GPU, which itself replaced a Titan Xp. My hunger for VRAM still grows lol.
Great story, thanks
How many more generations of the same design are coming?
I have 9th gen, I don't see the need to lash out so much cash for a 14th gen.
@@critic_empower_joke_rlaxtslife It's definitely not the same design. I mean, it could be the same digital architecture, but definitely not the same actual circuit (the layout of the transistors, etc.).
Very interesting. Looks like my preference for super-fast on-chip memory has some agreement. But it may take a decade to deploy these parts *cheap* and *everywhere*.
Damn, this takes me back. This thing was an SFF treat, as was the Fury X for micro-ATX cases. Pretty much a 980 for SFF systems. HBM for consumers was a crazy play by AMD, but it really shows how much they learned from this and improved their packaging technology.
I remember all the threads and articles discussing, arguing about the bus width vs mem clock/size. Not to mention the many talks of a 'Titan Killer', how history repeats itself lol. Also, didn't know how long AMD was working on this too. While writing this, remember the Pro Duo/Fury X2? The Pro Duo would be an amazing collection to have with your Nano lol
Yes I remember. 2015 wasn't even long ago. Same year I got married. I still remember when the first GeForce and the first Radeon came out and were a huge performance improvement over the old Nvidia TNT and ATI Rage cards. Feels weird to talk about 2015 as history for me. But I get it. I'm old.
My Fury X was my main card until somewhat recently. It had good performance, but AMD has stopped driver support, and even with modded drivers it felt like you didn't get proper performance in some modern games. It's part of my collection now of ATI/AMD cards; their use of cutting-edge tech is what prompted me to build this collection: the HD 4000 series as the first cards to use GDDR5, GDDR4 and the ring bus architecture of the X1950 XTX, and so on, back to the 9700 Pro.
Great video, something I'll probably show if someone was curious about what made my Fury X so special :D.
There is something wrong with Fury architecture-wise. The card was unable to flex its performance at low resolutions. In 2016/2017 the RX 480 and GTX 1060 were able to beat the Fury X at 1080p in some titles (can be seen in TechPowerUp tests). That's why, despite the heavy discounts it had back then, I had a hard time recommending the card if your main resolution was 1080p. At that res it was better to get an RX 480.
@@arenzricodexd4409 It's almost certainly GCN's scaling issue leading to it stalling. When I say GCN I mean pre-Vega, although Vega's (NCU, iirc?) had pretty much the same issue. I remember playing Rage 2 (a Vulkan game) was a disaster on it. But yeah, unless you could get it dirt cheap and were fine with it not playing the latest games, I would also recommend something like the RX 480/580/590 or 1060 6GB over it.
I remember AMD cucks saying 4gb HBM was enough because it was that fast... Recommending it over the 980ti lmao
Man I loved my old water cooled fury x
@@arenzricodexd4409 It had fundamental bottleneck issues at the architecture level; IIRC some parts of the chip, like the ROPs, are starved of data despite the HBM.
I have an R9 FURY X in my system, bought it a couple of months after launch, I remember everyone mocking me for “overpaying” for a 4GB card, still rocking after almost 7 years with no signs of slowing down :)
Same for the RX 480 with 8GB. Still running great.
That's the beautiful thing about old cards: they're supported for quite a long while. I know most people hate it, but I'm glad we have upscaling today. Cards like my cousin's 1650 laptop are able to hold up pretty well for what they are.
I've had an R9 Nano and an RX Vega 56, and I'm watching this video using a Radeon VII.
The R9 Nano was a crazy beast! This tiny card performed like a top-level Nvidia GTX 980.
Vega 56 was a very average card. There were rumors about a 4-stack version, but both the 56 and 64 got only 2.
The Radeon VII is still a good card because of its enormous amount of VRAM and bandwidth. It's funny when modern GPUs struggle with low memory bandwidth while the old Radeon VII has about 1TB/s of it.
Now we know the problem is that of economics and not of technical bandwidth limitations.
@@CST1992 Efficiency was never about the hardware, but about how efficiently they're making that dough.
Lucky you; you were able to get your grubby hands on a RADEON VII
It was crap, Vega 64 too
why go even cheaper then Vega 64 ?
why demand HBM ??? let board makers support all mem types please !
@lucasrem you don't know what you're talking about and it's VERY obvious
This was an extremely well done exposé! What a cool story that I had no idea about. It’s just incredible that AMD was nearly bankrupt at this time, but were literally innovating well ahead of the industry in many ways. They deserve all the success they’re having now, just fantastic R&D *and* productization.
They are gone, Fabless now
BIG FAIL !
@@lucasrem how was it a fail? They outperformed their competitor for years and the technology is still in use, just not in gaming industry. You sound like a hater that doesn't know what they're talking about. Poor little fanboi
Thanks for the quality content. Keep up the very good work.
wait, are you the real Tech Aktien from Insta?
AMD certainly were ahead of the curve on this one. Now look at every major HPC/AI GPU, they all look like Vega only on more advanced nodes.
No, Nvidia paved the path to AI in 2007 with CUDA and GPGPU. AMD did create HBM, and Nvidia has used it on the high end, but largely abandoned it for GDDR5X ECC or GDDR6 ECC. (Look at the Nvidia Tesla T4: not the fastest, but still the standard, as a lot of them are still in production deployments.)
@@t8z5h3 Nvidia and Intel actually bet more on HMC. AMD worked with Hynix to push HBM; I heard AMD spent quite some money to ensure HBM became the standard instead of HMC. Nvidia did not abandon HBM. The issue with HBM is its very high cost, making it unsuitable for consumer-grade GPUs, so Nvidia worked with Micron to develop GDDR5X for Pascal, which later led to GDDR6 development. Their compute cards such as GP100 and GV100 use HBM. AMD hoped wider adoption would eventually bring HBM costs down, but they only command 30%+ market share, and because of the expensive nature of HBM only their high-end cards could really afford it. So the wide-adoption strategy failed. HBM was so expensive that Vega only used 2 stacks and ended up with less bandwidth than the Fury X. Even the 1080 Ti, using GDDR5X, ended up with more bandwidth than Vega, and that hampered Vega from flexing its true performance.
I was using a Fury X for many years, since its release, until just over a year ago, when I upgraded because it simply had too little VRAM. If it had 8GB I would still be using it. Now I am using a 6900 XT with 16GB, which should be good. I still think the Fury X was an amazing design, and loved it. I do think HBM will come back to consumer cards eventually. HBM is used a lot in other hardware, like professional GPUs, AI accelerators, ultra-high-frequency oscilloscopes, and ultra-fast network switches and routers, where its performance is advantageous despite the higher price. GDDR will reach its limits eventually, and the only way up will be to co-package HBM close by.
Demanding HBM was a mad choice !
Let board builders do what ever mem they need on it !
I remember wishing these would be made into mobile GPUs. Without the need for VRAM chips you could fit huge power delivery onto an MXM board, which is about the same size as the Fury Nano PCB!
Did they ever make into mobile GPUs?
@@SectorfiveYT That's what the modern Apple mobile chips are
@@SomeDude0881 I have an iPhone; I didn't notice a difference in performance from an Android phone.
@@SectorfiveYT You could have a 10x faster phone processor and you wouldn't notice, because nothing can take advantage of that
@@SectorfiveYT You use your phone for anything other than a web browser?
I had a Vega64 for a VERY long time that I got for £200 second hand and only fairly recently replaced it with a 6700XT. Great bit of kit that was!
"VERY" long time? dude, thats max 5 years :D
@@smts0243 5 years is quite a long time in chip years
I have the Fury Nano, and only replaced it with a newer card (along with the rest of the PC) this year. Aside from memory capacity issues, the card still performed well.
The computer it is in is now on loan to a friend who cannot afford to buy a new computer. I say "on loan" because I had the stipulation that when it was to be replaced, or it died, I wanted to reclaim the Fury Nano, take it apart, and pot the PCB into clear resin. It is the most interesting GPU I have ever owned, and always felt like it was a little slice of significant computer history
I love the idea of putting that PCB in resin. I've done that as well, and may I suggest instead finding a good shadow box. The resin thing is way harder to get right than it seems. I tried it first on my delidded Pentium III, and while it turned out OK, I much prefer the look of my Radeon VII in its shadow box.
It would make a good card for a display piece, right next to my GTX 295... if I knew how to display it...
The late Vega 56 releases were the spiritual successor to this. They had the same small board, and most coolers, like the PowerColor Red Dragon, used the saved space for flow-through fins.
Vega 64, nobody needed that, released way too late....
demand HBM mem, why not support all mem ?
why go fabless, need more Nvidia companies in TSMC ?
WHY GO EVEN CHEAPER THAN VEGA 64 ?
I keep forgetting Vega 56 used HBM. Good God those PCBs are tiny!
There was the Radeon RX Vega Nano which was never released. That thing had a sleek truncated Radeon VII shroud and was just as small as the Fury Nano.
On a side note, I want to see more components in the style of the Radeon VII -- blocky and bare brushed metal. Too much "gamer" styling these days.
@@tomhsia4354 No, the Vega Nano was a Vega 56, and the reference cooler never released, but the boards went to PowerColor and Sapphire, who made the flow-through coolers I mentioned. The first releases of Vega 64 and 56 had big boards; it was only later ones that used the compacted design.
@@wile123456 I was referring to a single-fan reference Radeon RX Vega Nano that was never released. Judging by photos, it was going to be the same size as the Fury Nano, with the cooler styled like the Radeon VII (silver square shroud with a glowing corner cube and RADEON logo).
The ones that used that small PCB are all larger than the unreleased reference design. Powercolor did make a Vega 56 Nano without a flow through cooler, but that one is slightly larger than the Fury Nano.
I used to have an R9 Fury, but I upgraded to an RTX 2080 Super, as some of the games I was playing started struggling on the Fury. I bought it because of the HBM. The technology seemed fascinating to me, and I wanted to see how it performed in real life.
demanding HBM was some evil deal ?
support all mem standards please, why demand HBM ?
RTX 2080 was the better card !
@@lucasrem Not sure what you mean by "evil deal". Nothing wrong with HBM; in fact, it's the superior memory technology, only it's not cost effective for consumer cards. AMD chose it as a hail-mary attempt at getting the performance crown, but there's a reason they aren't using it for consumer cards at the moment. The Fury and Fury X were competitive at the time though; only overclocked 980 Tis were consistently faster, and they were more expensive. However, once the 1080 was released the following year, they were outpaced significantly.
I have a Sapphire Pulse Vega 56 and an Asus Strix Vega 64, one in my PC and one in my son's. Definitely surprised by the board size, which is really short; all the volume of these cards is their heatsinks.
They are still decent in performance in games, especially the 64, and can be used with Resizable Bar, or SAM as AMD calls it, which helps for smoother framerates and much better lows with a simple Registry hack.
HBM works more like a cache than classic DDR RAM chips. It is so close to the GPU and has such a wide bus that I believe it deserves to be called cache memory, and it paved the way for the 3D V-Cache technology we find on modern Ryzens. (Rough numbers after this comment.)
Very nice video keep up!
Another well edited video with great research and content. And a stripped down Fury Nano!
Great historical video on Fiji and its line of Fury cards.
A few months ago, I retired my Sapphire Vega 56 Pulse from daily use as a 1080p card. In late 2019, I bought the used Vega 56 for $225 after tax. Power hungry as it was, dumping a lot of heat at full blast, I loved the Vega 56's compute power when I did video rendering. From my experience, the Vega 56 is my second most stable card after my first card, an RX 580. Undervolted, the Vega 56 was a great card for my use case. With Vega 56, I got a taste of HBM. I can't lie; I want more, but we are not going to get HBM gaming cards for the foreseeable future. As a result, I want to collect the other HBM cards like the Fury, Vega 64/Vega Frontier Edition, Radeon VII, and Titan V.
I have a modded MI25, not as good looking as the Frontier Edition, but it sure costs a lot less to get 16 GB of HBM2, even after accounting for a BIOS flashing tool and a way to cool it.
I have/had all 3. I love the concept and that it is something different, so I went with it, despite the shortcomings.
Still have my Fury non-X and my Vega 56, but I sold my VII. If I find one, I will buy it "back" to have them side by side.
I still own a Vega 56. Not in my PC; it got retired in favor of a 6800 XT because it was lacking in performance and memory capacity.
But it was a really nice card. I'm gonna keep it as a collectible, maybe put the PCB in some frame just to expose the package with the HBM stacks.
Maybe it's also a good idea to hunt for dead GPUs with HBM just to make some nerd keychains for myself, though the size would probably be a bit too much.
Actually, Virtual Visions Finland did a VRAM-embedded graphics chip around 1994, and it had a 3D accelerator. It was actually built and was much faster than that era's other GPUs. Some VGA manufacturer bought them. But they invented the future...
HBM's death was a real bummer, since EOL HBM2 (Titan V) to this day DOGS on modern GDDR6X.
Amazing video. Worth the wait. Going to have to watch this a couple more times for sure.
Something about seeing tiny shiny chips sitting on top of another chip is just so cool, and it's crazy to think about the development time!
Well, this explains where AMD's chiplet packaging for its CPUs came from. When it was first introduced, people were surprised by such an advance and wondered why AMD had it production-ready before Intel, the much larger company.
Donated my Sapphire Fury X to a friend during the covid & crypto era... it's still going strong. Great little card.
This was my dream GPU at the time, the size and power it had! Just amazing; it would have been nice if it had 8 GB though!
IIRC the main downside to HBM is the strangely high latency, which never made sense to me: how could something so close to the die have higher latency than GDDR placed so far away from it?
Interposer-style packaging is used in other products too, such as Intel's EMIB in parts like Sapphire Rapids and Apple's die-to-die interconnect "UltraFusion" in the M2 Ultra.
This is a brilliant presentation. Thanks very much!
I knew this was gonna be good content when I heard the jazz during the disassembly.
Vega 56 user here. That thing is still running my main PC, power-limited to ~130 W, in passive mode 99% of the time. When it goes into retirement I'm going to open it up so it can shine some silicon on my bookshelf :)
If gaming cards do switch back to HBM, it will probably be an older generation of it, as the newer stuff will be reserved for AI cards.
There was a custom system that used a piezoelectric actuator to slide the silicon under a sliding negative; during prototyping this was changed to a set of expanding and collimating optics/lenses that reduced the minimum feature size. We have newer methods today.
When I see chiplet designs I always think of the chip they show off in Terminator 2 when they visit the engineer's house.
The R9 Fury was the first and only AMD GPU that I ever wanted. Tiny form factor and integrated water cooling. But of course I didn't have the money.
Is it a good idea to find a used one today just for collection purposes? I assume the water cooler must not be working after such a long time.
No. This card was way too popular among miners. The innovative design also came at the price of less reliable memory, and you can't swap a memory chip at a local repair shop. So at the end of the day, while the technology is exciting and important for the future, I don't recommend buying old cards.
I remember the launch and the reviews from Jayz2Cents, Linus, GN, Bitwit, and Paul's Hardware.
The cost of that HBM was what made the R9 Fury forgotten, around $200 higher than the 980 Ti. It just didn't make sense paying more for a card with the same performance and 2 GB less VRAM.
I'm still rocking a Vega 56 I got second hand in 2019. It's slowly showing its age, and its next life is going to be running my couch gaming setup once I get my hands on a newer card.
I used to have an R9 Fury (which died during OC) and a Vega FE onto which I had to mod a Raijintek Morpheus cooler.
Cool products, held back by the GCN architecture and, in the case of the Vega, the cooler.
Seriously, with the stock blower it wouldn't sustain its full boost, as it would quickly hit 95 °C on the core, 105 °C on the HBM2, and something like 115 °C junction. Tuning and undervolting made them much more usable.
A buddy of mine in the US has the rig AMD used to tour and show off this GPU, the BS-Mods AMD Nano I-Beam. It's super cool.
This channel is a gem of knowledge, thank you very much, I happily subscribed! Great presentation too!
The HBM on Vega (or the card's IMC) was temperature limited. I got a reference V56 from the first production run that hit European shelves, put it under water and flashed the Vega 64 BIOS. That HBM has been doing 1045 MHz no problem ever since. With the stock cooler, this just wasn't possible to get stable. Almost everyone who also went water cooling, put a Morpheus cooler on their card, or got an AIB model with sufficient cooling will tell you that they could crank their HBM clocks up by quite a bit. Later in the series this worked even with Hynix memory, once live VRAM timings editing became a thing.
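For a sense of what that clock bump is worth: Vega's HBM2 sits on a two-stack, 2048-bit bus, so bandwidth scales linearly with memory clock. A minimal sketch of the arithmetic (the bus width and double-data-rate behavior are Vega's published specs; the function itself is just illustrative):

```python
# Vega HBM2 bandwidth as a function of memory clock.
# Two stacks x 1024 bits each = 2048-bit bus, two transfers per clock (DDR).
def vega_bandwidth_gbs(hbm_clock_mhz: float) -> float:
    bus_width_bits = 2 * 1024                     # 2048-bit bus
    transfers_per_s = hbm_clock_mhz * 1e6 * 2     # double data rate
    return bus_width_bits * transfers_per_s / 8 / 1e9  # bits -> GB/s

print(vega_bandwidth_gbs(800))   # Vega 56 stock: ~410 GB/s
print(vega_bandwidth_gbs(945))   # Vega 64 stock: ~484 GB/s
print(vega_bandwidth_gbs(1045))  # the OC above:  ~535 GB/s
```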
Heeeeey, I got one of these in my backup PC. These things still sell for up to $250 on eBay and are sometimes not available at all. It's worth more than my primary 5700 XT! I love my little Nano; I wish HBM was still a thing in GPUs. The Nano is the neatest GPU I have ever owned.
I still have my old Vega64. I still pull it out every now and then because for high memory intensity workloads it *still* stomps around like Godzilla.
Man, I remember wanting one of these so bad back in 2016. Constantly checking eBay to see if I could find one. They were so cool. Then I remember when the prices of Fiji crashed in... was it late 2016 or early 2017? The R9 Fury was going for like $300 or $400. I can't remember exactly, but my memory tells me it was something like half off. I remember wanting one so bad, but in hindsight it was an innovative lemon. The 4 GB of memory really was quite limiting, although I think most people would have survived at the time. There were so many calls of "but 4K gaming!!" at the time, but the reality is that we were far from conquering 4K back then. Really, 4K was a pipe dream until the 2080 Ti in my opinion, and it has yet to be democratized still, 5 years later.
I love how he got straight into it!!
I had one. It required an undervolt to work reliably due to thermal constraints. The memory could be overclocked. I shipped it off to a friend in Canada when I upgraded, and unfortunately the drivers are too old for him to use it.
High Yield: you forgot to mention that HBM is a proprietary technology owned by AMD with patented status. Nvidia uses it on Tesla cards but first used it experimentally on the Nvidia GTX Titan V 12GB HBM2 back in 2017. You also forgot to mention that any time Nvidia manufactures a GPU with HBM, they have to pay AMD for the rights to use it, and they have to buy the HBM from AMD :)
Actually, the Titan V wasn't Nvidia's first HBM2 card. It was implemented in their GP100 chip a year earlier, which made it into several Tesla models and a Quadro.
I wouldn't be so sure that AMD 'owns' HBM, as it sounds like multiple firms own patents related to HBM. Titan V was a really weird card all things considered, since basically every server card since then uses HBM but not consumer cards.
@@No-mq5lw Do your history fact-checking right... there is no room here for your "what you feel that you think you know".
AMD developed HBM memory. They only brought in Hynix partially, so that they could get access to large amounts of mass-production equipment for the actual memory.
Yes, many server-grade GPUs have HBM, because there you need data-gobbling performance and not texture-crunching performance.
For every HBM memory module used, Nvidia has to pay quite a hefty amount of royalties to AMD. And that is the right way if you as a manufacturer have no technology while your competition has vastly superior tech.
Nvidia should be happy to get any at all, given how disrespectful they have been toward AMD for the last 20 years, using anti-competitive bribing tactics almost all of the time.
I’ve still got mine as shelf art.
For AI, what you do is distribute the memory among the processing silicon that does the hardware multiplies. You need to store millions of weight values, so you can use memristors that store them as analogue values. Tsinghua University has a working prototype. It's far more energy efficient.
You're right about GCN. It was a good architecture at the start, but it just didn't scale with compute capacity.
Just compare the Radeon VII with the 5700 XT. Both are very close to the same performance in games, but the Radeon VII has 50% more compute capacity and even more memory throughput (it has four stacks of HBM2 delivering 1 TB/s). RDNA is massively better for gaming than GCN.
And that's why AMD separated RDNA and CDNA...
Very nice and helpful review. Thank you.
I have a Radeon VII that I picked up recently, mostly for the cool and hard-to-find factor (and the price was good: I got it for $161 US from a Micro Center, refurbished with a warranty), and I run it in my main system from time to time.
I was just playing Star Citizen with it last night.
This explains why my Radeon VII has the monstrous 4096-bit bus that it does with its 16 GB of HBM2: it's on 4 stacks. I kind of thought the reason it had that much memory bandwidth was that it's really a compute GPU, or just because it had HBM2 (I didn't have a specific reason in mind beyond knowing HBM2 was involved). It's cool to learn how directly the bus width is tied to the number of memory stacks.
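That relationship is simple enough to put into numbers: each HBM/HBM2 stack brings its own 1024-bit interface, so bus width is just stacks × 1024. A small sketch, using approximate published per-pin data rates:

```python
# HBM bus width and bandwidth scale directly with stack count:
# every stack adds its own 1024-bit interface.
def hbm_bandwidth_gbs(stacks: int, pin_rate_gbps: float) -> float:
    bus_width_bits = stacks * 1024             # 4 stacks -> 4096-bit bus
    return bus_width_bits * pin_rate_gbps / 8  # Gbit/s -> GB/s

print(hbm_bandwidth_gbs(4, 1.0))  # Fiji/Fury X, HBM1:  ~512 GB/s
print(hbm_bandwidth_gbs(4, 2.0))  # Radeon VII, HBM2:  ~1024 GB/s (~1 TB/s)
```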
AMD has introduced a lot of new ideas to tech in general. It's really too bad their driver and software quality held them back for so long, and that reputation has continued to cause them problems long after they've mostly solved it.
AMD's biggest problem is people spreading unsupported nonsense like yours, and of course smear campaigns from Nvidia. I'm not saying there can't be errata, but it's an industry-wide issue; nothing is perfect, especially software. Most problems seem to originate with Windows and its easily corrupted registry. Have you seen the mess gamers' systems are in? I own both AMD GPUs and CPUs, Intel CPUs, and Nvidia GPUs, and never have any substantial issues. AMD needs to put more effort into software integration though.
@@maxwellsmart3156 You obviously didn't deal with AMD's drivers back during the first 5 years after they took over ATI. Those were horrible times. Their drivers didn't get all that much better until 2018. Buggy, crashing, missing features, etc. I know because I directly experienced it.
@@dangingerich2559 I don't dispute your experience, but I think when AMD's "reputation" is bandied about frivolously you start to get a lemming effect, and what the consumer is left with is a de facto monopoly, at which point neither AMD nor Nvidia really has to be responsive. People are going to buy the green box regardless, and AMD can't win for trying. Also, the general level of technical capability is pretty low, so there isn't going to be an easy recovery except through word of mouth.
@@maxwellsmart3156 Not true, AMD was able to push out Intel easily when their products were more performant.
If AMD releases a top tier GPU that beats Nvidia's in RT and raster, gamers will buy AMD.
@@maxwellsmart3156 Going back, ATI, and then AMD afterward, made a LOT of missteps with their drivers, and only gained stability in their drivers in the last 4 years or so.
I've had ATI/AMD cards in the past, both with promises not kept, like the Rage Fury MAXX, and major stability problems, like the Radeon 9700 Pro and Radeon HD 4870 X2. Those soured me on ATI/AMD, and I wouldn't even try their products for a long time, until I got my current card, an RX 6800 XT, and I only got that because Nvidia had begun misbehaving so badly and people said the driver situation had improved. So far, my experience with the 6800 XT has been stable, but it's always in the back of my mind to watch out for issues. I also built a few computers for my parents based on AMD products, and they've had issues with them. They were even hesitant to accept the laptops I gave them three years ago because they were based on AMD's APUs. They've had a couple of reliability issues and are quick to judge based on their previous experiences. My mom's 5700U laptop has had major issues with recognizing some USB ports, to the point that my dad has said he doesn't want AMD products in the next replacement.
I'm not the only one who experienced this. There are a LOT of people who have expressed similar situations, and many of them are not so easy to forgive as I am. It's going to take significant time for them to overcome that, and there are a lot of people who simply will not forgive it. My parents are among them.
Their reputation problems are not unearned.
I've used AMD (AMD + ATI) exclusively from 2004 till 2021 (when I had to buy a laptop, AMD models were not widely available in my country). They are a great company, and very affordable too.
I've been using FPGAs with HBM technology for a few years now. It was available for Intel's (Altera) FPGAs and Xilinx's, the latter acquired not so long ago by AMD.
Nice. That was so cool. I didn't know AMD was such a pioneer.
I wanted to get my hands on a Radeon VII so badly, and it really struck a nerve when the editor-in-chief of Rock Paper Shotgun used it as a doorstop.
Clock speed is a factor as well, but latency is a big one. HBM's biggest Achilles' heel was the increased latency compared to GDDR; the bandwidth was certainly there, even though two HBM2 stacks still weren't enough for the Vega cards, as the Radeon VII proved. Fiji wasn't fast enough to exploit the large bandwidth of 4 HBM stacks, but the latency was what dragged down the performance of the Fiji cards.
Even the Vega cards would see a healthy performance increase with overclocked HBM2, even though the theoretical bandwidth was far above what they needed.
I owned an R9 Fury from Sapphire... great card, a lot better than online reviews led you to believe. I swapped it for an RX 580 8GB just because the Fury was a 2.2-slot card and I needed a 2-slot GPU... still, it was a slightly better performer than the 580, never mind the only 4 GB of VRAM. That wasn't such a big issue in 2018, at least not for a casual gamer such as me... it was cool having HBM in my computer :)
I am reminded of the Hybrid Memory Cube; imagine if that technology had won the memory market.
I'm still using the R9 Nano with modded drivers, and it still gives me satisfaction when playing games at 1080p!
Oh man! I wish flagship small form factors had caught on. Imagine how small ITX builds could be today.
The best and worst thing about the Nano is that it was greatly power-starved, and nothing made that more obvious than it being the only card of its time that could easily run Furmark without any risk of thermal throttling, because it just couldn't ramp up enough to overheat in that application.
I have one of those and two liquid-cooled Furys. Worth every penny when they came out.
0:40 A vapor chamber that feeds into two U-shaped flattened heatpipes? Interesting.
Really interesting video sir
I've got two Radeon Pro Duos. Their PCBs are a joy to behold.
I was looking to upgrade my graphics card back in 2019, I think, and I was looking mostly on the used market for a Vega 56 as well as GTX 1070 Tis. I found a good price on a 1070 Ti first, so I never got to mess around with a Vega card, which would have been interesting.
The Fury cards, with their HBM memory, kind of blew my mind at the time. It got me excited, but then AMD pivoted back to regular memory again.
I have since upgraded again to a 6800 XT that I got on the used market, but I am still curious how the Vega cards are aging these days against the Pascal and Turing series.
I bought an AREZ Vega only for that tech marvel, using it in an X79 platform as a PCIe pass-through for a Win10 VM; also, the ability to cache system memory to extend video memory is another benefit on 4-channel RAM workstation boards.
Having lived through AMD's Fiji generation, I felt like HBM was the start of something new, despite the negative press coverage it got. It didn't help that the Titan V also showed a glimmer of promise that team green could soon join the party AMD started, but obviously that was not to be, now that the Titan V is a footnote in history, a stepping stone to the Turing architecture and beyond, with both sides really only equipping server GPUs with this memory technology. With the introduction of AMD's multi-chip designs, and everyone and their dog seemingly following that pattern and treating it like a now-permanent fixture of computing, I've felt that someday this memory technology could come back into the limelight without the chains of the past dragging it into the pits of failure. Even if it's reintroduced as some weird level-4 caching strategy, the fire in my heart that these cards lit long ago, the belief that this technology could be part of the future, would be validated.
Now that cards are unironically launching with gimped memory bus widths for their size and a laughably small amount of VRAM for their cost, on either side, it feels like we've officially entered the wrong timeline.
I recently upgraded computers for a few of my family members with used Vega 56s. I still have a Vega 56 in my own PC (actually downgraded from a 5700 XT because I don't really need that much performance), and for fun I got one of those weird Chinese versions. The PCB looks like it was made with some odd parts, but hey, it works. It took me a while to tune it just right so it doesn't draw too much power but still does its job.
It is an interesting use of 1920s-vintage VRMs.
Wow, I did not know the Fiji cards were the first to use HBM! Thank you. I thought the Vega ones were the first. They do suck power like crazy!
I still have both the Fury X and Radeon VII in working computers. While the Fury X still performs decently today, its 4 GB of HBM is a real bottleneck, one that the Radeon VII doesn't suffer from.
It was already crap when it was released...
AMD failed to sell any; it was too late to market...
The problem with HBM is you can't repair it when it's gone, while reballing or replacing DDR modules is relatively easy. But I do believe HBM will be the way forward as integrated graphics become more powerful and more power-efficient than traditional PCs.
Now they just have to add more cores per CCD, improve the Infinity Fabric bandwidth and latency (it's already a bottleneck, and with more cores per CCD it's only going to get worse), and improve the memory controllers on the IO die to support faster DDR5, since the additional cores will require more memory bandwidth. To be honest, even the current 16-core parts are often bound by memory bandwidth, especially with Zen 5 and its wider cores and faster AVX-512. The controllers should also run just as fast with all four memory slots filled, and a gigabyte or two of eDRAM stacked on the IO die as an L4 cache would help feed the extra cores. SMT4 support would be nice too, especially now that the cores are wider (preferably with the option to switch between SMT4/SMT2/no SMT per CCD on the fly), as would extra L3 cache per core (so a 16-core CCD would have 128 MB of L3 before counting 3D V-Cache) and more L2. Hopefully we'll get at least some of that with Zen 6.
And once they move to AM6, they really should add an extra memory channel, DDR6 support, extra PCIe lanes for the CPU-chipset connection (going from 4 to 8), and PCIe 6 support at least on that link. With all that extra bandwidth between the CPU and the chipset, they could put a memory controller or two on the chipset (at least on the high-end ones) running DDR5, so you could reuse your old RAM and get more memory capacity/bandwidth out of it. That memory would be slower than the DIMMs connected directly to the CPU, so hopefully the OS would be smart enough to know that and use it for less critical things (the filesystem cache, perhaps, or let the user decide per process which pool to use), and/or only touch it once the CPU-attached memory fills up. So good luck to anyone using Windows: even after all these years of Zen, it still bounces processes and threads from CCD to CCD, still doesn't let you choose which apps use the 3D V-Cache CCD and which don't (you need tools like Process Lasso for that), and doesn't let you choose between filling one CCD's cores first (for lower power consumption) or spreading across both CCDs from the start (for extra performance).
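To put a rough number on that "memory bound at 16 cores" point, here's a back-of-the-envelope sketch; the dual-channel DDR5-6000 configuration is just an illustrative assumption, not something from the comment above:

```python
# Back-of-the-envelope: per-core share of DRAM bandwidth on a desktop platform.
# Assumes dual-channel DDR5-6000 (illustrative figures, not measured data).
channels, bus_bits_per_channel, transfers_per_s = 2, 64, 6000e6

total_gbs = channels * bus_bits_per_channel * transfers_per_s / 8 / 1e9  # ~96 GB/s
for cores in (8, 16, 32):
    print(f"{cores:2d} cores -> {total_gbs / cores:5.1f} GB/s per core")

# At 16 cores that's ~6 GB/s each; a single wide core running AVX-512
# streaming loads can consume several times that, so DRAM becomes the wall.
```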
It's worth saying that AMD started developing Infinity Cache around the same time. It was originally made for APUs, but that turned out to be a bit too hard.
Especially since AMD has now made sort of a compromise solution, with cache at the memory controller, getting sort of the best of both worlds.
Been a fan of AMD since I bought a Sempron that was actually a Barton core, the same as the Athlon XP. I enabled all the laser-cut cache again and used a pin mod with wires in the socket to get 2.50 V. It was an insanely clockable chip: a 1.6 GHz part that clocked to 2.9 GHz on air. Boosted performance massively; not bad for 40 quid. I sought out the certain Samsung DDR1 memory on BrainPower PCBs that could hit 500 MHz. It was a killer rig that barely cost me 250 quid, with a 6600 GT that was a really good clocking chip too, almost as good as a stock 6800.
I think the von Neumann architecture, where memory is separated from the CPU, will one day be replaced with something where memory is part of the calculation path, like neurons do it. That's my bet; let's see.
Very interesting. I remember looking at those cards and not getting the point; thanks for explaining.
Great video, very cool. Got a sub here! Keep it up!
In my personal collection I have one Vega 56 and one Vega 64, both ROG Strix versions. I also have two Radeon VIIs, which are rare cards with such a beautiful cooler design. I think in the future those cards will be in very high demand due to their design and rarity and will be worth way more. I'm hunting to buy more Radeon VIIs, but that's like looking for a needle in a haystack.
Game testing, blah blah blah.
Personal collection? You run a trash museum at home?
It was already bad on release, too late!
Any Nvidia card did better.
@@lucasrem Why are you triggered? I like tech and I keep rare, interesting-architecture cards. If to you that's trash, fair enough. I don't really care what you think of the Vega architecture. I don't keep them for you or anyone else; I keep them for my own pleasure. Is everyone collecting vintage cars keeping a collection of trash? If you had a bad day today, it's not my fault. Have a great day, bro.
I got one of those. It competes handily against the RX 580 performance-wise. I probably need to replace the thermal paste, but I'm terrified of damaging those chips.
I wonder if they are cooking up something similar to this:
making a product with more forward-thinking technologies that is probably way ahead of its time.
GPUs I have used: GeForce 256 => GeForce 2 Ti => GeForce 6600 GT => Radeon HD 5850 => GTX 560 => GTX 580 => GTX 970 (good GPU) => Radeon RX 580 (great GPU) => RTX 3060 Ti (great GPU) => Radeon RX 6600 (great GPU for its price) => Radeon RX 6800 XT (super happy with this GPU, which I picked up used for $420 about 8 months ago).
A lot of the Radeon driver issues were blown out of proportion by fanboys/Nvidia plants.
They all feel more or less the same to me, except the HD 5850, which had its cooling fan melt under the heat generated by the GPU lol. Those were the days man. 😂
Ever since HBM launched, I've thought this tech shouldn't be used as the memory itself but as a cache for a much larger, slower memory... That 4 GB of VRAM, even for 2015, was amazing because of its speed and terrible because of its size. At least AMD learned a lot from this, and today we have Infinity Cache, which on the RX 7000 series even looks like HBM chips.
Aaah yes, I only stopped using my R9 Fury X about a year ago. HBM was so cool I kept the card even while GPUs were selling for insane profits. HBM is not only cool, it looks cool.
For the current memory problem, something like HBM is a good solution. For AI, you might actually start looking more at something like an FPGA, but with some new form of memory built directly into the compute logic itself. AI mostly uses huge sets of weights, which today are stored in RAM; instead they could be stored in a more analog or parallel fashion, with the weights buffered directly in the logic, effectively using the logic itself as RAM. That could allow insane speed increases, and with an FPGA it could be simulated early on at a small scale to test the idea.
When done right, it's essentially ideal for AI, because the chip no longer really has to compute the answer; it just "knows" it. It's kind of analog, or abstractly like a quantum computer, in the sense that you insert values/changes somewhere in the logic itself, perhaps via a special core-like unit that decides and manages where the inputs need to go.
That essentially means the RAM no longer has a "speed"; it's just direct. Instructions aren't issued but written into the logic, which is also the RAM, so the main bottlenecks are the clock speed/settle time and the rate at which instructions can be handled, probably mostly the latter, which right now tends not to be considered a problem.
I would love to see your insights on computational memory. Apparently HBM, or rather memory in general, is now a bottleneck.
Non-regulated memory makes it too expensive for noobs...
@@lucasrem what are you saying
I wish their new high-end models used HBM. A 1024- or 2048-bit bus is insane throughput for memory. Now we're seeing the 7800 XT using what is technically one SKU lower (an RX 6800).
Loved my Fury, but having only 4GB was a huge kneecap not long after release.
So I had an R9 Nano.
I made a ghetto R9 Fury X out of it.
The Nano is a full-fat Fury X, just clocked down.
I slapped a water cooler on it with an adapter from NZXT and OC'd it to Fury X speeds.
It outran my buddy's Nitro+ R9 Fury.
I've got an R9 Nano. Cool little GPU. Idk why they don't make compact cards like this anymore.
wow. what a solid GPU. and smol
I remember being sad when they dropped HBM2, and HBM in general, despite having mostly forgotten about it until it was mentioned here.