I think that would take longer than 4 minutes to do justice. But I guess you could break it down as AMD focuses on having few ultra-wide cores, and also puts an emphasis on software control (for example, divergent branch instructions are implemented with code rather than hardware resources). Intel focuses on many ultra-small cores, more similar to how GPUs were made before unified shaders became the norm, and NVidia is somewhere between the two with an emphasis on hardware instruction support. So in theory, Intel would be good for lots of divergent workload (like many small triangles), AMD is good for lots of uniform work (like many big triangles), and NVidia is adaptable (it can do both types of work, but not as efficiently). Most rendering applications have a mix though, which is why NVidia usually does better.
@@possamei Apple's GPU architecture is based on the PowerVR TBDR, which has routes in the GPU used by the SEGA Dreamcast. It's closest to Intel's GPU architecture (compared to AMD and NVidia), but unlike Intel, it has a special shared memory specifically for the framebuffer. In that architecture, the screen is drawn in tiles, and blended within the shared memory before being written back to the VRAM (the goal is to never fetch the framebuffer from VRAM), but in exchange it imposes restrictions on latency - you need to set up your frame one frame before it's drawn (so you get 1+ frames of lag with optimal performance).
I know he gets mentioned rather frequently, but Anthony is a godsend for this channel. His voice, his mannerisms, his general disposition is just perfect, especially in videos like this.
@@Papa-Murphy Agree. Anthony is good at explaining things and has a kind of "normal", relatable matter to him. But I personally prefer the other hosts for their energy, rapid delivery, and comedic timing.
Linus actually dislikes(not really dislikes, but more like avoids) working with Anthony, because he can(in Linus's own words) get a bit too technical. I do enjoy Anthony ''s content a lot tho
Anthony's Tone, Inflection, and personability on screen plus how he arranges his content makes the information he is presenting easy to digest and does not leave you feeling lost. I feel like Anthony is writing the Electronics for Dummies LTT version while making you feel smart just listening to him. He is a Great and invaluable asset to the team.
Thank you for taking the time to explain the differences between Intel & AMD, especially since the marketshare between the two are now neck-in-neck and not the blowout Intel once had. I guess what it boils down to for someone who does a lot of programming and some casual gaming on older games like EVE Online and WoW, the differences really doesn't matter. It's like trying to compare a detached house with a semi-detached house. The architecture might be different but the house is still your own.
You're describing a different technology called package-on-package. Chip stacking is 3D-integration using through-silicon vias, and significantly more complicated and expensive to do.
@@hjups yeah you are right I confused the two. But the raspberry pi zero 2 does have true chip stacking with wire bonding. Just take a look at the x-rays!
@@bonnome2 I wouldn't consider wire-bonding to be stacking. It's more like one of those weird package-in-package things. An evolution of multiple dies on a fiber composite like what Microchip did with some of their SAMD MPUs. Chip stacking would imply that wire bonds are not used.
@@asterphoenix3074 Not necessarily. It has to do with the interconnect size. Package on package can work for a LPDDR4 chip for example (~60 pins), whereas 3D stacking can be full-scale (~10,000 pins). Also, you get higher parasitics with PoP and still need to translate the signal to something that can go external (that's fine for LPDDR4 though, because it's using the LPDDR4 standard). 3D stacking on the other hand typically just has re-drivers (buffers) to go between dies. So I guess tl;dr. If you want to stack something that you could otherwise put on the motherboard, then PoP is fine. If you need something higher performance, you want 3D stacking.
The terms "Zen 3" and "Zen 2" are misused here to explain CCD, what you actually mean is "Vermeer" and "Matisse"... there are other Zen 3 and Zen 2 CPUs like Cezanne and Renoir that are monolithic and don't use CCDs.
This, and AMD seem to have dropped the "CCX" terminology for Vermeer / Milan, because these chips no longer have a crossbar connecting the 4 cores, instead all 8 connected via ringbus.
@@saricubra2867 Because it uses zen 3 chips. The 5700g actually loses to the 5800x by quite a huge margin, so much that it is much closer to a 3700x in multicore performance due to the lack of cache.
@@saricubra2867 Yes, the 5700g beats the 3600 and 3700x in single core, but that has nothing to do with its packaging. Its monolithic form factor lets it down in multicore performance, because it is restricted in cache capacity. Your original comment was "5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count". Why mention the core count if you were comparing single core performance? It is really an unfair statement when you are comparing zen 3 monolithic to zen 2 chiplets, then concluding that chiplets are worse because faster zen 3 cores on a monolithic package are faster in single core compared to older, slower cores on a chiplet design. You should be comparing either the 4700g and 3700x, or the 5700g and 5800x, not the 5700g and 3700x, if you want to argue about the packaging technique for the cores.
I know this title seems catchy, but it's an over simplification on a rather trivial difference... The big difference between AMD and Intel performance comes down to the CCX and internal core architecture, and not the package technology used... The package technology has more of an impact for manufacturing costs and yields than for performance. You could have spent time talking about how the cache sizes and philosophy is different, how the inter-core communication strategy is different, how the branch predictors and target caches are different, how the instruction length decoding is different, how the instruction decoders themselves are different, the differences in the scheduling structure, the difference in the register files and re-order buffer, etc. But instead... you discuss the manufacturing difference and still don't get that quite right... So a few clarifications. 1) The latency in infinity fabric is largely due to the off-die communication. The signals within the die are far weaker and have to be translated into something that can leave the die and then translated into something that can work in the next die. It's sort of like fiber-optic ethernet, you have to translate the electrical signal into light, travel along the fiber, and then translate the light back into an electrical signal. However, the latency for infinity fabric for die-die communication, is on par with the far ring communication on intel CPUs. So it's not the major contributing factor for performance. 2) Infinity fabric is not serial, at least from what I could find. It utilizes SERDES for fewer wires, but it is still able to transfer 32-bits at the 1.6-1.8 GHz interconnect speed. That does not make it serial - it's effectively identical to a 32-bit bus. It should be noted that infinity fabric is a NoC, just like the ring-bus on Intel chips, where the flits are 32-bit. Granted though, the Intel ring bus NoC is likely wider (possibly 128-bits). I don't believe this is public knowledge, so I'm not sure about the exact parameters. 3) The video said that the core-core communication is slower across infinity fabric, however, it should be noted that the majority of the communication is not core-core. Instead, it's cache-cache communication (i.e. maintaining memory consistency and executing atomic operations). Core-Core communication would imply mailboxes, IRQs, or some sort of MSR based messaging.
@@richardsalazar4817 No, the 3d-vcache is just to have a bunch of cache. To do any sort of computation, data needs to be moved from memory into the CPU. If it's in DRAM, then that takes a relatively long amount of time (1000s of CPU cycles), whereas if it's in SRAM (cache), that can be as low as 3 cycles for L1, or 50 cycles for the L3. This is largely due to the inherent properties of the memory technology itself (DRAM vs SRAM). So ideally, you want most of your data in SRAM. But SRAM also has the problem that it's not very dense, making it expensive in large quantities. However, if instead of making the CPU die bigger to fit more SRAM, you can put it in another die sitting atop the CPU die (the 3d-vcache), then you don't need a very big die for the SRAM. There are still limits though, which is why vcache isn't GBs in size.
@Finkel - Funk that’s honestly what I was hoping to see in their April Fools video, where Linus is replaced by Anthony and gradually loses everything before waking up at the end of the video revealing it was all a nightmare of his lol. Maybe next year.
Current Ryzen & Epyc chiplets do not use a silicon interposer. They use traces in the package substrate to connect the chiplets. However AMD already has an answer to Intel EMIB by using Elevated Fanout Bridge (EFB) from TSMC in their Instinct MI200.
@@niks0987 Apple M1 Ultra uses TSMC InFO_LI (Parallel) as confirmed by TSMC. Check the article published in Tom's Hardware on 27-Apr-2022. This is similar to what AMD uses in its Instinct MI200.
That's incorrect. 7nm EUV (as well as 5, 4, and 2 nm) can still do full wafer sized chips (i.e. one chip per wafer). The lithography constraint is that you need to expose the wafer in many small intervals. If what you said was true, then Nvidia and Intel would be unable to manufacture their monolithic chips, and neither could AMD manufacture the PS5 / Xbox X/S, both of which are also monolithic.
I usually bought Intel CPU's most of the time as they were always reliable, but over 1 year ago i went for AMD Ryzen 9 5900X instead. 100% satisfied with that too.
AMD ones are now as reliable as Intel, but because they are built differently it affects certain processing tasks. I'm a 3d visualizer and have been using intel chips for my rendering process, as well as them being the standard for most rendering farms. No problems all this while, until I switched to AMD and while the creation process is very much the same, when it comes to rendering AMD computes differently from Intel hence the render results are different and inconsistent with those rendered using Intel cpus. So I had to stick back to Intel for my work, but for anything else like coding or gaming there's no issue. I believe it would also affect physics simulation as well. I guess what I'm saying is that for the average user it won't matter the way AMD and Intel chips are built differently but for calculation sensitive tasks it does.
@@kenhew4641 AMD(AMF/VCE) definately sucks when it comes to rendering and encoding compare to Nvidia NVENC and Intel QSV (EposVox made good analysis on this)
Very good video. I enjoyed it because it discussed the underlying tech of something we use instead of a million dollar server that’ll never use or need in my life.
Anthony, your presence here is great! It looks WAY more natural when you're not trying to hide the 'clicker' thingie :) If anything, this fits YOU very well, since YOU are the one who shows us how things work IN DEPTH. So it fits 'conceptually' too. I approve wholeheartedly. We all know 'how the pie is made' by now; so much 'behind the scenes' information about LMG; ...there's no need to pretend you're on network television or something :)
@@HULK-HOGAN1 Yet here you are, commenting on a video with Anthony in the thumbnail. It seems 'going out of your way to avoid anything with Anthony in the thumbnail' does not include 'NOT CLICKING on anything with Anthony in the thumbnail'. Lightly stated; there are some flaws in your methodology. More firmly; do something positive in your life - something that you truly love - that drains the energy and need from you to want to be negative towards others. Anthony makes complicated topics feel understandable to regular people, and is able to make 'us regular folk' feel excited about things we had no idea even existed 2 seconds ago That is an exceptional skill. - My question to you is; WHY do you waste your time commenting negative shit; especially if you didn't even feel like watching this video "because Anthony's in the thumbnail"? - There's enough negativity in this world. Whenever you want to feel better about yourself by dragging others down, just because your own life isn't working out like you pictured... I don't need to hear/read your '2 cents'. - ... And.if that last part is the case; happy to talk sometime, or maybe go see a psychologist (it can help out a lot - trust me on that one). You're not alone in your misery; there's better times to come, even if you can't picture them right now. I know how tough shit can get. It gets better. Ain't no shame to ask for help along the way - that can save you a couple years (again; trust me. I know) Anyways; no more negativity towards people on the internet, please. Talk to people about how you feel instead. It's scary as hell at first. You'll get used to it. And you might find out who your best friends truly are (they might not be the ones you think of first) One love, yo
This is a really good video. Just the right amount of depth, pacing and audio/video content. Anthony is very articulate and covers the stuff I care about. Thank you!
The difference is you are not replacing your motherboard every time with AMD. Gotta love spending $200 bucks on a motherboard for a $300 processor. AMD BABY.
@@fahrai4983 Yea great then I will have the AM5 board for the next 6-8 years. The point is a new generation does not mean a new board EVERY SINGLE TIME like intel does purposefully. There is zero reason for it. "Oh we added a pin so its 1151 pins instead of 1150 now, that extra pin does nothing but we changed the pattern just to screw you." I understand AMD has to update their socket with technologies but we got so many glorious years of AM4, and before that, AM3.
on a lower level, the cores are also structured differently between brands, with intel favoring having a large branch predictor and having much higher transistor count for instructions to push through (beyond the more complex branch predictor). This leads to marginally better single core performance, higher power draw and less space on the die for cores (ignoring MOSFET size differences). Because AMD favors less branch prediction and generally less transistors in a instruction path, they are generally able to have more cores that run more efficiently with marginally worse single core performance due to worse branch prediction. There's a lot more to it, but that has been a big difference between the 2 brands since AMD started making their own x86 chips
yep, this is why in games (which mostly require high single core performance) intel beats AMD, while workload processes (such as decompression and compression, physics simulations) run better on AMD because it is better suited for it than intel..
@@petrkdn8224 and also at the end of the day,both chips can do gaming and workloads :) unless you are obsesed with numbers....for us it doesn't matter what you choose :)
@@robb5828 yes of course, both are good.. I have an i3 7100, sure I can't run modern games on high settings, but it still runs everything (except warzone because that shit is unoptimized as fuck)
@@robb5828 to add to your point, if hardware/software has "solved" your workload already (common example being word processing) any chip will do and many tasks like gaming are more demanding on other systems within a computer/network. So the differences being marginal already have even smaller impacts if at all in the larger picture.
There were a lot of differences between AMD and Intel that I really wasn’t familiar with when doing my first build. Like, I saw a lot of things mentioning XMP profiles for RAM, and then I spent god knows how long trying to figure out how to enable XMP, because that’s what you’re supposed to do… nobody ever said anything about DOCP. I wouldn’t even know it existed!
Yup. Always had Intel till the 3600 launched and actually had to google AMD XMP to figure out it was called DOCP though the manual probably would have mentioned that had I read it. Still can't wrap my head around overclocking
This was actually quite informative. I was expecting more benchmarking and specific tasking head to head, but I definitely learned something new and useful. Always good to see Anthony showing out, good stuff, great channel and as always, I look forward to moer!
As a newish gaming pc user something that has made me wonder, is if an amd gpu works more efficiently when paired with an amd cpu, or if it matters at all if you pair your gpu with what ever brand processor? This would be a useful video topic for a lot of people I believe.
This man always paces his presentations so you can follow them. I really appreciate that - not too slow, not too fast. Some of the other hosts in this group have zero sense of how to structure their presentaions.
Anthony is just someone who can probably explain almost anything you need to understand - maybe, he should narrate that "easy" quantum mechanics book by Hawking - "The Theory of Everything."
Another great Anthony video. Personally I would love it if he would be allowed to make them even more technical, but I do understand the reasoning of LMG wishing to appeal to a wider audience
WOW! Another AWESOME video!! What would be so cool, awesome and appreciated is if you guys did a video on which one (Intel vs. AMD) is good for Cybersecurity, Coding, Programming and the like, although it would be subjective it would also be great to be able to pick your minds about it all. Somewhat a "Knowing What We Know," Series. There are a whole lot of aspiring Cybersecurity/ Coding enthusiasts [such as myself] who are coming into it all blind and even caught up in picking between which one? CES 2022 had us confused even more with the plethora of awesomeness in the CPUs but now...which one would be good for what? Thanks!!!!
When I put together my pc, I went team red simply because I intended to upgrade later and I knew amd cpus have a habit to be chipset backwards compatible with older mobo chipsets. I still haven't upgraded though... (Still rocking a 2400g) I'd like to say with this edit I went to 3600 and it's amazing but I hit my limit, I need to get a new motherboard if I ever do upgrade further.
In the most simplistic terms, Intel had the bank to crush fair competition, and they had AMD licked on single core performance for ages. It is only within the last decade that multicore performance really started to become more prominent in the mainstream. AMD went back to the drawing board for their chiplet design and continued mutlicore performance improvements, which has made then as competive and moreso in recent years. There are tonnes more reasons, but those two stand out most to me
Venkat and his wife madhavi are new ceo of my company. You will see more snd more competition as I have manufacturing unit in every house , dont underestimate the power of sales owner ramya vallabh and her vadas and mirchi bajji. It can make kings , it can make bhagwans
they are counting laser disabled ones... because thats how they are made... its a six core part, but it has the entire 8 core chip. in theory, 1 or 2 of those cores didnt meet validation requirements due to defacts so they laser them off and sell it as a 6 core CPU instead. its the cheapest way to maufacture at scale, at least for now anyway...
@@holobolo1661 The yield on TSMC N7 by now is so high that you can bet they are crippling a tonne of perfectly good chiplets to fullfill demand of 5600(X). That is the sole reason why AMD up to now didn't offer a non-X 5600 at reduced prices. They only do now because of actual competition by Intel with parts like the 12400.
Would've been nice to mention that AMD still uses monolithic designs on it's laptops and APUs. Would have been an interesting aside to about the space disadvantages of chiplets. Great video though!
Parallel transmission of data suffers from one drawback, synchronisation. Remember when we had parallel interfaces connecting our hard-disks and printers? Remember how limited they were in speed because of the required acknowledgements, synchronisation, and reassembly silicon (parallel cache) used to ensure data was not lost? Remember when SATA and USB arrived and suddenly we had better drive speeds and device hubs were now possible? No? Oh, well. Just remember parallel data transmission architectures work most efficiently when using separate serial streams in parallel where each stream is independent and synchronisation is optional - just like PCIe. I'd be surprised if the Intel "parallel" EMIB was actually truly parallel. It is more likely it is used as a way to overlap execution ports on the cores. The giveaway is the lack of reassembly buffers.
I thought this would be about the architecture of the x86 designs they each use, but it turned out to be just about the recent way they're each implementing multicore.
@@hjups I'm not sure if the K6 was the last per-core equivalence. The last truly identical cores where Intel 80486 and AMD Am486. As for other cores, AMD until the K10 (Phenom) did not fundamentally change the architecture. Bulldozer (FX) was the first major overhaul. Intel changed things up a fair bit sooner, with Netburst (Pentium 4). Funnily enough both Netburst and Bulldozer were ultimately dead ends, worse than their predecessors. Intel brought back the i686 design in the form of first Pentium M and later Core2. Core2 competed against K8 and K10, which I think share the same lineage to the first microcoded "inner-RISC" CPU's like K6 and Pentium Pro. AMD instead started over once again, and that brings us to Zen. What I find interesting is that Zen3/Vermeer and Golden Cove/Alder Lake are very good at trading blows: depending on what you're doing, one can be wildly faster than the other. As far as I can see though, that mostly seems to be caching matters; a Cezanne chip does not have the same strengths as Vermeer, but does have the same weaknesses, as far as I can see. I'm also curious how far hybrid architectures are going to go. On mobile, they're a massive success, and Alder Lake has proven them to be very useful on desktop as well.
@@scheurkanaal I think you misunderstood my statement. I'm not referring to performance, I'm referring to architecture. Obviously, there are going to be differences that have a substantial effect, even as far as the node in which the processors are fabricated on. Yes, the last time they were identical in architecture was the 486, however, the K5/K6 and the Pentium Pro/Pentium 2/Pentium 3, were all quite similar internally. AMD then diverged with the K7/K8+ line, while Intel tried Netburst with the Pentium 4. After the failure of Netburst, Intel returned to the Pentium 3 structure and expanded it into Core 2/Nehalem/etc. and have a similar structure to this day. Similarly, AMD maintains a similar structure to the K10, with families like Bulldozer diverging slightly in how multi-core was implemented with shared resources. Also note that AMD since the K5, and Intel since the original Pentium and the Pentium Pro have used a "RISC" micro-operation based architecture. The original Pentium is the odd one out there though, since it was less apparent due to it being an in-order processor while the others have all been out-of-order. Hybrid architectures may not really go much further than Alder Lake and Zen 4D. There isn't much room to innovate in the architectural space, where most of the innovation needs to happen at the OS level (how do you schedule the system resources). It's also driven by the software requirements though. Other than that, there may be some innovation in the efficiency cores themselves, to save power even further, but in exchange for lower performance (the wider the gap, the more useful they will be).
@@hjups I was also talking about architecture :) I was just not under the impression K7 was much different from K6, since it did not seem all that different from what Intel was doing circa Pentium 3 (which is like "a P2 with SSE", and the P2 in turn was just a tweaked Pentium Pro), and the numbers also imply a more incremental improvement (although to be fair, K5 and K6 were quite different). That said, I wouldn't be so sure if Zen and K10 are that similar. As far as I know, Zen was (at least in theory) a clean-sheet design, more-or-less. I was also referring to micro-operations when I said "inner-RISC". The word "micro-operation" just did not occur to me. Finding something that said whether or not the original Pentium was based on such a design was also quite hard, so I assumed it didn't. It was superscalar, but I think the multi-issue was quite limited in general, which gave me the impression the decoder was like the one on a 486, just wider (for correctly written code). I don't know how far efficiency cores will go. Their use is not from a wider gap, but rather, more efficiency (power per watt). Saving 40% of power but reducing performance by 50% is not very effective. Also, in desktop machines, die size is a very big consideration, not just power. And little cores are useful here. Keep in mind that the E-cores from Alder Lake are significantly souped up compared to earlier Atom designs. That's important to maximize their performance in highly threaded workloads. I think the next thing that should be looked at is memory and interconnect. CPU's are getting faster, and it's becoming harder and harder to keep them properly fed with enough data.
@@scheurkanaal Maybe we have different definitions of architecture. SSE wouldn't be included in that discussion at all, since it's just a special function unit added to one of the issue ports, similar to 3DNow! (which came before SSE). The K5 and K6 are much more similar than the K6 and K7... The K5 and K6 even use the same micro-op encoding as I understand it. The K7 diverged from simple operations though into more complex unified operations, that's also when AMD split up the integer and floating point paths. The cache structure changed, the whole front end changed, the length decoding scheme changed, etc. As for P2 vs Pentium Pro, the number of ports changed, and the front end was improved to include an additional decoder (which has a substantial difference for the front end performance - it negatively impacts it, requiring a new structure). The micro-op encodings may have also changed with the P2 (I believe they still used the Pentium uops in the Pentium Pro which are very similar to the K5 and K6 uops). Zen may have been designed from the "ground up", but it still maintains the same structure and design philosophy - that's likely for traditional reasons (they couldn't think outside of the box). Although, it does have some significant benefits in terms of design complexity over what Intel does - especially when dealing with the x87 stack (the reason why the K5 and K6 performed so poorly with x87 ops, and why the K7 did much better). Yeah, I knew what you meant by "inner-RISC". I just used more technical terms. The P1 was touted as two 486's bolted together, but that was an overly simplified explanation meant for marketing people who couldn't tell the difference between a vacuum tube and a transistor. In reality, you're correct, the dual issue was very restricted, since the second pipeline really could only do addition and logical ops, as well as FXCH which was more impactful (again for x87). I would guess that most of the performance improvements came from being able to do CMP with a branch, a load/store and a math op, or two load/stores. As for specific information about the P1 using uops, you're not going to find that anywhere, because it's not published. But it can be inferred. You would have to look at the instruction latencies, pipeline structure, know that a large portion of the die / effort was spend on "emulating instructions" (via micro-code), and have knowledge of how to build something like the Pentium Pro/2/K6. At that point, you would realize that the P1 essentially had two of what AMD called "long decoders" and one "vector decoder", which it could either issue two "long" instructions or one "vector" instruction. The long decoders were hard coded though, and unlike the K6/P2, the uops were issued over time rather than area (i.e. the front end could only issue 2 uops per cycle, and many instructions were 3 uops. So if logically they should be A,B,C,D,E,F, the K6 would issue them as [A,B,C,D] then [E,F], but the P1 issues them as [A,C],[B,D],[E,F]). Yes, power efficiency is proportional to performance. The wider the gap implies more power efficient. But there's also the notion of making the cores smaller too and throwing more at the problem (making them smaller also improves power efficiency with fewer transistors). If the performance is too high though, there's no reason to have the performance cores, which is what I meant by the wide gap being important. Memory and interconnect are an active area of research. One approach is to reduce the movement of data as much as possible, to the extent of performing the computation in RAM itself (called processing in memory). It's a tricky problem though, because you have to tradeoff flexibility with performance and design complexity (which is usually proportional to area and power usage - effectively energy efficiency).
To do all of the architectures and GPU styles justice, you could do a four-part video thing explaining each GPU and how each architecture works. What do you guys think?
I'm reading about Operating systems and i just discovered that the big difference lies in the Architecture....both perform the same tasks quite differently but the results of a a top tier AMD or Intel CPU are hard for the average user to even notice
I'd love to see a video of if it's possible to add your own CCD if you could get the parts to just add more cores to your existing CPU using a cpu with an empty CCD section. Might want to get a microscope for that one and I doubt you could ever do it at home but would be interesting to see if it is possible.
I mean you probably could but there would be tons of issues. The chip would not be suppported by any motherboard and would need a custom bios. You'd probably have differences in the chips that ones produced together would not have. It would be insanely easy to mess up. It might be fused off which would completely neglect doing anything I'm pretty sure people have added more vram to gpus and it has wroked but very was very unstable.
@@gamagama69 Seems that if the chip can use the signals used to identify 3900x or 3950x silicon then maybe, you could use existing In bios signatures for existing ryzen chips to make a 3800x into a 3950x but that would be extremely difficult without Nanometer scale precision tools.
Well yes and no. Intel uses a comparatively traditional ringbus for in die communication. 2 cores not directly in line can also not communicate directly with this method. The Infinity fabric adresses this Problem. Thats why its named Infinity fabric. Infinite numbers of cores can directly communicate. And while this increases latency against ringbus with lower core counts it decreases latency for their huge core count epyc line up. For regular Desktop its just Cost reduction though atm
Been using intel since 8088/86 days. Intel got lazy and sloppy by the time I had built my i7 3770 3.5 ghz. At end 2018 I switch to AMD 2700x and she been a beast and great Cpu, at least for games and software I use. Sure ive used few AMD Athlon over the years. But at the time intel was king for so long, because the lack of competition. They got lazy. Thankfully now both Amd and intel compete with each other now. Been happy with Amd and prob won’t be buying any Intel cpu for awhile. Intel GPUs I’ll be watching, looking at some point replace my GTX 1080ti. Then again RDNA3 sound good. So does future intel GPUs. Time will tell if either them will be able compete with RTX 4xxx when they come out. Good video.
Intel not only got lazy but also has some dodgy corporate stuff that puts more money in their pockets while the consumers are stuck with "you only need 4c/8t". Whenever I can I have switched to AMD both on my main desktop and laptop and given the chance I also recommend AMD everyone else around me. Not to mention as well that AMD has a really good track record so far of supporting their platform for far more generations, I'm talking about AM4.
@@13thzephyr Yeah I also have been recommending AMD Ryzen after I built my 2700x. Been very happy with it. And they run cooler then intel cpus imo. Yeah after I built the 2700x. I know soon after we started also hearing about all Cpu vulnerabilities that dates back like what 10+ years. Spectre/meltdown and bunch other. I also know Amd isn’t perfect and it has its share issue too. But I think generally AMD are better made and better secure. TSMC is currently the leader. They way better imo to anything out there. Yeah I know my zen+ (2700x) was on global Foundries. Been happy with it. Prices here Canada for CPUs and any pc hardware still kinda high or insane. Though slowly starting come down. Some day I’ll upgrade zen 3. But I’m in no rush. Most my online friends have also moved to AMD Ryzen and the small indie dev team I was helping few yrs back, (I was ex mod/server admin /tech help guy) I know one head devs (Vipe) has also upgrade to 3900x or 3950x, I forget which, but I know he was very happy with it. Let’s him code UE4 much faster, which let them beta test more quickly, then can deploy patches more easily for their dino UE4 game. :) Would only recommend intel if your doing specific tasks or apps the run better on intel. Zen 4 ( AM5) sound very impressive as does RDNA3. Pc have come a long way from old 286/386/486 days. Lol
@@xruud24 Yeah hopefully RDNA3 will be as good as next RTX GPUs. Been happy with 1080ti, but imo Nvidia need some spanking, they been at the top too long and Nvidia kinda becoming more and more anti consumer. Would love see AMD or intel GPU take lead for few years. But prob never happen cus Nvidia has $$$$$ to keep their share holder happy.
i've never really cared about either lol id just try to build comparable systems and then decide based on over all price / taking into consideration reviews on all the parts around them. was working on it a bit last night as im considering upgrading and noticed that i7-12700k out performs the ryzzen 9 5900x by a decent margin and is cheaper which was interesting to me as a step up on either side was a huge price jump for not a big jump in power.
It’s kind of weird watching Anthony in a Techquickie instead of “i used a paperclip and an experimental linux compilation to compress and decompress data going between the two Linus Media Group buildings in real time”
I decided to try an AMD machine after being with intel for a few generations. The Infinity Fabric was completely unstable and caused any audio playing to be filled with static and blue screening occasionally. I tried for a week with updates, tweaks, tutorials but couldn't stabilise it. I sold the parts and bought intel parts and had no problems at all. I've been building computers for myself since I was twelve years old (20 years), and that AMD machine was the only time I was forced to give up when presented with an issue. I've bounced back and forth between the two, as well as ATI and nVidia over the decades, but that experience really put me off AMD for the moment.
Lol i build since 15 years, just went with intel. This time the hype about Zen 4 was so huge, i could not resist to try. I bought a 7900x. Hat stuttering issues, because of ftpm, issues with memory controller. Then after two days i returned it and bought a 13600k, wich worked perfectly since. I cant relay my money anymore to AMD. First impression not good.
One correction here. Meteor lake doesn't use EMIB, it uses Foveros. Basically 3D stacking of silicon. But unlike TSMC/ AMD 3D v cache, meteor lake can be overclocked like normal CPUs.
I would love a series of video that explains from the utmost bottom to top how tech things work. Like ok, you have cores in a CPU, but what are cores? According to google, they are "the pathways made up of billions of microscopic transistors". Sure, but then what are transistors? And on it goes, all the way down to the most basic level at which CPUs operate, the electric impulses or whatever. How does a lump of metal and silicon mage magic happen?
It still fascinates me, 2 companies, started in the same decade (okay intel was named different back then), and they are still competing against each other as "top dog" as such. Kind of reminds me of 2 brothers constantly trying to 1 up each other.
@The Deluxe Gamer they arent really "competing" as would be in other countires where there is an acutal left / center party rather than the US's 2 right wing parties, if it were really comparable to US political system then both intel and amd would have to be competing in a single sector of proccessors such as workload or gaming , while they do both (and they have different features etc etc)
Parallel interconnect has its drawbacks for instead when there is a curvature in the lanes, so for instance the higher bits has to go farther. So usually parallel buses had to have lower frequencies and therefore were slower and more expensive (more lanes->more complexity for design+harder manufacturing+more materials) compared to the serial ones. This is why FSB was replaced by both of the manufacturers, and there are really successful serial connectors, like USB. And as far as I know Infinity Fabric can be used to connect distant parts or multiple CPU dies, so it being serial in many of its usages most likely definitely not a drawback for it. But in a short distance without bends like these tiles Intels way of "gluing" chiplets/tiles together can be a good option, we will see.
I think you are wrong regarding EMIB and Infinity fabric. First comparing them both is kind of odd, as EMIB is as far as I know actually just an embedded die in the substrate and does not define any protocol, It's "just" the wires that connects the dies and can provide PCIe, HBM, UCIe or what ever connection. 1:16 Yes Infinity fabric is a serial communication, but that does not necessary mean higher latency. The infinity fabric link (IFOP) is 32b parallel and gets the data from the CAKE with 128b. But as the IFOP is clocked at 4x the clock speed it is able to transfer those 128b as fast as the CAKE. 4:00 As said above it is only the connection, not a protocol, so it does not require parallel data transfers.
Anthony's presentation really improved over the years! People with deep understanding of complex subjects AND the skill to convey information efficiently are too rare. We need more Anthonies.
Serial communication can be dine faster then parallel, and take much less space. That is why the industry left the parallel IDE for SATA, which is serial. Now, IF you can afford the space for parallel connections, then it can move data faster, and a higher cost in space and money.
With AMD moving to TSMC N5, I felt they should move back to monolithic design for parts 8 core and less, and have the snap together interface, so if you want to add an 8 core chiplet, you can. I think this would be ideal for Ryzen considering 16 core compute is still PLENTY of compute power. And then when they want to make their move to big-little, they could use the same approach, but do it with N4 or N3 where the amount of density still allows them to use a very small die and don't take much losses. In other words, the approach on how to do something can't be looked at in a bubble. It has to account for the total ecosystem, including the manufacturing process and the node being used. Sure, this next gen for Intel, or actually two generations from now will use tiles. But what happens when they can finally produce Intel 20A? Are you going to use tiles to create a 12 core part? I mean a monolithic design on 20A means the chiplets will be TINY. It would seem better to go back to monolithic so many desktop parts, but leave the ability to snap on another chiplet (tile) to add another core complex. Now, for WS and server that's a totally different realm, but desktop, for most people is STILL browsing the web, office apps and media, and not editing media. You don't need the cost of these interconnects, unless maybe it's an APU, in which case the APU could be a tile/chiplet. I think this would be the most cost effective approach and not a waste of die space. I don't think total package size could shrink much because of so many connections needed between the MB and the CPU. But the die could shrink quite a bit.
I always went Intel because I valued single core performance and power/thermal stats a bit more. Once games favor amount of cores over single core performance and AMD are still better for that, I will switch to AMD.
Ok here's a video suggestions Can I turn my one home PC into many home computers with a hypervisors running at least one gaming rig, a nas server, a home media server and a router And what are the drawbacks to such an attempt 🤔
Intel also has less cache compared to AMD cpu's which tend to slow down your computer after a while. For example I had switched from a ryzen 3700x to i5 11400 system because I had sold that computer to a friend and at the time intel 11400 system costed much less than zen 3 systems. And i5 11400 is supposed to be faster than 3700x for single threaded applications right? Yes it is faster in games but after only 9-10 months of usage, the web browsing experience and a couple applications like obs got significantly slower compared to 2 years of heavy usage on ryzen 3700x. I am now just lazy to reinstall windows due to my job taking too much of my time and leaving no room backing up stuff. And for those who might ask, I don't have more programs or I'm not using an antivirus, I still have the same ssds, I am up to date on drivers and I don't use browser extensions... And no the cpu or memory usage isn't high. And I got significantly faster memory on this system with super low timings. And yes memory overclock is stable, It has passed memtest 1500%, linpack, time spy, y cruncher all of that. So yeah, at least as far as I can tell 11th gen intel sucks in that case which I think is caused by 32 megabytes of l3 cache vs 12 megabytes. Making a video on youtube full screen on chrome is taking a couple seconds for example. I mean like wtf...
@@zatchbell366 hwinfo shows total 9tb host writes, its a samsung 970 evo plus 2tb and it only has windows and some programs, total 230 gb is used so I dont think thats the case
@@danieljimenez1989 Just reinstalled the windows and all the other programs and all the windows updates on the same SSD. Everything is now running flawlessly fast. So apparently it was windows and software updates bloating the system which made the cpu or cache no longer being able to keep up in some programs. I have got my bookmarks and everything else back on the chrome. And all the "default programs" are still running at startup, got the same drivers installed as I had my "install" folder remaining on another drive which I keep all my driver setups. I also have my steam and games installed as well. So it was not SSD nor anything else. Just stupid windows bloating things up.
Hi. Great video from Techquickie. I hope Techquickie would make a video for comparison, which nowadays cpu is better for Linux? AMD for Linux or Intel for Linux. Thanks
This is not the only "actual" difference, as Intel and AMD differ noticeably in terms of Technologies and Instruction Sets: TSX and AVX512 are the obvious ones to note from Intel, but because chips from Team Blue also have specific resources that better optimize Media Production and Broadcasting (QuickSync) and noticeably faster rendering of AI effects or projects, there is a genuine reason to go for Core, Core: X Series, or Xeon parts over Ryzen and Threadripper if the applications you use make genuine use of them in one major capacity or another; Ryzen's rough point is definitely in AI and also the inability to use certain apps and emulators properly due to the lack of Intel TSX and AVX512, it'll definitely be felt in certain software even on a 5950X or Threadripper and thus my advice to study up on which software benefits from having certain Intel~based technologies or instructions is more important than ever, there may be genuine reason to have two or more separate systems in your house based on Core and Ryzen due to this and thus being fluent in understanding the differences will help you optimize them for specific workloads. (:
Well said. I was expecting to find some point I disagreed on but you pretty well nailed it. AMD APUs have hardware encode but it isn't as polished as Intel's, that said, I've got a Skylake that isn't that terrific for encoding so it's not as though Intel has had this capability for long. Their decode is excellent and very low powered, their encode leaves a lot to be desired. Neither company's encoder is as good as Nvidia.
Well, intel 12 gen already oficially not support AVX-512 (i wonder why... [sarcasm]), and they now forcibly remove instructions from newer CPU's from what i heard, but i may be wrong there. So... Yeah, that point kinda missed, but overall true, Intel have some proprietary licensed instructions, which can give advantages here and there in specialised tasks
@@glenwaldrop8166 Well, you could start with the fact, that Series X is almost useless the last 3+ years, unless you want to save a buck for much lower performance(except a literal FEW programs/Games), but then you could buy 3950x(let alone 5950x) and save more for better performance on average(or (near) total better performance for 5950x) than Series X (10980xe) Core is only good for gaming compared to Ryzen (AMD did pretty good with 5000, too bad they increased the price, but i guess they are lowering it now) EPYC vs Xeon is outside of my FOV, but i wouldn't expect EPYC to lose there, they are mighty cheaper to produce, which makes them easy to stack performance in all price ranges as well as easier to stock more of them. But for certain productive workloads and gaming/hybrid Intel might be arguably better
@@lilituthebetrayer2184 honestly, Intel is only faster in games in certain circumstances and then it's not massive. Most people are GPU limited anyway. You can get 60 fps out of a Sandy Bridge or even the FX with the right GPU. For high frame rate gaming both companies can provide over 120 fps, being nitty about that last couple of frames is silly. If you lose because it was only 100 fps instead of 120, it wasn't the computer that lost the match.
@@glenwaldrop8166 I tend to buy once and for a good time, which helped me in the mining shortages. Let alone badly optimized games, Sandy Bridge can be really weak for some memory intensive games. Personally i wouldn't chose anything DDR3 by now. It's better to have 10 or 20 frames over than under and especially better minimum frames, in case you need V-Sync. Emulation, old games, rendering, archiving, multitasking, etc. It's always good to have a better rig, if you have the money and sometimes even if you don't really.
Well even though Intel is actually coming back, I think that my next build I'm going to switch team to red. I stayed with Intel mostly due to the compatibility but now with their new e-core/p-core design(that doesn't work well with older software) and with constantly improving AMD drives I guess that unless Intel won't solve this problem most of Intel fans will switch side.
@@kusayfarhan9943 good to know but still there are people having issues with drm on some older software (Italians are lazy into developing new stuff so at least here I heard some speaking about it) and obviously in my specific case some older multiplayer games. I'm still an Intel user so who knows, maybe all of early adoption issues may be solved when I l'll have to ugrade my rig but at least for now as someone who works and plays on the same rig I don't want to deal with those problems.
@@rk3senna61 they've had it for a long time, they just typically weren't discrete gpus, just integrated, they're starting to do discrete, but gpus in-of-themselves is not new to them.
Video Idea: I just read something about SGX and only being on 10 series for playing UHD blu-rays in 4k. It is my understanding you can forget about 4k uhd on AMD. I’m wanting to build a home-theater pc and would like to know other “gotchas” or is home theater on a pc no longer possible? It is difficult to find a pc that has a 5.25 bay for blu-ray or dvd playback. Is blu-ray playable on amd? I know the Netflix app can stream 5.1 but are there other ways to get surround sound via streaming on a pc other than that? Anyway it is a topic I wish could be revisited for that use case. Thanks.
Please do more videos like this. Focused on the chips, technologies behind and so on. It's awesome content.
Yes, it's nice to get a glimpse into how the hell this stuff works
And have Anthony host them
He needs to do Mediatek vs Qualcomm
This. But I wish they were TechLongies. Ran through it too fast to really comprehend and didn't go into deep details.
@@xADDxDaDealer dis iz de wey
A version of this explaining the difference between Nvidia, AMD, and Intel's GPU architecture would be amazing!
for real
I think that would take longer than 4 minutes to do justice. But I guess you could break it down as AMD focuses on having few ultra-wide cores, and also puts an emphasis on software control (for example, divergent branch instructions are implemented with code rather than hardware resources). Intel focuses on many ultra-small cores, more similar to how GPUs were made before unified shaders became the norm, and NVidia is somewhere between the two with an emphasis on hardware instruction support. So in theory, Intel would be good for lots of divergent workload (like many small triangles), AMD is good for lots of uniform work (like many big triangles), and NVidia is adaptable (it can do both types of work, but not as efficiently). Most rendering applications have a mix though, which is why NVidia usually does better.
And Apple's!
Yes please
@@possamei Apple's GPU architecture is based on the PowerVR TBDR, which has routes in the GPU used by the SEGA Dreamcast. It's closest to Intel's GPU architecture (compared to AMD and NVidia), but unlike Intel, it has a special shared memory specifically for the framebuffer. In that architecture, the screen is drawn in tiles, and blended within the shared memory before being written back to the VRAM (the goal is to never fetch the framebuffer from VRAM), but in exchange it imposes restrictions on latency - you need to set up your frame one frame before it's drawn (so you get 1+ frames of lag with optimal performance).
AMD: "We're introducing chip stacking"
Pringles: 😎
lol
And once you pop, the fun don't stop
I know he gets mentioned rather frequently, but Anthony is a godsend for this channel. His voice, his mannerisms, his general disposition is just perfect, especially in videos like this.
Anthony is great at explaining things. Love him.
I like the topics he chooses, but Riley, James, Linus, etc are still my preferred hosts.
Anthony is the best. and he has a bad ass track suit.
@@Papa-Murphy Agree.
Anthony is good at explaining things and has a kind of "normal", relatable matter to him. But I personally prefer the other hosts for their energy, rapid delivery, and comedic timing.
Linus actually dislikes(not really dislikes, but more like avoids) working with Anthony, because he can(in Linus's own words) get a bit too technical. I do enjoy Anthony ''s content a lot tho
Anthony's Tone, Inflection, and personability on screen plus how he arranges his content makes the information he is presenting easy to digest and does not leave you feeling lost. I feel like Anthony is writing the Electronics for Dummies LTT version while making you feel smart just listening to him. He is a Great and invaluable asset to the team.
He will be missed.
@@RodrigoAReyes95 He died?
@@vengeance2825 no, but she isn’t “Anthony” anymore, if you know what I mean 😑
@@RodrigoAReyes95 Ohhh, him became a shim... dang.
Thank you for taking the time to explain the differences between Intel & AMD, especially since the marketshare between the two are now neck-in-neck and not the blowout Intel once had.
I guess what it boils down to for someone who does a lot of programming and some casual gaming on older games like EVE Online and WoW, the differences really doesn't matter. It's like trying to compare a detached house with a semi-detached house. The architecture might be different but the house is still your own.
Stacking chips is actually used a lot in mobile phones. Even the raspberry pi zero has stacked chips
You're describing a different technology called package-on-package. Chip stacking is 3D-integration using through-silicon vias, and significantly more complicated and expensive to do.
@@hjups yeah you are right I confused the two. But the raspberry pi zero 2 does have true chip stacking with wire bonding. Just take a look at the x-rays!
@@bonnome2 I wouldn't consider wire-bonding to be stacking. It's more like one of those weird package-in-package things. An evolution of multiple dies on a fiber composite like what Microchip did with some of their SAMD MPUs.
Chip stacking would imply that wire bonds are not used.
@@hjups is package on package less efficient or something?
@@asterphoenix3074 Not necessarily. It has to do with the interconnect size. Package on package can work for a LPDDR4 chip for example (~60 pins), whereas 3D stacking can be full-scale (~10,000 pins). Also, you get higher parasitics with PoP and still need to translate the signal to something that can go external (that's fine for LPDDR4 though, because it's using the LPDDR4 standard). 3D stacking on the other hand typically just has re-drivers (buffers) to go between dies.
So I guess tl;dr. If you want to stack something that you could otherwise put on the motherboard, then PoP is fine. If you need something higher performance, you want 3D stacking.
Ones innovative,
And the other ...
Is also innovative
READ MY NAME!!!!!
!
The terms "Zen 3" and "Zen 2" are misused here to explain CCD, what you actually mean is "Vermeer" and "Matisse"... there are other Zen 3 and Zen 2 CPUs like Cezanne and Renoir that are monolithic and don't use CCDs.
This, and AMD seem to have dropped the "CCX" terminology for Vermeer / Milan, because these chips no longer have a crossbar connecting the 4 cores, instead all 8 connected via ringbus.
5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count and don't have integrated graphics lmao.
I'm on team monolithic.
@@saricubra2867 Because it uses zen 3 chips. The 5700g actually loses to the 5800x by quite a huge margin, so much that it is much closer to a 3700x in multicore performance due to the lack of cache.
@@mingyi456 That is not true in terms of single core speed.
@@saricubra2867 Yes, the 5700g beats the 3600 and 3700x in single core, but that has nothing to do with its packaging. Its monolithic form factor lets it down in multicore performance, because it is restricted in cache capacity.
Your original comment was "5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count". Why mention the core count if you were comparing single core performance? It is really an unfair statement when you are comparing zen 3 monolithic to zen 2 chiplets, then concluding that chiplets are worse because faster zen 3 cores on a monolithic package are faster in single core compared to older, slower cores on a chiplet design. You should be comparing either the 4700g and 3700x, or the 5700g and 5800x, not the 5700g and 3700x, if you want to argue about the packaging technique for the cores.
I really enjoyed the detail in this. Interesting to deep dive into how the tech actually works. Thanks!
I wouldn’t call a five minute video on something as complex as CPUs as a deep dive.
lol u think this was a "deep" dive
I know this title seems catchy, but it's an over simplification on a rather trivial difference...
The big difference between AMD and Intel performance comes down to the CCX and internal core architecture, and not the package technology used... The package technology has more of an impact for manufacturing costs and yields than for performance.
You could have spent time talking about how the cache sizes and philosophy is different, how the inter-core communication strategy is different, how the branch predictors and target caches are different, how the instruction length decoding is different, how the instruction decoders themselves are different, the differences in the scheduling structure, the difference in the register files and re-order buffer, etc. But instead... you discuss the manufacturing difference and still don't get that quite right...
So a few clarifications.
1) The latency in infinity fabric is largely due to the off-die communication. The signals within the die are far weaker and have to be translated into something that can leave the die and then translated into something that can work in the next die. It's sort of like fiber-optic ethernet, you have to translate the electrical signal into light, travel along the fiber, and then translate the light back into an electrical signal. However, the latency for infinity fabric for die-die communication, is on par with the far ring communication on intel CPUs. So it's not the major contributing factor for performance.
2) Infinity fabric is not serial, at least from what I could find. It utilizes SERDES for fewer wires, but it is still able to transfer 32-bits at the 1.6-1.8 GHz interconnect speed. That does not make it serial - it's effectively identical to a 32-bit bus. It should be noted that infinity fabric is a NoC, just like the ring-bus on Intel chips, where the flits are 32-bit. Granted though, the Intel ring bus NoC is likely wider (possibly 128-bits). I don't believe this is public knowledge, so I'm not sure about the exact parameters.
3) The video said that the core-core communication is slower across infinity fabric, however, it should be noted that the majority of the communication is not core-core. Instead, it's cache-cache communication (i.e. maintaining memory consistency and executing atomic operations). Core-Core communication would imply mailboxes, IRQs, or some sort of MSR based messaging.
Yeah!
Is that why amd is implementing 3dvcache?
@@richardsalazar4817 No, the 3d-vcache is just to have a bunch of cache. To do any sort of computation, data needs to be moved from memory into the CPU. If it's in DRAM, then that takes a relatively long amount of time (1000s of CPU cycles), whereas if it's in SRAM (cache), that can be as low as 3 cycles for L1, or 50 cycles for the L3. This is largely due to the inherent properties of the memory technology itself (DRAM vs SRAM). So ideally, you want most of your data in SRAM. But SRAM also has the problem that it's not very dense, making it expensive in large quantities. However, if instead of making the CPU die bigger to fit more SRAM, you can put it in another die sitting atop the CPU die (the 3d-vcache), then you don't need a very big die for the SRAM. There are still limits though, which is why vcache isn't GBs in size.
@@hjups who are you?
@@gabadu529 A computer architecture researcher, who doesn't work for Intel or AMD.
Linus, give the man his own show already!
Sorry Linus, this is Anthony's Tech Tips now. ATT.
@Finkel - Funk that’s honestly what I was hoping to see in their April Fools video, where Linus is replaced by Anthony and gradually loses everything before waking up at the end of the video revealing it was all a nightmare of his lol. Maybe next year.
Lol, he can make his own channel whenever he wants.
@@TH3C001 I hope they see this for next year.
"man" lol
Ridiculously rude commercial break at 1:52, regular programming resumes at 2:23 🙂
Current Ryzen & Epyc chiplets do not use a silicon interposer. They use traces in the package substrate to connect the chiplets. However AMD already has an answer to Intel EMIB by using Elevated Fanout Bridge (EFB) from TSMC in their Instinct MI200.
Its interesting to know what apple uses in their ultrafusion, I mean if that is serial interconnect like amd or parallel like intel.
@@niks0987 Apple M1 Ultra uses TSMC InFO_LI (Parallel) as confirmed by TSMC. Check the article published in Tom's Hardware on 27-Apr-2022. This is similar to what AMD uses in its Instinct MI200.
@@srikanthramanan thanks, apple indeed does a serious business! Great info.
Smaller Chiplets are actually due to EUV Lithography.
Because they have to use mirrors instead of lenses, the area of the chip is quite limited.
I wish i could get my hands on some EUV lenses lol i wanna build a EUV microscope
@@mastershooter64 you will probably get a Nobel price, if you manage to make EUV work with lenses.
That's incorrect. 7nm EUV (as well as 5, 4, and 2 nm) can still do full wafer sized chips (i.e. one chip per wafer). The lithography constraint is that you need to expose the wafer in many small intervals. If what you said was true, then Nvidia and Intel would be unable to manufacture their monolithic chips, and neither could AMD manufacture the PS5 / Xbox X/S, both of which are also monolithic.
The size limit is still around 800mm² (or 400mm² for high NA), much larger than the compute chiplets AMD has been making (
@master shooter64 everything absorbs EUV, so it would be less useful than electron microscopes, and lower resolution.
I usually bought Intel CPU's most of the time as they were always reliable, but over 1 year ago i went for AMD Ryzen 9 5900X instead. 100% satisfied with that too.
New intel cpu somehow happens to be cheaper here so I use it instead.
AMD ones are now as reliable as Intel, but because they are built differently it affects certain processing tasks. I'm a 3d visualizer and have been using intel chips for my rendering process, as well as them being the standard for most rendering farms. No problems all this while, until I switched to AMD and while the creation process is very much the same, when it comes to rendering AMD computes differently from Intel hence the render results are different and inconsistent with those rendered using Intel cpus. So I had to stick back to Intel for my work, but for anything else like coding or gaming there's no issue. I believe it would also affect physics simulation as well. I guess what I'm saying is that for the average user it won't matter the way AMD and Intel chips are built differently but for calculation sensitive tasks it does.
Does AMD still make their chips run hotter than hell? The only one I ever owned fried itself. i have used Intel since(mid 90's).
@@kenhew4641 AMD(AMF/VCE) definately sucks when it comes to rendering and encoding compare to Nvidia NVENC and Intel QSV (EposVox made good analysis on this)
@@h.mandelene3279 sometimes, it depends on the set up but AMD set ups are usually hotter and more power consuming than Intel ones
Very good video. I enjoyed it because it discussed the underlying tech of something we use instead of a million dollar server that’ll never use or need in my life.
Anthony is my favorite person, nice to see him in a video
READ MY NAME!!!!!
!
Agreed. I love the way he explains stuff. He does it so clearly, but for some reason, I can't process or maintain the videos he's in.
I'd love to see a video talking about the differences in instruction sets between CPUs - x86/PowerPC/ARM, etc...
Anthony, your presence here is great!
It looks WAY more natural when you're not trying to hide the 'clicker' thingie :)
If anything, this fits YOU very well, since YOU are the one who shows us how things work IN DEPTH.
So it fits 'conceptually' too.
I approve wholeheartedly.
We all know 'how the pie is made' by now; so much 'behind the scenes' information about LMG;
...there's no need to pretend you're on network television or something :)
I don’t like seeing Anthony in videos. I usually go out of my way to avoid clicking on any video with him in the thumbnail
@@HULK-HOGAN1 Care to elaborate why?
@@HULK-HOGAN1 Yet here you are, commenting on a video with Anthony in the thumbnail.
It seems 'going out of your way to avoid anything with Anthony in the thumbnail' does not include 'NOT CLICKING on anything with Anthony in the thumbnail'.
Lightly stated; there are some flaws in your methodology.
More firmly; do something positive in your life - something that you truly love - that drains the energy and need from you to want to be negative towards others.
Anthony makes complicated topics feel understandable to regular people,
and is able to make 'us regular folk' feel excited about things we had no idea even existed 2 seconds ago
That is an exceptional skill.
-
My question to you is;
WHY do you waste your time commenting negative shit;
especially if you didn't even feel like watching this video "because Anthony's in the thumbnail"?
-
There's enough negativity in this world.
Whenever you want to feel better about yourself by dragging others down, just because your own life isn't working out like you pictured...
I don't need to hear/read your '2 cents'.
-
... And.if that last part is the case; happy to talk sometime, or maybe go see a psychologist (it can help out a lot - trust me on that one).
You're not alone in your misery; there's better times to come, even if you can't picture them right now.
I know how tough shit can get. It gets better. Ain't no shame to ask for help along the way - that can save you a couple years (again; trust me. I know)
Anyways; no more negativity towards people on the internet, please.
Talk to people about how you feel instead. It's scary as hell at first. You'll get used to it.
And you might find out who your best friends truly are (they might not be the ones you think of first)
One love, yo
@@HULK-HOGAN1 Opposite of the rest of us then
@@HULK-HOGAN1 Before anyone else responds to this, please remember: Do not feed the trolls.
I want to see a technological overview on the history of cpu coolers
You must be a fan.
@@GregMoress this fan spins as well
This is a really good video. Just the right amount of depth, pacing and audio/video content. Anthony is very articulate and covers the stuff I care about. Thank you!
The difference is you are not replacing your motherboard every time with AMD.
Gotta love spending $200 bucks on a motherboard for a $300 processor.
AMD BABY.
That’s not true this generation. The 5000 series is the last supported one for AM4.
@@fahrai4983 Yea great then I will have the AM5 board for the next 6-8 years. The point is a new generation does not mean a new board EVERY SINGLE TIME like intel does purposefully. There is zero reason for it. "Oh we added a pin so its 1151 pins instead of 1150 now, that extra pin does nothing but we changed the pattern just to screw you."
I understand AMD has to update their socket with technologies but we got so many glorious years of AM4, and before that, AM3.
@@fahrai4983 AM4 has been the latest since 2016, that's a long time. AM5 will probably last around the same amount of time.
And it’s slower
on a lower level, the cores are also structured differently between brands, with intel favoring having a large branch predictor and having much higher transistor count for instructions to push through (beyond the more complex branch predictor). This leads to marginally better single core performance, higher power draw and less space on the die for cores (ignoring MOSFET size differences). Because AMD favors less branch prediction and generally less transistors in a instruction path, they are generally able to have more cores that run more efficiently with marginally worse single core performance due to worse branch prediction. There's a lot more to it, but that has been a big difference between the 2 brands since AMD started making their own x86 chips
Interesting!!
yep, this is why in games (which mostly require high single core performance) intel beats AMD, while workload processes (such as decompression and compression, physics simulations) run better on AMD because it is better suited for it than intel..
@@petrkdn8224 and also at the end of the day,both chips can do gaming and workloads :) unless you are obsesed with numbers....for us it doesn't matter what you choose :)
@@robb5828 yes of course, both are good.. I have an i3 7100, sure I can't run modern games on high settings, but it still runs everything (except warzone because that shit is unoptimized as fuck)
@@robb5828 to add to your point, if hardware/software has "solved" your workload already (common example being word processing) any chip will do and many tasks like gaming are more demanding on other systems within a computer/network. So the differences being marginal already have even smaller impacts if at all in the larger picture.
There were a lot of differences between AMD and Intel that I really wasn’t familiar with when doing my first build. Like, I saw a lot of things mentioning XMP profiles for RAM, and then I spent god knows how long trying to figure out how to enable XMP, because that’s what you’re supposed to do… nobody ever said anything about DOCP. I wouldn’t even know it existed!
Yup. Always had Intel till the 3600 launched and actually had to google AMD XMP to figure out it was called DOCP though the manual probably would have mentioned that had I read it. Still can't wrap my head around overclocking
0:55 that seems labelled wrong. 5600 is Zen3.
There is no 5600 only 5600x and yes they meant 3600
@@Alirezarz62 there is 5600, since yesterday
This was actually quite informative. I was expecting more benchmarking and specific tasking head to head, but I definitely learned something new and useful.
Always good to see Anthony showing out, good stuff, great channel and as always, I look forward to moer!
As a newish gaming pc user something that has made me wonder, is if an amd gpu works more efficiently when paired with an amd cpu, or if it matters at all if you pair your gpu with what ever brand processor? This would be a useful video topic for a lot of people I believe.
go nvidia + intel
This man always paces his presentations so you can follow them. I really appreciate that - not too slow, not too fast. Some of the other hosts in this group have zero sense of how to structure their presentaions.
No, he thinks he's a woman now. 🙄
Anthony is just someone who can probably explain almost anything you need to understand - maybe, he should narrate that "easy" quantum mechanics book by Hawking - "The Theory of Everything."
Another great Anthony video. Personally I would love it if he would be allowed to make them even more technical, but I do understand the reasoning of LMG wishing to appeal to a wider audience
Ahh yes, thanks for making the entirely more relatable link to modern basketball court construction, certainly something I'm far more in tune with :)
WOW! Another AWESOME video!! What would be so cool, awesome and appreciated is if you guys did a video on which one (Intel vs. AMD) is good for Cybersecurity, Coding, Programming and the like, although it would be subjective it would also be great to be able to pick your minds about it all. Somewhat a "Knowing What We Know," Series. There are a whole lot of aspiring Cybersecurity/ Coding enthusiasts [such as myself] who are coming into it all blind and even caught up in picking between which one? CES 2022 had us confused even more with the plethora of awesomeness in the CPUs but now...which one would be good for what? Thanks!!!!
Nice presentation and explanation of Intel vs AMD tech. It will be hard to imagine what chip design will be like in 20-50 years.
When I put together my pc, I went team red simply because I intended to upgrade later and I knew amd cpus have a habit to be chipset backwards compatible with older mobo chipsets. I still haven't upgraded though... (Still rocking a 2400g)
I'd like to say with this edit I went to 3600 and it's amazing but I hit my limit, I need to get a new motherboard if I ever do upgrade further.
What is “chipset backwards”
This was very interesting Anthony and helped clear up a number of things I wasn't sure about 👍
Would have loved to see some background and why Intel was better for so long
In the most simplistic terms, Intel had the bank to crush fair competition, and they had AMD licked on single core performance for ages. It is only within the last decade that multicore performance really started to become more prominent in the mainstream. AMD went back to the drawing board for their chiplet design and continued mutlicore performance improvements, which has made then as competive and moreso in recent years. There are tonnes more reasons, but those two stand out most to me
Venkat and his wife madhavi are new ceo of my company. You will see more snd more competition as I have manufacturing unit in every house , dont underestimate the power of sales owner ramya vallabh and her vadas and mirchi bajji. It can make kings , it can make bhagwans
0:46 the 5600X has 6 cores, not 8 (unless you're counting laser-cut ones?). And 0:56, the 5600 is not based on Zen2. did you mean the 3600?
I was confused when it mentioned 5600 as zen 2.
lmao who even made that? they should double check those
they are counting laser disabled ones... because thats how they are made... its a six core part, but it has the entire 8 core chip. in theory, 1 or 2 of those cores didnt meet validation requirements due to defacts so they laser them off and sell it as a 6 core CPU instead. its the cheapest way to maufacture at scale, at least for now anyway...
@@William-Morey-Baker I hope they're not disabling perfectly good cores... that's so stupid.
@@holobolo1661 The yield on TSMC N7 by now is so high that you can bet they are crippling a tonne of perfectly good chiplets to fullfill demand of 5600(X). That is the sole reason why AMD up to now didn't offer a non-X 5600 at reduced prices. They only do now because of actual competition by Intel with parts like the 12400.
Would've been nice to mention that AMD still uses monolithic designs on it's laptops and APUs. Would have been an interesting aside to about the space disadvantages of chiplets. Great video though!
Parallel transmission of data suffers from one drawback, synchronisation. Remember when we had parallel interfaces connecting our hard-disks and printers? Remember how limited they were in speed because of the required acknowledgements, synchronisation, and reassembly silicon (parallel cache) used to ensure data was not lost? Remember when SATA and USB arrived and suddenly we had better drive speeds and device hubs were now possible?
No? Oh, well. Just remember parallel data transmission architectures work most efficiently when using separate serial streams in parallel where each stream is independent and synchronisation is optional - just like PCIe. I'd be surprised if the Intel "parallel" EMIB was actually truly parallel. It is more likely it is used as a way to overlap execution ports on the cores. The giveaway is the lack of reassembly buffers.
I thought this would be about the architecture of the x86 designs they each use, but it turned out to be just about the recent way they're each implementing multicore.
The x86 architecture difference is more interesting, in my opinion. They're vastly different strategies, which were last unified with the AMD K6.
@@hjups I'm not sure if the K6 was the last per-core equivalence. The last truly identical cores where Intel 80486 and AMD Am486. As for other cores, AMD until the K10 (Phenom) did not fundamentally change the architecture. Bulldozer (FX) was the first major overhaul.
Intel changed things up a fair bit sooner, with Netburst (Pentium 4). Funnily enough both Netburst and Bulldozer were ultimately dead ends, worse than their predecessors. Intel brought back the i686 design in the form of first Pentium M and later Core2. Core2 competed against K8 and K10, which I think share the same lineage to the first microcoded "inner-RISC" CPU's like K6 and Pentium Pro. AMD instead started over once again, and that brings us to Zen.
What I find interesting is that Zen3/Vermeer and Golden Cove/Alder Lake are very good at trading blows: depending on what you're doing, one can be wildly faster than the other. As far as I can see though, that mostly seems to be caching matters; a Cezanne chip does not have the same strengths as Vermeer, but does have the same weaknesses, as far as I can see.
I'm also curious how far hybrid architectures are going to go. On mobile, they're a massive success, and Alder Lake has proven them to be very useful on desktop as well.
@@scheurkanaal I think you misunderstood my statement. I'm not referring to performance, I'm referring to architecture. Obviously, there are going to be differences that have a substantial effect, even as far as the node in which the processors are fabricated on.
Yes, the last time they were identical in architecture was the 486, however, the K5/K6 and the Pentium Pro/Pentium 2/Pentium 3, were all quite similar internally. AMD then diverged with the K7/K8+ line, while Intel tried Netburst with the Pentium 4. After the failure of Netburst, Intel returned to the Pentium 3 structure and expanded it into Core 2/Nehalem/etc. and have a similar structure to this day. Similarly, AMD maintains a similar structure to the K10, with families like Bulldozer diverging slightly in how multi-core was implemented with shared resources.
Also note that AMD since the K5, and Intel since the original Pentium and the Pentium Pro have used a "RISC" micro-operation based architecture. The original Pentium is the odd one out there though, since it was less apparent due to it being an in-order processor while the others have all been out-of-order.
Hybrid architectures may not really go much further than Alder Lake and Zen 4D. There isn't much room to innovate in the architectural space, where most of the innovation needs to happen at the OS level (how do you schedule the system resources). It's also driven by the software requirements though. Other than that, there may be some innovation in the efficiency cores themselves, to save power even further, but in exchange for lower performance (the wider the gap, the more useful they will be).
@@hjups I was also talking about architecture :) I was just not under the impression K7 was much different from K6, since it did not seem all that different from what Intel was doing circa Pentium 3 (which is like "a P2 with SSE", and the P2 in turn was just a tweaked Pentium Pro), and the numbers also imply a more incremental improvement (although to be fair, K5 and K6 were quite different).
That said, I wouldn't be so sure if Zen and K10 are that similar. As far as I know, Zen was (at least in theory) a clean-sheet design, more-or-less.
I was also referring to micro-operations when I said "inner-RISC". The word "micro-operation" just did not occur to me. Finding something that said whether or not the original Pentium was based on such a design was also quite hard, so I assumed it didn't. It was superscalar, but I think the multi-issue was quite limited in general, which gave me the impression the decoder was like the one on a 486, just wider (for correctly written code).
I don't know how far efficiency cores will go. Their use is not from a wider gap, but rather, more efficiency (power per watt). Saving 40% of power but reducing performance by 50% is not very effective. Also, in desktop machines, die size is a very big consideration, not just power. And little cores are useful here. Keep in mind that the E-cores from Alder Lake are significantly souped up compared to earlier Atom designs. That's important to maximize their performance in highly threaded workloads.
I think the next thing that should be looked at is memory and interconnect. CPU's are getting faster, and it's becoming harder and harder to keep them properly fed with enough data.
@@scheurkanaal Maybe we have different definitions of architecture. SSE wouldn't be included in that discussion at all, since it's just a special function unit added to one of the issue ports, similar to 3DNow! (which came before SSE).
The K5 and K6 are much more similar than the K6 and K7... The K5 and K6 even use the same micro-op encoding as I understand it. The K7 diverged from simple operations though into more complex unified operations, that's also when AMD split up the integer and floating point paths. The cache structure changed, the whole front end changed, the length decoding scheme changed, etc.
As for P2 vs Pentium Pro, the number of ports changed, and the front end was improved to include an additional decoder (which has a substantial difference for the front end performance - it negatively impacts it, requiring a new structure). The micro-op encodings may have also changed with the P2 (I believe they still used the Pentium uops in the Pentium Pro which are very similar to the K5 and K6 uops).
Zen may have been designed from the "ground up", but it still maintains the same structure and design philosophy - that's likely for traditional reasons (they couldn't think outside of the box). Although, it does have some significant benefits in terms of design complexity over what Intel does - especially when dealing with the x87 stack (the reason why the K5 and K6 performed so poorly with x87 ops, and why the K7 did much better).
Yeah, I knew what you meant by "inner-RISC". I just used more technical terms. The P1 was touted as two 486's bolted together, but that was an overly simplified explanation meant for marketing people who couldn't tell the difference between a vacuum tube and a transistor. In reality, you're correct, the dual issue was very restricted, since the second pipeline really could only do addition and logical ops, as well as FXCH which was more impactful (again for x87). I would guess that most of the performance improvements came from being able to do CMP with a branch, a load/store and a math op, or two load/stores.
As for specific information about the P1 using uops, you're not going to find that anywhere, because it's not published. But it can be inferred. You would have to look at the instruction latencies, pipeline structure, know that a large portion of the die / effort was spend on "emulating instructions" (via micro-code), and have knowledge of how to build something like the Pentium Pro/2/K6. At that point, you would realize that the P1 essentially had two of what AMD called "long decoders" and one "vector decoder", which it could either issue two "long" instructions or one "vector" instruction. The long decoders were hard coded though, and unlike the K6/P2, the uops were issued over time rather than area (i.e. the front end could only issue 2 uops per cycle, and many instructions were 3 uops. So if logically they should be A,B,C,D,E,F, the K6 would issue them as [A,B,C,D] then [E,F], but the P1 issues them as [A,C],[B,D],[E,F]).
Yes, power efficiency is proportional to performance. The wider the gap implies more power efficient. But there's also the notion of making the cores smaller too and throwing more at the problem (making them smaller also improves power efficiency with fewer transistors). If the performance is too high though, there's no reason to have the performance cores, which is what I meant by the wide gap being important.
Memory and interconnect are an active area of research. One approach is to reduce the movement of data as much as possible, to the extent of performing the computation in RAM itself (called processing in memory). It's a tricky problem though, because you have to tradeoff flexibility with performance and design complexity (which is usually proportional to area and power usage - effectively energy efficiency).
To do all of the architectures and GPU styles justice, you could do a four-part video thing explaining each GPU and how each architecture works. What do you guys think?
Anthony's videos are informative AND entertaining. Well done sir, well done!
I'm reading about Operating systems and i just discovered that the big difference lies in the Architecture....both perform the same tasks quite differently but the results of a a top tier AMD or Intel CPU are hard for the average user to even notice
I'd love to see a video of if it's possible to add your own CCD if you could get the parts to just add more cores to your existing CPU using a cpu with an empty CCD section. Might want to get a microscope for that one and I doubt you could ever do it at home but would be interesting to see if it is possible.
I mean you probably could but there would be tons of issues.
The chip would not be suppported by any motherboard and would need a custom bios.
You'd probably have differences in the chips that ones produced together would not have.
It would be insanely easy to mess up.
It might be fused off which would completely neglect doing anything
I'm pretty sure people have added more vram to gpus and it has wroked but very was very unstable.
@@gamagama69 Seems that if the chip can use the signals used to identify 3900x or 3950x silicon then maybe, you could use existing In bios signatures for existing ryzen chips to make a 3800x into a 3950x but that would be extremely difficult without Nanometer scale precision tools.
It's pretty much impossible to do by yourself even if you could afford the needed tooling you ain't getting the microcode on to the CPU.
Well yes and no. Intel uses a comparatively traditional ringbus for in die communication. 2 cores not directly in line can also not communicate directly with this method. The Infinity fabric adresses this Problem. Thats why its named Infinity fabric. Infinite numbers of cores can directly communicate. And while this increases latency against ringbus with lower core counts it decreases latency for their huge core count epyc line up. For regular Desktop its just Cost reduction though atm
Video Suggestion: How are Programming Languages created?
READ MY NAME!!!!!
!
Its not that complicated. C compiler is written in C, java compiler is written in java, Python compiler is written in python,...
@@Slada1 python interpreter is written in c. java compiler is written in java/c/c++ ;)
@@Slada1 Yea, but how are those compilers created then?
First 15 seconds of the video was absolute facts!
Been using intel since 8088/86 days. Intel got lazy and sloppy by the time I had built my i7 3770 3.5 ghz. At end 2018 I switch to AMD 2700x and she been a beast and great Cpu, at least for games and software I use. Sure ive used few AMD Athlon over the years. But at the time intel was king for so long, because the lack of competition. They got lazy. Thankfully now both Amd and intel compete with each other now. Been happy with Amd and prob won’t be buying any Intel cpu for awhile.
Intel GPUs I’ll be watching, looking at some point replace my GTX 1080ti. Then again RDNA3 sound good. So does future intel GPUs. Time will tell if either them will be able compete with RTX 4xxx when they come out.
Good video.
Intel not only got lazy but also has some dodgy corporate stuff that puts more money in their pockets while the consumers are stuck with "you only need 4c/8t". Whenever I can I have switched to AMD both on my main desktop and laptop and given the chance I also recommend AMD everyone else around me. Not to mention as well that AMD has a really good track record so far of supporting their platform for far more generations, I'm talking about AM4.
It seems like Rx 7000 will beat Rtx 4000
@@13thzephyr
Yeah I also have been recommending AMD Ryzen after I built my 2700x. Been very happy with it. And they run cooler then intel cpus imo. Yeah after I built the 2700x. I know soon after we started also hearing about all Cpu vulnerabilities that dates back like what 10+ years. Spectre/meltdown and bunch other. I also know Amd isn’t perfect and it has its share issue too. But I think generally AMD are better made and better secure. TSMC is currently the leader. They way better imo to anything out there. Yeah I know my zen+ (2700x) was on global Foundries. Been happy with it. Prices here Canada for CPUs and any pc hardware still kinda high or insane. Though slowly starting come down. Some day I’ll upgrade zen 3. But I’m in no rush.
Most my online friends have also moved to AMD Ryzen and the small indie dev team I was helping few yrs back, (I was ex mod/server admin /tech help guy) I know one head devs (Vipe) has also upgrade to 3900x or 3950x, I forget which, but I know he was very happy with it. Let’s him code UE4 much faster, which let them beta test more quickly, then can deploy patches more easily for their dino UE4 game. :)
Would only recommend intel if your doing specific tasks or apps the run better on intel. Zen 4 ( AM5) sound very impressive as does RDNA3. Pc have come a long way from old 286/386/486 days. Lol
@@xruud24
Yeah hopefully RDNA3 will be as good as next RTX GPUs. Been happy with 1080ti, but imo Nvidia need some spanking, they been at the top too long and Nvidia kinda becoming more and more anti consumer. Would love see AMD or intel GPU take lead for few years. But prob never happen cus Nvidia has $$$$$ to keep their share holder happy.
Intel 12th gen is like the holy grail right now, so fucking good for the pricing.
Thankfully AMD is starting to release budget CPUs again now too.
love how you ask for a like OR dislike, haha u are so easy to listen to as always! YOU BRING IT BRIGHT AND CLEAR!
Glad Anthony is getting lots of screen time. He's great
Nope
@@smilinandlaughin what
Man I loved that Pringles joke way too much. Great simplicity in the explanation!
i've never really cared about either lol id just try to build comparable systems and then decide based on over all price / taking into consideration reviews on all the parts around them. was working on it a bit last night as im considering upgrading and noticed that i7-12700k out performs the ryzzen 9 5900x by a decent margin and is cheaper which was interesting to me as a step up on either side was a huge price jump for not a big jump in power.
It’s kind of weird watching Anthony in a Techquickie instead of “i used a paperclip and an experimental linux compilation to compress and decompress data going between the two Linus Media Group buildings in real time”
Can you make a video on the difference between amd and Intel in terms of performance for different uses? Like a quick guide on which to get
I'm sure you can find a video about that already LoL
I decided to try an AMD machine after being with intel for a few generations. The Infinity Fabric was completely unstable and caused any audio playing to be filled with static and blue screening occasionally. I tried for a week with updates, tweaks, tutorials but couldn't stabilise it. I sold the parts and bought intel parts and had no problems at all. I've been building computers for myself since I was twelve years old (20 years), and that AMD machine was the only time I was forced to give up when presented with an issue. I've bounced back and forth between the two, as well as ATI and nVidia over the decades, but that experience really put me off AMD for the moment.
Lol i build since 15 years, just went with intel. This time the hype about Zen 4 was so huge, i could not resist to try. I bought a 7900x. Hat stuttering issues, because of ftpm, issues with memory controller. Then after two days i returned it and bought a 13600k, wich worked perfectly since. I cant relay my money anymore to AMD. First impression not good.
You got a calm, soothing voice that's great to listen to Anthony!
One correction here. Meteor lake doesn't use EMIB, it uses Foveros. Basically 3D stacking of silicon. But unlike TSMC/ AMD 3D v cache, meteor lake can be overclocked like normal CPUs.
I would love a series of video that explains from the utmost bottom to top how tech things work. Like ok, you have cores in a CPU, but what are cores? According to google, they are "the pathways made up of billions of microscopic transistors". Sure, but then what are transistors? And on it goes, all the way down to the most basic level at which CPUs operate, the electric impulses or whatever. How does a lump of metal and silicon mage magic happen?
It still fascinates me, 2 companies, started in the same decade (okay intel was named different back then), and they are still competing against each other as "top dog" as such. Kind of reminds me of 2 brothers constantly trying to 1 up each other.
Same can be said for Microsoft and Apple, with Windows and Mac/iOS
@The Deluxe Gamer they arent really "competing" as would be in other countires where there is an acutal left / center party rather than the US's 2 right wing parties,
if it were really comparable to US political system then both intel and amd would have to be competing in a single sector of proccessors such as workload or gaming , while they do both (and they have different features etc etc)
Can we get some type of yearly update on this topic?
More Anthony!
READ MY NAME!!!!!
!
Parallel interconnect has its drawbacks for instead when there is a curvature in the lanes, so for instance the higher bits has to go farther. So usually parallel buses had to have lower frequencies and therefore were slower and more expensive (more lanes->more complexity for design+harder manufacturing+more materials) compared to the serial ones. This is why FSB was replaced by both of the manufacturers, and there are really successful serial connectors, like USB. And as far as I know Infinity Fabric can be used to connect distant parts or multiple CPU dies, so it being serial in many of its usages most likely definitely not a drawback for it. But in a short distance without bends like these tiles Intels way of "gluing" chiplets/tiles together can be a good option, we will see.
Anthony always leaves me satisfied and smiling
Good bit of information. I am glad you shared this information with us. Now to get some Pringles.
I remember when you could swap an intel CPU for an AMD. How the times have changed.
Oh Socket 7... those were some good times.
@@SpinDlsc
Wonderful explanation with a wonderful host!
For some reason I can follow Anthony better than other hosts with these complex topics.
When you invest, you are buying a day that you don't have to work.
I pray everyone reading this becomes successful.
You are absolutely right 👍
Investing in crypto is very cool, especially with the current rise in the market.
I really don't know why people still remain poor out of ignorance.
It is not all about ignorance, there are lots of unprofessional brokers in the market.
I will introduce you to my trader Mr Lennart Antero, his methods works like magic and is working for me at the moment.
I think you are wrong regarding EMIB and Infinity fabric. First comparing them both is kind of odd, as EMIB is as far as I know actually just an embedded die in the substrate and does not define any protocol, It's "just" the wires that connects the dies and can provide PCIe, HBM, UCIe or what ever connection.
1:16 Yes Infinity fabric is a serial communication, but that does not necessary mean higher latency. The infinity fabric link (IFOP) is 32b parallel and gets the data from the CAKE with 128b. But as the IFOP is clocked at 4x the clock speed it is able to transfer those 128b as fast as the CAKE.
4:00 As said above it is only the connection, not a protocol, so it does not require parallel data transfers.
No clue what you said, but you said it nicely so i like you
3:55 You missed the chance to say "Build from the middle out"
Can you do a video on x86 vs Arm?
So I don't follow basketball, and it's odd to learn this from a tech channel, but I was today years old when I learned they did those floors in tiles.
Anthony's presentation really improved over the years! People with deep understanding of complex subjects AND the skill to convey information efficiently are too rare. We need more Anthonies.
Serial communication can be dine faster then parallel, and take much less space. That is why the industry left the parallel IDE for SATA, which is serial.
Now, IF you can afford the space for parallel connections, then it can move data faster, and a higher cost in space and money.
With AMD moving to TSMC N5, I felt they should move back to monolithic design for parts 8 core and less, and have the snap together interface, so if you want to add an 8 core chiplet, you can. I think this would be ideal for Ryzen considering 16 core compute is still PLENTY of compute power. And then when they want to make their move to big-little, they could use the same approach, but do it with N4 or N3 where the amount of density still allows them to use a very small die and don't take much losses.
In other words, the approach on how to do something can't be looked at in a bubble. It has to account for the total ecosystem, including the manufacturing process and the node being used. Sure, this next gen for Intel, or actually two generations from now will use tiles. But what happens when they can finally produce Intel 20A? Are you going to use tiles to create a 12 core part? I mean a monolithic design on 20A means the chiplets will be TINY. It would seem better to go back to monolithic so many desktop parts, but leave the ability to snap on another chiplet (tile) to add another core complex.
Now, for WS and server that's a totally different realm, but desktop, for most people is STILL browsing the web, office apps and media, and not editing media. You don't need the cost of these interconnects, unless maybe it's an APU, in which case the APU could be a tile/chiplet. I think this would be the most cost effective approach and not a waste of die space. I don't think total package size could shrink much because of so many connections needed between the MB and the CPU. But the die could shrink quite a bit.
the last place i expected to learn that basketball courts are made4 of removable tiles and arent actual hardwood flooring
You and me both my guy.
I always went Intel because I valued single core performance and power/thermal stats a bit more.
Once games favor amount of cores over single core performance and AMD are still better for that, I will switch to AMD.
Have you heard of ryzen 5000 series? Sure doesn't sound like it
Think you need to look into Ryzen 5000, beating Intel on singel core and multicore.
The difference is really negligible these days. The idea of amd being a cheap option isn't the case anymore. They are trading blows.
I feel like there is lots they didn’t cover, like NUMA nodes or how they are binned differently.
Intel's chips named after lakes start to sink, but AMDs be Ryzen.
Amd chips named after artists
Ok here's a video suggestions
Can I turn my one home PC into many home computers with a hypervisors running at least one gaming rig, a nas server, a home media server and a router
And what are the drawbacks to such an attempt 🤔
if you have the cores, memory, storage, gpus, and wifi adapter, yes
Intel also has less cache compared to AMD cpu's which tend to slow down your computer after a while. For example I had switched from a ryzen 3700x to i5 11400 system because I had sold that computer to a friend and at the time intel 11400 system costed much less than zen 3 systems. And i5 11400 is supposed to be faster than 3700x for single threaded applications right? Yes it is faster in games but after only 9-10 months of usage, the web browsing experience and a couple applications like obs got significantly slower compared to 2 years of heavy usage on ryzen 3700x. I am now just lazy to reinstall windows due to my job taking too much of my time and leaving no room backing up stuff. And for those who might ask, I don't have more programs or I'm not using an antivirus, I still have the same ssds, I am up to date on drivers and I don't use browser extensions... And no the cpu or memory usage isn't high. And I got significantly faster memory on this system with super low timings. And yes memory overclock is stable, It has passed memtest 1500%, linpack, time spy, y cruncher all of that. So yeah, at least as far as I can tell 11th gen intel sucks in that case which I think is caused by 32 megabytes of l3 cache vs 12 megabytes. Making a video on youtube full screen on chrome is taking a couple seconds for example. I mean like wtf...
ssds slow down over time
its not the cpu
@@zatchbell366 hwinfo shows total 9tb host writes, its a samsung 970 evo plus 2tb and it only has windows and some programs, total 230 gb is used so I dont think thats the case
@@zatchbell366 Agreed. My Macbook takes ages to read or write, but when something is running in the memory it's as fast as ever.
@@danieljimenez1989 Just reinstalled the windows and all the other programs and all the windows updates on the same SSD. Everything is now running flawlessly fast. So apparently it was windows and software updates bloating the system which made the cpu or cache no longer being able to keep up in some programs. I have got my bookmarks and everything else back on the chrome. And all the "default programs" are still running at startup, got the same drivers installed as I had my "install" folder remaining on another drive which I keep all my driver setups. I also have my steam and games installed as well. So it was not SSD nor anything else. Just stupid windows bloating things up.
@@danieljimenez1989 Thanks pal
Hi. Great video from Techquickie. I hope Techquickie would make a video for comparison, which nowadays cpu is better for Linux? AMD for Linux or Intel for Linux. Thanks
AM4 also got plenty of support while Intel swaps sockets every other gen. Hopefully AM5 lasts just as long.
that makes upgraing to new CPU easy with AMD and not having by new MOBO
@@BeautifulAngelBlossom yup. you don't get pcie 4.0 on older boards but that's not an enormous issue
Linus recently said Andy gets super technical. Way to play to his strengths! Great content!
No clue what you're saying bro, but fully trust you on this!
Thank you! Been wondering this for decades! Now I have a current answer! Little Legos on a big slab vs large Legos on a different slab
he lost me by second 0:33 @_@
😂
1:03 you can’t deny that Big Mac‘s are your real passion :D
More Anthony!
I love this guy's way of speaking
READ MY NAME!!!!!
!
Narrator has great radio voice. Great video, quick and educational. Will subscribe
I like Intel because the names of their cpus is easier to understand for me
you must be dense lol
Otis, you just left wrestle mania bro. Love the work ethic.
This is not the only "actual" difference, as Intel and AMD differ noticeably in terms of Technologies and Instruction Sets: TSX and AVX512 are the obvious ones to note from Intel, but because chips from Team Blue also have specific resources that better optimize Media Production and Broadcasting (QuickSync) and noticeably faster rendering of AI effects or projects, there is a genuine reason to go for Core, Core: X Series, or Xeon parts over Ryzen and Threadripper if the applications you use make genuine use of them in one major capacity or another; Ryzen's rough point is definitely in AI and also the inability to use certain apps and emulators properly due to the lack of Intel TSX and AVX512, it'll definitely be felt in certain software even on a 5950X or Threadripper and thus my advice to study up on which software benefits from having certain Intel~based technologies or instructions is more important than ever, there may be genuine reason to have two or more separate systems in your house based on Core and Ryzen due to this and thus being fluent in understanding the differences will help you optimize them for specific workloads. (:
Well said.
I was expecting to find some point I disagreed on but you pretty well nailed it.
AMD APUs have hardware encode but it isn't as polished as Intel's, that said, I've got a Skylake that isn't that terrific for encoding so it's not as though Intel has had this capability for long.
Their decode is excellent and very low powered, their encode leaves a lot to be desired. Neither company's encoder is as good as Nvidia.
Well, intel 12 gen already oficially not support AVX-512 (i wonder why... [sarcasm]), and they now forcibly remove instructions from newer CPU's from what i heard, but i may be wrong there.
So... Yeah, that point kinda missed, but overall true, Intel have some proprietary licensed instructions, which can give advantages here and there in specialised tasks
@@glenwaldrop8166 Well, you could start with the fact, that Series X is almost useless the last 3+ years, unless you want to save a buck for much lower performance(except a literal FEW programs/Games), but then you could buy 3950x(let alone 5950x) and save more for better performance on average(or (near) total better performance for 5950x) than Series X (10980xe)
Core is only good for gaming compared to Ryzen (AMD did pretty good with 5000, too bad they increased the price, but i guess they are lowering it now)
EPYC vs Xeon is outside of my FOV, but i wouldn't expect EPYC to lose there, they are mighty cheaper to produce, which makes them easy to stack performance in all price ranges as well as easier to stock more of them.
But for certain productive workloads and gaming/hybrid Intel might be arguably better
@@lilituthebetrayer2184 honestly, Intel is only faster in games in certain circumstances and then it's not massive. Most people are GPU limited anyway.
You can get 60 fps out of a Sandy Bridge or even the FX with the right GPU. For high frame rate gaming both companies can provide over 120 fps, being nitty about that last couple of frames is silly. If you lose because it was only 100 fps instead of 120, it wasn't the computer that lost the match.
@@glenwaldrop8166 I tend to buy once and for a good time, which helped me in the mining shortages.
Let alone badly optimized games, Sandy Bridge can be really weak for some memory intensive games. Personally i wouldn't chose anything DDR3 by now.
It's better to have 10 or 20 frames over than under and especially better minimum frames, in case you need V-Sync. Emulation, old games, rendering, archiving, multitasking, etc. It's always good to have a better rig, if you have the money and sometimes even if you don't really.
I thought this was one of the better videos in terms of usage of pictures...so thanks for that!
Well even though Intel is actually coming back, I think that my next build I'm going to switch team to red. I stayed with Intel mostly due to the compatibility but now with their new e-core/p-core design(that doesn't work well with older software) and with constantly improving AMD drives I guess that unless Intel won't solve this problem most of Intel fans will switch side.
I haven't ran into any software that doesn't work with 12th gen.
@@kusayfarhan9943 good to know but still there are people having issues with drm on some older software (Italians are lazy into developing new stuff so at least here I heard some speaking about it) and obviously in my specific case some older multiplayer games. I'm still an Intel user so who knows, maybe all of early adoption issues may be solved when I l'll have to ugrade my rig but at least for now as someone who works and plays on the same rig I don't want to deal with those problems.
A version of this explaining the difference between Nvidia, AMD, and Intel's GPU architecture would be amazing!
thats a good idea
Intel has gpus?
Edit: no way intel has gpus
@@literallysteel yes they do now
@@rk3senna61 they've had it for a long time, they just typically weren't discrete gpus, just integrated, they're starting to do discrete, but gpus in-of-themselves is not new to them.
@@tuxshake i still prefer intel
I just wanted to buy a laptop how did I fall into a rabbit hole
Video Idea: I just read something about SGX and only being on 10 series for playing UHD blu-rays in 4k. It is my understanding you can forget about 4k uhd on AMD. I’m wanting to build a home-theater pc and would like to know other “gotchas” or is home theater on a pc no longer possible? It is difficult to find a pc that has a 5.25 bay for blu-ray or dvd playback. Is blu-ray playable on amd? I know the Netflix app can stream 5.1 but are there other ways to get surround sound via streaming on a pc other than that? Anyway it is a topic I wish could be revisited for that use case. Thanks.
Protip - If you give the Pringles can a very gentle shake from side to side, you can hear how broken up the contents are without breaking them worse.