RISC vs CISC Computer Architectures (David Patterson) | AI Podcast Clips with Lex Fridman

  • Published on 24 Dec 2024

Comments •

  • @ShovelShovel
    @ShovelShovel 4 years ago +65

    Lex is a good interviewer; pretty sure he knew a lot of the stuff David was explaining, but the way he explains it is really good for viewers who aren't as well versed.

    •  4 years ago +6

      And best of all, he almost never interrupts and never tries to take the spotlight.

    • @asdfg3421
      @asdfg3421 4 years ago +1

      Yeah... He was talking to Lex like he was in his sophomore year.

  • @Frymando93
    @Frymando93 4 years ago +77

    Lex, you should try to have a security engineer like Chris Domas on. The instruction set architecture discussion gets so interesting when you consider how other people (ab)use the operating system to do what they want.

    • @jimivie
      @jimivie 4 years ago

      agree

    • @ciarfah
      @ciarfah 4 years ago +1

      @@secretname4190 Meltdown

  • @taaaaaaay
    @taaaaaaay 4 years ago +66

    7:19 “These high level languages were just too inefficient”
    First-year uni me would be crying if I heard C was a high-level language

    • @autohmae
      @autohmae 4 years ago +8

      We can talk all day about low- vs high-level languages, but these days we can run Windows 2000 (which was written in C++ and compiled to x86) in JavaScript in the browser on an ARM device, at half the speed of bare x86 hardware or better.

    • @Conenion
      @Conenion 4 years ago +2

      @@autohmae
      The Windows NT _kernel_ is written mostly in C, with some assembly as needed, and maybe some C++ for newer parts. Everything you see, the GUI part, is written in C++ and C#.

    • @drewmandan
      @drewmandan 4 years ago +4

      I disagree with people who claim that C is a "high level language". It's certainly human-readable, but that's an aesthetic choice: they could have renamed all the keywords to something more esoteric and that wouldn't change its "level". Instead, I think the important thing is how easy it is to draw a map between C statements and machine instructions, and it's almost 1:1 (see the sketch at the end of this thread). Not only that, but a C programmer needs to actively think about the machine instructions in a way that a Java or Python programmer does not. So perhaps there should be a separate category for C or C++, like "semi-high level" or "medium level".

    • @autohmae
      @autohmae 4 years ago +5

      @@drewmandan C was considered one of the first high-level languages after assembler, so that makes all the even-higher languages high-level too :-) Maybe something like "super-high-level language" would be a good fit? There are other ways to talk about languages: Python, like JavaScript, Bash and PowerShell, is considered a scripting language, which implies they are 'super higher' level in practice (my guess is Lua still fits that category too). Another way to distinguish the languages you mentioned is that Java and Python both have a runtime, which usually means they work with bytecode; Python (.pyo), PHP and Java all do that, and JavaScript does something similar at runtime (WebAssembly is very similar to the bytecode for JavaScript). Rust, C, C++, etc. are also often called "systems languages".

    • @autohmae
      @autohmae 4 years ago

      @@Conenion yes, you can run it unmodified in the browser with the right JavaScript code.
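
    A minimal C sketch of the mapping @drewmandan describes above (the assembly in the comments is illustrative RISC-V-style output, not verified from a real compiler):

        #include <stddef.h>

        /* Each C statement maps onto a small, predictable handful of
           machine instructions. */
        int sum(const int *p, size_t n) {
            int s = 0;                      /* li   a2, 0                   */
            for (size_t i = 0; i < n; i++)  /* bgeu a3, a1, done            */
                s += p[i];                  /* lw t0, 0(t1); add a2, a2, t0 */
            return s;                       /* mv a0, a2; ret               */
        }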

  • @DaSkuggo
    @DaSkuggo 4 years ago +253

    I didn't know Bryan Cranston knew this much about CPU architecture.

    • @TheSolidfoxhound
      @TheSolidfoxhound 4 years ago +6

      ikr?? 🤣🤣

    • @bendover4728
      @bendover4728 4 years ago +7

      Heisenberg, please..

    • @stabgan
      @stabgan 4 years ago +3

      Hope his lung cancer went away

    • @cannaroe1213
      @cannaroe1213 4 years ago +6

      _"JESSE! WE HAVE TO COOK!"_

    • @zebratangozebra
      @zebratangozebra 4 years ago +3

      Thought he was interviewing Picard.

  • @Mvobrito
    @Mvobrito 4 years ago +33

    RISC is faster, more energy-efficient and easier to design.
    CISC uses less memory and is simpler for compilers.
    It made sense to use CISC in the 1980s, when memory was much more expensive, programming languages were lower level and compiler technology was not yet well established.
    Nowadays, memory is no longer a limiting factor, and modern compilers/interpreters can turn high-level languages into machine code very easily.
    The priority now, with the mobile device revolution, is to design faster, less energy-hungry processors, and RISC is the way to go.
    In addition, as RISC chips are simpler to design, the market's transition to RISC would greatly increase competition in this segment, which has been quite stagnant in the last decade because of the Intel/AMD duopoly.

    • @Kpopzoom
      @Kpopzoom 4 years ago +4

      The only difference is in the processor's translator - less work to do with RISC, but more machine cycles to accomplish the same task.
      With multi-thread, multi-core processors (like AMD Ryzen), CISC is still the best, especially for high-powered computers used for gaming, video editing, etc.

    • @PuntiS
      @PuntiS 4 years ago +1

      For low-power application chips, though, which see much more use in products worldwide, RISC-based processors are being used more and more and have been gathering much attention in the past couple of years.
      In this case, needing fewer clock cycles per task means less clock activity to carry out processes, which in turn translates into less consumption.
      And low consumption is one of the hot words going around, along with security, cloud and ML.

    • @isodoublet
      @isodoublet 4 years ago +1

      "RISC is faster, "
      Empirically false.

    • @lubricustheslippery5028
      @lubricustheslippery5028 4 years ago

      Memory access time is a big factor for modern CPUs. A cache miss costs about 200 cycles, so an instruction set that minimizes the number of necessary memory accesses can improve performance.
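
    A back-of-envelope C sketch of that point (the ~200-cycle miss penalty is taken from the comment above; the hit rate is an assumed example value):

        /* Average memory access cost in cycles for a given cache hit rate. */
        double avg_access_cycles(double hit_rate) {
            const double HIT_COST  = 1.0;    /* cycles, illustrative    */
            const double MISS_COST = 200.0;  /* cycles, per the comment */
            return hit_rate * HIT_COST + (1.0 - hit_rate) * MISS_COST;
        }
        /* avg_access_cycles(0.99) == 2.99: even 1% misses triples the
           average cost, so denser code that misses less can win. */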

  • @d3ly51d
    @d3ly51d 4 years ago +2

    In university I took two courses on computer architecture where we studied the entire book, and it was my favorite set of lectures from the entire CS curriculum. The book gives you a wonderful insight into how computers and compilers actually work and how various types of speedup are achieved and measured. In the exam we had to unroll a loop in DLX among other things, and to calculate the CPI speedup. I'm so glad to actually see one of the authors behind this amazing book.

    • @abdullahsiddiqui1065
      @abdullahsiddiqui1065 1 year ago

      literally had a midterm today based on one of his books lol

  • @ByteMeCompletely
    @ByteMeCompletely 1 year ago +1

    Jim Keller is a journeyman computer architect. Patterson has been slapping the monkey in academia his whole life. Patterson has been working on an ISA for forty freakin' years.

  • @peters972
    @peters972 4 years ago +2

    This guy has one brother who makes the highest quality crystal meth, another brother who commands the starship Enterprise, and he himself provided the best CPU design theory. Pretty talented family.

  • @_ilearn
    @_ilearn 6 months ago

    The Golden Days of Lex's Podcast.

  • @andy16666
    @andy16666 4 years ago +13

    Great interview with a great guest.

  • @rolfw2336
    @rolfw2336 4 years ago +2

    Interview ends kind of abruptly, but I really enjoyed it! You bring out Dr Patterson’s talent for explaining these concepts to a wide audience. Wow, he must have been great in the classroom.

  • @briancase6180
    @briancase6180 3 years ago +2

    Basically, the RISC opponents didn't understand how optimizing compilers work and what they are capable of; many of them also didn't understand what high-performance processor implementation really requires. The argument basically boils down to that. One thing that Dave didn't get to is that CISC computers tend to have instructions that execute *more slowly* than a sequence of simpler instructions...from *their own* instruction set. This was very true of the Digital Equipment Corporation (DEC) VAX machines. In some ways, the VAX was the CISC-y-est CISC. If you understand hardware and compilers (and software frameworks), you understand why RISC makes sense and why you would never choose to design a CISC architecture from scratch. Even the original ARM architecture was not really a RISC; ARM v8 and v9 are much simpler.

    • @mikafoxx2717
      @mikafoxx2717 11 months ago

      One good way to think about it is the Java runtime environment, where you compile the Java for a virtual machine, and it is then converted into the actual underlying instruction set on the fly. The x86 CPU is doing the same thing under its hood, converting instructions into simpler micro-operations.
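
    A toy C sketch of that analogy (the op names are invented for illustration): a single "CISC-style" add-from-memory operation is expanded into a sequence of simple micro-ops, roughly what an x86 decoder does in hardware and a JIT does in software:

        #include <stdio.h>

        enum micro_op { UOP_LOAD_A, UOP_LOAD_B, UOP_ADD, UOP_STORE };

        static int mem[3] = {3, 4, 0};  /* two operands and a result slot */
        static int reg_a, reg_b;

        static void run_uop(enum micro_op u) {
            switch (u) {
            case UOP_LOAD_A: reg_a = mem[0]; break;  /* load operand 1    */
            case UOP_LOAD_B: reg_b = mem[1]; break;  /* load operand 2    */
            case UOP_ADD:    reg_a += reg_b; break;  /* reg-reg add       */
            case UOP_STORE:  mem[2] = reg_a; break;  /* write result back */
            }
        }

        int main(void) {
            /* "ADD [mem2], [mem0], [mem1]" decoded into four micro-ops */
            enum micro_op decoded[] = {UOP_LOAD_A, UOP_LOAD_B, UOP_ADD, UOP_STORE};
            for (int i = 0; i < 4; i++)
                run_uop(decoded[i]);
            printf("%d\n", mem[2]);  /* prints 7 */
            return 0;
        }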

  • @adriangibbs
    @adriangibbs 2 years ago +1

    Brilliant conversation. This just closed the remaining knowledge gap I had when it comes to understanding how modern hardware and software work together.

  • @sharonneedlesfreedomsnotfr813
    @sharonneedlesfreedomsnotfr813 4 years ago +16

    -A bunch of computer nerds involved in violent debate...many mothers got the call “mom im gonna need you to pick me up late”-

  • @RyanMitchell-yy4no
    @RyanMitchell-yy4no 1 year ago +1

    As an early career web developer, CISC architecture sounds like an absolute nightmare.

  • @michaelrenper796
    @michaelrenper796 4 years ago +3

    The RISC-CISC wars are long over. Neither side has won; rather, the whole issue was made obsolete by long pipelines and speculative execution. For simple CPUs, which run slowly but are power-optimized, simple instruction sets usually win. For fast CPUs, minimizing code size, and therefore going a bit more CISCy, wins. All modern instruction sets are hybrids, and all have some form of microcode being fed into the decoding pipeline.

  • @TNTsundar
    @TNTsundar 4 years ago +20

    Thinking about how the fundamental instructions running on my phone’s processor were designed by this guy puts a smile on my face. Great video!

  • @vivichellappa7645
    @vivichellappa7645 4 years ago +13

    Around 8:00 minutes, the good professor starts talking about how operating systems and even application programs were written in assembly language to achieve speed. And if compilers could have been made smart enough to translate from various languages into complex instructions, that would have been wonderful, but RISC makes it easier. And then he goes off onto Unix, C, C++, etc.
    I now understand why for the last 40 years we have been getting programmers pretty much illiterate about computer architecture.
    Does the good professor know about the Burroughs B5500, B6500, B7500 series and their follow-ups? Those machines had a push-down stack architecture so that they could effectively use Algol as the language in which the operating system could be implemented. As opposed to C, which is a high-level machine-oriented language, you had a true high-level language for writing operating systems. And those machines were equally efficient at running Fortran, COBOL and such languages, which did not need a stack for their statically declared variables.
    And if you want register-to-register instructions only (this is claimed to be an essential feature of RISC computers) as opposed to the IBM 360's register-to-register, register-to-memory and memory-to-memory types of (CISC) instructions, then I can tell you that the CDC 3200 dating back to the early 1960s had that type of instruction set. Every arithmetic instruction meant that the programmer painfully accessed the memory to load the two operands into two registers, performed the arithmetic operation (add, subtract, multiply, divide) on the registers and then stored the result back in main memory. What a pain for the programmer who had to program that beast in assembly language! It is the speed increases in hardware over 30 years that enabled a RISC computer to perform fast. If one implemented the CDC 3200 using today's VLSI technology, I am sure it would beat any RISC processor in performance.
    Write a nice proposal to get a grant, get a bunch of PhD students to design the chip and write compilers for C or Smalltalk or C++, and you have a nice decade-long run of research publications and more research funding. That is what RISC was about.

    • @lemonsavery
      @lemonsavery 4 years ago +2

      I'm not sure why Lex doesn't know about RISC and CISC. Having just finished my undergrad CS degree, one of my later classes had us build a rudimentary processor from basic gates; we coded a little in assembly and converted it into machine language by hand, and we learned some of the history of RISC ARM / CISC x86 as well.
      Did I just get an abnormally competent professor?

  • @msalvi6302
    @msalvi6302 4 years ago +3

    Have you heard about MIPS? The kids at Berkeley could have used that, but they wanted to look cool and started RISC-V

    • @32gigs96
      @32gigs96 4 years ago +1

      Problem is MIPS is proprietary.

  • @sergioropo3019
    @sergioropo3019 4 years ago +1

    It is fascinating how this guy comes up with instant, perfectly structured answers.

    • @anonymoose9322
      @anonymoose9322 4 years ago

      He thinks in RISC.

    • @_____case
      @_____case 1 year ago

      This is what tenured professors are like.

  • @jimreynolds2399
    @jimreynolds2399 4 years ago +6

    I remember the RISC/CISC debate/battle back in the 80s. I always thought that CISC was better and all the hard work could be done by the compiler - which is a piece of software - so I felt that CISC would win out. When Sun started moving away from SPARC I was surprised and puzzled, but I read about commercial arguments that started to change the economic argument in favour of RISC, and I always felt it was like VHS beating Betamax again. I'm surprised to hear about RISC-V now. I'm not convinced things are sufficiently different to justify switching from RISC to CISC for everything, but there are bound to be applications where CISC is way better. Interesting times.

    • @Eugensson
      @Eugensson 3 years ago +1

      In the end, CISC CPUs like x86-64 decompose their long instructions into many simple micro-operations, so modern Intel CPUs are in some sense RISC in disguise. On the other hand, ARM has many very complex instructions which could be considered CISC in nature. Same with RISC-V: although the instructions are simple and short, at a certain stage under the hood they are fused into more complex ones for efficiency.
      The debate of RISC vs CISC is irrelevant these days. It is more about: can instructions operate directly on memory (x86), or does one have to use load/store (ARM/RISC-V); and does the CPU rely on fixed-length instructions (basic RISC-V and basic ARM) or on variable-length ones (x86, ARM Thumb, the RISC-V C extension)?

    • @FLMKane
      @FLMKane 1 year ago

      Ever since the mid-90s, x86 processors have used RISC-like micro-ops internally, and there's an x86 front end that translates the binary into that RISC-like microcode.

    • @mikafoxx2717
      @mikafoxx2717 11 months ago

      ​@@FLMKane And this is how Intel can patch some issues with certain instructions for security purposes: they can change what the CPU does internally for each instruction. It's almost like an x86 emulator, in a way - just with a hardware-accelerated instruction decoder.

  • @stevecoxiscool
    @stevecoxiscool 4 years ago +15

    Worked for Compaq in the mid-80s and remember these arguments. Back then x86 = Intel = PC = DOS = "inexpensive" computer. Yes, Compaq had SGI workstations to help out designing the x86 boxes being sold. It wasn't about which architecture was technically superior - everyone knew THAT - it was about which chip set was the cheapest. Just ask Sun Micro, Silicon Graphics, DEC, HP, NeXT. CISC/RISC is a dead argument in the multi-core universe we live in.

    • @Conenion
      @Conenion 4 years ago +8

      > CISC/RISC is a dead argument in the multi-core universe we live in.
      I don't see why, since CISC vs RISC has nothing to do with single- or multi-core.
      It is rather a dead argument because x86 is RISC internally, and many RISC chips that started as a pure RISC design have had more and more instructions and complexity added to them over time.

    • @250txc
      @250txc 4 years ago

      Yep Intel chips run UNIX also..

    • @neonlost
      @neonlost 4 years ago +1

      lol this comment won’t age well.. this decade will be the decade of RISC, CISC's days are numbered

    • @TheCablebill
      @TheCablebill 4 years ago

      The distinction is arbitrary but the discussion is interesting.

  • @GL-Kageyama2ndChannel
    @GL-Kageyama2ndChannel 4 years ago +13

    RISC-V?

    • @neonlost
      @neonlost 4 years ago +1

      yes!

  • @TheVincent0268
    @TheVincent0268 4 years ago +5

    I can remember that the Acorn Archimedes had a RISC processor.

    • @DavidRutten
      @DavidRutten 4 years ago +1

      And its operating system was RISC OS. You could run that CPU for hours and it would barely be warm to the touch.

  • @daysofgrace2934
    @daysofgrace2934 1 year ago

    Even in the late 80s, computer games on the Commodore Amiga & Atari ST were written in assembler...

  • @daysofgrace2934
    @daysofgrace2934 1 year ago

    The Acorn Archimedes was a RISC computer, but it failed commercially against the Amiga & ST. The CPU, though, the Acorn RISC Machine (ARM), went on to conquer the world. Should also mention MIPS...

  • @andytroo
    @andytroo 4 years ago +2

    If RISC is better than CISC, then why does "just in time compiled" code work so well? The thing that can best understand how to execute a complex instruction would be the CPU. The breakdown of a complex instruction into micro-instructions is what happens inside a CISC CPU; why is this less efficient than the compiler doing it up front into RISC instructions?

    • @complexacious
      @complexacious 4 years ago +3

      It was touched upon in the video, but if you don't know the answer already it's easy to miss. The genuine CISC instructions come at a higher penalty through the translation layer. In more detail, a compiler that targets modern x86 will greatly favor the "CISC" instructions which are actually 1:1 with the hidden internal RISC instructions. I know, you're thinking "but isn't it just coming up with the same instructions? Why is it slower?" The CPU just has to do more work to get usable instructions out of the genuinely complex CISC instructions, and unlike a JIT compiler it doesn't have 16 gigs of RAM to store the results for next time. There's also the general efficiency of those instructions in particular: they tend to operate on specific registers, so software that uses them has to use EXTRA instructions to move data from RAM to registers and back again. With a 386 this was acceptable, since all instructions had that limitation in some fashion, but on a modern version of the ISA you can save all that overhead by using the simpler instructions that can operate directly on whichever registers make the code simpler. I'm sure many an Intel engineer has argued for moving the CISC decoder to software and exposing the internal RISC to the outside to save space on the die, save power, lower heat, etc., but for business reasons Intel doesn't want to do that.

    • @rolfw2336
      @rolfw2336 4 years ago

      It’s a legit question.. but JIT is still a kind of compiling. I think Dr Patterson argues that the compiler will better match the available instructions of RISC than CISC.

  • @paradox_695
    @paradox_695 1 year ago

    10:40 Good sir, if those are the inefficient languages, which ones - still in the context of compiled languages - are the efficient ones?

  • @petergoodall6258
    @petergoodall6258 4 years ago +1

    Some folks got their Smalltalk VM to be resident in the RISC CPU cache.

  • @thefreethinker4441
    @thefreethinker4441 4 years ago +4

    SHAKTI is based on this RISC ISA. Good going, team Shakti. Thanks to Lex for bringing knowledge to the world. Russian Legend!

    • @alexben8674
      @alexben8674 4 years ago +1

      Technically RISC is more feasible these days than in older times, because processors used to be slower and running those complex programs would have been time-consuming and not so viable, but now we have high-frequency processors which solve those problems. So RISC is the future, and CISC will be history in the coming future, until some kind of radical change happens in the architecture.

    • @ciarfah
      @ciarfah 4 years ago

      @@alexben8674 I think it will flip-flop back and forth. CISC makes more sense when shrinking transistors or reducing memory access latency becomes prohibitively expensive.

  • @albeit1
    @albeit1 4 years ago

    Many small things flow through a system quicker. Works with web requests too: with web requests, there are more opportunities for caching, because it's less likely that individual small responses have changed than one monolithic response.

  • @maxfmfdm
    @maxfmfdm 4 years ago +4

    As someone who is pro-CISC for economic and software-development-ecosystem reasons, it's important for me to hear the logical reasons and arguments for the merit of RISC architecture. Thank you.

    • @deletevil
      @deletevil 4 years ago

      ^this.

    • @victorpinasarnault9135
      @victorpinasarnault9135 2 years ago

      Have you had contact with DEC computers? The Alpha architecture?

  • @macintush
    @macintush 4 years ago +10

    "RISC architecture is going to change everything"

    • @ancestralrocha7709
      @ancestralrocha7709 4 years ago +3

      RISC is good

    • @ogremgtow990
      @ogremgtow990 4 years ago +3

      I heard the same thing back in '96. A few months later everyone wanted NT 4 for network security. All the RISC workstations and servers would not run NT 4, and RISC died.
      I gather history is repeating itself again?

    • @ashishpatel350
      @ashishpatel350 4 years ago

      @@ogremgtow990 The problem with RISC and ARM chips is that they are very basic and need to be redesigned for certain workloads. So if your workload changes, the chips can't run the software 😂. Software has the ability to move much faster than hardware.

    • @Mvobrito
      @Mvobrito 4 years ago

      @@ogremgtow990 Not with Apple going for it

    • @hailtothechief7181
      @hailtothechief7181 4 years ago

      14:29 Sounds like RISC did change everything and Intel adapted.

  • @livingthehardlife
    @livingthehardlife 4 years ago +41

    HEISENBERG

    • @jojojorisjhjosef
      @jojojorisjhjosef 4 years ago +2

      This dude is big, Walter White is more the David Patterson of chemistry.

  • @ohdude6643
    @ohdude6643 2 years ago +1

    Give him a goatee, and this man is Heisenberg.

  • @LyubomyrSemkiv
    @LyubomyrSemkiv 2 years ago

    I still don't get the main question: why does not having complex operations in the CPU work faster? Hardware must be faster than software, so calculating SHA-256 directly in the CPU must be faster than running primitive instructions. The only thing I can imagine is that the silicon space used for the logic that translates from CISC to some microcode could instead be used for more processing.

  • @sahilchoudhary3002
    @sahilchoudhary3002 3 years ago

    Was looking for a video on MIPS and stumbled across the great Lex

  • @D.u.d.e.r
    @D.u.d.e.r 1 year ago

    A very well explained difference between these two fundamental CPU architectures.

  • @Scorch428
    @Scorch428 4 years ago +12

    RISC is gonna change everything
    Yeah, RISC is good
    1995, Hackers

  • @ricosrealm
    @ricosrealm 3 years ago

    I used his book in college... really enjoyed it.

  • @cafeinomano_
    @cafeinomano_ 3 years ago

    I've seen this video like 15 times, I love RISC and its philosophy.

  • @sandraviknander7898
    @sandraviknander7898 4 years ago +1

    Awesome interview!
    One thing I have always wondered about: AVX instructions. Sure, you might have to use an intrinsic for the compiler to use them, but they're a really great way to parallelise (see the sketch at the end of this thread). How would those instructions compare to a RISC alternative? You touched a little bit on it at the end, but the answer was a little short for such an important part.

    • @kynikersolon3882
      @kynikersolon3882 4 years ago

      There is a vector extension to RISC-V.

    • @mikafoxx2717
      @mikafoxx2717 11 months ago

      Complexity of instructions doesn't make the difference between CISC and RISC. The basic idea is that RISC has the same size instruction for everything instead of variable lengths, the instructions all take a similar time to compute, and they use a load/store architecture: you load the registers with the needed information, then you execute the instructions that operate on them, then you store the required registers back to memory. With CISC, like x86, you can have an instruction of variable length up to 15 bytes, so it'll keep pulling in more information for that one instruction: the memory contents for the registers, the task to operate on the registers, and then where to put the registers, all in one single instruction. With CISC it could be a simple short instruction like xor a, a, or something like addsubps... which I don't even want to explain, because I don't fully understand it.
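
    Picking up the AVX question above, a minimal C sketch (assuming x86-64 with AVX and a compiler flag like -mavx; a RISC alternative such as the RISC-V vector extension or ARM NEON would look structurally similar, just with different intrinsics):

        #include <immintrin.h>  /* AVX intrinsics */
        #include <stddef.h>

        /* Adds two float arrays eight lanes at a time, plus a scalar tail. */
        void add_f32(const float *a, const float *b, float *out, size_t n) {
            size_t i = 0;
            for (; i + 8 <= n; i += 8) {            /* 8 floats per 256-bit reg */
                __m256 va = _mm256_loadu_ps(a + i); /* unaligned loads */
                __m256 vb = _mm256_loadu_ps(b + i);
                _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
            }
            for (; i < n; i++)                      /* leftover elements */
                out[i] = a[i] + b[i];
        }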

  • @maxmuster7003
    @maxmuster7003 4 years ago +3

    Why is RISC not so efficient at accessing RAM?

    • @MagnumCarta
      @MagnumCarta 4 years ago +8

      Compiled instructions have to be stored in memory before being loaded into the CPU. CISC systems can narrow down the amount of RAM utilized by keeping the number of bytes needed to store compiled instructions small. The biggest bottleneck between the CPU and RAM is the memory bus, which has a fixed width in how many bits it can transfer in any given clock cycle. Since CISC can use less memory, it can load more information in the same unit of time as a RISC system.
      A good example of this is the mult instruction to multiply two values (see the sketch at the end of this thread). In RISC, you would need a load instruction for each value you want to multiply, whereas in CISC you could fit all of this within the bus width (so for 64-bit this would be stored in only eight bytes of memory).
      So CISC improves the number of instructions, whereas RISC improves the number of clock cycles per instruction (ideally one instruction per clock cycle). The bottleneck is the bandwidth of the memory bus.
      That's my understanding of it, but please keep in mind I come from the software development perspective, not the hardware development perspective. I could be wrong about my interpretations.

    • @maxmuster7003
      @maxmuster7003 4 years ago

      @@MagnumCarta Thanks, I'm beginning to understand. The Intel Core 2 CPU can execute up to 4 integer instructions at the same time if the instructions are pairable. I think this works with one complex and three simple instructions. I never used a compiler, but I am familiar with assembler on the Intel 80386.

    • @povelvieregg165
      @povelvieregg165 4 years ago +6

      @@maxmuster7003 It isn't really about how many instructions you can execute in parallel but about how quickly you can pull instructions into the CPU. A simple example: a line of C code may compile into a single CISC machine-code instruction, while on RISC it may turn into 4 instructions. However, that single CISC instruction may take 4 clock cycles to execute, while each one of the RISC instructions takes 1 cycle. Hence in principle there is no performance difference.
      However, this means that for a larger program the RISC processor will fill up its CPU cache faster than the CISC processor. That is why RISC processors tend to have larger caches.
      It is apparently not as bad as it sounds for RISC, though. RISC processors avoid a lot of load and store instructions by having many more registers than CISC processors. As far as I understand, a good compiler will be able to arrange things so that a RISC doesn't need that many more instructions than CISC.
      Anyway, that is my understanding. I am also a learner here. I stopped caring about RISC and CISC ever since Apple switched to Intel. But it is becoming a more interesting topic again.

    • @Conenion
      @Conenion 4 years ago +2

      @@povelvieregg165
      > However it is apparently not as bad as it sounds for RISC.
      Also because of instruction caches having a high hit rate.

    • @Conenion
      @Conenion 4 years ago

      @Max Muster
      You can combine both worlds. ARM, for example, does this with the Thumb instruction set: those are "compressed" short RISC instructions that are expanded to their long versions during instruction fetch.
      In essence, x86 does this as well. It wasn't planned, though.
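
    A compact C sketch of @MagnumCarta's multiply example above (the assembly in the comments is illustrative mnemonics, not verified compiler output):

        /* Multiply two ints that live in memory. */
        int multiply(const int *a, const int *b) {
            /* CISC (x86-style), the load can be folded into the multiply:
             *     mov  eax, [a]
             *     imul eax, [b]
             * RISC (load/store style), three fixed-size instructions:
             *     lw   t0, 0(a0)
             *     lw   t1, 0(a1)
             *     mul  a0, t0, t1
             * Same work; the CISC encoding is denser, the RISC one is
             * simpler to fetch and decode. */
            return *a * *b;
        }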

  • @beameup64
    @beameup64 3 years ago +1

    "Machine language" was the term I was taught in data processing in the '70s. Apple will be using RISC in all their products.

  • @drmosfet
    @drmosfet 4 years ago +1

    He forgot the Intel 8088, the 8-bit-bus version of the 8086. The interview cut off just when it was getting interesting. I'd like to know what he thought about the Intel iAPX 432; it seemed to have so much potential.

    • @Conenion
      @Conenion 4 years ago +2

      > I'd like to know what he thought about the Intel iAPX 432; it seemed to have so much potential.
      The iAPX 432 was a total disaster right from the beginning. The idea was to have even higher-level instructions than with CISC, making the processor even more complex. What do you expect one of the RISC inventors to think about such a braindead idea?

  • @intheshell35ify
    @intheshell35ify 2 years ago

    This is a gold mine for students. Mind your citations, children!!

  • @benschulz9140
    @benschulz9140 4 years ago

    Would be neat for a GAN to make a game of writing instruction sets and compilers.

  • @hassanjaved4091
    @hassanjaved4091 6 months ago

    We need him back to get his take on the new AI hardware arms race

  • @Leeszus
    @Leeszus 4 years ago

    Amazing interview!

  • @prithviraj-mu8ox
    @prithviraj-mu8ox 4 years ago +1

    Meth to silicon?

  • @TheLkdude
    @TheLkdude 4 years ago +3

    If you look at the current Intel architecture, it is not a pure CISC processor; it is a hybrid (th-cam.com/video/NNgdcn4Ux1k/w-d-xo.html) 14:40. It has a CISC wrapper around a RISC core.

    • @maxmuster7003
      @maxmuster7003 4 years ago

      Did it start with the Pentium architecture?

    • @autohmae
      @autohmae 4 years ago +1

      @@andrewdunbar828 It is in this clip

    • @Gabriel38196
      @Gabriel38196 4 years ago +3

      That's what I don't get: these days nothing is pure RISC or CISC. We have heterogeneous x86 CPUs, microprogrammed ARM chips and every fucking thing in between. And I love them all.

    • @Conenion
      @Conenion 4 years ago

      @@maxmuster7003
      > Did it start with the Pentium architecture?
      Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).

    • @Conenion
      @Conenion 4 years ago

      @@andrewdunbar828
      > consensus seems to be that the RISC inside CISC analogy is badly flawed.
      It is a simplified explanation, sure, but certainly not "badly flawed".
      > but too far off the mark if you know how CPUs work.
      Then it would not have been explained this way at 14:35 in the video.

  • @eliasdat
    @eliasdat 4 years ago +6

    Heisenberg actually didn’t die, he just switched to manufacturing processors

    • @bendover4728
      @bendover4728 4 years ago +1

      Now I see where Malcolm got his genes from..

    • @d3ly51d
      @d3ly51d 4 years ago

      He's now in the microprocessor empire business

  • @viacheslavromanov3098
    @viacheslavromanov3098 4 years ago +26

    The Heisenberg guy is telling the truth, listen to him 😂 Hope it won't end up like in the show..

  • @Mbd3Bal7dod
    @Mbd3Bal7dod 4 years ago

    They jumped the open source instruction set

  • @wdavid3116
    @wdavid3116 6 months ago

    I really like this interview. I would say, however, that I think the RISC vs CISC debate just doesn't make sense anymore. Jim Keller makes the point that instruction sets just aren't what matters, and I find that those who take sides in the RISC vs CISC debate seem to vary on when RISC becomes CISC. ARM is considered RISC; RISC-V is considered RISC and named RISC. ARM has AES instructions, and I can find papers on AES instruction extensions for RISC-V, though I'm not sure if they are officially part of the spec. AES is particular because, one, it is a very complex instruction compared to what was originally considered a suitable RISC instruction, and two, it generally has to be done in hardware to avoid side-channel attacks related to timing and/or power use. So would AES instructions make RISC-V a CISC architecture?
    I'm also curious about the argument about the increased number of instructions vs the speed at which they can be executed. Does this account for the fact that RAM is dramatically slower than processors, or does it just assume the program is held in cache? Does it account for extra cache usage and performance when there are multiple programs fighting for use of memory? I don't know that the definitions of RISC and CISC have ever really been pedantic enough to classify hybrid architectures, and I'm pretty sure the pros and cons of both concepts have been used to create modern architectures that do what makes the most sense for their use cases. I would also say that, given all the SIMD instructions, crypto instructions and instructions specifically designed for things like video processing, I believe most people would classify x86-64 as a CISC instruction set.
    I've seen talk of CISC processors built around RISC cores, but also talk of the concepts just not making sense anymore due to the lack of the kinds of transistor density limitations that used to exist, and that we just have processors designed to go fast based on ideas from each camp. I'm not old enough to remember the original battle, but as far as I can tell it didn't end with a winner but with a dissolution of the opposing tribes.
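
    On the AES point, a minimal C sketch (assuming an x86-64 compiler with AES-NI enabled, e.g. via -maes): one AES round runs as a single instruction, exactly the kind of fixed-function "complex" instruction that blurs the RISC/CISC line:

        #include <wmmintrin.h>  /* AES-NI intrinsics */

        /* One AES encryption round on a 128-bit block: ShiftRows, SubBytes,
           MixColumns and AddRoundKey performed by a single aesenc. */
        __m128i aes_round(__m128i state, __m128i round_key) {
            return _mm_aesenc_si128(state, round_key);
        }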

  • @nickharrison3748
    @nickharrison3748 4 years ago

    I personally like the term opcode, or operation code, rather than calling it an instruction or instruction set

  • @hassanjaved4091
    @hassanjaved4091 4 years ago

    Wow, a great clip from the guy whose textbooks we read at uni

  • @wolfganglava1511
    @wolfganglava1511 4 years ago +1

    CISC is not secure: it's easy to put a backdoor in it, and a CISC platform is hard to audit.

  • @kamilziemian995
    @kamilziemian995 4 months ago

    I think the description of this video should be changed from "AI Podcast" to "Lex Fridman Podcast".

  • @bobweiram6321
    @bobweiram6321 3 years ago

    Ironically, ARM added Jazelle to execute Java bytecode natively.

  • @sirousmohseni4
    @sirousmohseni4 4 years ago

    Excellent video

  • @akemp06
    @akemp06 4 years ago

    Loving all your interviews! Great questions, and good for an audience that does not know all the details. What I don't understand is your suit. Why have a fancy outfit when your table looks like a mess? If you want a style element in the show, hide the cables under the table!

  • @k4vms
    @k4vms 4 years ago

    Talk about the CRISP microprocessor and DEC (Digital Equipment Corp), IBM, Apple, Motorola 68K processors, QuickDraw, WNT, VMS, OpenVMS, OPS5, MVS, VM, VAX, Alpha, AIX, POWER, z/OS, System p, System z, System x, System i, etc.
    Ricky from DEC and IBM and Apple

  • @shableep
    @shableep 4 years ago +14

    With Apple switching all of their computers over to RISC, and RISC running inside almost all tablets and cellphones, it sounds like RISC won.

    • @maxmuster7003
      @maxmuster7003 4 years ago +2

      I am not familiar with ARM CPUs, so I use the x86 DOSBox emulator on my Android tablet for x86 assembly. I do not like Apple, with or without CISC.

    • @brent56and1
      @brent56and1 4 years ago

      Especially seeing that Intel and AMD are constantly trying to fix newly discovered speculative-execution vulnerabilities.

    • @stevecoxiscool
      @stevecoxiscool 4 years ago +1

      I am so proud of you, RISC!!! It's been 40 years and you finally did it!!!!

    • @lb5928
      @lb5928 4 years ago +3

      @@andrewdunbar828 Wrong. x86-64 is owned by AMD, and it runs microcode that can implement RISC-like routines, not RISC itself. That makes CISC CPUs extremely versatile, with vast capabilities.

    • @lb5928
      @lb5928 4 years ago +2

      @@stevecoxiscool RISC didn't do anything; the CISC-based market share in terms of revenue is like 90% of the computing market.

  • @petergoodall6258
    @petergoodall6258 4 years ago

    One man’s software is another man’s hardware

  • @danielwait8555
    @danielwait8555 4 years ago +2

    I love these discussions on Computer Systems! Thanks Lex

  • @11vag
    @11vag 4 years ago

    What an interesting interview.

  • @segsfault
    @segsfault 5 months ago

    Always cracks me up how people think modern RISC is more efficient, but in reality, other than maybe 2 or 3 details, an actual RISC-V chip is not very different from a CISC chip. They are all equally complex, and RISC is just an ISA; it doesn't decide how efficient a chip will be.
    The only selling point of RISC-V is the open ISA. The actual RISC philosophy is flawed and died a long time back; modern RISC processors use microcode and all sorts of stuff that modern CISC chips do.

  • @drewmandan
    @drewmandan 4 years ago +33

    Wow, I didn't know Walter White knew so much about microprocessors.

    • @bendover4728
      @bendover4728 4 years ago +1

      He is the one who knocks!

  • @Rudrazz
    @Rudrazz 4 years ago

    Nice talk

  • @julianskidmore293
    @julianskidmore293 1 year ago

    Prior to university, of course, the vast majority of kids or students who were into computers (which at the time meant 8-bit home computers) had almost no access to the Hennessy and Patterson RISC research. All I knew was from articles in the mid-1980s on the Inmos Transputer and the Acorn RISC Machine.
    archive.org/details/PersonalComputerWorld1985-11/page/136/mode/2up
    So we were properly introduced to RISC only at university (in my case UEA, Norwich) as part of the computer architecture modules. Since then, I've understood RISC to be a performance or energy-optimisation trade-off. That is, the question is how to get the most work out of a given set of transistors in a CPU, and what RISC does is trade under-utilised circuitry (e.g. for seldom-used instructions) for speed. In a similar sense, complex decoding represents an under-utilisation of circuitry (which adds to propagation delays, thus limiting pipeline performance), and because microcode is effectively a ROM cache (ISA ==> microcode ==> control signals), it's better to use the resources to implement an actual cache or a larger register set. Etc.
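
    A toy C sketch of the "microcode is effectively a ROM" idea above (the opcodes and control-signal bits are invented for illustration): each architectural opcode indexes a small table of control words:

        #include <stdint.h>

        enum {
            CTRL_MEM_READ  = 1 << 0,
            CTRL_MEM_WRITE = 1 << 1,
            CTRL_ALU_ADD   = 1 << 2,
            CTRL_REG_WRITE = 1 << 3,
        };

        /* uROM: opcode -> zero-terminated sequence of control words,
           i.e. ISA ==> microcode ==> control signals. */
        static const uint8_t urom[][4] = {
            {CTRL_ALU_ADD | CTRL_REG_WRITE, 0},                /* add r,r   */
            {CTRL_MEM_READ, CTRL_ALU_ADD | CTRL_REG_WRITE, 0}, /* add r,[m] */
            {CTRL_MEM_READ, CTRL_ALU_ADD, CTRL_MEM_WRITE, 0},  /* add [m],r */
        };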

  • @ErwinFranzR
    @ErwinFranzR 4 years ago

    I love these tech radicals.

  • @PixelPhobiac
    @PixelPhobiac 4 years ago

    The PS3 was RISC, right?

  • @stabgan
    @stabgan 4 years ago +2

    I have been following Lex since he had 2k connections on LinkedIn. He has also replied to me multiple times in the past. He's my idol. Honestly, the apex of male peak performance.

  • @atomspalter2090
    @atomspalter2090 4 years ago +1

    nice video!

  • @WiLLiW_oficial
    @WiLLiW_oficial 4 years ago +6

    For a moment I thought this was a Breaking Bad episode...

    • @demokraken
      @demokraken 4 years ago

      Science, bi***! 🤓

  • @LabyrinthMike
    @LabyrinthMike 4 years ago +2

    But, but, but, isn't memory speed your limiting factor? If you execute more instructions and you are waiting on the memory to serve them, wouldn't that make it slower? Have you accomplished your goal? I don't really want to debate this here; I'm just saying that the Intel Itanium wasn't a successful microprocessor. Macs ran for a long time on RS6000 chips and now run on Intel. I just don't see that RISC is commercially successful. Perhaps it is a better microprocessor design, but then why aren't Macs still using them? I've been in the computer biz for a long time and written a bunch of assembly language. I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.

    • @Conenion
      @Conenion 4 years ago +1

      > Itanium wasn't a successful microprocessor.
      Yep. It was a giant failure. But Itanium was VLIW, not RISC.
      > it is a better microprocessor design, but then why aren't Macs still using them?
      Apple just announced it will use ARM-based processors, which are RISC. They call it "Apple Silicon".
      Search "Mac transition to Apple Silicon" on Wikipedia.
      > I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.
      Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).

    • @LabyrinthMike
      @LabyrinthMike 4 years ago

      @@Conenion Well, it is not important how it works internally, but this translating to RISC internally - does that mean microcode? If so, machines have been doing that for a long time. If I recall, the IBM 360 was a microcoded machine.

  • @geostel
    @geostel 6 months ago

    Wow! Walter White has now started working on CPUs

  • @Cuplex1
    @Cuplex1 4 years ago

    Hmm, 6:00. That's not how it works. Most of the extra instructions that have been added in the last 20 years are only accelerators - for example SIMD SSE4, or, a more obvious example, the AES instruction set that makes encryption and decryption about 20 times faster. All modern heavy compute operations on Windows rely on modern compilers with support for a few optimized instruction sets like AVX2. You also have pipelining and branch prediction making the x86 side much more attractive. The instruction set war between AMD and Intel has ended, but 20 years ago we had competing and completely different instruction sets like 3DNow!.
    12:30, that's BS if you know programming. The very efficient instruction sets are widely used even by more high-level languages. I have been a computer engineer/developer for over 15 years, so what do I know. 😎 I think the majority was right back then if we look at where we are now.
    General compute is never as fast as ASICs, which is basically what advanced instruction sets are.

    • @websnarf
      @websnarf 4 years ago +2

      Yeah, this discussion makes it seem like Patterson has not looked at a serious CPU architecture in 25 years. His arguments may have made sense against the 80386 or Motorola 68K, but even by the time of the Pentium (P54/P55C), the claim that "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong. Today, there is no such thing as a "high performance RISC"; the only way to achieve performance is to get a multi-core x86. RISC has been relegated to low-cost/hardware-integrated solutions.

    • @Conenion
      @Conenion 4 years ago

      @@websnarf
      > even by the time of the Pentium (P54/P55C), the claim that "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong.
      You have obviously never heard of the Alpha processor.
      > the only way to achieve performance is to get a multi-core x86.
      x86 has translated from CISC to RISC-like instructions internally since the Pentium Pro in 1995, which avoids long RISC sequences for simple instructions like INC.

    • @Conenion
      @Conenion 4 years ago

      > 12:30, that's BS if you know programming.
      No, it's not. It is still very difficult for a compiler to map a code snippet to a special instruction that does the same thing. To take your example, I doubt that a compiler will replace C code that does AES encryption or decryption with an AES instruction.

    • @Conenion
      @Conenion 4 years ago

      @@juliuszkopczewski5759
      Sure, I know. That is exactly why you add instructions to the instruction set without caring about the compiler. But this is not "general purpose" code, and for such code Prof. Patterson's argument is still true to this day - albeit a bit less so, because compilers are smarter today than they were 30 years ago.

  • @mmenjic
    @mmenjic 4 years ago

    Why couldn't we have something close to a universal, or even dynamic or reprogrammable, instruction set instead of 17 different hidden and fixed sets?

  • @zebratangozebra
    @zebratangozebra 4 years ago

    I think the guys who write the compiler code are the real wizards, but I'm kinda stupid.

  • @ChitranjanBaghiofficial
    @ChitranjanBaghiofficial 4 years ago +1

    Hey, the Breaking Bad character is back. Nice to see you, professor

  • @akhilaryappatt
    @akhilaryappatt 4 years ago

    But I'm so nostalgic about x86, I can't let go.
    And I somehow started disliking mobile devices with ARM chips.

  • @LoneWolf-wp9dn
    @LoneWolf-wp9dn 4 years ago

    Damn Mr. White, you know about computers too!?

  • @jasonzhou6437
    @jasonzhou6437 4 years ago +2

    My textbook's author ;) Great book

    • @Yukke91
      @Yukke91 4 years ago +2

      Haha I was like ”Hey I know that book!”

    • @mika274
      @mika274 4 years ago

      He also mentioned his friend John Hennessy

  • @KevinInPhoenix
    @KevinInPhoenix 1 year ago +1

    The RISC vs CISC debate really turns out to be "six of one, half a dozen of the other". If one architecture ran software twice as fast as the other, then it would have clearly won and we would all be using that design. This is not the case.

  • @stickmanjournal
    @stickmanjournal 1 year ago +1

    Walter White?

  • @popotit0
    @popotit0 4 years ago

    RISC will start catching up when you can pay less than US$2k for a server and run Linux on it.

  • @TheOneTrueMaNicXs
    @TheOneTrueMaNicXs 4 years ago +2

    I feel like he is kind of wrong. On ARM processors all instructions take 4 cycles, and since x86 instruction timings are variable, today x86 machines are basically 4 times faster.
    I still want an EPIC (Explicitly Parallel Instruction Computing) architecture.

    • @povelvieregg165
      @povelvieregg165 4 years ago +3

      Curtis, ARM instructions take 1 cycle on average to finish because they are pipelined. That is, after all, the whole point of RISC having the same number of cycles per instruction: it makes pipelining a lot easier. I am not up to date on the current status of x86, but at least back in the PowerPC days of Apple it was a point often made that pipelining worked badly with x86 - it was hard to keep the pipeline full at all times with a variable number of cycles.
      ARM also has a bunch of instructions very well suited for pipelining, such as conditional arithmetic operations. It means you can avoid branching, which drains the pipeline.
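
    A tiny C sketch of that last point (assuming a compiler that turns this pattern into a conditional move/select, e.g. x86 cmov or AArch64 csel, rather than a branch):

        /* Branch-free max: no taken/not-taken guess for the pipeline
           to get wrong. */
        int max_branchless(int a, int b) {
            return (a > b) ? a : b;
        }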

  • @denni_isl1894
    @denni_isl1894 4 years ago +1

    Sophie Wilson.

  • @Maxkraft19
    @Maxkraft19 4 years ago +1

    Almost all chips are RISC. Most chips just convert their conventional code to a simpler code inside the CPU; Intel did this with the Pentium 4. So RISC did win. Also, VLIW was superseded by SIMD, in the FPU via special instructions or in the GPU. Modern chips just glue all these different approaches together and hide it in the compiler or the CPU's instruction decoder.

    • @Conenion
      @Conenion 4 years ago

      > Intel did this with the Pentium 4
      Before that. Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).

  • @Elmaxo1989
    @Elmaxo1989 4 years ago

    Did anyone make a Malcolm in the Middle reference in the comments yet? Y'know, like something about Hal designing HAL? I'll leave the completion of this joke as an exercise for the reader.

    • @ciarfah
      @ciarfah 4 years ago

      Multiple layers of joke here given "Hal" authored a book on this stuff, haha

  • @DaiChiMon
    @DaiChiMon 1 year ago

    Walter White if he weren't a chemist

  • @parkerd2154
    @parkerd2154 4 years ago

    If you didn't know this, you didn't know much about computers

  • @petros_adamopoulos
    @petros_adamopoulos 4 years ago

    No mention of pipelining, one of the most important leverages for RISC vs CISC early on, as a means to achieve single-cycle instructions versus many cycles even for some of the simple CISC ones.
    No mention of the number of registers, which was/is typically severalfold more on RISC; that's one of the things making it easier to target for a compiler.
    No mention of how the cost of CPU cache changed historically, which made it first advantageous for CISC, then for RISC.
    This interview is really, really dumbed down, so much that you wonder who the audience for it would be...
    Pipelining and register allocation are very interesting topics, and defining ones in processor architectures.

  • @iamavolk
    @iamavolk 2 years ago

    Prof. Patterson :) Go Bears!

  • @henrifritsmaarseveen6260
    @henrifritsmaarseveen6260 4 years ago +2

    The advantage is in the fetch of the instructions:
    because CISC has more instructions, it needs more time to fetch an instruction than RISC.
    So in the beginning CISC was RISC, but people became lazy and wanted multiplications in the instruction set, because code would become easier and the memory needed to store the instructions became smaller - less was needed.
    Also, at that time memory was expensive.
    So when memory became cheaper and clock rates became higher, RISC became faster.
    But around that time Intel and all the others, plus Microsoft, blocked these CPUs.
    Look at the story of the Acorn ARCHIMEDES - there is your first real RISC computer with an OS!! Maybe still one of the best ever!!

  • @rezan6971
    @rezan6971 4 years ago

    The question you didn't ask: if RISC and CISC were engines, which one would be more powerful?

  • @filiperocha1465
    @filiperocha1465 1 year ago +1

    "RISC is good"

  • @mysticalsoulqc
    @mysticalsoulqc 4 years ago

    I shall not add... lol, too touchy of a situation.