@@andrewdunbar828 Wrong, x86-64 is owned by AMD, and it runs microcode that can implement RISC-like routines, not RISC itself. That makes CISC CPUs extremely versatile, with vast capabilities.
If you look at the current Intel architecture, it is not a pure CISC processor, it is a hybrid (th-cam.com/video/NNgdcn4Ux1k/w-d-xo.html) 14:40. It has a CISC wrapper around a RISC core.
That's what I don't get: these days nothing is pure RISC or CISC. We have heterogeneous x86 CPUs, microprogrammed ARM chips and every fucking thing in between. And I love them all.
@@maxmuster7003 > It started with the Pentium architecture? Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).
@@andrewdunbar828 > consensus seems to be that the RISC inside cisc analogy is badly flawed. It is a simplified explanation, sure, but certainly not "badly flawed". > but too far off the mark if you know how CPUs work. Then it would have been explained in this way at 14:35 in the video.
But, but, but, isn't memory speed your limiting factor? If you execute more instructions and you are waiting on the memory to serve them, wouldn't that make it slower? Have you accomplished your goal? I don't really want to debate this here, I'm just saying that the Intel Itanium wasn't a successful microprocessor. Macs ran for a long time on PowerPC (RS/6000-derived) chips and now run on Intel. I just don't see that RISC is commercially successful. Perhaps it is a better microprocessor design, but then why aren't Macs still using them? I've been in the computer biz for a long time. Written a bunch of assembly language. I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.
> Itanium wasn't a successful microprocessor. Yep. It was a giant failure. But Itanium was VLIW, not RISC. > it is a better microprocessor design, but then why aren't Macs still using them? Apple just announced it will use ARM-based processors, which are RISC. They call it "Apple Silicon". Search "Mac transition to Apple Silicon" on Wikipedia. > I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set. Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).
@@Conenion Well, it is not important how it works internally, but, this translating to RISC internally, does that mean microcode? If yes, machines have been doing that for a long time. If I recall, the IBM 360 was a microcoded machine.
Did anyone make a Malcolm in the Middle reference in the comments yet? Y'know, like something about Hal designing HAL? I'll leave the completion of this joke as an exercise for the reader.
Almost all chips are RISC. Most chips just convert their conventional code to a simpler code inside the CPU. Intel did this with the Pentium 4. So RISC did win. Also, VLIW was superseded by SIMD in the FPU via special instructions, or in the GPU. Modern chips just glue all these different approaches together and hide it in the compiler or the CPU instruction decoder.
> Intel did this with the Pentium 4 Before that. Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).
I feel kind of like he is wrong. On ARM processors all instructions take 4 cycles, and since x86 instructions are variable, today x86 machines are basically 4 times faster. I still want an EPIC (Explicitly Parallel Instruction Computing) architecture.
@Curtis ARM instructions take 1 cycle on average to finish because they are pipelined. That is, after all, the whole point of RISC having the same number of cycles per instruction: it makes pipelining a lot easier. I am not up to date on the current status of x86, but at least back in the PowerPC days of Apple, it was a point often made that pipelining worked badly with x86. It was hard to keep the pipeline full at all times with a variable number of cycles. ARM also has a bunch of instructions very well suited for pipelining, such as conditional arithmetic operations. It means you can avoid branching, which drains the pipeline.
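To make the branch-avoidance point concrete, here is a minimal C sketch (my own illustration, not from the interview): a branchy clamp and a branchless one that compilers commonly lower to a conditional-select (csel on ARM, cmov on x86). Whether that actually happens depends on the compiler and flags.

    #include <stdio.h>

    /* Branchy version: the compiler may emit conditional jumps,
       which can stall a deep pipeline on a misprediction. */
    static int clamp_branchy(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    /* Branchless version: ternaries like these are often lowered to
       conditional-select instructions (ARM csel, x86 cmovcc), keeping
       the pipeline full. Exact codegen is compiler-dependent. */
    static int clamp_branchless(int x, int lo, int hi) {
        int t = (x < lo) ? lo : x;
        return (t > hi) ? hi : t;
    }

    int main(void) {
        printf("%d %d\n", clamp_branchy(42, 0, 10), clamp_branchless(-5, 0, 10));
        return 0;
    }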
Hmm, 6:00. That's not how it works. Most of the extra instructions that have been added in the last 20 years have only been accelerators. For example SIMD SSE4, or a more obvious example, the AES instruction set that makes encryption and decryption about 20 times faster. All modern heavy compute operations on Windows rely on modern compilers with support for a few optimized instructions like AVX2. You also have pipelining and branch prediction making the x86 side much more attractive. The instruction set war between AMD and Intel has ended, but 20 years ago we had competing and completely different instruction sets like 3DNow!. 12:30, that's BS if you know programming. The very efficient instruction sets are widely used even by more high-level languages. I have been a computer engineer/developer for over 15 years, so what do I know. 😎 I think the majority was right back then if we look at where we are now. General compute is never as fast as ASICs, which is basically what advanced instruction sets are.
Yeah, this discussion makes it seem like Patterson has not looked at a serious CPU architecture in 25 years. His arguments may have made sense against the 80386 or Motorola 68K, but even by the time of the Pentium (P54/P55C) the "many RISC instructions are faster than the equivalent CISC instruction" claim was demonstrably wrong. Today, there is no such thing as a "high-performance RISC"; the only way to achieve performance is to get a multi-core x86. RISC has been relegated to low-cost/hardware-integrated solutions.
@@websnarf > but even by the time of the Pentium (P54/P55c) the "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong. You have obviously never heard of the Alpha processor. > the only way to achieve performance is to get a multi-core x86. X86 translates from CISC to RISC-like instructions internally since Pentium Pro in 1995. Which avoids long RISC instructions for simple instructions like INC.
> 12:30, that's BS if you know programming. No, it's not. For a compiler it is still very difficult to map a code snippet to a special instruction doing the same thing. I doubt that a compiler will replace C code that does AES encryption or decryption with an AES instruction, to take your example.
@@juliuszkopczewski5759 Sure, I know. That is exactly the reason why you add instructions to the instruction set without caring about the compiler. But this is not "general purpose" code, and for such code the argument from Prof. Patterson is still true to this day. Albeit a bit less so, because compilers are smarter today than they were 30 years ago.
My simple analogy: big-block muscle cars have much more scalability than small-piston, high-rpm rice burners, because the rice-burner technology, or in this case the complex instruction sets, doesn't scale well and hits a wall.
The RISC vs CISC debate really turns out to be "six of one and a half dozen of another". If one architecture ran software twice as fast as the other then it would have clearly won and we would all be using that design. This is not the case.
Lex is a good interviewer. I'm pretty sure he knew a lot of the stuff David was explaining, but the way he explains it is really good for viewers that aren't as well versed.
And best of all, he almost never interrupts and is never trying to take the spotlight.
Yeah... He was talking to Lex like he was in his sophomore year.
Lex, you should try to have a security engineer, like Chris Domas on. The instruction set architecture discussion gets so interesting when you consider how other people (ab)use the operating system to do what they want.
agree
@@secretname4190 Meltdown
The Golden Days of Lex's Podcast.
In university I took two courses on computer architecture where we studied the entire book, and it was my favorite set of lectures from the entire CS curriculum. The book gives you a wonderful insight into how computers and compilers actually work and how various types of speedup are achieved and measured. In the exam we had to unroll a loop in DLX among other things, and to calculate the CPI speedup. I'm so glad to actually see one of the authors behind this amazing book.
literally had a midterm today based on one of his books lol
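For readers who never met DLX (the teaching ISA from the book), here is a minimal C sketch of the loop-unrolling exercise mentioned above; my own example, showing the concept rather than actual DLX assembly, with illustrative instruction counts only.

    #include <stddef.h>

    /* Plain loop: per element, roughly one load, one add, one store,
       plus the loop-control instructions (increment, compare, branch). */
    static void add_const(float *a, float c, size_t n) {
        for (size_t i = 0; i < n; i++)
            a[i] += c;
    }

    /* Unrolled by 4: the same work per element, but the loop-control
       overhead and the branch are paid once per 4 elements, which also
       gives the scheduler more independent work to overlap. */
    static void add_const_unrolled(float *a, float c, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            a[i]     += c;
            a[i + 1] += c;
            a[i + 2] += c;
            a[i + 3] += c;
        }
        for (; i < n; i++)   /* remainder loop */
            a[i] += c;
    }

Toy CPI-style accounting with made-up counts: if the rolled loop costs about 6 instructions per element (load, add, store, increment, compare, branch) and the unrolled one about 15 per 4 elements (3.75 per element), then at equal CPI the unrolled version runs roughly 1.6x faster; the exam-style speedup calculation works the same way with the real numbers.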
Jim Keller is a journeyman computer architect. Patterson has been slapping the monkey in academia his whole life. Patterson has been working on an ISA for forty freakin' years.
7:19 “These high level languages were just too inefficient”
First year uni me would be crying if I heard C was a high level language
We can talk all day about low- or high-level languages, but these days we can run Windows 2000 (which was written in C++ and compiled to x86) in JavaScript in the browser on an ARM device, and at half the speed of bare x86 hardware or better.
@@autohmae
The Windows NT _kernel_ is written mostly in C, with some assembly as needed, and maybe some C++ for newer parts. Everything you see, the GUI part, is written in C++ and C#.
I have a disagreement with people who claim that C is a "high level language". It's certainly human-readable, but that's an aesthetic choice. They could have renamed all the keywords to things more esoteric and that wouldn't change its "level". Instead, I think the important thing is how easy it is to draw a map between C instructions and machine instructions, and it's almost 1-1. Not only that, but a C programmer needs to actively think about the machine instructions in a way that a Java or Python programmer does not. So perhaps there should be a separate category for C or C++, like "semi-high level" or "medium level".
@@drewmandan C was considered one of the first high-level languages after assembler, so that makes all the even higher languages also high-level :-) Maybe something like "super-high-level language" would be a good fit? There are other ways you can talk about languages: Python, like JavaScript, Bash and PowerShell, is considered a scripting language, which implies they are 'super higher' languages in practice (my guess is Lua still fits that category too). Another way to distinguish the languages you mentioned is that Java and Python both have a runtime, which usually means they work with bytecode; Python (.pyo), PHP and Java all do that, and JavaScript does something similar at runtime (WebAssembly is very similar to the bytecode for JavaScript). Rust, C, C++, etc. are also often called "system languages".
@@Conenion yes, you can run it unmodified in the browser with the right Javascript code.
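Regarding the "almost 1:1" mapping point a couple of comments up, here is a tiny C sketch (my own illustration; the instruction list in the comment is generic load/store pseudo-assembly, not the exact output of any particular compiler).

    /* saxpy-style update: y[i] = a * x[i] + y[i] */
    void axpy(float a, const float *x, float *y, int n) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
        /* On a load/store RISC target, the loop body maps fairly directly
           onto something like: load x[i], load y[i], multiply-add,
           store y[i], bump the index, compare, branch.
           A Java or Python programmer writing the same loop goes through
           bytecode, a VM/JIT, garbage-collected objects, etc., so the
           mapping to machine instructions is far less direct. */
    }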
I didn't know Bryan Cranston knew this much about CPU architecture.
ikr?? 🤣🤣
Heisenberg, please..
Hope his lung cancer went away
_"JESSIE! WE HAVE TO LOCK!"_
Thought he was interviewing Picard.
Interview ends kind of abruptly, but I really enjoyed it! You bring out Dr Patterson’s talent for explaining these concepts to a wide audience. Wow, he must have been great in the classroom.
*it's a clip
Brilliant conversation. This just closed the remaining knowledge gap I had when it comes to understanding how modern hardware and software work together.
Thinking that the fundamental instructions running on my phone's processor were designed by this guy puts a smile on my face. Great video!
RISC is faster, more energy efficient and easier to design.
CISC uses less memory and is simpler for compilers.
It made sense to use CISC in the 1980s, when memory was much more expensive, programming languages were lower level and compiler technology was not yet well established.
Nowadays, memory is no longer a limiting factor and modern compilers/interpreters can turn high-level languages into machine code very easily.
The priority now, with the mobile device revolution, is to design faster, less energy-consuming processors, and RISC is the way to go.
In addition, as they are simpler to conceive, the market's transition to RISC would greatly increase competition in this segment, which has been quite stagnated in the last decade because of the Intel/AMD duopoly.
The only difference is in the processor's translator: less work to do with RISC, but more machine cycles to accomplish the same task.
With multi-thread, multi-core processors (like AMD Ryzen), CISC is still the best, especially for high-powered computers used for gaming, video editing, etc.
For low-power application chips, though, which see much more use worldwide, RISC-based processors are being used in more and more products and have been gathering much attention in the past couple of years.
In this case, prioritizing fewer clock cycles per instruction means less clock activity to carry out a given task, which in turn translates into lower consumption.
And low consumption is one of the hot words going around, along with security, cloud and ML.
"RISC is faster, "
Empirically false.
Memory access time is a big factor for modern CPUs. A cache miss takes about 200 cycles. So an instruction set that minimizes the number of necessary memory accesses can improve performance.
It is fascinating how this guy comes up with instant, perfectly structured answers.
He thinks in RISC.
This is what tenured professors are like.
Great interview with a great guest.
As an early career web developer, CISC architecture sounds like an absolute nightmare.
Basically, the RISC opponents didn't understand how optimizing compilers work and what they are capable of; many of them also didn't understand what high-performance processor implementation really requires. The argument basically boils down to that. One thing that Dave didn't get to is that CISC computers tend to have instructions that execute *more slowly* than a sequence of simpler instructions...from *their own* instruction set. This was very true of the Digital Equipment Corporation (DEC) VAX machines. In some ways, the VAX was the CISC-y-est CISC. If you understand hardware and compilers (and software frameworks), you understand why RISC makes sense and why you would never choose to design a CISC architecture from scratch. Even the original ARM architecture was not really a RISC. ARM v8 and v9 are much simpler.
One good way to think about it is the Java runtime environment, where you compile the Java into bytecode for a virtual machine, which is then converted into the actual underlying instruction set on the fly. An x86 CPU is doing the same thing under its hood, converting its instructions into simpler micro-operations.
The RISC-CISC wars are long over. Neither side has won but rather the whole issue got obsoleted by long pipelines and speculative execution. For simple CPUs, which run slowly but are power optimized simple instruction sets usually win. For fast CPUs minimizing code size and therefore going a bit more CISCy wins. All modern instruction sets are hybrids and all have some form of microcode being spit into the decoding pipeline.
Very well explained difference between these 2 fundamental CPU architectures.
Even in the late 80s, computer games on the Commodore Amiga & Atari ST were written in assembler...
I remember the RISC/CISC debate/battle back in the 80s. I always thought that CISC was better and all the hard work can be done by the compiler - which is a piece of software - so I felt that CISC would win-out. When Sun started moving away from the Sparc I was surprised and puzzled but I read about commercial arguments that started to change the economic argument in favour of RISC but I always felt it was like VHS beating Betamax again. I'm surprised to hear about RISC-V now. I'm not convinced things are sufficiently different to justify switching from RISC to CISC for everything but there are bound to be applications where CISC is way better. Interesting times.
In the end, CISC CPUs like x86-64 decompose their long instructions into a set of many simple micro-operations, so effectively modern Intel CPUs are in some sense RISC in disguise. On the other hand, ARM has many very complex instructions which could be considered CISC in nature. Same with RISC-V: although the instructions are simple and short, at a certain stage under the hood they are fused into more complex ones for efficiency purposes.
The debate of RISC vs CISC is irrelevant these days. It is more about: can instructions operate directly on memory (x86) or does one have to use load/store (ARM/RISC-V); and does the CPU rely on fixed-length instructions (basic RISC-V and basic ARM) or on variable-length ones (x86, ARM Thumb, the RISC-V C extension).
Ever since the mid 90s, x86 processors have used RISC-like microcode internally, and there's an x86 frontend that translates the binary into that RISC microcode.
@@FLMKane And this is how Intel can patch some issues with certain instructions for security purposes: they can change what the CPU does internally for each instruction. It's almost like an x86 emulator, in a way, just with a hardware-accelerated instruction decoder.
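A small C sketch of the load/store vs. memory-operand distinction this thread keeps coming back to (my own illustration; the instruction sequences in the comments are generic pseudo-assembly, not real compiler output).

    /* Read-modify-write of a memory location. */
    void bump(long *counter, long delta) {
        *counter += delta;
        /* Memory-operand style (x86-ish, illustrative):
               add [counter], delta      ; one instruction touches memory
           Load/store style (ARM/RISC-V-ish, illustrative):
               load  r1, [counter]
               add   r1, r1, delta
               store r1, [counter]
           A modern x86 core typically cracks the first form into
           micro-ops that look a lot like the second anyway. */
    }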
This guy has one brother who makes the highest quality crystal meth, another brother who commands the space ship enterprise, and he himself provided the best cpu design theory. Pretty talented family.
I've seen this video like 15 times, I love RISC and its philosophy.
Have you heard about MIPS? The kids at Berkeley could have used that, but they wanted to look cool and started RISC-V.
Problem is MIPS is proprietary.
I can remember that the Acorn Archimedes had a RISC processor.
And its operating system was RISC OS. You could run that CPU for hours and it would barely be warm to the touch.
As someone who is pro-CISC for economic and software-development-ecosystem reasons, it's important for me to hear the logical reasons and arguments for the merits of RISC architecture. Thank you.
^this.
Have you had contact with DEC computers? The ALPHA architecture?
was looking for a video on mips and stumbled across the great lex
I used his book in college... really enjoyed it.
RISC is gonna change everything
Yeah, RISC is good
1995, Hackers
The Acorn Archimedes was a RISC computer, but it failed commercially against the Amiga and ST. Its CPU, the Acorn RISC Machine (ARM), went on to conquer the world. Should also mention MIPS...
I love these discussions on Computer Systems! Thanks Lex
SHAKTI is based on RISC-V. Good going, team Shakti. Thanks to Lex for bringing knowledge to the world. Russian Legend!
Technically, RISC is more feasible these days than in older times, because processors used to be slower and running those complex programs would have been time-consuming and not so viable, but now we have high-frequency processors which solve those problems. So RISC is the future, and CISC will be history in the coming future, until some kind of radical change happens in the architecture.
@@alexben8674 I think it will flip flop back and forth. CISC makes more sense when reducing transistor size or memory access latency become prohibitively expensive.
-A bunch of computer nerds involved in violent debate...many mothers got the call “mom im gonna need you to pick me up late”-
many small things flow through a system quicker. Works with web requests too. With web requests, there's more opportunities for caching because it's less likely that individual small responses have changed than one monolithic response.
HEISENBERG
This dude is big, Walter White is more the David Patterson of chemistry.
Give him a goatee, and this man is Heisenberg.
"RISC architecture is going to change everything"
RISC is good
I heard the same thing back in '96. A few months later everyone wanted NT 4 for network security. All the RISC workstations and servers would not run NT 4, and RISC died.
I gather history is repeating itself again?
@@ogremgtow990 the problem with risc and arm chips is they are very basic and need to be redesigned for certain workloads. So if your workload changes the chips can't run the software 😂. Software has the ability to move much faster than hardware.
@@ogremgtow990 Not with Apple going for it
14:29 Sounds like RISC did change everything and Intel adapted.
This is a gold mine for students. Mind your citations, children!!
Around the 8:00 minute mark, the good professor starts talking about how operating systems and even application programs were written in assembly language to achieve speed, and how, if compilers could have been made smart enough to translate from various languages into complex instructions, that would have been wonderful, but RISC makes it easier. And then he goes off onto Unix, C, C++, etc.
I now understand why for the last 40 years, we have been getting programmers pretty much illiterate about computer architecture.
Does the good professor know about the Burroughs B5500, B6500, B7500 series and their follow-ups? Those machines had a push-down stack architecture so that they could effectively use Algol as the language in which the operating system could be implemented. As opposed to C, which is a high-level machine-oriented language, you had a true high-level language for writing operating systems. And those machines were equally efficient in running Fortran, COBOL and such languages, which did not need a stack for their statically declared variables.
And if you want register-to-register instructions only (this is claimed to be an essential feature of RISC computers) as opposed to the IBM 360's register-to-register, register-to-memory and memory-to-memory types of (CISC) instructions, then I can tell you that the CDC 3200 dating back to the early 1960's had that type of instruction set. Every arithmetic instruction meant that the programmer painfully accessed the memory to load the two operands into two registers, performed the arithmetic operation (add, subtract, multiply, divide) on the registers and then stored the result back in main memory. What a pain for the programmer who had to program that beast in assembly language! It is the speed increases in hardware over 30 years that enabled a RISC computer to perform fast. If one implements the CDC 3200 using today's VLSI technology, I am sure it would beat any RISC processor in performance.
Write a nice proposal to get a grant, get a bunch of PhD students to design the chip and write compilers for C or Smalltalk or C++, and you have a nice decade-long run of research publications and more research funding. That is what RISC was about.
I'm not sure why Lex doesn't know about RISC and CISC. Having just finished my undergrad CS degree, one of my later classes had us build a rudimentary processor from scratch out of gates; we coded a little in assembly and converted it into machine language by hand, and we learned some of the history of RISC ARM / CISC x86 as well.
Did I just get an abnormally competent professor?
Would be neat for a GAN to make a game of writing instruction sets and compilers.
Some folks got their Smalltalk VM to be resident in the RISC CPU cache
If RISC is better than CISC, then why does "just in time compiled" code work so well? The thing that can best understand how to execute a complex instruction would be the CPU. The breakdown of a complex instruction into micro-instructions is what happens inside a CISC CPU; why is this less efficient than the compiler doing it up front into RISC instructions?
It was touched upon in the video, but if you don't know the answer already it's easy to miss. The genuine CISC instructions come at a higher penalty through the translation layer. In more detail, a compiler that targets modern x86 will greatly favor the "CISC" instructions which are actually 1:1 with the hidden internal RISC instructions. I know, you're thinking "but isn't it just coming up with the same instructions? Why is it slower?" The CPU just has to do more work to get usable instructions out of these CISC instructions, and unlike a JIT compiler it doesn't have 16 GB of RAM to store the results for next time. There's also the general efficiency of the instructions in particular. These instructions tend to operate on specific registers, so software that uses them has to use EXTRA instructions to move data from RAM to registers and back again to make use of them. With a 386 this was acceptable since all instructions had that limitation in some fashion, but on a modern version of the ISA you can save all that overhead by using the simpler instructions that can operate directly on whichever registers make the code simpler. I'm sure many an Intel engineer has argued for moving the CISC decoder to software and exposing the internal RISC to the outside to save space on the die, save power, lower heat, etc., but for business reasons Intel doesn't want to do that.
It's a legit question... but JIT is still a kind of compiling. I think Dr Patterson argues that the compiler will better match the available instructions of RISC than CISC.
Worked for Compaq in the mid 80's and remember these arguments. Back then x86 = Intel = PC = DOS = "inexpensive" computer. Yes, Compaq had SGI workstations to help design the x86 boxes being sold. It wasn't about which architecture was technically superior, everyone knew THAT; it was about which chipset was the cheapest. Just ask Sun Micro, Silicon Graphics, DEC, HP, NeXT. CISC/RISC is a dead argument in the multi-core universe we live in.
> CISC/RISC is a dead argument in the multi-core universe we live in.
I don't see why. Since CISC vs RISC has nothing to do with single or multi-core.
It is rather a dead argument, because X86 is RISC internally, and many RISC chips, which started as a pure RISC design, have more and more instructions and complexity added to them over time.
Yep Intel chips run UNIX also..
lol this comment won’t age well.. this decade will be the decade of RISC, CISCs days are numbered
The distinction is arbitrary but the discussion is interesting.
"machine language" was the term I was taught in data processing in the '70s. Apple will be using RISC in all their products.
RISC-V ?
yes!
WoW great clip from the guy whose text books we have read in uni
I still don't get the main question: why does not having complex operations in the CPU work faster? Hardware must be faster than software, so calculating SHA-256 directly in the CPU must be faster than running primitive instructions. The only thing I can imagine is that the silicon space used for the logic that translates from CISC to some microcode could instead be used for more processing.
I have been following Lex since he had 2k connections on LinkedIn. He has also replied to me multiple times in the past. He's my idol. Honestly the apex of male peak performance.
Heisenberg guy is telling truth listen to him 😂 Hope it won’t end up like in the movie..
Talk about the CRISP microprocessor and DEC (Digital Equipment Corp), IBM, Apple, Motorola 68K processors, QuickDraw, WNT, VMS, OpenVMS, OPS5, MVS, VM, VAX, Alpha, AIX, POWER, z/OS, System p, System z, System x, System i, etc.
Ricky from DEC and IBM and Apple
Amazing interview!
We need him back to hear his take on the new AI hardware arms race
Loving all your interviews! Great questions, and good for an audience that does not know all the details. What I don't understand is your suit. Why are you wearing a fancy outfit when your table looks like a mess! If you want a style element in the show, hide the cables under the table!
They jumped the open source instruction set
10:40 Good sir, if those are the inefficient languages, then, still in the context of compiled languages, which ones are efficient?
I love these tech radicals.
I personally like the word Opcode or operation code rather than calling it instruction or instruction set
Awesome interview!
One thing that I have always wondered about: AVX instructions. Sure, you might have to use an intrinsic for the compiler to use them, but they're a really great way to parallelise. How would those instructions compare to a RISC alternative? You touched a little bit on it at the end, but the answer was a little short for such an important part.
There is a vector extension to risc-v.
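Since AVX intrinsics came up, here is a minimal sketch of the kind of data-parallel add being discussed (my own example; it assumes an AVX-capable x86 CPU, compilation with AVX enabled, and n being a multiple of 8 to keep it short).

    #include <immintrin.h>

    /* Add 8 floats at a time using 256-bit AVX registers. */
    void add_avx(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);   /* load 8 floats from a */
            __m256 vb = _mm256_loadu_ps(b + i);   /* load 8 floats from b */
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
    }

On the RISC side, ARM NEON offers analogous fixed-width intrinsics, while the RISC-V vector extension mentioned above (and ARM SVE) is vector-length-agnostic, so the loop is written against a runtime vector length instead of a hard-coded 8.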
Complexity of instructions doesn't make the difference between CISC and RISC. The basic idea is that RISC has the same size instructions for everything, instead of variable length; they all take a similar time to compute; and they use a load/store architecture, where you load the registers with the needed information, then you execute the instructions that operate on them, then you store the required registers back to memory. With CISC, like x86, you can have an instruction of variable length up to 15 bytes, so it'll keep pulling in more information for that instruction: the contents for the registers, the task to operate on the registers, and then where to put the registers, in one single instruction. With CISC it could also be a simple short instruction like xor a, a, or addsubps... which I don't even want to explain, because I don't fully understand it.
He forgot the Intel 8088, the 8-bit-bus version of the 8086; the interview cut off just when it was getting interesting. Like to know what he thought about Intel iAPX 432, it seems to have so much potential?
> Like to know what he thought about Intel iAPX 432, it seems to have so much potential?
iAPX 432 was a total disaster right from the beginning. The idea was to have even higher-level instructions than with CISC, making the processor even more complex than CISC. What do you expect one of the RISC inventors to think about such a braindead idea?
Prior to university, of course, the vast majority of kids or students who were into computers (which at the time meant 8-bit home computers) had almost no access to the Hennessy and Patterson RISC research. All I knew was from articles in the mid-1980s on the Inmos Transputer and the Acorn RISC Machine.
archive.org/details/PersonalComputerWorld1985-11/page/136/mode/2up
So, we were properly introduced to RISC only at University (in my case UEA, Norwich) as part of the computer architecture modules. So, normally, I've understood RISC to be a performance or energy optimisation trade-off. That is, the question is how to get the most work out of a given set of transistors in a CPU, and what RISC does is trade under-utilised circuitry (e.g. for seldom used instructions) for speed. In a similar sense, complex decoding represents an under-utilisation of circuitry (which adds to propagation delays, thus limiting pipeline performance) and because microcode is effectively a ROM cache: ISA ==> Microcode ==> Control Signals, it's better to use the resources to implement an actual cache or a larger register set. Etc.
Excellent video
What an interesting interview.
I really like this interview. I would say, however, that I think the RISC vs CISC debate just doesn't make sense anymore. Jim Keller makes the point that instruction sets just aren't what matters, and I find that those who take sides in the RISC vs CISC debate seem to vary in when RISC becomes CISC. ARM is considered RISC; RISC-V is considered RISC and is named RISC. ARM has AES instructions, and I can find papers on AES instruction extensions for RISC-V, though I'm not sure if they are officially part of the spec. AES is particular because, one, it is a very complex instruction compared to what was originally considered a suitable RISC instruction, and two, it generally needs to be done in hardware to avoid side-channel attacks related to timing and/or power use. So do/would AES instructions make RISC-V a CISC architecture?

I'm also curious about the argument about the increased number of instructions vs the speed at which they can be executed. Does this account for the fact that RAM is dramatically slower than processors, or does it just assume that the program is held in cache? Does it account for extra cache usage and performance when there are multiple programs fighting for use of memory?

I don't know that the definitions of RISC and CISC have ever really been pedantic enough to classify hybrid architectures, and I'm pretty sure the pros and cons of both concepts have been used to create modern architectures that do what makes the most sense for their use cases. I would also say that, given all the SIMD instructions, crypto instructions, and instructions specifically designed for things like video processing, I believe most people would classify x86-64 as a CISC instruction set. I've seen talk of CISC processors built around RISC cores, but also talk of the concepts just not making sense anymore due to the lack of the kinds of transistor-density limitations that used to exist, and that we just have processors designed to go fast based on ideas from each camp. I'm not old enough to remember the original battle, but as far as I can tell it didn't end with a winner but with a dissolution of the opposing tribes.
Ironically, ARM added Jazelle to execute Java bytecodes natively.
Nice talk
nice video!
CISC is not secure; it's easy to put a backdoor in it, and a CISC platform is hard to audit.
One man’s software is another man’s hardware
Always cracks me up how people think modern RISC is more efficient and shit, but in reality, other than maybe 2 or 3 details, an actual RISC-V chip is not very different from a CISC chip. They are all equally complex, and RISC is just an ISA; it doesn't decide how efficient a chip will be.
The only selling point of RISC-V is the open ISA. The actual RISC philosophy is flawed and died a long time ago; modern RISC processors use microcode and all sorts of stuff that modern CISC chips do.
Meth to silicon?
Why is RISC less efficient at accessing RAM?
Compiled instructions have to be stored in memory before being loaded into the CPU. CISC systems can reduce the amount of RAM used by keeping the number of bytes needed to encode an instruction small. The big bottleneck between the CPU and RAM is the memory bus, which can only transfer a fixed number of bits per clock cycle. Since CISC code takes less memory, more of the program can be brought in per unit of time than on a RISC system.
A good example of this is a multiply instruction. On RISC you first need a separate load for each value you want to multiply, whereas on CISC the whole operation, including the memory operand, can be encoded in one short instruction only a few bytes long (rough sketch below).
So CISC optimises the number of instructions and the bytes needed to encode them, whereas RISC optimises the clock cycles per instruction (ideally one instruction per clock cycle). The bottleneck is memory bandwidth.
That's my understanding of it but please keep in mind I come from the software development perspective not the hardware development perspective. I could be wrong about my interpretations.
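A rough sketch of that multiply example in C. The commented assembly is only approximate, typical-looking output (Intel syntax for x86-64, standard calling conventions assumed), not exact compiler output, and "scale" is just a made-up function name:

/* One C expression, two styles of encoding. Sketch only; exact output
   depends on compiler, flags and ABI. */
long scale(const long *p, long k) {
    /* x86-64 (CISC-style: memory operand allowed, compact encoding):
           imul rsi, qword ptr [rdi]   ; multiply k by *p in one instruction
           mov  rax, rsi
       RV64 (classic RISC: fixed 4-byte instructions, load/store only):
           ld   a2, 0(a0)              ; load *p into a register first
           mul  a0, a2, a1             ; then multiply                  */
    return *p * k;
}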
@@MagnumCarta Thanks, I'm beginning to understand. The Intel Core 2 CPU can execute up to 4 integer instructions at the same time, if the instructions are pairable. I think this works with one complex and three simple instructions. I have never used a compiler, but I am familiar with assembler on the Intel 80386.
@@maxmuster7003 It isn't really about how many instructions you can execute in parallel but about how quickly you can pull instructions into the CPU. A simple example: a line of C code may compile into a single CISC machine instruction, while on RISC it may turn into 4 instructions. However, that single CISC instruction may take 4 clock cycles to execute, while each of the RISC instructions takes 1 cycle. Hence, in principle there is no performance difference.
However, this means that for a larger program the RISC processor will fill up its CPU cache faster than the CISC processor. That is why RISC processors tend to have larger caches.
However it is apparently not as bad as it sounds for RISC. RISC processors avoid a lot of load and store instructions by having many more registers than CISC processors. As far as I understand, a good compiler can arrange things so that RISC doesn't need that many more instructions than CISC.
Anyway that is my understanding. I am also a learner here. I stopped caring about RISC and CISC ever since Apple switched to intel. But it is becoming a more interesting topic again.
@@povelvieregg165
> However it is apparently not as bad as it sounds for RISC.
Also because of instruction caches having a high hit rate.
@Max Muster
You can combine both worlds. ARM, for example, does this with the Thumb instruction set: "compressed" short RISC instructions that are expanded to their long versions during instruction fetch. (Quick way to see the size difference in the sketch below.)
In essence, x86 does this as well. It wasn't planned, though.
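If anyone wants to see the Thumb effect for themselves, here's a hedged sketch. It assumes an arm-none-eabi GCC toolchain is installed; the build commands in the comment are ordinary GCC/binutils usage, the function is an arbitrary example, and exact numbers vary by compiler, architecture level and flags:

/* f.c -- compile the same function twice and compare section sizes:
     arm-none-eabi-gcc -O2 -marm   -c f.c -o f_arm.o   && size f_arm.o
     arm-none-eabi-gcc -O2 -mthumb -c f.c -o f_thumb.o && size f_thumb.o
   The Thumb build's .text is typically noticeably smaller, since Thumb
   encodings are mostly 16-bit (Thumb-2 mixes 16- and 32-bit). */
int dot3(const int *a, const int *b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}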
I think the description of this video should be changed from "AI Podcast" to "Lex Fridman Podcast".
wow! Walter White now started working on CPUs
Wow, I didn't know Walter White knew so much about microprocessors.
He is the one who knocks!
My textbook author ;) great book
Haha I was like ”Hey I know that book!”
He also mentioned his friend John Hennessy
but I'm so nostalgic about x86. can't let go
and I somehow started disliking mobile devices with ARM chips
Brilliant.
Heisenberg actually didn’t die, he just switched to manufacturing processors
Now I see where Malcolm got his genes from..
he's now in the microprocessor empire business
Walter white?
PS3 was RISC, right?
Sophie Wilson.
Think the guys that write the compiler code are the real wizards, but I'm kinda stupid.
For a moment I thought this was a Breaking Bad episode...
Science, bi***! 🤓
Damn Mr White you know about computers too!?
hey, the Breaking Bad character is back, nice to see you professor
With Apple switching all of their computers over to RISC, and RISC running inside almost all tablets and cellphones, it sounds like RISC won.
I am not familiar with ARM CPUs, so I use the x86 DOSBox emulator on my Android tablet for x86 assembly. I do not like Apple, with or without CISC.
Especially seeing that Intel and AMD are constantly trying to fix newly discovered speculative execution attack vulnerabilities.
I am so proud of you RISC !!!, It's been 40 years and you finally did IT !!!!
@@andrewdunbar828 Wrong. x86-64 is owned by AMD, and it runs microcode that can implement RISC-like routines, not RISC itself. That makes CISC CPUs extremely versatile, with vast capabilities.
@@stevecoxiscool RISC didn't do anything; the CISC-based market share in terms of revenue is like 90% of the computing market.
If you look at the current Intel architecture, it is not a pure CISC processor, it is a hybrid (th-cam.com/video/NNgdcn4Ux1k/w-d-xo.html) 14:40. It has a CISC wrapper around a RISC core.
Did it start with the Pentium architecture?
@@andrewdunbar828 It is in this clip
that's what I don't get: these days nothing is pure RISC or CISC. We have heterogeneous x86 CPUs, microprogrammed ARM chips and every fucking thing in-between. And I love them all.
@@maxmuster7003
> Did it start with the Pentium architecture?
Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).
@@andrewdunbar828
> consensus seems to be that the RISC inside cisc analogy is badly flawed.
It is a simplified explanation, sure, but certainly not "badly flawed".
> but too far off the mark if you know how CPUs work.
Then it would have been explained in this way at 14:35 in the video.
HAL says hello
"RISC is good"
RISC will start catching up when you can pay less than US$2k for a server and run Linux on it.
But, but, but, isn't memory speed your limiting factor? If you execute more instructions and you are waiting on memory to serve them, wouldn't that make it slower? Have you accomplished your goal? I don't really want to debate this here, I'm just saying that the Intel Itanium wasn't a successful microprocessor. Macs ran for a long time on PowerPC chips and now run on Intel. I just don't see that RISC is commercially successful. Perhaps it is a better microprocessor design, but then why aren't Macs still using them? I've been in the computer biz for a long time. Written a bunch of assembly language. I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.
> itanium wasn't a successful microprocessor.
Yep. It was a giant failure. But Itanium was VLIW not RISC.
> it is a better microprocessor design, but then why aren't Macs still using them?
Apple has just announced that it will use ARM-based processors, which are RISC. They call it "Apple Silicon".
Search "Mac transition to Apple Silicon" on Wikipedia.
> I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.
Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).
@@Conenion Well, it is not important how it works internally, but this translating to RISC internally, does that mean microcode? If yes, machines have been doing that for a long time. If I recall, the IBM 360 was a microcoded machine.
Walter White if he weren't a chemist
Did anyone make a Malcolm in the Middle reference in the comments yet? Y'know, like something about Hal designing HAL? I'll leave the completion of this joke as an exercise for the reader.
Multiple layers of joke here given "Hal" authored a book on this stuff, haha
Prof. Patterson :) Go Bears!
If you didn't know this you didn’t know much about computers
I shall not add... lol, too touchy of a situation.
Almost all chips are RISC. Most chips just convert their conventional code to simpler code inside the CPU. Intel did this with the Pentium 4. So RISC did win. Also, VLIW was superseded by SIMD, either in the FPU via special instructions or in the GPU. Modern chips just glue all these different approaches together and hide it in the compiler or the CPU's instruction decoder.
> Intel did this with the Pentium 4
Before that. Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).
I feel kind of like he is wrong. On ARM processors all instructions take 4 cycles, while x86 instructions are variable, so today x86 machines are basically 4 times faster.
I still want the EPIC (Explicitly Parallel Instruction Computing) architecture.
Curtis, ARM instructions take 1 cycle on average to finish because they are pipelined. That is, after all, the whole point of RISC having the same number of cycles per instruction: it makes pipelining a lot easier. I am not up to date on the current status of x86, but at least back in the PowerPC days of Apple it was often pointed out that pipelining worked badly with x86. It was hard to keep the pipeline full at all times with a variable number of cycles.
ARM also has a bunch of instructions very well suited for pipelining, such as conditional arithmetic operations. It means you can avoid branching, which drains the pipeline (tiny example below).
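A tiny example of that point. Sketch only, since actual codegen depends on compiler and flags; on AArch64, GCC/Clang at -O2 typically turn this into a CSEL rather than a branch, and on 32-bit ARM into a conditionally executed MOV, so there is no branch to mispredict:

/* Branch-free in the source, and usually branch-free in the output.
   Typical AArch64 output looks like:
       cmp  w0, w1
       csel w0, w0, w1, gt   */
int max_int(int a, int b) {
    return (a > b) ? a : b;
}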
Hmm, 6:00. That's not how it works. Most of the extra instructions that have been added in the last 20 years are just accelerators. For example SIMD (SSE4), or a more obvious example, the AES instruction set, which makes encryption and decryption about 20 times faster. All modern heavy compute operations on Windows rely on compilers with support for a few optimized instruction sets like AVX2 (small example below). You also have pipelining and branch prediction making the x86 side much more attractive. The instruction set war between AMD and Intel has ended, but 20 years ago we had competing and completely different instruction sets like 3DNow!.
12:30, that's BS if you know programming. The very efficient instruction sets are widely used even by higher-level languages. I have been a computer engineer/developer for over 15 years, so what do I know. 😎 I think the majority was right back then if we look at where we are now.
General-purpose compute is never as fast as ASICs, which is basically what advanced instruction sets are.
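On the AVX2 point above, a small sketch of the kind of loop compilers handle well (illustrative only; the function name is made up and exact codegen varies by compiler and flags):

/* With e.g. gcc/clang -O3 -mavx2 this loop is a typical
   auto-vectorization candidate: the compiler usually emits 256-bit
   VPADDD instructions, adding 8 ints per iteration. */
void add_arrays(int *restrict dst, const int *restrict a,
                const int *restrict b, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}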
Yeah, this discussion makes it seem like Patterson has not looked at a serious CPU architecture in 25 years. His arguments may have made sense against the 80386 or the Motorola 68K, but even by the time of the Pentium (P54/P55c) the "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong. Today, there is no such thing as a "high performance RISC"; the only way to achieve performance is to get a multi-core x86. RISC has been relegated to low-cost/hardware-integrated solutions.
@@websnarf
> but even by the time of the Pentium (P54/P55c) the "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong.
You have obviously never heard of the Alpha processor.
> the only way to achieve performance is to get a multi-core x86.
x86 has translated from CISC to RISC-like instructions internally since the Pentium Pro in 1995.
Which avoids long RISC encodings for simple instructions like INC.
> 12:30, that's BS if you know programming.
No, it's not. For a compiler it is still very difficult to map a code snippet to a special instruction that does the same thing. I doubt that a compiler will replace C code that does AES encryption or decryption with an AES instruction, to take your example.
@@juliuszkopczewski5759
Sure, I know. That is exactly the reason why you add instructions to the instruction set without caring about the compiler; programmers use them directly (small intrinsics sketch below). But this is not "general purpose" code, and for such code the argument from Prof. Patterson is still true to this day. Albeit a bit less so, because compilers are smarter today than they were 30 years ago.
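A minimal sketch of that, assuming an x86 CPU with AES-NI and a build with -maes (GCC/Clang); the function name is just an illustrative helper, and a real implementation also needs key expansion and all the rounds:

#include <wmmintrin.h>   /* AES-NI intrinsics */

/* One AES encryption round via the AESENC instruction. The programmer
   asks for the instruction explicitly; the compiler doesn't "discover"
   it inside a plain-C AES implementation. */
__m128i aes_enc_round_x86(__m128i state, __m128i round_key) {
    return _mm_aesenc_si128(state, round_key);
}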
My simple analogy:
Big-block muscle cars have much more scalability than small-piston, high-rpm rice burners, because the rice-burner technology, or in this case the complex instruction sets, doesn't scale well and hits a wall.
Bad analogy.
The RISC vs CISC debate really turns out to be "six of one, half a dozen of the other". If one architecture ran software twice as fast as the other, it would have clearly won and we would all be using that design. This is not the case.
Why couldn't we have something close to a universal, or even a dynamic or reprogrammable, instruction set instead of 17 different hidden and fixed sets?