Thanks so much for having me on your channel, Jeff. It's been really fun making this one with you!
Smart to show NativeScript as the last stack you were showing
you can build your own virtual CPU by following this channel th-cam.com/channels/lQEB7Jq0LKZPWmzoKoe6bQ.html
Thank you for all that useful information. That said, I believe you should have talked more about the greatly reduced power consumption of the Apple M1. Saving energy and having your device(s) run at much lower temperatures is very important.
@@ShawnRitch you’re right, but there is only so much I can fit into a short video :)
did you test NodeJS on node 16 with ARM support or Node in Rosetta?
As a CS student I have learnt all this in class and I must say I was very surprised at how detailed you went in the 100 seconds! Great job! Love the channel
@@user-if1de8pt2j lol my bad 😂 by “100 seconds” I was referring to the series
Not very detailed; there is another short video on YouTube that uses a very simple example processor to show how things work. It's much more helpful
@@grumpy_cat1337 you're very helpful, not even pointing us in the right direction, let alone giving us a link
@@user-if1de8pt2j well does the same person talk for 12mins?
@Chris you’re roasting me for being a student?
Is it just me or was there not nearly enough focus on ARM Vs X86? Surely that's the most significant difference between the Intel chip and the M1, rather than the M1 being an SOC?
Thought the same thing, surprised this wasn't commented more.
ARM being a simpler architecture is by far the main contributor to the massive increase in efficiency.
it's not only being an ARM chip that makes the m1 special; it is special in its own way, like sharing memory between the GPU and CPU (they call it "Unified Memory"), supporting out-of-order execution, and so on
@@mrmeseeks8790 thanks just watched it. Very good explanation - it's not quite as simple anymore is it?
I wonder, though, if the CISC chips will ever get to the point where, like the M1, high performance can be achieved whilst still keeping power usage and heat really low. That seems to be the really big difference.
Yeah, SoCs aren't a new concept; they're extremely common, and they aren't the reason the M1 is fast. I've personally come to hate SoCs to some degree. Removing modularity when you don't need to makes the whole thing useless sooner.
@@ChrisD__
We can only hope the good ol' "gaming PC" market doesn't change. I think it'd be pretty hard to change, because it's mostly standardized pretty well. Sadly, laptops are now completely like "mobile" devices and are super non-upgradeable
this is not 100 seconds
how a cpu works was in 100 seconds
:trolled:
@@frankdiariesdiscord mod
Technically he explained it in the intro...
It never is.
The fact that people are innovative, driven and smart enough to make this kind of stuff blows my mind. We went from a key, a kite and some lightning to mass producing hyper-powered chips that have billions of tiny parts, and computer systems that can share live video and audio with each other all over the world, all in the span of a couple hundred years.
Yep. Misanthropes are wrong. Humanity largely rocks.
Vastly overrating the importance of ben franklin m8
Let this be a lesson that if you’re consistently working on yourself and getting better each day you too can one day become impressive with your talents and knowledge
It’s all exponential. At this point all knowledge is passed down and those people with the knowledge are paving the way for new innovation.
@@BRBallin1 some people are just born different
The first 2 minutes is exactly the basis of what I learned in an entire CS class, specifically computer organization. Very well explained.
Everyone is a CS student
impressive details. I was just researching CPU mining and wondering why apple switched to the M1 to kill intel. I'm still a PC geek even with an apple certification; APPLE is just prestige
honestly that 1 minute mark is the simplest and most compact explanation of my whole semester
i feel like these types of videos should be shown at the beginning of a course. then you'd have a good baseline idea of what you're learning for the next few months
@@EthanDyTioco colleges aren't there to teach people, they are there to rip you off
The situation on Android has really changed since this video: it's now compiled natively, including Android Studio, and the emulator also runs natively. Build times have decreased to mere seconds. I run a pretty complex app and it builds in less than 1 minute every time. Edit: it has gotten even better since I last posted this comment
thx for that info - i just started to regret hammering company staff to finally get me a macbook m1/2 due to that old imac i5 compiling ~20x longer than my i7 notebook.
That 6502 CPU die at 0:20 and 0:44 was launched around 1975 (although this isn't the original 6502 die; it actually looks closer to the Rockwell R6502 die because of the different placement of the bonding pads). Anyway, it has circa 4500 transistors. The pretty grid pattern at top is the instruction decoder, the clock is above it at the far right end of that grid, and the ALU is in the lower half just to the left of centre (along with shift registers and other things). That chip was used in the original Apple I and Apple II desktops that Woz designed, which gave Apple its starting products. You can see the individual transistors on its die with a 180x optical scope.
Gems are always hidden in comments.
thanks dude, really informative.
You are a genius
As smart as Woz was, he still needed the help of Chuck Peddle (the 6502's designer) to get it working.
Jeff, you're an angel for featuring all of our channels. I really appreciate you as a software community member. This was a great vid!! Thank you. Well done, @Alexander Ziskind; I watched a few of your M1 vids.
All my weeks of study depreciated to 100s 😭 😂
Intel chips (x86) move the complexity to the chip itself (more instructions), so they consume more energy.
The M1 and other ARM-based architectures are based on reduced instructions (RISC). They move the complexity to the software (you have fewer, more basic instructions to play with), so code takes up more space in memory, but it's more efficient. It can be faster or slower than x86 depending on the design and optimization.
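As a rough illustration of that tradeoff, here is a toy Python sketch; the instruction names (ADDM, LDR, ADD, STR) and their semantics are invented for illustration and only loosely resemble real mnemonics, not any actual ISA. One CISC-style memory-to-memory add does the same work as a RISC-style load/load/add/store sequence: fewer instructions on one side, simpler instructions on the other.

```python
# Toy model, not real ISA semantics: "load two values from memory,
# add them, store the result" as one complex instruction vs four simple ones.
memory = {0x10: 7, 0x14: 35, 0x18: 0}
registers = {"r0": 0, "r1": 0}

def run_cisc(program):
    for op, *args in program:
        if op == "ADDM":          # one complex instruction: mem + mem -> mem
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]

def run_risc(program):
    for op, *args in program:
        if op == "LDR":           # load memory into a register
            reg, addr = args
            registers[reg] = memory[addr]
        elif op == "ADD":         # register + register -> register
            dst, a, b = args
            registers[dst] = registers[a] + registers[b]
        elif op == "STR":         # store a register back to memory
            reg, addr = args
            memory[addr] = registers[reg]

run_cisc([("ADDM", 0x18, 0x10, 0x14)])                      # 1 instruction
run_risc([("LDR", "r0", 0x10), ("LDR", "r1", 0x14),
          ("ADD", "r0", "r0", "r1"), ("STR", "r0", 0x18)])  # 4 instructions
print(memory[0x18])  # 42 either way
```

The RISC program issues more instructions, but each one is trivial to decode and execute, which is where the efficiency argument comes from.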
I think RISC vs CISC should have been mentioned; other than that, this vid is really great
Most knowledgeable people will say CISC means more specialty instructions that take more clock cycles, and RISC means simpler instructions that make longer code, but this is not nearly as accurate as it used to be. The M1 is a great example of a RISC architecture that has many specialty components and commands that we would traditionally consider the domain of CISC architecture.
It's still useful to mention the traditional differences between the two, but I think these differences are becoming less and less accurate as time passes.
@@gumbilicious1 True, especially with how Intel actually works: for years they have been using micro-ops, a layer that takes complete CISC instructions and breaks them down into smaller operations similar to RISC.
This means extra work = extra heat
he didn't say much about what ARM is, did he? Did I miss it?
The last Intel MacBooks are also 9th-gen Intel. That's important to note from a "processing speed" perspective.
I thought we might create our own CPU from scratch when we went beyond 100 secs but Mr. Alex just nailed it🤩
I actually watched many of his M1 vs Intel test videos. Those are also great💓
Whoa! Thanks so much!
How do you make a cpu?
@@JatPhenshllem th-cam.com/video/qm67wbB5GmI/w-d-xo.html&ab_channel=DIYwithBen (How a CPU is made)
@@conradmbugua9098 Thanks
@@JatPhenshllem FPGA + RISC-V
Wow! Love the way you have explained the technical bits in such a simple way.
1:45 minor correction, it’s called the opcode
Yep, operation code
Good call!
opt code works fine too
I’ve never seen such a beautiful Collab on TH-cam. Wow. Well done guys 👏🏽🙆🏽♂️
Thanks for the 11 minute apple ad
Such a massive job you've done here guys. Thanks.
It feels nice that I remember all of this till date
The beginning of the video made me think it was gonna be a great video explaining the cpu. What I got instead was a 10 minute long ad.
Same :c
Oh man Alex nailed it 👌 Thank you for inviting him
Alex is underrated.
@@shanegilbert6574 I couldn't agree more!
You are one of the only YouTubers I can listen to talk about tech without wanting to blow my brains out 😅 Thank you!
I loved the first few minutes about processors... then the rest was about Apple silicon. Not much about how the "slow" alternatives differ. I thought SoCs were developed for mobile phones? How is Apple silicon new, apart from being more powerful? Please forgive my ignorance, I really wanted to learn some basics.
I mean…he fully explained it in the video
It is more powerful now, but that's only true when using tools the M1 supports in hardware. If you try to use something Apple does not care about, like WebM videos, JXL images, the AV1 encoder, or APIs other than Metal (e.g. Vulkan), then it will be slower.
Whoever figured out how to make the parts in a cpu as small as they are is an absolute madlad
dude i think about this every day. i saw a picture that said features are like 14 atoms in size... 7nm architecture... just the degree to which we've miniaturized electronics... most know PCBs replaced wires, but damn, a CPU is exponentially smaller than I think they could have ever dreamed back then... idk, compared to a human hair it's still wayyy smaller
Thanks both of you!
Digital circuit and logic paper revision after years in just 100 secs and more 😁 ❤️
Thanks !
Ok, so this is basically 2 minutes of explanations about cpu architectures followed by a ten-minute ad for the M1... Too bad, I was really eager to learn.
This feels a little different than usual but still incredible! It would be great if you made more videos like this, thank you!
It's basically the usual informative stuff at the beginning and then a 10 minute ad.
@@physikus7888 what?
Title: How a CPU works in 100 seconds
Me: How a CPU works in 10 minutes
Is this an Apple commercial?
Apparently
Obviously
It said in the title that it was comparing both the chips
The beginning of the video was nice and I wanted more from it, but then it felt like it just turned into an advertisement for Apple, with the rest of the video being a biased take on why SoC design and Apple are better than standard desktops, when that's not really true for a lot of people and use cases. I went into this video hoping to learn more about what CPU architecture looks like at a microscopic level, what an ALU looks like, and how it functions. Instead it ended up being a more high-level look at a CPU followed by an Apple advertisement, which is not what I wanted from this video.
EXACTLY how I felt! I stopped right away.
Same here. The first part was great, the second felt not only off topic, but heavily biased towards Apple. Disappointing 🙁
fr I feel so disappointed
Thanks, you saved me 10 minutes
The title literally has "Apple Silicon" in it… going over the M1 chip and its improvements isn't a "biased advertisement", it's just reality. People get so defensive whenever Apple gets any praise, even when deserved. It's weird.
Thank you very much, all your videos are very helpful and make the basics easy to understand through your 100s series ❤️❤️
Started working with ARM64 devices.
Came back to this 3 year old video for a refresher.
Thank you.
Overclocking doesn't always lead to lower life expectancy, because with undervolting you can get very close to stock life expectancy, with the chip running cooler and closer to stock temps while you get more performance.
How does undervolting work regarding battery output?
@@rb1471 I really couldn't tell you, because most of my experience undervolting has been on desktop. I don't know if you can undervolt a laptop from the BIOS. It might even be dangerous if you can, because the CPUs are already tuned by the manufacturer, like ASUS or Dell, to give you the best battery life or cooling.
I think you meant opcode, not optcode 😉🤓 Nice video!
Defi application in 100 seconds
Omg yes!!
hell, you compressed half of the introduction to operating systems course in university into 100 seconds, and explained it better than my lecturer
Day 5: Elixir/Phoenix in 100 seconds, iOS Development in 100 seconds, Android Development in 100 seconds, Rust in 100 seconds, C in 100 Seconds, TailwindCSS in 100 seconds, JS Testing in 100 seconds, Ruby/Rails in 100 Seconds, C++ in 100 Seconds
Let's goooo, Fireship recognized this!
U forgot to add Go
@@ibrahimshehuibrahim918 thanks, I'll add it next time.
UE 6 in 100 Seconds, Interpreters in 100 seconds, Forensics in 100 seconds, malware analysis in 100 seconds.
Your hard work on these videos, and the editor's work on this one, is surely worth watching and useful to us
from software to the hardware, you've got me covered! one heart please
Thanks for the heart
@@RajvirSingh1313 One heart please
What do you use for your EVERYTHING man? Like the audio, editing software, the animation software, these are perfect videos
Really love these videos.
The only thing I will say that I'm sure someone has pointed out already but needs to be said: it's "OPcode" not "OPTcode" :)
Keep up the good work
I recommend changing the title to something related to the M1 for developers. I was searching for a video like this and didn't find any nearly as helpful and detailed as this one, so I quit searching. Thank you so much for such useful information
I primarily dev in Android Studio and I had to get an M1 because of the benefits while waiting for a better SoC on a larger MBP or smaller mini… but I changed a few settings, mainly memory heap and editor refresh rate, and it runs smooth as butter now. Plus they added ARM AVD support now and they run pretty well, emulating an ARM device on an ARM SoC. I will agree it's not ideal, but I keep myself to one project open at a time and can barely notice a difference at this point. If they release a silicon-native AS build it should be at least as good as the performance of my older Intel MBP workhorse. Luckily Apple devices make it easy to share, drop off, and pick up work between devices, so I can easily hop over to my Intel machine if need be, but I haven't used it for Android development since getting my M1. After seeing WWDC I think instead of trading in my M1 and some cash for a new MBP M1X, I'll just buy an additional mini M1X at the same specs and sell my older 15" MBP. I think eventually everything will be, or have, an ARM build available.
Dude just taught us a topic it takes a whole painful semester to cover, in 12 goddamn minutes.
Did you _actually_ watch the video? And did you _actually_ take a course about this subject (how CPUs work) and know what they teach there?
@@DrorF yes smarty pants I took CS429 (Computer Organization and Architecture) as an elective during my bachelor's. Pretty boring.
@@vishal24000 he didnt teach u anything.. u just didnt learn anything in ur college....
developers: lets start using microservices guys so if some small service died we can replace it
apple: nah monolith approach so if a line of code is wrong the user will have to come and buy an entire application
The web tech stack isn't really concerned with speed all that much, even though microservices are slower: you have to go over the network to call services rather than making a simple function call in a monolith application (rough numbers in the sketch below).
@@comradepeter87 that won't be slow if they r on the same network 🙄
@@zedmagdy says who?
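For anyone who wants to put numbers on the function-call-vs-network-hop point above, here is a rough Python sketch. The port number is arbitrary, the absolute timings vary by machine, and a real microservice adds serialization, routing, and possibly cross-machine latency on top of this best-case loopback hop; the relative gap is the point.

```python
# Time an in-process function call vs the same "service" behind a local HTTP hop.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def greet():
    return b"hello"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(greet())
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 8765), Handler)  # port chosen arbitrarily
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 200
t0 = time.perf_counter()
for _ in range(N):
    greet()                                                  # monolith: plain call
t1 = time.perf_counter()
for _ in range(N):
    urllib.request.urlopen("http://127.0.0.1:8765").read()   # "microservice" hop
t2 = time.perf_counter()

print(f"function call: {(t1 - t0) / N * 1e6:.2f} us/call")
print(f"local HTTP:    {(t2 - t1) / N * 1e6:.2f} us/call")
server.shutdown()
```

Even on loopback, the HTTP path is typically orders of magnitude slower per call than the direct function call.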
Yes. Apple is here to make money. Surprise!!!!
so true
Man, videos like this really make me appreciate the times we are living in! I love technology and I love smart people explaining technology!
Oh man, this awesome video dropped like a gem; I was hoping for this kind of video beyond the software and tech-stack ones,
and you delivered, thank you.
It started with Linux distros, then arch-enemy Microsoft, and now hardware stuff....
You taught almost half my semester's course, COA (Computer Organization and Architecture), explaining the CISC vs RISC differences worth 8 marks.
The sandwich analogy would make more sense if it were instead comparing a supermarket to specific stores.
Yooo, epic content. Love to see it, keep up the great work!
Amazing explanation, we studied this in last year's class
Excellent explanation of the low level activity of the CPU, yet I also had visions of Tron movie scenes of flying discs and light cycles going through my head, as I was watching your explanation! 🤖
This is a very nice collaboration. Thanks guys :)
I was really waiting for this type of processor-related content to hit YouTube... and here you come with 100 seconds. Loved it!
Can you please do a video on how a processor engineer designs a CPU... I mean, do they use EDA software as normal, or do they use computer algorithms... and also how processor supply chains work... and how processor foundries work
I like that you get straight to the point, no time wasted
One of the major drawbacks of an SoC is that the chip cannot be repaired, only replaced. So the overall cost of getting back to normal would be very high (for the company/user), considering the number of components fabricated on it.
Yup, nothing new if you already buy Apple products lol
@@phantasyphotography3813 Lol
Coc...capitalism on a chip? Sorry...made me chuckle.
This made me subscribe in the first 6 seconds of the video ..
Great explanation.
Keep up the good work 👍
What *really* makes M1 faster is that decoding an ARM instruction is much simpler than decoding an x86 instruction.
For example, each x86 or AMD64 instruction can have a different number of bytes. Some of them are 4 bytes, some of them are 6, 8, or even 12, etc...
On the other hand, ARM always has a fixed length. I don't know about 64-bit ARM, but I do know that 32-bit ARM always uses 4 bytes (32 bits) per instruction (which includes both the opcode and the operands).
What this means is that it is very easy to predict where to fetch the next instruction from, which allows decoding more and more instructions at the same time. Then when an instruction finishes, everything needed to execute the next instruction is immediately ready.
Also, ARM, being a RISC (Reduced Instruction Set Computing) architecture, has far fewer instructions than x86, so the instruction decoder itself is also a lot simpler.
Doesn't ARM64 have variable-length instructions?
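(For the record, 64-bit ARM's A64 instructions are also a fixed 4 bytes.) To make the parent comment's decoding point concrete, here is a minimal Python sketch with toy encodings, invented for illustration rather than taken from real ARM or x86 formats: with a fixed width, every instruction boundary is known up front, so a wide decoder could carve up the stream in parallel, while variable widths force boundaries to be discovered one instruction at a time.

```python
stream = bytes(range(32))  # pretend these are raw instruction bytes

def decode_fixed(stream, width=4):
    # Every boundary is N*width, known immediately:
    # all chunks could in principle be decoded in parallel.
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream, length_of):
    # Boundaries emerge serially: you must partially decode each
    # instruction just to learn where the next one starts.
    out, i = [], 0
    while i < len(stream):
        n = length_of(stream[i])   # inspect the opcode byte for its length
        out.append(stream[i:i + n])
        i += n
    return out

print(decode_fixed(stream))
# Toy rule: the opcode byte's low 2 bits pick a length of 2, 4, 6, or 8 bytes.
print(decode_variable(stream, lambda opcode: 2 + 2 * (opcode & 0b11)))
```

The fixed-width decoder is a single slicing expression with no data dependency between chunks; the variable-width one is an inherently sequential loop, which is the bottleneck a wide decoder has to work around.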
I work in visual studio on M1 Mac and it runs really good
It would be interesting to know what was the bottleneck when running the tests. Was it only the CPU or other thing such as memory access or disk access?
great and useful 👍😊❤
thank u
from Gilgit-Baltistan
Yessss I was hoping there’d be a new one today
I always check the playback speed to make sure I'm not on 1.25x which is my eternal state of being on youtube. Thanks for not dragging shit out, and it takes a lot of knowledge to be this succinct.
So, basically, this has been a 12 minute ad for Apple.
I came here to learn about the logic gates and operation codes between two architectures, not be sold an overpriced SoC.
Best explanation of the differences with a general-purpose CPU.
Love your content, man; keep this up, and thank you for what you do.
For explaining the process with the sandwich-making analogy, you've earned a like and a new subscriber.
Damn SOCs have been around for a very long time.
But, it just made sense to me now. 🙏
Yeah, if M1 is faster because it is an SoC, then why does an Intel Atom SoC not run circles around an i7?
This video is a load of crap designed to make Apple fans happy.
@@ralf391
I am also sceptical about the claim that the M1 outperforms the i9.
@@porcorosso4330 it doesn't, outside of either hyper-specialized workloads, like ones that use the machine-learning ASIC on the M1, or in performance per watt.
While the M1 is amazing in its own right, it can't break how physics works. A lot of the stuff that runs faster on the M1 does so because it's specially coded to use fixed-function ASICs, or because the JIT recompiler/emulation of Rosetta can sometimes skip a lot of steps, which isn't great for mission-critical work if you need an error-free environment. But most Rosetta-emulated apps still run slower. Also, a lot of apps don't use the fixed-function ASICs on Intel or AMD for some reason; that's been an issue for a long time, and it's nothing magical just because we see it happening on the M1. People point to Premiere as the best example of how much faster the M1 is vs Intel; the reality is that by default Premiere disables Intel QSV optimizations on Windows, and on Mac they're just not available for some.. reason? Turn those QSV functions on in Premiere on Windows, and if you have a recent Intel CPU the M1 isn't that far ahead, if at all.
Another aspect is synthetic benchmarks comparing what they believe to be "raw" performance. In reality, most of these benchmarks are very small and can easily fit within the caches without much eviction of code to DRAM, and thus get a huge boost in performance. This is also why, in the same benchmarks, we see a lot of Zen-based chips running rings at the top too.. none of this specifically invalidates the benchmarks, but it's not a true raw-compute scenario in my book. Take Geekbench, for example: it can run each test with minimal code eviction. Cinebench R20-R23 is the same way; this is why memory speed plays less of a role in this "CPU benchmark", and you can even go single-channel on many platforms and score similar or higher. This is because each tile tries to work within the cache. The M1 has a fairly large chunk of cache that's unified between cores; it also acts as a buffer to DRAM, allowing the M1 to hit DRAM less often.
This is such a useful summary
The second part seems more like an Apple ad...
Yeah, if the fact that M1 is an SoC, was the reason it's so fast, then Atom and any Snapdragon would have been dominating the laptop and tablet space right now...
Wow, how did I ever live without you, minister! Now the cash flows like a river! Thank you so much for the ideas!
Right on time to show my teenage daughter. She got some questions, how Python actually gets executed :-)
Please make a video on this too
@@monke4319 Hehe. She has known how to count in binary since age 7 or so (one hand, 0-15). Funny moment when it dawned on her what this is actually good for :-) We also did tube triodes to explain RAM, and floppy disk tracks + sectors. Internet-age kids have trouble understanding the concept of storage, and where and when it happens. There is no more blinking stuff you can actually touch. It was very interesting for me too, to draw battery symbols and to know which was plus, and the direction of current etc, after all these years.
@@leoingson dayum negga she knows a lot
Two channels that create great content, all in a single video, that was unexpected! Great one guys! Nice to see Alex here.
SoCs have one advantage and one HUGE drawback...
The advantage is obviously the power consumption, and that might lead people to think it's more ecological to do it like this, but here comes the drawback... The less stuff you can upgrade in your computer, the less time it's going to serve you, eventually leading to a rise in throwing out stuff just to buy new, whereas modular machines sometimes just need more memory or an expansion card and there they go, serving for another one or two years. Keep in mind that mass-producing PCBs involves TONS of toxic materials...
That may work on a laptop but not on a regular PC. PCs are like that for the customizability that allows you to modify them as needed, while a laptop probably needs the cooling system and power efficiency to be better.
The thing with SoCs is that, since they're constrained by size, they may actually be less efficient
Alex and you are awesome. I liked both of your videos.
An important point: the M1 can only be fast when using tools that are optimized for it. If you try to encode a lot of JXL images or some AV1 video, the M1 will be really slow. The M1 does not have support for anything except the Metal API, which means that a program using Vulkan will perform slower too.
Finally someone who explains well the differences!
This is a misleading description of x86-64 (Intel/AMD in most cases) vs ARM. ARM is a RISC (Reduced Instruction Set Computer) style processor, so what you gain in speed is lost in feature set and wider compatibility with Windows and other x86-64-based operating systems. There is also nuance in the testing performed here that might be missed by the uninformed viewer, i.e. compiler optimizations done automatically in the background for ARM, that would explain the apparent performance difference.
ARM is indeed a powerful and typically less power-hungry platform, but there are pros and cons not discussed here that would and do apply when looking to develop native code for x86-64 or other architectures. Web applications built on JavaScript and other platform-independent languages would likely benefit from development on ARM; however, any desktop apps for x86-64 would likely see little to no difference in compile-time/runtime performance, especially when running on Apple's translation layer.
Since YouTube doesn't seem to like my links, just search "m1 vs x86" for more info
The main benefit is the 8-wide instruction decode and the multicore performance that ARM makes possible; a centimeter of extra distance at the speed of light does not make a big difference.
JS and Py actually compile into bytecode, which is executed by their respective interpreters. The interpreter, though, is the one running on machine code.
Also, the execution phase is more complex than that; there are all kinds of opcode extensions, prefix bytes, addressing modes, immediate values, etc., etc. But an important note is that the operand (which isn't always there, as with RET, or of which there are multiple, as with IDIV) doesn't always reference memory; it can also reference CPU registers (?ax, ?bx, ..., and r8 through r15 in 64-bit operating mode) in the register-only addressing mode. In addition to those, there are also some special registers that allow the CPU to function, such as ?BP (base pointer), ?IP (instruction pointer), FLAGS (CPU state flags), and so on.
* The question marks indicate a register size. E = 32 bits and R = 64 bits. For 16 bits, take out the question mark. For 8 bits, use l and h for low and high.
AL -> low 8 bits of accumulator
BH -> high 8 bits of base
CX -> 16-bit counter
EDX -> 32-bit data
RIP** -> 64-bit inst. pointer
** Not available to instructions
Oops, I'm rambling again. I'll see myself out.
Edit 0: I've seen myself back in to correct the r range. r0-r7 actually refer to the previously-mentioned registers. (Source: stackoverflow.com/a/9130707 )
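If you want to see that bytecode step for yourself, Python ships a built-in dis module that disassembles a function into the opcodes the CPython interpreter loop executes; the exact opcode names differ between Python versions.

```python
import dis

def add(a, b):
    return a + b

# Disassemble to the interpreter's bytecode; output varies by version.
dis.dis(add)
# On CPython 3.10 this prints roughly:
#   LOAD_FAST     0 (a)
#   LOAD_FAST     1 (b)
#   BINARY_ADD
#   RETURN_VALUE
# (3.11+ shows RESUME and BINARY_OP 0 (+) instead of BINARY_ADD.)
```

Each of those opcodes is then dispatched by the interpreter's main loop, which is itself ordinary machine code running on the CPU.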
This is a great visual aid in learning programming ground up for me. Thanks
Incredible video. Thank you both! Recently bought the most bottom tier Macbook Air M1 8GB mem / 256 GB just to do video calls for work and was blown away with how much better it runs my dev stuff compared to my 10600K / 32GB Windows pc.
Question for @Alexander, do you think it was a good strategy to go cheapo with M1 for my purchase with the idea that when the M2 machines get announced later this year I can sell and upgrade?
If you can wait for the M2's then you should wait, IMO
8 GB of RAM. You must not run Docker
Clever advertisement for gaining a new generation of Apple cultists. 👍
How I wish you had released this video during my campus days
0:54 rare typo.... thanks for another great video. Fabulous channel
Can we get an awesome video (which is any video on this channel) about WebGL ?
It's on the list
@@Fireship You are awesome!! Thanks!
Love your channel. Very good wealth of information 🔥
That was the longest 100 seconds of my life.
Title: 100 seconds
Video: 12mins
Me: confused
Hotel: trivago
Thank you for your thorough review! What about "pure computation power"? I use Comsol software for physics simulations, and I would really like to know whether I should switch to Apple's RISC chips or whether, for this kind of job (lots of RAM, lots of cores), I'd better stay with Intel or AMD? Thanks in advance!
I am an MSc microelectronics engineer, and some points missed here spoil the comparison. You are comparing an SoC and a motherboard. Obviously, primary and secondary memory access and operations create the difference. The right way to compare Intel with the M1 is to look at the architectural design. In the beginning we looked at clock speed (more ticks, more operations), then converted that into the number of operations per second. Now the deal has changed. At the architectural level both probably use an advanced Harvard design; Apple developed around ARM, which is common, and lots of SoCs built on ARM are also Harvard. Intel doesn't use ARM; Intel is itself an architecture creator, while Apple is sticking to another firm's (but widely used and reliable) architecture.
It's funny listening to a *web* developer talking about CPU architecture, when literally none of their work relies directly on any of this stuff.
Agreed.
It's even funnier that the comparison is made on an Intel Mac and an M1 Mac, and not Windows Intel/AMD + Linux Intel/AMD + Intel Mac + M1 Mac...
@@AntonySimkin which dev uses windows anyways
@@KimYoungUn69 most pc's run on win... Why not?
Best Explanation Ever 👍
So you didn't mention that the M1 chip is basically ARM? And apple stole a concept once again and made it theirs?
ARM + SoC. Both are not new. This video is very biased.
A video on logic gates would be nice 🙏 Keep up the good work man
Great video! :))
When do you think we will get similarly high-performance ARM CPUs on the desktop market, if we will at all? Since these are just laptop CPUs, aren't we able to go beyond that on PC?
ARM cores are winning on power efficiency; PCs are not focused on that, so nobody there will use low-power processors. You'd get better performance with AMD APUs if what you want is SoC integration. Also, because ARM licenses the processor design out, it is possible to mix an x86 processor with ARM; some already have it, iirc.
no thanks. I'd rather have the ability to upgrade my PC as and when i choose. dont want some company selling me overpriced shit that i cant upgrade and have to throw away in a few years
"How a CPU works in 100 seconds"
Literally explains the entire school program
temmie
Love it! Ruby on rails in 100 seconds 🙏💎
Groovy on Grails in 100 seconds
isn't that stuff super dead? last I heard only Japanese people used it for some reason.
thanks bro, i got a cpu and my performance got boosted a lot :D
I'd be totally down to buy an M1 laptop! Just have to wipe the drive and install a dual boot with a Linux installation and Windows 10... You'll never catch me bending over backwards trying to conform to macOS. I'd prefer it if I could get it in a different case as well... Then again, Apple is afraid of change, so I guess I'd be stuck with the 13-year-old design.
CUTTING EDGE 13-year-old design!
Thank you for making a video on this topic....