You can get four-terminal FETs in discrete form. Usually the B terminal is used for biasing. You don't see them often, but they are most common in high-frequency usage where the input signal is so difficult to work with that just putting the correct bias on it is difficult - it's sometimes easier to have the bias voltage entirely separate from the signal path.
>it's just always connected to the gate Did you mean to the SOURCE? Or am I missing something? On the MOSFET icon the arrow thing is connected to source..
Hopefully, that's the next episode. The reason I haven't finished it is because I'm also developing an interactive tool (related to that topic) so you guys can use it in the browser.
Because as soon as the gate of the MOSFET is opened, the capacitor starts to discharge. The voltage provided by the capacitor is proportional to its charge, so it decreases as it loses charge. In that scenario the bitlines would be outputting a variable voltage. Also, using the bitlines as capacitors doesn't require the process to completely charge or discharge the capacitors, so when those capacitors need to be refreshed at the end of the operation, the process won't have to wait for a full charge or discharge, which for obvious reasons would take more time than only charging or discharging them "a little bit".
I think we need to remove the ghost from the previous write and also bias the pre amps on the edge. The capacitors are charged if not too old. We just want to know the polarity.
One thing to note is that the bitlines, like every other component, have some capacitance. In fact, when you consider that to reach however many gigabits of capacity a modern RAM chip has, a bitline must cross hundreds of thousands of rows, we can see that these are rather large structures, and so the capacitance can be presumed to be quite significant. Did I mention that we also want to make the data-storing capacitors as small as possible so we can fit more of them? What this means is that DRAM capacitors are very likely not large enough to fully charge or discharge an entire bitline, not even close. But they don't need to. If we precharge the bitline to right about the threshold voltage for the sense amplifier to switch one way or the other, then just a small change in voltage is enough to tip the balance and read the bit.
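The charge-sharing arithmetic behind that precharge trick can be sketched in a few lines. The capacitance and voltage values below are illustrative orders of magnitude, not figures from any particular process:

```python
# Charge sharing between a DRAM cell capacitor and a precharged bitline.
# All values are illustrative assumptions, not datasheet numbers.
C_cell = 25e-15       # cell capacitance, ~25 fF
C_bitline = 250e-15   # bitline parasitic capacitance, often ~10x the cell
V_dd = 1.2
V_precharge = V_dd / 2  # bitline precharged to the sense-amp tipping point

def bitline_voltage(v_cell):
    # Total charge is conserved when the access transistor connects the two.
    q_total = C_cell * v_cell + C_bitline * V_precharge
    return q_total / (C_cell + C_bitline)

v1 = bitline_voltage(V_dd)  # cell stored a 1
v0 = bitline_voltage(0.0)   # cell stored a 0
print(f"swing for a stored 1: +{(v1 - V_precharge) * 1000:.1f} mV")
print(f"swing for a stored 0: -{(V_precharge - v0) * 1000:.1f} mV")
```

Even with these generous numbers the sense amplifier only sees a swing of a few tens of millivolts, which is why precharging right to the threshold matters.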
2:32 the transistor model doesn't actually map to the gate model of the static RAM cells: the transistor model is a double-(CMOS)-inverter cell with two access transistors, while the gate model is a double-NAND cell with no further access method except, of course, the second input of both NAND gates.
This is a great visualization of why 32-bit computers cannot read a single bool, and actually store a bool bit as a whole byte. Though my code will never see a console or a 90s-era PC, I try to code as efficiently as the hardware will support.
One thing I was hoping you'd go over, but it seems you (understandably) didn't, is what makes up the capacitors on the physical die. I know that MOSFETs are said to have parasitic capacitance, so is that what's being used? Or do they have special layers of materials for capacitors, specifically? How big on the die is a capacitor, compared to a transistor? I've seen conflicting answers when I try researching those things on my own. One of the things I remember seeing is a VLSI layout diagram that showed a capacitor being absolutely massive compared to a transistor, which would seem to imply that it should be possible to pack more SRAM into a space than DRAM, but if that were the case then nobody would use DRAM.
Modern DRAM processes use quite complicated techniques to fit as much capacitance as possible into as small an area as possible. They typically use trenches with a conductive layer, a layer of oxide, and then another conductive layer to make the capacitor. So although the capacitor is quite big compared to the transistor, most of its size is in the vertical direction rather than the horizontal direction.
but isn't a MOSFET itself basically a transistor that also acts as a capacitor? Because if you, for example, apply a voltage to the gate and then remove the voltage source, it will still allow current to flow from source to drain due to its capacitance. Or am I missing something?
Wouldn't adding a second transistor that acts as an AND gate, requiring both the row and the column to be powered, solve the issue of reading a whole row of bits?
It seems your true interest is in electrical/computer engineering, vs. CS - this is a VERY welcome addition to TH-cam, where this kind of clear, concise, well-animated, and perfectly-paced content may already exist for CS, but is essentially nonexistent for CE. Please don't stop :)
I’d love to learn more about this ultra low level stuff in computers
I just take it for granted but someone had to think about it
Ben eater?
You might also like Ben Eater
but please don't use acronyms 😭
Words cannot describe how much i love your videos, please never stop.
That's what a wordline is for, i suppose 😇😁
@@Mrfebani I'll try opcodes next time lol
Never stop me loving you
I am an electrical engineer, have some knowledge of some programming and hardware description languages, have been working for many, many years, and am familiar with many educational materials and lectures. I can tell you this much: your way of presenting and showing things is by far the most intuitive and understandable I have ever seen. I am also familiar with the Branch Education videos, which provide an incredible level of detail and make it tangible to the viewer. But your presentation goes so much deeper into the basics that not only newcomers but even experienced people can't help but say FINALLY. I take my hat off to you and your work. The greatest respect!
PS: Maybe you could make a video about why NAND flash or memory in SSDs, for example, is slower than DRAM/SRAM. Especially in view of the fact that you have described very well how SRAM gets its "storing" property when reading, a further presentation could show that it is not comparable to NAND flash or non-volatile memory. In my opinion, this would be a good bridge to explain the last bottleneck (memory) in terms of CPU (cache) -> RAM -> non-volatile memory.
I don't have any actual degrees, but I do have the knowledge and understanding of most of these fields, from computer science to software and hardware engineering, and I was thinking the same thing in regard to volatile vs non-volatile memory. I'd also be curious about a fine-detail explanation of atomic operations.
When I was writing a piece about Commodore (my background is in economics), I always thought it was weird for Jack Tramiel, the cheapest man in the world, to use SRAM in his first 2 successful home computers. Seeing how complicated it is, and the necessity of DRAM refresh, I understand why now
Proud owner of a Commodore VIC-20 and Commodore 64, here 🙋🏽♂️
Commodore only used SRAM in the C64 for colour RAM. Only the original PET had SRAM and the later models, from the 2001-N onwards, used DRAM. The amount of memory in the VIC-20 was so small that it didn't make that much of a difference in terms of cost.
There’s a type of memory in-between dynamic and static RAM called “lambda memory”.
It uses a reverse biased diode as a constant current source and a pair of depletion mode mosfets.
It’s called a lambda memory because the current through it rises then falls with voltage. Because of parasitic resistance/leakage the current actually rises then falls then rises again. Due to this it can store 3 voltage states at constant current (LOW, MID, HIGH).
It also has another enhancement mode mosfet for reading/writing.
In total, 1 diode, 2 depletion mosfets and 1 enhancement mosfet gives 1 memory cell that has 3 states and uses 7 semiconducting junctions. Compared to a normal static memory cell that has 2 states and uses at least 12 junctions, it stores 1/3 more data in 1/2 as much space. Quite a bargain. Unfortunately, it is not in use due to very tight tolerances for manufacturing each memory cell since the nonlinear behaviour of the silicon is sensitive to even slight imperfections or doping variations.
Wow, I had no idea that my RAM was so sketchy! Now I am frightened! 😆
Yeah, I was surprised as well! That would explain why servers use way lower clock speeds and ECC
@@el_quba Hmmmmmm... I didn't know that about servers!
@@oglothenerd For example, majority of DDR5 server sticks stay below 5000MT/s* while consumer DDR5 quite often has 6000MT/s* and some even go above 7000MT/s*.
This of course comes with instability issues (even without current Intel blunders) so PC build guides recommend keeping clock speeds modest for professional users. And now I know where that instability comes from!
* I use MT/s here, which is likely the correct unit, but RAM clock speed units provided in specs are a hot mess, so take the unit with a grain of salt. The main point still stands tho.
@@el_quba This is good info to know! Thank you!
Manufacturers haven't trusted their capacitors lately, based on the refresh cycles every 30 ms, and they're adding error correction on the die in DDR5.
You basically teach people from the ground up. And you don't hide it behind a paywall. Thank you.
Happy to help!
Your videos are always very clear, and I understand them so well.
Thank you for doing this for us!
thanks for such a great, well-explained, well-structured video explaining how DRAM works at the hardware level. I paused and thought many times to let my brain process and understand it thoroughly. really appreciate your hard work
When reading docs like “The Optimization Guide,” I sometimes see the term “cache hit.” I always wondered why there were caches in the CPU and why anyone should care. It never quite made sense to me that a shorter distance increases the speed of memory access. So thank you very much!
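The effect of a cache hit can be felt even in a toy model. Here is a hypothetical direct-mapped cache simulation (invented sizes, no associativity, nothing like a real CPU's full hierarchy) showing how the access pattern decides the hit rate:

```python
# Toy direct-mapped cache: 64-byte lines, 512 lines (32 KiB total).
LINE = 64
NUM_LINES = 512

def hit_rate(addresses):
    cache = {}  # line index -> tag currently stored there
    hits = 0
    for addr in addresses:
        line = (addr // LINE) % NUM_LINES
        tag = addr // (LINE * NUM_LINES)
        if cache.get(line) == tag:
            hits += 1
        else:
            cache[line] = tag  # miss: fetch the whole line from DRAM
    return hits / len(addresses)

n = 1 << 16
sequential = [i * 8 for i in range(n)]              # 8-byte values in order
strided = [(i * 4096) % (n * 8) for i in range(n)]  # jump a page at a time

print("sequential:", hit_rate(sequential))  # reuses each fetched line
print("strided:   ", hit_rate(strided))     # conflicts evict before reuse
```

The sequential walk hits 7 times out of 8 because each fetched 64-byte line serves eight consecutive reads; the strided walk keeps evicting lines before reusing them.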
I'm an ECE grad student taking a class on this right now. This is unbelievably helpful. Thank you.
Thanks
Another video that I will watch again and again over time. The recommended two videos are also explanatory.
Nice job. You succeed in simplifying while remaining complete. Continue in this direction... I would like to see more programmers taking an interest in hardware mechanics. It really helps in understanding complexity and improving programs.
Very, very well done!
I'm also a software engineer - albeit one hiding a logic analyzer and soldering iron behind his back. So a few comments and nitpicking.
At the level of your video I think the finer details of newer memory types such as DDR memory imho can safely be ignored. That's basically additional details that should be left for a closer look.
Memory refresh is complicated and some memory controllers have ample options to configure refresh. For many if not most hardware this is undocumented black magic. This kind of setup is usually performed by firmware in early initialization, right after the CPU itself is ready. Depending on the CPU, the cache SRAMs might contain junk such as data with invalid parity or ECC, which needs to be cleared before the CPU can perform a cached memory access without blowing itself up. Even an implicit memory access such as for the stack could do so, so at this stage subroutine calls are taboo. You'd think hardware would make that easy, but wiring a reset line to everything that needs to be initialized for use once after reset is something that gets hardware guys rioting and pointing their fingers at the software guys: "you do it" 🙂
Next, memory controllers. The cache may be working, but DRAM still can't be accessed. In older systems that was as simple as writing a few constants into the memory controller. Some systems had to perform strange voodoo to figure out how much memory is actually physically present. Yet more modern systems have a feature known as SPD, allowing the system to detect the quantity, type, and speed of memory. Software then programs the memory controller accordingly. Still no stack access, so such code is often an unholy mess of deeply nested C macros. Optimal programming includes the use of features such as interleaving where possible, and many more, so it's not trivial. Once this has been completed, memory may need to be cleared to avoid parity or ECC errors. And after that sanity arrives, everything else is much simpler now that "normal" programming is possible.
Some very old systems are nice in that they don't need any software initialization at all for their memory controller. The hardware is (in hindsight) unsophisticated enough to just know what to do without being told to.
Finally, caches may not always consist of SRAM. One of the systems I worked with had three levels of cache. The CPU was switched to a different architecture and the new CPU architecture had a different bus, so conversion logic was needed. But that logic slowed down memory access. That was fixed / kludged (you choose the term) by adding a 64MB L4 DRAM cache. The only DRAM cache I know of, but I haven't researched that exhaustively.
16:53 - in earlier computers, the ram chips handled a single bit of a memory location and you put multiple together to make up the width of a memory location.
The address pins on the chips would only handle half of the address lines, and the chips would have pins that indicated whether the value on them at that moment represented a row or a column (RAS & CAS).
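That multiplexing can be sketched as a simple address split. The 8-row/8-column split below is a hypothetical example in the spirit of classic 64K x 1 DRAMs:

```python
ROW_BITS = 8
COL_BITS = 8

def split_address(addr):
    # The controller drives `row` onto the address pins and pulses RAS,
    # then drives `col` onto the *same* pins and pulses CAS.
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

print(split_address(0xABCD))  # row 0xAB, col 0xCD, printed in decimal
```

Halving the address pin count this way is exactly why those old packages could stay so small.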
Absolutely great. I can't express my gratitude to you in words.
I love how clear concepts are presented in your content. Please make a series of OS/RTOS topics.
You definitely have the best visuals when showing how all of this stuff works
This is a very well made video. As an electrical engineering student, I'm sending this channel to all of my classmates for our list of educational TH-cam channels
i love your YT-videos. I have always looked for an explanation how the actual hardware of CPUs works. And I always got these zoomed out views that never explain how storage and code actually is stored in hardware. Thanks
This channel is perfect for engineering
This is now my favorite channel.
Hey, you got yourself a sponsorship, well deserved!
The animations on this video are so smooth and well executed, even tho I already knew most of this it was still so engaging and satisfying to watch
Wow, such clear animations to illustrate your treatise! Great work!!
I don't know... but i subscribed to your channel a long time ago & finished all the Previous videos... still i didn't get it recommended... this Channel is Seriously Criminally Underrated by TH-cam algos... your contents are truly unique...
Yeah, I've noticed the algo is not recommending me lately.
I felt bad b/c I thought I hadn't subscribed.
Realized I had subscribed many videos ago.
Good decisions were made.
I really hope you keep making these videos. You have a clear talent for it.
And I LOVE learning stuff like this. I'd much rather watch this than the brain rot BS others are making.
10/10 channel content
These types of videos take you deeper into programming. Thank you very much ❤
i really love your videos. one of my favorite TH-cam content right now and i always wait for new episodes. (im from germany btw)
Thanks for the support!
Yet another absolutely amazing video! I am so happy you make those videos, because they answer a lot of questions that always bothered me but would take hours or days to research. And that visual aspect helps so much!.
Do you plan on making a video about clocks and their role in components? They are seemingly crucial for computers, but don't really appear in your videos to reduce complexity. Yet I'm still curious how clocks keep everything in running and in sync, so such video would be amazing!
Video about Clocks is definitely on my list!
Well suggested! @el_quba
I see how scalability of DRAM is so good! You can basically keep the part with mux-demux and sense amplifiers and extend in the other direction, for which only a bigger decoder is needed.
I wonder, on the physical RAM memory chips, does this concept get used? The memory chips themselves are rectangular, so it is tempting to assume that the mux-demux and sense amplifier part is along the shorter side.
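The "only a bigger decoder is needed" part is easy to picture: a row decoder just asserts exactly one wordline. A one-hot sketch in software (pure illustration, not how the silicon is laid out):

```python
def decode(row_addr, num_rows):
    # One-hot decode: exactly one wordline goes high.
    assert 0 <= row_addr < num_rows
    return [1 if i == row_addr else 0 for i in range(num_rows)]

print(decode(5, 8))  # [0, 0, 0, 0, 0, 1, 0, 0]
# Doubling the number of rows only means one more address bit into the
# decoder, while the sense-amplifier/mux side keeps the same width.
```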
Man I love this channel. Great content. Very informative
This is great. Your explanation was very easy to understand. I wish you had explained the refresher more in depth; it seems quite difficult to make it work with the existing circuitry you explained before.
We need this type of explanation 🎉
This was a great video explaining how DRAM works! 🫡
It's not within the scope of this video per se, but this also helps explain why DRAM has timing/latency values. A capacitor can't charge/discharge instantaneously, so you have to wait a certain period of time for it to do that. And if you don't wait long enough, the voltage values are probably not within tolerance.
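Those timing values can be turned into nanoseconds with a back-of-envelope calculation, assuming the usual convention that DDR transfers data on both clock edges (the example kits here are illustrative, not specific products):

```python
def cas_latency_ns(transfer_rate_mts, cl_cycles):
    # DDR moves data twice per clock, so clock MHz = MT/s / 2.
    clock_mhz = transfer_rate_mts / 2
    return cl_cycles / clock_mhz * 1000  # cycles * ns per cycle

print(f"DDR4-3200 CL16: {cas_latency_ns(3200, 16):.1f} ns")
print(f"DDR5-6000 CL30: {cas_latency_ns(6000, 30):.1f} ns")
# Despite very different clocks, the first-word latency lands in the same
# ballpark: the capacitors still need their charge/discharge time.
```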
Wonderful explanation 👏
Keep uploading brother❤
Simplifying to the essentials to make it understandable to people not involved in designing chips, which is the vast majority of viewers. Great job deciding on what is important to show in detail, and what to show with vague blocks with no internal detail.
Excellent exposition. Thank you.
Yeeaaah i was waiting for this 🎉🎉🎉🎉
🔥🔥🔥🔥🔥
Best chanel in TH-cam ❤
Please keep making videos like these!
So that is the difference between Dynamic RAM and Static RAM. Amazing!
I've been in engineering for over 20 years... and I finally realized where the D and S in DRAM and SRAM come from 😅😂
In Polish, "SRAM" is also a word; it means "I am taking a sh*t."
@@norbert.kiszka 😂
I always thought it was for Downloadable RAM
:0
what do you use to visualize these circuitry and animate them?
Despite knowing all this already, it's still nice to refresh my memory with a relaxing video.
This is insanely good and in depth
your programme is one of the most beneficial
Your content is incredible. I did start getting confused around the 8 minute mark. Idk why, but all of a sudden it stopped clicking in my head. Just wanted to provide feedback.
Thanks for the feedback, I'll use it to improve later videos.
Since you're usually reading a lot of data at once tho when you do perform a read operation, computers do often cache the data in fairly large chunks, up to a few kb. It makes sense to do that since you're already reading the whole row, each extra byte you grab has pretty minimal cost.
As far as refresh rates go, iirc 50-60ms is a pretty common interval, but you could go lower to like 20-30ms if you were really concerned about rowhammer attacks or similar
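Whichever window is used, the refresh bookkeeping is simple arithmetic. Using the commonly cited 64 ms retention window and an illustrative row count:

```python
rows_per_bank = 8192   # illustrative figure, varies by device
window_ms = 64.0       # commonly cited retention window

# Every row must be refreshed once per window, so the controller spreads
# refresh commands evenly across it.
interval_us = window_ms * 1000 / rows_per_bank
print(f"one refresh command roughly every {interval_us:.2f} us")  # 7.81 us
```

Shrinking the window (e.g. against rowhammer) proportionally increases how often the bank is busy refreshing instead of serving reads.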
One more epic video. Great video bro!
Great channel, I am just wondering which software you use to make these videos ?
Thanks, nice work on this video!
In older 8 bit computers each bit of a byte was handled by a separate chip, much like your initial example. Hence the banks of chips you would see on their motherboards. Even on modern DRAM there's multiple chips on each stick and the load is spread across them. As far as I know, but haven't checked, the data bus will be much wider than the 64 bits the CPU normally processes. Especially considering that most systems work with pairs of DRAM sticks, not just one, so they can operate together to be even wider than a single stick would allow. In server architectures this can be even more complex with banks of 4 sticks working together.
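The load-spreading can be sketched as byte-slicing a 64-bit word across eight hypothetical x8 chips, each chip storing one byte of every word:

```python
def split_word(word):
    # Chip k stores byte k; all eight chips are read/written in parallel.
    return [(word >> (8 * k)) & 0xFF for k in range(8)]

def join_word(chip_bytes):
    word = 0
    for k, b in enumerate(chip_bytes):
        word |= b << (8 * k)
    return word

w = 0x0123456789ABCDEF
print([hex(b) for b in split_word(w)])
assert join_word(split_word(w)) == w  # lossless round trip
```

Older bit-per-chip designs did the same slicing at single-bit granularity, which is exactly why eight (or nine, with parity) identical chips sat in a row on those boards.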
Please make a video describing all the types of memory, like registers, cache, flash, magnetic disks, RAM, ROM, and a comparison in terms of cost, speed, etc. It would be very helpful...
Cool video. I'm also a software engineer, but I love this stuff.
Have fallen in love with your videos 😌 ....
What happens when there is a cache miss during an instruction such as a load, add, or sub that now has to use slower RAM? Similarly, what happens when it has to use the drive — is that when a process temporarily goes into the D state?
Well, the core has an out-of-order execution unit, so it will execute other instructions instead, or even switch threads. There is always time wasted waiting, and the trick is to make it do as much as possible in the meantime.
good sponsor recommendation, thank you
Thank you for explaining what my professor couldn't in 3h, all in under 20m
Kindly publish a video on the internal workings of GPUs compared to CPUs
GPUs need an entire book, maybe even a couple of books, to explain, primarily because GPUs rely heavily on fixed-function hardware, so you need to explain every function: how it works and why it's needed.
As a professional JS hater I really appreciate your hardware-related videos, can you recommend any books or other materials for learning more about electrical engineering?
Hmmm for data-storage devices like SD cards or thumb drives, which type of RAM is most often used?
8:55 Here I don't understand how exactly it resets, i.e. how it returns the capacitor to its initial state. If you could, please explain it the way you simulated charging and discharging at the beginning.
Another great video on computer science, many thanks!
Hi George, when you say some things are oversimplified, can you please tell us what other aspects weren't mentioned? I would like to learn them too! Thanks❤️
Watching every single vid u make so far
U r really amazing
Yet another banger
Fun fact: a MOSFET in integrated circuits is a four-terminal device, the fourth terminal being the BULK. It is just always connected to the gate when produced as a discrete device. The more elaborate MOSFET symbol makes this visible.
You can get four-terminal FETs in discrete form. Usually the B terminal is used for biasing. You don't see them often, but they are most common in high-frequency usage where the input signal is so difficult to work with that just putting the correct bias on it is difficult - it's sometimes easier to have the bias voltage entirely separate from the signal path.
>it's just always connected to the gate
Did you mean to the SOURCE? Or am I missing something? On the MOSFET icon the arrow thing is connected to source..
Man, there is no doubt that you do great videos. But I'm really waiting for the "how loops and conditionals work" video. You have promised 😉
Hopefully, that's the next episode. The reason I haven't finished it is because I'm also developing an interactive tool (related to that topic) so you guys can use it in the browser.
@@CoreDumpped definitely, that will be a video I've been searching for for many years. Thanks, man!
@@CoreDumpped That sounds very tempting!
But one thing I don't completely understand is why we need to precharge the bitlines. Why can't we just read whether there is any voltage or not?
Because as soon as the gate of the MOSFET is opened, the capacitor starts to discharge. The voltage provided by the capacitor is proportional to its charge, so it decreases as it loses charge. In that scenario the bitlines would be outputting a variable voltage.
Also, using the bitlines as capacitors means the process doesn't need to completely charge or discharge the cell capacitors, so when those capacitors are refreshed at the end of the operation, there's no need to wait for a full charge or discharge, which for obvious reasons would take more time than charging or discharging them only "a little bit".
@@CoreDumpped thanks for the reply I think I understand now
I think we need to remove the ghost of the previous write and also bias the sense amps right at the threshold. The capacitors still hold their charge if it's not too old; we just want to know the polarity.
One thing to note is that the bitlines, like every other component, have some capacitance. In fact, when you consider that to reach the however-many gigabits of capacity a modern RAM chip has, a bitline must cross hundreds of thousands of rows, we can see that these are rather large structures, so the capacitance can be presumed to be quite significant. Did I mention that we also want to make the data-storing capacitors as small as possible so we can fit more of them?
What this means is that DRAM capacitors are very likely not large enough to fully charge or discharge an entire bitline, not even close. But they don't need to. If we precharge the bitline to right about the threshold voltage for the sense amplifier to switch one way or the other, then just a small change in voltage is enough to tip the balance and read the bit.
2:32 The transistor model doesn't actually match the gate model of the static RAM cell: the transistor model is a double-(CMOS)-inverter cell with two access transistors, while the gate model is a double-NAND cell with no access method other than the second input of both NAND gates.
How do you make those animations?
powerpoint
Yeah, everything you see in my videos is just PowerPoint slides.
Superb brother!! Superb!!
Thanks for your video,
I am great fan of your videos.. I would have clicked that like button at least a thousand times!!! if possible!
Sir I have commented a question on your "How Transistors Remembers Data" video. It would be really helpful if you reply with an answer for that 😊🙏
love your videos!!
masterpiece, keep it up
Bro is a legend
Gonna save that JLCPCB tip, thx! 👀👍 nice vid 👍
This is a great visualization of why 32-bit computers cannot read a single bool bit, and actually store a bool as a whole byte.
Though my code will never see a console, or a 90's era pc, I try to code as efficiently as hardware will support
I would have loved to have had these videos for my Master's in Electrical Engineering courses that explain these kinds of systems. Thanks anyway!
This is so cool!
thank you, this is invaluable
The whole time I was watching this, I increasingly got the impression that this is not too dissimilar in concept from how core memory works.
After Chrome wrote the video buffer to my RAM, it became self-aware and detonated me and my computer
Good video, would highly recommend
Awesome video, yet again!
Amazing content, what tools do you use for your animations?
"i'm a software engineer, not electrical engineer, so.."
...so i make best videos on youtube on electrical engineering
One thing I was hoping you'd go over, but it seems you (understandably) didn't, is what makes up the capacitors on the physical die. I know that MOSFETs are said to have parasitic capacitance, so is that what's being used? Or do they have special layers of materials for capacitors, specifically? How big on the die is a capacitor, compared to a transistor?
I've seen conflicting answers when I try researching those things on my own. One of the things I remember seeing is a VLSI layout diagram that showed a capacitor being absolutely massive compared to a transistor, which would seem to imply that it should be possible to pack more SRAM into a space than DRAM, but if that were the case then nobody would use DRAM.
Modern DRAM processes use quite complicated techniques to fit as much capacitance as possible into as small an area as possible. They typically use trenches with a conductive layer, a layer of oxide, and then another conductive layer to make the capacitor. So although the capacitor is quite big compared to the transistor, most of its size is in the vertical direction rather than the horizontal direction.
@@dfs-comedy Huh, interesting. I wonder if that's modelable in Electric (open source VLSI program).
i love his videos
and apply logic in Minecraft
Very Impressive
❤
Damn, I’m glad I don’t have to deal with this vastly inefficient type of memory.
Binge watching your videos 🙂
Could you make a playlist with all the videos on your channel? It makes it easier for us to watch multiple videos in a row :D
But isn't a MOSFET itself basically a transistor that also acts as a capacitor? Because if you, for example, apply a voltage to the gate and then remove the voltage source, it will still allow current to flow from source to drain due to its gate capacitance. Or am I missing something?
Wouldn't adding a second transistor that acts as an AND gate, requiring both the row and the column to be powered, solve the issue of reading a whole row of bits?
These animations are cool, how do you make them?