There are two programs:

a = 1
print(a)

a = 2
print(a)

In assembly, they look something like this:

MOV a, 1
OUT a

MOV a, 2
OUT a

What about this situation? If we execute them one instruction at a time, alternating, it could go like this, based on the video:

MOV a, 1
MOV a, 2
OUT a
OUT a

and the CPU would print 2 and 2, which is wrong. Maybe the CPU takes snapshots, like you said once in this video, before switching at every instruction? Otherwise, one program would overwrite the other program's registers.
The snapshot includes registers too. What it doesn't include, though, is main memory (RAM). This is the reason programs with multiple threads can share memory, but also why programming with multiple threads is difficult and has many pitfalls. In reality there's also something called virtual memory, which makes it so that threads (or even whole processes) can voluntarily share memory, but a rogue process cannot read or write the memory of another process.
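The register-snapshot answer above can be sketched with a toy round-robin scheduler in Python. This is purely illustrative (generators stand in for tasks, and their local variables stand in for saved registers); it is not how a real kernel is implemented.

```python
# Toy round-robin scheduler: each generator's local state plays the
# role of a task's saved registers; a yield marks an instruction
# boundary where the "OS" may switch tasks.
from collections import deque

def program(value, out):
    a = value          # "MOV a, value": a lives in this task's own frame
    yield              # a context switch may happen here
    out.append(a)      # "OUT a": still sees its own a, not the other task's

def run(tasks):
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run the task until its next yield
            queue.append(task)  # put it back at the end of the queue
        except StopIteration:
            pass                # the task finished

out = []
run([program(1, out), program(2, out)])
print(out)  # [1, 2]: each task printed its own value
```

Because each task's frame is saved and restored across switches, the interleaved MOV/MOV/OUT/OUT hazard never happens: each task only ever sees its own `a`.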
It's fascinating and mind-blowing how operating systems work. Could you explain how a piece of software (the operating system, a logical entity) controls and talks to a piece of hardware (the CPU, a physical entity)?
I really enjoyed the history segment of this video. Understanding history is incredibly insightful and important for understanding why conventions are the way they are. Have you considered making a long in-depth video discussing the history of computers, programming languages, etc? It would be a wonderful watch.
I'm excited for a video about threads. Please make one; I think I need a video like this, especially on how the related methods/functions work. Thank you very much anyway!
Great content as usual. I'm learning a lot of things, even better than in university. That's kind of sad for the current education system, but good news for the ones who really want to learn. Keep it going!
Maaaaan, what an absurd channel! A real hidden gem! Amazing teaching and design skills. You explained 6 months of an OS college course in just a few minutes ❤❤❤❤
Dude, you're amazing for teaching all this. I don't like AI when it's just about shitposting a lot to make money, but this is top content, and it's not about fake topics or anything. Just pure gold.
To try everything Brilliant has to offer free for a full 30 days, visit brilliant.org/CoreDumped. You'll also get 20% off an annual premium subscription.
Could you make a video on floating point? How does it work?
You son of a B, I'm in.
Really, your videos just make me want to know more 🎉
This channel is like reading an OS textbook, but it's actually intriguing and doesn't put you to sleep.
Wait what are you doing here😂😂
os.sleep
Try reading Remzi's "OS: Three Easy Pieces".
For a 2nd-year university systems paper, the final assignment was to create a multitasking kernel. The program would receive a set of parameters for each sub-program, and then use a hardware timer to switch the actively running program every so often (based on some arbitrary time-slice value). When the slice expired, all register values would be saved to entries in a pre-allocated block of memory, and then the values for the next scheduled task would be loaded into the registers. It was a fantastic way to learn how a form of concurrency can be achieved!
Can I get a peek at your code? I'm also in osdev but I'm quite lost atm.
Sorry guys, won’t be able to help you there.
Yes can we get the code please
Thank god I didn't go into computing
@@quanmcvn30 You need help with task state switching but you're an osdev? Do you mean you're an osdev student? There are many small kernels on GitHub, for various architectures, that you can look at to see how they handle the TSS.
Something to point out: While Multics was important where UNIX was concerned, the first time-sharing OS was the Compatible Time-Sharing System, developed at MIT for their IBM mainframes.
The name comes from the fact that it was also capable of batch processing, when needed. Eventually, as that fell out of requirement, the Incompatible Time-sharing System was developed.
Thanks
Great stuff, thanks!
The 6809 had OS-9 which was an awesome preemptive multitasking OS for the time.
This is actually amazing. I enjoy learning how the code I write actually works. I started programming in JavaScript where I had no idea how the computer executed my code. Now I program in C# and I constantly think about how my code runs.
I'm working on a video about how computers execute code, you'll love it!
@@CoreDumpped This was a great video and I subscribed after watching it. This comment is worthy of hitting the bell.
And I'm thinking of how so many observably asynchronous pieces of code manage not to collide with each other.
Ah cool, another C# developer here (among other languages). Definitely applicable there, as Tasks and async/await are very important features in the language; C# was also basically the first language to implement them.
Language-level scheduling systems are basically a solution for moving scheduling away from the OS level (threads) and into the logic of the application itself, which is especially useful when you have a lot of concurrency, like many I/O tasks, because it uses an order of magnitude fewer resources; OS threads are far more expensive.
The scheduling mechanism still maintains multiple threads, but far more tasks are placed on a queue to be consumed by those threads, in a very similar fashion to how an OS scheduler works.
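The queue-of-tasks model described above can be sketched in Python. This is a toy illustration of the general idea, not how the .NET task scheduler is actually implemented:

```python
# A few worker threads consume a queue holding far more tasks than
# there are threads, much like an OS scheduler multiplexes many
# processes onto a small number of CPUs.
import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        task = task_queue.get()
        if task is None:              # sentinel: no more work for this thread
            break
        result = task()
        with results_lock:
            results.append(result)

# 100 queued tasks serviced by just 2 "CPUs" (worker threads)
for n in range(100):
    task_queue.put(lambda n=n: n * n)
for _ in range(2):
    task_queue.put(None)

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))  # all 100 tasks ran on only 2 threads
```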
Fantastic video.
A request. Can you make 3 videos on the following topics?
1. OS paging and virtual memory
2. A computer program (lets say C program) compilation to machine code translation to execution (all on physical hardware level)
3. The entire process of when we type something on keyboard and send a search query to google, then get back the response (everything in between starting from key signal from hardware to os and there after networking concepts on how data travels)
lol do you have an SRE interview coming up?
1 and 2 for a short video.
3 is a massive undertaking.
@@camelCase60 No. I have to give a presentation to college freshers who have just joined the office, and I want to make it an animated presentation that simplifies the concepts. I am very bad at creating animations and explaining things in a simplified way. I have been searching through all kinds of materials on YouTube, but it's not coming out very well. Such videos would be helpful. Teaching is a skill I do not possess, so I am asking for help.
@@deanvangreunen6457 agree.
@@phoneix24886 so you want him to do your job for free?
My brain at the start was like: say it, say it, SAY IT DAMN ("scheduler" was the word I was waiting for)
I was like "Concurrency? Well, context switching, of course..."
I've been working in IT for 34 years, and for 30 of those years I've been developing software. I started on DOS 3.3 & Windows 3.0, and I remember the cooperative multitasking of Windows back then. Anyways, even with all that experience I still learned tons from this fantastic video. Really appreciate the animated graphics, which help make the concepts understandable. Really amazing!
Thank you so much. As a programmer without a computer science degree, this helps a lot and fills in the gaps needed to understand programming better.
Low-level stuff has always fascinated me. I was a little afraid, but a couple of months ago I started studying x64 instruction encoding because I'm working on a compiler as a fun project. So as time goes by I'm doing better with low-level stuff, and when I see that I understand more of the concepts behind this video, I'm glad about the progress I've made.
Your videos are bangers by the way!
I love it too!
Low-level stuff has been my passion since the 1970s. Back then, there were few alternatives other than BASIC. I think the first C compiler I had was late 80s.
easily one of the best comp sci channels on youtube. another banger and a half!
Omgg i wanted to find out how they run simultaneously and i searched but those stackoverflow posts were confusing and you posted a video :D
You by far produce the best explanation videos on computing architecture. These are video topics I have been dying to know about. I personally would really appreciate it if you would post your resources/references in the description so I could learn further on my own. Also, the AI voice was concerning at first, since I figured it was another low-effort nonsensical channel, but that's definitely not the case.
The AI voice is really high quality tbh
I completely missed it until the voice pointed it out 😮
i genuinely didnt realize it was ai
I took OS class last semester and this channel really helped me consolidate what I learned and make me appreciate it a lot more, thank you!!
love this channel, hands down my favorite channel on youtube
@CoreDumpped typo in the title .. change "your" to -> "you".
Fixed, thanks!
Scheduling is such a genius way of handling multitasking. It's been around forever and works amazingly. IMO, understanding how execution contexts switch, especially with normally untouched registers such as the segment registers and the CR(n) registers, can be such a eureka moment in systems software development, and in understanding how "it all" works.
Seems like compiling back then wasn't that much slower than compiling big projects in Rust.
Can you explain networking from scratch? (completely)
In the beginning, the Big Bang happened 13.8 billion years ago... then humans invented data, and then the IP protocol.
A bunch of electrons travel through wires, and hardware/software decode those signals as different things depending on what their purpose is. Tada!!!!
Although I'm just joking, this is pretty much what it is lol
fasterthanlime's video on explaining the internet might just be somewhat the video you're looking for in the meantime
I am afraid what you're looking for is not here; this channel is more of a low-level way of looking at computers, I would say the programming side of the operating system. Of course networking requires some low-level and programming knowledge, but that won't help you manage Cisco or any other networking technology. Start with Cisco and their free courses on the Skills for All platform. Rick Graziani's channel is also a good idea for fundamentals; he may be a Cisco instructor, since he appears in many Skills for All courses. David Bombal also has a free CCNA course.
Ben Eater has an excellent series on networking. He works up maybe the first three or four OSI layers from an electrical engineering perspective. From there, you can find other tutorials to round out your knowledge.
Great content
Pretty cool. Only thing I think was missing and should have been referenced is instruction-level parallelism and things like instruction pipelining, scalar vs superscalar processors, etc
The N64 from 1996 was scalar, in-order. Itanium was in-order superscalar. The DEC Alpha could execute out of order, as did Cray machines. Other superscalar CPUs are unsafe: Spectre. Branch prediction was invented by IBM and slowed down their processor; now it enables attacks.
Spectacular. I am amazed by the quality of these videos. Although this time I didn't find much for my programming language, it explains so many concepts that I had barely realized were still vague to me.
Thank you for these spectacular explanations!
Amazing work George!
The animations really help
This is way better than reading the textbook
Concurrency also helps maximize processor utilization. In general, the CPU is the fastest part of the machine, so it will spend lots of time waiting on I/O if executing only 1 process. With concurrency, other processes may get CPU time while 1 process waits on some I/O component.
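A quick way to see this effect is to overlap simulated I/O waits; this sketch fakes the I/O with time.sleep:

```python
# Three 0.1 s "I/O" waits done sequentially would take ~0.3 s;
# run concurrently, the waits overlap and wall-clock time stays ~0.1 s.
import threading
import time

def fake_io():
    time.sleep(0.1)  # stand-in for a disk or network wait

start = time.time()
workers = [threading.Thread(target=fake_io) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.time() - start

print(f"elapsed: {elapsed:.2f}s")  # close to 0.1 s, not 0.3 s
```

While one task is blocked waiting, the CPU is free to run another; that overlap is exactly what the comment above describes.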
Honestly I have to say, your videos have been a really great refresher for the topics I heard in a course on Operating Systems in University.
Really love your explanations.
This channel is incredible. Please never stop.
I wasn't expecting this kind of video, but this was nice if someone didn't already understand the basics and the history. I could really see this becoming a general OS series. I don't know what your background is, but if you know anything about both ARM and x86 it would be nice to see a series of videos comparing their different processes for booting up and running an OS, such as what equivalents an ARM processor would have for the IDTs and GDTs on x86.
On Windows 95: it still used cooperative multitasking for legacy applications, which, at its launch, were almost all applications, leading to Windows 95 (and 98, Me) being considered unsafe against misbehaving applications. After NT was merged into the consumer versions from XP onward, only pre-emptive multitasking remained.
Computer science is so fascinating
I've never seen this topic broken down so well and so visually pleasing at the same time! Keep the good content coming :)
This was a really helpful video! It would be really nice to have a clarification of process scheduling and how it works in conjunction with threads. I.e., if I have a programming language that allows multithreading, do all threads run in the same process as far as the OS is concerned, or does each thread get its own process?
Yes, threads are handled very similarly to processes; they're also scheduled. There are some differences though: for example, if the main process is terminated, all of its threads are also terminated (unless otherwise specified). I'll record a video with more details on this.
A "process" is, in a lot of ways, the same thing as a "program". It can have one or many threads, but the crucial thing is that they all share the same memory space, so, for instance, a pointer from one thread can be used by another thread in the same process without any issue beyond the normal multithreading hazards. By contrast, if you tried to dereference a pointer from another process, you'd very likely crash with a segfault, or at the very least, access the wrong data.
A process is the topmost level in the OS environment. Each process is given its own address space, and processes cannot access memory allocated to other processes unless both explicitly request the OS to allocate a shared memory buffer. This is why, on modern operating systems, a program can (usually) crash without taking down the entire OS and forcing you to push the reset button. Also, when you bring up the Task Manager in Windows, it gives you a list of all processes running on the system.
Note that some programs will spin up multiple processes in addition to multiple threads in each process. One good example is a client-server model, where you want the option of multiple users interacting in some way, but for single-user situations it can be convenient to run the exact same server program on the same computer as the client, rather than having an entirely separate single-user mode. It can also be useful when you're dealing with untrusted code, such as anything on the internet. If your browser, for instance, runs each tab in a separate process, then you've created an additional barrier between a malicious scripting exploit on one website in one tab and your bank's login screen that you left open in another tab.
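The separate-address-space point is easy to demonstrate. This Unix-only Python sketch (the names are illustrative) mutates the same dictionary from a thread and from a child process:

```python
# Threads share their process's memory; a child process gets its own
# address space, so its writes don't affect the parent.
import multiprocessing
import threading

shared = {"value": 0}

def thread_writer():
    shared["value"] = 42   # same address space: the parent sees this

def process_writer(d):
    d["value"] = 99        # runs in a child with its own copy of the dict

t = threading.Thread(target=thread_writer)
t.start()
t.join()

ctx = multiprocessing.get_context("fork")  # Unix-only: child gets a copy-on-write copy
p = ctx.Process(target=process_writer, args=(shared,))
p.start()
p.join()

print(shared["value"])  # 42: the thread's write is visible, the process's is not
```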
Something that will likely be pointed out in a subsequent video: Another common aspect of cooperative multitasking was memory allocation. Everything that's been described is based on pre-emptive multitasking, where the OS allocates memory on the fly through interrupts. However, earlier OSs could only allocate fixed blocks, often only at load time. Infamously, this crippled Mac OS up to System 8-9, especially compared to Windows and its Virtual Memory based systems.
You're up there with Ben Eater in how well you explain all these processes, damn. Keep it up! :D
What if you wanted to do a big operation
but OpenOS was like "Too long without Yielding"
Context: Minecraft's OpenComputers has a default OS you can craft called OpenOS, and for the longest time a program could flat-out freeze your OS if it got stuck in an infinite loop, forcing you to power-cycle your computer. To fix this, an update added a timer to all programs: if a program goes too long without yielding, it gets crashed. The only issue is that they made this limit really oppressive and didn't really provide an in-code way to deal with it (there was no way to halt or yield in code; your entire process had to be done within that limit). It was so bad that even the OS itself would run into this limit, causing your entire computer to crash every time you tried to do anything even remotely complex, which resulted in an even greater need to power-cycle.
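A watchdog like that can be sketched in Python (the step budget here is hypothetical, and OpenOS is actually Lua-based, so this only mirrors the idea):

```python
# Sketch of a "too long without yielding" watchdog: the scheduler counts
# work steps between yields and kills any task that exceeds the budget,
# so cooperative tasks must hand control back in time.
BUDGET = 1000  # max work steps between yields (made-up limit)

def run_cooperative(task):
    steps_since_yield = 0
    for step in task:
        if step == "yield":
            steps_since_yield = 0
        else:
            steps_since_yield += 1
            if steps_since_yield > BUDGET:
                return "killed: too long without yielding"
    return "finished"

def well_behaved(total):
    for i in range(total):
        yield "work"
        if i % 500 == 0:
            yield "yield"   # hand control back before the budget runs out

def greedy(total):
    for _ in range(total):
        yield "work"        # never yields control

print(run_cooperative(well_behaved(10_000)))  # finished
print(run_cooperative(greedy(10_000)))        # killed: too long without yielding
```

The "oppressive limit" complaint corresponds to BUDGET being too small for the chunks of work programs actually needed to do between yields.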
Fabulous video and underrated channel! I'm already hooked for the CPU scheduling video. Keep it up!
I was expecting this to be hard to follow and to miss important concepts. Well done, this is an excellent overview with interesting (to me) history about concurrency and parallelism. For those new to the concepts, this video is good enough for a re-watch to make sure these concepts sink in. Thanks!
I feel privileged to be able to watch this level of quality videos and such sublime explanation for free. Thank you so much!
Glad you enjoy it!
Your explanations are really great! Accurate, to the point, great comparisons... Well done, once again!
new core dumped vid, lets goooo
Many years ago, and luckily for me, just after they switched from using punch cards, I was on a college mini computer. Most of the time, it was very quick, but when there was a midterm or final project due for the advanced courses, the system ground almost to a halt. We had SOROC terminals on a Pr1me Mini.
This is actually a really good technical explanation of the topic, but I would add that specifically what is discussed in the video is the kernel, a component of an operating system.
Even the rest of the operating system needs to go through the kernel to access I/O. An operating system itself contains the full usable environment, which in the case of Unix, for example, includes the shell and core utilities among other things. They too are user-level programs, but still a part of the operating system.
This is one of the best animated videos I have seen on how a CPU works. I was never much of a fan of low-level computing and how things work under the hood, but this video opened my mind to a whole new world. Keep up the work! :)
15:10 AmigaOS had preemptive scheduling which is why it didn't have the tendency that one program would lock up the entire machine - unlike the contemporary Windows variants.
These AI voices are so terrifying I can't get through the video, they give me the creeps
another great video! ty
Wow, what an amazing introduction to operating systems..... Waiting for the next videos...
❤ From Bangladesh 🇧🇩
I absolutely LOVE your channel, man. Thanks for doing this. I crave the low level knowledge!
Thank You Man... By Far the Best Explanation with all the Pre-Historic Knowledge... I just loved it... Subscribing instantly☺️
The small pauses between some bits of information was really helpful. 👍
Do the "async" and "await" keywords in programming languages use concurrency?
And it would be so great if you made a video about this topic...
Thanks for your amazing content
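For what it's worth, yes: async/await is language-level concurrency, with cooperative scheduling inside a single thread rather than OS threads. A minimal Python asyncio sketch (the worker names and delays are made up for illustration):

```python
import asyncio

async def worker(name: str, delay: float) -> str:
    # While this task awaits (simulated I/O), the event loop is
    # free to run the other task: concurrency on one thread.
    await asyncio.sleep(delay)
    return name

async def main() -> list[str]:
    # The two sleeps overlap, so this takes about 0.2s, not 0.3s.
    return await asyncio.gather(worker("a", 0.1), worker("b", 0.2))

print(asyncio.run(main()))  # → ['a', 'b']
```

The event loop here plays the role the OS scheduler plays for threads, except tasks yield voluntarily at each `await` instead of being preempted by a timer interrupt.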
Fantastic lecture! I loved it! Are there any books on the history of this stuff you (or anyone else) can recommend?
the best channel i've seen in this topic, greetings from Brazil 🇧🇷 🇧🇷
After watching this video I get anxious even swiping windows with my touchpad, thinking how much I'm triggering those memory pointers and how complex the process is
Something I didn't understand: if it's the CPU itself that executes the interrupt-handling instructions (such as reads/writes), how can it dequeue the next process in the scheduler while it is busy doing those tasks?
PS: the quality of the videos is superior, you are really doing a great job. By far, my favorite yt channel.
One after another
How does the OS regain control is something I've always kind of wondered but never knew how to even find the answer. This was such an enlightening video!
I'm currently learning Python, but I want to learn low-level programming, and your videos help me understand things that are usually hidden away
is the narration done by AI?
Yes
If you watched the whole thing you would know; in case you don't want to: yes, it says so at the end
Great video, one of the best on the subject. Simple, but with a lot of animations and history. Please keep making this content!!
i was going to search for a video explaining concurrency and youtube recommended this. 10/10 video, loved it
Incredibly well explained, keep up with these amazing videos !
There are two programs:
a = 1
print(a)
a = 2
print(a)
They would be written in assembly like this:
MOV a, 1
OUT a
MOV a, 2
OUT a
What about this situation? If we interleave them one instruction at a time, based on the video, it will be like this:
MOV a, 1
MOV a, 2
OUT a
OUT a
And the CPU will print 2 and 2, which is wrong.
Maybe the CPU takes snapshots, like you said in the video, before switching at every instruction?
Otherwise, one program will overwrite the other program's registers.
The snapshot includes registers too. What it doesn’t include though is main memory (ram). This is the reason why programs with multiple threads can share memory, but also why programming multiple threads is difficult and has many pitfalls. In reality there’s also something called virtual memory, and it makes it so that threads (or even whole processes) can voluntarily share memory, but a rogue process cannot read/write the memory of another process
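That per-task snapshot can be mimicked in Python (a rough analogy, not real registers): each thread's local variables play the role of saved registers, so one "program" never clobbers the other's `a`, while the shared list stands in for main memory.

```python
import threading

shared = []  # stands in for main memory, visible to both threads

def program(value: int) -> None:
    a = value         # local variable: per-thread state, like a
    shared.append(a)  # saved register; each thread keeps its own

t1 = threading.Thread(target=program, args=(1,))
t2 = threading.Thread(target=program, args=(2,))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(shared))  # → [1, 2]  (never [2, 2])
```

However the threads interleave, each one prints-the-value-it-set, because locals (like registers) are part of each thread's saved context; only `shared` (like RAM) is common to both.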
When I first heard the AI voice I thought this channel was just another dumb TTS ChatGPT-pasta. But this is gold.
GPTpasta is now an official term
This is amazing. You have sparked my interest in CS.
It's fascinating and mind-blowing how operating systems work. Could you explain how a piece of software (the operating system, a logical entity) controls and talks to a piece of hardware (the CPU, a physical entity)?
It runs on it? Ben Eater might offer you a more hands-on explanation. Real breadboards instead of blackboards.
your videos are extremely informative, can't wait for future instalments of this series
this is way better explained than my courses at college
congrats and thank you
This channel is a treasure! Keep up the good work!
Thanks, will do!
Love your videos. They are always high quality and easy to understand.
Please talk about virtualization next.
+ thanks for the interesting video!
congrats on getting sponsored, great quality vids as always
Another masterpiece dropped.... Thanks man as always, your vids are amazing.
I really enjoyed the history segment of this video. Understanding history is incredibly insightful and important for understanding why conventions are the way they are. Have you considered making a long in-depth video discussing the history of computers, programming languages, etc? It would be a wonderful watch.
It is amazing the approach that you used in the video. Thanks :)
I'm excited for a video about threads, please make a video about it, I think I need a video like this, especially how the methods/functions related to it work. Thank you very much anyway
The Amiga true multitasking OS was indeed exceptional for its time.
Your explanation was brilliant. You gained a new subscriber
dont stop, keep making, make more
Wow, just found your channel, insanely entertaining and nice visualizations. Keep up the good work ❤
I would love to see a video talking about RISC-V architecture from you
I'm so glad that I found this channel
Great content as usual. I'm learning a lot, even better than in university. Kind of sad for the current education system, but good news for those who really want to learn. Keep it going!
Amazing video, thanks a lot for the explanations! Very informative
These videos are so good. Keep up the great work!
wake up babe, new Core Dumped video dropped
was looking for such a video kudos
Another day, another banger from core dumped
Your videos are super entertaining to watch! I look forward to your next banger!
Glad you like them!
I never leave comments, but when I see a new video from you I become very happy❤❤
Timed PERFECTLY with my pintos course…
This channel is massively underrated
So it's like AC power or interlaced video, but for code? Neat
Core dumped doing great job again ❤
Thanks for the knowledge!
Very informative, thanks💯
Maaaaan, what an absurd channel! A real hidden gem! Amazing teaching and design skills. You explained 6 months of an OS college course in just a few minutes ❤❤❤❤
Dude you're amazing for teaching all this. I don't like AI if it's just about shitposting a lot to make money. But this is top content and it's not about fake topics or anything. Just pure gold.
Thank you for this content.
man, this video is not lengthy at all, thanks for the great job!
Excellent overview of a very complex topic. The constant in computer technology is change.