What other threading topics or examples would you like to see? Let me know!
Thread Safe.
How does GIL work
Simultaneous process
exactly
Hyperthreading would be interesting.
I guess the "power" of the core (the operations/threads it can handle) is split in half, so instead of 1 thread per core, 2 threads per core can run simultaneously as if there were 2 actual cores. Is the limitation just that 2 GHz would be cut down to 2x 1 GHz?
Pretty impressive parallel processing with him drawing and talking at the same time
That's concurrency, sadly: he's doing two tasks at the same time but not splitting up individual tasks to speed them up, so each task still takes the same amount of time; he just does both at once (concurrency). Whereas if he had an extra arm or mouth and could type or speak multiple words at once, that would be parallelism.
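To make the distinction concrete, here's a minimal Python sketch (my own illustration, not from the video) of concurrency: the two tasks overlap their waits, so total time shrinks, but neither individual task gets any faster.

```python
import threading
import time

def task(name):
    # Simulates a blocking wait (I/O, sleep); while this thread waits,
    # the interpreter is free to run the other thread.
    time.sleep(1)
    print(f"{name} done")

start = time.time()

t1 = threading.Thread(target=task, args=("typing",))
t2 = threading.Thread(target=task, args=("speaking",))
t1.start()
t2.start()
t1.join()
t2.join()

elapsed = time.time() - start
# Both one-second tasks overlap, so this is ~1s, not ~2s.
print(f"elapsed: {elapsed:.1f}s")
```

Each `task` still takes a full second; concurrency only lets the waits overlap.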
There is an important nuance between "process" and "thread". Modern operating systems (very old operating systems didn't do this) support a concept of memory protection. That is to say, a process is allocated some of the main memory (RAM) for its "code" (instructions; some operating systems refer to these as "text" pages rather than "code" pages, but they are the same idea) and its "data" (things like variables, not code). If a second process attempts to access memory that was not allocated to it, that process gets a segmentation violation (an attempt to access data outside its assigned memory segment).
This prevents two different "processes" from overwriting each other's data.
But things are different in threads. Here, two threads (forked from the same process) "share" the same memory.
The unit of memory the operating system manages is a "page" (this is architecture dependent; 4 KB is a common size, but it could be 8 KB, etc.).
Suppose I have a variable, an integer, whose value is 5. This only requires one byte of memory (although on a modern OS the integer might occupy several bytes; even a 64-bit integer occupies only 8 bytes). Assume it is 8 bytes within a page size of 4 KB (4096 bytes). A LOT of other variables occupy the SAME page in RAM.
This creates a problem. Suppose two completely different variables happen to reside within the same page. Two different threads could each "read" the value of their own variable, but they really read the entire page and then disregard everything *except* the few bytes they care about. Now suppose they both modify their respective (but distinct) variables. Because a "write" back to main memory also writes the ENTIRE page, whichever thread writes last wipes out the other thread's change. This is a problem.
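The lost-update problem the comment describes can be reproduced in a few lines of Python. This is a sketch of my own, using an unsynchronized shared counter rather than raw pages: `counter += 1` is not atomic, so concurrent increments can overwrite each other.

```python
import threading

counter = 0

def unsafe_increment():
    global counter
    for _ in range(500_000):
        # counter += 1 is really three steps: read, add, write back.
        # A thread switch between the read and the write loses an update.
        counter += 1

t1 = threading.Thread(target=unsafe_increment)
t2 = threading.Thread(target=unsafe_increment)
t1.start()
t2.start()
t1.join()
t2.join()

# Frequently prints less than 1000000: the two threads overwrote each
# other's in-flight updates, just like the page example above.
print(counter)
```

The exact final value varies from run to run, which is what makes data races so hard to debug.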
To solve this, modern operating systems have a concept of a "mutual exclusion lock" on a memory page (the same way a multi-user database might do row or column locking). Unix operating systems call this a "mutex" (mtx). If two threads try to access the same page for modification, then when the FIRST thread reads the page it ALSO applies the mutex lock (typically a reserved bit on the page: if that bit is set to 1, the page is "locked"). If a second thread attempts to access the page, it must wait until the mutex bit is cleared (a "spin on mutex", or smtx, condition).
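A minimal Python sketch of the same idea: `threading.Lock` plays the role of the mutex described above, and with it no update is ever lost. (This is my own illustration, not from the comment.)

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(100_000):
        # Acquiring the lock makes the read-modify-write indivisible:
        # a second thread must wait until the lock is released.
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — no increments are lost
```

The trade-off is that threads now serialize around the locked section, so over-locking can erase the benefit of threading.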
In multi-processing (two different processes) this isn't a problem, because the processes don't share memory segments. But in multi-threading the threads DO share the same memory segments, so it is possible for different threads to "step on each other's toes", and the OS is designed to protect against this.
BTW, the whole point of multi-threading is efficiency. Processors are VERY fast if the data they have to manipulate is already in the processor registers. But if the data resides elsewhere (if it has to be fetched from RAM, or from storage), the number of clock cycles until that fetch completes is a veritable ETERNITY for the CPU, so it may as well be put to good use doing something else. Multi-threading made programs vastly more efficient because they could do *something* while long-running steps (such as memory I/O or storage I/O) completed.
A single processor core that is capable of multi-threading (aka hyperthreading) cannot truly execute BOTH threads at the same time if they are scheduled on that same core; they can execute at the same time only if scheduled to run on different cores.
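To illustrate that last point in Python terms: true simultaneous execution is usually demonstrated with `multiprocessing`, since separate processes can be scheduled on separate cores. A sketch under that assumption (`cpu_work` is a made-up stand-in for real CPU-bound work):

```python
import multiprocessing
import os

def cpu_work(n):
    # Pure CPU-bound work; separate processes can run this on separate cores.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(f"cores available: {os.cpu_count()}")
    # Two worker processes: the OS is free to schedule each one on its
    # own core, giving genuinely parallel execution.
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.map(cpu_work, [200_000, 200_000])
    print(results[0] == results[1])  # True
```

Whether the two workers actually land on different cores is up to the OS scheduler, but with processes (unlike threads within one core) it at least becomes possible.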
Dude you can write a blog with just this comment. Please do it and share the link, I recently started with the whole architecture think. Would appreciate if I get to learn from you too 🙏❤️
Excellent explanation thank you
Great comment!
For an I/O-intensive task you don't necessarily need multithreading, since you can use asynchronous methods. For a CPU-intensive task in Python, multiprocessing is what actually helps, because the GIL keeps threads from running Python bytecode in parallel.
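A minimal `asyncio` sketch of the I/O-bound case (the `fake_request` coroutine is a placeholder for a real network call, not anything from the video):

```python
import asyncio
import time

async def fake_request(name, delay):
    # Stands in for a network call; await yields control to the event loop.
    await asyncio.sleep(delay)
    return f"{name} ok"

async def main():
    # All three one-second waits overlap inside a single thread.
    return await asyncio.gather(
        fake_request("a", 1), fake_request("b", 1), fake_request("c", 1)
    )

start = time.time()
results = asyncio.run(main())
elapsed = time.time() - start
print(results)                  # ['a ok', 'b ok', 'c ok']
print(f"{elapsed:.1f}s total")  # ~1s, not 3s
```

No threads at all here: one event loop interleaves the waits, which is why async is often preferred for purely I/O-bound work.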
Thanks. There's a lot to wrap my head around in there. I'd like to add that you can access memory in another process manually, using readProcessMemory & writeProcessMemory.
Tim knows what he's talking about; this is not just another YouTube tutorial. Just a mind-blowing explanation.
Tim I just wanted to thank you for taking the time to make this content. I’ve struggled to understand threading for the longest until I came across this video. You rock 🤟🏼
🎯 Key Takeaways for quick navigation:
00:01 *🧵 Introduction to Threading and Processor Cores*
- Understanding the basics of threading and processor cores.
- Processor cores determine the maximum parallel operations possible.
- Clock speed of cores and its significance in operation execution.
03:32 *⚙️ Threads and CPU Execution*
- Explanation of threads as individual sets of operations.
- Threads are assigned to processor cores for execution.
- Threading allows scheduling of different operations on the same CPU core.
06:03 *🔄 Concurrent Programming with Threads*
- Concurrent programming involves executing threads in different timing sequences.
- Threads enable efficient CPU core utilization by switching between operations.
- Threads are beneficial for handling tasks asynchronously, preventing program hang-ups.
07:13 *🔄 Single Core vs. Multi-threaded Execution*
- Comparison of single-threaded and multi-threaded execution on a single core.
- Multi-threading allows overlapping of operations, reducing overall execution time.
- Use cases for multi-threading include web applications and gaming for uninterrupted user experience.
Made with HARPA AI
Believe me guys! This is the single most simple and insightful explanation of threads.
I am literally learning this at my college right now, great explanation Tim.
it's pretty hard to find someone who explains coding well to beginners,thank you for helping us newbies start out!!
Coding day 1: “Hello World”
Coding day 2: *creates parallel universe*
#Parallel Universe with Infinity thoughts...... Hell yeah man..
69 like very cool
wow, i watched this video for 3 minutes and couldn't help but like the video and subscribe right away, that's how good of a teacher you are.
The balance of precision and simplicity is just laser-sharp. What a talented instructor!
This is the best high level explanation. I was trying to figure out why multithreading isn't the same as parallelism
Great : Explaining how it happens from "panoramic view" instead of coding as a first step to programming. With a mind mapping, much better !!!
2.6 GHz = 2,600,000,000 cycles per second, thank u so much tim
You're seriously underrated man
A bunch of my friends are also using your useful content. You know what, thank you so much!
That explanation was beautiful. The whiteboard really helped too for actually seeing what is going on
That's a very easy to understand and helpful explanation. I found your video after looking for articles to better understand this topic, then I decided to look on YouTube and found you. I've already subscribed.
Thanks for subscribing!
Your drawing skills are amazing
What a rockstar. Thank you so much for such an easy-to-understand explanation of this
I was very lucky to come across this video. Great explanation and illustrations
Bro you are amazing. For a while now I struggled to understand this concept but you realllllly broke it down and made it easy to grasp!
So this 11-minute video explains the gist of threading, which takes my university 2 hours and still didn't make me understand. Thank you Tim!
I had been battling to grasp this thread-and-process concept for too long until I watched this video. The information is very informative and straightforward; I hope you will share more videos like this one in the future.
Best explanation, as threads are supposed to be explained with figures before the code. Kudos to you bro!
Every beginner in Python should subscribe to this channel..
Enhanced my understanding of many concepts and added more great stuff
You explain this topic in a very easy way bro
Best Explanation so far
Thank you m8! Very easy explanation. I struggled to understand, and you made it easy!
The explanation is so good that I feel compelled to join the channel membership. Thanks for the helpful material
Thanks a lot for this, because I needed it for my online pygame game which also I learnt from your playlist. I rely more on your videos than the official documentation, lol. Thanks a ton again.
Thank you! Simple, concise explanation. Love your channel.
Thanks for watching!
Clear and straight to the point. Great explanation!
A very clear explanation, thank you so much
Really clear and concise explanation - thank you so much Tim!
Wow, your explanation was incredibly clear, thank you!
very useful video to get into the topic, thank you very much sir
Mr. Tim , Great Explainer
This is such a good explanation, really helped me understand the concept
Amazing explanation
I was really looking for this
Thank you
thank u so much this really what i was searching for
You explained this better than my professor. Thanks!
Impressive! Outstanding explanation.
YOU CRUSHED THIS! Thank you!
Wow this is an amazing tutorial and so interesting. Thank you!
Great explanation. You earned another subscriber!
great example. thank you!!
perfect explanation
Well explained video .. thanks bro
Awesome explanation, thank you so much
Impressive explanation.. really liked it..
you are gold I was looking for this
Great teacher
Hello, Tim!
I guess, you explained threads better than most of resources I've read before! And now I have some kind of understanding, thank you very much!:)
YOU ARE AMAZING!!!!!!! and I love you!!! thank u for your videos!!!!!!!!!!!
Can we say this? => If a program loads into memory, it becomes active (a process). The process is assigned to a physical core, and the program is written in such a way (different logical parts) that its different logical parts (threads) can run independently. If one part (thread) is waiting, another part (thread) can run to increase efficiency. This is called concurrency (max throughput).
This tutorial is really clear. Thanks
excellent very good explanation
Nice explanation. Thanks a bunch.
Dude, you are awesome !!!!!
Great explanation, super clear. Thanks Tim!
Great explanation, thank you so much
First view
Notifications ftw
Such a good explanation ❤️
Great explanation!
Excellent description! i was curious that how do you decide whether your code/function needs multi threading(concurrent) or mult processing(parallel execution)?
Great explanation Tim
Great as usual. I don't make this kind of content, but this can be so useful. Thanks!
Awesome! I am very interested in concurrency and parallelism in python
YES FINALLY THANK YOU SOOOO MUCH!!
Great content! Thanks a lot Tim :)
veryyyyyy good tim
That was very helpful, thanks!
Really good explanation! keep it up!
2:50 That conversion sounds about right: with the amount of overhead from the CPU/OS and microcode, the billions of cycles per second (not necessarily instructions, since some instructions take a considerable number of cycles, depending on scaling and IPC) can drop to perhaps the high tens of millions of useful instructions per second.
Good content.
thank you very much for these threads videos
great explanation! thx
Excellent
Great video, thank you!
nice explanation man
Excellent explanation, but I have some doubts. If I had 20 functions to run at the same time, should I use multithreading or multiprocessing? What are the pros and cons of each?
Yes! Was waiting on a great threading tutorial. Thanks tim!!! Can you possibly get into multi threading with socket programming later on perhaps?
Freaking A! Thanks so much for breaking this down for us. I really do have a better understanding of threading in Python now
yea, I wanted to learn this topic!!
If I recall correctly, 1 GHz means that core runs 1 billion clock cycles per second.
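That figure is cycles rather than instructions, strictly speaking (an instruction can take one or many cycles), but the arithmetic is easy to sanity-check:

```python
clock_hz = 1_000_000_000        # 1 GHz = one billion clock cycles per second
seconds_per_cycle = 1 / clock_hz

print(seconds_per_cycle)                  # 1e-09 — one nanosecond per cycle
print(f"{clock_hz:,} cycles every second")
```

So a 1 GHz core completes a cycle roughly every nanosecond, which is why a RAM fetch taking hundreds of nanoseconds feels like an eternity to it.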
Hi Tim, thanks for the videos! Something I would really love to see is integrating threading with PyQt5 gui applications.
Well explained, Tim!
good job
Well explained, great!
Interesting... so, does it help to optimize Python?
not to optimise python but to optimise your python script
@@johnbobbypringle
That's what I meant by optimization.
Depends on the scenario. For me, I had something that worked with many files in a directory and did a time-consuming operation on each. With threading I could start the operation on multiple files at the same time instead of waiting for each one to finish before starting a new one.
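That pattern maps naturally onto `concurrent.futures.ThreadPoolExecutor`. A sketch under the assumption that the per-file work is I/O-bound (`process_file` and the file names here are placeholders, not the commenter's actual code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_file(name):
    # Stand-in for a slow, I/O-bound per-file operation.
    time.sleep(0.5)
    return f"{name}: processed"

# Hypothetical file names; a real script would list a directory instead.
files = [f"file_{i}.txt" for i in range(8)]

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    # All eight half-second waits overlap instead of running back to back.
    results = list(pool.map(process_file, files))
elapsed = time.time() - start

print(len(results), f"files in {elapsed:.1f}s")  # ~0.5s, not ~4s
```

`pool.map` keeps the results in input order, and the pool handles starting and joining the threads for you.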
Thanks man !
you're saving my ass in college... thank you!
Hi, the tutorial and the series are really helpful and explained intuitively. Can you please tell me what books I should follow for getting better at concurrent programming in C++ and python?
I guess the part that confuses me is: isn't sleep(10) technically a task where the computer sleeps for a set amount of time?
If so, you mention that the computer moves on to the other thread to print the number 2 while waiting for sleep(10) to finish… but doesn't that mean it is printing the number 2 while the CPU is simultaneously counting to 10?
This is where I get confused, as it seems like it's carrying out tasks from different threads at the same time.
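The key point behind this question is that sleep() doesn't spend CPU cycles counting: it blocks on an OS timer, so the core really is free to run the other thread in the meantime. A small sketch of my own that records the ordering:

```python
import threading
import time

events = []

def slow():
    events.append("thread 1: start sleep(1)")
    # sleep() asks the OS for a wake-up timer and blocks; the CPU does
    # no "counting" and is free for the other thread.
    time.sleep(1)
    events.append("thread 1: woke up")

def fast():
    time.sleep(0.1)  # give thread 1 time to enter its sleep first
    events.append("thread 2: printed 2 during the sleep")

t1 = threading.Thread(target=slow)
t2 = threading.Thread(target=fast)
t1.start()
t2.start()
t1.join()
t2.join()

print(events)
```

Thread 2's line lands between thread 1's "start sleep" and "woke up", showing the switch happens while the timer (not the CPU) does the waiting.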
So if I run a program using threads, all the threads of the same program are going to get distributed to different cores? Or are they going to stay in the same core?
thanks
great video