Since semaphores are almost always discussed along with mutexes, another good difference to mention is that semaphores are not necessarily owned by a thread. Let me give an example.
Consider a semaphore_a which has been initialized to have a value of 0
1) thread_a could wait on it (i.e. it will only decrement the semaphore count)
2) thread_b could post on it (i.e. it will only increment the semaphore count)
This is a valid design using semaphores (but not with a mutex) since the same thread does not need to both wait on and post a semaphore. Here thread_a will only execute semaphore wait & never execute semaphore post. Similarly, thread_b will only execute semaphore post and never semaphore wait. The example you gave could be mistaken to suggest that a thread which executes semaphore wait (& decrements the semaphore count) also needs to post the semaphore (& increment the semaphore count). However, this is not true with a mutex: if a thread locks a mutex, the same thread needs to unlock the mutex.
A good example of this could be a thread which waits for a particular event to be available and an ISR which gets called every time the event occurs. Let me explain a bit:
- You have an event, let us say a GPIO interrupt: every time the GPIO goes to 1, an interrupt is signaled. In the application, you want a certain function to be executed when the GPIO goes to 1. A simple design for this would be:
- Create an ISR to handle the GPIO interrupt & a thread which executes the GPIO function.
- During initialization, we create a semaphore such that the ISR does a post when an interrupt occurs and the thread does a wait.
- When there are no interrupts, the ISR never executes semaphore post and hence the thread keeps waiting on the semaphore.
- When an interrupt does occur, the ISR will post the semaphore (increment to 1), then when the thread gets to execute, the wait would return and the GPIO function will be executed.
- If multiple interrupts occur, then the ISR executes semaphore post each time & accordingly increments the semaphore count. And the thread will execute the GPIO function repeatedly until the semaphore count reaches zero.
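A minimal sketch of this pattern with POSIX semaphores, where a second thread stands in for the ISR (the "GPIO event", the timing and all names here are just illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t event_sem;                 /* counts pending "GPIO events" */

/* Stand-in for the ISR: it only ever posts, never waits */
void* fake_isr(void* arg) {
    for (int i = 0; i < 3; i++) {
        sleep(1);                /* pretend an interrupt fires each second */
        sem_post(&event_sem);    /* signal: one more event is pending */
    }
    return NULL;
}

/* The worker thread: it only ever waits, never posts */
void* gpio_worker(void* arg) {
    for (int i = 0; i < 3; i++) {
        sem_wait(&event_sem);    /* block until the "ISR" signals an event */
        printf("Handling GPIO event %d\n", i + 1);
    }
    return NULL;
}

int main(void) {
    pthread_t isr, worker;
    sem_init(&event_sem, 0, 0);  /* start at 0: no events pending yet */
    pthread_create(&worker, NULL, gpio_worker, NULL);
    pthread_create(&isr, NULL, fake_isr, NULL);
    pthread_join(isr, NULL);
    pthread_join(worker, NULL);
    sem_destroy(&event_sem);
    return 0;
}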
Very nice explanation! It's a very important distinction to make. I will pin this for future viewers.
I do tackle this topic in the video comparing binary semaphores with mutexes here: code-vault.net/lesson/bu9ehmp0mx:1609433599490
this functionality looks just like completions.h. Why are there completions?
Mutexes are for sharing a single resource exclusively between threads; semaphores are more like a queue in which one thread consumes something the other thread produces. Of course you could implement the same behavior with mutexes and vice versa, but I think this is the more convenient way to look at it.
2 years ago, I went through all your videos for an interview. Cracked it. Now again for an upcoming one. Whether I pass or not, three things are constant. 1. You are the absolute BEST. 2. Your channel is waaay underrated. 3. I love you, totally. THANK YOU
Good luck at the interview!
Counting my blessings this year. You are one of them. Didn't use to like OS much, after watching your videos, got me hooked on OS. Stay blessed.
Honestly mate you are phenomenal. I appreciate your videos and time, you are actually helping me so much I can't thank you enough. Keep it up buddy!
"Sangam Thapa"
Dude you are the best! So appreciated, greetings from Turkey
Which university are you at, brother?
@@mahmutkoroglu7072 I'm joining from İTÜ.
@@goosebosluk9573 And I'm from TED. You must have crushed me in the YKS
Thank you for this. I had a difficult time understanding how to implement this after our instructor introduced it to us. You explained it wonderfully. The visual at 5:55 also helped immensely.
One of the best explanations on semaphores. I had a bit of trouble following what all the code did, but the demonstration starting at 5:55 really made it click!
this guy really deserves more views. Thanks a lot
Today I understood the semaphore concept.
These graphical representations will help me a lot in interviews.
Thank you...
Literally saved my life with this. So straightforward and so helpful!!!
I can not thank you more. Very clear explanation with code examples. Thank you. Please continue what you are doing.
You are helping me in my OS Lab basics, a lot.
Dude your explanation is insaaaaneee!! When I first came across semaphores, I couldn't get it, but now I get it. Again, thanks for the explanation 😊
Thank you so much ! You really helped me. Watched a lot of your vids now. And they are awesome ! Greetings from Germany
Very well explained. You really know how to break such complex things in small chunks so everyone can understand. Thank you mate!
You are literally a life saver! Wonderful video!
Again, you are awesome! Thanks from Moscow! Great job!
Fantastic explanation of semaphores! I watched two other videos and read a few articles on semaphores, but did not fully understand them until watching this video! Thank you!
Thank you so much, mate! Just saved me from my O.S. class assignment.
You are a good teacher. Thanks, mate!
Excellent explanation. And a great mix of Bottom-Up-Bottom approach.
Great video and well explained, mate! Thanks for sharing your knowledge with us!
top explanation of semaphores
you are better than my professor, 😂😂😂 Thank you life saver
thank you, my friend... I love you, from Brazil.
Hello, I have a question here at 5:27: by wait do you mean getting blocked? Or would it still waste CPU cycles while waiting?
It won't waste CPU cycles since it's actually waiting for a signal and signal handlers are handled fairly efficiently in modern OSes. (Though, some implementations could waste cycles I guess)
Thanks a lot man for all your videos! You are a mine of knowledge! Subscribed !
Really appreciate the work you are doing. Very helpful videos.
love your videos bro thanks a lot keep it up. blessing for you.
This helped a lot, thanks and greetings from Germany! 👍
9:54 Won't there be a race condition in this instance since both are going to try and increase the same value at the same time? What if a thread is put to sleep by the OS before hitting post() while another hits post() without interference?
That's a good question. An important aspect to note here is that sem_post and sem_wait are both atomic functions. They are basically incrementing/decrementing the values in one single CPU instruction meaning that no race conditions can occur
Race conditions occur because, usually, you have one instruction reading the data and another writing to it and, if the thread pauses in between, that data might have changed. But if you do both the read and write in the same instruction, there's no time for a race condition to even occur
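To make that contrast concrete, here is a small sketch (using C11 atomics purely for illustration; this is not the video's code nor what sem_post does internally): a plain read-then-write increment can lose updates, while a single atomic read-modify-write cannot.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

int plain_count = 0;            /* incremented with a separate read and write */
atomic_int atomic_count = 0;    /* incremented with one atomic read-modify-write */

void* worker(void* arg) {
    for (int i = 0; i < 100000; i++) {
        plain_count = plain_count + 1;        /* read, then write: a thread can pause in between */
        atomic_fetch_add(&atomic_count, 1);   /* single atomic operation: no such window */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* plain_count is often less than 200000; atomic_count is always exactly 200000 */
    printf("plain: %d, atomic: %d\n", plain_count, atomic_load(&atomic_count));
    return 0;
}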
I have a question: At 3:18 on line 21, you are allocating memory for the local variable a. Why? I thought dynamic memory allocation is only used for global variables or when size is not known at compile time.
Anyway excellent video, it helps me a lot with my systems programming course.
Dynamic memory is memory that you have control over when it is allocated and deallocated.
I dynamically allocate the a there so that for each iteration of the for loop, each thread gets a separate place in memory for that a. Otherwise, all of them would use the same memory address and have the same value for all the threads.
@@CodeVault Ah yes, of course the threads share the same address space so that makes sense. Thank you!
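Roughly what that pattern looks like (a simplified sketch, not the exact code from the video; the names are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void* routine(void* args) {
    int index = *(int*) args;          /* each thread reads its own copy of the index */
    printf("Thread %d\n", index);
    free(args);                        /* the thread owns this allocation now */
    return NULL;
}

int main(void) {
    pthread_t th[4];
    for (int i = 0; i < 4; i++) {
        int* a = malloc(sizeof(int));  /* separate memory per iteration */
        *a = i;
        pthread_create(&th[i], NULL, routine, a);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(th[i], NULL);
    return 0;
}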
Thank you SO much! this is extremely helpful and you are so good at explaining this very complicated subject.
Thank you
You are amazing DUDE ! LOVE YOU
Nice explanation of what you COULD do with semaphores. BUT which use cases are examples of situations where you WANT to have more than 1 thread into the critical code section between wait and post? If we use semaphores, the idea is to guarantee exclusive access to the critical section, so why would we want to bypass that idea?
For example, when you're using a thread pool you might need multiple threads executing parts of a critical section simultaneously. The main point of learning about semaphores is that they are the basis of the more complex synchronization entities that exist in the pthread API (barriers, mutexes, conditions etc.)
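For instance, a counting semaphore works as a throttle: initialize it to N and at most N threads can be inside the section at a time. A quick sketch of that idea (illustrative, not from the video):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define THREADS 8
#define MAX_CONCURRENT 3

sem_t slots;   /* counting semaphore: at most 3 threads in the section at once */

void* worker(void* arg) {
    sem_wait(&slots);                 /* take one of the 3 slots */
    printf("Thread %ld working\n", (long) arg);
    sleep(1);                         /* simulate work on a limited resource */
    sem_post(&slots);                 /* give the slot back */
    return NULL;
}

int main(void) {
    pthread_t th[THREADS];
    sem_init(&slots, 0, MAX_CONCURRENT);
    for (long i = 0; i < THREADS; i++)
        pthread_create(&th[i], NULL, worker, (void*) i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(th[i], NULL);
    sem_destroy(&slots);
    return 0;
}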
Thanks man this is what i really needed for this week. What a coincidence!
It is better to use SysV semaphores, they are more versatile and you can wait for several operations in a SysV semaphore set at the same time. And if you want to start several threads at the same time using a semaphore, you can not only increment the semaphore by one, but directly by a "short" value. In principle, semaphores are needed relatively rarely; mutexes and condition variables, some of which are based on semaphores, are usually sufficient.
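For reference, a rough sketch of what that looks like with the SysV API, where one semop() call can combine several operations on a set and a "post" can add more than 1 (the key, values and permissions here are arbitrary):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* some systems require the caller to define this for semctl(SETVAL) */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void) {
    /* a private set containing two semaphores */
    int semid = semget(IPC_PRIVATE, 2, IPC_CREAT | 0600);

    union semun arg = { .val = 1 };
    semctl(semid, 0, SETVAL, arg);          /* sem 0 starts at 1 so the wait below succeeds */

    /* one semop() call applies all of its operations atomically:
       here, wait on semaphore 0 and add 4 to semaphore 1 in a single step */
    struct sembuf ops[2] = {
        { .sem_num = 0, .sem_op = -1, .sem_flg = 0 },
        { .sem_num = 1, .sem_op = +4, .sem_flg = 0 },
    };
    semop(semid, ops, 2);

    semctl(semid, 0, IPC_RMID);             /* remove the set when done */
    return 0;
}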
Thank you very much. I'm still not quite getting this but I feel this just got me 20% closer, thank you
I am a Python programmer but your course is really interesting and allows me to have a better understanding of the Python thread API and multiprocessing. Thanks!
Oh, does python have a similar API? I didn't know that
@@CodeVault Hi, yes, in Python you have two standard libraries: threading and multiprocessing. You can find the same concepts such as: semaphore, lock (mutex), barrier, condition, event, and some thread-safe data structures such as Queue, Value, Array etc.!
Sorry for my English: from Paris :)
@CodeVault . what is the editor you are using in this video . appreciated if you could share steps to install and run it on my windows machine
There's a video about it, although you'll need a separate machine that runs Linux. It could also probably be done with WSL... but I never got it running properly
Hi friend . I really love all your videos . Appreciate the great amount effort you have put in creating these videos
Do you plan to make videos on data structures and algorithms..? Or suggest a good one to start with
Thanks
They are on the TODO list
Did not understand how semaphores work from Tanenbaum's OS book chapter.
This vid was great in giving a hands-on example
Awesome explanation! Small doubt: semaphores are used to access critical sections, right?
If we are allowing multiple threads in the CS using semaphores, then how is data consistency achieved?
Thanks.
What do you mean? The API guarantees that sem_wait and sem_post are atomic operations and can't have race conditions
10/10 video and tutorial, thank you!
Sir, when we initialize the value of the semaphore as 2, two threads are running in parallel. So will there be any chance of getting a race condition while the threads are decrementing the semaphore value? I mean, both may read 2 and decrement from 2 only.
No. The decrementing and incrementing operations of semaphores are atomic (effectively they execute in 1 instruction so there's no room for a race condition)
nice visualizations, neat and clear
Sounds like I may have designed something similar to a semaphore that does not assume a limit; instead it leaves limiting active threads as a decision the developer can make. All they need is the char* pointer that's linked by the object I designed. I've still to get round to testing the object, but the functions I designed for it amount to just 2: SeekGrip( &shared, &thread ) and FreeGrip( &shared, &thread ), no extras. You just declare your global like this: GRIP *shared = NULL; and your thread like this: GRIP thread = {NULL}; You can then extract the taken ranges just by doing something like this:
seeking = {0};                             /* shorthand: zero-initialize the range being sought */
seeking.end = size;
for ( GRIP *bound = shared; bound; bound = bound->next )
{
    while ( bound->lock[0] == '\0' );      /* spin until the holder has written its range */
    sscanf( bound->lock, "%p", &range );
    if ( /* seeking falls in range conditions */ )
    {
        shift = range->start - range->end;
        seeking.start += shift;            /* seeking is a struct here, so `.` rather than `->` */
        seeking.end += shift;
    }
}
Thinking about it now I suppose it could also be used for inter-thread communication as a pseudo queue
fd = open("a.txt", O_RDONLY);
sem_acquire(one); ==> take_lock(one);
sem_acquire(two); ==> take_lock(two);
critical section;
sem_release(one); ==> rel_lock(one);
sem_release(two); ==> rel_lock(two);
dup(fd);
is above sequence correct for single process?
I'm not exactly sure what the code is trying to achieve
Loved your video but it would be nice if you give links to previous videos for us to refer to. Like here giving a link to the video where you wrote this thread code would have been very helpful.
Oh sorry. I forgot to mention that you can find the whole course about Threads in C on the website: code-vault.net/course/6q6s9eerd0:1609007479575/lesson/v9l3sqtpft:1609091934815
The code that I am presenting should be from this video: code-vault.net/course/6q6s9eerd0:1609007479575/lesson/18ec1942c2da46840693efe9b51f24b6
Can you please guide me on how to set up VS Code to use threads in builds? Also my sleep function doesn't seem to work; I tried your code but sleep still didn't work. I am using WSL 1
You'll have to add "-l pthread" to the gcc parameters. I'm not sure about setting it up on WSL, haven't done that successfully yet.
It would be nice to tackle the subject of semaphores in shared memory multi-process
Will look into it
Thank you so much, what a great teacher you are
Hi, Could you please tell me, how did you get the pthread extension in VScode?
I'm using a remote version of VSCode called code-server (link here: github.com/cdr/code-server) and running this on my Linux server (So it's like I'm using VSCode on Linux) And Linux has the pthread API built in. There's this library which you can include if you're on Windows: sourceware.org/pthreads-win32/
@@CodeVault Thanks for the reply, understood. With Linux, the POSIX libs are available by default. I tried integrating a standalone pthread library with VSCode on Windows, but it's not working. Anyhow, I will try as you suggested.
nice explanation. Thanks bro.
Excellent explanation, tks!
It gives me a warning about sem_init being deprecated, same for the destroy, so it won't work. Anyone know how to resolve this?
Interesting. On Mac they seem to be deprecated now. You could refer to this answer for the solution: stackoverflow.com/questions/27736618/why-are-sem-init-sem-getvalue-sem-destroy-deprecated-on-mac-os-x-and-w
thanks bro u are so much better than my professor
Nice visualisation!
Such a good explanation!
Can someone explain how it works when we set the sem to 0 initially instead of 1 or 2?
Basically the same way except some thread needs to sem_post first before any thread that uses sem_wait can continue execution
I feel like I understand the explanation but when I run the code, a second passes, and then all the print statements happen. Can anyone explain why this is?
Hmm... Maybe you forgot to add a \n at the end of the printf line? Or something is wrong with the code maybe
Great Video, This helped me a lot!!
Can we use it to create an infinite number of threads but only run a limited number at a time?
Infinite number of threads: no. Since the OS does limit the number of threads a system can have running at a time. But you could definitely use this with a really large number of threads
sir, can you also please make a playlist on network/socket programming?
Yes, it's on my todo list
Hello, much appreciation for all these well-made videos. Could you make a video on named semaphores? On macOS, the sem_init and sem_destroy APIs are deprecated, as unnamed semaphores are not supported by macOS.
I didn't know that about MacOS. I'll look into it
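For anyone on macOS running into this, the named-semaphore API is the usual workaround. A minimal sketch (the name "/demo_sem" is arbitrary):

#include <semaphore.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    /* create (or open) a named semaphore with an initial value of 1 */
    sem_t* sem = sem_open("/demo_sem", O_CREAT, 0600, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

    sem_wait(sem);              /* same wait/post semantics as unnamed semaphores */
    /* ... critical section ... */
    sem_post(sem);

    sem_close(sem);
    sem_unlink("/demo_sem");    /* remove the name once it is no longer needed */
    return 0;
}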
You're such a legend sir ❤
Are the increment and decrement operations inside wait and post atomic? Then how does it take care of race conditions, i.e. two threads running simultaneously? Please answer.
Yes, from the user's perspective, they are atomic, no need to worry about race conditions there
@@CodeVault If we are using a global variable and printing its value after incrementing it, the output is dependent on the order of thread execution, leading to a race condition.
Good explanation
Why can't I init my semaphore?
The return value of this function is -1
:(((
Can you share the whole code?
Could you make a video on POSIX message queues??
They are on my TODO list
Can someone explain to me what *(int*) args means, please?
First, args is a void* so we have to cast it before using it.
*(int*) args can be read as:
1) cast args from a void* (void pointer) to an int* (int pointer). This is what the part `(int*) args` means
2) Look at the address this args pointer is pointing and get the value at that address
@@CodeVault The best explanation ever! thank you
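In other words, something like this minimal sketch (the names and the value 42 are just illustrative):

#include <pthread.h>
#include <stdio.h>

void* routine(void* args) {
    int value = *(int*) args;        /* 1) cast void* to int*, 2) dereference to get the int */
    printf("value = %d\n", value);
    return NULL;
}

int main(void) {
    pthread_t th;
    int x = 42;
    pthread_create(&th, NULL, routine, &x);   /* &x is passed to the thread as a void* */
    pthread_join(th, NULL);
    return 0;
}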
Thank you so much!
very well explained
This is better than the best university
thank you very much, you are the best
If the semaphore value is 0 initially, does it crash the code?
It shouldn't, you can try it!
The sound of your keyboard is delicious…mind sharing the model?
Pretty sure it's the Corsair K65 with red switches. I don't really recommend it, honestly
pure gold! :)
well explained tutorial.
You're a real man, I love you for the sake of God ❤️❤️
Thank you
Can't I subscribe, like, a 100 times to this channel?
Thank you!
thanks bud.
THE BEST
Thankssssss
Fantastic
Awesome
cheers
still no clue what the purpose of a semaphore is lol :(
It's just a tool like any other. Semaphores are the most basic entity for synchronization between threads. With it you can make mutexes for example.
Look into the next videos, I show some uses in the producer/consumer problem: code-vault.net/lesson/tlu0jq32v9:1609364042686
What in the world are you doing with all of these pointers??????????????
Your voice is not clear, sir.
I have difficulty understanding.
I recommend using the automated captions
I believe that your voice is brilliant and interesting to hear, thanks for the video
@@CodeVault Thank you!