"Concurrency is about dealing with a lot of things at once; parallelism is about doing a lot of things at once." Thanks for simplifying it, Rob!
"These ideas are not deep, they're just good." My new favourite quote.
2:09 : Concurrency is the composition of independently executing processes. 2:23 : It is about *dealing* with a lot of things at once. It is about structure. The goal is to structure things so that you could possibly employ parallelism to do a better job. But parallelism is not the end goal of concurrency. When you structure things into pieces, you need to coordinate those pieces with some form of communication. Ref: Communicating Sequential Processes (CSP) by Tony Hoare.
2:13 : Parallelism is the simultaneous execution of multiple tasks/things that may or may not be related. 2:27 : Parallelism is about *doing* a lot of things at once. It is about execution.
10:20 We don't have to worry about parallelism when we're doing concurrency. If we get concurrency right, the parallelism is a free variable that we can decide.
12:04 Concurrent Decomposition: Conceptually, this is how you think about parallelism. You don't just think about running a problem statement in parallel. Break the problem down into independent components (that you can separate, understand, and get right), and then compose them to solve the whole problem together.
22:30 I solved a concurrency issue lately, and part of the solution was to pair up the channel with the request. I'm glad to see that I came up with a solution that the designer of Go considers valid.
Shame the video doesn't show the slides at the right times. Bad planning for such a good talk.
This guy is good, I think few people may write some code using this new language he's proposing
😂
sarcasm?
@@MichaelSwitzer-t3f As an ai model you do not get sarcasm!
are you still alive and sure about the golang?
"8 gophers on the fly and books being burned at a horrific rate" lol
This is actually how I do things. It's way easier to code this way. It's like I am leading my code to do something structurally. Each worker has a job. That worker only does one job.
I never tried any other design pattern in an OOP language. Just the concurrency way, because it is easier to understand "which code does this".
I just didn't know about parallelism.
Excellent explanation of how Go makes concurrency work! Appreciate it!!
"It's time to have some drinks" - Creator of Go
Meanwhile Rust programmers: "Cannot borrow as immutable"
Lol he's named the "Commander, Google" at 20:57
lord praise the commander
"Burning up those C++ books" 🤣
"No locking or synchronization". Indirect attack on Java 🤣
CompletableFuture, ha? Virtual Threads in Java 19 are implemented even better than goroutines. Short disclaimer: I also love Go, but it's not ideal. Java is still superior in many aspects.
@@jackdanyal4329 nice, but how many companies are using Java 19? Almost none.
Like 80% of companies are still using Java 8 or 11, so you will not be able to use the new features at work.
@@rj7250a in the company where I'm working right now (one of the biggest in the Netherlands), the lowest version of Java is 17. I don't care about companies that still use 1.8; it's their problem, not mine. Using virtual threads in production is just a matter of time, and you get a much better implementation of concurrency plus solid Java with all the libraries, community, etc. At least we see that Java does something right and is trying to become more modern. But what does Go do? For several years they said "we don't need generics", and what? Now they have generics, but still no usable collection types, error handling, and so on. And as I mentioned, CompletableFuture lets you use concurrency without locks and sync problems. To be honest, it's much easier to understand a CompletableFuture than to trace the whole flow of channels in a non-trivial project.
@@jackdanyal4329why specifically do you think that Java virtual threads are implemented better than goroutines?
@@jackdanyal4329 Virtual Threads in Java 19 are implemented even better than goroutines.
They're not... even... remotely... close. Java and Go have stolen a lot of concepts from one another, but Java has the burden of being a 20-ton gorilla that you need to wrestle, while Go just works. Go is faster, uses fewer resources, and so on. Java is so inferior, it's not even close.
This is pure gold.
💯% Agree... I mean who am I to Object Rob Pike😅🤞🏾
Every Gopher should watch this talk.
I am watching this video today (29 August 2024).
He should have started with the performance disparity between the processor and the other components. The processor is far faster than any other component, e.g. RAM, disk, network, even before considering wire latency. Since it is fast, it can switch tasks quickly, and the components it interacts with can keep doing their own work while the processor attends to another task. Otherwise it is impossible to understand concurrency without parallelism.
I think that's a given for any Comp Sci student.
@@metabolic_jam that’s true, but not everyone who wants to learn about concurrency and parallelism is a cs student
@@metabolic_jam except not everyone watching this is/was a comp sci student
Need a deeper stack for that
Amazing
amazing talk
In his last slide he used a buffered channel with a capacity equal to the number of db connections, but wouldn't a channel of size 1 make more sense, since we only need the first result?
With an unbuffered channel, each worker will block until someone receives their result over the channel. Since only the first result will be received from the channel, the remaining workers would block forever. Blocked goroutines cannot be garbage collected, so we would end up with leaked goroutines. Using a buffered channel allows all workers to send their results to the channel whether or not there is a receiver, and finish execution cleanly.
Actually, there is no need to create a buffered channel at all. You can just wrap the channel send in a select statement and add a default clause to prevent a goroutine leak.
@ernestnguyen1169, in this case, you are right. But in general, would it be a good idea for a server to swallow its result if it cannot be sent immediately?
At 7:48, the model with 4 gophers runs faster than the one in the previous slide with 3 gophers only because the gopher bringing the loaded cart and the gopher returning the empty cart operate "in parallel" (meaning there is more than one cart), right?
Yeah, I had the same question. It's a detail not explicitly mentioned, but without multiple carts it's right back to the performance of the original design, where we're bottlenecked by having only one cart.
What I've learned is that there are 2 types of concurrency:
1. Parallelism
2. Time-sliced
Single-threaded applications like Python and Node.js use time-sliced concurrency, not parallelism.
Nice shirt
Where is the video editor? They completely forgot to switch the camera as the presenter showed new slides. Ouch.
The example with burning books evokes Nazi vibes.
Is there any way to tag all programmers here? Every programmer should know this. Honestly, this guy should write dialogue for Hollywood.
Who's here watching the video for an operating systems assignment?
Correct me if I'm wrong; the summary of this video is: if something is concurrent, that doesn't mean it's also parallel, but if something is parallel, it will always be concurrent.
You are wrong — he says about Sawzall: "it's an incredibly parallel language but has absolutely no concurrency".
"concurrency makes parallelism easy"
Those terms are related, but they do not describe disjoint sets of things. The meanings overlap and vary by situation. In the context of programming, concurrency is the ability of your code to be "composed" into bits of logic that could be run at the same time. Parallelism (when combined with concurrency) is taking said code and running it on a 100-core machine.
While I know he is knowledgeable and I learned things here, I am kind of surprised by the example at the beginning:
As if it were an "unexpected discovery" that solving a problem concurrently rather than sequentially improves performance... isn't this obvious? Or did I miss something?
(I am not joking in this comment and not belittling the talk.)
No, it won't automatically do that.
What does "Commander" mean? Is that his job title or a nickname? Google is not helping me with this, haha...
I just assumed it somehow was related to Commander Pike in the original Star Trek TV series pilot. We are of that age. . .
He savagely roasted the folks who can't differentiate between "parallelism" and "concurrency".
There is no difference. Not between the two words. They literally mean the same thing.
There is of course difference between the concepts he's describing (parallelization and multi-tasking) but he's using the wrong words to describe them, needlessly sowing confusion.
Who said that concurrency is parallelism?
Isn't this the guy from the Penn & Teller episode of Letterman?
yes
wait what
@@AlbertBalbastreMorte th-cam.com/video/fxMKuv0A6z4/w-d-xo.html
That is so true: concurrency is not parallelism; they are two very different things. 99% of "programmers" wouldn't care, since it is irrelevant to them — concurrent programming gives the illusion of parallelism. When "professional" programmers need "performance", that's when "concurrency" and "parallelism" need to be understood for exactly what they are. Only by looking down at the physical hardware of a CPU can you understand how much of it is concurrency vs. parallelism. Reading about the whole OS and computer architecture, I instantly questioned today's consumer-grade CPU architecture: having more cores really doesn't help when you need raw performance. This is why Intel and AMD have separate, hugely expensive CPU series specifically designed for raw performance in true parallelism; these CPU lines are sold to web-server and research companies. It's sad that consumer-grade stuff doesn't have "true parallelism". The Apple engineers are smart: it makes sense that they used server-grade Intel CPUs in their consumer desktop hardware, and it also makes sense why Apple ditched Intel and started making their own custom ARM CPUs.
did he talk over Rust Programming Language at the end? I couldn't handle that.
Beta
What about employing AI to identify the optimal concurrency structure of an algorithm/program?
This talk was from eight years ago, long before AI was invented. If you are now typing a response to this to inform me that AI was not in fact invented in the past few years, you make me sad.
@@Naeddyr AI was not invented in the past few years. This applies both to AI in general and to LLMs specifically; LLMs have been a thing for at least 15 years.
It's not just about the program. It also depends on the machine architecture and the input data. I'd say some optimizations can be done (and have been, although not with LLMs which are a very bad tool for that job) but I doubt that fully optimal structure can be achieved.
Terrible production value, can barely read the code on the slides.
😂
Concurrency is when you utilize a single CPU core efficiently. Your CPU is fast at executing instructions; your program is a lot slower. Concurrency is when your CPU's wait time shrinks because you use that wait time to execute something else — i.e., utilizing the CPU efficiently.
Such simple information, but jargoned up so no one can understand it. You computer science people make simple things so difficult to understand. "Dealing with a lot of things", my foot.
No. Concurrency and parallelism *are the same thing.* What you call "concurrency" or "composition of independently executable processes" should in fact be called PARALLELIZATION. This is an effort to transform a program - which by default is a single sequence of instructions executed one after another - in such a way that various sub-sequences of it can be efficiently executed by multiple execution units in parallel (concurrently) while minimizing their idle time.
I think you needlessly confuse a lot of people with your definitions.
It's not. Concurrency involves running multiple virtual/green threads and switching between them so they appear to run at the same time, but parallelism is multiple things actually running at the same time on separate cores; you cannot run multiple things on the same core without context switches.
@@vxcute0 No. What you are describing is not concurrency. That's called multi-tasking, or time-sharing, or multiplexing (in networks): dividing a single resource, like a processing unit or a communication channel, into multiple time slices so that over some longer period it seems as if it were available to multiple users "at once". But this is only emulated concurrency.
My point was different. I'm saying that terms "concurrency" and "parallelism" have the exact same meaning. Running multiple tasks at the same time = concurrently = in parallel. (This goes beyond computers. It applies to general planning and scheduling of tasks like in factory lines, or even trivial daily tasks.)
What he is describing in the video as "concurrency" should NOT be called concurrency, nor parallelism, but paralleliZATION: an act of designing processes in such a way that sub-tasks that are independent of each other are not prevented from running in parallel. It doesn't matter in this context whether that parallelism is real (on multiple units) or emulated (on a single unit that pretends to be multiple units).
@@kyjo72682 I know time-sharing, but in my understanding green threads work the same way. Also, do you have any other reputable resource that defines concurrency as running tasks at the same time? Because that is not the definition I see anywhere; the definition I see is like what Rob Pike said.
The word "concurrent" literally means "same time".
At 1:42 he literally said "[...] as it is intended to be used in computer science"
idiot
Am I the only one who finds the example of burning books highly misplaced and disrespectful?
Yes, yes you are.
It is a C++ joke; it would be distasteful if it were real.
[insert that bjarne gigachad quote about people loving to hate the most useful programming languages here]
"highly misplaced and disrespectful"
burn the snowflakes!! or as he said, the "lesser minds" : D
It is provocative, but clearly not an endorsement of actual book-burning
"Dealing at the same time, doing things at the same time..." Philosophers call this demagogy. Concurrency and parallelism are the same, man. Don't mix definitions to create new ones; if you can't explain something to a 7-year-old using "only" your hands, you don't understand it well enough yet.
Disciplines usually have precise definitions of words to encapsulate concepts. Math, computer science, etc., all do this. The formal discipline-specific definitions usually don't perfectly match the colloquial definitions you are alluding to in the dictionary. The formal definition of concurrency in computer science has been around for decades.
Артем Арте, dude, go home. You are drunk and/or high.
what the fuck are you talking about
Let's say you're right, and he's somehow wrong for trying to distinguish between parallelism and concurrency: what phrases do you suppose people like Rob Pike, and the plethora of library/language designers reasoning about these problems, should use? Should they invent new phrases? I think you're a better software developer if you're able to think about this talk when you find yourself speaking with someone who is carefully differentiating between "parallelism" and "concurrency". To me this phrasing becomes very important when thinking about concurrency in UI design, where thread confinement is a factor: we very much want concurrent decomposition, not for a throughput increase but for a latency reduction. That is, we want concurrency without parallelism as a way to control for responsiveness, and if you don't understand this, well, it's good that you don't have my job.
Further, on your use of Einstein's maxim [I've never heard the "hand gestures only" variant]: what do you think Rob's use of gophers is, if not an attempt to explain something to a six-year-old? He's trying, I think rather successfully, to explain a very fine point using very elementary components.
LMAO! Right, it's Rob Pike that doesn't understand parallelism. Good one, Ya almost had me. Hilarious!