Learn about the fundamentals of Database Engineering. Get my course: database.husseinnasser.com
I scoured the internet for the last 2 hours trying to understand this, and I stumbled upon Hussein's videos again... everything is crystal clear now.
Excited for your new course on OS; bought it, but still need to go through it 😁😁😁😁
I believe the basic consistent hashing approach had two problems:
1) Unequal partition sizes between servers on the hash ring (a partition is the hash space between adjacent servers): it is impossible to keep all partitions the same size for all servers, considering that a server can be added or removed.
2) Unequal distribution of keys across servers: the key distribution on the hash ring can be non-uniform.
To solve both problems, a technique called virtual nodes (replicas of servers, not replicas of keys) is used. With it, N virtual nodes are placed on the ring for each real server, giving a more balanced distribution.
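The virtual-node idea described above can be sketched in a few lines. This is a minimal illustration, not any particular system's implementation; the ring size, vnode count, and class/function names are all assumptions made for the example:

```python
import bisect
import hashlib
from collections import Counter

RING_SIZE = 2**32  # illustrative ring size; real systems often use the full hash space

def ring_pos(name: str) -> int:
    # Deterministic position on the ring for any identifier (server or key).
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % RING_SIZE

class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        # Place each physical server on the ring `vnodes` times ("virtual nodes").
        self.ring = sorted(
            (ring_pos(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )
        self.positions = [pos for pos, _ in self.ring]

    def lookup(self, key: str) -> str:
        # Walk clockwise: the first virtual node at or after the key's position owns it.
        i = bisect.bisect_right(self.positions, ring_pos(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["s0", "s1", "s2", "s3"])
counts = Counter(ring.lookup(f"key-{n}") for n in range(10_000))
# With enough virtual nodes per server, each server owns a roughly equal share of keys.
```

With only one position per server the shares can be wildly uneven; raising `vnodes` smooths the distribution at the cost of a larger ring table.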
I am watching random videos on YouTube to understand consistent hashing. With this video, the subject is now clear to me. I will read the Chord paper again, and now I will probably understand it. I subscribed to your channel. You are a good teacher.
My mind is blown by the way you explained this topic
This explanation is so beautiful man, loved it. Consistent Hashing feels like magic
Beautiful explanation of this complicated topic :) Thank you Hussein !
Possibly the best explanation for Consistent Hashing. :)
What an amazing explanation without any fancy writing and animation
Hussein, you explain things so nicely! The depth of your knowledge can be easily seen. Keep doing the great work :)
I was avoiding this topic for a very long time due to its complexity; you explained it so well. Thanks.
Great topic! It's amazing to see great products such as Dynamo and Cassandra use these advanced ideas!
Would like to see more topics like that
Excellent explanation! I've seen many videos, but this one cleared everything up.
I finally get it, thanks! Every time I visit your channel I rest assured that I'm actually going to understand what you're teaching :)
You have an incredible talent for teaching
After watching Gaurav Sen's video, I would say Hussein Nasser is the best 👍💯 🙏
🤣
Amazing explanation, Thank You.
Awesome video!! One question about the ring topology: how can we maintain the “right” amount of load across the servers? In your example, if you add servers clustered around the same degrees, you might end up with an unbalanced system (like 4 servers between 0-90 and just 2 between 180-270).
Is this a limitation, or do I not quite get it right? 🤔
In real life there are virtual nodes for each server, like s0_0, s0_1, s0_2, and so on. These are distributed around the ring, so you eventually get a more balanced distribution.
What a great talk, Hussein. ❤
What a great effort to explain all this!!
and now this is what we call state of the art
Nicely explained.
Some ideas:
1. When adding a server: we can have a dedicated slave for each server and promote the slave to master; in this case, the slave of S90 would act as S50.
2. When removing a server: instead of a dedicated slave, we can keep replicas on adjacent nodes, e.g. S90's slave data would live on S0 and S180, and if we want to remove S90, the copy of S90's data on S180 would act as leader.
Can you elaborate on the slave idea? Would the slave be another server alongside S90, called S90slave (for example)? And when a new one is added (S50 in this case), does S90slave become S50, but with only the values that belong to S50?
@@shahman1 Yes, S90slave would act as leader for S50 and could discard the data outside its degree range.
Very interesting, thanks Hussein! I do something similar for flat-file DBs. Similar, that is, in the ranging: for every object between two given numbers, find it in file x.
I did not understand why consistent hashing is used; after watching this I now understand why, and I also finally noticed why it is called consistent: the hashing stays consistent even when adding new nodes.
Almost consistent
Nicely described; your hard work shows in this video. Keep it up, bro.
nice video, understood perfectly, thanks
Can you talk about virtual nodes and data replication in the next part of this video?
Thank you so much, sir, for making this concept super simple.
A very important algorithm in distributed computing, especially in database systems such as Apache Cassandra and DynamoDB.
Great explanation
are you reading my mind? I was just writing a hash table for my OpenGL renderer to cache uniforms :)
Great explanation!
great video as always. thank you
Well, taking this example, you cannot have more than 360 servers, right? What if the number of servers increases to 361; how do we handle such complexities? Even if we change the modulo to, let's say, 720, this adds more complexity: a key whose result was earlier 1 (when we did %360) might now be 361. How would we handle such scenarios?
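On the question above: the 0-360 ring is just the video's illustration. In practice the ring is the output space of a hash function (e.g. 2^32 or 2^160 positions), and servers are placed by hashing their names, so the server count is never limited by the ring size. A minimal sketch, with illustrative names and ring size:

```python
import hashlib

RING_SIZE = 2**32  # the ring is the hash output space, not the server count

def ring_pos(name: str) -> int:
    # Any identifier (server name or key) hashes to one of 2**32 positions.
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % RING_SIZE

# Even a thousand servers fit comfortably; a 361st server is never a problem.
positions = {f"server-{n}": ring_pos(f"server-{n}") for n in range(1000)}
```

Because the ring size is fixed and huge, adding a server never forces renumbering the ring; only the keys between the new server and its predecessor move.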
Awesome content. Keep doing it!
With consistent hashing, if one database is full and goes into read-only mode so it can't insert new data, how does this work?
How does it handle range-based queries on a non-partition key?
Beautiful.
Doesn't introducing a new server in your example result in the load being unevenly distributed? With 4 nodes, each node serves an equal proportion of the total range 0-360. Adding a node between S0 and S90 means the range between S270 and S90 is served by 3 nodes, while the rest of the hash range is still served by 2 nodes. It sounds like one needs to add 4 new nodes to get an even distribution of load.
The most common way to mitigate this is to use more than one hash function for the servers. A good value could be log(M), where M is the max number on the circle, in this case 360. This way you "ensure" there are no big chunks of the circle without a server, and keys get distributed uniformly. Of course you pay the price in complexity, and when you remove/add a server, you need to reallocate log(M) times. Cheers!
What would happen if your server count goes above 360 in the above scenario? Do you have to reshuffle all the keys once using a higher modulo?
Nicely explained
If you really think about it, that's not only system design but also some of the algorithmic stuff that many people hate in coding interviews.
Please make more videos related to system design; there are a lot of interesting things (you've probably covered most of them ahaha).
I was nostalgic watching this video, if you know what I mean. 😊
When we add another server to the ring, won't that server and the next server in the ring each get half the load? I.e., the loads are now unbalanced.
Is there a solution for this, or is this a given caveat?
Interesting trade-off… adding a server moves less data, but you end up with an unbalanced load
But I guess, in the first place, if your load is initially balanced and you start running out of space, you run out of space on all the servers, not just one. So you would add N servers and your load would be balanced again.
Three comments instead of just editing the first, for that sweet, sweet YouTube algorithm :) great vid!
They added the concept of virtual nodes, meaning multiple virtual nodes are associated with one physical node and distributed along the hash ring. Hence it reduces the uneven distribution when moving data.
@@hoangnguyendinh1107 That is very very clever! I could see how that solves the problem. Thanks!
Rendezvous hashing is simpler than consistent hashing and solves the same problem, but it comes with other trade-offs.
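For comparison, here is a minimal sketch of rendezvous (highest-random-weight) hashing: every key is scored against every server, and the top scorer owns the key, so removing a server only remaps the keys that server owned. The function names and hash choice are illustrative:

```python
import hashlib

def rendezvous_owner(key: str, servers: list) -> str:
    # Score each (key, server) pair; the server with the highest score owns the key.
    def score(server):
        return int(hashlib.sha256(f"{key}:{server}".encode()).hexdigest(), 16)
    return max(servers, key=score)

servers = ["s0", "s1", "s2", "s3"]
owner = rendezvous_owner("user-42", servers)
```

One of the trade-offs mentioned: each lookup scores all N servers (O(N)), whereas a sorted consistent-hash ring resolves a key in O(log N).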
Thx for the coupon
When we add a new server, rather than the new server talking to the next server to transfer keys, how about it gets populated lazily, as and when cache misses happen?
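That lazy approach can work when the ring fronts a source of truth (a miss just repopulates the cache). A minimal read-through sketch, with hypothetical names:

```python
def read_through(key, cache, fetch_from_origin):
    # On a miss at the key's (new) owner, fall back to the origin and
    # populate lazily instead of bulk-transferring keys up front.
    value = cache.get(key)
    if value is None:
        value = fetch_from_origin(key)  # e.g. the database, or the old owner
        cache[key] = value
    return value

cache = {}
value = read_through("user:42", cache, lambda k: k.upper())
```

The caveat is that this only works if the data can be refetched or recomputed; for a storage system, where the ring nodes themselves are the source of truth, the keys do have to be transferred.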
Somebody shall ping the data number destribution when have an issue there you can't guess on that or just turn a script look for every single one in the array do there can be said upgrade to that do I can't spoil it, bc it is coming 👍
Another problem is that the data load is not evenly distributed if many servers land close together on the ring
Right! Hot spots.
@@hnasr a way to mitigate that is to have multiple hash functions for the servers. Cheers!
So this video doesn't discuss V-Nodes yet?
Index 3!
this is like a HEX
and then we play endless snake that consume more and more trouble how we explain it XD
wow
Hussein I am a fan. Please pin this comment.
Expected a better explanation. Expect a long follow up video.
first
🎖️ this is for you
Now enjoy