I absolutely love your videos, please keep them coming. It's refreshing to watch a technical video that also incorporates humor & sports references haha. You the real MVP
Your videos are the best. I work as a data engineer, and by watching your first videos I was able to gain a deeper knowledge of distributed systems. These videos help a lot.
Jordan, I'm eagerly anticipating this series and love your approach. Don't be discouraged by view counts. However, I thought you were planning to reduce the jokes. The first thing I heard was an STD joke😂
Reduce I said, reduce!
I keep thinking this is the best channel for system design, and bro just dropped a new playlist.
Thanks Jordan!!
Great video. Your channel really motivates me to keep learning things beyond the surface level.
Thank you Jordan. This is the only tech channel I can't fast-forward; the dude delivers content the whole video.
Jordan. I am impressed by you. Keep up the good work
Single handedly making me very interested in systems. Much thanks
Great video distilling very dense material. This is on my whitepaper Mount Rushmore along with GFS, BigTable, and Chubby.
Awesome video! Really great way to stay engaged, because my brain really doesn't like reading long academic papers 😅
Welcome back, and excited for this new series :D
Hey Jordan, this series is 🔥🔥. I recently finished Designing Data-Intensive Applications, and now your videos on top of it make it so much easier to cover all the aspects. Keep up the good work 🙇♂️
LOVE this new type of content. I am in the same boat: now that I understand the breadth of distributed systems and system design, I am looking into tackling concepts in depth.
I would love it if you could create some sort of whitepaper / eng blog roadmap. I find it hard to sift through and find the most relevant articles and papers, especially for certain topics.
Thanks! I can try on the roadmap, but I have to read a lot more first lol
Thank you Jordan, and congrats on the new direction!!!!
Thank you, Jordan. I learned a lot from your video. Appreciate it!
More views than the questions video.... Congratulations 🎉
Thank you Jordan for this series. In the paper, when comparing the three partitioning approaches, it's written that for the fixed-size third strategy, the membership information stored at each node is reduced by three orders of magnitude. Whereas in the previous paragraphs, it's mentioned that the third strategy stores not only the token (server) hashes as in the first, but also which partitions are stored on each node. Isn't that contradictory, or am I misunderstanding something? Ideally the third partitioning scheme should contain more membership information per node.
That's assuming they have not changed request forwarding from O(1) hops to log(n) DHT routing, Chord- or Pastry-style, where each node stores only a limited number of other nodes' information at the cost of direct hops.
I'm actually not quite sure, but I imagine that for fixed tokens you can perhaps just give each token a name and say which token belongs to which node, rather than explicitly listing out the ranges, which saves a bit of information to propagate (if there are 128 tokens, for example, you only need a short to communicate a token name).
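To make the reply above concrete, here is a rough Python sketch, purely my own illustration with made-up numbers rather than the paper's actual metadata format. The idea: with fixed partitions the boundaries are derivable from Q alone, so gossip only needs to carry a small partition index plus an owner per entry, instead of explicit hash ranges.

import hashlib

RING_SIZE = 2**64
Q = 128  # number of fixed, equal-sized partitions (strategy 3)

# Strategy 1 style: membership info is a list of explicit hash ranges,
# i.e. two 64-bit boundaries plus an owner per entry.
strategy1_membership = [
    (0, 3_000_000_000, "node-A"),
    (3_000_000_001, 9_500_000_000, "node-B"),
    # ... one explicit, arbitrary range per token in the system
]

# Strategy 3 style: partition i always covers
# [i * RING_SIZE // Q, (i + 1) * RING_SIZE // Q), so the boundaries never
# need to be gossiped -- only "partition index -> owner" does.
strategy3_membership = {i: f"node-{i % 3}" for i in range(Q)}

def owner_of(key: str) -> str:
    # Route a key under the fixed-partition scheme: hash it, find its
    # fixed partition, and look up the owner.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE
    partition = h * Q // RING_SIZE
    return strategy3_membership[partition]

print(owner_of("user:42"))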
My man, let's goo! This is what I'm in for!
Finally an update from my favourite Vtuber
Love the videos man, keep 'em coming! If possible, please make a video on project ideas to get more hands-on learning on distributed systems topics :)
Yoooooo! This channel just became peak second monitor content!
third monitor, OF is on monitor 2
Can you do the Spanner paper next? Great job btw, really helps with understanding first principles
Probably won't be next but will get there!
Nice video. Continue the series...
But babe, I thought the next series would be 'Lifting and Leetcode,' where you max out on bench and then superset that with a Leetcode problem :(
Anyways, loved the content as always 🙌🙌
Lmao - 50k special?
This is it! This is the video where you surpassed Stefan
It took this long huh
2:35
According to the DDIA book's (Martin Kleppmann) chapter on leaderless replication, Amazon Dynamo is not the same as AWS's DynamoDB. It kicked off a "trend" of Dynamo-style databases like Cassandra and Riak.
I've never used any of those databases, but the book really does a good job of explaining the concept of leaderless replication (along with various other concepts).
Thanks! Yeah I try to make it clear they aren't the same
DynamoDB using single leader replication was a fantastic troll job
Halfway through it and it's really interesting! Please make one on OpenSearch too, TIA!
Hi Jordan! Nice new format :) Hope there will be a video about ZooKeeper someday. Still can't fully understand this tech.
Probably 2 from now, since it's based on Chubby
@@jordanhasnolife5163 I didn't know that ZooKeeper is based on Chubby. It seems that the entire Internet is based on Google developments
Great video, please keep it up
Amazing content!! ❤ Question: What are logical partitions? How do seeds help avoid them? Why do we want to avoid them?
It means two "partitions" that don't know to gossip with one another and thus assume they're the only partition. Seeds ensure that we don't run into such a situation, since every node will always communicate with a seed, so information about the nodes in the cluster gets propagated.
Got it, thanks
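For anyone else wondering, here's a minimal, self-contained gossip sketch with hypothetical names (not Dynamo's actual code) showing why seeds prevent the situation described above: every node is configured with the seeds and keeps gossiping with them, so two groups that bootstrapped separately still converge on a single membership view.

import random

SEEDS = {"seed-1"}

class Node:
    def __init__(self, name):
        self.name = name
        self.known = {name} | SEEDS  # every node is configured with the seeds

    def gossip_with(self, other):
        # Exchange membership views; both sides end up with the union.
        merged = self.known | other.known
        self.known = merged
        other.known = merged

def gossip_round(nodes):
    for node in nodes.values():
        peers = list(node.known - {node.name})
        if not peers:
            continue  # a seed may not know anyone else yet
        node.gossip_with(nodes[random.choice(peers)])

# Two groups that have never heard of each other directly:
nodes = {name: Node(name) for name in ["seed-1", "a1", "a2", "b1", "b2"]}
nodes["a1"].known.add("a2"); nodes["a2"].known.add("a1")
nodes["b1"].known.add("b2"); nodes["b2"].known.add("b1")

for _ in range(20):
    gossip_round(nodes)

# After enough rounds every node knows all five members, because both
# groups keep gossiping with seed-1, which bridges them.
print(sorted(nodes["a1"].known))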
Jordan you are awesome. I know you already know that ;)))
Hi Jordan, how would you design the feature on YouTube/Netflix where a user can watch a video and pick up where they left off when they come back to that video? I have an interview coming up soon, and I couldn't find anything about this feature in your YouTube/Netflix system design video :(
Put a row in a database for the last timestamp a user has watched per video. Every x seconds while they're watching, you can place it in Kafka or something and then write to the DB asynchronously.
@@jordanhasnolife5163 Are you placing it in Kafka so that you can reduce write load on the DB? Also, why are you choosing Kafka over RabbitMQ here? Would this queue need to be durable?
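Since this comes up a lot, here's a rough sketch of the flow described above; the table, event, and function names are my own, and the Kafka producer/consumer are stubbed out because the specific broker doesn't matter for the idea: the client sends a playback heartbeat every few seconds, the events land on a queue, and a consumer asynchronously upserts the latest position per (user, video).

import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE watch_progress (
        user_id    TEXT,
        video_id   TEXT,
        position_s INTEGER,
        PRIMARY KEY (user_id, video_id)
    )
""")

def publish_heartbeat(user_id, video_id, position_s):
    # Client side: called every few seconds while the video plays.
    # In a real system this would produce to a Kafka topic instead of
    # handing the event straight to the consumer.
    return json.dumps({"user_id": user_id, "video_id": video_id, "position_s": position_s})

def consume_heartbeat(event):
    # Consumer side: upsert the latest playback position, keeping only
    # the most recent offset per (user, video).
    e = json.loads(event)
    db.execute(
        """INSERT INTO watch_progress (user_id, video_id, position_s)
           VALUES (?, ?, ?)
           ON CONFLICT(user_id, video_id) DO UPDATE SET position_s = excluded.position_s""",
        (e["user_id"], e["video_id"], e["position_s"]),
    )

# Simulate a viewer sending a heartbeat every ~10 seconds of playback:
for pos in (10, 20, 30):
    consume_heartbeat(publish_heartbeat("user-1", "video-42", pos))

print(db.execute("SELECT position_s FROM watch_progress").fetchone())  # (30,)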
Hi Jordan, thanks for uploading this video and overall great content from this channel.
I have a question regarding the write-back cache: if a durable write is performed on fewer than W nodes and the other nodes (where only a cache write was performed) go down afterwards, it would appear that the write succeeded even though it didn't. How is that dealt with? Is that done with a WAL?
Yep, the write would be lost. That's the risk we take with non-durable writes. Ideally at least one node that has the write stays alive, and we can eventually propagate it through sibling resolution.
@@jordanhasnolife5163 Thanks for the clarification. Btw, I commented on this thread some time earlier and it showed up at the time, but it's gone now (it kept the thumbs up though), it appears this may have been a non-durable write (hopefully this one goes through).
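To spell out the failure mode being discussed in this thread, here is a toy model (my own simplification, not Dynamo's actual code): the coordinator reports success after W acks, but an ack from a replica that only buffered the write in its write-back cache is lost if that replica crashes before flushing.

W = 2

class Replica:
    def __init__(self, durable):
        self.durable = durable
        self.memory = None
        self.disk = None

    def accept(self, value):
        self.memory = value
        if self.durable:
            self.disk = value  # durable replicas also hit the WAL/disk
        return True

    def crash(self):
        self.memory = None  # buffered-only data is gone; only disk survives

def write(replicas, value):
    # The coordinator counts acks without distinguishing durable acks
    # from cache-only acks -- that is exactly the risk in question.
    acks = sum(1 for r in replicas if r.accept(value))
    return acks >= W

# One durable replica, two cache-only replicas:
replicas = [Replica(durable=True), Replica(durable=False), Replica(durable=False)]
print(write(replicas, "v1"))  # True -- the client sees success
replicas[1].crash(); replicas[2].crash()
print([r.disk or r.memory for r in replicas])  # only the durable replica still has "v1"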
I failed to understand the fixed-size partition part: if the partitions are predefined fixed-size data partitions, how are new nodes added? A random hash isn't guaranteed to hit the boundaries of the fixed-size partitions. Could you elaborate a little more? Thank you!
Partitions != Physical nodes. We have very many partitions, and they're equally distributed over the nodes.
@@jordanhasnolife5163 I understand Partitions != Physical nodes. Let's say we have 1000 fixed-size partitions over the key space 2^64. When a new node is added, its hash H100 falls inside the 100th partition's key range [Start-7, End-7]. Which node do the key ranges [Start-7, H100] and [H100, End-7] belong to?
From what I can see, the purpose of fixed-size partitions is to avoid recomputing Merkle trees. But a new node whose hash falls in the middle of a partition's key range would force Merkle tree recomputation, at least for [Start-7, H100] and [H100, End-7]. We cannot control the hash of the new node to land exactly on the partition boundaries.
Using your example @37:00 to illustrate, the new node 4 falls exactly on a partition boundary. I feel that is not always the case, thus my confusion.
@@thunderzeus8706 It is always the case if we code it to be (which they do)
@@thunderzeus8706 Reiterating, yes we can. We control the hash ring and can place the node wherever we want.
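A tiny sketch of what "we control where the node goes" means in the fixed-partition scheme; this is my own illustration, and the naive round-robin assignment is only to keep it short (the paper's placement strategy moves far fewer partitions). The point is that a joining node only ever takes over whole partitions, so partition boundaries, and therefore the per-partition Merkle trees, never change.

Q = 16  # fixed number of equal-sized partitions

def assign_partitions(node_list):
    # Hand out whole partitions round-robin across the current node set.
    return {p: node_list[p % len(node_list)] for p in range(Q)}

ownership = assign_partitions(["node-1", "node-2", "node-3"])

# A new node joins: it simply takes over some whole partitions.
new_ownership = assign_partitions(["node-1", "node-2", "node-3", "node-4"])

moved = [p for p in range(Q) if ownership[p] != new_ownership[p]]
print(moved)  # these partitions are copied to their new owners as-is;
              # no partition is ever split, so no Merkle tree is rebuilt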
Had me in the first half 😭
This is gold!!
hope this series lasts longer than my relationships
low bar eh
omw to becoming a 0.5x engineer!!
.25 to .5! Great progress!
Plz keep going.😊
love this
Great 👍
New here and love this
Good job
Any plans for Presto/Trino?
It'll probably be a bit, but yeah!
@@jordanhasnolife5163 Appreciate it! Those systems are for sure their own beasts, and wading through the literature is a pain. But Spark / Presto / Pinot would all be awesome to see!
👌👌👌👌
0:41 Does that mean you were positive on your STD test?
Dunno what you're talking about
Please use some coloured highlighter, my midwit brain can't comprehend what's happening
My midwit brain doesn't know how to use multiple colors
first one, like, thank you
STD positive
So you have more life than me
you need a better mic
No, I need a girlfriend
@@jordanhasnolife5163 noooo! Girlfriend means no Saturday videos!
@@romanivanov6183 Great point, no life it is
I am the first viewer.