System Design for Twitter (Timeline, Live Updates, Tweeting) | System Design Interview Prep

  • Published Jun 3, 2024
  • Visit Our Website: interviewpen.com/?...
    Join Our Discord (24/7 help): / discord
    Join Our Newsletter - The Blueprint: theblueprint.dev/subscribe
    Like & Subscribe: / @interviewpen
    This is an example of a full video available on interviewpen.com. Check out our website to find more premium content like this!
    Problem Statement:
    Provide a high-level, barebones design for a service like Twitter.
    *Users will be able to:*
    - Tweet (create text/media posts)
    - Follow other users
    - See a timeline (syndicated from the posts of those they follow)
    *Assumptions*
    - You can assume the user is authenticated during interaction with your design
    *Additional Resources*
    - OSI Model: en.wikipedia.org/wiki/OSI_model (layers of system interaction)
    Table of Contents:
    0:00 - Supported Functionality
    1:40 - Basic Visuals & App Design
    2:41 - Visit interviewpen.com
    3:00 - Endpoints
    4:54 - The Timeline
    9:06 - Starting Design (naive)
    11:11 - Database Schema
    15:15 - Load Management
    17:35 - Scaling the Database
    19:21 - Breaking the System Up
    20:57 - Sharding the Database
    26:46 - Realtime Pushes
    28:27 - Timeline Cache
    30:24 - Tweet Fanout & Message Queue
    32:45 - The Full Design
    34:44 - Recap
    36:01 - Extension
    37:03 - Visit interviewpen.com
    Socials:
    Twitter: / interviewpen
    Twitter (The Blueprint): / theblueprintdev
    LinkedIn: / interviewpen
    Website: interviewpen.com/?...

Comments • 72

  • @interviewpen
    @interviewpen  1 year ago +1

    Thanks for watching! Visit interviewpen.com/? for more great Data Structures & Algorithms + System Design content 🧎

    • @kieutruong406
      @kieutruong406 1 year ago

      Thank you for sharing the design... I am a beginner Business Analyst, and the talk is a bit fast for me to follow, so I couldn't catch the part about pagination. I wonder in which respects a Business Analyst should know about it, and how deeply, as I've heard some seniors talk about it and I have no idea how this knowledge would help... Thank you in advance, and looking forward to more of your videos :)

  • @69k_gold
    @69k_gold 2 months ago +4

    This content extends beyond interviews; it's the essence of the world of SaaS, something every software engineer MUST eventually run into in this day and age.

    • @interviewpen
      @interviewpen  2 months ago +1

      Absolutely!

  • @adrian333dev
    @adrian333dev 1 month ago +1

    Wow! This is the type of system design interview I'd been looking for over the last few weeks...⭐

    • @interviewpen
      @interviewpen  1 month ago

      Glad you enjoyed it!

  • @davidjordan9365
    @davidjordan9365 10 months ago +10

    I’m watching every video here multiple times. Y’all are filling a niche that is much needed.

  • @user-qy6pi9iy9j
    @user-qy6pi9iy9j 2 months ago +1

    The MOST CLEAR design of Twitter!

  • @dshvets1
    @dshvets1 6 months ago +6

    Great video! We should probably add/consider some details on how to manage follower relationships to perform the fanout tasks. One idea could be to use a separate graph database, and possibly a distributed cache on top of it.
    Also, the follow/unfollow API could be made more consistent with RESTful conventions as follows:
    Follow: POST /following
    Unfollow: DELETE /following
    with userId as the parameter for both.
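The resource-style endpoints suggested above could be sketched roughly as below (a hypothetical in-memory model, not the video's actual API; the function names and response shapes are assumptions), with userId carried as the parameter for both operations:

```python
# Hypothetical sketch of a /following resource keyed by userId.
# An in-memory dict stands in for the real user store.
following = {}  # follower_id -> set of followed user ids

def put_following(follower_id: str, user_id: str) -> dict:
    """PUT /following?userId=... - add a follow edge."""
    following.setdefault(follower_id, set()).add(user_id)
    return {"status": 200}

def delete_following(follower_id: str, user_id: str) -> dict:
    """DELETE /following?userId=... - remove a follow edge."""
    following.setdefault(follower_id, set()).discard(user_id)
    return {"status": 200}
```

Because `add` and `discard` are no-ops on repeat, both operations are safe to retry, which is the main appeal of modeling follows as a resource.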

    • @interviewpen
      @interviewpen  6 months ago

      Sounds good, thanks!

  • @roadtoenlightenment-wu2te
    @roadtoenlightenment-wu2te 7 months ago +4

    I've seen multiple system design interview prep videos but this one is by far the most eye-opening and practical explanation. Thank you for posting this video!

    • @interviewpen
      @interviewpen  6 months ago

      Thanks for watching!

  • @LeoLeo-nx5gi
    @LeoLeo-nx5gi 1 year ago +10

    This is truly awesome, I love the complex things explained in all the videos, thanks!! (waiting for more)

    • @interviewpen
      @interviewpen  1 year ago

      Thanks for the kind words - we'll be posting more!

  • @poonam-kamboj
    @poonam-kamboj 9 months ago +8

    This was so easy to understand, going from a basic design and then introducing components based on complexity, scaling, and needs, rather than thinking about them at the very start. Thanks, and looking forward to more such design videos.

    • @interviewpen
      @interviewpen  9 months ago +1

      Thanks for watching 👍

  • @throxe
    @throxe 1 year ago +2

    Thank you very much for this, may you be rewarded with abundant goodness👍🙏

    • @interviewpen
      @interviewpen  1 year ago

      thanks - more videos coming!

  • @vimalneha
    @vimalneha 1 year ago +2

    Well explained design! Was quite useful.

  • @FullExtension1
    @FullExtension1 9 months ago +1

    Great content, thanks guys.

    • @interviewpen
      @interviewpen  9 months ago

      Sure - thanks for watching

  • @arunachalamkaruppaiah8486
    @arunachalamkaruppaiah8486 1 year ago +1

    Continue this series. This channel is so underrated 🎉❤

    • @interviewpen
      @interviewpen  1 year ago

      We will!! More content coming.

  • @maxbarbul
    @maxbarbul 1 year ago +3

    Great video! Enjoyed watching it. One thing really bothered me: that a write API would have to calculate and produce messages for followers’ timelines. I would probably make it produce messages with the write operations, then have some consumers process which update goes where and produce a new message to notify users. Although even this split wouldn’t allow for some more agile logic, i.e. prioritizing tweets going to timelines based on dynamic factors like time of day, breaking news, or changes in user preferences.

    • @interviewpen
      @interviewpen  1 year ago +2

      Really glad you liked it. This is a super interesting point to bring up, and I agree that separating the timeline logic from the write API would make the system more maintainable. And as you mentioned, introducing a queue between the write API and this hypothetical timeline service would make tweeting faster on the user end while enabling the write API to handle higher loads. As far as I know, tweets always stream to the top of the feed and the rest of the timeline never changes, so this approach should work fine for "dynamic" timeline algorithms as well (but let me know if I'm misunderstanding). Stay tuned for more content :D

    • @maxbarbul
      @maxbarbul 1 year ago +1

      @@interviewpen Thank you for replying. It’s a great point about the tweets going on top of accumulated timeline. I believe, it would work for most services with timeline/feed.
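The split discussed in this thread (the write API only enqueues; separate consumers fan tweets out to follower timelines, with new tweets streaming to the top) can be sketched as a toy in-memory model. This is an illustrative assumption, not the video's code; the names `write_api_tweet` and `fanout_consumer` are made up:

```python
from collections import deque, defaultdict

queue = deque()                               # stands in for the message queue
followers = {"alice": ["bob", "carol"]}       # author -> follower ids
timelines = defaultdict(list)                 # user -> newest-first tweet list

def write_api_tweet(author: str, text: str) -> None:
    """The write API does one fast enqueue and returns immediately."""
    queue.append((author, text))

def fanout_consumer() -> None:
    """A consumer drains the queue and expands each tweet to followers."""
    while queue:
        author, text = queue.popleft()
        for f in followers.get(author, []):
            timelines[f].insert(0, (author, text))  # new tweets go on top
```

The user-facing write stays cheap regardless of follower count; the fanout cost is absorbed asynchronously by however many consumers are running.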

  • @__simon__
    @__simon__ 9 months ago +3

    This is amazing content.
    An alternative design could perhaps rely much more heavily on Kafka: saving all the tweets in a topic/partition and moving them to the DB once they're 1y (or whatever) old.
    That way you could retrieve the timeline easily and also stream the new tweets. The DB would be simpler, and perhaps we could get rid of the mem cache...

    • @interviewpen
      @interviewpen  9 months ago

      Thanks! Interesting thoughts, would be curious to see how using Kafka in that way would perform in practice :)

  • @timmyyyyyyyyyyyydimkakoopa5732
    @timmyyyyyyyyyyyydimkakoopa5732 3 months ago

    The visual presentation of how the thinking arrives at each solution is great, but I would rather see a solution to the 'celebrity effect' for that Twitter design.

  • @dibll
    @dibll 1 year ago +2

    This is great!! Could you pls cover designing a chat system and a monitoring system (time-series DB), if possible. Thanks!

    • @interviewpen
      @interviewpen  1 year ago +1

      We'll add it to the list, thanks for watching!

    • @dibll
      @dibll 1 year ago +1

      @@interviewpen Also, I think it would be helpful to add some memory/storage estimates and a global audience (multiple regions) use case.

    • @interviewpen
      @interviewpen  1 year ago

      will do!

  • @yuganderkrishansingh3733
    @yuganderkrishansingh3733 1 year ago +1

    Thanks for the content. I have the following questions:
    - Where does the sharding logic reside? I think it must be the application doing the sharding. Pls correct me if wrong.
    - How does using tweetId+timestamp actually help in preparing the timeline? For the timeline we need tweets from the folks the user is following, and the approach mentioned at 21:57 doesn't help (is it something to do with using joins, as it's a relational DB?). The useful thing IMHO would be to have all tweets pertaining to a timeline on a single shard, since if they're on multiple shards, that's a lot of requests across shards to fetch the tweets.

    • @interviewpen
      @interviewpen  1 year ago

      1. Generally whatever database you choose will include sharding, you're just responsible for picking the key! MongoDB is a good example of a database that supports sharding out of the box.
      2. Using tweet ID and timestamp allows the data to be distributed evenly across shards, meaning there aren't certain shards that have significantly more traffic or more data than others. You're right--to get a user's timeline, the user would have to query every node, but as long as the data is indexed properly within each node, this will still result in each node doing less work and will allow us to handle higher traffic. There's no great way to shard by timeline directly (ex. think about what happens when a user has no followers, where do their tweets go?), but the Redis cache should help the problem as it is organized by user. There's tons of ways to set up a sharded database and each has pros and cons, so great question!
      Thanks for watching!
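The trade-off described in this reply, writes spread evenly by hashing the tweet ID, while a timeline read must scatter-gather across every shard, can be illustrated with a toy model (an assumption for illustration, not the video's implementation; shard count and record shape are made up):

```python
import hashlib

NUM_SHARDS = 4
shards = [[] for _ in range(NUM_SHARDS)]  # each list stands in for one node

def shard_for(tweet_id: str) -> int:
    """Hash the tweet ID so writes distribute evenly across shards."""
    return int(hashlib.md5(tweet_id.encode()).hexdigest(), 16) % NUM_SHARDS

def write_tweet(tweet_id: str, user_id: str, ts: int, text: str) -> None:
    shards[shard_for(tweet_id)].append(
        {"id": tweet_id, "user": user_id, "ts": ts, "text": text}
    )

def timeline(followed_users: set, limit: int = 10) -> list:
    """Scatter-gather: any shard may hold tweets from the followed set."""
    candidates = [t for shard in shards for t in shard
                  if t["user"] in followed_users]
    return sorted(candidates, key=lambda t: t["ts"], reverse=True)[:limit]
```

Every timeline read touches all shards, which is exactly the cost the Redis timeline cache in the video is meant to absorb.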

    • @yuganderkrishansingh3733
      @yuganderkrishansingh3733 1 year ago

      Could someone also explain the issue with pages and fetching n tweets at 6:25? What I understood is that with new tweets, the backend needs to carefully calculate the n tweets while accounting for new tweets coming into the system.
      But a potential approach is that even when new tweets come in, we can keep appending them to the top, which means if earlier we had tweets 1-10 (assuming n is 10) and, say, 3 new tweets came in, it becomes (1-7) + 3 new tweets.

  • @f41z37
    @f41z37 1 year ago +2

    Amazing.

    • @interviewpen
      @interviewpen  1 year ago

      Thanks! We have a lot more content coming.

    • @qinzhexu206
      @qinzhexu206 10 months ago

      @@interviewpen cool

  • @v4n1shx
    @v4n1shx 7 months ago +1

    Right.

  • @gbf_juster
    @gbf_juster 7 months ago

    Awesome video, very well explained. Subbed and liked. I'd just like to add some thoughts on this design; I understand that there are always pros and cons to any system design.
    However, I would like to point out a potential issue related to the websocket connection used to push an event back to the client (to display a popup of sorts that lets the client perform the API call to fetch the new timeline).
    Based on the final design, the logic between Client, LB, and READ API makes sense; the LB can use sticky-session load balancing to hit the same READ API instance, as I believe the READ API is scaled horizontally, correct?
    However, does that mean that in this design every scaled READ API instance has to be a unique consumer group? Otherwise, if the collection of READ API instances shares the same consumer group, there can be a case where:
    - Client connects to Server1
    - Server3 picks up the event but does not have the connection to the client to push the new timeline update.
    So, if every scaled instance of the READ API uses a unique consumer group in Kafka, the event can be "fanned out" to all instances. This design resolves the issue, but leads to many events being dropped, or consumed and ignored. Another point is that for this event there is no need to add more than one partition to the topic, as there is only 1 consumer instance running per unique group ID.
    Feel free to point out any inconsistencies in my explanation here.

    • @interviewpen
      @interviewpen  7 months ago +1

      Really great thoughts, thank you! You're right that usually the data would have to be fanned out to all consumers in a setup like this. Consuming and ignoring a message shouldn't be too much overhead, but at a very large scale it could become problematic. Another approach would be to use a separate topic for each user--this would mean an API node could subscribe to only the users it's connected to, but it adds a ton of overhead in the Kafka cluster itself. Perhaps a more traditional message queue like RabbitMQ might be better for this use case--we could set up a queue for each API node, and when a user connects, its API node could create a binding to route that user's data to its queue. Hope that helps!
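The RabbitMQ-style routing described in this reply, one queue per API node plus per-user bindings created when a websocket opens, can be modeled in a few lines of plain Python (an assumed sketch for illustration, not real broker code):

```python
from collections import defaultdict

queues = defaultdict(list)   # api_node -> its pending push messages
bindings = {}                # user_id -> the api_node holding their socket

def connect(user_id: str, api_node: str) -> None:
    """When a websocket opens, bind the user's updates to that node's queue."""
    bindings[user_id] = api_node

def publish(user_id: str, message: str) -> None:
    """Route a timeline update only to the node that can deliver it."""
    node = bindings.get(user_id)
    if node is not None:     # users with no open socket are simply dropped
        queues[node].append((user_id, message))
```

Unlike broadcasting to every consumer, no node ever receives a message it cannot deliver, at the cost of maintaining the binding table as connections come and go.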

    • @gbf_juster
      @gbf_juster 7 months ago

      @@interviewpen Awesome, yes, a good approach as well. Thank you for sharing!

  • @iamgordonsmith
    @iamgordonsmith 1 year ago +12

    Genuine question, why should we POST to modify existing data? Shouldn’t a follow be a PUT and an unfollow be a DELETE?

    • @interviewpen
      @interviewpen  1 year ago +8

      Good question! It depends on how you structure things, and the main difference between PUT/DELETE and POST is idempotency. One solution would be to have a /following endpoint that PUTs or DELETEs can be made to, and in this case these operations would be idempotent. In the case of /follow and /unfollow endpoints, following a user you're already following would likely result in an error, in which case the operations are not idempotent and thus should be done with POST requests. Using POST endpoints to represent operations is a pretty common practice if that makes more sense than structuring things as resources. At the end of the day, these are just examples to show what patterns the system needs to handle, so do whatever makes sense to you. Thanks for watching!
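The idempotency distinction drawn in this reply can be made concrete with a toy contrast (hypothetical handlers and status codes chosen for illustration; the video doesn't prescribe these):

```python
# A non-idempotent POST /follow rejects a duplicate, so repeating the
# request changes the outcome; an idempotent PUT /following does not.
follows = set()  # (follower, user) edges

def post_follow(follower: str, user: str) -> dict:
    """POST /follow - second identical call fails, hence not idempotent."""
    if (follower, user) in follows:
        return {"status": 409, "error": "already following"}
    follows.add((follower, user))
    return {"status": 201}

def put_following(follower: str, user: str) -> dict:
    """PUT /following - repeating the call is a harmless no-op."""
    follows.add((follower, user))
    return {"status": 200}
```

Retried PUTs are always safe, which is why HTTP reserves PUT/DELETE for idempotent operations and leaves POST for the rest.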

    • @iamgordonsmith
      @iamgordonsmith 1 year ago +1

      @@interviewpen Thank you, that was really helpful 🙏

  • @SP-db6sh
    @SP-db6sh 1 year ago +2

    Respect 🙌🙌🙌 🇮🇳💝💝💝💝💝

  • @quantic7244
    @quantic7244 9 months ago +1

    Maybe I misunderstood the proposal, but how exactly is the memory cache going to work if it is an in-memory solution?
    That necessarily has to take into account the number of active users. For example, say we have 1MIL active users per day; then we need to maintain a cache of 1MIL entries (1 entry for each user) with 100 tweets each (this is only for day 1, a simplification).
    If we store the tweet ID only, that could potentially work, as it means we need 1MIL users * 100 records of a size on the order of bytes--say 16 bytes (a random number). In this scenario we would need 1.6GB of memory, which sounds reasonable for a cache, although we would still need to fetch each tweet's content, which in turn sounds a bit messy.
    On the other hand, if we need to store the tweet content AND the tweet ID, we would require roughly 224GB of memory assuming a 16-byte tweet ID and 140 bytes of content, which sounds infeasible.
    EDIT1: typos 😅
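The ID-only case in this estimate is straightforward to reproduce (the inputs, e.g. the 16-byte ID, are the commenter's illustrative assumptions, not measured values; the with-content figure additionally depends on how many bytes each character is assumed to take):

```python
# Back-of-envelope sizing for a timeline cache storing only tweet IDs.
users = 1_000_000        # assumed daily active users
tweets_per_user = 100    # cached timeline depth per user
id_bytes = 16            # assumed tweet ID size

entries = users * tweets_per_user
total_bytes = entries * id_bytes
print(total_bytes / 1e9)  # -> 1.6 (GB), matching the 1.6GB figure above
```

Even the larger with-content estimates are within reach of a single big-memory node, and the cache can also be sharded, as noted in the reply below this sort of arithmetic is what decides that.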

    • @interviewpen
      @interviewpen  9 months ago +1

      Good thinking, all your math sounds reasonable. However, I disagree that 224GB of memory is unreasonable...there's plenty of systems with that much memory or more. It is also possible to shard an in-memory cache if the dataset becomes too large to fit on one node. There's also a number of other caching strategies for a system like this that may be more or less effective :) Thanks for watching!

  • @amanuel2135
    @amanuel2135 9 months ago

    What app do you guys use to draw? It's so beautiful 😭

    • @interviewpen
      @interviewpen  9 months ago +2

      We use GoodNotes on an iPad. Thanks!

  • @firezdog
    @firezdog 8 months ago +1

    I'm not sure I understand using tweet ID + timestamp as a shard key -- doesn't each tweet have a unique ID? Wouldn't that lead to as many shards as there are tweets? (And no tweet has multiple timestamps, so...)
    If it were the Twitter user ID (uid), I think it makes sense, since you want to request the tweets of a given user over a given time period.

    • @interviewpen
      @interviewpen  8 months ago

      Sharding by user could work, but we run into some problems since data from certain users is being accessed far more frequently than others. It's OK to have as many shard keys as tweets--a single node in our database cluster can be responsible for many records with different shard keys. This does lead to a lot of queries having to check every node in the cluster, but that's why we have the cache!

  • @keyurshah6298
    @keyurshah6298 10 months ago

    Arriving here after Threads by Instagram is out. Pretty sure someone from Meta saw this video and gave the idea to higher management, lol

    • @interviewpen
      @interviewpen  10 months ago +1

      🤠 thanks for watching

  • @shiiswii4136
    @shiiswii4136 1 year ago +1

    👍

  • @Tony-dp1rl
    @Tony-dp1rl 7 months ago

    I wouldn't have gone down the sharding approach with a cache for a write database; too complex. Writes are a small percentage of traffic. You still have consistency problems (CAP theorem) anyway, and all you've done is add hops and latency and reduce your SLA. A write queue would have been simpler IMHO.

    • @interviewpen
      @interviewpen  7 months ago

      Right, we'd have to go into the math to see if a single node could handle the writes and storage size. Thanks for watching!

  • @ryzny7
    @ryzny7 9 months ago

    "120 chars as we on the normal Twitter" ... the good old days :D

  • @engridmedia
    @engridmedia 9 months ago

    I thought I knew everything 😂

    • @interviewpen
      @interviewpen  9 months ago

      Lol :) Glad you enjoyed it

  • @darthkumamon5541
    @darthkumamon5541 1 year ago +1

    This handwriting is way too ugly; it really hurts the viewing experience.

    • @interviewpen
      @interviewpen  1 year ago

      ok - we'll try to write a bit neater next time! Thanks for watching!

    • @darthkumamon5541
      @darthkumamon5541 1 year ago

      @@interviewpen favour