Design Scalable News Feed System Similar to Instagram, Facebook & Twitter | System Design
- Published May 20, 2024
- Design a scalable news feed system similar to feeds on Instagram, Facebook, and Twitter! We start with a simple working version and then build up to an optimized decoupled architecture while talking about the different tradeoffs that we are making.
System Design Playlist: • System Design Beginner...
00:00 Final Architecture Teaser
01:27 High-Level Requirements
02:20 Data Models
04:55 Creating A Post
06:50 Kafka CDC + Stream Processors
09:25 CDC Streams + Kafka
12:25 Getting User’s Feed
15:30 Problems with Computing Feed at Every Request
17:00 Pre-computing Feed with Redis Cache
20:45 Populating Feed Cache in Realtime
24:25 Populating Feed Cache Offline
25:55 Final Architecture Summary
29:52 Outro & Future Videos
#systemDesign #softwareArchitecture #interview
Visit me at: irtizahafiz.com
Reach me at: irtizahafiz9@gmail.com
So crisp and clear.
why do we need scheduled job to update cache for every user?
Super explanation!
What a great channel!
Thanks so much!
Great, thank you!
good video, love it
Thanks
Great design and clearly articulated! Thanks a lot! I just wonder: why does the stream processor need to talk to the feed service? I thought the feed service now just reads results from the Redis cache. Could you help clarify?
I should have been clearer. You are right, the feed service directly reads from the Redis cache.
You made me start thinking about a lot of things in my project. Thank you very much!
A question for Irtiza or anyone:
Step 1) So I fill the feed cache with the new post IDs that belong to a user and should be displayed to that user.
Step 2) Probably I should remove the cached posts at some point... but when? When the user has seen the post? Or should there be an expiration on each cached post?
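One common answer (an assumption here, not stated in the video): don't evict when a post is seen; instead cap each user's cached feed at the N most recent post IDs and put a TTL on the whole key. A minimal in-memory sketch, with a plain dict standing in for Redis `LPUSH` + `LTRIM` (class and names hypothetical):

```python
class FeedCache:
    """In-memory stand-in for a Redis feed cache (newest-first post IDs per user)."""

    def __init__(self, max_len=100):
        self.max_len = max_len
        self.feeds = {}  # user_id -> list of post IDs, newest first

    def push(self, user_id, post_id):
        feed = self.feeds.setdefault(user_id, [])
        feed.insert(0, post_id)   # like LPUSH: newest goes to the front
        del feed[self.max_len:]   # like LTRIM 0 max_len-1: drop the oldest

    def page(self, user_id, offset, limit):
        return self.feeds.get(user_id, [])[offset:offset + limit]


cache = FeedCache(max_len=3)
for pid in ["p1", "p2", "p3", "p4"]:
    cache.push("alice", pid)
print(cache.page("alice", 0, 3))  # ['p4', 'p3', 'p2'] — p1 was trimmed
```

In real Redis you would additionally `EXPIRE` the key, so feeds of users who never come back age out on their own.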
Hi, great content! Why do we need a Post_User table? We could have a UserId column in the Post table that records the owner's ID, right?
Yeah you could do that too. But having a post_user table will let you store more fields about the relationship if needed.
I have one doubt about the data model design: what would happen if we create a separate POST_USER table and also include User_id in the Post table?
Thanks for the amazing content! Rather than using CDC, can we simply write a "post_created" event directly to Kafka from the post service? So the post service does two jobs: one, write to the database, and two, write an event to Kafka.
Yup! That works too. Totally depends on what kind of architecture you have.
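That dual-write approach can be sketched as below, with in-memory lists standing in for the database and the Kafka producer (the event shape is hypothetical). The caveat, and a common reason to prefer CDC or an outbox table, is that a crash between the two writes can persist the post without ever emitting the event:

```python
def create_post(db, producer, post):
    """Dual write: persist the post, then emit a 'post_created' event."""
    db.append(post)  # 1) write to the posts table
    # 2) produce an event to Kafka (stand-in: append to a list)
    producer.append({"type": "post_created", "post_id": post["id"]})
    # Caveat: a crash between step 1 and step 2 drops the event silently —
    # the classic dual-write problem that CDC / the outbox pattern avoids.


db, events = [], []
create_post(db, events, {"id": "p1", "user_id": "u1", "text": "hello"})
print(db, events)
```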
For the Scheduled Job, you said that you will iterate through all the users in your database and update the Feed Cache. If it updates the feed for every single user in our system (let's say 5M), would you be adding 5M rows to the Feed Cache?
My thought was that the Feed Cache would only store a percentage (let's say 20%) of daily users.
Hi! You can do it both ways depending on what kind of infra you have for database and cache.
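One way to cap that work, in the spirit of the 20%-of-daily-users idea above, is to have the scheduled job skip users who haven't been seen recently. A sketch under those assumptions (all names hypothetical; plain dicts stand in for the real database and cache, and `now` is fixed so the example is deterministic):

```python
from datetime import datetime, timedelta


def refresh_feeds(users, last_seen, feed_cache, compute_feed,
                  now, active_window=timedelta(days=1)):
    """Recompute cached feeds only for users active within the window."""
    refreshed = []
    for user in users:
        seen = last_seen.get(user)
        if seen is not None and now - seen <= active_window:
            feed_cache[user] = compute_feed(user)
            refreshed.append(user)
    return refreshed


now = datetime(2024, 5, 20)
last_seen = {"alice": now - timedelta(hours=2),   # active today
             "bob": now - timedelta(days=30)}     # inactive, skipped
feed_cache = {}
refresh_feeds(["alice", "bob"], last_seen, feed_cache,
              compute_feed=lambda u: [f"{u}-post-1"], now=now)
print(feed_cache)  # {'alice': ['alice-post-1']}
```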
3:05 Post_User does not need an ID column; as a weak entity, its primary key is the composite of its two foreign keys.
🔥🔥
For pagination!
Let's assume you have 100 posts cached for each user.
Would you consider another service to add more posts to this user's cache upon reaching the last available post?
You can store all the IDs in your cache, and paginate there. Given you are only storing IDs, and not post details, you can add a ton there.
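Paginating over a cached ID list and hydrating only the requested page might look like this (hypothetical names; a dict stands in for the post store):

```python
def get_feed_page(feed_ids, post_store, cursor, page_size):
    """Slice the cached post IDs, then hydrate just that page."""
    page_ids = feed_ids[cursor:cursor + page_size]
    posts = [post_store[pid] for pid in page_ids if pid in post_store]
    # Return None when there is nothing left to paginate over.
    next_cursor = cursor + page_size if cursor + page_size < len(feed_ids) else None
    return posts, next_cursor


feed_ids = ["p3", "p2", "p1"]  # newest first, IDs only — cheap to cache
post_store = {"p1": "first", "p2": "second", "p3": "third"}
posts, cursor = get_feed_page(feed_ids, post_store, cursor=0, page_size=2)
print(posts, cursor)  # ['third', 'second'] 2
```

Because only IDs live in the cache, the list can be long; the expensive hydration cost is paid one page at a time.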
Awesome videos.
What is the name of the tool that you used for the diagrams?
Miro :)
One thing missing in the design is what happens for influencers and celebrities, where the push model would not make sense.
Do you mind sharing how to account for this op
@@vinaymiriyala4522 Use a fan-out-on-read model for celebrities: just before rendering the feed, fetch all the saved feed entries along with posts from the celebrities you follow.
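That hybrid, fan-out on write for regular users plus fan-out on read for celebrities, can be sketched like this (all data structures and names are hypothetical stand-ins):

```python
def read_feed(user, feed_cache, following, celebrities, celeb_posts):
    """Merge the precomputed feed with celebrity posts pulled at read time."""
    base = list(feed_cache.get(user, []))        # (post_id, ts) pushed at write time
    for friend in following.get(user, []):
        if friend in celebrities:                # pull model for high-fanout accounts
            base.extend(celeb_posts.get(friend, []))
    base.sort(key=lambda p: p[1], reverse=True)  # newest first
    return [pid for pid, _ in base]


feed_cache = {"bob": [("p1", 10), ("p2", 30)]}   # fanned out on write
following = {"bob": ["alice", "celeb1"]}
celeb_posts = {"celeb1": [("c1", 20)]}           # never fanned out
print(read_feed("bob", feed_cache, following, {"celeb1"}, celeb_posts))
# ['p2', 'c1', 'p1']
```

The design choice: a celebrity post is written once and read many times, instead of being copied into millions of follower caches on every post.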
I have a question. Let's say A and B are friends. When A creates a post, it writes to the Redis cache on server1 to build the feed for friend B.
However, friend B gets routed to server2, which means it won't have access to that cache.
In other words, if A has 100 friends, when A creates a post, how do we update the feed cache for those 100 friends? They are on different servers, and their caches will not be on server1.
Why do you need a separate ID column for the Friends and PostUser tables when you can just use a composite key (postID, userID for PostUser), which uniquely determines a row?
I always prefer having an auto incrementing ID column for all my tables. It helps with JOINs in the future, if you are not considering all your use cases right now. And it's worth the performance tradeoff given the simplicity of that column.
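Both options from this thread, a composite primary key versus a surrogate auto-increment ID plus a uniqueness constraint, can be sketched in SQLite (table names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Variant A: weak entity — the composite key alone is the primary key.
conn.execute("""CREATE TABLE post_user_composite (
    post_id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    PRIMARY KEY (post_id, user_id))""")

# Variant B: surrogate auto-increment ID, with uniqueness still enforced.
conn.execute("""CREATE TABLE post_user_surrogate (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    post_id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    UNIQUE (post_id, user_id))""")

conn.execute("INSERT INTO post_user_surrogate (post_id, user_id) VALUES (1, 2)")
row = conn.execute("SELECT id, post_id, user_id FROM post_user_surrogate").fetchone()
print(row)  # (1, 1, 2) — the surrogate id is assigned automatically
```

Either enforces one row per (post, user) pair; the surrogate ID just gives future tables a single compact column to join on.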
👍🚀
What if the Redis cache does not have the user ID for whom the feed is being loaded? Does the feed service then need to talk to the post service? Or will you return no feed for them, which is a poor experience?
Yes. If you run into a cache miss, you should always fall back to the DB with the same logic.
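That cache-aside fallback can be sketched as follows (hypothetical names; a dict stands in for Redis, and `compute_feed_from_db` represents the same friend-fetch + ranking logic used by the precompute path):

```python
def get_feed(user_id, cache, compute_feed_from_db):
    """Cache-aside read: serve from cache, fall back to the DB on a miss."""
    feed = cache.get(user_id)
    if feed is None:                          # cache miss
        feed = compute_feed_from_db(user_id)  # same logic as the precompute job
        cache[user_id] = feed                 # backfill so the next read is a hit
    return feed


cache = {}
calls = []

def compute(user_id):
    calls.append(user_id)  # track how often we hit the "DB"
    return ["p1", "p2"]

print(get_feed("alice", cache, compute))  # ['p1', 'p2'] — miss, computed
print(get_feed("alice", cache, compute))  # ['p1', 'p2'] — hit, no recompute
```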
What happens to the posted data when it fails moderation but has already been processed by other workers / written into storage?
Depending on your tolerance level, you can start processing only after moderation, or go ahead and delete records / evict caches after something is flagged as inappropriate by the moderation system.
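The delete-records / evict-caches path might look like this (hypothetical names; dicts stand in for the post store and the per-user feed caches):

```python
def handle_flagged_post(post_id, post_store, feed_cache):
    """Tear down a moderated post: delete it and evict it from every cached feed."""
    post_store.pop(post_id, None)  # remove the record itself
    for user in feed_cache:        # evict the ID from each user's cached feed
        feed_cache[user] = [pid for pid in feed_cache[user] if pid != post_id]


post_store = {"p1": "ok post", "p2": "flagged post"}
feed_cache = {"alice": ["p2", "p1"], "bob": ["p2"]}
handle_flagged_post("p2", post_store, feed_cache)
print(post_store, feed_cache)  # {'p1': 'ok post'} {'alice': ['p1'], 'bob': []}
```

At scale you would drive this from the moderation service's own event stream rather than scanning every cache entry, but the end state is the same.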
Is the moderation service updating the Post_User table if any post is found to be malicious?
Yes that would be the idea.
But this design is at a very high level, so I might not have mentioned that explicitly.
1.) Why does feedStreamProcessor need to talk to Post service?
2.) How does Feed Service fetch the information of a user whose entry isn't present in cache at all? It should be talking to Friend service, Ranking service and then fetch the relevant details and then push it to cache and return the response, right?
1. The stream processor will need to pull details of the post. It usually deals with IDs only.
2. Yes, that's correct.
Missing one bit of context: why does the feed stream processor interact with the feed service? You were saying "the feed of users". May I know what that is?
The feed is a precomputed set of posts that the user sees on their home/feed page.
If the posts get stored via CDC before they hit the Moderation Stream Processor and then the Feed Stream Processor, how is it going to prevent offending messages from being posted?
That's a great point!
would you upload the lecture slide
Hi! Unfortunately, I don't have the slides for this one. For most of the other ones I started uploading PDFs or slides. Hope that helps!
I don't think storing age as just an integer makes sense; rather, storing the DOB and computing the age from it at run time is the better approach.
Yup! I agree.
The purpose of the video was to design the whole system, not dive deeper into the individual data models. So I decided to keep things simple : )
@@irtizahafiz gotcha
API gateways know which service to hit, not load balancers.
This is great, but I don't think it is efficient to create feeds for users when you don't know if they will use the service at all. On Twitter or other social networks there must be millions of inactive users who are, say, following Elon Musk, so every time Elon tweets you are doing a lot of unnecessary work for those millions of inactive users. Besides that, I'd like more details about the Ranking service. In the first example, I don't see how it is efficient to fetch all the posts just to send them to the Ranking service.
Agreed. It is a trade-off in terms of the freshness of the feed. One solution could be to refresh the feed only when the user visits and refreshes their Newsfeed page.
It is the price you pay for having the user's feed already computed; users will not use it if they need to wait a minute for it to be ready. And this approach only works for regular users. For users with a huge number of followers, don't follow the same approach.
Twitter actually does create feeds for every user with every new post. It’s counter-intuitive, but they do own their servers for performing the compute.