Why do we need a scheduled job to update the cache for every user?
3:05 Post_User does not need an ID column; as a weak entity, its primary key is the composite of the keys it references.
So crisp and clear.
You made me start thinking about a lot of things in my project. Thank you very much!
A question to Irtiza or anyone:
Step 1) So I fill the Feed cache with the new post IDs that belong to a user and should be displayed to them.
Step 2) At some point I should probably remove the cached posts... But when? When the user has seen the post? Or should there be an expiration on each cached post?
1. Feels like it.
2. Both seem valid, but the second makes more sense. Let's say there is a list of posts cached for some users, but those users have not used the app for a while. It makes sense to remove the posts after some time.
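A minimal sketch of that second option, assuming the feed is a Redis sorted set per user with a sliding TTL (the key scheme, TTL, and feed cap are hypothetical choices, not from the video):

```python
import time
import redis  # redis-py client; assumes a reachable Redis instance

r = redis.Redis(host="localhost", port=6379)

FEED_TTL_SECONDS = 7 * 24 * 3600  # hypothetical: feeds idle for a week age out

def fan_out_post(post_id: str, friend_ids: list[str]) -> None:
    """Append a new post ID to each friend's cached feed."""
    now = time.time()
    for friend_id in friend_ids:
        key = f"feed:{friend_id}"          # hypothetical key scheme
        r.zadd(key, {post_id: now})        # score by creation time for ordering
        r.expire(key, FEED_TTL_SECONDS)    # sliding expiry: inactive users drop off
        r.zremrangebyrank(key, 0, -1001)   # cap each feed at the newest 1000 posts
```

Expiring the whole key per user (rather than per post) keeps inactive users from holding memory, while the rank trim bounds the feed size for active ones.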
One thing missing in the design is what happens for influencers and celebrities, where the push model would not make sense.
Do you mind sharing how to account for this?
@vinaymiriyala4522 Use a fan-out-on-read model for celebrities. Just before rendering the feed, fetch all the saved feed entries along with posts from the celebrities you follow.
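A rough sketch of that hybrid read path, assuming precomputed feeds sit in Redis and celebrity posts are pulled at read time (the key names and the `posts_by_user` lookup are hypothetical stand-ins for a Post Service call):

```python
import redis

r = redis.Redis(decode_responses=True)  # return str instead of bytes

def fetch_recent_post_ids(celeb_id: str) -> list[str]:
    """Hypothetical stand-in for a live Post Service lookup."""
    return r.zrevrange(f"posts_by_user:{celeb_id}", 0, 9)

def load_feed(user_id: str, celeb_ids_followed: list[str]) -> list[str]:
    """Fan-out-on-write for regular friends, fan-out-on-read for celebrities."""
    # Precomputed part: friends' posts were fanned out to this user at write time.
    feed = r.zrevrange(f"feed:{user_id}", 0, 99)
    # Read-time part: celebrity posts are never fanned out; fetch them now.
    for celeb_id in celeb_ids_followed:
        feed += fetch_recent_post_ids(celeb_id)
    return feed  # hand off to the Ranking Service before rendering
```

The point of the split: one celebrity post would otherwise trigger millions of cache writes, so their posts are merged in per request instead.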
What a great channel!
Thanks so much!
Super explanation!
Great design and clearly articulated! Thanks a lot! I just wonder, why does the stream processor need to talk to the Feed Service? I thought the Feed Service now just reads results from the Redis cache. Could you help clarify?
I should have been clearer. You are right: the Feed Service reads directly from the Redis cache.
For the Scheduled Job, you said you would iterate through all the users in your database and update the Feed Cache. If it updates the feed for every single user in our system (let's say 5M), would you be adding 5M rows to the Feed Cache?
My thought was that the Feed Cache would only store a percentage (let's say 20%) of daily users.
Hi! You can do it both ways depending on what kind of infra you have for database and cache.
I have a question. Let's say A and B are friends. When A creates a post, it writes to the Redis cache on server1 to build the feed for friend B.
However, friend B gets routed to server2, which means it won't have access to this cache.
In other words, if A has 100 friends, how do we update the feed cache for all 100 of them when A creates a post? They are on different servers, and their caches will not be on server1.
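One common answer, sketched below under the assumption that the feed cache is a Redis Cluster: entries are located by key hash, not by whichever app server a user happens to connect to, so `feed:<friend_id>` always lands on one deterministic shard that any server can read later.

```python
import time
from redis.cluster import RedisCluster  # redis-py's cluster-aware client

# The client learns the slot -> shard mapping from the cluster itself,
# so every app server routes feed:<id> keys to the same shard.
rc = RedisCluster(host="localhost", port=7000)

def fan_out(post_id: str, friend_ids: list[str]) -> None:
    now = time.time()
    for friend_id in friend_ids:
        # CRC16(key) mod 16384 picks the hash slot; the shard owning that
        # slot stores the entry, regardless of which server A or B talks to.
        rc.zadd(f"feed:{friend_id}", {post_id: now})
```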
Is there a way to shut down the background music?
Unfortunately, no :( I was trying out a different style, which clearly didn't do well haha.
good video, love it
Missing one piece of context: why does the Feed Stream Processor interact with the Feed Service? You were saying "the feed of users". May I know what that is?
The feed is a precomputed set of posts that the user sees on their home/feed page.
Awesome videos.
What is the name of the tool that you used for the diagrams?
Miro :)
What if the Redis cache does not have the user ID for whom the feed is getting loaded? Does the Feed Service then need to talk to the Post Service? Or will you return no feed for them, which is a poor experience?
Yes. If you run into a cache miss, you should always consult the DB with the "same logic".
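A sketch of that read-through fallback; `build_feed_from_db` is a hypothetical stand-in for re-running the same friends-then-posts-then-ranking logic against the source of truth:

```python
import redis

r = redis.Redis(decode_responses=True)

def build_feed_from_db(user_id: str) -> list[str]:
    """Hypothetical stand-in for querying the Friend/Post services and ranking."""
    return []

def get_feed(user_id: str) -> list[str]:
    key = f"feed:{user_id}"
    cached = r.zrevrange(key, 0, 99)
    if cached:
        return cached
    # Cache miss: rebuild with the same logic the write path uses.
    post_ids = build_feed_from_db(user_id)
    if post_ids:
        # Negative scores preserve the ranked order under zrevrange.
        r.zadd(key, {pid: -i for i, pid in enumerate(post_ids)})
    return post_ids
```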
If the posts get stored and picked up by the CDC before hitting the Moderation Stream Processor and then the Feed Stream Processor, how is that going to prevent offending messages from being posted?
That's a great point!
Thanks for the amazing content! Rather than using a CDC, can we simply write a "post_created" event directly to Kafka from the post service? So the post service does two jobs: one, write to the database, and two, write an event to Kafka.
Yup! That works too. Totally depends on what kind of architecture you have.
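A sketch of that dual write, using kafka-python and SQLite as a stand-in for the post database; one caveat worth noting is that without CDC (or an outbox table) a crash between the two writes silently drops the event:

```python
import json
import sqlite3  # stand-in for the real post database
from kafka import KafkaProducer  # kafka-python client

db = sqlite3.connect("posts.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS posts (id TEXT PRIMARY KEY, user_id TEXT, body TEXT)"
)

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def create_post(post_id: str, user_id: str, body: str) -> None:
    # Job 1: persist the post.
    db.execute("INSERT INTO posts VALUES (?, ?, ?)", (post_id, user_id, body))
    db.commit()
    # Job 2: emit the event for the feed stream processor.
    # Caveat: if the process dies right here, the DB row exists but the
    # event was never sent -- which is exactly what CDC/outbox avoids.
    producer.send("post_created", {"post_id": post_id, "user_id": user_id})
    producer.flush()
```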
What happens to the posted data when it fails moderation but is still being processed by other workers / has already been written into storage?
Depending on your tolerance level, you can start processing after moderation, or go ahead and delete records / evict caches after something is flagged as inappropriate by the moderation system.
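A sketch of the delete/evict option, assuming a hypothetical `post_flagged` topic emitted by the moderation system and the per-user feed keys used in earlier comments:

```python
import json
import redis
from kafka import KafkaConsumer  # kafka-python client

r = redis.Redis(decode_responses=True)

consumer = KafkaConsumer(
    "post_flagged",  # hypothetical topic from the moderation system
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v),
)

for event in consumer:
    post_id = event.value["post_id"]
    # Evict the flagged post from every feed it was fanned out to.
    for user_id in event.value["fanned_out_to"]:  # hypothetical event field
        r.zrem(f"feed:{user_id}", post_id)
    # The source-of-truth row would be soft-deleted in the Post DB as well.
```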
For pagination: let's assume you have 100 posts cached for each user. Would you consider another service to add more posts to this user's cache upon reaching the last available posts?
You can store all the IDs in your cache, and paginate there. Given you are only storing IDs, and not post details, you can add a ton there.
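A sketch of paginating purely over cached IDs (page size and key scheme are hypothetical); only the returned page of IDs then gets hydrated via the Post Service:

```python
import redis

r = redis.Redis(decode_responses=True)
PAGE_SIZE = 20  # hypothetical page size

def get_feed_page(user_id: str, page: int) -> list[str]:
    """Return one page of post IDs; details are fetched separately per page."""
    start = page * PAGE_SIZE
    # The sorted set holds IDs only, so even thousands of entries stay tiny.
    return r.zrevrange(f"feed:{user_id}", start, start + PAGE_SIZE - 1)
```

Since memory cost scales with ID count rather than post size, the cache can hold far more than 100 entries per user before a backfill service is needed.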
Can Post_User and Post be combined into one table?
There is a potential bottleneck on the Post API writing to the User-Post table before CDC/Kafka. Maybe partition or shard this part.
Or swap this part out for a NoSQL server.
Is the Moderation Service updating the Post_User table if any post is found to be malicious?
Yes, that would be the idea.
But this design is at a very high level, so I might have not mentioned that explicitly.
Great, thank you!
Hi, great content! Why do we need a Post_User table? Couldn't we have a UserID column in the Post table that records the owner's ID?
Yeah you could do that too. But having a post_user table will let you store more fields about the relationship if needed.
I have one doubt about the data model design. What would happen if we do create a separate POST_USER table and include a User_id in the Post table?
Yes, that should be done, as a Post cannot exist alone; it must have a POST_ID and USER_ID association.
Why do you need a separate ID column for the Friends and Post_User tables when you can just use a composite key (postID, userID for Post_User), which uniquely determines a row?
I always prefer having an auto-incrementing ID column for all my tables. It helps with JOINs in the future if you are not considering all your use cases right now. And it's worth the performance trade-off given the simplicity of that column.
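The two layouts side by side, sketched with SQLAlchemy (table and column names here are hypothetical):

```python
from sqlalchemy import Column, Integer, PrimaryKeyConstraint, UniqueConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class PostUserComposite(Base):
    """Composite key only: (post_id, user_id) uniquely identifies a row."""
    __tablename__ = "post_user_composite"
    post_id = Column(Integer, nullable=False)
    user_id = Column(Integer, nullable=False)
    __table_args__ = (PrimaryKeyConstraint("post_id", "user_id"),)

class PostUserSurrogate(Base):
    """Surrogate key: one simple column to reference in future JOINs."""
    __tablename__ = "post_user_surrogate"
    id = Column(Integer, primary_key=True, autoincrement=True)
    post_id = Column(Integer, nullable=False)
    user_id = Column(Integer, nullable=False)
    # The pair stays unique even though it is no longer the primary key.
    __table_args__ = (UniqueConstraint("post_id", "user_id"),)
```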
1.) Why does the Feed Stream Processor need to talk to the Post Service?
2.) How does the Feed Service fetch the feed of a user whose entry isn't present in the cache at all? It should talk to the Friend Service and the Ranking Service, fetch the relevant details, push them to the cache, and return the response, right?
1. The stream processor will need to pull the details of the post, since it usually deals with IDs only.
2. Yes, that's correct.
Thanks
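A sketch of point 1, assuming the processor consumes ID-only events and calls a hypothetical Post Service endpoint (the `post-service` URL and event fields are made up for illustration):

```python
import json
import requests  # HTTP call to a hypothetical Post Service
import redis
from kafka import KafkaConsumer

r = redis.Redis(decode_responses=True)
consumer = KafkaConsumer(
    "post_created",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v),
)

for event in consumer:
    post_id = event.value["post_id"]
    # The event carries IDs only; pull full details from the Post Service.
    post = requests.get(f"http://post-service/posts/{post_id}").json()
    # Fan the post out to each friend's cached feed.
    for friend_id in post["friend_ids"]:  # hypothetical response field
        r.zadd(f"feed:{friend_id}", {post_id: post["created_at"]})
```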
I don't think storing age as just an integer makes sense; rather, storing the DOB and computing the age from it at run time is the better approach.
Yup! I agree.
The purpose of the video was to design the whole system, not dive deeper into individual data model. So I decided to keep things simple : )
@irtizahafiz Gotcha.
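For what it's worth, the run-time computation the comment describes is only a few lines (a minimal sketch):

```python
from datetime import date
from typing import Optional

def age_from_dob(dob: date, today: Optional[date] = None) -> int:
    """Derive age at run time, so it never goes stale like a stored integer."""
    today = today or date.today()
    # Subtract one if this year's birthday hasn't happened yet.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

print(age_from_dob(date(1990, 6, 15)))  # exact value depends on today's date
```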
🔥🔥
Would you upload the lecture slides?
Hi! Unfortunately, I don't have the slides for this one. For most of the other ones I started uploading PDFs or slides. Hope that helps!
👍🚀
This is great, but I don't think it is efficient to create feeds for users who may never use the service at all. On Twitter or other social networks there must be millions of inactive users who follow, say, Elon Musk, so every time Elon tweets you are doing a lot of unnecessary work for those millions of inactive users. Besides that, I'd like more details about the Ranking service. In the first example, I don't see how it is efficient to fetch all the posts just to send them to the Ranking service.
Agreed. It is a trade-off to be made in terms of the freshness of the feed. So one solution could be to refresh the feed only if the user visits and refreshes their Newsfeed Page.
It is the price you pay for having the user's feed already computed. Users will not use it if they need to wait a minute for it to be ready. And this approach only works for regular users; for users with a huge number of followers, you don't follow the same approach.
Twitter actually does create feeds for every user with every new post. It’s counter-intuitive, but they do own their servers for performing the compute.
The API gateway knows which service to hit, not the load balancer.