Jesus christ this guy's material is amazing... and each video is so compact. He basically never wastes a single word....
I have to pause or rewind constantly, and watch every video twice to digest it.
@@antonfeng1434 me too
@@antonfeng1434 Same here
@@xordux7 Same here
A summary of questions and answers asked in the comments below.
1. Can we use hash maps but flush their contents (after converting to a heap) to storage every few seconds instead of using CMS?
For small scale it is totally fine to use hash maps. When scale grows, hash maps may become too big (use a lot of memory). To prevent this we can partition the data, so that only a subset of all the data comes to a Fast Processor service host. But it complicates the architecture. The beauty of CMS is that it consumes a limited (predefined) amount of memory and there is no need to partition the data. The drawback of CMS is that it calculates counts approximately. Tradeoffs, tradeoffs...
2. How do we store the count-min sketch and the heap in the database? How do we design the table schema?
A heap is just a one-dimensional array and a count-min sketch is a two-dimensional array, meaning both can easily be serialized into a byte array, using either the language's native serialization API or a well-regarded serialization framework (Protocol Buffers, Thrift, Avro). We can then store them in that form in the database.
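For illustration, a minimal Python sketch of that idea, using pickle as a stand-in for Protobuf/Thrift/Avro (the sizes and contents below are made up):

import pickle

# Hypothetical in-memory state of a Fast Processor host.
sketch = [[0] * 1000 for _ in range(5)]        # count-min sketch: depth x width matrix
top_k_heap = [(42, "videoA"), (17, "videoB")]  # (count, video_id) pairs

# Serialize both structures into byte arrays that can be written to a blob column.
sketch_bytes = pickle.dumps(sketch)
heap_bytes = pickle.dumps(top_k_heap)

# Deserialize on read.
restored_sketch = pickle.loads(sketch_bytes)
restored_heap = pickle.loads(heap_bytes)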
3. Count-min sketch is there to save memory, but we still need n log k time to get the top k, right?
Correct. It is n log k (for the heap) + k log k (for sorting the final list). N is typically much larger than k, so n log k is the dominant term.
4. If the count-min sketch is only used for a 1-minute count, why wouldn't we directly use a hash table to count? After all, the size of the data set won't grow infinitely.
For small to medium scale, a hash table solution may work just fine. But keep in mind that if we try to create a service that needs to find top K lists for many different scenarios, there may be many such hash tables and it will not scale well. For example, a top K list for the most liked/disliked videos, the most watched (based on time) videos, the most commented videos, the videos with the highest number of exceptions during opening, etc. Similar statistics may be calculated at the channel level, per country/region and so on. Long story short, there may be many different top K lists we may need to calculate with our service.
5. How do we merge two one-hour top k lists to obtain the top k for two hours?
We need to sum up the values for the same identifiers. In other words, we sum up views for the same videos from both lists and take the top K of the merged list (either by sorting or by using a heap). [This won't necessarily be a 100% accurate result, though.]
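A minimal sketch of such a merge (the video IDs and counts below are made up):

from collections import Counter

def merge_top_k(list_a, list_b, k):
    # Sum views for the same video across both lists, then keep the k largest totals.
    totals = Counter()
    for video_id, views in list_a + list_b:
        totals[video_id] += views
    return totals.most_common(k)

hour1 = [("A", 10), ("B", 7), ("C", 5)]
hour2 = [("C", 9), ("D", 8), ("A", 2)]
print(merge_top_k(hour1, hour2, k=3))  # [('C', 14), ('A', 12), ('D', 8)]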
6. How does the count-min sketch work when there are different scenarios like you mentioned (most liked/disliked videos)? Do we need to build multiple sketches? Do we need designated hash functions for each of these categories? Either way, they need more memory, just like a hash table.
Correct. We need a separate sketch for each event type we count: video views, likes, dislikes, comment submissions, etc.
7. Regarding the slow path, I am confused by the data partitioner. Can we remove the first Distributed Messaging System and the data partitioner? The API gateway would send messages directly to the 2nd Distributed Messaging System based on its partitions. For example, the API gateway would send all B messages to partition 1, all A messages to partition 2 and all C messages to partition 3. Why do we need the first Distributed Messaging System and the data partitioner? If we use Kafka as the Distributed Messaging System, we can just create a topic for a set of message types.
In the case of a large scale (e.g., YouTube scale), the API Gateway cluster will be processing a lot of requests. I assume these are thousands or even tens of thousands of CPU-heavy machines, with the main goal of serving video content and doing as little "other" work as possible. On such machines we usually want to avoid any heavy aggregation or logic. The simplest thing we can do is batch video view requests together, without doing any aggregation at all: create a single message that contains something like {A = 1, B = 1, C = 1} and send it for further processing. In the option you mentioned we would still need to aggregate on the API Gateway side. We cannot afford to send a single message to the second DMS per video view request, due to the high scale; i.e., we cannot have three messages like {A = 1}, {B = 1}, {C = 1}. As mentioned in the video, we want to decrease the request rate at every next stage.
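One possible reading of that batching idea, as a small Python sketch (the flush thresholds and the send callback are my assumptions, not from the video):

import time

class ViewEventBatcher:
    # Collects single view events on an API Gateway host and flushes them
    # as one message to the messaging system; no aggregation is performed.
    def __init__(self, send, max_size=1000, max_age_seconds=1.0):
        self.send = send                 # callback that publishes one message
        self.max_size = max_size
        self.max_age = max_age_seconds
        self.batch = []
        self.started = time.time()

    def on_view(self, video_id):
        self.batch.append((video_id, 1))  # e.g. ("A", 1)
        if len(self.batch) >= self.max_size or time.time() - self.started >= self.max_age:
            self.flush()

    def flush(self):
        if self.batch:
            self.send(self.batch)         # one message like [("A", 1), ("B", 1), ("C", 1)]
            self.batch = []
        self.started = time.time()

batcher = ViewEventBatcher(send=print, max_size=3)
for video_id in ["A", "B", "C"]:
    batcher.on_view(video_id)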
8. I have a question regarding the fast path, though: it seems like you store the aggregated count-min sketch in the storage system, but is that enough to calculate the top k? I feel like we would need a list of the websites (keys) and to maintain a size-k heap somewhere to figure out the top k.
You are correct. We always keep two data structures in the Fast Processor: a count-min sketch and a heap. We use the count-min sketch to count, while the heap stores the top-k list. In the Storage service we may also keep both, or the heap only. But the heap is always present.
9. So in summary, we still need to store the keys... the count-min sketch helps achieve savings by not having to maintain counts for keys individually... when one has to find the top k elements, one has to iterate through every single key and use the count-min sketch to find the top k elements... is this understanding accurate?
We need to store the keys, but only K of them (or a bit more). Not all.
When a key arrives, we do the following:
- Add it to the count-min sketch.
- Get key count from the count-min sketch.
- Check if the current key is in the heap. If it is present in the heap, we update its count value there. If it is not present, we check whether the heap is already full. If not full, we add this key to the heap. If the heap is full, we compare the minimum heap element's value with the current key's count; if the current key's count is greater, we remove the minimum element and add the current key.
This way we keep only a predefined number of keys. This guarantees that we never exceed the memory budget, as both the count-min sketch and the heap have a limited size.
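To make the steps above concrete, here is a minimal, self-contained Python sketch (the sketch dimensions, hash choice and variable names are my assumptions, not from the video):

import heapq
import hashlib

class CountMinSketch:
    # Minimal count-min sketch: `depth` hash rows with `width` counters each.
    def __init__(self, width=1000, depth=5):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, key):
        for row, col in self._buckets(key):
            self.table[row][col] += 1

    def estimate(self, key):
        return min(self.table[row][col] for row, col in self._buckets(key))

def process_event(key, sketch, heap, keys_in_heap, k):
    # 1) Add the key to the sketch.  2) Read its (approximate) count.
    # 3) Maintain a min-heap of at most k (count, key) pairs.
    sketch.add(key)
    count = sketch.estimate(key)
    if key in keys_in_heap:
        heap[:] = [(c, v) if v != key else (count, v) for c, v in heap]
        heapq.heapify(heap)
    elif len(heap) < k:
        heapq.heappush(heap, (count, key))
        keys_in_heap.add(key)
    elif count > heap[0][0]:
        _, evicted = heapq.heapreplace(heap, (count, key))
        keys_in_heap.discard(evicted)
        keys_in_heap.add(key)

sketch, heap, keys_in_heap = CountMinSketch(), [], set()
for event in ["A", "B", "A", "C", "A", "B"]:
    process_event(event, sketch, heap, keys_in_heap, k=2)
print(sorted(heap, reverse=True))  # approximate top-2: [(3, 'A'), (2, 'B')]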
Video Notes by Hemant Sethi: tinyurl.com/qqkp274
Hi Saurabh. This is amazing! Thank you for collecting all these questions and answers in one place. I would like to find time to do something like this for other videos as well.
I have pinned this comment to be at the top. Thank you once again!
Thanks a lot for this Saurabh!
Need more people like you. Thank you
@@atibhiagrawal6460 glad it's helpful!
Might be worth posting a link to your notes in a standalone comment too so that everyone can see it
@@saurabhmaurya94 That is good idea ! Thank you :D
Couldn't solve this problem in an interview. Found this gem of a video a month after. Will get them next time!
I'm devastated.
I just got out of a last round interview, it was my first time ever being asked a system design question.
I used this channel, among others, to study, and this video is the ONLY video I didn't have time to watch.
My interview question was exactly this, word for word.
I made up a functional and relatively scalable solution on the fly, and the interview felt conversational + it lasted 10 minutes more than it should have, so I think I did alright, but I still struggled a lot in the beginning and needed some help.
Life is cruel sometimes.
THIS GUY is SO COOL. Who else feels that when he's speaking, he's explaining difficult concepts in the most concise way possible - and also touching on what we really need to hear about?!
As luck would have it, I had a similar question in a make-or-break round at Google, and I nailed it since I had watched this several times over before the interview. Got an L6 role offer at Google. Thanks for making my dream come true.
PLEASE come back and make videos again. There's no resource quite like this channel.
All videos in this channel are the best on YT in this category, even to this date. You can find many other channels which may give similar material divided into more than 5 videos with a lot of fluff. Mikhail's videos touch upon every important part without beating around the bush and also give great pointers for identifying what the interviewer may be looking for. Kudos to all the videos in this channel!
Your accent is hard to understand initially, but now I have fallen in love with your accent.
he's ruZZian
5 years later, this is still the best video on this topic on YouTube.
i feel bad that im not paying for this video! the quality is beyond amazing
You shouldn't feel bad. With this much knowledge, he must be getting at least $500k+ in his current job. And by now he must be looking beyond money and toward making a meaningful contribution to society.
He is staff at Stripe, $1M plus easy. He is just sharing his knowledge.
one of the best technical discussions I have seen
Thanks, Stefan. Appreciate the feedback!
How can someone even downvote this? This is just so amazing. Have not learnt so much in 30 minutes in my whole life.
OMG, this is still the best system design video I've ever seen. It's not only for interviews, but also for actual system solution design.
This is one of the best system design videos I have come across in a long time. Keep up the good work!
Thank you, Sourav. Appreciate the feedback.
Great work. I am a senior engineer at a big tech company and I'm still learning a lot from your videos.
I love Mikhail's content. The video is so interactive that it feels like he is talking to you and knows what is going on inside your head :)
I wish all system design interview tutorials were like yours, with so much information precisely and carefully explained in a clear manner, with different trade-offs and topics to discuss with interviewers along the way! Thank you so much.
You're amazing, by far the most detailed and deeply analysed solution I've seen on any design channel. Please never stop making videos.
Thank you very much!! I had gone over all your videos multiple times to understand it well. I had 2 interviews with FAANG in the last week and was offered a job in both! I have to say a lot of the credit goes to you!
I had an interview step with AWS a couple of days ago and they asked me exactly this question. Thank you for your videos.
Very clean explanation, which is rare nowadays. Why did you stop? It would be nice to see new videos from you. Good luck, man!
I agree. Can you please continue doing this?
PLEASE MAKE MORE VIDEOS. WE WILL PAY FOR IT (ADD JOIN BUTTON)!
Thank you for this video. I'm an SWE with 3.5 YOE, 2 of them working full time on a SaaS/PaaS/IaaS platform team, and this video is so helpful; I never thought something this deep might be asked in an interview.
This video literally covers everything. Thank you for making such content; most books don't cover it.
For people wondering why the heap complexity is O(n log(k)) for single-host top k: we do a simple optimization and pop the least frequent item once the heap size reaches k, so we have n operations, each taking O(log k).
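A short, self-contained Python sketch of that single-host version (the sample events are made up):

import heapq
from collections import Counter

def top_k(events, k):
    # Count exactly with a hash map (O(n)), then keep a min-heap of size k;
    # each push/replace is O(log k), so the scan over counts is O(n log k).
    counts = Counter(events)
    heap = []  # min-heap of (count, key) pairs
    for key, count in counts.items():
        if len(heap) < k:
            heapq.heappush(heap, (count, key))
        elif count > heap[0][0]:
            heapq.heapreplace(heap, (count, key))  # pop the least frequent, push the new one
    return sorted(heap, reverse=True)              # k log k to order the final list

print(top_k(["A", "B", "A", "C", "B", "A"], k=2))  # [(3, 'A'), (2, 'B')]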
I think it is admirable that you explained all the inner workings. In a real interview you can probably skip the single-host solution with the heap; that's good for an explanation on YouTube. What I think is more valuable is to also propose some actual technologies for the various components, to make it clear that you are not proposing building this from scratch. I'm surprised that Kafka Streams was not mentioned. Also, for the long path, it is worth discussing the option of storing the raw or pre-aggregated requests in an OLAP db like Redshift. The OLAP db can do the top k efficiently for you with a simple SQL query (all the map-reduce magic is handled under the hood), can act as the main storage, and will also keep you flexible for other analytics queries. It integrates directly with various dashboarding products, and one rarely wants to do just top k.
19:05 slow path
22:00 faster than map reduce but more accurate than countmin
22:43 fast path
25:38 Data partitioner is basically Kafka that reads messages (logs, processed logs with counts, etc.) and stores them in topics
One of the most informative videos I've seen on system design. Would like to see more such content.
This is the best explanation on system design I've ever seen. Thanks Mikhail, that helps A LOT!
So funny, found this channel yesterday and watched this video and been asked pretty much same question at my interview at LinkedIn today. Thanks a lot.
Funny, indeed )) This world is so small ))
Thanks for sharing!
Actually got an offer from Amazon, LinkedIn, Roku and probably Google as well. A lot of it because of this channel. Can’t recommend it enough! Thanks again!
I was asked this same question at my interview last Friday and found out your video today :( Didn't nail it though, hope I can do better next time. Thank you Mikhail, hope you can spend time to create more video like this.
Wow, Sergey. You rock!
And thank you for the praise.
Time will come, Hugh. Just keep pushing!
Ohhhh, why did I not find this channel before... The way you approach the problem and take it forward makes it so easy; otherwise the realm of system design concepts is huge... We need more videos like this... This is the design pattern of system design... Good job!!!!
Glad to have you aboard, coolgoose8555! Thank you for the feedback!
The best system design answer I have seen on RUclips. Thank you!
Thank you, Hugh, for the feedback.
Excellent video. A key thing that you did at the end (and is very useful IMHO) is that you identified many other interview questions that are really the same problem in disguise. That is very good thinking that we all probably need to learn and develop. I encourage you to do that in your other design solutions as well. Thank you for another excellent video.
Sir, your videos are gold. I don't have an interview coming up, but it's rare to find architecture so well explained. Thanks!
I wish I could give this video a thousand likes instead of just 1 !!! these contents are fantastic!!!
This is one of the best system design videos on this topic I have come across. Thanks & keep up the great work, Mikhail!
Wow! This is the best system design review video I've ever seen.
Thanks Mikhail. I can bet... this is the best channel on YouTube. Just binge watch all the videos from this channel and you will learn so much.
I got an offer from an interview I did the day after binging all your videos (looking forward to your distributed counter video!) on top of studying and reviewing all my previous notes on networking and algorithms. This really bridges a knowledge gap for some of us who have experience in specific areas but not enough to put a whole system together or think about it this way. When I used your videos as part of my review material, I always found myself feeling mentally prepared and confident to be in the driver's seat!
Hi SupremePancakes. Really glad for you! Thanks for sharing. Always nice to hear feedback like this!
Even though you passed the interview already, please come back to the channel from time to time. I want this channel not only to help with interviews but, even more importantly, to help improve system design skills for your daily job.
Helping someone to become a better engineer is what makes this all worthwhile for me.
System Design Interview Of course!!! I look forward to more videos and how this channel grows
System Design Interview In the fast path, how is the heap constructed from the count-min sketch table?
Hi Tej. Please take a look at this comment and let me know if more details are needed: ruclips.net/video/kx-XDoPjoHw/видео.html&lc=UgzcpyPR8nmCoaxTV3Z4AaABAg.8xFD1xe1cgU91u3EpZgosP
Among all the materials I have seen on YouTube, this is really the top one. Keep up the good work and thanks for sharing.
Hands down the best system design videos so far!! And I have watched lots of system design videos. Love how you start simple and work all the way up to a complex structure, and how it can apply to different situations.
You are too kind to me, Joy! Thank you for the feedback!
I think your great coverage of the topic shows how well you really know and understand it, compared to other guys who just share what they read last night. Thank you.
For some reason YouTube hides valid comments. I can only see such comments in the inbox, but there is no way for me to reply. Let me re-post such comments on behalf of the people who submitted them.
From @sachinjp
Very detailed and in depth technical video on system design. Thanks for putting so much effort into this.
Thank you for the feedback, @sachinjp. Hopefully this message will find you.
This is yet another great System Design video in this channel! I have two thoughts that might help improve the solution: 1. The question of "top K frequent elements" does not require us to sort those top K elements, thus we can use the "Quick Select" algorithm merely to find the kth element. The point is that after we find the kth element using Quick Select, the array is partitioned such that the top K elements are in the first K positions (but not sorted). This gives the answer in average O(n) time, which is a reduction from n log(k); 2. When you really have a huge amount of data and counts to handle, why not partition the data simply using round-robin for each key? This way, each partition contains (about) the same data, so we only need to calculate the result from one partition. With this approach, we may consider all other partitions 'virtual' or imaginary (without actually using server nodes), so we save on the design cost. What do you think?
Hi Alexander. Thank you for the feedback and great questions!
Here are some of my thoughts:
- Quick Select has O(n) best and average case time complexity, O(n*n) in the worst case. You are correct that it still may be a bit faster on a fixed-size list of size n. But I cannot say the same for streaming data, when new events keep coming and we need to calculate/update the top K list as every new event arrives. A heap guarantees log(k) time complexity. Running Quick Select on an already partially sorted array should take around the same time, but I cannot say what the guaranteed worst-case complexity is in that case.
- I believe when you say round-robin you mean hash-based, right? So that events for the same video always go to the same partition. A "classic" round-robin means "choose the next one in a sequence of machines", which may mean that events for the same video go to different partitions. So, if you mean hash-based, you are correct, we can use this approach.
Two notes, though
a. Hash-based partitioning may lead to the "hot partitions" problem. I mention this in the video, and talk about it in a bit more detail in the latest (step-by-step interview guide) video.
b. When we use a count-min sketch, we do not need to partition the data at all. Partitioning is needed to guarantee that only a limited amount of data will be routed to a particular machine. But because both a count-min sketch and a heap use limited memory (independent of how much data is coming), partitioning is not required at all. This is true for the fast path only, where we calculate approximate results; to calculate accurate results we need to partition.
Please keep sharing your thoughts!
It is not enough to send only the count-min sketch matrix to storage; you also need to send a list of all the event types (keys) that were processed, otherwise you have no way of mapping from the matrix data back to the actual keys (before hashing). The only advantage over the map solution is that you don't need to keep all of it in memory at once; you can stream it from disk, for example.
Calculating the min for each key is O(H), where H is the number of hash functions, and you need to do that for all E event types, so O(E*H). Then you use the priority queue to get the top K, O(E*log(K)), so the total time complexity is O(E*(H + log K)).
Well, you are right. But I think the video is more about a general design for a single event type. We can then start from here based on the functional requirements.
This is a massive flaw you have highlighted. I don't think people understand the consequence of it. You are pretty much showing that using a count-min sketch is a bad design decision, as the key is lost (unlike in a hash map), and that once we have found the top K counts from the count-min sketch we still need to iterate through all the potential keys to see which ones match our top K counts. Moreover, the collision issue is quite profound in this case: when two keys collide on all the rows, we have no way of knowing which one is the real top-K entry.
It's really helpful. I have already watched each video so many times; I learned a lot. Initially, I was frustrated with the accent (I am not a native English speaker either). But now I am okay watching without CC.
These are by far the best videos on system design for interviews. Thanks a lot for taking the time to make and publish these!
I think the open question on this video is how the fast path stores and retrieves data. It's not really answered clearly in any of the comments I could find.
It seems like we are missing an "aggregator" component, which combines the count-min sketches/heaps from all the fast processors. The video seems to imply we'd have a single count-min sketch / heap per time interval. But this will put huge contention on the database - every fast processor will have to lock the sketch and heap, add its local sketch / update the heap, and store it back. So we will have large contention on the DB. In addition, as others pointed out, we need the list of all video IDs to do this - so we can rebuild the heap. But that becomes impractical at large volumes.
The only things I can think of are:
1) Each fast processor stores its heap in the db (a local heap) for the time interval. On query, we aggregate all the local heaps for the interval and build a new global top-K heap. The query component can then store this in a cache like Redis, so it doesn't need to be recalculated. This approach however requires that we partition by video_id all views that are sent to the fast processors; otherwise we can't accurately merge the local top-Ks. The problem with this, though, is we can get hot videos, and those video counts will be handled entirely by a single processor.
2) Use a DB with built-in top-K support, like Redis. In this case, we don't need to partition views at all and can balance across all fast processors. Each fast processor then stores a local map of video counts for a short period of time (like 5s), and periodically flushes all the counts to Redis. Redis takes care of storing the top K in its own probabilistic data structure. Redis should be able to handle 10k RPS like this. If we need to scale further, we have to partition Redis on video_id, for example. And again, our query component will have to aggregate on read all the partitioned local top-Ks and merge-sort them.
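If option 2 were taken, it could look roughly like this using the RedisBloom TOPK commands via redis-py (requires the RedisBloom module; the key name, interval, counts and TOPK.RESERVE parameters below are assumptions for illustration):

import redis

r = redis.Redis(host="localhost", port=6379)

# Reserve a Top-K structure for this time interval (k=10 here; the remaining
# arguments are the sketch width, depth and decay used by RedisBloom).
r.execute_command("TOPK.RESERVE", "views:2024-01-01T10:05", 10, 1000, 7, 0.925)

# Each fast processor periodically flushes its local counts.
local_counts = {"videoA": 17, "videoB": 4}  # hypothetical 5-second buffer
for video_id, count in local_counts.items():
    r.execute_command("TOPK.INCRBY", "views:2024-01-01T10:05", video_id, count)

# The query service reads the current top-k list for the interval.
print(r.execute_command("TOPK.LIST", "views:2024-01-01T10:05"))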
For option 1, if the fast processors send their local top-k to the aggregator, that should be enough to calculate the global top-k for 1 minute. I don't think there's any need to send the CMS to the aggregator. The aggregator creates the 1-minute top-k by merging the local heaps, and the query service can simply read the value.
These videos are a gem for a system design noob like me.
OMG. I love these videos. Thank you so much for creating these. Please write a book or open a course, it may fund you to focus much time on very helpful content like this. I am very happy today.
Appreciate your feedback, Karthik!
So far I am loving it. Keeps me glued to your channel. Fantastic job, I must say.
Very clear solution and something that can actually be used in an interview! Please keep making more of these.
I have seen a lot of system design videos, but this content's quality is way above the rest. Really appreciate the effort. Please keep posting new topics. Or you can pick top k heavy hitters system design problem requests from the comments :)
Thank you for the feedback Mohit! Much appreciated.
Amazing. I was like "wtf are you talking about" at the beginning. It all makes sense now after the data retrieval part.
Phenomenal. We do something very similar with hot and cold paths at Microsoft. Instead of a count-min sketch we use HyperLogLog.
This is the most tech intense 30min video I've ever seen :) Thank you!
This is an excellent video, but I am left with these questions:
1. The count-min sketch does not really keep track of video IDs in its cells. Each cell in the table could include collisions from different videos. So once we have our final aggregated count-min sketch table, we pick the top k frequencies, but we can't tell which video ID each cell corresponds to. So how would it work? I haven't come up with an answer for this.
2. What type of database would be used to store the top k lists?
I would just use a simple MySQL database, since the number of rows would not be very large if we only retain top k lists for a short window of time (say 1 week) and k is not too big. We can always add new instances of the db for each week of data if we need to preserve data for older weeks. We would have to create an index on the time-range column to search efficiently.
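A minimal sketch of what such a schema might look like (table and column names are hypothetical; sqlite3 stands in for MySQL here just so the snippet runs):

import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real MySQL instance
conn.execute("""
    CREATE TABLE top_k_lists (
        window_start  TEXT NOT NULL,    -- start of the aggregation window
        window_size   TEXT NOT NULL,    -- '1m', '1h', '1d'
        rank          INTEGER NOT NULL,
        video_id      TEXT NOT NULL,
        view_count    INTEGER NOT NULL,
        PRIMARY KEY (window_start, window_size, rank)
    )
""")
# Index for range queries over time, as mentioned above.
conn.execute("CREATE INDEX idx_window ON top_k_lists (window_size, window_start)")

conn.execute("INSERT INTO top_k_lists VALUES ('2024-01-01T10:00', '1h', 1, 'videoA', 1042)")
rows = conn.execute(
    "SELECT rank, video_id, view_count FROM top_k_lists "
    "WHERE window_size = '1h' AND window_start BETWEEN '2024-01-01T00:00' AND '2024-01-01T23:00' "
    "ORDER BY window_start, rank"
).fetchall()
print(rows)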
For the 2nd, we can use a Redis sorted set.
For 1: we still keep a heap of k items; that part doesn't change. The original problem was that we would lose counts for a lot of items if we didn't count and store everything in a hash table. Now the count-min sketch replaces the hash table, and its count is used to build the heap, so we don't lose the count for any item (we have an estimate instead).
The system design video to beat. PERIOD!!!
Thank you, Anubhav!
Bonus points for mentioning Spark and Kafka, as I was thinking about those during the video. Great stuff as usual!
Thank you, @Collected Reader. Glad to see you again!
This is one of the best system design content I have came across. Thanks a lot.
One of the best system design channels I've come across! Great job! I particularly liked how you were able to describe a fundamental pattern that can be applied in multiple scenarios.
These are the best videos on system design I've seen, thanks so much!
Excellent video! Has depth and breadth that isn’t seen elsewhere. Keep it up!
Appreciate the feedback, Abbas! Thanks.
Misha,
Loved the structure as well as depth and breadth of the topics you touched on!
Your content is PURE GOLD. Hats off! :)
Excellent video and great explanation. One further improvement can be made in the slow processing path: instead of Hadoop MapReduce, use Apache Spark for the map-reduce step; it will save time because Spark uses in-memory processing, i.e., it does not store intermediate stage results on HDFS (it keeps them in memory), which makes it faster than Hadoop.
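For illustration, the slow-path aggregation could be expressed in a few lines of PySpark (the input path and k below are made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("top-k-heavy-hitters").getOrCreate()

# Hypothetical input: one video_id per line, one line per view event.
views = spark.sparkContext.textFile("hdfs:///events/2024-01-01/*")

top_k = (views.map(lambda video_id: (video_id, 1))
              .reduceByKey(lambda a, b: a + b)              # count views per video
              .takeOrdered(10, key=lambda pair: -pair[1]))  # top 10 by count

print(top_k)
spark.stop()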
This is just incredible! Please do publish more videos.
watched other videos before this.. so liking this before starting...
Awesome videos Mikhail... thanks a lot for sharing! That last part showing other problems with similar solutions was the cherry on top.
Nicely structured! Covering both the depth and breadth of the concepts as much as possible.
My intuition for why we need the entire data set of video view events in order to calculate the top-k videos in an hour:
If k=2, and during the first 1-minute period the top three videos are A with 3 views, B with 3 views, and C with 2 views, and in the second 1-minute period the top three videos are D with 3 views, E with 3 views, and C with 2 views: when computing the top-k videos for the longer period, if we only had the per-minute top-2 lists available, we would not have the data for video C, because we only stored the top two videos for each minute. However, over the 2-minute period, video C has actually been viewed the most (4 times).
Good example for the question at 11:16.
this channel has the best System design explanations ... thank you so much and keep up the good work!!
Thank you for the feedback, Soubhagyasri! Glad you like the channel!
This is by far the best content I have found on System Design. I am addicted to this content.
Keep up the good work, waiting for more videos .. :)
Glad you enjoy it, Dinkar! Sure, more videos to come. I feel very busy these days. But I try to use whatever time is left to work on more content.
Please do more of them as your videos are very good from a content perspective :) Extremely informative ...
Great video. Requesting you to cover a couple of popular System Design questions when you get a chance: (1) recommendation of celebrities on Instagram, or song recommendation; (2) a real-time coding competition that displays the top 10 winners.
This is pure Gem!.. Take a bow ....
This is really the best tutorial, and I hope there will be an article with content like this!
29:48 If someone is wondering, like I was, why merging two 1hr top-K lists will not give an accurate one 2-hr list here is the explanation:
Each hour's top-K list is based on the data available for that hour only. That means the data is local to the 1-hour window and not cumulative; when we move to the next hour, all the previous data is discarded. So, while a video might be the most watched in some hour X, it might not be watched at all in hour X+1, but it will still be eligible as a candidate when creating the top-K list from 2K elements (2 × the 1-hour top-K lists).
Simple example:
Hour X top 5 (k=5) = V1(10), V2(9), V3(8), V4(7), V5(6), where V1(10) means Video 1 was watched 10 times. Say a video V6 was watched 5 times, so it could not make it onto the list.
Hour X+1 top 5 = V8(11), V9(10), V10(9), V11(7), V12(6). V6 was watched 5 times again, but again could not make it onto the list.
But the interesting part is that all the top videos V1-V5 of hour X were never watched in hour X+1. Likewise, V8-V12 were never watched in hour X.
If we create a 2-hour top-5 list from these two, V6 will not even be considered, even though it was watched a total of 10 times across hours X and X+1, and our final list would be:
V8(11), V9(10), V1(10), V2(9), V10(9)
Thanks, this is the question and answer I was looking for.
Awesome. Simply awesome. You killed it completely!
All your videos are really amazing. I hope you would post it more often.
Thank you, Nikhil. I will surely come back with more regular video postings.
Hi Mikhail, I was going over this video again. I am not clear on how the count-min sketch will save memory. Even if it has a predefined width and height, we still need to know all the videos (A, B, C, D, ...) so we can calculate their hash values before taking the min to find the count. So that means we need to persist this list of videos somewhere for the period of time over which we are accumulating the counts.
He explains so well.
I was asked this problem during an interview recently and this system design was very helpful. Thanks :).
Thank you for sharing, Ashwin.
Awesome video. Discussion of various approach (with code snippet) and the drawback is the highlight. Thanks a lot!
The best that I have seen so far!
It would be great to have a system design video on metrics and how to handle percentile calculations.
Added to the TODO list. Thanks.
Thanks for all the effort you put into explaining this. This is great material.
Awesome and detailed explanation. Hats off
Thank you, Saurabh.
Thank you for such a detailed explanation. Awesome as usual!
Thank you @Memfis for providing consistent feedback!
That's awesome learning material! I hope you can keep publishing new videos about system design.
Glad you liked. Thanks for sharing the feedback!
The amount of info you have covered here is amazing! Thank you so much!
Hey, thank you so much for all your knowledge sharing. I am able to perform very well in all my interviews. Keep up the good work. More power to you.
Keep rocking!!!
One of the main reasons for inaccurate results is that events from certain API gateways may be delayed, or arrive late because of congestion. This is the primary motivation for the slow path. It was not obvious until I started thinking about it.
Amazing video. Thank you! The way you structured it is commendable.
Thank you, Algorithm Implementer. Glad to hear that!
Cant thank you enough for your efforts in sharing such a high quality content for us!
Hi Harish. Thanks!
Please, make more videos! Absolutely amazing explanation!!!!!!!!!!!
I would use Flink (with Apache Beam or not), which can substitute for the Lambda architecture, since it can handle both batch and stream processing and do precise calculations using windowing. Basically, you use windowing for aggregations and triggers to output intermediate results when needed.
Thank you for making this video. It was very helpful. It will be great if you can post more such videos.
Great stuff, waiting for Distributed counters
Hi, Cenk. Thank you for the feedback. Will be ready with distributed counters video in 2-3 weeks.
Thanks for the amazing content! In this architecture, we are keeping data in server memory, e.g., on the partition server (~5 minutes) and on the API gateway (even for a few seconds). How do we protect the data if any server dies? And how do we handle hot partitions?
Excellent explanation ! I really appreciate your work!
Appreciate the feedback! Thanks.