The RUclips algorithm has picked up your channel. Really good content
Hashes. You can even halve them, for example, and as long as the interviewer doesn't have any rules around a specific length, you can add digits until it clears a collision check; there are ways to make that fast as well. Hashes also help with obfuscation, so it's harder to scan and enumerate the short URLs, and they make looking up duplicates easier.
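A minimal sketch of that idea, assuming SHA-256 and an in-memory set standing in for the database uniqueness check (both are illustrative choices, not specified in the comment):

```python
import hashlib

def short_code(long_url: str, taken: set, length: int = 7) -> str:
    """Truncate a hash to a short code; on collision, extend the
    code with more hash characters until it clears."""
    digest = hashlib.sha256(long_url.encode()).hexdigest()
    i = length
    code = digest[:i]
    while code in taken and i < len(digest):
        i += 1
        code = digest[:i]  # grow the code until it no longer collides
    taken.add(code)
    return code
```

Because the longer code is just a longer prefix of the same digest, the collision retry needs no re-hashing, which is one way to keep it fast.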
I like that these are short and sweet. It shouldn't take an hour to explain TinyURL or web crawler. Thanks!
Exactly 👍
Excellent! Could you also talk about what kind of network protocols would be used for the services to talk to each other?
Great video man
How does the design of a web crawler not include geo-located servers, etc.?
Is it "font queue prioritizer" or "front queue prioritizer"?
During the duplicate detection step, how is the Content Cache being used? Could someone please explain?
Awesome video. Would it make sense for the url seen detector and url filter to come after the html parser step?
Thanks for the comment! You would want the duplicate detection to occur directly after the HTML parser, because we don't want to process the same data and extract the same URLs from the same page twice. That's why the URL Seen Detector and URL Filter happen later on in the system. Hope this makes sense!
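A rough sketch of that ordering, with a regex standing in for the HTML parser's link extraction and MD5 fingerprinting standing in for content dedup (all hypothetical choices for illustration, not part of the original design):

```python
import hashlib
import re

def process_page(html, content_seen, url_seen,
                 allowed=("http://", "https://")):
    """Crawler ordering sketch: dedupe page content right after
    parsing, then extract, filter, and dedupe URLs afterwards."""
    # 1) Duplicate detection directly after the HTML parser:
    fingerprint = hashlib.md5(html.encode()).hexdigest()
    if fingerprint in content_seen:
        return []  # duplicate page -- don't re-extract its links
    content_seen.add(fingerprint)
    # 2) URL extraction (naive regex as a parser stand-in):
    links = re.findall(r'href="([^"]+)"', html)
    # 3) URL Filter (scheme allow-list), then 4) URL Seen Detector:
    frontier = []
    for url in links:
        if not url.startswith(allowed):
            continue  # filtered out
        if url in url_seen:
            continue  # already scheduled or crawled
        url_seen.add(url)
        frontier.append(url)
    return frontier
```

Fingerprinting the content first means a duplicate page is rejected once, before any link extraction work, while the per-URL checks only run on pages that survive that first gate.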