Web Crawler - System Design Interview Question

  • Published: 11 Sep 2024

Comments • 11

  • @games-are-for-losers • 6 months ago • +6

    The RUclips algorithm has picked up your channel. Really good content

  • @SirDrinksAlot69 • 6 months ago • +1

    Hashes. You can even halve them, for example, and as long as the interviewer doesn't have any rules around a specific length, you can add digits until it clears; there are ways to make that fast as well. Hashes also help with obfuscation, so it's harder to scan and obtain the short URLs, and they make looking up duplicates easier.
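    A minimal sketch of the truncated-hash idea this comment describes, assuming SHA-256 as the hash and an in-memory set standing in for the datastore's collision lookup (both are illustrative choices, not from the video):

    ```python
    import hashlib

    def shorten(url: str, taken: set[str], min_len: int = 7) -> str:
        """Derive a short code from a hash of the URL, lengthening it on collision."""
        digest = hashlib.sha256(url.encode()).hexdigest()
        # Start with a short prefix of the hash and add characters until
        # the code doesn't collide with one already handed out.
        for length in range(min_len, len(digest) + 1):
            code = digest[:length]
            if code not in taken:
                taken.add(code)
                return code
        raise RuntimeError("prefixes exhausted; re-hash with a salt")

    taken: set[str] = set()
    code = shorten("https://example.com/some/long/path", taken)  # 7-char code
    ```

    Because the code is derived from the URL's hash, checking whether a URL was already shortened is a single hash-and-lookup, which is the duplicate-lookup benefit the comment mentions.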

  • @LouisDuran • 4 months ago

    I like that these are short and sweet. It shouldn't take an hour to explain TinyURL or web crawler. Thanks!

  • @ChimiChanga1337 • 6 months ago • +1

    Excellent! Could you also talk about what kind of network protocols would be used for the services to talk to each other?

  • @rajaryanvishwakarma8915 • 6 months ago • +1

    Great video man

  • @WINDSORONFIRE • 2 months ago

    How does the design of a web crawler not include geo-located servers, etc.?

  • @LearningNewThings0407 • 4 months ago • +1

    Is it "Font queue prioritizer" or "Front queue prioritizer"?

  • @dibll • 6 months ago

    During the duplicate detection step, how is the Content Cache being used? Could someone please explain?

  • @jjlee4883 • 6 months ago

    Awesome video. Would it make sense for the URL Seen Detector and URL Filter to come after the HTML parser step?

    • @TechPrepYT • 6 months ago

      Thanks for the comment! You would want duplicate detection to occur directly after the HTML parser, as we don't want to process the same data and extract the same URLs from the same page; that's why the URL Seen Detector and URL Filter happen later in the system. Hope this makes sense!
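      A small sketch of the ordering the reply describes: content-level duplicate detection runs right after parsing (so an identical page fetched under a different URL is dropped before link extraction is wasted), while the URL Seen Detector filters the extracted links afterwards. `extract_links` is a toy regex stand-in for the parser's link extraction, and the two sets stand in for the crawler's real stores; none of these names come from the video.

      ```python
      import hashlib
      import re

      seen_content: set[str] = set()  # fingerprints of pages already processed
      seen_urls: set[str] = set()     # URLs already scheduled (URL Seen Detector)

      def extract_links(html: str) -> list[str]:
          # Toy stand-in for the HTML parser's link extraction.
          return re.findall(r'href="([^"]+)"', html)

      def process_page(url: str, html: str) -> list[str]:
          # Duplicate detection directly after parsing: skip pages whose
          # content we have already seen, even under a different URL.
          fingerprint = hashlib.sha256(html.encode()).hexdigest()
          if fingerprint in seen_content:
              return []
          seen_content.add(fingerprint)
          # URL Seen Detector / Filter run later, on the extracted links.
          new_links = [u for u in extract_links(html) if u not in seen_urls]
          seen_urls.update(new_links)
          return new_links
      ```

      With this ordering, a mirror of an already-crawled page is discarded by the content fingerprint alone, and only genuinely new URLs reach the frontier.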