Learn MapReduce with Playing Cards

  • Published: 14 Jan 2025
  • The special extended preview of my new MapReduce screencast is available for purchase at pragprog.com/sc....
    To get access to my updated and in-depth course, go to my site at www.jesse-ander... and sign up. You'll get a free mini-course and then have the option to purchase the full 8-week course.

Comments • 133

  • @kart00nher0 · 8 years ago · +13

    This is by far the best explanation of the MapReduce technique that I have come across. I especially like how the technique was explained with the least amount of technical jargon. This is truly an ELI5 definition for MapReduce. Good work!

  • @smushti · 5 years ago · +2

    An innovative idea to use a pack of cards to explain the concept. Getting fundamentals right with an example is great ! Thank you

  • @ekdumdesi · 9 years ago · +34

    Great explanation !! You Mapped the Complexity and Reduced it to Simplicity = MapReduce :)

  • @djyotta · 9 years ago · +2

    Very well done - not too slow, yet very clear and well structured.

  • @Useruytrw · 10 years ago · +2

    Jesse may you get all SUCCESS and BLESSINGS

  • @rodrigofuentealbafuentes695 · 4 years ago

    Really good illustration... really easy to understand for people like me who are not computer experts. Thanks!

  • @bit.blogger · 10 years ago · +1

    6:16 got a question!
    Would you please elaborate more on that data movement? Since there are two separate reduce tasks on those two nodes, how do the two different reduce tasks combine? And how do we choose which cards move to which node?

    • @jessetanderson · 10 years ago · +1

      That is called the shuffle sort. See more about that here www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-6/shuffle-and-sort.

    • @chandrakanthpadi · 3 years ago

      Does the actual data in the node move, or are copies of the data moved?
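
    The routing question in this thread is decided by a partitioner, not by the user. The sketch below mimics the idea behind Hadoop's default hash partitioner (the suit names and two-reducer cluster are invented for illustration): every mapper applies the same function, so all cards with the same key end up on the same reduce node.

    ```python
    def partition(key, num_reducers):
        # Hash the key and take it modulo the number of reduce tasks.
        # Every mapper computes the same answer for the same key, so all
        # records sharing a key land on the same reducer. (Python salts
        # str hashes per process, so the mapping is stable within a run.)
        return hash(key) % num_reducers

    num_reducers = 2  # e.g. one reduce task per node
    for suit in ["hearts", "clubs", "spades", "diamonds"]:
        print(suit, "-> reducer", partition(suit, num_reducers))
    ```

    Note that the two reduce tasks never combine with each other: each one receives a disjoint set of keys and writes its own output file.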

  • @mmuuuuhh · 9 years ago · +44

    To wrap this up:
    Map = Split data
    Reduce = Perform calculations on small chunks of data in parallel
    Then combine the subresults from each reduced-chunk.
    Is that correct?

    • @jessetanderson · 9 years ago · +2

      +mmuuuuhh Somewhat correct. I'd suggest buying the screencast to learn more about the code and how it works.

    • @alphacat03 · 8 years ago

      +mmuuuuhh merge-sort maybe?

    • @ienjoysandwiches · 7 years ago · +5

      divide and conquer

    • @BULLSHXTYT · 6 years ago · +1

      Map transforms data too

    • @dennycrane2938 · 6 years ago · +2

      No no... Map = Reduce the Data, Reduce = Map the Data . .... ....
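
    Pulling the corrections in this thread together, a minimal single-process sketch of the card example (the card data and function names are invented for illustration): map transforms each record into a (key, value) pair, the "magic" shuffle groups pairs by key, and reduce aggregates each group.

    ```python
    from collections import defaultdict

    # A toy "huge file": each record is a playing card (suit, rank value).
    cards = [("hearts", 10), ("spades", 3), ("hearts", 7), ("clubs", 9), ("spades", 4)]

    def mapper(card):
        # Map does not split the data; it transforms each record
        # into a (key, value) pair.
        suit, value = card
        return (suit, value)

    def shuffle(pairs):
        # The shuffle groups all values by key, as the framework
        # does across the nodes of a cluster.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reducer(key, values):
        # Reduce aggregates every value that shares a key.
        return (key, sum(values))

    pairs = [mapper(c) for c in cards]
    results = dict(reducer(k, v) for k, v in shuffle(pairs).items())
    print(results)  # {'hearts': 17, 'spades': 7, 'clubs': 9}
    ```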

  • @sukanyaswamy · 10 years ago · +1

    Great presentation. The visualization makes it so much easier to understand.

  • @kabirkanha · 4 years ago · +9

    Never trust a man whose deck of playing cards has two 7s of Diamonds.

  • @vamsikrishnachiguluri8510 · 3 years ago

    What a great effort; I am astonished by your teaching skills. We need teachers like you. Thanks for your great explanation.

  • @LetsBeHuman · 5 years ago · +1

    4:51 - I'm kind of lost. So you said the two papers are two sets of nodes:
    the left is node 1 and the right is node 2.
    Then you said, "I have two nodes, where each node has 4 stacks of cards".
    I also understood that you are merging two varieties of cards in node 1 and another two varieties in node 2.
    "A cluster is made of tens, hundreds or even thousands of nodes all connected by a network" -
    so in this example, let's say the two papers (nodes) are one cluster.
    The part I get confused by is when you say "the mapper on a node operates on that smaller part; the magic takes the mapper data from every node and brings it together on nodes all around the cluster; the reducer runs on a node and knows it has access to everything with the same key".
    So if there are two nodes A and B that have mapper data, will the reduce part happen on two other nodes C and D? I'm confused when you say "on nodes all around the cluster".

  • @scottzeta3067 · 2 years ago · +1

    The only one I watched that clearly introduces MapReduce to a newbie

  • @amitprakashpandeysonu · 3 years ago

    loved the idea. Now I understood how map reduce works. Thank you.

  • @furkanyigitozgoren3847 · 2 years ago

    It was very nice. But I could not find the video where you showed the shuffling "magic part".

  • @menderbrains · 5 years ago

    Great explanation! This is how a tutor should simplify the understanding! Thanks

  • @doud12011990 · 9 years ago

    really cool one. It is always nice to come back to the basics. Thanks for that one

  • @vscid · 8 years ago

    and that's how you explain any technical concept. simple is beautiful!

  • @victorburnett6329 · 3 years ago

    If I understand correctly, the mapper divvies up the data among nodes of the cluster and subsequently organizes the data on each node into key-value pairs, and the reducer collates the key-value pairs and distributes the pairs among the nodes.

    • @jessetanderson · 3 years ago

      Almost. Hadoop divvies up the data, the mapper creates key value pairs, and the reducer processes the collated pairs.

  • @vivek3350 · 8 years ago

    Really liked your way of presentation....."Simple" and "Informative". Thanks for sharing!!

  • @rahulx411 · 10 years ago

    an ounce of example is better than a ton of precept! --Thanks, this was great!

  • @ahmedatallahatallahabobakr8712 · 9 years ago

    Your explanation is magic! Well done

  • @rohitgupta025 · 9 years ago · +4

    Just wow...very nicely explained

  • @davidy2535 · 3 years ago

    amazing explanation! I love it. Huge Thanks!

  • @mgpradeepa554 · 10 years ago

    The explanation is wonderful.. You made me understand things easily.

  • @nkoradia · 7 years ago

    Brilliant approach to teach the concept

  • @prasann26 · 10 years ago

    Wow.. You have made this look so simple and easy... Thanks a ton !!!

  • @abhishekgowlikar · 10 years ago

    Nice video explaining the Map Reduce Practically.

  • @hazelmiranda8587 · 8 years ago

    Easy to understand for a layman! So it's quite crucial to identify the basis of the grouping, i.e. the parameters based on which the data should be stored on each node.
    Is it possible to revisit that at a later stage?

  • @thezimfonia · 7 years ago

    That was very helpful Jesse. Thank you for sharing this!!

  • @asin0136-y6g · 5 years ago

    Wonderful explanation ! Made it very simple to understand! Thanks a ton!

  • @user-or7ji5hv8y · 6 years ago

    best explanation of mapReduce. Thanks!

  • @mahari999 · 8 years ago

    Superb. Thank you Jesse Anderson

  • @arnavanuj · 3 years ago

    Good illustration. 😃

  • @sarthakmane2977 · 4 years ago

    dude, what's the name of that magic??

  • @tkousek1 · 7 years ago · +1

    Great explanation!! worth a bookmark. Thank you sir!

  • @TheDeals2buy · 10 years ago

    Good illustration using a practical example...

  • @gboyex · 6 years ago

    Great video with good explanation technique.

  • @anandsib · 10 years ago · +1

    Good Explanation with simple example

  • @urvisharma7243 · 1 year ago

    What if the node with clubs and hearts breaks down during the reduce operation? Will data be lost? Or will the complete Map Reduce job be repeated using the replicated data?

    • @jessetanderson · 1 year ago

      The data is replicated and the reduce would be re-run on a different node.

  • @amirkazemi2517 · 10 years ago

    Great video. Why are there performance issues with Hadoop, though?

    • @jessetanderson · 10 years ago

      I'm not sure what you mean by performance issues.

  • @hexenkingTV · 6 years ago · +2

    So it follows mainly the principle of divide and conquer?

    • @jessetanderson · 6 years ago · +2

      Following that analogy, it would be divide, reassemble, and conquer.

  • @AnirudhJas · 5 years ago

    Thanks Jesse! This is a wonderful video! I have 2 doubts.
    1. Instead of a sum, if it is a sort function, how will splitting it across nodes work? Because then every data point should be treated in one go.
    2. On the last part about scaling: how is different nodes working on a file and then combining by key more efficient than one node working on one file?
    I am new to this and would appreciate some guidance and help on the same.

    • @jessetanderson · 5 years ago · +1

      1. This example goes more into sorting github.com/eljefe6a/CardSecondarySort 2. It isn't more efficient, but more scalable.

    • @AnirudhJas · 5 years ago

      @@jessetanderson Thank you!
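
    On the sorting question, the usual MapReduce trick is a total-order sort, sketched below in plain Python (the data and range boundaries are invented): partition keys by range so reducer 0 only receives keys smaller than reducer 1's, let each reducer sort its own chunk, and concatenate the outputs in reducer order. No single node ever sees all the data.

    ```python
    # Toy total-order sort: range-partition, sort each partition, concatenate.
    data = [38, 2, 77, 15, 91, 4, 60, 23]
    boundaries = [33, 66]  # invented split points between the three reducers

    def partition(x):
        # Send each value to the reducer responsible for its key range.
        for i, bound in enumerate(boundaries):
            if x < bound:
                return i
        return len(boundaries)

    partitions = [[] for _ in range(len(boundaries) + 1)]
    for x in data:
        partitions[partition(x)].append(x)

    # Each "reducer" sorts only its own chunk (in parallel on a real cluster)...
    sorted_chunks = [sorted(p) for p in partitions]
    # ...and concatenating the chunks in reducer order is globally sorted.
    result = [x for chunk in sorted_chunks for x in chunk]
    print(result)  # [2, 4, 15, 23, 38, 60, 77, 91]
    ```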

  • @rogerzhao1158 · 8 years ago

    Nice tutorial! Easy to understand

  • @piyushmajgawali1611 · 4 years ago

    I actually did this with cards. Thanks!

  • @sebon11 · 3 years ago

    Great explanation!

  • @arindamdalal3988 · 9 years ago

    Really nice video; it explains the terms in a simple way...

  • @patrickamato8839 · 10 years ago

    Great summary - thanks!

  • @tichaonamiti4616 · 10 years ago · +6

    That's wonderful... you are a great teacher

  • @LetsBeHuman · 5 years ago

    When you say nodes and clusters, does a 1TB input file definitely have to be processed on more than one computer, or can we install Hadoop on a single laptop and virtually create nodes and clusters?

  • @abdulrahmankerim2377 · 8 years ago · +1

    Very useful explanation.

  • @trancenut81 · 10 years ago

    Excellent explanation!

  • @grahul007 · 9 years ago

    Excellent video explanation

  • @Dave-lc3cd · 4 years ago

    Thanks for the great video!

  • @rodrigoborjas7727 · 4 years ago

    Thank u very much for the explanation.

  • @gypsyry · 5 years ago

    Best explanation. Thanks a lot

  • @moofymoo · 9 years ago · +48

    huge 1TB file..
    anyone watching this in 2065?

  • @MrSpun1090 · 8 years ago

    Thanks this really helped me for my exam !!

  • @MuhammadFarhan-ny7tj · 3 years ago

    Which music is that at the start of this video?

  • @IvanRubinson · 7 years ago

    Well, that explains the interview question: How would you sort a ridiculously large amount of data?

  • @sarthakmane2977 · 4 years ago

    great video by the way!!

  • @amandeepak8640 · 8 years ago · +1

    Thank You sir for such a wonderful explanation. :-)

  • @vincentvimard9019 · 9 years ago

    just great explanation !

  • @vigneshrachha8362 · 7 years ago

    Superb video....thanks a lot sir

  • @ZethWeissman · 8 years ago

    It might be clearer to show the advantage of this if, instead of having the same person run the cards on each node sequentially, two people did it at the same time. Or go further and have four people show it. Then each person can grab all the cards of their suit from each node and sum their values, again at the same time. Show a timer comparing how long one person took to do everything on one node against all four people running at the same time.
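
    The four-people-at-once idea can be simulated on a single machine. In the sketch below (the card data is invented; worker threads stand in for nodes, which on a real cluster would be separate machines), each worker sums its own chunk in parallel, and a final reduce merges the per-chunk totals:

    ```python
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    # Four chunks of a deck, as if each sat on its own node.
    chunks = [
        [("hearts", 5), ("spades", 9)],
        [("hearts", 2), ("clubs", 8)],
        [("diamonds", 7), ("clubs", 1)],
        [("spades", 4), ("diamonds", 6)],
    ]

    def map_chunk(chunk):
        # Each "node" sums card values by suit for its own chunk only.
        totals = Counter()
        for suit, value in chunk:
            totals[suit] += value
        return totals

    # All four workers run at the same time, one per chunk.
    with ThreadPoolExecutor(max_workers=4) as pool:
        partial_totals = list(pool.map(map_chunk, chunks))

    # Reduce: merge the per-chunk totals into one grand total.
    grand_total = sum(partial_totals, Counter())
    print(dict(grand_total))
    ```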

  • @rrckguy · 10 years ago

    Great lesson. Thanks..

  • @MincongHuang · 9 years ago · +1

    Great video, thanks for sharing !

  • @alextz4307 · 6 years ago

    Very nice, thanks a lot.

  • @irishakazakyavichyus · 6 years ago

    thanks! that is an easy explanation!

  • @ajuhaseeb · 9 years ago

    Aiwa. Simply explained.

  • @Luismunoz-jf2zv · 10 years ago

    Now I get it, thanks!

  • @SamHopperton · 7 years ago

    Brilliant - thanks!

  • @wetterauerbub · 7 years ago

    Hi Jesse, can I use map reduce only on document-oriented DBs, or also e.g. on Graph databases?

    • @jessetanderson · 7 years ago

      Hessebub you can use it for both, but the processing algorithms are very different between them.

    • @wetterauerbub · 7 years ago

      Alright, thanks very much for answering & doing the video in the first place!

  • @周大鹏-o1j · 9 years ago

    Great video

  • @abdellahi.heiballa · 5 years ago · +1

    My friend: I wish I had your calm. We have an exam tomorrow and you're watching playing cards....

  • @devalpatel7243 · 5 years ago

    Hats off, man. Understood it very well.

  • @hemanthpeddi4129 · 5 years ago

    awesome explanation super

  • @bijunair3807 · 10 years ago

    Good explanation

  • @guessmedude9636 · 6 years ago

    I like this technique. Nice, keep it up!

  • @logiprabakar · 9 years ago · +1

    Wonderful, you used the right tool (cards) and made it simpler. Thank you.
    Am I correct in saying that, in this manual shuffle and sort, the block size is 52 cards, whereas on a node it would be 128?

  • @__-to3hq · 5 years ago

    wow this was great

  • @thiery572 · 7 years ago

    Interesting. Now I want to request a bunny comes out from a hat.

  • @iperezgenius · 7 years ago

    Brilliant!

  • @yash6680 · 7 years ago

    awesome

  • @RawwestHide · 7 years ago

    thanks

  • @pamgg1663 · 9 years ago

    excellent!!!

  • @ZFlyingVLover · 5 years ago

    The 'scalability' of Hadoop comes from the fact that the data being processed CAN be broken up and processed in parallel in chunks, and then the results can be tallied by key. It's not an inherent ability of the tech other than HDFS itself.
    Like most technology, or jobs for that matter, the actual 'process' is simple; it's wading through the industry-specific terminology that makes it unnecessarily complicated. Hell, you can make boiling an egg or making toast complicated too if that's your intent.

    • @jessetanderson · 5 years ago

      Sorry, you misunderstood.

    • @ZFlyingVLover · 5 years ago

      @@jessetanderson I didn't misunderstand you. Your explanation was great.

  • @mudassarm30 · 9 years ago

    Spades, clubs... I think you used the wrong suit names for them :)

  • @niamatullahbakhshi9371 · 8 years ago

    so nice

  • @lerneninverschiedenenforme7513 · 10 years ago

    A little bit long as an explanation; it could be done faster (e.g. the card sorting). But after watching, you know what's happening. So all thumbs up!

  • @covelus · 7 years ago

    awesome

  • @sumantabanerjee9728 · 6 years ago

    Easiest explanation.

  • @Nyocurio · 6 years ago · +1

    Why did they come up with such a terribly unintuitive name as "MapReduce" ??? It's basically just "bin by attribute, then process each bin in parallel". BinProcess.

    • @jessetanderson · 6 years ago

      It's a well-known functional programming paradigm.
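
    The name does come from functional programming, where `map` applies a function to every element and `reduce` (a fold) combines the results into one value. Python's built-ins show the same two steps in miniature (the numbers are invented for illustration):

    ```python
    from functools import reduce

    # map: transform every element; reduce: fold the results into one value.
    values = [3, 7, 2, 9]
    squared = map(lambda x: x * x, values)           # 9, 49, 4, 81
    total = reduce(lambda acc, x: acc + x, squared)  # fold with +
    print(total)  # 143
    ```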

  • @varshamehra8164 · 5 years ago

    Cool

  • @haroonrasheed9739 · 9 years ago

    Great

  • @kart00nher0 · 8 years ago

    IMO the key takeaway from the video is that MR only works when:
    a. There is one really large data set (e.g. a giant stack of playing cards)
    b. Each row in the data set can be processed independently. (e.g. sorting or counting playing cards does not require knowing the sequence of cards in the deck - each card is processed based on information on the face of card)
    To process real-world problems using MR, the data sets will need to be massaged and joined to satisfy the criteria listed above. This is where all the challenges lie. MR itself is the easy part.

    • @jessetanderson · 8 years ago

      +Subramanian Iyer Agreed, MR is difficult, but understanding how to use and manipulate the data is far more complex. This is why I think data engineering should be a specific discipline and job title. www.jesse-anderson.com/big-data-engineering/

  • @glennt1962 · 5 years ago

    This is a great example video without the accent to deal with.

  • @gregrell2441 · 8 years ago · +5

    This is just a sales pitch

    • @jessetanderson · 8 years ago · +2

      I think the description is pretty clear that it's an extended preview of the screencast.

  • @joseblazquez8417 · 2 years ago

    Like if you're here because of riwb

  • @hank-l6s · 6 years ago

    keep kinging