Learn MapReduce with Playing Cards
- Published: 14 Jan 2025
- A special extended preview of my new MapReduce screencast, available for purchase at pragprog.com/sc....
To get access to my updated, in-depth course, go to my site at www.jesse-ander... and sign up. You'll get a free mini-course and then have the option to purchase the full 8-week course.
This is by far the best explanation of the MapReduce technique that I have come across. I especially like how the technique was explained with the least amount of technical jargon. This is truly an ELI5 definition for MapReduce. Good work!
+Subramanian Iyer Thanks!
An innovative idea to use a pack of cards to explain the concept. Getting the fundamentals right with an example is great! Thank you.
Great explanation!! You Mapped the Complexity and Reduced it to Simplicity = MapReduce :)
Very well done - not too slow, yet very clear and well structured.
Jesse, may you get all SUCCESS and BLESSINGS.
Really good illustration... really easy to understand for people like me who are not computer experts. Thanks!
6:16 got a question!
Would you please elaborate more on the data movement? Since there are two separate reduce tasks on those two nodes, how do the two different reduce tasks combine their results? How do we choose which cards move to which node?
That is called the shuffle and sort. See more about it here: www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-6/shuffle-and-sort.
Does the actual data on the node move, or are copies of the data moved?
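To make the "which cards go where" part concrete: the common rule (and Hadoop's default partitioner behaves this way) is to hash each key and take it modulo the number of reducers, so every record with the same key lands on the same reducer. A minimal single-process Python sketch of the idea; partition() and the record layout are illustrative names, not Hadoop's API:

```python
def partition(key, num_reducers):
    # Same key -> same reducer, no matter which mapper emitted it.
    # (Python salts str hashes per process; a real cluster would use
    # a hash that is stable across machines.)
    return hash(key) % num_reducers

mapper_output = [("hearts", 7), ("spades", 3), ("hearts", 10), ("clubs", 2)]
num_reducers = 2

buckets = {r: [] for r in range(num_reducers)}
for key, value in mapper_output:
    buckets[partition(key, num_reducers)].append((key, value))

print(buckets)  # every "hearts" record lands in the same bucket
```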
To wrap this up:
Map = Split data
Reduce = Perform calculations on small chunks of data in parallel
Then combine the subresults from each reduced-chunk.
Is that correct?
+mmuuuuhh Somewhat correct. I'd suggest buying the screencast to learn more about the code and how it works.
+mmuuuuhh merge-sort maybe?
divide and conquer
Map transforms data too
No no... Map = Reduce the Data, Reduce = Map the Data...
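For what it's worth, the replies above are right that map transforms too: map turns each record into a key-value pair, the shuffle groups pairs by key, and reduce aggregates each group. A minimal single-process sketch in plain Python (not Hadoop's actual API), using the suits-and-values example from the video:

```python
from itertools import groupby
from operator import itemgetter

# The "deck": (suit, value) records, like the cards in the video.
deck = [("hearts", 7), ("clubs", 2), ("hearts", 10),
        ("spades", 5), ("clubs", 9), ("spades", 1)]

# Map: transform each record into a key-value pair. (Here the data
# is already in that shape; a real mapper might parse a text line.)
mapped = [(suit, value) for suit, value in deck]

# Shuffle/sort: bring together everything that shares a key. In a
# cluster this is the step that moves data between nodes.
mapped.sort(key=itemgetter(0))

# Reduce: aggregate each group independently, which is why groups
# can run on different nodes in parallel.
totals = {suit: sum(value for _, value in group)
          for suit, group in groupby(mapped, key=itemgetter(0))}

print(totals)  # {'clubs': 11, 'hearts': 17, 'spades': 6}
```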
Great presentation. The visualization makes it so much easier to understand.
Never trust a man whose deck of playing cards has two 7s of Diamonds.
What a great effort; I am astonished by your teaching skills. We need teachers like you. Thanks for the excellent explanation.
4:51 - I'm kind of lost. You said the two papers are two nodes.
The left is node 1 and the right is node 2.
Then you said, "I have two nodes, where each node has 4 stacks of cards".
I also understood that you are merging two varieties of cards on node 1 and another two varieties on node 2.
"A cluster is made of tens, hundreds, or even thousands of nodes, all connected by a network."
So in this example, let's say the two papers (nodes) are one cluster.
The part I get confused by is when you say, "the mapper on a node operates on that smaller part. The magic takes the mapper data from every node and brings it together on nodes all around the cluster. The reducer runs on a node and knows it has access to everything with the same key."
So if there are two nodes A and B that have mapper data, will the reduce part happen on two other nodes C and D? I'm confused when you say "on nodes all around the cluster".
The only video I've watched that can clearly introduce MapReduce to a newbie.
Loved the idea. Now I understand how MapReduce works. Thank you.
It was very nice, but I could not find the video where you showed the shuffling "magic part".
Great explanation! This is how a tutor should simplify things! Thanks
Really cool one. It is always nice to come back to the basics. Thanks for that one!
And that's how you explain any technical concept. Simple is beautiful!
If I understand correctly, the mapper divvies up the data among nodes of the cluster and subsequently organizes the data on each node into key-value pairs, and the reducer collates the key-value pairs and distributes the pairs among the nodes.
Almost. Hadoop divvies up the data, the mapper creates key-value pairs, and the reducer processes the collated pairs.
Really liked your way of presenting... "simple" and "informative". Thanks for sharing!!
An ounce of example is better than a ton of precept! Thanks, this was great!
Your explanation is magic! Well done.
Just wow...very nicely explained
amazing explanation! I love it. Huge Thanks!
The explanation is wonderful. You made me understand things easily.
Brilliant approach to teach the concept
Wow.. You have made this look so simple and easy... Thanks a ton !!!
Nice video explaining MapReduce practically.
Easy to understand for a layman! So it's quite crucial to identify the basis of the grouping, i.e. the parameters by which the data is grouped onto each node.
Is it possible to revisit that at a later stage?
That was very helpful Jesse. Thank you for sharing this!!
Wonderful explanation! Made it very simple to understand! Thanks a ton!
Best explanation of MapReduce. Thanks!
Superb. Thank you Jesse Anderson
Good illustration. 😃
Dude, what's the name of that magic??
Great explanation!! Worth a bookmark. Thank you, sir!
Good illustration using a practical example...
Great video with good explanation technique.
Good Explanation with simple example
What if the node with clubs and hearts breaks down during the reduce operation? Will data be lost? Or will the complete MapReduce job be repeated using the replicated data?
The data is replicated and the reduce would be re-run on a different node.
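A toy sketch of that retry, assuming the reducer's input is available on more than one node; NodeFailure and run_reduce_on are made-up names for illustration, not Hadoop's scheduler API:

```python
class NodeFailure(Exception):
    """Raised when the node assigned a reduce task has broken down."""

def run_reduce_on(node, values):
    # Stand-in for scheduling the reduce task on a node.
    if not node["healthy"]:
        raise NodeFailure(node["name"])
    return sum(values)

# Two nodes hold replicas of the clubs-and-hearts data.
replicas = [
    {"name": "node-1", "healthy": False},  # the node that breaks down
    {"name": "node-2", "healthy": True},
]
hearts_values = [7, 10, 4]

for node in replicas:
    try:
        print(node["name"], "->", run_reduce_on(node, hearts_values))
        break  # reduce succeeded; no data lost
    except NodeFailure as failed:
        print(f"{failed} failed; rerunning the reduce on the next replica")
```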
Great video. Why are there performance issues with Hadoop, though?
I'm not sure what you mean by performance issues.
So it follows mainly the principle of divide and conquer?
Following that analogy, it would be divide, reassemble, and conquer.
Thanks Jesse! This is a wonderful video! I have 2 doubts.
1. Instead of a sum, if it is a sort function, how will splitting the data across nodes work? It seems every data point would need to be handled in one go.
2. On the last part about scaling: how can different nodes working on a file and then combining by key be more efficient than one node working on the whole file?
I am new to this and would appreciate some guidance and help on the same.
1. This example goes more into sorting: github.com/eljefe6a/CardSecondarySort. 2. It isn't more efficient, but it is more scalable.
@@jessetanderson Thank you!
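On question 1 above: sorting fits MapReduce when the partitioner assigns key ranges (rather than hashed keys) to the reducers, so each reducer sorts its own range and the outputs concatenate into one globally sorted result. A minimal Python sketch of that general trick (not the linked secondary-sort example):

```python
values = [42, 7, 99, 13, 58, 3, 71, 26]

# Range partitioning: reducer 0 gets keys below the split point,
# reducer 1 gets the rest. Real systems pick split points by
# sampling the data so the ranges are balanced.
split_point = 50
partitions = {0: [], 1: []}
for v in values:
    partitions[0 if v < split_point else 1].append(v)

# Each reducer sorts its partition independently (in parallel).
sorted_parts = [sorted(partitions[r]) for r in (0, 1)]

# Concatenating the reducer outputs in range order gives a total sort.
print(sorted_parts[0] + sorted_parts[1])
# [3, 7, 13, 26, 42, 58, 71, 99]
```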
Nice tutorial! Easy to understand
I actually did this with cards. Thanks!
Great explanation!
Really nice video; it explains the terms in a simple way.
Great summary - thanks!
That's wonderful... you are a great teacher.
When you say nodes and clusters, does a 1 TB input file definitely have to be processed on more than one computer, or can we install Hadoop on a single laptop and virtually create nodes and clusters?
Very useful explanation.
Excellent explanation!
Excellent video explanation
Thanks for the great video!
Thank you very much for the explanation.
Best explanation. Thanks a lot
Huge 1 TB file...
anyone watching this in 2065?
February 2019 (Go RAMS)
@@NuEnque July 21 2019
feb 2020
more like 2025
August 11, 2020!!
Thanks this really helped me for my exam !!
Which music is this at the start of the video?
I'm not sure where they got it from.
Well, that explains the interview question: How would you sort a ridiculously large amount of data?
great video by the way!!
Thank You sir for such a wonderful explanation. :-)
just great explanation !
Superb video....thanks a lot sir
It might be clearer to show the advantage of this if, instead of having the same person run the cards on each node sequentially, two people did it at the same time. Or go further and have four people demonstrate it. Then each person can grab all the cards of one suit from each node and sum their values, again at the same time. Show a timer comparing how long one person takes to do everything on one node versus all four running at the same time.
Great lesson. Thanks..
Great video, thanks for sharing !
Very nice, thanks a lot.
thanks! that is an easy explanation!
Yes! Simply explained.
Now I get it, thanks!
Brilliant - thanks!
Hi Jesse, can I use MapReduce only on document-oriented DBs, or also on, e.g., graph databases?
Hessebub, you can use it for both, but the processing algorithms are very different between them.
Alright, thanks very much for answering & doing the video in the first place!
Great video
My friend: I wish I had your calm. We have an exam tomorrow and you're watching playing cards...
Hats off, man. Very well understood.
Awesome explanation, super!
Good explanation
I like this technique. Nice, keep it up!
Wonderful, you have used the right tool(cards) and made it simpler. Thank you.
Am I correct in saying that in this manual shuffle and sort, the block size is 52 cards, whereas on a real node it would be 128 MB?
wow this was great
Interesting. Now I want to request that a bunny come out of a hat.
Brilliant!
awesome
thanks
excellent!!!
The 'scalability' of Hadoop has to do with the fact that the data being processed CAN be broken up and processed in parallel in chunks, and then the results can be tallied by key. It's not an inherent ability of the tech other than HDFS itself.
Like most technology, or jobs for that matter, the actual 'process' is simple; it's wading through the industry-specific terminology that makes it unnecessarily complicated. Hell, you can make boiling an egg or making toast complicated too if that's your intent.
Sorry, you misunderstood.
@@jessetanderson I didn't misunderstand you. Your explanation was great.
Spades, clubs... I think you used the wrong suit names for them :)
so nice
A little bit long as an explanation; it could be done faster (e.g., the card sorting). But after watching, you know what's happening. So, all thumbs up!
awesome
Easiest explanation.
Why did they come up with such a terribly unintuitive name as "MapReduce"??? It's basically just "bin by attribute, then process each bin in parallel". BinProcess.
It's a well-known functional programming paradigm.
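Right: the name comes from the functional primitives, where map applies a function to every element and reduce folds the results into one value. In plain Python, summing the hearts in a hand:

```python
from functools import reduce

hand = [("hearts", 7), ("clubs", 2), ("hearts", 10)]

# map: transform each card into a value (non-hearts contribute 0).
hearts = map(lambda card: card[1] if card[0] == "hearts" else 0, hand)

# reduce: fold the mapped values into a single sum.
total = reduce(lambda acc, v: acc + v, hearts, 0)

print(total)  # 17
```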
Cool
Great
IMO the key takeaway from the video is that MR only works when:
a. There is one really large data set (e.g. a giant stack of playing cards)
b. Each row in the data set can be processed independently. (e.g. sorting or counting playing cards does not require knowing the sequence of cards in the deck - each card is processed based on information on the face of the card)
To process real-world problems using MR, the data sets will need to be massaged and joined to satisfy the criteria listed above. This is where all the challenges lie. MR itself is the easy part.
+Subramanian Iyer Agreed, MR is difficult, but understanding how to use and manipulate the data is far more complex. This is why I think data engineering should be a specific discipline and job title. www.jesse-anderson.com/big-data-engineering/
This is a great example video without the accent to deal with.
This is just a sales pitch
I think the description is pretty clear that it's an extended preview of the screencast.
Like if you came here because of riwb
keep kinging