It's incredible how you compress a complex paper that can take days or even weeks to fully grasp into a ten-minute video. You are an amazing teacher. Props to your animation; it is on point.
I can't even begin to explain the level of clarity I achieved after watching this video!! Thanks a lot, sir! Please keep posting more videos; it is very helpful for students like us :)
You are an excellent teacher!
Please keep making more such videos.
Really very well explained in a very short amount of time! Much appreciated
The best explanation and pictorial representation of MapReduce I have come across. I saved this playlist. It is too good and useful.
short and crisp explanation, thank you
Thank you! Can't wait for the Bigtable design review.
Please do a ZooKeeper / etcd one.
Awesome and crystal-clear explanation. Such a big topic condensed into a ten-minute video. Kudos to your work.
Thank you for the video! Very clear explanation. I especially liked the examples part.
the best explaining video of this concept i have ever seen. Thanks :)
One of the best explanations you can find on the internet! Please make a video on HDFS.
I highly appreciate the work you do. Keep up the great work
Excellent explanation....👍👍👍
After a long time I have found excellent videos. May I request you to create videos/playlists on Kafka, Cassandra, and AWS Cloud? I find them very tricky and hard to understand. Thanks for making awesome videos.
That was really good !!!
Excellent!!
Dude, this was an amazing explanation!!
Thanks for such awesome explanation. Keep doing the great work 😁👏
You are brilliant
Great explanation!!
Very good that you are also covering the latest technologies like the Hadoop ecosystem. Expecting more things like these. 🙂
Need the Google Bigtable video, as you promised in the GFS video.
Can't we get the read/write frequency count from the GFS master's log files themselves, which are stored remotely, since they contain read/write logs for files? I'm just learning, so I might have understood it wrongly.
GFS's responsibility is to act as a massive hard disk; it has no understanding of what is written in the files. If you check the GFS video, clients store data directly on the individual machines, and the GFS master is not aware of what is being written.
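To illustrate the point above, here is a minimal toy sketch (all class and function names are hypothetical, not the real GFS API): the master records only *where* each chunk lives, while the client pushes the actual bytes straight to the chunkservers, so the master never sees file contents and cannot count reads or writes by content.

```python
# Hypothetical sketch of the GFS write path: the master holds only
# metadata (chunk -> chunkserver locations); the client sends the
# actual bytes directly to the chunkservers.

class Master:
    def __init__(self):
        self.chunk_locations = {}  # chunk_id -> list of chunkserver ids

    def allocate_chunk(self, chunk_id, servers):
        # The master records *where* a chunk lives, never *what* is in it.
        self.chunk_locations[chunk_id] = servers
        return servers

class Chunkserver:
    def __init__(self, server_id):
        self.server_id = server_id
        self.chunks = {}  # chunk_id -> raw bytes stored on this machine

    def write(self, chunk_id, data):
        self.chunks[chunk_id] = data

def client_write(master, chunkservers, chunk_id, data):
    # 1. Ask the master which servers should hold the chunk (metadata only).
    servers = master.allocate_chunk(chunk_id, list(chunkservers))
    # 2. Push the data directly to each replica; the master never sees it.
    for sid in servers:
        chunkservers[sid].write(chunk_id, data)

servers = {i: Chunkserver(i) for i in range(3)}
m = Master()
client_write(m, servers, "chunk-0", b"hello gfs")
```

Note that after the write, `m.chunk_locations` knows the replica placement but holds no file data at all, which is why read/write frequency by content cannot be recovered from the master alone.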
Good
I have certain questions related to Java memory management and out-of-memory errors... where can I send them?
Really good explanation! However, I have one question. I may have missed something, but how exactly does it deal with chunks replicated over a couple of nodes? There may be a case where we use some data twice, which can impact the result.
I think that's why the client informs the master, right? The master has all the info about where the chunks are duplicated, so it can avoid duplicate servers.
Operations are run on only one of the 3 replicas (remember that out of 3 servers, 1 is the primary and the others are secondaries). If the primary fails, the GFS master sends the operation (the map function) to another secondary replica holding the data, keeping the data and the final result on the same server.
My humble answer; corrections are welcome.
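The failover idea described above can be sketched in a few lines (a toy illustration with hypothetical names, not the real MapReduce scheduler): the map function runs on the primary replica only, and if the primary is down, the master falls back to a secondary that already holds the same chunk, so no chunk is processed twice.

```python
# Hypothetical sketch: run a map task on the primary replica; if the
# primary is down, fall back to a secondary that holds the same chunk.

def run_map_on_replicas(replicas, alive, map_fn, chunk_data):
    """replicas: ordered list of server ids; replicas[0] is the primary."""
    for server in replicas:            # try the primary first, then secondaries
        if server in alive:
            # Execute the map function where the data already lives, so the
            # result stays on the same server as the input chunk.
            return server, map_fn(chunk_data)
    raise RuntimeError("all replicas for this chunk are down")

def word_count_map(text):
    # A classic MapReduce map function: count words in one chunk.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

replicas = ["s1", "s2", "s3"]          # s1 is the primary replica
alive = {"s2", "s3"}                   # the primary s1 has failed
server, result = run_map_on_replicas(replicas, alive, word_count_map, "a b a")
```

Because exactly one live replica is chosen per chunk, the replicated copies never contribute to the result more than once, which addresses the double-counting concern in the question above.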