Sir, I am a big follower of yours and have been following you for a long time. No one can explain the concepts like you do.
Great session. It helped me understand the key differences between MapReduce and Spark. Thanks.
Please post videos more often. Very nice explanation!
Best video on Spark vs MapReduce.
Many people complained that your course on Udemy is not for beginners. Why is that so?
While reading the file, why were 2 jobs created? Can someone help me understand?
My assumption is: the file we are reading has 9 blocks in total, which is why we have 9 partitions, and processing them involves 9 tasks. So how is the number of jobs decided here?
Maybe the number of stages is 1, because reading a file may not involve a wide transformation, so it would be one stage per job.
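One common cause (I can't say it's what happened in the video, but it's worth checking): reading a CSV with header and schema inference enabled makes Spark run small eager jobs to scan the file before any action is called. A minimal PySpark sketch, with a hypothetical file path, to observe this:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("job-count-demo").getOrCreate()
sc = spark.sparkContext

# header=True makes Spark read the first line to get column names, and
# inferSchema=True makes it scan the data to guess column types -- each
# of these can launch a small job even though no action has run yet.
df = spark.read.csv("hdfs:///data/sample.csv",  # hypothetical path
                    header=True, inferSchema=True)

# Jobs launched so far by the eager read (also visible in the Spark UI).
print("Jobs so far:", sc.statusTracker().getJobIdsForGroup())

# Tasks within a stage track the input partitions (~one per HDFS block).
print("Partitions:", df.rdd.getNumPartitions())

# An explicit action then launches a further job.
df.count()
```

In short: the number of tasks follows the number of input partitions (roughly one per HDFS block), stages are split at shuffle boundaries, and each action, plus eager steps like schema inference, launches its own job.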