I'm impressed, no copy-paste. Covered the whole architecture and did full justice to the title. Great work, you are heading in the right direction, keep going :).
Thanks Nikhil
I'm getting confident in Spark only because of you. Thanks so much!
Thanks
Thank you ma'am, you explain it very nicely; because of you I understand it.
And your voice is very beautiful.
Thank you so much
Quite informative for beginners. Appreciate the effort made.
Thanks Rajesh
Another great video
Very nice!! Simple and very informative 👍🏻
Thanks Dhaval
Thanks for making this video. Kindly make more videos on big data engineering.
Thanks Ramkumar, will surely do.
Great content Shreya, I really like all the videos and want to learn more about data lakes and best practices for designing data lakes. If you have some time and bandwidth, please put up a post on the above.
Thanks for all your hard work.
Thanks Sathwik.
Will surely do. I have posted one on what data lakes are: ruclips.net/video/wHG0ljN3plg/видео.html
Will post on best practices too.
This week I have posted on data lake best practices.
Ma'am you are awesome.. your talent for teaching with examples is like a gift from god to us.....
Ma'am, can you briefly tell me: if we use YARN, what would be the role of the cluster manager and its application master in Spark, since the driver communicates directly with the executors?
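For anyone else with the same doubt, here is a minimal PySpark sketch of how YARN fits in (the app name and executor count are just placeholders, not from the video). In this setup YARN's ResourceManager plays the cluster-manager role: it launches an ApplicationMaster, which negotiates containers for the executors, and once the executors register back, the driver schedules tasks on them directly, which is why it looks like the driver talks straight to the executors.

```python
from pyspark.sql import SparkSession

# Hypothetical sketch of a YARN-backed session (requires HADOOP_CONF_DIR
# pointing at a real cluster). YARN's ResourceManager acts as the cluster
# manager, the ApplicationMaster requests executor containers, and after
# that the driver communicates with the executors directly for tasks.
spark = (
    SparkSession.builder
    .appName("yarn-role-demo")                # placeholder name
    .master("yarn")                           # cluster manager = YARN
    .config("spark.executor.instances", "2")  # placeholder executor count
    .getOrCreate()
)

print(spark.sparkContext.master)  # 'yarn'
spark.stop()
```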
Ma'am, there is a shuffle in coalesce, yet it is considered a narrow transformation. Why?
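A small local-mode sketch that may help with this doubt (the numbers are made up): coalesce on its own only merges partitions that already sit on the same executor, so no full shuffle is needed and it stays narrow; it is repartition (or coalesce with shuffle=True on the RDD API) that redistributes rows across the cluster and becomes a wide transformation.

```python
from pyspark.sql import SparkSession

# Local-mode sketch contrasting coalesce and repartition.
spark = (
    SparkSession.builder
    .master("local[4]")
    .appName("coalesce-demo")   # placeholder name
    .getOrCreate()
)

df = spark.range(0, 1_000_000, numPartitions=8)

# coalesce(2) only merges partitions already co-located on an executor,
# so it avoids a full shuffle -- that is why it counts as narrow.
narrow = df.coalesce(2)

# repartition(16) redistributes rows across all executors (full shuffle),
# which makes it a wide transformation.
wide = df.repartition(16)

print(narrow.rdd.getNumPartitions())  # 2
print(wide.rdd.getNumPartitions())    # 16
spark.stop()
```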
Good explanation and easy to follow!! :)
Hello Ma'am, I went through your video and the way you have delivered it is very nice.
I have one doubt, maybe a silly one, but I want to know: on a local system, meaning our laptop, when we execute PySpark code, who is the executor and who is the worker? And on the cloud, please explain who the executor and worker are.
Thanks in advance
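A minimal sketch for the laptop case (assuming PySpark; the app name is a placeholder): with the master set to local[*], the driver and a single executor run inside one JVM on your machine, and its threads act as the executor cores, so there is no separate worker node. On a cloud or standalone cluster the master instead points at a cluster manager (for example yarn or spark://host:7077), the worker nodes host the executor JVMs, and the driver runs on the client machine or on the cluster depending on deploy mode.

```python
from pyspark.sql import SparkSession

# Local-mode sketch: driver and executor share one JVM on the laptop;
# "local[*]" means use as many worker threads as there are CPU cores.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("laptop-demo")   # placeholder name
    .getOrCreate()
)

print(spark.sparkContext.master)              # local[*]
print(spark.sparkContext.defaultParallelism)  # roughly the number of cores
spark.stop()
```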
Is it the driver that reads all the data, divides it into partitions, and sends them to the executors?
Yes, the executors' job is just to execute.
@@BigDataThoughts Will that not create a bottleneck?
@@guptaashok121 No, it won't, because the actual data resides on the executors; the driver is just the coordinator. While assigning tasks, the principle of data locality is kept in mind.
@@BigDataThoughts That's exactly my doubt: if the driver is not reading the data, then how does it tell the executors which data to work on?
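To make the coordination concrete, here is a hedged PySpark sketch (the CSV path is hypothetical). The driver never pulls the rows itself: at planning time it only collects partition metadata from the source (file splits and their locations) and builds the execution plan; when an action runs, each executor reads its own split directly from storage, and the scheduler tries to place each task close to where its split lives (data locality).

```python
from pyspark.sql import SparkSession

# Sketch only -- the path below is hypothetical.
spark = (
    SparkSession.builder
    .master("local[4]")
    .appName("locality-demo")   # placeholder name
    .getOrCreate()
)

# Lazy: the driver records the file splits and the plan, not the rows.
df = spark.read.csv("/data/events/*.csv", header=True)
print(df.rdd.getNumPartitions())   # partitions derived from the splits

# The action triggers the job; each executor reads and processes its own
# split in parallel, with tasks scheduled for data locality where possible.
print(df.count())
spark.stop()
```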
Thank you 😊
Nice
Thanks Sagar
Love it!!
This is what you call WOMEN IN TECH.. absolutely fantastic!!!
Thanks Mithun
Thanks ma'am... Today I have subscribed to BigData Thoughts.
Thanks
Nice
Thanks