Spark Performance Tuning | Memory Architecture | Interview Question
- Published: 7 Sep 2024
- #Apache #Spark #Performance #Memory
In this video, we discuss Spark performance optimization for efficient memory management.
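To recap the arithmetic behind Spark's unified memory model (assuming that is the memory architecture the video covers): after a fixed reservation of about 300 MB, spark.memory.fraction (default 0.6) of the remaining executor heap becomes the shared execution/storage pool, and spark.memory.storageFraction (default 0.5) sets the evictable storage share within that pool. A minimal sketch of the calculation in plain Python; the 10 GB heap is an illustrative assumption, not a recommendation:

```python
# Spark unified memory model (Spark 1.6+), default settings.
RESERVED_MB = 300          # fixed reserved memory
MEMORY_FRACTION = 0.6      # spark.memory.fraction default
STORAGE_FRACTION = 0.5     # spark.memory.storageFraction default

def memory_regions(executor_heap_mb):
    """Return (unified pool, storage region, execution region, user memory) in MB."""
    usable = executor_heap_mb - RESERVED_MB
    unified = usable * MEMORY_FRACTION      # execution + storage pool
    storage = unified * STORAGE_FRACTION    # cached blocks (evictable)
    execution = unified - storage           # shuffles, joins, sorts, aggregations
    user = usable - unified                 # user data structures, UDF objects
    return unified, storage, execution, user

unified, storage, execution, user = memory_regions(10 * 1024)  # 10 GB heap
print(f"unified={unified:.0f} MB storage={storage:.0f} MB "
      f"execution={execution:.0f} MB user={user:.0f} MB")
```

Note that the storage/execution boundary is soft: execution can borrow from storage by evicting cached blocks, while storage can only borrow execution memory that is currently free.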
Please join as a member of my channel to get additional benefits like materials on Big Data and Data Science, live streams for members, and much more.
Click here to subscribe : / @techwithviresh
About us:
We are a technology consulting and training provider specializing in areas such as Machine Learning, AI, Spark, Big Data, NoSQL, graph databases, Cassandra, and the Hadoop ecosystem.
Mastering Spark : • Spark Scenario Based I...
Mastering Hive : • Mastering Hive Tutoria...
Spark Interview Questions : • Cache vs Persist | Spa...
Mastering Hadoop : • Hadoop Tutorial | Map ...
Visit us :
Email: techwithviresh@gmail.com
Facebook : / tech-greens
Twitter : @TechViresh
Thanks for watching
Please Subscribe!!! Like, share and comment!!!!
The videos are awesome, with good content and great explanations, but a summary at the end would be a great addition to recap all the points covered. Thanks for the video.
Thanks, Sure we will take care of that:)
Nice, I saw this at Spark Summit, but you explained it well. Kudos to you.
very nice!!
Awesome explanation , Thanks :)
Thank you for the awesome videos..
Nice Explanation once again thank you!!!!
Thanks:)
Nice, sir. Keep making more videos 😊
Thanks:)
Nice video sir
Thanks:)
Hi, thanks for the video. I have a question:
Suppose I have 10 GB of CSV data. If I read it using spark.read.csv, how many partitions will it create by default? Will it treat every block as one partition and create 80 partitions?
Uncompressed CSV is splittable in Spark, so the number of partitions is not tied to the number of nodes: file scans are packed into splits of at most spark.sql.files.maxPartitionBytes (128 MB by default), so a 10 GB file yields roughly 80 partitions, which matches your block-based estimate.
@@TechWithViresh Got it, thanks. So if 80 partitions turns out to be too few for my cluster, can I lower maxPartitionBytes or repartition after reading?
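Following up on this thread: for a splittable file, the default partition count is roughly the file size divided by spark.sql.files.maxPartitionBytes. Spark's actual split planner also folds in spark.sql.files.openCostInBytes and the default parallelism, which this back-of-the-envelope sketch deliberately ignores:

```python
import math

# spark.sql.files.maxPartitionBytes default (128 MB)
MAX_PARTITION_BYTES = 128 * 1024 * 1024

def estimate_partitions(file_size_bytes, max_partition_bytes=MAX_PARTITION_BYTES):
    """Rough partition estimate for a splittable input (e.g. uncompressed CSV)."""
    return math.ceil(file_size_bytes / max_partition_bytes)

print(estimate_partitions(10 * 1024**3))  # 10 GB file
```

A gzip-compressed CSV, by contrast, is not splittable and comes in as a single partition regardless of size, which is one reason splittable formats or codecs are preferred for large inputs.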