Could have been better if he had explained how they arrived at those parameter values, with before-and-after scenarios. And a demo showing how the cluster behaves with before-and-after values for spark.executor.cores, spark.executor.memory, spark.driver.memory, spark.driver.cores, and spark.executor.instances — rather than just dynamic allocation enabled with min and max executor counts — would have been much more useful.
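For anyone who wants to compare the two approaches the comment above contrasts, here is a minimal spark-submit sketch. The numeric values are placeholders for illustration, not the speakers' recommendations, and `my_job.py` is a hypothetical application:

```shell
# Static sizing: fix executor count and per-executor resources up front.
spark-submit \
  --conf spark.executor.instances=50 \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=8g \
  --conf spark.driver.cores=2 \
  --conf spark.driver.memory=4g \
  my_job.py

# Dynamic allocation: set min/max bounds and let Spark scale executors.
# The external shuffle service is required so executors can be removed safely.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=10 \
  --conf spark.dynamicAllocation.maxExecutors=100 \
  --conf spark.shuffle.service.enabled=true \
  my_job.py
```

Running both against the same workload and comparing stage times in the Spark UI would give exactly the before/after picture the comment asks for.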
FYI, when you hear "executor um", he means executor OOM (out of memory).
Slides: www.slideshare.net/databricks/tuning-apache-spark-for-largescale-workloads-gaoxiang-liu-and-sital-kedia
Thanks guys, wonderfully helpful talk !!
nice presentation mate. thanks for the information.
Wow, awesome! Thank you!!
Thank you. It helps.
We're really stalking him, aren't we…