I would love to see an example for the salting slide that is missing.
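Not an official one, but here's a rough sketch of what key salting usually looks like in Spark (the table names, values, and salt count are all made up for illustration): append a random salt to the skewed side and replicate the small side once per salt value, so a hot key spreads across several partitions instead of landing on one.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SaltedJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("salted-join").getOrCreate()
    import spark.implicits._

    val numSalts = 8 // hypothetical; size it to the observed skew

    // Large table where a few join keys dominate (the skewed side).
    val facts = Seq(("hot", 1), ("hot", 2), ("cold", 3)).toDF("key", "value")
    // Small lookup table, one row per key.
    val dims = Seq(("hot", "a"), ("cold", "b")).toDF("key", "attr")

    // Append a random salt so each hot key spreads over numSalts partitions.
    val salted = facts.withColumn("salt", (rand() * numSalts).cast("int"))

    // Replicate every lookup row once per salt value so each
    // (key, salt) pair on the salted side still finds its match.
    val replicated = dims.withColumn("salt", explode(array((0 until numSalts).map(lit(_)): _*)))

    salted.join(replicated, Seq("key", "salt")).drop("salt").show()
  }
}
```

The cost is replicating the small side numSalts times, so it only pays off when the skew is actually hurting you.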
Thanks for superbly breaking down the mistakes and their solutions. Thanks for the excellent presentation.
Did anyone notice Sameer Farooqui taking photos when the Q&A started?
Awesome guys, all of them!
I am new to Spark and after viewing this presentation I see there's a lot to learn. I liked it a lot, thanks!
Excellent. Best wishes.
At 6:21 it should say divide by 1 + 0.07, not multiply by 1 - 0.07. Also, in more recent versions of Spark the overhead has gone up from 7% to 10%.
Absolutely agree; the division is correct.
Thanks for clarification.
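To spell out the corrected arithmetic (a sketch, assuming the ~21 GB per-executor budget used in the talk and the old 7% overhead default):

```scala
// Spelling out the arithmetic from 6:21. Assumes the talk's ~21 GB
// per-executor budget and the old 7% memoryOverhead default
// (newer Spark versions use 10%).
val budgetGb = 21.0
val overhead = 0.07

// YARN needs heap + heap * overhead to fit in the budget,
// so heap = budget / (1 + overhead), not budget * (1 - overhead).
val heapRight = budgetGb / (1 + overhead) // ~19.63 GB
val heapWrong = budgetGb * (1 - overhead) // ~19.53 GB, slightly undershoots

println(heapRight * (1 + overhead)) // 21.0  -- fills the budget exactly
println(heapWrong * (1 + overhead)) // ~20.9 -- leaves memory unused
```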
5 cores per executor did not work for us. The best number for us is 3 on-prem and 2 on EMR; anything larger gave us IO exceptions. You need to tune it case by case.
Great
Hi Mark, awesome explanation of the executor and executor-memory calculations. But this shows how to use the maximum number of cores or executors in a given environment to achieve maximum parallelism. I would add one more point: if we have a very heavy memory load to deal with, we have to trade off the number of executors/cores for executor memory. That means in the case of a massive memory load we may have to go with fewer executors (fewer than 17) and more memory per executor (more than 19 GB). Please correct me if I am wrong. Thanks.
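That matches my understanding. A rough sketch of the trade-off, using the talk's example cluster (6 nodes, 16 cores and 64 GB per node, 1 core and 1 GB reserved per node, 7% overhead, one slot for the YARN AM; all numbers assumed from the presentation):

```scala
// Executor-count vs. executor-memory trade-off on the talk's example
// cluster. The reservations, overhead, and AM slot are assumptions.
val nodes = 6
val usableCores = 15   // 16 minus 1 for OS/daemons
val usableMemGb = 63.0 // 64 minus 1 for OS/daemons

def layout(executorsPerNode: Int): (Int, Int, Double) = {
  val executors = nodes * executorsPerNode - 1       // one slot for the YARN AM
  val cores = usableCores / executorsPerNode
  val heapGb = usableMemGb / executorsPerNode / 1.07 // carve out the overhead
  (executors, cores, heapGb)
}

println(layout(3)) // (17, 5, ~19.6 GB) -- the talk's max-parallelism layout
println(layout(2)) // (11, 7, ~29.4 GB) -- fewer, fatter executors for heavy memory loads
```

One caveat from the talk: going much past 5 cores per executor tends to hurt HDFS I/O throughput, so fatter executors trade away more than just parallelism.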
Why couldn't they just let them speak and finish their presentation, for god's sake? Was it that big of a problem to let them cover their last 2 mistakes? lol. The last one (caching vs. persisting) was very interesting.
Awesome sharing, thanks a lot!
Thank you, guys! You did a great job.
Damn, 5 years ago... I absolutely loved the presentation. Keeping it engaging is a difficult job; you did great. Also, is it just me, or do these two faces look very familiar by the time the video ends?
Great topic, great explanation!
It's awesome, thanks a lot!
What does Cloudera know about Spark applications? They don't even update their versions.
Thanks a lot. Very helpful!
These are also the top reasons Spark is still relatively unpopular :-/
Really? I thought it was already popular in 2020. If not, what else is gaining attention instead?
What about loading small files?
But what do you do if you only have a 7-node cluster with 4 cores and 8 GB of RAM?
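A sketch of the talk's recipe applied to that cluster, assuming 4 cores and 8 GB means per node and that you reserve 1 core and ~1 GB per node for the OS and daemons:

```scala
// Rough sizing for 7 nodes x 4 cores x 8 GB, following the talk's recipe.
// All reservations and the 7% overhead factor are assumptions, not rules.
val nodes = 7
val usableCores = 4 - 1      // 1 core reserved per node for OS/daemons
val usableMemGb = 8.0 - 1.0  // ~1 GB reserved per node

val executorsPerNode = 1     // only 3 cores left, so one 3-core executor
val heapGb = usableMemGb / executorsPerNode / 1.07 // carve out the overhead
val totalExecutors = nodes * executorsPerNode - 1  // one slot for the YARN AM

println(f"$totalExecutors executors, $usableCores cores, $heapGb%.1f GB heap each")
// 6 executors, 3 cores, 6.5 GB heap each
```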
awesome
What was the tool he was talking about for Spark unit testing?
I think he said JUnit.
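If it was JUnit, the usual pattern is running a local-mode SparkSession inside the test (Holden Karau's spark-testing-base library is another common choice). A minimal sketch, assuming JUnit 4:

```scala
import org.apache.spark.sql.SparkSession
import org.junit.{After, Before, Test}
import org.junit.Assert.assertEquals

// A minimal JUnit 4 sketch for unit testing Spark logic with a
// local-mode SparkSession (no cluster required).
class WordCountTest {
  private var spark: SparkSession = _

  @Before def setUp(): Unit = {
    spark = SparkSession.builder().master("local[2]").appName("test").getOrCreate()
  }

  @After def tearDown(): Unit = spark.stop()

  @Test def countsWords(): Unit = {
    val counts = spark.sparkContext
      .parallelize(Seq("a b", "b"))
      .flatMap(_.split(" "))
      .countByValue()
    assertEquals(2L, counts("b"))
  }
}
```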
Very cool :) ..!
What is that special collection for doing ETL?
I have the same question. Until now I have been doing ETL using DataFrames only; I've never used any custom collections.
What would be the solution for the 2 GB Spark shuffle block size limit?
Limit the size of the partitions.
Resize the partitions.
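To make these replies concrete: the usual fix is to raise the partition count so no single shuffle block approaches 2 GB. A sketch, where the path, column name, and partition count are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object ShuffleSizing {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("shuffle").getOrCreate()

    // More partitions => smaller shuffle blocks; keep each well under 2 GB.
    spark.conf.set("spark.sql.shuffle.partitions", "2000") // default is 200

    // Or repartition explicitly before a wide operation.
    // "/data/events" and "customer_id" are hypothetical.
    val df = spark.read.parquet("/data/events")
    val resized = df.repartition(2000, col("customer_id"))
    resized.write.mode("overwrite").parquet("/data/events_resized")
  }
}
```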
Where are the slides?
The data quality check article mentioned in 22:52 can be found here web.archive.org/web/20181116232422/blog.cloudera.com/blog/2015/07/how-to-do-data-quality-checks-using-apache-spark-dataframes/
How does each node get 3 executors, as shown at ruclips.net/video/WyfHUNnMutg/видео.html?
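If I remember the example right, each node has 16 cores, 1 is left for the OS and Hadoop daemons, and the remaining 15 divided by 5 cores per executor gives 3 executors per node:

```scala
// Per-node executor count from the talk's example (numbers assumed):
// 16 cores per node, 1 reserved for OS/Hadoop daemons, 5 cores per executor.
val executorsPerNode = (16 - 1) / 5
println(executorsPerNode) // 3
```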
Spark, by itself, is not intended to handle CPU-intensive operations on your data. If you have a process against the data that requires a lot of CPU or memory resources and/or is consuming CPU time, move that process into a microservice or competing consumer pattern. This problem will bog down your data handling and prevent you from using Spark effectively.
I can't understand what he is saying!!