This is by far the best databricks and spark tutorial series on youtube... great job Raja
Glad you think so! Thanks for your comment
Thanks, Raja, your explanations are really good...can you please make a video on salting techniques with example? It will be very helpful.
Thank you Suman. Sure, will make a video on salting
Your course is the best. But one problem with your course is that you are not attaching the GitHub link for your sample data and code. I request you, as your audience, to please do this. Thanks
Thanks Raja. Your video is really useful. Can you please create a video on debugging techniques and how we can use the Spark UI to debug and understand bottlenecks, with use cases? Thanks a lot again
Sure Asif, will post a video on debugging
Do you have a document with all these details? If yes, it would be great to share it on Git. Really great explanation, thank you!!
Awesome content. Thank you so much, Sir
Glad you liked it
You are the best Raja 🙌
Thanks for the video, I have a question: is the salting technique applied while reading the data from the source, or during intermediate processing of the application?
It is applied during the transformation stage, not during data extraction
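To illustrate the point above, here is a minimal sketch of the salting idea in plain Python (the country values and the number of salt buckets are made up for illustration; in a real Spark job you would add the salted key with `withColumn` and `concat` during the transformation stage, after the data is read):

```python
import random
from collections import Counter

random.seed(42)   # fixed seed so the sketch is reproducible
NUM_SALTS = 4     # assumed bucket count; tune it to the observed skew

def salt_key(country: str) -> str:
    """Append a random salt suffix so one hot key spreads over several buckets."""
    return f"{country}-{random.randrange(NUM_SALTS)}"

# Skewed input: Germany holds 80% of the rows, as in the video's example
rows = ["Germany"] * 80 + ["France"] * 5 + ["Spain"] * 5 + ["Italy"] * 5 + ["Poland"] * 5

salted = [salt_key(c) for c in rows]
buckets = Counter(salted)

# Germany's 80 rows are now split across Germany-0 .. Germany-3,
# so no single shuffle partition receives all of them
germany_buckets = {k: n for k, n in buckets.items() if k.startswith("Germany")}
print(germany_buckets)
```

In a salted join, the other (smaller) side would be exploded into all `NUM_SALTS` key variants so matches are preserved; after the join or aggregation, the salt suffix is stripped and partial results are combined.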
Thanks Bro
Hi, thank you for such useful videos. I have one question: I am still confused about the executor boundary versus the cores/tasks boundary. In your first video you mentioned that an executor can have many cores and lots of RAM, and in this video you mention that an executor runs in its own JVM process. Does that mean all the cores/tasks run under one JVM process? Or are there, under that parent JVM process, many more JVM processes, equal to the number of cores/tasks?
Your videos are very informative. Can you please post a video on client mode vs cluster mode vs local mode?
Sure Merin, will post the video on this topic
You mainly focus on theory. It would be great if you wrote the code for salting as well.
Sure, will post another video with coding example
Hi Raja, quick question: does AQE take care of the salting and skew-hint techniques automatically in case of data skewness? Or do we have to apply them explicitly?
Yes, AQE handles data skewness automatically. It was introduced in Spark 3.0 and is enabled by default from Spark 3.2 onward; in earlier versions we just need to enable AQE through the Spark config settings
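For reference, a sketch of the relevant settings (these Spark SQL config keys exist in Spark 3.x; it assumes an existing `spark` session, and the two threshold values shown are just the documented defaults):

```python
# Enable AQE (default from Spark 3.2 onward) and its skew-join handling
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

# Optional knobs controlling when a shuffle partition counts as skewed
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", "5")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256MB")
```

With these enabled, AQE splits oversized shuffle partitions at join time, which covers many of the cases where manual salting would otherwise be needed.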
@rajasdataengineering7585 Thanks a lot for your response. Do you have a Telegram channel? And may I know your LinkedIn ID, please?
Superb
Thank you
Why can't we set maxPartitionBytes to get equal-sized partitions and handle data skewness?
nice
Thanks
@rajasdataengineering7585 Please explain salting in detail. It's not clear how you partition Germany-1, Germany-2 and so on. Will each record become one partition in this case?
thank you
Welcome!
I have a doubt: when you say data is partitioned on country and there are five different countries, out of which, let's say, Germany has 80% of the data, how can I say that Germany's data is in a single partition only? Because a partition is determined by block size, and 1 partition = 128 MB, so depending on its size, Germany's data could be split into multiple partitions automatically?
I had the same question
Same question