Fantastic, thanks for sharing this content!
It will become more fantastic when you share it with your network on LinkedIn and tag us... 🤩 We definitely need some exposure ☺️
Thanks for creating such awesome content.
Thanks. Please make sure to share with your network 🛜
Could you please create a video on OOM exceptions: how to replicate them, the scenarios in which they occur, and how to avoid them?
Hello,
I understand the request, but it will not be possible to capture all issues/scenarios in RUclips sessions. I will try to create a mini-series later that covers this topic.
The easiest way to create an OOM exception, and the most common one, is to create a driver with a smaller memory size and then read a dataset bigger than that and collect() it for display. collect() will try to fit all the data in driver memory, which will result in OOM.
And to fix this OOM, use take() in place of collect().
Hope this helps.
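For anyone who wants to reproduce this, here is a minimal sketch of that exact scenario. The session settings, app name, and row count are illustrative assumptions, not from the video:

```python
# Minimal sketch: a deliberately small driver heap plus a dataset too large
# to collect(). Sizes below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")
    .config("spark.driver.memory", "1g")  # small driver heap on purpose;
                                          # must be set before the JVM starts
    .appName("oom-demo")
    .getOrCreate()
)

# Far more rows than a 1g driver heap can materialize at once.
df = spark.range(0, 500_000_000)

# rows = df.collect()  # pulls every row to the driver -> java.lang.OutOfMemoryError
rows = df.take(10)     # fetches only the first 10 rows -> safe for inspection
print(rows)
```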
@@easewithdata I can understand. Thanks, you are my big data guru.
Hi sir, will these topics be enough to learn PySpark?
Yes, all of this should be sufficient to get you started.
What if both tables are very small, say one is 5 MB and the other is 9 MB? Which df is broadcast across the executors then?
In that case it doesn't matter much; however, AQE always prefers to broadcast the smaller table.
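You can see the choice in the query plan. A minimal sketch (the tiny DataFrames below are stand-ins for the 5 MB and 9 MB tables; real tables under the threshold behave the same way):

```python
# Minimal sketch: checking which side Spark broadcasts when both are small.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.master("local[*]").appName("bcast-demo").getOrCreate()

a = spark.range(0, 1_000).withColumnRenamed("id", "k")   # stand-in for the 5 MB table
b = spark.range(0, 2_000).withColumnRenamed("id", "k")   # stand-in for the 9 MB table

# With both sides under spark.sql.autoBroadcastJoinThreshold (10 MB by default),
# the plan shows a BroadcastHashJoin; the BroadcastExchange appears on the side
# Spark chose to broadcast.
a.join(b, "k").explain()

# You can also force a specific side with an explicit hint:
a.join(broadcast(b), "k").explain()
```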
@@easewithdata Thanks! I've been following you for more than a month and it's been a great learning experience. We want you to make an end-to-end project in PySpark.
Thank you! 👍
Hi, where do I get the Spark session master details in local Spark? I am using local[8]; I can see only the driver using all 8 cores but no executors after defining the session. I believe it could be because of the master!
Hello,
Local execution supports only a single node, which is the driver. It uses threads on your machine to execute tasks in parallel. If you need more executors, you have to configure a cluster and point your master at it.
Please check out the beginning of the series to understand more.
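A minimal sketch of what this looks like in practice (the cluster URL at the end is a hypothetical example):

```python
# Minimal sketch: in local mode there is a single JVM (the driver) running
# tasks on worker threads, so the UI shows no separate executors.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[8]")   # 8 task threads inside the one driver process
    .appName("local-demo")
    .getOrCreate()
)

print(spark.sparkContext.master)               # local[8]
print(spark.sparkContext.defaultParallelism)   # 8 in this mode

# For real executors, point master at a cluster instead, e.g. (hypothetical host):
# SparkSession.builder.master("spark://my-master-host:7077")
```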
What happens to the table we saved in storage if we use the in-memory catalog? Will the table files get deleted after the session?
If you are working with the in-memory catalog, the metadata will be lost once the compute or cluster is restarted; the underlying table files are not deleted, but the table will no longer be registered in the catalog. This is why it is recommended to have a permanent catalog.
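A minimal sketch of the behavior (the table name is hypothetical):

```python
# Minimal sketch: with the default in-memory catalog, saveAsTable writes data
# files under spark.sql.warehouse.dir, but the table's metadata lives only in
# the current session.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")
    .config("spark.sql.catalogImplementation", "in-memory")  # the default
    # .enableHiveSupport()  # use a Hive metastore for a permanent catalog instead
    .getOrCreate()
)

spark.range(5).write.mode("overwrite").saveAsTable("demo_table")
print([t.name for t in spark.catalog.listTables()])  # ['demo_table'] in this session

# After a restart: the data files under the warehouse dir still exist,
# but 'demo_table' is no longer registered in the catalog.
```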
@@easewithdata Thank you. This is the best content I have seen about Spark.
How many more videos are to come in this course?
Three more to go before a wrap-up.