Spark Scenario Interview Question | Persistence Vs Broadcast
- Published: 26 Jul 2024
- #Spark #Persist #Broadcast #Performance #Optimization
Please join my channel as a member to get additional benefits like materials in Big Data and Data Science, live streams for members, and much more.
Click here to subscribe : / @techwithviresh
About us:
We are a technology consulting and training provider, specializing in technology areas such as Machine Learning, AI, Spark, Big Data, NoSQL, graph databases, Cassandra, and the Hadoop ecosystem.
Mastering Spark : • Spark Scenario Based I...
Mastering Hive : • Mastering Hive Tutoria...
Spark Interview Questions : • Cache vs Persist | Spa...
Mastering Hadoop : • Hadoop Tutorial | Map ...
Visit us :
Email: techwithviresh@gmail.com
Facebook : / tech-greens
Thanks for watching
Please Subscribe!!! Like, share and comment!!!!
This scenario was not clear when I went through other videos, but after your explanation I understood the difference. Excellent!
Super content, thank you.
Excellent... Billion-dollar video...
Thanks for explaining it in an easy way :)
Data in memory and data on disk do not occupy the same space. So if it is 12 GB on disk, in memory it can be 18 GB.
Hi Viresh, thanks for the video.
Can you confirm the statement below?
With persist, each executor saves its partitions of the DataFrame in memory, while with broadcast each executor saves the entire DataFrame in memory?
Hi Viresh, thanks for this nice video. I believe a broadcast variable is used to broadcast a small table and join it with a huge table, which avoids shuffling. What happens if we broadcast a table with a large number of columns to the executors? Assume the broadcast table is larger in size because it has more columns.
You would not be able to leverage the benefit of a broadcast join in that case. Even if you forcefully enable broadcast, you would not notice much impact or improvement.
@@dipanjansaha6824 Sounds good...Thanks
This would fill up the memory quickly and can result in an out-of-memory issue; that's why the default size limit for auto broadcast is 10 MB. Also, as discussed in a separate video, the memory footprint of a broadcast becomes about 4 times the actual size. Thanks.
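The numbers in this reply can be sketched in plain Python. This is a back-of-envelope illustration, not Spark's actual planner logic: the 10 MB default for `spark.sql.autoBroadcastJoinThreshold` is real, but the 4x in-memory multiplier is the rule of thumb quoted in the video, not an exact guarantee.

```python
AUTO_BROADCAST_THRESHOLD_MB = 10  # Spark's default spark.sql.autoBroadcastJoinThreshold
IN_MEMORY_MULTIPLIER = 4          # rough deserialized-footprint factor from the video

def can_auto_broadcast(table_size_mb: float) -> bool:
    """Would Spark's default config consider this table for auto-broadcast?"""
    return table_size_mb <= AUTO_BROADCAST_THRESHOLD_MB

def estimated_broadcast_footprint_mb(table_size_mb: float, num_executors: int) -> float:
    """Approximate cluster-wide memory once the table is broadcast to every executor."""
    return table_size_mb * IN_MEMORY_MULTIPLIER * num_executors

print(can_auto_broadcast(8))                   # True: 8 MB fits under the 10 MB default
print(estimated_broadcast_footprint_mb(8, 3))  # 96.0: ~8 MB x 4 x 3 executors
```

A wide table (many columns) pushes the estimated size over the threshold, which is why forcing broadcast on it brings little benefit.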
Thanks Viresh for the information.
Is it approximately 12 GB after persisting? Is there any significant overhead when the data is in memory?
If we persist with serialization, we incur CPU overhead (to deserialize on each access); otherwise there is no extra overhead.
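The trade-off in this reply can be illustrated in plain Python with `pickle` (Spark's serialized storage levels such as `MEMORY_ONLY_SER` behave analogously with Spark's own serializers): serialized data is more compact, but you pay CPU to decode it on every access.

```python
import pickle

data = list(range(100_000))

# Encoding costs CPU once; decoding costs CPU on every read of the
# serialized copy -- that is the overhead mentioned above.
serialized = pickle.dumps(data)
restored = pickle.loads(serialized)

print(restored == data)  # True: same content, traded CPU for a compact footprint
```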
Why is the number of partitions three in the case of a broadcast join? Can we keep it low, like 1 or 2, or why not keep all the partitions on a single executor?
It would throw an out-of-memory error in that case for sure.
Hey Viresh, where do you study these concepts? Please share resources.
A broadcast variable is one copy per node, right? Why would it be 36 GB?
What I understood: in the case of a broadcast variable, the 12 GB of data is copied to all three nodes, 12 GB per node, adding up to 36 GB.
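The arithmetic in this thread can be sketched in a few lines of Python, assuming the video's numbers (a 12 GB DataFrame on a cluster of 3 executors):

```python
DF_SIZE_GB = 12
NUM_EXECUTORS = 3

# Persist: the DataFrame is split into partitions across the executors,
# so the cluster holds roughly one copy in total.
persist_total_gb = DF_SIZE_GB                    # ~12 GB cluster-wide

# Broadcast: every executor receives a full copy of the DataFrame.
broadcast_total_gb = DF_SIZE_GB * NUM_EXECUTORS  # 12 GB x 3 = 36 GB

print(persist_total_gb, broadcast_total_gb)      # 12 36
```

This ignores serialization overhead and the parent copy on the driver, which the other comments discuss; it only captures the per-node-copy point made here.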
Hi Viresh, I am new to Spark so cut me some slack for asking newbie questions.
With persistence, the DataFrame is held either in memory or on disk. Suppose the data read from the data lake is held in executor memory, which in the given case is 4 GB and is completely occupied.
Now I want to read another DataFrame. How will the executor deal with it, since its memory is already occupied by the previously persisted DataFrame?
With broadcast, the memory footprint is said to be 4 times the DataFrame's size. Where does this memory come from, since each executor has only 4 GB? I also read somewhere that after garbage collection only 3 times is left. Why is that?
With persistence, the data is said to be stored in memory. Is that just the executor memory, which is 4 GB, or the entire system memory?
Thanks in Advance.
Why are you taking only 3 executors here?
In the case of broadcast, why do we have to include the 12 GB of the existing DataFrame? I feel it is unfair to compare persist with broadcast. Is it possible to avoid the 12 GB?
It is the primary/parent data from which the copies shared to the executors were created. Until the application ends, this parent data remains part of the program.
Not clear... you have only explained persistence.
I don't understand the premise of sending the whole dataset to each executor. You are defeating the purpose of Spark, which is distributing data over the network.
Second, if you clearly state what the comparison is, then this is a really straightforward task (I guess you also forgot about garbage collection of the original 12 GB of data; correct me if I am wrong). I would be more interested in a comparison of data in transit.
Lastly, I think it would be more challenging to compare the shuffle and broadcast operations.
No voice clarity.
Not clearly explained.
Please post your doubt/question and we will try to answer it. Thanks for the feedback.
@Technical Tutorials: Please be specific in giving feedback so people can help you! Be as specific as possible, please.