21 Broadcast Variables and Accumulators in Spark | How to use Spark Broadcast Variables

  • Published: Dec 1, 2024

Comments • 18

  • @NiteeshKumarPinjala
    @NiteeshKumarPinjala 1 month ago +1

    Hi Subham, I have a few questions on cache and broadcast:
    1. Can we un-broadcast DataFrames or variables, the same way we unpersist cached data?
    2. When our cluster is terminated and restarted, do the broadcast variables or cached data still exist, or do they vanish every time the cluster is terminated?

    • @easewithdata
      @easewithdata  18 days ago +1

      1. You can suppress the broadcast using a Spark config.
      2. Yes, the cluster is cleaned up, so broadcast variables and cached data do not survive a restart.
      If you like my content, please make sure to share it with your network over LinkedIn 💓
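
      A minimal PySpark sketch of the points above (the lookup data and names are illustrative, not from the video): broadcast variables can be removed with unpersist()/destroy(), and automatic broadcast joins can be suppressed via config.

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("broadcast_cleanup").getOrCreate()

      # Ship a small lookup dict to every executor
      dept_lookup = spark.sparkContext.broadcast({1: "HR", 2: "Finance", 6: "IT"})

      # Remove the cached copies from the executors; the variable is
      # re-broadcast lazily if it is used again
      dept_lookup.unpersist()

      # Remove it from the executors and the driver; it cannot be used afterwards
      dept_lookup.destroy()

      # For DataFrame broadcast joins, automatic broadcasting can be suppressed
      # by disabling the size threshold
      spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)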

  • @sureshraina321
    @sureshraina321 11 months ago +2

    @8:50, I have one small doubt: we have already filtered on department_id == 6, so there won't be any department other than 6. Do we really need to groupBy(department_id) after filtering?

    • @easewithdata
      @easewithdata  11 months ago +1

      Yes, since the data is already filtered you can apply sum directly; the groupBy is not mandatory (see the sketch at the end of this thread).

    • @sureshraina321
      @sureshraina321 11 months ago

      ​@@easewithdata
      Thank you 👍
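
      A minimal sketch of the point above, assuming a DataFrame emp_df with department_id and salary columns (illustrative names, not necessarily the video's):

      from pyspark.sql import functions as F

      # With groupBy, as in the video: only one group (department 6) remains
      emp_df.filter("department_id = 6") \
            .groupBy("department_id") \
            .agg(F.sum("salary")).show()

      # Without groupBy: after the filter only department 6 is left, so a plain
      # global aggregate returns the same total
      emp_df.filter("department_id = 6") \
            .agg(F.sum("salary").alias("total_salary")).show()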

  • @ayyappahemanth7134
    @ayyappahemanth7134 15 days ago

    One doubt, sir: when I did a direct where + sum, both stages took 0.8 s, whereas the accumulator took 3 s. Is that because the use case was forced for demonstration? Can you give an example where an accumulator would be beneficial? Even computation-wise, the accumulator went row by row, whereas the filter and exchange seem to use less compute.

    • @easewithdata
      @easewithdata  14 days ago

      Yes, this was just for demonstration; see the sketch below for a case where an accumulator actually helps.
      If you like my content, please make sure to share it with your network over LinkedIn 👍
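
      A minimal sketch (made-up data): an accumulator pays off when you want a side-effect count, such as bad records, as a by-product of a job you are running anyway, without a second pass over the data.

      # Accumulator for records that fail to parse
      bad_records = spark.sparkContext.accumulator(0)

      def parse(line):
          try:
              return int(line)
          except ValueError:
              bad_records.add(1)   # counted while the main job runs
              return 0

      rdd = spark.sparkContext.parallelize(["1", "2", "oops", "4"])
      total = rdd.map(parse).sum()     # one action gives both the sum and the bad-record count

      print(total, bad_records.value)  # read the accumulator only on the driver
      # Note: updates made inside transformations can be re-applied if a task is
      # retried; for exact counts, update accumulators inside actions like foreach().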

  • @devarajusankruth7115
    @devarajusankruth7115 6 months ago

    Hi sir, what is the difference between a broadcast join and a broadcast variable?
    In a broadcast join, a copy of the smaller DataFrame is also stored on each executor, so no shuffling happens across the executors.

    • @easewithdata
      @easewithdata  6 months ago +1

      Broadcast joins implement the same concept as broadcast variables; they simply make it easier to use with DataFrames (see the sketch below).
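
      A minimal sketch contrasting the two, assuming DataFrames emp_df and dept_df that share a department_id column (illustrative names):

      from pyspark.sql.functions import broadcast, udf
      from pyspark.sql.types import StringType

      # 1) Broadcast JOIN: a hint that ships the small DataFrame to every
      #    executor so the join needs no shuffle
      joined = emp_df.join(broadcast(dept_df), "department_id")

      # 2) Broadcast VARIABLE: the lower-level building block; you ship any
      #    small Python object yourself and look it up in your own code
      dept_map = spark.sparkContext.broadcast({1: "HR", 6: "IT"})
      lookup_dept = udf(lambda dept_id: dept_map.value.get(dept_id), StringType())
      named = emp_df.withColumn("dept_name", lookup_dept("department_id"))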

  • @sushantashow000
    @sushantashow000 5 months ago

    Can accumulator variables be used to calculate an average as well? When we calculate a sum, each executor can add up its own part, but an average won't work the same way.

    • @easewithdata
      @easewithdata  5 months ago

      Hello Sushant,
      To calculate the average, the simplest approach is to use two accumulators, one for the sum and another for the count. You can then divide the sum by the count to get the average (see the sketch below).
      If you like the content, please make sure to share it with your network 🛜
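
      A minimal sketch of the two-accumulator approach described above (the salary values are made up):

      sum_acc = spark.sparkContext.accumulator(0.0)
      count_acc = spark.sparkContext.accumulator(0)

      def track(salary):
          sum_acc.add(salary)
          count_acc.add(1)

      salaries = spark.sparkContext.parallelize([1000.0, 2000.0, 3000.0, 4000.0])
      salaries.foreach(track)   # foreach is an action, so each update is applied exactly once

      avg = sum_acc.value / count_acc.value   # accumulators are readable only on the driver
      print(avg)                              # 2500.0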

  • @TechnoSparkBigData
    @TechnoSparkBigData 11 months ago +1

    In the last video you mentioned that we should avoid UDFs, but here you used one to read the broadcast value. Will it impact performance?

    • @easewithdata
      @easewithdata  11 months ago +1

      Yes, we should avoid Python UDFs as much as possible. This example was just to demonstrate a use case for a broadcast variable.
      You can always use a UDF written in Scala and register it for use in Python (see the sketch at the end of this thread).

    • @TechnoSparkBigData
      @TechnoSparkBigData 11 months ago

      @@easewithdata thanks
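
      A minimal sketch of the Scala-UDF-from-Python idea mentioned above. It assumes a jar on the classpath containing a hypothetical Scala class com.example.DeptNameUDF that implements org.apache.spark.sql.api.java.UDF1; the class and column names are illustrative, not from the video.

      from pyspark.sql.types import StringType

      # Register the compiled Scala UDF under a SQL-callable name
      spark.udf.registerJavaFunction("dept_name_scala", "com.example.DeptNameUDF", StringType())

      # Call it from SQL/expr; it avoids the serialization overhead of a
      # regular Python UDF
      emp_df.selectExpr("employee_id", "dept_name_scala(department_id) AS dept_name").show()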

  • @DEwithDhairy
    @DEwithDhairy 9 months ago

    AWESOME

  • @at-cv9ky
    @at-cv9ky 9 months ago

    Please can you provide the link to download the sample data?

    • @easewithdata
      @easewithdata  9 months ago

      All datasets are available on GitHub. Check out the URL in the video description.