Performance Tuning in Spark

  • Published: 26 Nov 2024

Comments • 11

  • @EDWDB
    @EDWDB 1 year ago

    Thanks Bhawna, can you please make a video on monitoring and troubleshooting Spark jobs via the UI?

  • @oldoctopus393
    @oldoctopus393 1 year ago +2

    1) 0:54 - not correct. Datasets and DataFrames have to be serialized and de-serialized as well, but since these APIs impose structure on the data, those processes can be faster. Overall, RDDs give you more low-level control over data manipulations;
    2) not all DataFrames can be cached;
    3) UDFs can be compiled to native JVM bytecode with the help of the Catalyst optimizer. You may use df.explain() to see something like "Generated code: Yes" or "Generated code: No" in the output
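    A quick way to see point 3 in practice is to compare the physical plans of a built-in expression and a Python UDF. This is a minimal PySpark sketch (names like `codegen-demo` are made up for illustration): built-in expressions stay inside JVM whole-stage code generation, while a Python UDF shows up as a separate `BatchEvalPython` step in the plan.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import LongType

    spark = SparkSession.builder.master("local[1]").appName("codegen-demo").getOrCreate()

    df = spark.range(10)

    # Built-in expression: eligible for whole-stage codegen.
    builtin = df.select((F.col("id") * 2).alias("doubled"))
    builtin.explain()          # plan shows the Project inside WholeStageCodegen

    # Python UDF: runs in a Python worker, appears as BatchEvalPython in the plan.
    double_udf = F.udf(lambda x: x * 2, LongType())
    with_udf = df.select(double_udf("id").alias("doubled"))
    with_udf.explain()

    # Both compute the same values; they just differ in how the plan executes.
    builtin_vals = [r.doubled for r in builtin.collect()]
    udf_vals = [r.doubled for r in with_udf.collect()]

    spark.stop()
    ```

    The values match either way; the difference is only visible in the plan and in runtime cost.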

  • @krishnasai7550
    @krishnasai7550 4 months ago

    Hi Bhawna,
    I learned somewhere that we cannot uncache data but we can unpersist, so we should use persist instead of cache. But here you mentioned we can uncache. I'm a bit confused; which is correct?
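    Both statements are right, just at different API levels. The DataFrame API has `cache()`/`persist()` whose reverse is `unpersist()` (there is no `df.uncache()`), while the catalog/SQL layer does use the word "uncache" (`spark.catalog.uncacheTable`, SQL `UNCACHE TABLE`). A small PySpark sketch to show both:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[1]").appName("cache-demo").getOrCreate()
    df = spark.range(100)

    # DataFrame API: cache() is shorthand for persist(MEMORY_AND_DISK);
    # the reverse operation is unpersist(), not "uncache".
    df.cache()
    cached_level = df.storageLevel      # useMemory=True once marked for caching
    df.unpersist()
    after_level = df.storageLevel       # back to the default (no caching)

    # Catalog/SQL level: here the verb really is "uncache".
    df.createOrReplaceTempView("numbers")
    spark.catalog.cacheTable("numbers")
    is_cached = spark.catalog.isCached("numbers")   # True
    spark.catalog.uncacheTable("numbers")

    spark.stop()
    ```

    So "we use persist instead of cache" isn't required for being able to undo it; `unpersist()` undoes both.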

  • @CoolGuy
    @CoolGuy 1 year ago

    Bucketing and salting are also good optimization techniques.
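    For anyone curious what salting looks like: here is a minimal PySpark sketch of salting a skewed join key (the tables, the `hot`/`cold` keys, and `SALT_BUCKETS = 4` are all made up for illustration). The skewed side gets a random salt, the small side is replicated across every salt value, and the join key becomes `(key, salt)`.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    SALT_BUCKETS = 4  # tuning knob: more buckets spreads the hot key more finely

    spark = SparkSession.builder.master("local[1]").appName("salting-demo").getOrCreate()

    # Skewed fact side: many rows share the key "hot".
    facts = spark.createDataFrame(
        [("hot", i) for i in range(20)] + [("cold", 99)], ["key", "value"]
    )
    # Small dimension side: one row per key.
    dims = spark.createDataFrame([("hot", "H"), ("cold", "C")], ["key", "label"])

    # 1) Add a random salt to the skewed side.
    salted_facts = facts.withColumn(
        "salt", (F.rand(seed=42) * SALT_BUCKETS).cast("int")
    )
    # 2) Replicate the small side across every possible salt value.
    salted_dims = dims.crossJoin(
        spark.range(SALT_BUCKETS)
             .withColumn("salt", F.col("id").cast("int"))
             .drop("id")
    )
    # 3) Join on (key, salt): same rows as the plain join, but the hot key's
    #    work is now spread over SALT_BUCKETS partitions.
    salted_count = salted_facts.join(salted_dims, ["key", "salt"]).count()
    plain_count = facts.join(dims, "key").count()

    spark.stop()
    ```

    Note that on Spark 3.x, adaptive query execution (`spark.sql.adaptive.skewJoin.enabled`) can often handle skewed joins for you before manual salting is worth the complexity.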

  • @AyushSrivastava-gh7tb
    @AyushSrivastava-gh7tb 1 year ago

    Hi Bhawna. Your videos have helped me immensely in my databricks journey and I've nothing but appreciation for your work.
    Just a humble request, could you also please make a video on Databricks Unity Catalog??

    • @cloudfitness
      @cloudfitness  1 year ago +1

      Yes, already done; there's a playlist on UC 😀

  • @tanushreenagar3116
    @tanushreenagar3116 11 months ago

    So nice, it helps a lot

  • @AbhinavDairyFarm
    @AbhinavDairyFarm 5 months ago

    Please share the PPT; that will help us

  • @stevedz5591
    @stevedz5591 1 year ago

    How can we optimize a Spark DataFrame write to CSV? It takes a lot of time when it's a big file. Thanks in advance.
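    One common cause of slow CSV writes is funnelling everything into a single task (e.g. via `coalesce(1)`), so only one core writes one giant file. A hedged PySpark sketch of two easy wins: write several part files in parallel via `repartition`, and compress the output (the partition count of 4 and the temp path are just placeholders for your own values).

    ```python
    import os
    import tempfile

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[2]").appName("csv-write-demo").getOrCreate()

    df = spark.range(100_000).withColumnRenamed("id", "n")

    out_dir = os.path.join(tempfile.mkdtemp(), "out_csv")  # stand-in for your real path

    # Several part files written in parallel, gzip-compressed: both usually
    # cut wall-clock time for large outputs compared to one uncompressed file.
    (df.repartition(4)                    # 4 tasks -> 4 part files
       .write.mode("overwrite")
       .option("header", True)
       .option("compression", "gzip")
       .csv(out_dir))

    part_files = [f for f in os.listdir(out_dir) if f.startswith("part-")]
    round_trip = spark.read.option("header", True).csv(out_dir).count()

    spark.stop()
    ```

    If a downstream consumer insists on exactly one file, write in parallel first and merge afterwards, rather than forcing Spark itself through one task.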

  • @RohitSharma-ny1oq
    @RohitSharma-ny1oq 1 year ago

    Ma'am, your voice is like #SooteKoJgaDe ("could wake up someone asleep")