74. Databricks | Pyspark | Interview Question: Sort-Merge Join (SMJ)

  • Published: 1 Jan 2025

Comments • 40

  • @omprakashreddy4230 2 years ago +7

    You are here to make our lives simple. Thank you so much!!

  • @moviestime2346 1 year ago +4

    No one can explain better than this. Thanks, Raja, for your efforts and time.

  • @taikoktsui_sithlord 6 months ago +3

    To-the-point explanation, thanks!

  • @rebalaashishreddy9908 2 years ago +1

    Best channel for Databricks

  • @suresh.suthar.24 1 year ago +2

    Hats off to you, sir. Your explanation is next level.

  • @bhargavkumar4724 2 years ago +2

    Excellent Explanation!!!

  • @Lalamikuzinha 1 year ago +1

    The best explanation I've seen.

  • @vineethreddy.s 2 years ago +4

    Say we have deptid 111 in the emp table a million times and deptid 111 in the dept table over 500k times.
    During the shuffle, Spark would create 200 partitions. So deptid 111 of the emp table may be split across 20 partitions and deptid 111 of the dept table may be split across 10 partitions, and if the sort and merge is performed on those partitions, it would result in a partial join. How does Spark handle this internally? (See the sketch below.)
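
    A minimal PySpark sketch of the scenario in this question (the emp, dept, and deptid names come from the comment itself; the data and numbers are made up to reproduce the skew):

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("smj-skew-sketch").getOrCreate()
        spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")  # force a sort-merge join

        # Skewed inputs: deptid 111 appears ~1M times in emp and ~500k times in dept.
        emp = spark.range(1_000_000).select(
            F.when(F.col("id") < 990_000, F.lit(111)).otherwise(F.col("id") % 50).alias("deptid"),
            F.col("id").alias("emp_id"))
        dept = spark.range(500_000).select(
            F.when(F.col("id") < 499_000, F.lit(111)).otherwise(F.col("id") % 50).alias("deptid"),
            F.col("id").alias("dept_row"))

        joined = emp.join(dept, "deptid")
        joined.explain()  # expect SortMergeJoin with Exchange hashpartitioning(deptid, 200)

        # The Exchange hash-partitions BOTH sides on the join key, so every row with
        # deptid = 111 is routed to the same one of the 200 shuffle partitions on each
        # side; a single key is never scattered across partitions, so the merge cannot
        # produce a partial join. The real problem is skew: that one partition becomes
        # huge, which AQE skew handling (spark.sql.adaptive.skewJoin.enabled) can split
        # while keeping matching rows co-located.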

  • @aswaniyettapu9992 2 years ago +2

    Very good explanation

  • @prathapganesh7021 9 months ago +1

    Excellent explanation, thank you

  • @saikiran-pl4cc 2 years ago

    Thank you for the clear explanation

  • @JimRohn-u8c 7 months ago +1

    Is this the same as the Sort-Merge-Bucket (SMB) join?

  • @rahulmittal116 5 months ago +1

    Hats off

  • @srinubathina7191 1 year ago +1

    Thank You Sir

  • @prabhatgupta6415 1 year ago +1

    Sir, I have seen that there are multiple join strategies. I could find them in your playlist.

  • @bikeshtiwari6418 2 years ago

    You're awesome, Spark Guru

  • @pavankumarveesam8412 1 year ago +1

    But the third stage is not complete, right? Say there is one more filter operation on the DataFrame; it will still stay in that stage. But if the DataFrame encounters a shuffle operation like a join, there will be another stage, correct? (See the sketch below.)
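
    A rough sketch of the stage-boundary point raised above (the DataFrames and column names are invented for illustration): a filter is a narrow transformation and stays in the current stage, while a join adds an Exchange (shuffle) to the plan, and that shuffle is where Spark starts a new stage.

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("stage-boundary-sketch").getOrCreate()
        spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")  # keep the join as a sort-merge join

        df = spark.range(1_000_000).withColumn("dept_id", F.col("id") % 10)
        lookup = spark.range(10).withColumnRenamed("id", "dept_id")

        # Narrow transformation: the plan gains no Exchange, so no new stage is needed.
        filtered = df.filter(F.col("dept_id") > 3)
        filtered.explain()

        # Wide transformation: the join introduces Exchange operators (a shuffle),
        # which is where the stage boundary appears at execution time.
        joined = filtered.join(lookup, "dept_id")
        joined.explain()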

  • @vineethreddy.s 2 years ago +1

    Thanks, Helpful

  • @mohitupadhayay1439 2 years ago +2

    Is this why we use BROADCAST join? Because normal joins are expensive?

    • @rajasdataengineering7585 2 years ago +2

      Exactly, that is why we use a broadcast join: to avoid the expensive sort-merge join. (See the sketch after this thread.)

    • @mohitupadhayay1439 2 years ago

      @rajasdataengineering7585 One more question: how can we use broadcast if the small DataFrame doesn't fit in memory? Wouldn't the data spill from memory?
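
    As referenced in the reply above, a minimal sketch of a broadcast join (table names are illustrative). Spark also broadcasts automatically when the smaller side is below spark.sql.autoBroadcastJoinThreshold (10 MB by default):

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import broadcast, col

        spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

        emp = spark.range(1_000_000).withColumn("deptid", col("id") % 100)   # large fact table
        dept = spark.range(100).withColumnRenamed("id", "deptid")            # small dimension table

        # Ship the small side to every executor, avoiding the shuffle + sort that a
        # sort-merge join would need on the large side.
        joined = emp.join(broadcast(dept), "deptid")
        joined.explain()  # expect BroadcastHashJoin instead of SortMergeJoin

    On the follow-up question: if the "small" side does not actually fit in memory, broadcasting it only moves the problem, since the broadcast table is typically materialized on the driver and on every executor; in that case it is safer to leave the threshold alone and let Spark fall back to a sort-merge join.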

  • @oleg20century 11 months ago

    Hello!
    Is 1 executor not the same as 1 worker node? Maybe this worker node is a rack or a small cluster? Or are these executors actually containers (cores) on one executor (worker)?

  • @venkatasai4293 2 years ago +1

    Good explanation, Raja. A few questions: 1) Is the number of partitions determined by the number of cores in the cluster or by the input split size (for example, 128 MB for an S3 bucket)? 2) What happens if the partition size is greater than the executor size? Does it spill to disk? Does that impact performance?

    • @rajasdataengineering7585 2 years ago +2

      Thanks Venkat.
      1. The number of partitions is determined by various factors. If the input file is in a splittable format, each core starts reading the data in parallel and each core can produce one partition of 128 MB. If the input file is much bigger, each core will produce multiple partitions of 128 MB, so the number of partitions will be a multiple of the number of cores.
      2. Usually partition size does not exceed executor on-heap memory. If the DataFrame (multiple distributed partitions across the cluster) exceeds the total on-heap memory, it leads to data spill, so a few partitions will be stored on the local disk of the worker node. Spilled data hurts performance as it needs to be recalculated every time.
      Hope it helps. (See the sketch at the end of this thread.)

    • @venkatasai4293 2 years ago +1

      @rajasdataengineering7585 Thanks, Raja
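
    A small sketch of the partition and spill points from this thread (the sample data and temporary path are placeholders): for splittable files the input partition size is governed by spark.sql.files.maxPartitionBytes (128 MB by default), the shuffle for a sort-merge join produces spark.sql.shuffle.partitions partitions, and sorts that outgrow execution memory spill to the worker's local disk.

        import tempfile

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("partition-count-sketch").getOrCreate()

        # Target size of each input partition when reading splittable files (128 MB default).
        spark.conf.set("spark.sql.files.maxPartitionBytes", str(128 * 1024 * 1024))

        # Write a small sample file only so the read-back below is self-contained.
        path = tempfile.mkdtemp() + "/sales_parquet"
        spark.range(1_000_000).withColumn("deptid", F.col("id") % 100).write.parquet(path)

        df = spark.read.parquet(path)
        print(df.rdd.getNumPartitions())                       # partitions derived from the input splits
        print(spark.conf.get("spark.sql.shuffle.partitions"))  # partitions after a shuffle, 200 by default

        # When the sort for a merge join needs more execution memory than the executor
        # has available, Spark spills sorted runs to the worker's local disk; re-reading
        # those spilled runs is what slows the job down.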