Partition vs bucketing | Spark and Hive Interview Question

  • Published: 27 Jul 2024
  • This video is part of the Spark learning series. Spark provides different methods to optimize query performance. As part of this video, we cover the following:
    What is partitioning?
    How does partitioning help to improve performance?
    What is bucketing?
    How does bucketing help to improve performance?
    Difference between partitioning and bucketing
    How Spark's performance is impacted by dynamic partition pruning
    Here are a few links useful for you:
    Git repo: github.com/harjeet88/
    Spark interview questions: • Spark Interview Questions
    Spark performance tuning:
    If you are interested in joining our community, please join the following groups:
    Telegram: t.me/bigdata_hkr
    WhatsApp: chat.whatsapp.com/KKUmcOGNiix...
    You can drop me an email for any queries at
    aforalgo@gmail.com
    #apachespark #sparktutorial #bigdata
    #spark #hadoop #spark3
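The two techniques the video contrasts can be sketched in PySpark. A minimal sketch, assuming a DataFrame `df` with `country` and `age` columns; the output path, table name, and bucket count are hypothetical, and note that `bucketBy` only works with `saveAsTable()`, not `save()`:

```python
def write_partitioned_and_bucketed(df, path, table_name):
    """Sketch: write df once partitioned by country, once bucketed by age."""
    # Partitioning: one sub-directory per distinct value (country=IN/, country=US/, ...),
    # so queries filtering on country skip whole directories.
    df.write.mode("overwrite").partitionBy("country").parquet(path)

    # Bucketing: rows are hashed on age into a fixed number of files (8 here),
    # so the file count stays bounded even for high-cardinality columns.
    (df.write.mode("overwrite")
        .bucketBy(8, "age")
        .sortBy("age")
        .saveAsTable(table_name))
```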

Comments • 89

  • @alibinmazi452
    @alibinmazi452 3 years ago +57

    Small file problem in Hadoop?
    In my view, if we have lots of small files in the cluster, that increases the burden on the NameNode, because the NameNode stores the metadata of every file. With lots of small files, the NameNode has to keep track of each file's address, and if the master goes down, the cluster goes down with it.

    • @DataSavvy
      @DataSavvy  3 years ago +40

      That is right... In addition to this, Spark will also need to create more executor tasks... This creates unnecessary overhead and slows down your data processing.

    • @saurabhgulati2505
      @saurabhgulati2505 3 years ago +14

      Also, if these files are compressed, the executor cores will get busy decompressing them.

    • @tanmaydash803
      @tanmaydash803 1 year ago

      NameNode?

    • @-leaflet
      @-leaflet 11 months ago

      @@tanmaydash803 Otherwise called the master.
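The small-file effect described in this thread can be illustrated without a cluster. A toy pure-Python model (not Spark's actual planner): partitioning writes roughly one file per distinct key, while bucketing caps the file count at the bucket count:

```python
def simulate_output_files(keys, num_buckets):
    """Toy model of output file counts for partitioning vs bucketing."""
    distinct = len(set(keys))
    partition_files = distinct                 # one directory/file per distinct key
    bucket_files = min(num_buckets, distinct)  # empty buckets write no file
    return partition_files, bucket_files

# Partitioning on a high-cardinality key (e.g. a user id) explodes the file
# count and hence the NameNode metadata, while bucketing stays bounded:
user_ids = ["user_%d" % i for i in range(10_000)]
parts, buckets = simulate_output_files(user_ids, num_buckets=32)
print(parts, buckets)  # → 10000 32
```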

  • @cajaykiran
    @cajaykiran 2 years ago +1

    I must have watched this video at least 5 times between yesterday and today. Thank you very much.

  • @prosperakwo7563
    @prosperakwo7563 3 years ago

    Thanks for the great video, very clear explanation.

  • @ShashankGupta347
    @ShashankGupta347 2 years ago

    Crisp & clear, thanks!

  • @FaizanAli-we5wc
    @FaizanAli-we5wc 1 year ago

    You are too good, sir. Thank you so much for clearing up our concepts❤

  • @ksktest187
    @ksktest187 3 years ago

    Great effort, keep it up.

  • @sanketkhandare6430
    @sanketkhandare6430 2 years ago

    Excellent explanation. Helped a lot.

  • @shikhargupta7552
    @shikhargupta7552 1 year ago +1

    Please keep making more such videos.
    It would also be great if you could make something on cloud-related big data technologies.

    • @DataSavvy
      @DataSavvy  6 months ago

      Thanks Shikhar, I will plan to create videos on cloud. Do you need videos on any specific cloud topic?

  • @rakeshdey1702
    @rakeshdey1702 3 years ago

    This is a nice explanation, but you are considering physical partitions for Hive while using memory-level partitions for Spark to show the difference in the number of files generated.

  • @vutv5742
    @vutv5742 6 months ago +1

    Nice explanation ❤ Completed ❤

  • @saurabhgarud6690
    @saurabhgarud6690 3 years ago

    Thanks for a very helpful video. My question here is: how can we perform optimization using bucketing? Since data is shuffled among different buckets, it will not be sorted, so if I use a WHERE condition on a bucketed table, how do I avoid irrelevant bucket scans like I do with partitioning? In short, does a WHERE condition optimize a bucketed table, and if not, what other optimizations apply to bucketing?

  • @HemanthKumardigital
    @HemanthKumardigital 2 years ago

    Thank you so much, sir ☺️.

  • @ayushjain139
    @ayushjain139 3 years ago

    How can I find out if my bucketing was really utilized by the query? Is it visible from the physical plan? Also, I believe that in the case of partitioning + bucketing, both the partition and bucket filters should be in my query?

  • @sumit_ks
    @sumit_ks 2 years ago +1

    Very well explained sir.

  • @subhajitroy5850
    @subhajitroy5850 3 years ago +4

    Really appreciate @Data Savvy for the effort. I have a question:
    Can we understand the data search/retrieval process for a partitioned table (to create an analogy) the way element retrieval is done in a binary tree, and for a partitioned, bucketed table the way a search is done in a nested binary tree? I am referring to the binary tree in data structures.
    Recently I followed a mock big data interview video on your channel and liked it a lot. If possible, please upload a few more such videos. Thanks :)

    • @DataSavvy
      @DataSavvy  3 years ago +1

      Hi Subhajit... Thanks. More mock interviews are planned in the next few weeks... Excuse me, but I did not get your question :(

    • @subhajitroy5850
      @subhajitroy5850 3 years ago

      @@DataSavvy The way data is retrieved/searched in a partitioned Hive table: can we think of it as, or correlate it with, element retrieval in a binary tree (binary tree as in data structures)?
      Not sure if this is a better version :)

  • @vikramrajsahu1962
    @vikramrajsahu1962 3 years ago

    Can we increase the performance of a Hive query while fetching records, assuming the table is already partitioned?

  • @sashikiran9
    @sashikiran9 2 years ago +4

    Important point - Hive partitioning is not the same as Spark partitioning. 7:34-9:14

  • @tanushreenagar3116
    @tanushreenagar3116 1 year ago +1

    Best explanation.

  • @anurodhpatil4776
    @anurodhpatil4776 1 year ago

    Excellent.

  • @bhooshan25
    @bhooshan25 1 year ago

    Very useful.

  • @nobinstren3798
    @nobinstren3798 3 years ago +1

    Thanks man, it helps.

    • @DataSavvy
      @DataSavvy  3 years ago +1

      Thanks Nobin. Pleasure... :)

  • @r.kishorekumar1388
    @r.kishorekumar1388 2 years ago +4

    When there are a lot of small files in Hadoop, NameNode performance can be impacted because it cannot process the metadata fast enough. Hadoop is meant for handling big data, so creating too many small files may end up hurting NameNode performance. I came across this problem in my project.

    • @bharathraj4545
      @bharathraj4545 6 months ago +1

      Hi bro, I am new to big data. Can you guide me further?

    • @DataSavvy
      @DataSavvy  6 months ago

      Hi Bharath, happy to guide you. Drop me an email on aforalgo@gmail.com

  • @vamshi878
    @vamshi878 3 years ago +1

    @data savvy, I observed on my local system with multiple cores that neither partitionBy nor bucketBy performs any shuffle; there is no exchange in the plan. Is that why it produces small files in both cases? Will it perform a shuffle on a large cluster? I am just reading from a file and writing with partitionBy or bucketBy, no transformations. In this case, will there be no shuffle even at the cluster level?

    • @khanmujahid4743
      @khanmujahid4743 3 years ago

      It uses the hash value of the search item and goes to the bucket that matches that hash value.
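The reply above can be made concrete: Spark picks a row's bucket from a hash of the bucketing column modulo the number of buckets. A minimal sketch (Spark actually uses a Murmur3-based hash internally; Python's built-in `hash()` below is only a stand-in to show the mechanics):

```python
def bucket_for(value, num_buckets):
    """Map a column value to a bucket id in [0, num_buckets)."""
    # Python's % always returns a non-negative result for a positive divisor,
    # so negative hash values still land in a valid bucket.
    return hash(value) % num_buckets

# A query like WHERE age = 20 hashes the literal the same way the writer did,
# so only one of the buckets has to be scanned:
target = bucket_for(20, num_buckets=8)
assert 0 <= target < 8
assert bucket_for(20, num_buckets=8) == target  # deterministic per value
```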

  • @krunalgoswami4654
    @krunalgoswami4654 2 years ago

    I like it

  • @kumarsatyachaitanyayedida4717
    @kumarsatyachaitanyayedida4717 2 years ago

    How can we decide which column to use for partitioning and which to use for bucketing?

  • @uditmittal3816
    @uditmittal3816 2 years ago

    Thanks for the video.
    But I have one query: how do we insert data into a bucketed Hive table using Spark? I tried this, but it didn't give the correct output.

  • @jonathasrocha6480
    @jonathasrocha6480 2 years ago

    Is bucketing used when the column has high cardinality?

  • @anandraj2558
    @anandraj2558 3 years ago +1

    Nice explanation. Can you please also cover Hive join examples: map-side joins, all the other joins, and performance tuning.

    • @DataSavvy
      @DataSavvy  3 years ago

      Sure, will create videos on that.

  • @bhavaniv1721
    @bhavaniv1721 3 years ago

    Hi, are you handling Spark and Scala training classes?

  • @routhmahesh9525
    @routhmahesh9525 3 years ago

    How can we decide the number of buckets when, after partitioning, one file is 128 MB, a second is 400 MB, and a third is 200 MB? Kindly answer, thanks in advance.

  • @raviranjan217
    @raviranjan217 3 years ago +1

    The small file problem is a headache for the NameNode, since it has to manage the metadata info. Spark also needs a larger number of executors, which is again an overhead.

  • @rajeshp3323
    @rajeshp3323 3 years ago

    But what I heard is that in Spark, 1 partition = 1 block size; partitions are not created using a specific column name as in Hive.
    Then for bucketing in Spark: as you said, 1 bucket should be a minimum of the block size, so does that mean 1 bucket = 1 partition? Then what is the need for bucketing in Spark? I'm confused.

  • @dheemanjain8205
    @dheemanjain8205 6 months ago +1

    Partitioning is the same as GROUP BY, and bucketing is the same as a range.

    • @DataSavvy
      @DataSavvy  6 months ago

      Hi, it's actually different...

  • @anikethdeshpande8336
    @anikethdeshpande8336 9 months ago

    Is bucketing not usable with the save() method?
    It works fine with saveAsTable().
    I am getting this error: AnalysisException: 'save' does not support bucketBy and sortBy right now.

  • @xxxxxxxxxxa232
    @xxxxxxxxxxa232 1 year ago +1

    Partitioning and bucketing are similar to GROUP BY ... and WHERE value in a range.

  • @Apna_Banaras
    @Apna_Banaras 2 years ago

    Small file problem in Hadoop?
    It generates lots of metadata, which increases the burden on the NameNode.

  • @kaladharnaidusompalyam851
    @kaladharnaidusompalyam851 3 years ago +1

    Hi Harjeet, I came across a question in my latest interview:
    what are the packages we need when we want to implement Spark?

    • @DataSavvy
      @DataSavvy  3 years ago

      Hi... It depends on what dependencies you are using in your project... Check your sbt file.

    • @sagarbalai1122
      @sagarbalai1122 3 years ago

      If you already have a project, then check the sbt/pom file, but generally you need at least spark-core and spark-sql to start with basic ops.

  • @sambitkumardash9585
    @sambitkumardash9585 3 years ago +1

    Sir, could you please give one syntactic example comparing Hive partitioning and bucketing vs Spark partitioning and bucketing? Also, I couldn't understand the last point of your summary; could you please give some more clarity on it?

    • @DataSavvy
      @DataSavvy  3 years ago

      Let me look into that

  • @selvansenthil1
    @selvansenthil1 11 months ago

    How can we make the bucket size 128 MB, when the partition size would be 128 MB and is further divided into buckets?

  • @iketanbhalerao
    @iketanbhalerao 1 year ago

    Without partitioning, can we directly do bucketing in Spark?

  • @likithaguntha8105
    @likithaguntha8105 3 years ago

    Can we partition after bucketing?

  • @alokdaipuriya4607
    @alokdaipuriya4607 3 years ago

    Hi Harjeet... Thanks for such an informative video.
    One quick question here:
    You chose the country column for partitioning, that's OK,
    and you chose the age column for buckets. Why did you choose the age column for bucketing? Why not the name column? Can we choose either name or age, or is there some technicality behind choosing the bucketing column? If yes, please do comment.

    • @saketmulay8353
      @saketmulay8353 1 year ago

      It depends on the filter you want to apply. If you want to filter on age but you are bucketing by name, then the problem remains as it is and it won't make any sense.

  • @rajlakshmipatil4415
    @rajlakshmipatil4415 3 years ago +2

    Number of buckets in Spark = size of data / 128?
    Am I correct that, in that case, we can't specify the number of buckets in Spark?
    In which cases should we go for bucketing and in which for partitioning? Can you give some examples?

    • @DataSavvy
      @DataSavvy  3 years ago

      If you use partitioning and it creates small files, then you should consider using bucketing there...

    • @rajlakshmipatil4415
      @rajlakshmipatil4415 3 years ago +1

      @@DataSavvy Thanks for answering.

    • @kaladharnaidusompalyam851
      @kaladharnaidusompalyam851 3 years ago

      I'll tell you one thing here.
      Partitioning is done based on a column, and bucketing is done based on the rows.
      (i.e., both concepts split data into multiple pieces, but partitioning is based on a column and bucketing on rows/records.)
      Suppose we have data 1-100. We can bucket the data like 1-25 in one bucket, 25-50 in a second bucket, and 50-75 and 75-100 respectively, based on rows.
      But partitioning is based on a column.
      For example, if you have a column (population year-wise from 2010-2020), we split the data year-wise: 2010, 2011, 2012... 2020, into 10 partitions.
      If this is 100% correct, please comment, someone. Don't feel bad; if I'm wrong, I'll make it correct. Thank you.

    • @DataSavvy
      @DataSavvy  3 years ago +2

      Partitioning and bucketing are both done on a column... The only difference is how the records are grouped. I think your statement is right, but you are viewing these concepts in a more complex way.

    • @DataSavvy
      @DataSavvy  3 years ago

      Thanks Rajlakshmi :)
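The rule of thumb quoted at the top of this thread (number of buckets ≈ data size / 128 MB, so each bucket file lands near one HDFS block) can be written out as a small helper; the 10 GB figure below is hypothetical:

```python
import math

def suggested_bucket_count(total_size_bytes, target_file_bytes=128 * 1024**2):
    """Rule of thumb from the thread: data size / 128 MB, rounded up."""
    return max(1, math.ceil(total_size_bytes / target_file_bytes))

# A 10 GB table targets roughly 80 buckets of ~128 MB each:
print(suggested_bucket_count(10 * 1024**3))  # → 80
```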

  • @kaladharnaidusompalyam851
    @kaladharnaidusompalyam851 3 years ago +1

    What kinds of problems will we face when there are a lot of small files in Hadoop?
    My answer: Hadoop is meant for handling a small number of large files, i.e., Hadoop can handle big files with a low count. Hadoop won't give efficient results for lots of small files, because there is SEEK time for reading data from the hard disk to fetch a record. This increases if you use lots of small files, which increases system downtime; moreover, the metadata also grows.

    • @DataSavvy
      @DataSavvy  3 years ago +1

      That's right :). There will be a few more issues; please see the pinned message.

  • @ramchundi2816
    @ramchundi2816 3 years ago

    Thanks, Harjeet. It was a great explanation.
    Quick question for you - what will happen if we remove a partition key after loading the data (in managed and external tables)?

    • @nikhithapolanki
      @nikhithapolanki 3 years ago

      How can you remove a partition key once the table is created? If you drop and recreate the table without the partition, the data present in the table's physical location cannot be read by the table. It will give a parsing exception.

  • @ADY_SR
    @ADY_SR 8 months ago

    The volume of files would increase if we have small files.
    Volume can be a lot of small files or a few large files... both are a no-no.

  • @NN-sw4io
    @NN-sw4io 3 years ago

    Sir,
    what if we filter only by age? What happens with the partition and bucket then?

  • @sandipsawant7525
    @sandipsawant7525 3 years ago +1

    Thanks for this video.
    One question: in which kinds of cases do we need to use only bucketing, and how does the query search happen?
    Thanks again 🙏

    • @DataSavvy
      @DataSavvy  3 years ago +1

      When partitioning on a column would create small files, use bucketing without partitioning. Before doing a sort-merge join, you can also create bucketed tables to improve the performance of the join.

    • @sandipsawant7525
      @sandipsawant7525 3 years ago

      @@DataSavvy Thank you, sir, for the answer.
      If I use 4 buckets, when I run a SELECT query, will it go to only one specific bucket or search all buckets? In partitioning we have folders with values; in bucketing, how will the query know which bucket to search?

    • @AtifImamAatuif
      @AtifImamAatuif 3 years ago +1

      @@sandipsawant7525 It will use the hash value of the search item and go to the bucket that matches that hash value.

    • @sandipsawant7525
      @sandipsawant7525 3 years ago

      @@AtifImamAatuif Thanks

    • @ayushjain139
      @ayushjain139 3 years ago

      @@DataSavvy "Before doing a sort-merge join you can also create bucketed tables and improve join performance" - kindly explain how and why?

  • @mohitmehta3788
    @mohitmehta3788 3 years ago +1

    If we want to query the table for country = India and age = 20, now that we have created the new bucketed table, do we have to query the bucketed table or the initial table? A little lost here.

    • @DataSavvy
      @DataSavvy  3 years ago

      You will query the bucketed table :)

  • @gyan_chakra
    @gyan_chakra 2 years ago

    Sir, better quality is not available for this video. Please fix it.

    • @DataSavvy
      @DataSavvy  2 years ago

      Hi Bhumitra... I am working on fixing this.

  • @GreatIndia1729
    @GreatIndia1729 1 year ago

    If we have a large number of small files, then the number of I/O operations (like opening and closing files) increases. This is a performance issue.

  • @sivakrishna3413
    @sivakrishna3413 3 years ago

    I want to learn Spark and PySpark. Are you providing any training?

    • @DataSavvy
      @DataSavvy  3 years ago

      Hi Siva... I am not currently running any online training... Let me look into this prospect.