what is Apache Parquet file | Lec-7
- Published: 2 May 2023
- In this video I have talked about reading Parquet files in Spark. If you want to optimize your files and processing in Spark, you should have a solid understanding of the Parquet file format. Please ask your doubts in the comment section.
Directly connect with me on:- topmate.io/manish_kumar25
Download Parquet Data:- github.com/databricks/Spark-T...
Download parquet tools in your local to run all the below commands.
Parquet tools can be downloaded using pip command.
Run the below command in cmd or terminal
pip install parquet-tools
Run the below commands inside Python
import pyarrow as pa
import pyarrow.parquet as pq
parquet_file = pq.ParquetFile(r'C:\Users\nikita\Downloads\Spark-The-Definitive-Guide-master\data\flight-data\parquet\2010-summary.parquet\part-r-00000-1a9822ba-b8fb-4d8e-844a-ea30d0801b9e.gz.parquet')
parquet_file.metadata
parquet_file.metadata.row_group(0)
parquet_file.metadata.row_group(0).column(0)
parquet_file.metadata.row_group(0).column(0).statistics
Run the below command in cmd/terminal
parquet-tools show C:\Users\manish\Downloads\Spark-The-Definitive-Guide-master\data\flight-data\parquet\2010-summary.parquet\part-r-00000-1a9822ba-b8fb-4d8e-844a-ea30d0801b9e.gz.parquet
parquet-tools inspect (path of your file location as above)
parquet.apache.org/docs/file-...
For more queries reach out to me on my below social media handle.
Follow me on LinkedIn:- / manish-kumar-373b86176
Follow Me On Instagram:- / competitive_gyan1
Follow me on Facebook:- / manish12340
My Second Channel -- / @competitivegyan1
Interview series Playlist:- • Interview Questions an...
My Gear:-
Rode Mic:-- amzn.to/3RekC7a
Boya M1 Mic-- amzn.to/3uW0nnn
Wireless Mic:-- amzn.to/3TqLRhE
Tripod1 -- amzn.to/4avjyF4
Tripod2:-- amzn.to/46Y3QPu
camera1:-- amzn.to/3GIQlsE
camera2:-- amzn.to/46X190P
Pentab (Medium size):-- amzn.to/3RgMszQ (Recommended)
Pentab (Small size):-- amzn.to/3RpmIS0
Mobile:-- amzn.to/47Y8oa4 (You should definitely not buy this one)
Laptop -- amzn.to/3Ns5Okj
Mouse+keyboard combo -- amzn.to/3Ro6GYl
21 inch Monitor-- amzn.to/3TvCE7E
27 inch Monitor-- amzn.to/47QzXlA
iPad Pencil:-- amzn.to/4aiJxiG
iPad 9th Generation:-- amzn.to/470I11X
Boom Arm/Swing Arm:-- amzn.to/48eH2we
My PC Components:-
intel i7 Processor:-- amzn.to/47Svdfe
G.Skill RAM:-- amzn.to/47VFffI
Samsung SSD:-- amzn.to/3uVSE8W
WD blue HDD:-- amzn.to/47Y91QY
RTX 3060Ti Graphic card:- amzn.to/3tdLDjn
Gigabyte Motherboard:-- amzn.to/3RFUTGl
O11 Dynamic Cabinet:-- amzn.to/4avkgSK
Liquid cooler:-- amzn.to/472S8mS
Antec Prizm FAN:-- amzn.to/48ey4Pj
I said 500 GB in the video by mistake. It is supposed to be 500MB, and when dividing 500/128, we will get 4 partitions.
I just saw this video and boom, you mentioned the same in your comment section
Hi Manish, thanks for such a smooth explanation of not just Parquet but also everything related to it, kudos to your efforts :D
The best explanation! You are a wonderful teacher
This is so far the best video; I got in-depth knowledge of Parquet from it and it was very easy to understand.
Thank you so much for sharing your knowledge!
Could you please share the video having optimization of parquet?
Literally the best and most detailed video on parquet file format on yt. Thank you!
I love how in-depth you are going, please keep doing it! We are loving it.
Never saw such a detailed video for parquet file, these videos are really valuable. Really appreciate the efforts put in making these videos
Thank you so much for putting in so much time for making this video
This is the best explanation for parquet file format available online. Thanks Manish.
highly informative and detailed video on parquet. Thanks a lot Manish!
Predicate pushdown - Rows filtering, Projection Pruning/Pushdown - Column filtering. Thanks for the session bro!!
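The two ideas in this comment can be sketched in plain Python — the row groups, their stats, and the age predicate below are invented illustration data, not real Parquet internals:

```python
# Toy model of what a Parquet reader does with footer metadata.
row_groups = [
    {"stats": {"age": (1, 17)},  "rows": [{"age": 5,  "name": "a"}]},
    {"stats": {"age": (18, 60)}, "rows": [{"age": 25, "name": "b"}]},
    {"stats": {"age": (61, 99)}, "rows": [{"age": 70, "name": "c"}]},
]

def read(groups, predicate_col, lo, hi, columns):
    out = []
    for g in groups:
        gmin, gmax = g["stats"][predicate_col]
        if gmax < lo or gmin > hi:      # predicate pushdown: skip whole group
            continue
        for row in g["rows"]:
            if lo <= row[predicate_col] <= hi:
                # projection pruning: return only the requested columns
                out.append({c: row[c] for c in columns})
    return out

print(read(row_groups, "age", 18, 200, ["name"]))  # → [{'name': 'b'}, {'name': 'c'}]
```

The first row group is never opened because its max (17) is below the predicate's lower bound — that is the row filtering; returning only `name` is the column filtering.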
Wow... so much clarity... I loved it... I didn't even realize when the video ended!!! Excellent explanation.
Very detailed awesome video!! Thanks
The best explanation!!! Your videos are giving me motivation and inspiration to keep learning spark!
Thanks for this Manish! Great Work!
Hats off, Manish. Please keep doing the good work :)
Someone asked me in an interview about the internals of the Parquet file format and I couldn't answer it. Then I found your video, and now I can explain it easily. Best video on the Parquet file format.
Hi Manish,
the parquet file detail class was a classic example of how to present something. If the Avro and ORC file formats could be covered in classes like this, it would be really helpful. Nowadays interviewers are asking about those as well.
Excellent video. Thank you :)
really awesome explanation
Thanks! for this masterpiece
Bhai, I really enjoyed it, thanks bro
Hello Manish, excellent explanation, hats off to you. When will you release the optimization video on Parquet? Eagerly waiting for it.
what a gem you are
Directly connect with me on:- topmate.io/manish_kumar25
Very nice.
Please make video on ORC and Avro as well
Hi Manish, great explanation of Parquet. I'm using Parquet but didn't know about these features that make things fast. How were you able to learn all this? Please suggest any documentation/resources to get a deep understanding like this. You made my day. Thank you 😊
Thanks Manish 🙂
Thanks Manish
Please make a video on avro file format in detail because I faced challenges when interviewers asked about avro file format questions
Informative, thanks
Thanks Sir
Manish, if we write Parquet with the data already sorted in ascending or descending order, then retrieval would be faster, right? Because each row group's metadata would then have min and max values within a narrow range. Please correct me if I am wrong.
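The intuition in this comment can be sketched in plain Python — all the numbers below are made-up illustration data. Sorting before writing gives each row group a narrow min/max range, so a range query can skip more groups:

```python
# Sketch: why sorting before writing tightens row-group min/max stats.
# 4 row groups of 4 values each; data and query range are made up.
def row_group_stats(values, group_size=4):
    groups = [values[i:i + group_size] for i in range(0, len(values), group_size)]
    return [(min(g), max(g)) for g in groups]

def groups_to_scan(stats, lo, hi):
    # a group must be read unless its [min, max] range misses [lo, hi] entirely
    return sum(1 for gmin, gmax in stats if not (gmax < lo or gmin > hi))

sorted_vals   = list(range(16))            # 0..15 in order
shuffled_vals = [0, 8, 4, 12, 1, 9, 5, 13, 2, 10, 6, 14, 3, 11, 7, 15]

print(groups_to_scan(row_group_stats(sorted_vals), 5, 6))    # → 1 group touched
print(groups_to_scan(row_group_stats(shuffled_vals), 5, 6))  # → all 4 overlap
```

With shuffled data every group's min/max range spans nearly the whole domain, so nothing can be skipped; with sorted data only one group overlaps the query.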
Nice explanation, brother
can you make one video on ORC also
❤
One doubt: I did not understand the logical partitioning completely; it resembles the file size we can set in the Spark config. Please help me understand it.
Manish bhai, what about nested JSON?
Please explain converting nested JSON to a data frame.
How will one know in which 1L (100k) records the data we need lies? Don't we still need to scan the complete file?
Kindly explain.
Unable to install parquet-tools. Can you help or point me in the right direction?
Hi Manish sir, thanks for bringing such valuable info. I have a question: how can we handle schema evaluation in Parquet?
I didn't get you
I think he wants to ask about schema evolution
@@manish_kumar_1 I think @pankajsolunke3714 is asking how to handle schema evolution in parquet if we can?
😇
Hello Manish, i really like your videos, thanks for the efforts you put in. Have a question, can you please tell a good tutorial/course that we can go through to get really good at pyspark, if not a single resource then what are the various resources that we can go through to get good at pyspark coding.
You don't need a course. Still, if you want one, you can buy the Udemy course by Prashant Pandey titled "pyspark for Beginner". The rest depends on how many questions you solve. Solve more problems rather than chasing multiple courses. Practice is the key to success, not the number of courses you have done.
sparkbyexamples is the RESOURCE we need for practice!! :)
Where can we practice spark from?
@@manish_kumar_1 where can we practice spark from??
@@royalkumar7658 From LeetCode. Follow the playlist from the start and you will find out where and how.
Bhai, why are there 4 row groups for 500 GB of data, when Manish bhai said a row group is 128 MB? Can you explain why, bhai?
Hi Manish,
In the last video, you said you would explain nested JSON in further videos. Where can I find that?
I have not done yet. I will try to make one soon.
Bhaiya, please make that one. The industry mostly runs on it.
Hi bro, does each row group store 128 MB or 128 GB of data? You said 128 MB,
but for 500 GB you said 4 row groups.
Are you talking about 500 MB or 500 GB?
How does a null get written to disk?
Hi Manish, thank you for the information. Please can you elaborate whats the default value of 128 MB and when we have 500 GB data how does that convert to 4 row groups? Thank you
500 MB, not GB. 500 divided by 128 gives 4 (rounded up).
4 blocks of data will be created. The default block size in HDFS is 128 MB, and several cloud service providers use the same block size. So if you have 140 MB of data, one partition will be 128 MB and the next partition will have just 12 MB of data.
@@manish_kumar_1 thank you so much!
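The split arithmetic from this thread can be sketched as a toy helper — `partitions` is a made-up name and the sizes are in MB, not a real Spark API:

```python
BLOCK_MB = 128  # default HDFS block size mentioned above

def partitions(total_mb, block_mb=BLOCK_MB):
    """Split a file size into full blocks plus one remainder block."""
    full, rest = divmod(total_mb, block_mb)
    return [block_mb] * full + ([rest] if rest else [])

print(len(partitions(500)), partitions(500))  # → 4 [128, 128, 128, 116]
print(partitions(140))                        # → [128, 12]
```

So 500 MB becomes three full 128 MB blocks plus a 116 MB remainder, i.e. 4 partitions, matching the correction pinned above.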
Sir, the parquet file is not downloading from GitHub.
Cmd-parquet-tool issue
Where is the nested JSON video? You said you would make a separate video on it in the previous lecture, "how to read json file in PySpark".
Lec 23
Hi Manish, what is projection pruning? Unable to find it on Google. Or is it Partition Pruning*? Can you please explain/clarify?
Projection pushdown is where column pruning happens. So projection pushdown and projection pruning are the same thing.
@@manish_kumar_1 thanks for clarifying!
Sir, please make a short video on how to download and install parquet-tools.
Sure
Example of Modi ji for finding age >18 was highlight of the video
😂😂
Your content is good. Why don't you make videos in English?
I don't know English 😒. Just joking, I may record a session in the future, but not for now.
ruclips.net/video/zM2OAAvJItQ/видео.html&ab_channel=knowledgeEpicenter (We are making videos for those people for whom no one is making videos.)
Getting a stage failure error.
Hey @manish_kumar_1, I was able to use the modes (append, overwrite, etc.) using this command:
df.write.option("header", first_row_is_header) \
.option("sep", delimiter) \
.mode("Overwrite") \
.csv(file_location)
All other ways of writing return an error on Databricks if the file exists, even when we're trying to append the data! :| Unsure why this is happening...! :\
Same here. Maybe it's due to the Community Edition. In a production environment it does work.
@@manish_kumar_1 oh..okay! Still weird, though!!! I'm yet to try databricks in production..
Can anybody help me please? I can't read the parquet file using the command prompt.
There is no issue. Just read it directly in Databricks. Do watch the video carefully once.
@manish_kumar_1 I did it in Databricks, but it's not working in the command prompt.
21:25 500 GB or 500 MB?
500 MB
import pyarrow as pa
import pyarrow.parquet as pq
parquet_file = pq.ParquetFile(r'C:\Users\DELL\Desktop\part-r-00000-1a9822ba-b8fb-4d8e-844a-ea30d0801b9e.gz.parquet')
parquet_file.metadata
parquet_file.metadata.row_group(0)
parquet_file.metadata.row_group(0).column(0)
parquet_file.metadata.row_group(0).column(0).statistics
Not able to see any output with this file. Not sure why.
Are you not getting any error either?
Thanks sir