This INCREDIBLE trick will speed up your data processes.
- Published: 10 May 2024
- In this video we discuss the best ways to save data to files using Python and pandas. When you are working with large datasets, there comes a time when you need to store your data. Most people turn to CSV files because they are easy to share and universally supported. But there are much better options out there! Watch as Rob Mulla, Kaggle grandmaster, discusses some alternative ways of saving data files: pickle, parquet and feather. I run some benchmarks to show that you can save time and space, and keep the important metadata about your files in the process!
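As a quick taste, here is a minimal sketch of the formats compared in the video (parquet and feather need pyarrow installed; the dataset here is made up):

import numpy as np
import pandas as pd

# A small stand-in for the larger random dataset generated in the video.
df = pd.DataFrame({
    "id": np.arange(100_000),
    "value": np.random.rand(100_000),
    "category": pd.Categorical(np.random.choice(["a", "b", "c"], 100_000)),
})

df.to_csv("data.csv", index=False)  # universal, but slow and drops dtypes
df.to_pickle("data.pickle")         # fast, Python-only, keeps dtypes
df.to_parquet("data.parquet")       # compact, columnar, keeps dtypes
df.to_feather("data.feather")       # very fast read/write, keeps dtypes

# pickle/parquet/feather restore the categorical dtype on read;
# read_csv would give you plain object strings instead.
df_back = pd.read_parquet("data.parquet")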
Timeline
00:00 Intro
00:49 Creating our Data
02:08 CSVs
04:39 Setting dtypes for CSVs
06:15 Pickle Files
07:16 Parquet ❤️
09:07 Feather
10:31 Other Options
11:02 Benchmarking
12:19 Takeaways
12:43 Outro
Code Gist: gist.github.com/RobMulla/7384...
Follow me on twitch for live coding streams: / medallionstallion_
Other Videos:
Speed up Pandas: • Make Your Pandas Code ...
Efficient Pandas Dataframes: • Speed Up Your Pandas D...
Introduction to Pandas: • A Gentle Introduction ...
Exploratory Data Analysis Video: • Exploratory Data Analy...
Audio Data in Python: • Audio Data Processing ...
Image Data in Python: • Image Processing with ...
* YouTube: youtube.com/@robmulla?sub_con...
* Discord: / discord
* Twitch: / medallionstallion_
* Twitter: / rob_mulla
* Kaggle: www.kaggle.com/robikscube
#python #code #datascience #pandas
First post! That’s my husband he knows about data…
He knows a lot of good stuff about data 😁. He's the first non-introductory Python YouTuber I have found so far 🎉
aww this is cute
Guess he's really in a "pickle" now.
Awww now you guys need a The DataCouple channel if you both do data science! Love your content
Nice work Mr. ROB
As always, awesome video... a real eye-opener on the most efficient file formats. I have only used pickle with compression, but will now investigate feather and parquet. Thanks for putting this together for all of us.
Glad it was helpful! I use parquet all the time now and will never go back.
You are my new favorite YouTuber, Sir. I'm learning more from you than anyone else, by a country mile!
Rob, you did it again...keep'em coming, good job!
Thanks!
Very good video :). One note: pickle files can be compressed. If you compress them, they become much smaller but reading and writing become slower. Overall, parquet and feather are still much better.
Good point! There are many ways to save/compress that I probably didn't cover. Thanks for watching the video.
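For anyone curious, a minimal sketch of compressed pickles (pandas infers the codec from the file extension; file names here are made up):

df.to_pickle("data.pkl.gz")                    # gzip, inferred from the extension
df.to_pickle("data.pkl.xz", compression="xz")  # explicit codec: smallest, slowest
df_back = pd.read_pickle("data.pkl.gz")        # decompression is inferred too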
Thanks Rob, awesome information! Learning a lot from your channel. Keep it up!
Isn’t learning fun?! Thanks for watching.
Very clear and insightful explanation, thanks Rob, keep it up!
Thanks Gustavo. I’ll try my best.
I saw people mention feather on Kaggle sometimes, but had no clue what they were talking about. Finally, I got answers to many questions in my mind. Thank you!
Yes. Feather and parquet formats are awesome for when you want to quickly read and write data to disk. Glad the video helped you learn!
Excellent tutorial Rob. Subscribed!
Thanks so much for the feedback. Thanks for subscribing!
One really cool feature of .read_parquet() is that it passes through additional parameters for whichever backend you're using. For example, the filters parameter in pyarrow allows you to filter data at read time, potentially making it even faster:
df = pd.read_parquet("myfile.parquet", filters=[('col_name', '>', 100)])
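A runnable round trip of that idea (the file name, column names and cutoff are all illustrative):

import pandas as pd

df = pd.DataFrame({"col_name": range(10), "other": list("abcdefghij")})
df.to_parquet("myfile.parquet")

# Predicate pushdown: pyarrow skips row groups whose statistics
# can't match the filter, so less data is read and decoded.
subset = pd.read_parquet("myfile.parquet", filters=[("col_name", ">", 5)])
print(subset)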
Whoa. That is really cool. I didn't realize you could do that. I've used Athena, which allows you to query parquet files using standard SQL, and it's really nice.
Athena is amazing when backed with parquet files. I've used it to easily read through 600M+ records stored in those parquets.
That's the real use case for parquet. Feather doesn't have this.
Very clear, very structured, and the details are intuitive to understand!
Excellent as usual Rob...very very useful indeed
Thank you sir!
Amazing! Got one new member. Thanks, Rob! 😉
Glad you liked it. Thanks for commenting!
Exactly what I needed to know, and to the point. Thanks.
As Einstein said, 'Everything should be as simple as possible, and no simpler!'
That’s a great quote. Glad you found this helpful.
as someone moving into data science this is such a great explainer! thank you
learnt something new today. Thank you Rob for this useful & informative video.
Learn something new every day and before long you will be teaching others!
This was the first video from the channel that randomly appeared in my feed. I clicked, I watched, I liked and subscribed :D. This video planted a seed in my mind, and some others inspired me to try things out. So a few days later I had a playground environment running in Docker. I'm not a data scientist, but the tips and tricks from your videos could be useful for any developer. I used to write code to check some datasets, but with pandas and a Jupyter notebook it's way faster. Thank you for sharing your experience!
Wow, I really appreciate this feedback. Glad you found it helpful and got some code working yourself. Share with friends and keep an eye out for new videos dropping soon!
I've learned a great deal with this video. Thank you!
Thanks so much for the feedback. Glad you learned from it!
A major design objective of feather is to be able to be read by R. If you are doing pandas-type data science stuff, this is a significant advantage.
Great point. The R package called "arrow" can read in both parquet and feather files.
I really love it man, thank you. You saved a life
Thanks! Maybe not saved a life, but saved a few minutes of compute time!
Amazing.
Congrats for the video
Glad you like the video. Thanks for watching.
Great stuff! Thanks for sharing.
Glad you enjoyed it!
@@robmulla 👍
Great summary of data types. Thanks
Thanks for the feedback! Glad you found it helpful.
Very engaging and clear. Thanks!
Thanks for watching. 🙌
Awesome information! Thank you for this.
Glad you liked it!
Huge thanks for sharing 🍀
Glad you liked it! Thanks for the comment.
This is good to know. I'm going into web development now, so I usually use the JSON format for serialization... I'm still new to Python so I didn't know about parquet and feather. Thank you!
Glad you found it helpful. Share it with anyone else you think would benefit!
This is excellent, thank you man
Glad it helped!
Great video!! Small things matter the most. Thanks
Absolutely! Thanks.
really good video! thank you
Rob, you're a natural communicator (or you worked really hard at acquiring that skill) - most effective. I follow you on Twitch and I'm currently going through your YouTube content to come up to speed. Thanks for sharing your time and experience. Have you thought about aggregating your content into a book as a companion to your videos - something like "Data Analysis Using Python/Pandas - No BS, Just Good Stuff"?
Hey. Thanks for the kind words. I've never considered myself a naturally good communicator and it's a skill I'm still working on, but I appreciate your positive feedback. The book idea is great, maybe sometime in the future…
This blew my mind, duuude
Happy to hear that! Share with others so their minds can be blown too!
Really useful video - thanks.
I was just searching for some Pandas videos for some light upskilling on the weekend, so this was a great find.
Glad I could help! Check out my other videos on pandas too if you liked this one.
Hey Guy, nice job. Congratulations! Thanks for the video.
Thanks for watching Humberto.
super clear and useful! Subscribed
Awesome, thank you!
Great! Thank you for this very helpful video.
Glad it was helpful!
awesome ! thank you for this tutorial
You're very welcome! Share with a friend.
Hi Rob. I'm from Argentina, you are the best!!!
Hey this was very useful to me thank you for sharing!!
So glad you found it useful.
I looked this up, and it's a pretty cool format. I kind of guessed that it could be a column-based storage strategy when you said we can efficiently read only select columns, and after I looked it up and found it to be true, it felt very exciting.
Anyways, hats off to Google's engineers for thinking out of the box on this; the number of things we can do just by storing data column-wise rather than row-wise is huge. Of course, the trade-off is that it's very expensive to modify column-wise data, so this is more useful for static datasets that require multi-dimensional analysis.
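In pandas that column pruning is a single argument (the file and column names here are illustrative):

import pandas as pd

# Only the requested columns are read from disk; the rest of the
# file is never touched, thanks to the columnar layout.
df = pd.read_parquet("data.parquet", columns=["id", "value"])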
thanks rob, it helped a lot for a beginner like me to realize there are weaknesses in the csv format 😉
Very informative video! Subscribed :)
Glad it helped! 🙏
Lol this video changed my life :D Thank you so much.
Very good stuff. The essence of information.
Glad you liked it!
@@robmulla I saw a few more videos, insta sub. Thank you. Glad to find you.
Great video and content.
Man, I thought this video was clickbait, but it was awesome. Thank you!
Thanks a lot, just brought down my database backup size to MBs.
Glad it helped. That’s a huge improvement!
I really hope you make a video about Data Cleaning in Python soon. Thanks a lot for all your awesome tutorials
I'll try my best. Thanks for the feedback!
Was very useful, thanks much
Thanks! Glad you learned something new.
super awesome tricks, thank you
Glad you like them! Thanks for watching.
It's useful for me, thanks a lot!
Happy to hear that!
Thank u very much for sharing such useful skills! 😉Subscribed!
Anytime! Glad you liked it.
Parquet really saved me )
Around one year of data, each day roughly 2GB in CSV format. Parquet is both compact and fast.
But I have to use filtering and load only the necessary columns "on demand".
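Something like this sketch, assuming one parquet file per day collected in a directory (the paths and column names are made up):

import pandas as pd

# pyarrow can treat a directory of parquet files as one dataset.
df = pd.read_parquet(
    "year_of_data/",
    columns=["timestamp", "price"],  # load only the columns you need
    filters=[("timestamp", ">=", pd.Timestamp("2024-01-01"))],
)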
This content is really awesome
Appreciate that!
Fantastic video
Fantastic comment. 😎
Good tips on speeding up large file read and write
Glad you liked it! Thanks for the feedback.
Informative video! I've heard about feather and pickle, but never used them. I think I should give feather and parquet a try!
I'd like to get some materials on machine learning and data science that are not introductory - something for mid-level and senior engineers :)
Glad you found it useful. I’ll try to make some more ML videos in the near future.
Great video. Thanks
You are welcome!
interesting to learn about the existence of parquet and feather files. nothing beats csv for portability and ease of use
Yea, for small/medium files CSV gets the job done.
Useful. Thanks.
Thanks, great comparison. One thing about parquet - it has some limitations on which characters column names can contain. I spent quite some time renaming column names a year ago - perhaps that has been fixed by now.
Good point! I've noticed this too. Definitely a limitation that makes it sometimes unusable. Thanks for watching!
Great video - It would have been good to at least mention the downsides to pickle and also the built in compatibility with zip files. Haven't come across feather before, will try it out
Great point! I did forget to mention that pandas will auto-unzip. I still like parquet the best.
@@robmulla - Agreed, parquet has some serious benefits
Did you know parquet also supports a compression option? Use it with gzip to see your parquet file get even smaller (and you only need to specify it on write).
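A minimal sketch of that (the file name is made up; nothing is needed on the read side because the codec is recorded in the file's metadata):

df.to_parquet("data.parquet.gzip", compression="gzip")  # smaller file, slower write
df_back = pd.read_parquet("data.parquet.gzip")          # codec detected automatically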
This video greatly helped me. I didn't know there were so many ways to dump a DataFrame. I then ran a further test and found the compression option plays a big role:
df.to_pickle(FILE_NAME, compression='xz') -> 288M
df.to_pickle(FILE_NAME, compression='bz2') -> 322M
df.to_pickle(FILE_NAME, compression='gzip') -> 346M
df.to_pickle(FILE_NAME, compression='zip') -> 348M
df.to_pickle(FILE_NAME, compression='infer') -> 679M # default compression
df.to_parquet(FILE_NAME, compression='brotli') -> 334M
df.to_parquet(FILE_NAME, compression='gzip') -> 355M
df.to_parquet(FILE_NAME, compression='snappy') -> 423M # default compression
df.to_feather(FILE_NAME) -> 500M
Nice findings! Thanks for sharing. Funny that compressing parquet still works. I didn't know that.
@@robmulla Actually if you check the docs parquet files are snappy compressed by default. You have to explicitly say `compression=None` to not compress it.
Snappy is the default because it adds very little time to read/write with modest compression and low CPU usage while still maintaining the very nice columnar properties (as you showed in the video). It is also the default for Spark.
Other compressions like gzip get it smaller, but at a much more significant cost to speed. I'm not sure this is still the case, but in the past they also broke some of the nice properties because they compress the entire object.
Nice video. I'm going to rewrite my storage to use parquet.
You should! Parquet is awesome.
On the first pass, when you timeit the csv writing, you time both writing the csv and generating the dataset. So your results are likely biased, since for the other formats you only time the writing. (Sure, it doesn't change the final message, just wanted to point it out.)
Also, with timeit you can use the -o flag to output the result to a variable, which can help you, for example, plot the times.
Good point about timing the dataframe generation. It should be negligible but fair to note. Also great tip on using -o. I didn't know about that! From the docs it returns a TimeitResult object you can work with directly. ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit Still a handy tip. Thanks!
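For reference, a sketch of the -o flag in an IPython/Jupyter cell (the file name is illustrative):

# %timeit -o returns a TimeitResult object instead of just printing.
res = %timeit -o df.to_parquet("data.parquet")
print(res.average, res.stdev)  # mean and std dev in seconds, easy to plot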
Very good and informative video
So nice of you. Thanks for the feedback.
Hey Rob, this was a really nice video! Can you please make a tutorial where you try to write this data to a database? Maybe sqlite or postgres? And explain bottlenecks? (Optional: with or without using an ORM).
I was actually working on just this type of video and even looking at stuff like duckdb where you can write SQL on parquet files.
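A sketch of the duckdb side, for anyone curious (recent duckdb versions; the file and column names are made up):

import duckdb

# SQL directly over a parquet file; .df() returns a pandas DataFrame.
out = duckdb.sql(
    "SELECT col_name, COUNT(*) AS n FROM 'myfile.parquet' GROUP BY col_name"
).df()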
thanks very helpful
Glad it helped
amazing info
Thanks!
Great videos! Thank you for posting them. I wonder if feather is faster for reading a >2GB .tsv file than reading the csv in chunks.
Thanks for watching Ondina! I think it would depend on the data types within the >2GB file. The only difference between tsv and csv is the separator between values: a comma ',' vs a tab '\t'. Hope that helps.
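A sketch of the chunked approach if the file doesn't fit in memory (process() here is a hypothetical stand-in for whatever per-chunk work you need):

import pandas as pd

# Stream the TSV a million rows at a time instead of loading it all.
for chunk in pd.read_csv("big_file.tsv", sep="\t", chunksize=1_000_000):
    process(chunk)  # hypothetical per-chunk aggregation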
Thank you very much 😂, I learned something totally new.
Happy to hear it.
Very nice explanation. Can you compare Dask and PySpark?
Great Video!!!!!!!!!!!
Glad you enjoyed it
Another awesome video. It has become my favorite channel. My only regret is that I found it too late.
Small correction: it should be 0.3s and 0.08s for parquet files. You mistakenly wrote 0.3ms and 0.08ms while converting.
Thanks.
Appreciate that you are finding my videos helpful. Good catch on finding that typo!
i was going to comment that, but decided to check first; at least it got caught. Good video.
Thanks!
Thanks for watching!
Great comparison, thanks. Not sure if the feather/pickle files I'm creating from a Julia script use any compression - none that I'm specifying out of the box - but the pickle files always end up about half the size of the feather ones.
(haven't compared those two to a parquet-made file)
Experiment with adding Brotli compression when creating the file. The file size shrinks considerably and reads get a lot faster.
Example:
To save the file:
from pyarrow import csv, parquet
parse_options = csv.ParseOptions(delimiter=delimiter)
data_arrow = csv.read_csv(temp_file, parse_options=parse_options, read_options=csv.ReadOptions(autogenerate_column_names=autogenerate_column_names, encoding=encoding))
parquet.write_table(data_arrow, parquet_file + '.brotli', compression='BROTLI')
To read the file: pd.read_parquet(file, engine='pyarrow')
Oh. Very cool I need to check that out.
Stumbled onto this awesome video and absolutely loved it. Just out of curiosity - what tool are you using to theme your Jupyter notebook, especially the dark theme?
Glad you enjoyed the video. I have a different video that covers my jupyter setup including theme: ruclips.net/video/5pf0_bpNbkw/видео.html
great stuff
Thank you sir!
I'm really interested in a comparison against the HDF format. My guess is that it's going to be the fastest to read, but it probably takes up more space.
I’m not sure. But I think feather files are pretty fast.
@@robmulla Hey Rob, thanks for the reply. I had the impression that HDF maps the data as it sits in RAM, so there won't be much conversion once it's read into memory, but I could be wrong. It would also be interesting to investigate how feather works. I'll do some benchmarking on my M1 Mac and maybe get back to you.
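If anyone wants to add HDF5 to the benchmark, a minimal sketch (needs the PyTables package, installed as `tables`; file name and key are made up):

import pandas as pd

df.to_hdf("data.h5", key="df", mode="w")    # write the frame under a key
df_back = pd.read_hdf("data.h5", key="df")  # read it back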
12:28 "When your data set gets very large." - Me working with 800GB json files: :)
Good video regardless, i might give them a test sometime.
Haha. It’s all relative. When your data can’t fit in local ram you need to start using things like spark.
Cool. Would be nice to compare with storing data to an sql base (Postgres for example).
Great suggestion! This video only covers storing to flat files, but comparison of different relational databases is a great idea for a future video.
Just wow!!!!
Thanks!
great comparison. What about HDF5 format? Is it in anyway better?
Fantastic video as always. What are the disadvantages of JSON? I use JSON because it can easily be passed to the front end.
Great question. I don't use JSON much. It isn't common for tabular/relational data and is more for unstructured web-based stuff, I believe. It's probably pretty slow to read/write for large datasets, I'm guessing.
Thank you
Anytime!
Thank you for the video! I've basically never heard of parquet or feather and don't really know what type of files those are. I assume they're not an easy format to share with stakeholders, for example. Is there a way to link those types of files to a database, or perhaps import them into a data visualization tool (such as PowerBI or Tableau)?
Thanks for watching Jonathan. Glad you found the video useful. You are correct: these file formats are more common for storage within systems that read the data via code, not for sharing with stakeholders. CSV and Excel still dominate for that type of thing.
keep uploading videos please!!
Thanks Sbg! I'm planning on it!
When we create a parquet dataset, can we dummy-code (one-hot encode) the columns?
Nice video! Thank you. What about the HDF5 format? Thanks!
Thanks! I haven't used HDF5 much but I'd be interested to hear how it compares.
Hi Rob! I love your channel. It is very helpful. I would like to ask you a question: is HDF5 any better than all the options you showed in the video?
Good question. I didn't cover it because I thought it's an older, lesser used format.
@@robmulla so the answer is no?
@@leonjbr The answer is - I don't know but probably not. 😁
@@robmulla ok thanks.
I don't know about "better" but HDF5 is a very popular data format in science.
Thanks for the great benchmark. In an R/Python hybrid environment I sometimes use `csv.gz` or `tsv.gz` to address the size issue with CSV while retaining the ability to quickly pipe the files through line-based processors. It would be interesting to see how gzipped flat files perform. I do agree that parquet/feather is a better way to go for many reasons; they are superior especially from the data engineering point of view.
I do the same with gzipped CSV files. Good idea about making a comparison. I’ll add it to the list of potential future videos.
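For reference, pandas handles the gzipped flavor transparently (file names are illustrative):

df.to_csv("data.csv.gz", index=False)  # codec inferred from the .gz extension
df_back = pd.read_csv("data.csv.gz")   # decompressed on read
# Still a plain gzip stream, so shell tools like `zcat data.csv.gz | head` work.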
Nice video. How do the performance and storage size of parquet and feather compare to HDF/PyTables?
Great question. I have no idea! I need to learn more about how they compare.
Parquet is honestly a game changer...
It's the best data format: it drastically cuts storage size and drastically speeds up loading the data back later.
I agree. Parquet is great!
Hey! Thanks a lot for the video!
I'm having some issues with the .parquet format in Jupyter Lab. It's shifting the index back by 1 hour.
I have a datetime index that starts each day at 10am and ends at 4pm. When I read the data as .csv, or as .parquet using Colab, it works fine as it should.
But when I pd.read_parquet it in Jupyter Lab, the original index changes to start at 9am and end at 3pm (basically -1 hour).
Do you have any idea why this is happening?
Thanks!
Thanks for watching. I don’t know what the problem could be. Could it be a difference in the time zone set in your jupyterlab instance of python?
@@robmulla Oh that could be it since I'm from Brazil and colab is probably executed in US. Didn't know that was a thing and could change the index of my data. I'll look into it. Thanks for the help!
edit: That's not the case (I think).
Both indexes (Colab and Jupyter) have tz='America/Sao_Paulo':
.index[0] in colab >Timestamp('2021-01-04 10:00:00-0200', tz='America/Sao_Paulo')
.index[0] in jupyter >Timestamp('2021-01-04 09:00:00-0300', tz='America/Sao_Paulo')
Somehow Jupyter is still showing one hour less, even with the same timezone. I think it's better to convert the index to a string in Colab, then read it in Jupyter and convert it back into datetime.
Hello! Very interesting! Thank you! Can you please tell me, is there any limit on the number of columns a DataFrame can have when saved to parquet? Excel allows around 16-17k columns. Thank you for the answer!
Very nice bro
Thanks. Hope you learned something!
In addition to everything, parquet is the native file format for Spark and fully supports Spark's lazy evaluation (Spark will only ever read the columns and rows that are needed for the desired output). If you ever prep really big data for Spark, parquet is the way to go.
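A sketch of what that looks like in PySpark (the path and column names are made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Lazy plan: Spark only reads `col_name` and the matching row groups.
df = spark.read.parquet("myfile.parquet")
df.where(df.col_name > 100).select("col_name").show()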
That’s a great point. Same with polars!
@@robmulla Need to have a closer look at polars then! 🙂
really awesome video and always well explained. One naive question about a problem I am facing: can you share with the community how you would approach writing a, let's say, 50 MB structured file (CSV) into a remote SQL database in a fast manner? I am asking since my approach takes too much time and asking Stack Overflow was not a big help. Having a video on this would be awesome :)
Thanks for watching and asking the question. Working with databases in Python can be its own art and depends on the database type. 50MB isn't too big, but it also depends on indexing and existing data in the database. You can use pandas to_sql, but if that's too slow you may need to try sqlalchemy.
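A sketch of the to_sql route (the connection string and table name are placeholders):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host:5432/dbname")
df = pd.read_csv("data.csv")

# chunksize batches the inserts; method="multi" packs many rows
# into each INSERT statement, which is usually much faster.
df.to_sql("my_table", engine, if_exists="append", index=False,
          chunksize=10_000, method="multi")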
SQL*Loader for Oracle. Disable constraints, indexes and triggers before, then enable them again later. It works even for millions of rows / 10s of GBs of data. Or external tables, if you have server access.
I would use the native bulk data importers for the database in question. They are optimized to import GBs, even terabytes, of data. Just remember to disable the constraints, indexes and triggers before the import and then re-enable them. I know SQL Server, Sybase and Oracle all have bulk data loaders; I imagine DB2 and the other big databases out there have similar functionality. You can script the disabling and re-enabling of the constraints, indexes, etc. You can also set up the database to trigger the import when a file is dropped in a certain location, or run it on a schedule.
first time watching, insta subbed, great stuff. could you do an add-on to this video and go over some fast ways to read/write to a database, SQL/mongo?
Happy to have you as a new sub. Yes, I really want to make a video about databases and how to store/read/write with python to them.