Speed Up Data Processing with Apache Parquet in Python
- Published: Sep 4, 2024
Comments • 18
It hurt my eyes when I saw the calculator even though a python console exists. For a future video it would be interesting to include a comparison with the pickle, feather and jay formats.
The difference in memory taken by the two dataframes is because of the datatypes. CSV has no type information, so most columns get loaded as strings, which are much larger than numeric datatypes.
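The string-vs-numeric gap the comment describes is easy to check. A minimal sketch comparing the in-memory size of the same values stored as `int64` versus as Python strings (the `object` dtype you end up with when type inference misses):

```python
import pandas as pd

# The same 100,000 values stored as int64 vs. as Python strings.
# CSV carries no type information, so an inference miss (or a mixed
# column) leaves you with the much heavier object dtype.
n = 100_000
as_int = pd.Series(range(n), dtype="int64")
as_str = pd.Series([str(i) for i in range(n)], dtype="object")

int_bytes = as_int.memory_usage(deep=True)
str_bytes = as_str.memory_usage(deep=True)

print(f"int64 column:  {int_bytes:,} bytes")
print(f"object column: {str_bytes:,} bytes")
# The object column is several times larger: each entry is a pointer
# plus a full Python str object, not a packed 8-byte integer.
```

Parquet sidesteps this entirely because the dtype is stored in the file itself.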
Had never heard of Parquet. Thank you. It looks very useful.
Interesting, but I am not convinced. If I got it correctly, when selecting columns the time went down by a factor of 3 for both methods (4->1.3s and 0.24->0.08s). So parquet is better anyway, but whether it is specifically better for column-wise access still needs to be demonstrated.
Like the other commenter, I would also be interested in a broader comparison with other formats.
Great channel, keep up the good work.
You are a genius! Fantastic video! Thanks!
Why not compare sizes of files on a disk? Are they different?
I think pandas tries to infer data types from CSV and often defaults to string. This takes much more space and CPU. Parquet has data types built into the file, so pandas does not need to infer anything. What would be more interesting is, when reading the CSV, to specify the data types to make it a more "even" comparison.
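The "even comparison" the comment suggests is the `dtype` parameter of `pd.read_csv`. A minimal sketch with a hypothetical two-column CSV:

```python
import io

import pandas as pd

# Hypothetical CSV payload for illustration.
csv_data = "user_id,score\n1,0.5\n2,0.75\n3,0.25\n"

# Let pandas infer the types...
inferred = pd.read_csv(io.StringIO(csv_data))
# ...vs. telling it exactly what each column is, as one would for a
# fairer benchmark against Parquet (which ships its schema in-file).
explicit = pd.read_csv(io.StringIO(csv_data),
                       dtype={"user_id": "int32", "score": "float32"})

print(inferred.dtypes.to_dict())   # inference lands on the default widths
print(explicit.dtypes.to_dict())   # the dtypes we asked for
```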
Nice tutorial! Very introductory!
Usually go for the feather format. Never understood the difference - just that for me and the data I'm handling (few columns), feather seems to be quicker.
I have a related question: Since parquet files are "column-oriented", do you think they would be a good way to store database backups?
Example scenario: Let's say you want to store a database backup, assuming that the data in the database is in a stable state; it contains a large number of product records; maybe their IDs, descriptions, how many purchases for a product, the product prices, etc. Would it be a good idea to store a backup of this database using a parquet file since the backups would be faster to load in case of the data becoming unstable via a transaction in the future? You could rollback the transactions too; however, what if too many of them fail, and all of them need to be rolled back?
Parquet isn't a generic file format. It IS a table, so you don't "store backups" in a Parquet file. I guess you could back up each table independently, but nearly every real DB has much more efficient and powerful native backup infrastructure.
Parquet however is where a lot of transactional data ends up for analytics. Columnar storage is more suited to large analytic workloads. Row stores are more suited for OLTP workloads. You would never want to use Parquet for things like “deduct $7.83 from customer 1234’s checking account”.
@KingOfAllJackals That is exactly what I thought of possibly using it for; I could use it to back up tables in the database. You did interpret that correctly. I would NOT edit the contents of the parquet backups.
Tnx Capt.
nice
Awesome!
ok Boss
I am Junior data scientist From Pakistan