Enabling cuDF using a single flag is insane! However, I just wanted to point out (especially for new pandas users) that the proper way to calculate the average price per city in pandas is with groupby. Running `df.groupby('Town/City')['price'].mean()` in plain pandas is blazing fast (a few ms), nothing compared to 19 minutes. That doesn't mean cuDF isn't useful, but don't forget that using plain pandas properly can get you a long way.
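A minimal sketch of the difference, assuming the video's housing dataset and its column names:

```python
import pandas as pd

df = pd.read_csv("housing.csv")  # assumed filename

# Slow: a Python-level loop over every unique city
means = {city: df.loc[df["Town/City"] == city, "price"].mean()
         for city in df["Town/City"].unique()}

# Fast: one vectorized groupby computing the same result
means_fast = df.groupby("Town/City")["price"].mean()
```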
😊
Hey Sentdex, can you take this video down so my manager doesn't find out that I sped up the entire codebase 200-fold with just one line, and I end up getting appreciation bonuses??
Jokes aside, this is absolutely wild. What a gamechanger. Thanks a lot as always, Kevin!
Awesome video! I encountered a similar issue where I had to process ~8 GB of data using an AWS Lambda (limited RAM and time). I used Polars (a pandas alternative written in Rust from scratch for performance) and I found it to be blazing fast. It's really, really useful, especially with non-NVIDIA devices like my Raspberry Pi and the AWS Lambda function. You should definitely check it out!
I've nearly forgotten pandas after going with polars. Pandas was great for its time.
There is also Dask, which allows deployment on clusters with several workers, similar to Spark.
@@incremental_failure Polars doesn't even offer a .info() method. Simply inferior()
@@AyahuascaDataScientist df.describe()
Wonderful! Thank you. It would be interesting to see a comparison with the Polars library as well.
Once again thank you for sharing :-) You are appreciated.
Great find!🎉
Thanks bro, will give it a test run.
Outstanding. Thank you for this information.
Thanks a lot for sharing. Super useful.
the kubota warrior is back with the heat 🗣🗣🗣
Missing your tutorials man, trying to install this on windows...
Thanks a ton!
For the read_csv operation I would be curious what is actually taking the most time on the pandas side. I suspect it's building the Python string objects, and if so, I wonder whether having PyArrow installed and setting pd.options.future.infer_string = True would make it much faster?
And in general it makes sense that strings are slow in pandas, because it falls back to looking up Python objects by reference. It's actually a much more interesting comparison for numeric or datetime data types; for strings it would be much more interesting if you had used the PyArrow string data type.
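For reference, a minimal sketch of what I mean, assuming pandas >= 2.1 with PyArrow installed (the filename is an assumption):

```python
import pandas as pd

# Opt in to Arrow-backed strings instead of Python object columns;
# must be set before the data is read
pd.options.future.infer_string = True

df = pd.read_csv("housing.csv")  # assumed filename
print(df.dtypes)  # text columns should show an Arrow-backed string dtype, not object
```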
Hello Sentdex, I'm reaching out regarding your Neural Networks from Scratch series. Any updates on that? You left off at part 9. Please do continue, it's an awesome series. Also, any update on book discounts for Black Friday? Please help.
Hi! Can you share what hardware you were operating on?
Please post videos more often, Harrison
Thanks for sharing... I would be curious about a comparison between the accelerated version of pandas and Polars.
Impressive!
3:38 it doesn't have the prices "in quotes like a string", it's a properly exported CSV that has ALL fields quoted. Your pd.read_csv is missing quoting=csv.QUOTE_ALL (or just quoting=1) and optionally quotechar='"'. The only "magic" pandas is doing is interpreting that column as quoted. If you add those options, I'm guessing cuDF will run just as well, since the ingest portion will still use the Python standard library, or at least pandas' C implementations.
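A minimal sketch of the options I mean (filename is assumed; csv.QUOTE_ALL is just the constant 1):

```python
import csv
import pandas as pd

# Tell the parser that every field in the exported file is quoted,
# so quoted numeric fields round-trip exactly as they were written
df = pd.read_csv(
    "housing.csv",          # assumed filename
    quoting=csv.QUOTE_ALL,  # equivalent to quoting=1
    quotechar='"',          # the default, spelled out for clarity
)
```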
Great video. Just one thing: instead of comparing cuDF with vanilla Pandas, wouldn’t a comparison with Modin be a more appropriate one?
I'd like to see this as well: scaling Modin on a Ray cluster/single node using a GPU.
jesus christ my life has totally changed
Polars in rust wrapped in tqdm.
10 sec compared to 19 min?!?! Holy f....!!!!!
Cool!
What if my RAM (128GB) is larger than my VRAM (32GB)? Is normal pandas still faster for data that's larger than the VRAM?
Can you make a tutorial on how to install cuDF? I saw that there are a lot of things to install first.
How about Polars?
cuDF vs Polars, maybe.
@sentdex sir, please make videos on 3D deep learning, it's really exciting to see your work on point clouds
Will it work on an Apple notebook?
What happens if your dataset doesn't fit in GPU memory?
I have tested that and it is slower than on CPU. Pretty much you use all the GPU memory and the rest goes to RAM, and then it's back and forth between the two.
This looks like it would beat out something like Dask for non-distributed large datasets. Is that the case?
Next, bro... how to manage your GPU memory: loading your dataset and training your model
What's the reasoning for not using groupby in this demo? Wouldn't that be the more natural and faster pandas method to use, instead of looping over everything?
Feels a little disingenuous to compare against poorly optimised pandas code that no one would actually write.
Does it still use memory to fit the entire dataset?
9:01 I'm running late for work, but wouldn't it be possible to vectorize this code so it's faster than both the cuDF and CPU versions of this benchmark? Curious to see how cuDF plays with vectorized versions. If I get the time I'll try some experiments and update this comment.
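For anyone who wants to try before I do, here's roughly what I mean by vectorizing; the band logic is my guess at what the video computes, and the column names are assumptions:

```python
import pandas as pd

df = pd.read_csv("housing.csv")  # assumed filename

# Broadcast each city's mean price back onto its rows in one pass
city_mean = df.groupby("Town/City")["price"].transform("mean")

# Flag prices more than 20% above/below the city mean -- no Python loop
df["above_band"] = df["price"] > 1.2 * city_mean
df["below_band"] = df["price"] < 0.8 * city_mean
```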
On a trial dataset with 10,000 rows of fake data (~500 kB in size), using groupby to find the mean of the "unique prices" was 72x faster than the version implemented here at around 6:45. I expect the gap to grow much larger with a dataset 5 GB in size. I used groupby in another part too, and that alone roughly halved the time, but it's still far from optimized. I'll probably post my findings here with a ~5 GB dataset running on Colab sometime in the next couple of weeks. After that, I'll try the cuDF version.
Very interested to see this comparison. No one writes pandas code like in this video, where the looping is both unnatural and terribly optimised.
@@EarlZMoade thanks for your comment! I made a 1.5 GB dataset of random data for benchmarking, and for the two operations (the first groupby at 6:45 and the upper/lower price bands), Sentdex's code took 4 minutes 2 seconds and 19 minutes 43 seconds respectively on my computer. An optimized version (just groupby and vectorization, nothing fancy) took 2.25 SECONDS and 13.5 seconds respectively. I'm sure cuDF would be even faster, but in this case there was a lot of performance left on the table.
How about Mojo? Mojo can actually use the GPU to accelerate calculations too; currently Mojo supports NumPy and pandas on the CPU. It would be fun to compare it with cuDF. Mojo is more like a superset of Python.
How would you start making an AI that deals with data using Python? I'm trying to learn more about this.
Please make a video on custom GPTs, Actions, and the OpenAI dev event
Can you please make a video on using cuDF in Python scripts? It's much trickier in scripts.
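In case it helps: as far as I know from the RAPIDS docs, cudf.pandas can be enabled outside a notebook in two ways, sketched below (script and file names are assumptions):

```python
# Option 1: run the unmodified script through the module loader:
#   python -m cudf.pandas my_script.py
#
# Option 2: enable it at the top of the script, before pandas is imported
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # now backed by cuDF where possible, CPU fallback otherwise

df = pd.read_csv("housing.csv")  # assumed filename
print(df.groupby("Town/City")["price"].mean())
```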
Does this new accelerator speed up groupby() operations?
Is this faster than numpy??
Does scipy work with it?
Will this work with geopandas?
Hi, how do I do this for geopandas? Is it the same?
Is this compatible with Python 3.7 at all? Last time I tried installing CuDF I remember version incompatibility stopping me.
Does cudf.pandas work with the Apple silicon MPS GPU framework, or just CUDA?
Any alternative for Apple Silicon ?
Dask maybe
Dask and swifter gave like 1000x processing speed for some batch jobs I have with Airflow, so def try those. They're also drop-in.
Hi, thank you for showing this great way to use the GPU. It seems really easy, but I ran into an error that I couldn't find a solution to anywhere:
UserWarning: cudf.pandas detected an already configured memory resource, ignoring 'CUDF_PANDAS_RMM_MODE'=managed_pool
can anyone help me?
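Not a definitive fix, but that warning suggests a memory resource was already configured before the environment variable was read. A sketch of what I'd try, setting the variable in Python before cudf.pandas is loaded:

```python
import os

# Must happen before cudf.pandas is imported; otherwise the default
# memory resource is already in place and the setting is ignored
os.environ["CUDF_PANDAS_RMM_MODE"] = "managed_pool"

import cudf.pandas
cudf.pandas.install()

import pandas as pd
```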
What about AMD GPUs?
Hey, does anyone know how to get this working in a normal Visual Studio Code Python file instead of opening it in Jupyter? Thanks
3:20
Has anyone been able to install the library with pip as he showed? I keep getting errors like - Preparing metadata (pyproject.toml): finished with status 'error' :')
Very interesting, thank you for sharing 😊 but this seems to be incompatible with macOS and a 2.9 GHz quad-core Intel Core i7 processor.
Which would make perfect sense, since you need a CUDA-enabled NVIDIA GPU for cuDF to work.
Anyone have any luck getting it installed on a local windows machine?
You cannot install it on Windows because cuDF is only supported on Linux. Instead, you can make a WSL instance and install Python and cuDF there.
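In case it helps: on WSL, I believe the documented pip route is something like `pip install cudf-cu12 --extra-index-url=https://pypi.nvidia.com` (that's the CUDA 12 wheel; check the RAPIDS install selector for the right command for your CUDA version).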
Ditch pandas and use Spark; your local data engineer will thank you
I'm waiting on AMD to enter the DS space so I can use my 7900XTX to do things LOL
like ** 10 == 0.0s
Ngl, your video looks deepfaked. I also think at this point you could probably deepfake yourself with a bit of editing and make people believe it's real.