Interesting. I’ve heard this, but then I also thought it was debunked. I think the fact that the pandas core developers want to remove inplace gives good reason to try to avoid using it.
@@robmulla I guess it's more "functional style" to do it like they want but I recently had this problem with the memory when creating copies and I solved it by using 'inplace' (Python 3.7 and Pandas 1.3.5 if it matters)
Found your channel a few days ago and man, you have some epic content. The noob mistakes here are the exact way most tutorials teach you... just wondering why the heck the non-noob ways are not taught, since they are easier and shorter and the syntax makes more sense. Thank you for this video.
I can't believe I watched this whole video and only 2 of them were things I didn't know about! Thank you for sharing!
At 6:23 (#14) you're returning the dataframe, but you're also modifying it in place. Having a return there gives the impression that the original dataframe isn't modified, especially if you also assign it to itself later. It ties back to #5.
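One hedged way to make the return honest is to copy inside the function, so the caller's frame is never touched (the function and column names here are made up for illustration):

```python
import pandas as pd

def clean(df):
    df = df.copy()  # work on a copy, so returning it is not misleading
    df["team"] = df["team"].str.upper()
    return df

raw = pd.DataFrame({"team": ["green"]})
out = clean(raw)
```

After this, `raw` is unchanged and `out` carries the modification, which matches what the return signature implies.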
Rob, as always, fantastic video. I have to admit, I get caught on some of those mistakes, so it is great to have you point them out and make suggestions on how to correct them. Thanks for sharing. Much appreciated.
Hey, Rob! Super video this one. I myself am Sr. DS working each day intensively with pandas, I will implement many of the tips you show! Thanks a million :)
Hey Rob! You got me on that one right off the bat! I write a file to csv and when I load it back in, I get an 'unnamed' column and I wonder why....then I have to drop the column. 🤐Unnecessary work! Thanks a heap!
That's good to hear that you learned something new only a few seconds into the video :D - if you enjoyed it please share it on social or with any friends who might learn from it.
Oh god. I clicked on this video just to confirm that this was one more overly exaggerated, self-confident dude trying to teach newbies with 2 weeks of experience. After watching it: this is god damn life changing. As an engineer focusing on fluid dynamics and floater response, I use pandas on a daily basis. Out of 25, I didn’t know approximately 20. Every single person who has any plan to use pandas must watch this. Awesome!
Thank you! The .diff method is a lifesaver when computing velocities. The advice on not using inplace is excellent; I got into various troubles because of it, but I thought that's what the "experienced guys" do.
Thanks for watching. inplace is very tricky. Diff method is really powerful, and there are parameters you can use within it depending on your use case.
The part about needing to avoid spaces is so true! But wait a second: every time I run into spaces instead of underscores, it's in other people's data, so I think what we actually need is how to deal with the spaces once they're there (which is a painful journey).
Maybe rename all the columns to versions without a space, i.e. replace all the spaces with an underscore. df.rename can take a dictionary or even a mapper function, so this is easy to do. Using a dictionary is preferable, as you can just reverse-map it if you want to use the columns with spaces in them in the end.
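A sketch of both rename styles described above, with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({"first name": ["Ana"], "win rate": [0.5]})

# build a dictionary so the mapping can be reversed later if needed
mapping = {c: c.replace(" ", "_") for c in df.columns}
reverse = {v: k for k, v in mapping.items()}

df = df.rename(columns=mapping)
# a mapper function works too: df.rename(columns=lambda c: c.replace(" ", "_"))
```

The dictionary route costs one extra line but keeps `reverse` around for restoring the original names at the end.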
8:17: this loop can maybe be replaced without iteration: create another column holding each row's neighbouring value (extract the column's values as a list, slice off one end, append a 0, and insert the result as a new row_after column), then run the percent calculation. The end.
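That loop-free idea can be sketched with shift(), or even shorter with the built-in pct_change() (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({"price": [100.0, 110.0, 99.0]})

# shift(1) holds the previous row's value, no loop required
df["prev"] = df["price"].shift(1)
df["pct_manual"] = (df["price"] - df["prev"]) / df["prev"]

# or simply use the built-in
df["pct"] = df["price"].pct_change()
```

Both columns agree; the built-in just saves the intermediate column.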
Great video! Wanted to add on #7, maybe someone would find this helpful: in case you need to apply some function to several values in a row, one of the fastest solutions is numpy.vectorize. Something like:

def divide(num, denom):
    if denom == 0:
        return 0
    return num / denom

So instead of doing

df["div"] = df.apply(lambda row: divide(row["value1"], row["value2"]), axis=1)

you go with

df["div"] = np.vectorize(divide)(df["value1"], df["value2"])
Oi! There were several of those I didn't know. I wouldn't have thought I was a noob, but I guess we all have a bit of that in us. Thanks for the video!
Great video. Lots of operations and procedures that are helpful for effective coding. Would be really helpful to have a cheat sheet linked for easy reference.
.query is one of my favorites. I use it all the time, but it is still not as flexible as the normal filtering way. For example, you cannot use the .isna() method or IN for comparison. Though you can now use columns with spaces in them by enclosing them in backticks (` `).
Totally, query is great. But did you know that in recent versions isna() does work in query? Same with IN: I use it all the time against lists, using @ to reference the list.
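A quick sketch of both of these; passing engine="python" on the isna() call is a belt-and-braces assumption here, since the default engine's support for method calls varies by pandas version:

```python
import pandas as pd

df = pd.DataFrame({"team": ["green", "blue", None]})
teams = ["green", "blue"]

members = df.query("team in @teams")                # IN against a Python list via @
missing = df.query("team.isna()", engine="python")  # isna() inside a query string
```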
Great video as always. I will start exploring the query method more. Rob, can you please make a video on feature engineering, especially on how to create new features using aggregation etc.? Thank you
3:24 vectorized version: what if I have a function that removes TRI from a name, e.g. name: 1976: Hasely Crawford (TRI), and adds a new column? I tried something like df['new_column'] = new_column_func(df['name']). Actual use case with a test file as below:

sample.csv
size,age,team,win,date,prob,file
small,31,green,False,2022-04-09,0.3394,green.txt
medium,39,blue,True,2022-12-13,0.0501,blue.txt

Objective: create a new column file_extn and save the file as sample_extn.csv.

Code:

import pathlib
df_csv = pd.read_csv('sample_data/test_csv.csv')

# Function for new column
def new_column_func(row):
    print(row)
    path = pathlib.PureWindowsPath(row)
    return path.suffix

df_csv['file_extn'] = new_column_func(df_csv['file'])
df_csv.to_csv('sample_data/testScan2_csv.csv', index=False)

I am getting an error: TypeError: expected str, bytes or os.PathLike object, not Series. What do you suggest? Thanks in advance
Seems like a pretty specific case, but string operations (including regex) can be vectorized. For example, df['name'].str.replace('TRI', '') would remove TRI from any string in that column. Hope that helps!
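For the file-extension question above, the whole thing vectorizes with the .str accessor, so there is no need to pass the Series into a plain function (a sketch with made-up data):

```python
import pandas as pd

df = pd.DataFrame({"file": ["green.txt", "blue.csv"]})

# .str methods operate element-wise on the whole column;
# the regex captures a dot followed by the final extension
df["file_extn"] = df["file"].str.extract(r"(\.[^.]+)$", expand=False)
```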
Thanks for the video!! A small comment about number nine, creating multiple intermediate dataframes: I understand that this can be costly in terms of memory, but I also think it can be nice for debugging and understanding during the development phase. Moreover, reusing the same name 'df' over and over can be prone to errors if you have different operations in different cells and you are 'playing', skipping some of them to see the effect, because you don't know which 'df' is actually being taken as input.
Good point! It really depends on what you're doing and the time it takes to develop sometimes is more important than the code itself. However, once you are done debugging then changing it to using chaining methods is typically preferred.
Did you ever try the np.vectorize function to apply transformations over a df column? That one is among my favorites. Amazing video btw, subscribed!
I've got to admit that I regularly make 65% of these newbie "mistakes". That's why your tips on how to optimize my coding structure are especially helpful! Thanks a lot for your input!
10:25 Wrapping chains in parentheses. Something else I hadn't considered. Continuations using "\" look ugly. What would be nice is if the debugger would point to the method that failed instead of the beginning of the chain.
Yes, I usually use the \ when working fast. But I also like using the black autoformatter which will automatically change it to use (). You can even use the extension lab_black in jupyter which will do it for each cell you run.
What a brilliant video! My question is on #18: how do you do a 3m/3m annualised percentage change? Can't seem to find any literature on this anywhere! MANY THANKS
Thanks for the feedback. I'm not sure about your specific question, but I think it might be possible if you use the periods and/or freq parameters in the pct_change method. freq is specific to time series data and you can give it things like M for month. Check out the docs here: pandas.pydata.org/docs/reference/api/pandas.DataFrame.pct_change.html
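A rough sketch with a made-up monthly series; the compounding-by-four annualisation convention below is an assumption on my part, not something from the video or docs:

```python
import pandas as pd

s = pd.Series([100.0, 102.0, 104.0, 107.0, 110.0, 112.0])  # hypothetical monthly values

# change versus the value 3 periods (months) earlier
q_change = s.pct_change(periods=3)

# compounding four 3-month periods gives a rough annualised rate
annualised = (1 + q_change) ** 4 - 1
```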
Concerning #24, "categorical": does parquet also support this datatype? So, if I mark a column as categorical, save the dataframe to parquet, and read it back into a dataframe, will the column still be categorical or will it be a string?
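On the question above: pyarrow-backed parquet stores pandas dtype metadata alongside the data, so a categorical column generally survives the round trip, though that is worth verifying against your pyarrow and pandas versions. The space saving itself is easy to demonstrate:

```python
import pandas as pd

s = pd.Series(["green", "blue", "green"] * 1000)
cat = s.astype("category")

# categories are stored once, plus small integer codes per row
saved = s.memory_usage(deep=True) - cat.memory_usage(deep=True)
```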
I need to implement the chaining methods and using functions into what I do, much easier to use and read. Great video as always.
Totally. Just those two things alone are huge! Glad you enjoyed the video.
00:18 #1. Writing to CSV with an unnecessary index
00:53 #2. Using column names that include spaces
01:25 #3. Filter datasets like a PRO with the query method
01:44 #4. Query strings with the @ symbol to easily reach variables
02:07 #5. "inplace" could be removed in future versions; better to explicitly overwrite with the modifications
02:35 #6. Prefer vectorization over iteration
03:01 #7. Vectorized methods are preferable to the apply method
03:30 #8. The df.copy() method
04:08 #9. Chaining methods is better than creating many intermediate dataframes
04:28 #10. Properly set column dtypes
05:01 #11. Using booleans instead of strings
05:25 #12. The pandas plot method instead of importing matplotlib
05:45 #13. pandas str.upper() instead of apply, etc.
06:10 #14. Use a data pipeline once instead of repeating it many times
06:41 #15. Learn the proper way of renaming columns
06:59 #16. Learn the proper way of grouping values
07:31 #17. The proper way of complex grouping of values
08:01 #18. Percent change or difference can now be implemented with a built-in function
08:25 #19. Save time and space with large datasets using the pickle, parquet, and feather formats
08:58 #20. Conditional formatting in pandas (like in Microsoft Excel)
09:22 #21. Use suffixes while merging TWO dataframes
09:48 #22. Check that merging succeeded with validation
10:13 #23. Wrapping expressions so they are readable
10:33 #24. Categorical datatypes use less space
10:55 #25. Duplicated columns after concatenating, with a code snippet
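Tip #1 from the list can be sketched in a few lines; the filename here is made up:

```python
import pandas as pd

df = pd.DataFrame({"team": ["green", "blue"], "win": [False, True]})

# index=False keeps the throwaway integer index out of the file,
# so no "Unnamed: 0" column shows up when it is read back
df.to_csv("scores.csv", index=False)
df2 = pd.read_csv("scores.csv")
```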
Thanks for making this!
@@robmulla i wish i commented better as English is not my native language, Thank You for bringing us Valuable Tutorials that saves us our time and energy! I wish i helped and learned from you more
egg bro
thanks, I like no 4
This needs to be pinned
Please keep doing this. No additional jargon, crisp, straight to the point explanations are what are required. No body needs a 10 hour tutorial. Thank you for this.
I'll try my best! I do like trying to cram a ton of information into a short format, but these videos take a while to create. I totally copied the format from mmcoding (check out the channel if you haven't already)
Matt Harrison's "Effective Pandas: Patterns for Data Manipulation" is one of the best resources I've read on idiomatic pandas.
I really need to get myself a copy! He knows his stuff for sure.
He has a great video (series?) on effective pandas also!
ty i will look into this book
The pandas query function does not outperform the loc method. In fact, it is sometimes much slower when your query/data is so big. We industry users will utilize the loc method for quick EDA. Query might be useful when you have a scheduled cron
Yea. Query isn’t for speed of processing but speed of writing the code.
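For reference, the two styles produce the same result; only the ergonomics (and sometimes the speed) differ. A sketch with hypothetical columns:

```python
import pandas as pd

df = pd.DataFrame({"age": [31, 39, 25], "team": ["green", "blue", "green"]})

# the same filter two ways: a boolean mask with .loc, and .query
by_loc = df.loc[(df["age"] > 30) & (df["team"] == "green")]
by_query = df.query("age > 30 and team == 'green'")
```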
I have been working with pandas for 2 years now and I can strongly affirm that I have committed like 70% of those bad practices. Appreciate your video a lot!
Thanks for commenting. Honestly I still make many of them to this day.
Rob, thank you for all the time and energy you have put in for us. Would appreciate an updated video on "Exploratory Data Analysis" may be expanding on your year old one. Thank you again!
This may be one of my first times commenting on YouTube after years of usage. This video was INCREDIBLY USEFUL! There's a lot that my previous team members did in scripts that is sometimes complicated to maintain, or it's hard to create new ones following the same logic. This covers exactly what they used and the best options for rewriting it to make it more understandable.
Thank you so much for this godly information.
You're very welcome! I really appreciate the positive feedback. I’ll try to keep making helpful videos like this. Share with your friends in the meantime!
I thought I was pretty good in Pandas, but you gave me so many new things to improve. HUGE thank you!
Glad I could help! I'm constantly learning better ways to do things in pandas myself.
I was thinking that I was pretty bad, but surprisingly I usually only make 2 mistakes from the video (which is a cool chance to improve). I just love such videos because not only they help to improve your skills, but also to be realistic about your expectations and ambitions. Thanks for the video, Rob!
This is awesome, I’ve been wanting to know what are the better ways to write my code and why. Please continue to make these videos.
Wow! Thanks so much Emily. Really appreciate the feedback and super thanks!
One of the best videos I've seen on Pandas! So glad someone prominent enough is advocating for method chaining and pandas methods!
The query method in particular is relatively unknown. In conjunction with not using snake case, this leads to beginners writing very inefficient code, since they can't use dot syntax.
I am just at an intermediate level, so I can relate to many of these mistakes. It goes as deep as university, however: they do not teach clean, efficient code at all!
Glad you enjoyed it! I confess I don't use chaining nearly as much as I should.
Wow dude! You are single handedly responsible for my data science growth. PLEASE keep making more of these videos I really appreciate it.
Wow! I love hearing feedback like this. I'll keep making videos if you all keep watching! :D
I had no experience with Pandas before joining a team where I need to work with it a lot. Have been learning as I go and it feels like the perfect time to see this video. I have enough time under my belt to have made or inherited code with many of these mistakes. With that context, I absorbed so much from what you shared. Thank you for helping me improve. I’m excited to refactor and apply what I learned!
1:28 Before I discovered your videos, I'd never considered using the query method. The examples I've previously seen online made it look like a me-too add-on for seasoned SQL users. Using conditionals to mask off rows seemed just as easy and more pythonic. Also, at work, I typically filter with a script when I pull down the data, so by the time I get the data into pandas, I just need to tweak. But, you've shown me the light. Thanks!
I totally understand where you are coming from. It's important to keep in mind that query can be slower, but for quick filtering it can be a really quick and clean way to filter data. It really depends on what I'm doing. Glad I showed you something new though!
I can't believe how good this video is. I love your no-nonsense delivery; I don't have time at work to watch a 4-hour "intro" video. Keep it up!
I'm currently working on my first major pandas project and I reckon that I may have done around 15/25 of these 'mistakes'. Looks like I have some optimisation to do over the coming days!
We all have to start somewhere. I didn't learn many of these until I had been using pandas for years.
oh wow the quality and clarity is worth subscribing! thank you !
I didn't know about suffixes. Amazing!
Thanks Ken, glad you were able to learn something new! Love your videos.
I started to watch your videos recently, and from now on I'm doing the chaining and putting each function on "one row" to make the code cleaner. Also the query method, so powerful and simple; I used to replicate the dataframe with the searched column and value just to filter my df. You are boosting my studies!
Thanks for that!
Dear Rob,
I'm a total beginner in Python and Pandas. From what I understand, the warning at 3:30 is not about making a copy of sliced data, but rather about not using the .loc method and using "direct assignment" for columns (or whatever it's called). I could be wrong, but this is what I've gathered from reading the documentation and encountering a similar warning in my code.
Thanks for your valuable content. It has been a great help
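The distinction the comment is drawing can be sketched like this: a single .loc call assigns on the original frame, while the chained form may act on a copy (made-up data):

```python
import pandas as pd

df = pd.DataFrame({"score": [1, 2]})

# chained assignment like df[df["score"] > 1]["score"] = 0 may act on a copy
# and trigger SettingWithCopyWarning; one .loc call assigns on the original
df.loc[df["score"] > 1, "score"] = 0
```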
Thanks, great tips! I've been using pandas for years, and I've only recently started using some of these (particularly query, and didn't know about the @ operator)
Glad it was helpful! The @ operator is really useful. You can also do stuff like min() or apply operations between columns within the query.
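A small sketch of what that reply describes, with made-up column names; passing engine="python" on the method-call line is a cautious assumption, since support for method calls under the default engine varies by pandas version:

```python
import pandas as pd

df = pd.DataFrame({"prob": [0.34, 0.05, 0.91], "cutoff": [0.5, 0.5, 0.5]})

threshold = 0.3
high = df.query("prob > @threshold")    # @ reaches a Python variable
vs_col = df.query("prob > cutoff")      # comparisons between columns
low = df.query("prob > prob.min()", engine="python")  # method calls on a column
```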
This video rocked me. I've been using python for a few months and watching this video made me bust out my laptop so I could try all of these items out. Thank you for this.
So glad you found it helpful. Share with a friend!
Found lots of favorite annoyances and learned a few new tricks! I'll add a shout-out to the ".pipe()" method to allow for wrapping all your transforms in a single statement when a single .method can't cover the required transform. An added bonus of "pipe()" - since it's using user defined functions to do the transforms, you can add decorators to automatically print out metadata on the resulting transform steps to get a quick insight into potential bugs.
Oh. Great one. I forgot to add pipe and assign in this video but wish I did.
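A sketch of the .pipe() pattern the comment describes, with made-up step functions:

```python
import pandas as pd

def drop_small(df, min_prob):
    # .pipe passes the frame as the first argument to the function
    return df[df["prob"] >= min_prob]

def add_flag(df):
    return df.assign(likely=df["prob"] > 0.5)

df = pd.DataFrame({"prob": [0.1, 0.6, 0.9]})
result = df.pipe(drop_small, min_prob=0.5).pipe(add_flag)
```

Because each step is a plain function, decorating them (e.g. to log the frame's shape) is straightforward, which is the debugging bonus mentioned above.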
4:25 I'll add a pet peeve of mine: using chaining, but not placing each method call on a new line.
One of the greatest benefits of method chaining is easier tracking of changes, since everything moves linearly (rightwards or downwards) with linebreaks and parentheses. Having multiple method calls on some lines but not others breaks this one-directional thought process and makes it much harder to skim the code.
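One call per line, wrapped in parentheses, might look like this (hypothetical columns):

```python
import pandas as pd

df = pd.DataFrame({"team": ["a", "b", "a"], "score": [1, 2, 3]})

# each step reads top-to-bottom, one transformation per line
result = (
    df
    .query("score > 1")
    .groupby("team")["score"]
    .sum()
    .reset_index()
)
```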
He goes through this at 10:23.
@@Mats-Hansen and does it himself earlier 😉. This is just a cheeky comment.
Haha. Thanks for putting me in my place. I was leading up to the later point? At least I can pretend that’s my excuse 😝
@@robmulla happens. Actually, I got pissed about this the other day when I tried "Black Formatter", because that only puts methods on new lines, not dot-notation attributes. E.g., calling df.T or df.columns would not result in a new line.
Utterly annoying for my little OCD brain.
I used the pandas lib for more than 2 years, but today I learned something new! Thank you, man!
Glad you learned something new! Share with anyone else you think might appreciate it!
I've had little to no formal training. These tips are amazing and concise. Thank you so much.
9:13 Another pet peeve, though this one is more important than the last one: do not use backslashes. Ever. Well, not _never_: use them when writing a `with` statement with more than two context managers. But otherwise, don't. I'll quote the `Black` (the formatter) documentation:
Backslashes and multiline strings are one of the two places in the Python grammar that break significant indentation. You never need backslashes, they are used to force the grammar to accept breaks that would otherwise be parse errors. That makes them confusing to look at and brittle to modify. This is why Black always gets rid of them.
Good point. The backslashes are an old habit I’ve been trying to stop using. We are all learning constantly!
Awesome stuff. I've been using pandas for over 4 years, but it never occurred to me to start using the query method instead of loc (despite finding it tiresome to keep repeating "df" all over the place when using loc).
I also appreciate the quick format. You see YouTubers taking too long to say nothing at all, so congrats on actually going through 25 tips in 10 minutes. You got yourself a sub!
Learned more about Pandas in this video than a whole many videos worth hours combined. Seriously, thank you.
*Introduction:*
This video summarizes 25 common mistakes made by beginners learning pandas in Python.
*Data Cleaning and Manipulation:*
*Section 1 (**00:00**)* : Avoid unnecessary elements in CSV files by excluding the index or setting an index column when reading.
*Section 2 (**00:52**)* : Use clear and consistent column names. Replace spaces with underscores for readability and dot syntax access.
*Section 9 (**05:01**)* : Represent True/False conditions with boolean values for clarity and efficiency. Avoid using text strings ("yes", "no").
*Section 11 (**06:29**)* : Employ *_fillna_* for flexible missing value imputation (e.g., filling with mean, specific value).
*Efficient Data Transformations and Calculations:*
*Section 4 (**01:50**)* : Leverage `@` symbol for variables in queries for cleaner syntax.
*Section 5 (**03:15**)* : Prioritize vectorized functions over `.apply` for efficient calculations. Use `.apply` judiciously when vectorization is not feasible.
*Section 6 (**03:44**)* : Avoid unnecessary intermediate DataFrames. Chain transformations instead to modify the same DataFrame for cleaner code and memory efficiency.
*Section 10 (**05:31**)* : Utilize built-in methods:
* `df.plot` for quick data visualizations.
* `.str` for efficient column-wise string manipulations.
* Create reusable functions for common transformations.
*Section 12 (**06:54**)* :
* Use `rename` dictionary for clear and efficient column renaming.
* Leverage `groupby` for flexible group-wise aggregations.
* Utilize built-in methods like `pct_change` and `diff` for calculations.
*Data Storage and Handling:*
*Section 8 (**04:37**)* : Ensure proper data type assignment (e.g., datetime) for accurate operations and avoid errors.
*Section 13 (**08:15**)* : Consider alternative file formats like parquet, feather, or pickle for large datasets. These offer better compression and performance compared to CSV.
*Section 15 (**09:11**)* : Utilize `style` attribute for rich DataFrame formatting within pandas.
*Advanced Techniques and Best Practices:*
*Section 3 (**01:21**)* : Utilize the `.query` method for advanced filtering with concise and readable syntax.
*Section 7 (**03:44**)* : Understand DataFrame slicing and copying. Treat slices as read-only to avoid unintended modifications. Use `.copy()` for truly independent DataFrames.
*Section 14 (**08:15**)* : Choose the file format based on size, use case, and compatibility. Consider parquet for queryability and compression, feather for efficient data exchange, and pickle for flexibility.
*Section 16 (**10:27**)* :
* Break down chained method expressions for better readability.
* Employ categorical data types for efficient storage and operations.
* Prevent and identify duplicate columns using `df.columns.duplicated()`.
I hope this combined and formatted transcript proves helpful!
Regarding the 'inplace' comment at 02:07 there's a very valid and very useful reason to prefer that and it's memory usage.
`df = df.reset_index()` (or anything similar) creates an entire copy of the DataFrame before replacing the original, and for extremely big data that is a problem: it may exceed the physical memory available and cause the OS to kill the script.
Interesting. I’ve heard this but then also thought it was debunked. I think the fact that the pandas core developers want to remove inplace gives good reason to try and avoid using it.
@@robmulla I guess it's more "functional style" to do it like they want but I recently had this problem with the memory when creating copies and I solved it by using 'inplace' (Python 3.7 and Pandas 1.3.5 if it matters)
@@SamusUy good to know!
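For readers following the inplace discussion above, a small sketch of the two styles side by side (the explicit reassignment is the one the video recommends; the inplace form is shown only as a comment):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, index=[10, 20, 30])

# Explicit reassignment: the style recommended in the video
df = df.reset_index(drop=True)

# The inplace equivalent, which pandas devs have discussed deprecating:
# df.reset_index(drop=True, inplace=True)

print(df.index.tolist())
```

Whether inplace actually saves memory depends on the operation and pandas version, as the thread above notes.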
This video is amazing, I am using pandas for a long time now and still learned so many new good practices thank you
found your channels few days ago and man you have some epic content . The noob mistakes here are the exact way most tutorials teach you..just wondering why the hell the non noob ways are not taught as they are easier and shorter and the syntax makes more sense... thank you for this video
Glad you like them! I’m trying to continue to make more stuff like this so keep watching!
This video made me realize i have still a long road ahead in Pandas. Thanks! Just subscribed ;D
Thanks for the sub! We all start somewhere, but you'll pick it up in no time.
I can't believe I watched this whole video and only 2 of them were things I didn't know about! Thank you for sharing!
At 6:23 (#14) you're returning the dataframe, but you're also modifying it in place. Having a return there gives the impression that the original dataframe isn't modified, especially if you also assign it to itself later.
It ties back to #5.
Rob, as always, fantastic video. I have to admit, i get caught on some of those mistakes so it is great to have you point out and make suggestions on how to correct them. Thanks for sharing. Much appreciated.
I fall into these a lot too! We can all get better, glad you found the video helpful.
Hey, Rob! Super video this one. I myself am Sr. DS working each day intensively with pandas, I will implement many of the tips you show! Thanks a million :)
Awesome to hear! I'm still learning new tricks with pandas every day.
Dude I've worked with pandas for 7 years and learned some new tricks, thanks a lot!
Great to hear! You've been working with it longer than I have. Please share my channel with any friends you think might also learn from it.
Awesome video! I work with Pandas for +3 years and learned a lot here! Thanks
Happy to hear it. Tell your friends!
Hey Rob! You got me on that one right off the bat! I write a file to csv and when I load it back in, I get an 'unnamed' column and I wonder why....then I have to drop the column. 🤐Unnecessary work! Thanks a heap!
That's good to hear that you learned something new only a few seconds into the video :D - if you enjoyed it please share it on social or with any friends who might learn from it.
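The 'Unnamed: 0' problem from the comment above can be sketched in a few lines (an in-memory buffer stands in for a real file here):

```python
import pandas as pd
from io import StringIO

df = pd.DataFrame({"x": [1, 2]})

# Without index=False, to_csv writes the index as an extra column,
# which comes back as "Unnamed: 0" on the next read
buf = StringIO()
df.to_csv(buf, index=False)

buf.seek(0)
df2 = pd.read_csv(buf)
print(df2.columns.tolist())
```

Alternatively, `pd.read_csv(path, index_col=0)` absorbs an already-written index column instead of dropping it afterwards.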
Really enjoyed how fast this content came. I felt like it was a great speed to keep me engaged. I usually find these types of videos boring.
Excellent points! Learned new stuff that a lot of tutorials don't explicitly teach.
Glad it was helpful! Thanks for watching and please share with others.
Nice video! I have been using pandas for years and still run into these issues :)
Thanks! Glad you enjoyed the video. I really enjoy your videos too.
Some great tips here. I usually chain with \ and I didn't know a query method exists!!
Guess you learn something new all the time!
Glad you learned something new! Cheers.
Oh god. I clicked on this video just to confirm that this is one more overly exaggerated self-confident dude trying to teach newbies of 2 weeks experience.
After watching this, this is god damn life changing. As an engineer focusing on fluid dynamics and floater response, I use pandas on a daily basis. Out of the 25, I didn’t know approximately 20. Every single person who has any plan to use pandas must watch this.
Awesome!
Thank you for creating such an amazing video on pandas. It has been really helpful for me as a pandas newbie. Learnt a lot! 🎉
Love it!
Releasing a notebook showing all these tips would be a great benefit to the community.
The `.style` trick at 9:18 is amazing.
If this video gets 100k views I’ll share the notebook cringe 😬!
@@robmulla It currently has 241k views 😉
I feel personally attacked. Thanks so much for releasing this. I knew my code was bad, but not THIS bad.
Haha. With coding we all are learning and getting better every day. Me included. Thanks for watching!
These are fantastic refactoring suggestions.
Thank you! The .diff method is a lifesaver when computing velocities. The advice on not using inplace is excellent i got into various troubles because of it but i thought that's what the "experienced guys" do.
Thanks for watching. inplace is very tricky. Diff method is really powerful, and there are parameters you can use within it depending on your use case.
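As a small illustration of the .diff idea for velocity-style calculations mentioned above (timestamps and positions here are invented):

```python
import pandas as pd

df = pd.DataFrame(
    {"t": [0.0, 1.0, 2.0, 3.0], "position": [0.0, 2.0, 6.0, 12.0]}
)

# Per-step velocity: change in position over change in time
df["velocity"] = df["position"].diff() / df["t"].diff()

print(df["velocity"].tolist())
```

The first row is NaN because there is no previous sample to difference against; `diff(periods=n)` compares against n rows back instead of one.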
I really think this should be written up in a medium blog article. Would be awesome to refer to.
That’s a good idea. I really want to make blogs for all my videos but I don’t have the time. Maybe someday
I'm new to Pandas and all tips from this video are gold for me, thank you a lot!
Glad you learned something new. Welcome to the world of pandas!
+1000. I’m brand new to Pandas and still trying to grok the idiom. This video is GOLD.
Great overview. I also found that ChatGPT is most useful in explaining existing code rather than writing it. Same with writing.
Yes, but chatGPT can also be very confident when it gives you bad code or code that doesn't work so don't trust it blindly.
@@robmulla Chat GPT so arrogant lol
The part about avoiding spaces is so true! But wait a second: every time I run into spaces instead of underscores, it's in data from other people, so I think what we actually need is how to deal with spaces that are already there (which is a painful journey).
Maybe rename all the columns with versions without a space. Like, you replace all the spaces with an underscore. df.rename can take dictionaries or even a mapper function so this is easy to do. Using a dictionary is preferable as you can just reverse map it, if you want to use the columns with spaces in them in the end.
Good point. In most cases it can be done with a one-line list comprehension!
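A sketch of the rename-with-a-dictionary approach from the comment above, including the reverse mapping (column names are invented):

```python
import pandas as pd

df = pd.DataFrame({"First Name": ["Ann"], "Last Name": ["Lee"]})

# Build a mapping from the spaced names to underscored versions
mapping = {col: col.replace(" ", "_") for col in df.columns}
df = df.rename(columns=mapping)

# Keep the reverse mapping in case the original names are needed at the end
reverse = {new: old for old, new in mapping.items()}

print(df.columns.tolist())
```

`df.rename(columns=lambda c: c.replace(" ", "_"))` does the same in one step if the reverse mapping isn't needed.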
Oh man, I am making so many of these mistakes. Honestly, this is a great checklist to improve my clean coding.
Wow, very useful - a true "tour de force" for better Pandas code. THX for this !
Glad it was helpful! Please consider sharing it with anyone else you think would benefit from watching.
8:17: maybe this loop can be replaced by creating another column that holds the value of row i-1, i.e. a shifted copy: take the column's values from index 1 onwards, append a 0, insert that as the shifted column, then do the percent calculation directly. The end.
Great video! Wanted to add on #7, maybe someone will find this helpful:
in case you need to apply some function to several values in a row, one of the fastest solutions is numpy.vectorize.
Something like:
def divide(num, denom):
    if denom == 0:
        return 0
    else:
        return num / denom
So instead of doing
df["div"] = df.apply(lambda row: divide(row["value1"], row["value2"]), axis=1)
you go with
df["div"] = np.vectorize(divide)(df["value1"], df["value2"])
Great tip! np.vectorize can be really handy. I think your example could be vectorized without having to use it though.
@@robmulla yeah) just couldn't come up with anything else))
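One fully vectorized route for the safe division above, for anyone curious (column names follow the snippet in the comment; replacing the zero denominator with NaN first is just one way to guard the division):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value1": [10, 5, 7], "value2": [2, 0, 7]})

# Swap zero denominators for NaN, divide, then fill the NaNs with 0
denom = df["value2"].replace(0, np.nan)
df["div"] = (df["value1"] / denom).fillna(0)

print(df["div"].tolist())
```

This avoids the per-row Python function call entirely, which is where apply and np.vectorize spend most of their time.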
This is more useful than many whole Python courses. Thanks!!
Oi! There were several of those I didn't know. I wouldn't have thought I was a noob, but I guess we all have a bit of that in us. Thanks for the video!
Glad you learned something new. I find I’m always learning something new with python and data science. That’s why I love it so much.
Extremely underrated channel Extremely helpful
Thanks Nikhhilil!
Learned tons with this. Short and succinct. New subscriber.
Thanks for subscribing!
I'm so guilty of number 8! Thank you for this!
I’ve made every one of these mistakes at some point so I know how you feel. Thanks for watching!
Oh man, that guide is pro! Thanks, gonna apply all of that when refactoring my project!
Glad it helped! Tell a friend!
Great video. Thank you for being so direct and giving us valuable tips ☺
Glad you liked it! Thanks for giving feedback. Share the video with anyone else you think might also like it.
Great video. Lots of operations and procedures that are helpful for effective coding. Would be really helpful to have a cheat sheet linked for easy reference.
.query is one of my favorites. I use it all the time, but it is still not as flexible as the normal filtering way. For example, you cannot use the .isna() method or `in` for comparison.
Though you can now use columns with spaces in them by enclosing them in backticks (` `).
Totally, query is great. But did you know that in recent versions isna() does work in query? Same with `in`: I use it all the time against lists, using @ to reference the list.
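A quick sketch of both points from the reply above. The `@` reference works across pandas versions; for the `.isna()` call the python engine is requested explicitly here, since my understanding is that method calls inside query aren't supported by the numexpr engine:

```python
import pandas as pd

df = pd.DataFrame({"team": ["red", "blue", None], "age": [25, 40, 31]})

teams = ["red", "blue"]

# @ references a local variable; `in` checks membership
subset = df.query("team in @teams")

# Method calls such as .isna() work with the python engine
missing = df.query("team.isna()", engine="python")

print(len(subset), len(missing))
```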
Great video for new users who don't know these tips and tricks.
Wish you had shared the code as well, to keep it handy for reference.
Thanks for watching. I don’t think I kept the code unfortunately
Rob, amazing video and intuitive. Happy to subscribe!
Great video as always. I will start exploring query method more.
Rob, Can you please make a video on how feature engineering, especially how to create new features using aggregation etc. Thank you
Glad you enjoyed the video. Feature engineering would be a good topic for a future video. I'll add it to the list!
3:24 vectorized version: what if I have a function that removes TRI from the name (e.g. name: 1976: Hasely Crawford (TRI)) and adds a new column?
I tried something like df['new_column'] = new_column_func(df['name'])
Actual use case with test file as below sample.csv
size,age,team,win,date,prob,file
small,31,green,False,2022-04-09,0.3394,green.txt
medium,39,blue,True,2022-12-13,0.0501,blue.txt
Objective: create a new column file_extn and save the file as sample_extn.csv
Code:
import pathlib
import pandas as pd

df_csv = pd.read_csv('sample_data/test_csv.csv')

# Function for new column
def new_column_func(row):
    print(row)
    path = pathlib.PureWindowsPath(row)
    return path.suffix

df_csv['file_extn'] = new_column_func(df_csv['file'])
df_csv.to_csv('sample_data/testScan2_csv.csv', index = False)
I am getting error as TypeError: expected str, bytes or os.PathLike object, not Series
What do you suggest?
Thanks in advance
Seems like a pretty specific case, but string operations (including regex) can be vectorized. For example, df['name'].str.replace('TRI', '') would remove TRI from any string in that column. Hope that helps!
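For the file-extension question specifically, one vectorized sketch using the `.str` accessor (the `file` column matches the sample data above; `.str.extract` with a regex is just one of several options). The original error came from passing the whole Series into pathlib, which expects a single path string:

```python
import pandas as pd

df = pd.DataFrame({"file": ["green.txt", "blue.txt", "report.csv"]})

# Capture everything after the last dot as the extension, in one pass
df["file_extn"] = df["file"].str.extract(r"\.(\w+)$", expand=False)

print(df["file_extn"].tolist())
```

If the pathlib behaviour is really needed per row, `df["file"].apply(...)` would work too, just more slowly.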
Very useful! Thank you for sharing in such an easy and agile way.
Hey! Glad you learned something. Appreciate the feedback!
Very useful video, thank you for making this !
Glad it was helpful! Share it with anyone you think might also benefit.
I'm an experienced developer looking to get familiar with Pandas. I found this video very valuable.
Thanks for the video!! A small comment about number nine, creating multiple intermediate dataframes. I understand that this can be costly in terms of memory, but I also think it can be nice for debugging and understanding during the development phase. Moreover, reusing the same name 'df' over and over can be prone to errors if you have different operations in different cells and you are 'playing', skipping some of them to see the effect, because you don't know which 'df' is actually being taken as input.
Good point! It really depends on what you're doing and the time it takes to develop sometimes is more important than the code itself. However, once you are done debugging then changing it to using chaining methods is typically preferred.
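For anyone curious what the chained style discussed above looks like once debugging is done, a small sketch (column names invented):

```python
import pandas as pd

df = pd.DataFrame({"team": ["red", "blue", "red"], "score": [3, 5, 7]})

# One pipeline, no intermediate names: each step feeds the next
result = (
    df
    .query("score > 3")
    .groupby("team")["score"]
    .sum()
    .reset_index()
)

print(result)
```

During development, any line of the chain can be commented out or the chain cut short, which recovers much of the step-by-step debuggability of intermediate variables.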
Have not, and will not make any of these mistakes because I’ve seen your “A Gentle Introduction to Pandas Guide” !!
Trueeeeee
Love it! Thanks nick.
Where, please? I found your twitter feed, and lots of “gentle introductions” from other people, but not yours.
@@garyfritz4709 here is the link ruclips.net/video/_Eb0utIRdkw/видео.html
@@robmulla Aha. I was googling out on the web, and it didn't find THAT video in YT. Merci!
Great video. Very helpful. Please keep making more like this
Appreciate that. I plan to!
This video is literally a gem
Glad you liked it Fizip. Hopefully you learned a thing or two you that will help you write better code!
@@robmulla Thank you for your reply. I am thankful for your content.
Did you ever try using the np.vectorize function to apply transformations over a df column? That one is among my favorites.
amazing video btw, subscribed!
Yes! I've used it before and had some good results. Thanks for watching!
Super useful! Thanks a lot, mate!
Thanks for watching. Please share with someone you think might also like it.
When you mention the slice warning sometimes you don't care about the original data frame so it doesn't matter if you modified it
That’s true. But I don’t like seeing the warnings. And if you don’t need the rest of the data you can just overwrite it with the slice?
#26 Look into alternatives when dealing with large data. Memory issues are a pain to deal with in Pandas.
Check out my videos on polars and pyspark!
I have been using the vectorised notation purely because it requires less syntax lol. But good to know it's faster.
Yep! It can be a lot faster.
I didn't know about .query or the parentheses for chaining. Awesome video
What is it with the \ in the chaining example you showed?
Thanks! Glad it helped. \ lets you split one statement across multiple lines.
Another awesome, useful video, Rob. Thank you.
Thanks for watching Deepak!
Hi, I love your videos!!!
Can you please make a video on how to handle missing values and outliers?
Great suggestion! I did have a whole video on this topic on Abhishek Thakur's channel. Check it out here: ruclips.net/video/EYySNJU8qR0/видео.html
Great insights, thanks for these important tips
Glad you found them helpful. Share it somewhere on social you think people might learn from!
I have to admit that I regularly make 65% of these newbie "mistakes". That's why your tips on how to optimize my coding structure are especially helpful for me! Thanks a lot for your input!
Glad it was helpful!🙌
Very illuminating video! I learned a lot quickly.
Thanks for the feedback Daniel!
OMG! I had to rest after first 10. So huge dose of information. Thanks.
This video is too damn good, I would love to find more videos like this.
Dude, amazing video, it really clears up the concepts.
Glad you think so! Share with your friends!
I do several of these and never imagined Pandas has styling. Time to rewrite and share with my peers.
My mind was blown when I found out about the styling and I use it a lot now. Please do share with others who you think might find this helpful.
Awesome content, I'm an aspiring data scientist and found this very useful. Like your Jupyter notebook theme by the way, which one is it?
Thanks. I have a whole video on my jupyter setup. But it’s jupyterlab with the solarized dark theme.
This was great! Just what I needed :)
Could you please do a tutorial on how to properly iterate over the rows in a data frame ?
I actually already have an entire video about that! Check it out here: ruclips.net/video/SAFmrTnEHLg/видео.html&feature=shares
10:25 Wrapping chains in parentheses. Something else I hadn't considered. Continuations using "\" look ugly. What would be nice is if the debugger pointed to the method that failed instead of the beginning of the chain.
Yes, I usually use the \ when working fast. But I also like using the black autoformatter which will automatically change it to use (). You can even use the extension lab_black in jupyter which will do it for each cell you run.
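A side-by-side sketch of the two continuation styles discussed above (Black rewrites the first form into the second):

```python
import pandas as pd

df = pd.DataFrame({"a": [2, 1]})

# Backslash continuation: works, but brittle, since a stray trailing
# space after the backslash turns it into a syntax error
s1 = df \
    .sort_values("a") \
    .reset_index(drop=True)

# Parentheses: line breaks are free inside brackets, no escapes needed
s2 = (
    df
    .sort_values("a")
    .reset_index(drop=True)
)

print(s1.equals(s2))
```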
lots of good info! thank you!
Glad you learned from it!
What a brilliant video! My question is on #18: how do you do a 3m/3m annualised percentage change? Can't seem to find any literature on this anywhere! MANY THANKS
Thanks for the feedback. I'm not sure about your specific question, but I think it might be possible if you use the periods and/or freq parameters in the pct_change method. freq is specific to time series data and you can give it things like M for month. Check out the docs here: pandas.pydata.org/docs/reference/api/pandas.DataFrame.pct_change.html
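A small sketch of the periods idea from the reply above, on invented monthly values. Whether compounding the 3-month change over four quarters matches the exact 3m/3m annualised definition being asked about is worth verifying against the docs:

```python
import pandas as pd

# Hypothetical monthly index values
s = pd.Series([100.0, 102.0, 104.0, 106.0, 109.0, 112.0])

# Percent change versus the value 3 periods (months) earlier
chg_3m = s.pct_change(periods=3)

# One common annualisation sketch: compound the 3-month change 4 times
annualised = (1 + chg_3m) ** 4 - 1

print(chg_3m.round(4).tolist())
```

The first three entries are NaN because there is no value 3 periods back to compare against.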
Concerning #24 "categorical":
Does parquet also support this datatype? So if I mark a column as categorical, save the dataframe to parquet and read it back into a dataframe, will this column still be categorical or will it be a string?
I think it does and will preserve the data type.
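The in-memory side of the categorical question can be sketched without a parquet engine. My understanding is that with pyarrow installed the category dtype does survive a to_parquet/read_parquet round trip, but that part is left as a comment since it needs an engine installed:

```python
import pandas as pd

df = pd.DataFrame({"team": ["red", "blue", "red", "red"] * 1000})

as_object = df["team"].memory_usage(deep=True)
df["team"] = df["team"].astype("category")
as_category = df["team"].memory_usage(deep=True)

# Repeated strings collapse to small integer codes plus one lookup table
print(df["team"].dtype, as_category < as_object)

# Round trip (requires a parquet engine such as pyarrow):
# df.to_parquet("tmp.parquet")
# pd.read_parquet("tmp.parquet")["team"].dtype
```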