I need to implement chaining methods and using functions in what I do; much easier to use and read. Great video as always.
Totally. Just those two things alone are huge! Glad you enjoyed the video.
Please keep doing this. No extra jargon; crisp, straight-to-the-point explanations are what's required. Nobody needs a 10-hour tutorial. Thank you for this.
I'll try my best! I do like trying to cram a ton of information into a short format, but these videos take a while to create. I totally copied the format from mmcoding (check out the channel if you haven't already)
00:18 #1. Writing to CSV with an unnecessary index
00:53 #2. Using column names that include spaces
01:25 #3. Filter datasets like a PRO with the query method
01:44 #4. Query strings with the @ symbol to easily reference variables
02:07 #5. "inplace" may be removed in future versions; better to explicitly overwrite with assignment
02:35 #6. Prefer vectorization over iteration
03:01 #7. Vectorized methods are preferable to apply
03:30 #8. The df.copy() method
04:08 #9. Chaining methods is better than creating many intermediate dataframes
04:28 #10. Properly set column dtypes
05:01 #11. Using booleans instead of strings
05:25 #12. The pandas plot method instead of importing matplotlib directly
05:45 #13. pandas str.upper() instead of apply, etc.
06:10 #14. Define the data pipeline once instead of repeating it many times
06:41 #15. The proper way of renaming columns
06:59 #16. The proper way of grouping values
07:31 #17. The proper way of doing complex groupings
08:01 #18. Percent change or difference can now be done with built-in methods (pct_change, diff)
08:25 #19. Save time and space on large datasets with the pickle, parquet, or feather formats
08:58 #20. Conditional formatting in pandas (like in Microsoft Excel)
09:22 #21. Use suffixes when merging TWO dataframes
09:48 #22. Check that a merge succeeded with validation
10:13 #23. Wrapping expressions so they are readable
10:33 #24. Categorical dtypes use less space
10:55 #25. Duplicated columns after concatenating (code snippet)
Thanks for making this!
@@robmulla I wish I commented better, as English is not my native language. Thank you for bringing us valuable tutorials that save us time and energy! I hope to help and learn from you more.
egg bro
thanks, I like no 4
This needs to be pinned
I have been working with pandas for 2 years now and I can confirm that I have fallen into about 70% of those bad practices. I really appreciate your video!
Thanks for commenting. Honestly I still make many of them to this day.
Matt Harrison's "Effective Pandas: Patterns for Data Manipulation" is one of the best resources I've read on idiomatic pandas.
I really need to get myself a copy! He knows his stuff for sure.
He has a great video (series?) on effective pandas also!
ty i will look into this book
I thought I was pretty good in Pandas, but you gave me so many new things to improve. HUGE thank you!
Glad I could help! I'm constantly learning better ways to do things in pandas myself.
I was thinking that I was pretty bad, but surprisingly I usually only make 2 of the mistakes from the video (which is a cool chance to improve). I just love videos like this because not only do they help you improve your skills, they also help you be realistic about your expectations and ambitions. Thanks for the video, Rob!
The pandas query function does not outperform the loc method. In fact, it is sometimes much slower when your query/data is big. In industry we tend to use the loc method for quick EDA. Query might be useful when you have a scheduled cron job.
Yea. Query isn’t for speed of processing but speed of writing the code.
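For anyone comparing the two, a minimal sketch of both styles (column names here are made up):

import pandas as pd
df = pd.DataFrame({"price": [5, 12, 7], "store": ["A", "B", "A"]})
# boolean-mask / loc style: repeats `df`, but often the faster option on large data
fast = df.loc[(df["price"] > 6) & (df["store"] == "A")]
# query style: same rows, less repetition, reads closer to SQL
clean = df.query("price > 6 and store == 'A'")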
Rob, thank you for all the time and energy you have put in for us. Would appreciate an updated video on "Exploratory Data Analysis" may be expanding on your year old one. Thank you again!
Oh god. I clicked on this video just to confirm that this is one more overly exaggerated, self-confident dude trying to teach newbies with 2 weeks of experience.
After watching it: this is goddamn life changing. As an engineer focusing on fluid dynamics and floater response, I use pandas on a daily basis. Out of 25, I didn't know approximately 20. Every single person who has any plan to use pandas must watch this.
Awesome!
One of the best videos I've seen on Pandas! So glad someone prominent enough is advocating for method chaining and pandas methods!
The 'query' method in particular is relatively unknown. In conjunction with not using snake_case, this leaves beginners writing very inefficient code because they can't use dot syntax.
I'm just at an intermediate level, so I can relate to many of these mistakes. It goes as deep as university, however; they do not teach clean, efficient code at all!
Glad you enjoyed it! I confess I don't use chaining nearly as much as I should.
Learned more about pandas in this video than from many hours' worth of other videos combined. Seriously, thank you.
This may be one of my first times commenting on YouTube after years of use. This video was INCREDIBLY USEFUL! There's a lot my previous team members did in scripts that is complicated to maintain or to build on following the same logic. This covers exactly what they used and the best way to rewrite it and make it more understandable.
Thank you so much for this godly information.
You're very welcome! I really appreciate the positive feedback. I’ll try to keep making helpful videos like this. Share with your friends in the meantime!
Wow dude! You are single handedly responsible for my data science growth. PLEASE keep making more of these videos I really appreciate it.
Wow! I love hearing feedback like this. I'll keep making videos if you all keep watching! :D
I had no experience with Pandas before joining a team where I need to work with it a lot. Have been learning as I go and it feels like the perfect time to see this video. I have enough time under my belt to have made or inherited code with many of these mistakes. With that context, I absorbed so much from what you shared. Thank you for helping me improve. I’m excited to refactor and apply what I learned!
Dear Rob,
I'm a total beginner in Python and Pandas. From what I understand, the warning at 3:30 is not about making a copy of sliced data, but rather about not using the .loc method and using "direct assignment" for columns (or whatever it's called). I could be wrong, but this is what I've gathered from reading the documentation and encountering a similar warning in my code.
Thanks for your valuable content. It has been a great help
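For context, a small sketch of both readings of that warning (column names are made up): chained indexing is what typically triggers it, and either a .loc assignment or an explicit .copy() avoids it.

import pandas as pd
df = pd.DataFrame({"a": [1, -2, 3], "b": [0, 0, 0]})
# chained indexing like df[df["a"] > 0]["b"] = 1 is what raises SettingWithCopyWarning
# fix 1: assign through .loc on the original frame
df.loc[df["a"] > 0, "b"] = 1
# fix 2: take an explicit copy when you want an independent subset
sub = df[df["a"] > 0].copy()
sub["b"] = 99  # no warning, and df is left untouched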
I have used the pandas lib for more than 2 years, but today I learned something new! Thank you, man!
Glad you learned something new! Share with anyone else you think might appreciate it!
This video rocked me. I've been using python for a few months and watching this video made me bust out my laptop so I could try all of these items out. Thank you for this.
So glad you found it helpful. Share with a friend!
Dude I've worked with pandas for 7 years and learned some new tricks, thanks a lot!
Great to hear! You've been working with it longer than I have. Please share my channel with any friends you think might also learn from it.
I can't believe how good this video is. I love your no-nonsense delivery; I don't have time at work to watch a 4-hour "intro" video. Keep it up!
Awesome video! I have worked with pandas for 3+ years and learned a lot here! Thanks
Happy to hear it. Tell your friends!
I've had little to no formal training. These tips are amazing and concise. Thank you so much.
oh wow the quality and clarity is worth subscribing! thank you !
The part about needing to avoid spaces is so true! But wait a second: every time I run into spaces instead of underscores, it's in other people's data, so I think what we actually need is how to deal with spaces that are already there (which is a painful journey).
Maybe rename all the columns with versions without a space. Like, you replace all the spaces with an underscore. df.rename can take dictionaries or even a mapper function so this is easy to do. Using a dictionary is preferable as you can just reverse map it, if you want to use the columns with spaces in them in the end.
Good point. In most cases it can be done with a list comprehension one-liner!
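A quick sketch of both approaches, assuming a dataframe df with spaces in its column names:

# dict comprehension keeps a mapping you can reverse later if needed
mapping = {col: col.replace(" ", "_") for col in df.columns}
df = df.rename(columns=mapping)
# or the one-liner using the string accessor on the column index
df.columns = df.columns.str.replace(" ", "_")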
I'm an experienced developer looking to get familiar with Pandas. I found this video very valuable.
This video is amazing. I have been using pandas for a long time now and still learned so many new good practices. Thank you!
I'm currently working on my first major pandas project and I reckon that I may have done around 15/25 of these 'mistakes'. Looks like I have some optimisation to do over the coming days!
We all have to start somewhere. I didn't learn many of these until I had been using pandas for years.
Found lots of favorite annoyances and learned a few new tricks! I'll add a shout-out to the ".pipe()" method to allow for wrapping all your transforms in a single statement when a single .method can't cover the required transform. An added bonus of "pipe()" - since it's using user defined functions to do the transforms, you can add decorators to automatically print out metadata on the resulting transform steps to get a quick insight into potential bugs.
Oh. Great one. I forgot to add pipe and assign in this video but wish I did.
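A minimal sketch of that idea; the function, column names, and logging decorator here are all made up for illustration:

import functools
import pandas as pd

def log_shape(func):
    # decorator that prints the dataframe shape after each transform step
    @functools.wraps(func)
    def wrapper(df, *args, **kwargs):
        out = func(df, *args, **kwargs)
        print(f"{func.__name__}: {out.shape}")
        return out
    return wrapper

@log_shape
def drop_missing(df):
    return df.dropna()

@log_shape
def add_total(df):
    return df.assign(total=df["price"] * df["qty"])

df = pd.DataFrame({"price": [2.0, None, 3.0], "qty": [1, 2, 5]})
clean = df.pipe(drop_missing).pipe(add_total)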
1:28 Before I discovered your videos, I'd never considered using the query method. The examples I've previously seen online made it look like a me-too add-on for seasoned SQL users. Using conditionals to mask off rows seemed just as easy and more pythonic. Also, at work, I typically filter with a script when I pull down the data, so by the time I get the data into pandas, I just need to tweak. But, you've shown me the light. Thanks!
I totally understand where you are coming from. It's important to keep in mind that query can be slower, but for quick filtering it can be a really quick and clean way to filter data. It really depends on what I'm doing. Glad I showed you something new though!
I can't believe I watched this whole video and only 2 of them were things I didn't know about! Thank you for sharing!
I started to watch your videos recently, and from now on I'm doing the chaining and putting each function on "one row" to make the data cleaner. Also, the query method is so powerful and simple; I used to duplicate the dataframe with the column and value I was searching for just to filter my df. You are boosting my studies!
Thanks for that!
This is awesome, I’ve been wanting to know what are the better ways to write my code and why. Please continue to make these videos.
Wow! Thanks so much Emily. Really appreciate the feedback and super thanks!
Awesome stuff. I've been using pandas for over 4 years, but it never occurred to me to start using the query method instead of loc (despite finding it tiresome to keep repeating "df" all over the place when using loc).
I also appreciate the quick format. You see YouTubers taking too long to say nothing at all, so congrats on actually going through 25 tips in 10 minutes. You got yourself a sub!
This video made me realize i have still a long road ahead in Pandas. Thanks! Just subscribed ;D
Thanks for the sub! We all start somewhere, but you'll pick it up in no time.
Oh man, I am making so many of these mistakes. Honestly, this is a great checklist to improve my clean coding.
I'm so guilty of number 8! Thank you for this!
I’ve made every one of these mistakes at some point so I know how you feel. Thanks for watching!
Great overview. I also found that ChatGPT is most useful in explaining existing code rather than writing it. Same with writing.
Yes, but chatGPT can also be very confident when it gives you bad code or code that doesn't work so don't trust it blindly.
@@robmulla Chat GPT so arrogant lol
Thanks, great tips! I've been using pandas for years, and I've only recently started using some of these (particularly query, and didn't know about the @ operator)
Glad it was helpful! The @ operator is really useful. You can also do stuff like min() or apply operations between columns within the query.
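A small example of both (made-up column names):

import pandas as pd
df = pd.DataFrame({"sales": [10, 25, 40], "target": [20, 20, 30]})
threshold = 15
over = df.query("sales > @threshold")      # @ pulls in a local Python variable
beat_target = df.query("sales > target")   # column-vs-column comparisons also work inside the string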
Excellent points! Learned new stuff that a lot of tutorials don't explicitly teach.
Glad it was helpful! Thanks for watching and please share with others.
Hey, Rob! Super video this one. I myself am Sr. DS working each day intensively with pandas, I will implement many of the tips you show! Thanks a million :)
Awesome to hear! I'm still learning new tricks with pandas every day.
Really enjoyed how fast this content came. I felt like it was a great speed to keep me engaged. I usually find these types of videos boring.
I have to admit that I regularly make 65% of these newbie "mistakes". That's why your tips on how to optimize my coding structure are especially helpful! Thanks a lot for your input!
Glad it was helpful!🙌
I didn't know about suffixes. Amazing!
Thanks Ken, glad you were able to learn something new! Love your videos.
Found your channel a few days ago and man, you have some epic content. The noob mistakes here are the exact way most tutorials teach you... just wondering why the hell the non-noob ways are not taught, as they are easier and shorter and the syntax makes more sense. Thank you for this video.
Glad you like them! I’m trying to continue to make more stuff like this so keep watching!
Great video! Wanted to add on #7; maybe someone will find this helpful:
In case you need to apply some function to several values in a row, one of the fastest solutions is numpy.vectorize.
smth like:
def divide(num, denom):
if denom == 0:
return 0
else:
return num / denom
so instead of doing
df["div"] = df.apply(lambda row: divide(row["value1"], row["value2"]), row=1)
you go with
df["div"] = np.vectorize(divide)(df["value1"], df["value2"])
Great tip! np.vectorize can be really handy. I think your example could be vectorized without having to use it though.
@@robmulla yeah) just couldn't come up with anything else))
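For the record, one way the same divide-by-zero logic can be written fully vectorized without np.vectorize (just a sketch with made-up column names):

import numpy as np
import pandas as pd
df = pd.DataFrame({"value1": [10, 4, 7], "value2": [2, 0, 7]})
# Series division by zero yields inf/NaN rather than raising, so mask those rows afterwards
df["div"] = np.where(df["value2"] == 0, 0, df["value1"] / df["value2"])
# or: turn zero denominators into NaN first, then fill
df["div"] = df["value1"].div(df["value2"].replace(0, np.nan)).fillna(0)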
Some great tips here. I usually chain with \ and I didn't know a query method exists!!
Guess you learn something new all the time!
Glad you learned something new! Cheers.
Wow, very useful - a true "tour de force" for better Pandas code. THX for this !
Glad it was helpful! Please consider sharing it with anyone else you think would benefit from watching.
Releasing a notebook showing all these tips would be a great benefit to the community.
The `.style` trick at 9:18 is amazing.
If this video gets 100k views I’ll share the notebook cringe 😬!
@@robmulla It currently has 241k views 😉
Oi! There were several of those I didn't know. I wouldn't have thought I was a noob, but I guess we all have a bit of that in us. Thanks for the video!
Glad you learned something new. I find I’m always learning something new with python and data science. That’s why I love it so much.
I really think this should be written up in a medium blog article. Would be awesome to refer to.
That’s a good idea. I really want to make blogs for all my videos but I don’t have the time. Maybe someday
I do several of these and never imagined Pandas has styling. Time to rewrite and share with my peers.
My mind was blown when I found out about the styling and I use it a lot now. Please do share with others who you think might find this helpful.
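For anyone curious, a minimal sketch of that Excel-like conditional formatting in a notebook (column names made up):

import pandas as pd
df = pd.DataFrame({"store": ["A", "B", "C"], "sales": [10.0, 25.5, 17.2]})
styled = (
    df.style
    .background_gradient(cmap="Blues", subset=["sales"])
    .highlight_max(color="lightgreen", subset=["sales"])
    .format({"sales": "{:.1f}"})
)
styled  # renders as a colored HTML table in Jupyter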
9:13 another pet peeve, though this one is more important than the last one. Do not use backslashes. ever. well, not _never_, use them when writing a `with` statement with more than two context managers. But otherwise, don't. I'll quote the `Black` (the formatter) documentation:
Backslashes and multiline strings are one of the two places in the Python grammar that break significant indentation. You never need backslashes, they are used to force the grammar to accept breaks that would otherwise be parse errors. That makes them confusing to look at and brittle to modify. This is why Black always gets rid of them.
Good point. The backslashes are an old habit I've been trying to stop using. We are all learning constantly!
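The parentheses alternative looks roughly like this (a sketch, assuming a df with store and price columns):

import pandas as pd
df = pd.DataFrame({"store": ["A", "A", "B"], "price": [3.0, -1.0, 5.0]})
# wrapping the chain in parentheses removes any need for backslashes
result = (
    df
    .query("price > 0")
    .groupby("store")["price"]
    .mean()
    .reset_index()
)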
Oh man, that guide is pro! Thanks, gonna apply all of that when refactoring my project!
Glad it helped! Tell a friend!
4:25 I'll add a pet peeve of mine: using chaining, but not placing each method call on a new line.
One of the greatest benefits of method chaining is easier tracking of changes, since everything moves linearly: rightwards or downwards, with line breaks and parentheses. Having multiple method calls on some lines but not others breaks this one-directional thought process and makes it much harder to skim the code.
He goes through this at 10:23.
@@Mats-Hansen and does it himself earlier 😉. This is just a cheeky comment
Haha. Thanks for putting me in my place. I was leading up to the later point? At least I can pretend that’s my excuse 😝
@@robmulla happens. Actually, I got pissed about this the other day when I tried "Black Formatter", because that only puts methods on new lines, not dot-notation attributes. E.g., calling df.T or df.columns would not result in a new line.
Utterly annoying for my little OCD brain.
Extremely underrated channel. Extremely helpful.
Thanks Nikhhilil!
I'm new to Pandas and all tips from this video are gold for me, thank you a lot!
Glad you learned something new. Welcome to the world of pandas!
+1000. I’m brand new to Pandas and still trying to grok the idiom. This video is GOLD.
Thank you! The .diff method is a lifesaver when computing velocities. The advice on not using inplace is excellent i got into various troubles because of it but i thought that's what the "experienced guys" do.
Thanks for watching. inplace is very tricky. Diff method is really powerful, and there are parameters you can use within it depending on your use case.
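A tiny sketch of the velocity use case with .diff() (made-up column names):

import pandas as pd
df = pd.DataFrame({"t": [0.0, 0.5, 1.0, 1.5], "x": [0.0, 1.0, 3.0, 6.0]})
# velocity ≈ Δx / Δt; the first row is NaN since there is no previous sample
df["vx"] = df["x"].diff() / df["t"].diff()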
Rob, as always, fantastic video. I have to admit, i get caught on some of those mistakes so it is great to have you point out and make suggestions on how to correct them. Thanks for sharing. Much appreciated.
I fall into these a lot too! We can all get better, glad you found the video helpful.
Great video for new users who don't know these tips and tricks.
Wish you had shared the code as well, to keep it handy for reference.
Thanks for watching. I don’t think I kept the code unfortunately
Thank you for creating such an amazing video on pandas. It has been really helpful for me as a pandas newbie. Learnt a lot! 🎉
Love it!
I feel personally attacked. Thanks so much for releasing this. I knew my code was bad, but not THIS bad.
Haha. With coding we all are learning and getting better every day. Me included. Thanks for watching!
These are fantastic refactoring suggestions.
Regarding the 'inplace' comment at 02:07 there's a very valid and very useful reason to prefer that and it's memory usage.
`df = df.reset_index()` or anything similar creates an entire copy of the dataset before replacing the original with it, and for extremely big data that is a problem: it may exceed the physical memory available and have the OS kill the script.
Interesting. I’ve heard this but then also thought it was debunked. I think the fact that the pandas core developers want to remove inplace gives good reason to try and avoid using it.
@@robmulla I guess it's more "functional style" to do it like they want but I recently had this problem with the memory when creating copies and I solved it by using 'inplace' (Python 3.7 and Pandas 1.3.5 if it matters)
@@SamusUy good to know!
Have not, and will not make any of these mistakes because I’ve seen your “A Gentle Introduction to Pandas Guide” !!
Trueeeeee
Love it! Thanks nick.
Where, please? I found your twitter feed, and lots of “gentle introductions” from other people, but not yours.
@@garyfritz4709 here is the link ruclips.net/video/_Eb0utIRdkw/видео.html
@@robmulla Aha. I was googling out on the web, and it didn't find THAT video in YT. Merci!
Learned tons with this. Short and succinct. New subscriber.
Thanks for subscribing!
This video is literally a gem
Glad you liked it Fizip. Hopefully you learned a thing or two you that will help you write better code!
@@robmulla Thank you for your reply. I am thankful for your content.
Thanks Rob! I just made my first Kaggle notebook and I think I made all 25 of these mistakes 😂
When you mention the slice warning: sometimes you don't care about the original dataframe, so it doesn't matter if you modified it.
That’s true. But I don’t like seeing the warnings. And if you don’t need the rest of the data you can just overwrite it with the slice?
Great video. Lots of operations and procedures that are helpful for effective coding. Would be really helpful to have a cheat sheet linked for easy reference.
At 6:23 (#14) you're returning the dataframe, but you're also modifying it in place. Having a return there gives the impression that the original dataframe isn't modified, especially if you also assign it to itself later.
It ties back to #5.
OMG! I had to rest after the first 10. Such a huge dose of information. Thanks.
Nice video! I have been using pandas for years and still run into these issues :)
Thanks! Glad you enjoyed the video. I really enjoy your videos too.
Hey Rob! You got me on that one right off the bat! I write a file to csv and when I load it back in, I get an 'unnamed' column and I wonder why....then I have to drop the column. 🤐Unnecessary work! Thanks a heap!
That's good to hear that you learned something new only a few seconds into the video :D - if you enjoyed it please share it on social or with any friends who might learn from it.
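For anyone hitting the same thing, the two common fixes (file names are made up):

import pandas as pd
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.to_csv("data.csv", index=False)   # write without the index column
df2 = pd.read_csv("data.csv")        # no more "Unnamed: 0" on re-read
# if a file already has that stray column, read it back as the index instead:
# df2 = pd.read_csv("old_file.csv", index_col=0)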
Super useful! Thanks a lot, mate!
Thanks for watching. Please share with someone you think might also like it.
Merge validator! Excellent thanks!
👍
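For anyone who missed that part, a tiny sketch of the validate argument with hypothetical frames:

import pandas as pd
left = pd.DataFrame({"id": [1, 2], "x": ["a", "b"]})
right = pd.DataFrame({"id": [1, 2], "y": [10, 20]})
# raises pandas.errors.MergeError if the keys are not actually one-to-one
merged = left.merge(right, on="id", how="left", validate="one_to_one")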
Dude, amazing video; it really makes the concepts clear.
Glad you think so! Share with your friends!
"I can see how this would be confusing for new users". Sir, I have been using pandas for 10 years and had no idea I was making these mistakes!
Whoa! Glad you could learn something. I’m sure there are a few things you could teach me!
Rob, amazing video and intuitive. Happy to subscribe!
Very useful! Thank you for sharing in such an easy and agile way.
Hey! Glad you learned something. Appreciate the feedback!
Great video. Very helpful. Please keep making more like this
Appreciate that. I plan to!
This video is too damn good, I would love to find more videos like this.
As a beginner, this video taught me some basic concepts about pandas. Thanks!
Very illuminating video! I learned a lot quickly.
Thanks for the feedback Daniel!
Thanks for the video!! A small comment about number nine, creating multiple intermediate dataframes. I understand that this can be costly in terms of memory, but I also think it can be nice for debugging and understanding during the development phase. Moreover, using the same name 'df' once and another can be prune to errors if you have different operations in different cells and you are 'playing' skipping some of them to see the effect, because you don't know which 'df' are actually taking as input.
Good point! It really depends on what you're doing and the time it takes to develop sometimes is more important than the code itself. However, once you are done debugging then changing it to using chaining methods is typically preferred.
Very useful video, thank you for making this !
Glad it was helpful! Share it with anyone you think might also benefit.
Fun fact about the *query* method that wasn't mentioned here.
In *query* you can actually (!) call methods on pandas columns. So you can do something like this:
`df.query('Name.isna()')` - to query rows containing *NaN*
`df.query('Name.str.contains("John")')` - to filter all rows where Name contains John
And even something crazy like
`df.query('Price.rolling(7,1).mean() > Price.mean()')` - to take rows whose rolling mean is greater than the overall average
🔥 great tips! Almost needs a video specifically on this.
#26 Look into alternatives when dealing with large data. Memory issues are a pain to deal with in Pandas.
Check out my videos on polars and pyspark!
Another awesome, useful video, Rob. Thank you.
Thanks for watching Deepak!
I have been using the vectorised notation purely because it requires less syntax lol. But good to know it's faster.
Yep! It can be a lot faster.
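For anyone wondering what that looks like in practice (made-up columns, just a sketch):

import pandas as pd
df = pd.DataFrame({"price": [2.0, 3.0], "qty": [10, 4]})
# slow, row-by-row:
# for i, row in df.iterrows():
#     df.loc[i, "total"] = row["price"] * row["qty"]
# vectorized: one line, and far faster on large frames
df["total"] = df["price"] * df["qty"]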
Great video. Thank you for being so direct and giving us valuable tips ☺
Glad you liked it! Thanks for giving feedback. Share the video with anyone else you think might also like it.
Great insights, thanks for these important tips
Glad you found them helpful. Share it somewhere on social you think people might learn from!
I had no idea you could wrap with parentheses; that's so clean.
Yes! Also check out the black autoformatter which will do this for you automatically.
Great video! I also like the jazz bass behind you, I also play bass :)
Awesome! I’m more of a guitar player but I also enjoy playing bass.
.query is one of my favorites. I use it all the time, but it is still not as flexible as the normal filtering way. For example, you cannot use the .isna() method or IN for comparison.
Though you can now use columns with spaces by enclosing them in backticks (` `).
Totally query is great. But did you know in recent versions isna() does work in query. Same with IN - I use it all the time against lists using the @ to reference the list.
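A quick sketch of both (column names made up; engine="python" is just to be safe for method calls inside the string on older versions):

import pandas as pd
df = pd.DataFrame({"Name": ["John", None, "Ann"], "Store": ["A", "B", "C"]})
stores = ["A", "C"]
missing = df.query("Name.isna()", engine="python")   # rows where Name is NaN
picked = df.query("Store in @stores")                # membership test against a list via @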
I'd love to show this to my students. You did a great job in a short video!
Thank you so much! It's hard to make it short but is worth it in the end.
Great video as always. I will start exploring query method more.
Rob, Can you please make a video on how feature engineering, especially how to create new features using aggregation etc. Thank you
Glad you enjoyed the video. Feature engineering would be a good topic for a future video. I'll add it to the list!
8:17.....this loop maybe...can be replaced....maybe.....with creation of another column which has the value of i-1....after_row.....extract the list of this column....[1:-1]....append(0)....then....insert this new list in row_after....then..percent_calc.... the end
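If I'm reading that right, it's basically the shift idea; a rough sketch with a made-up price column (not the exact code from the video):

import pandas as pd
df = pd.DataFrame({"price": [100.0, 110.0, 99.0]})
df["pct"] = df["price"].pct_change()   # built-in percent change vs the previous row
# the same idea spelled out with an explicit shifted column
df["prev"] = df["price"].shift(1)
df["pct_manual"] = (df["price"] - df["prev"]) / df["prev"]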
Hi, I love your videos!!!
Can you please make a video on how to handle missing values and outliers?
Great suggestion! I did have a whole video on this topic on Abhishek Thakur's channel. Check it out here: ruclips.net/video/EYySNJU8qR0/видео.html
Hi, I'm from Brazil and I want to say thanks for this video! I have a suggestion and a question - and if my English is wrong I apologize in advance.
1) Suggestion: put the 25 'Nooby Pandas' mistakes as separate timestamps in the video to make it easier to find a specific one of the 25.
2) Question: is it possible to find where two similar dataframes differ in a data column? Example: I collect the prices of a stock from two different sources and the start and end dates are the same, but the number of lines differs by one, and I don't know where because it's in the middle of the dataframe. Can you give me a hint to solve that, please?
Thanks for watching. For #1 I didn’t add timestamps so people watch the whole thing 😏. #2 sounds like you want to do an outer merge and see where the null values exist.
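To make the outer-merge suggestion concrete, a sketch with made-up data (the indicator column shows which source each row came from):

import pandas as pd
a = pd.DataFrame({"date": ["2023-01-02", "2023-01-03"], "price": [10.0, 11.0]})
b = pd.DataFrame({"date": ["2023-01-02", "2023-01-03", "2023-01-04"], "price": [10.0, 11.0, 12.0]})
merged = a.merge(b, on="date", how="outer", suffixes=("_a", "_b"), indicator=True)
only_in_one = merged[merged["_merge"] != "both"]   # the row(s) missing from one source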
I wish I had this video 6 years ago. Thank you.
Glad you found it helpful!
This is really useful, thank you!
Glad you found it useful, Juan!
lots of good info! thank you!
Glad you learned from it!