👷 Join the FREE Code Diagnosis Workshop to help you review code more effectively using my 3-Factor Diagnosis Framework: www.arjancodes.com/diagnosis
This is the only channel where I use the Super Thanks. Your channel is amazing and helps me grow as a Python developer. Thanks!
Thank you so much, Bruno!
Great video, Arjan! It would be great to see the integration with SQLModel, since often you want to save the data to a DB without repeating the schemas.
Thank you for the content!
Great tutorial. Clean presentation and motivation for use. Pandera was in my toolbox to use in a Pandas project. I'll follow up with this clean setup using Pydantic. I'll be interested in the integration with FastAPI.
Thank you!
@ArjanCodes Brother, thanks for the video, it is a really good resource for those of us looking to get into the topic, keep up the good work. I would just like to clarify one point: in the video you mentioned that series are rows, which is not quite correct. DataFrames are made of series, that is, columns, not rows.
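For anyone following this thread, a quick pandas sketch showing both sides of the point; column and row selections both return a Series, but only columns keep a single native dtype:

```python
import pandas as pd

df = pd.DataFrame({"price": [9.99, 4.50], "stock": [12, 3]})

col = df["price"]   # a column is a Series with a single dtype
row = df.iloc[0]    # a row selection is also a Series, with values coerced to a common dtype

print(type(col), col.dtype)  # <class 'pandas.core.series.Series'> float64
print(type(row), row.dtype)  # <class 'pandas.core.series.Series'> float64 (the int was coerced)
```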
FastAPI integration pls!!
Just discovered this library last week. Amazing. Thank you
I always envied C#'s FluentValidation package. It made validating data objects so easy and readable. Glad to see Python has something similar with Pandera!
Love your videos, always simple, short, and to the point.
Sounds very useful. Thanks for sharing.
Thanks for watching!
Wooow thanks for sharing
Great video! :)
I would love a tutorial about the Pint package for working with physical/scientific units, including your take on type hinting and validation of correct function inputs.
A Series can be not only a row but also a column of a DataFrame.
True
which is actually really annoying sometimes
@@dispatch1347 I think it's quite logical.
This is super useful. Thank you for sharing.
I have a question about minute 8:57 in the video, though: you mentioned that the DataFrame columns can be set as an instance variable of the OutputSchema class. Is it an instance variable or a class variable?
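For what it's worth, in pandera's class-based API the columns are declared as class-level annotations, much like pydantic fields, so they behave as class variables rather than instance variables. A minimal sketch (the column names here are illustrative, not from the video):

```python
import pandera as pa
from pandera.typing import Series

class OutputSchema(pa.DataFrameModel):  # called pa.SchemaModel in older pandera releases
    # Class-level annotations: pandera reads these from the class body
    # to build the schema; no instance is ever created.
    item: Series[str]
    price: Series[float] = pa.Field(ge=0)
```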
Thanks, it was indeed useful for me. I did not know about pandera
You're welcome Eduard!
Love this! Would like more examples of the integration with hypothesis too!
Wonderful series! Adding FastAPI is a good shout. Or perhaps an ORM into some SQL database? Not sure if that makes sense. In any case - VALIDATED 🔥
This is super useful. Thank you
Best IT channel ever❤
❤
As always a great video. Thanks a lot :)
Thanks again!
Very useful stuff! Thank you!
Glad you think so!
Hi. I have a question; I don't know if you accept this kind of request.
I have a very large table, several GB in size. In an hour it writes about 4 thousand rows. Every day, every 5 minutes, I have to populate/modify another, much smaller table, which makes a calculation: a sum of 3 or 4 columns contained in the larger database, from 00:01 to the time when the check is performed. So, if I check at 8:00, I have to sum the values generated from 00:01 to 08:00. If I check at 13:00, the sum is for the values between 00:01 and 13:00.
I was wondering, since I only work on data from that specific day (so at most 30k rows), does it make sense to create a temporary sqlite db in memory? Or rather a Pandas dataframe? Or are there other, faster solutions? Something that does query caching (like SELECT * FROM XYZ WHERE TIME > "00:01" AND TIME
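At ~30k rows a plain DataFrame is usually fast enough for this; a rough sketch of that route, with hypothetical table and column names (events, ts, value_a..value_c), assuming timestamps are stored as ISO text:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("big_database.db")  # hypothetical source DB

# Pull only today's rows (at most ~30k), then aggregate in pandas.
df = pd.read_sql_query(
    "SELECT ts, value_a, value_b, value_c FROM events WHERE ts >= date('now')",
    conn,
    parse_dates=["ts"],
)

now = pd.Timestamp.now()
window = df[(df["ts"] >= now.normalize()) & (df["ts"] <= now)]
totals = window[["value_a", "value_b", "value_c"]].sum()
print(totals)
```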
Hello Arjan,
Thank you for this great overview. I have a couple of follow-up questions.
What kind of validation does `pandera` support?
Can I have
1) fuzzy checks, something like expecting values not to be NULL but accepting a few of them?
2) multicolumn checks, e.g. if df["column_a"] == xx then df["column_b"] must be int, otherwise float?
3) expectations regarding the shape of the data, using a Z-test to compare it with a given distribution?
Otherwise, this library is pretty useless; I can implement similar checks in a few minutes on my own ;)
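For what it's worth, at least (1) and (2) are expressible as dataframe-level checks, and pandera also ships a Hypothesis class for statistical tests like (3), though I'd verify the exact API in the docs. A minimal sketch; the column names and the 5% tolerance are assumptions:

```python
import pandera as pa

schema = pa.DataFrameSchema(
    columns={
        "column_a": pa.Column(float, nullable=True),
        "column_b": pa.Column(float),
    },
    checks=[
        # (1) fuzzy null check: tolerate up to 5% missing values in column_a
        pa.Check(lambda df: df["column_a"].isna().mean() <= 0.05),
        # (2) multicolumn check: where column_a == 0, column_b must be whole-valued
        pa.Check(lambda df: (df.loc[df["column_a"] == 0, "column_b"] % 1 == 0).all()),
    ],
)
```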
As always, your video is interesting and helpful! I really want to deep-dive into the integration with FastAPI!
Excellent content!
Much appreciated!
Well, hmmm, interesting)
Would be great to see more on integrations
It would be nice if you could expand or go deeper on the validation you do using Pandera, maybe showing some logs of example dataframes, one that complies with the schema and one that doesn't. And maybe show when you use it, too. Do you use it for reading from CSVs, or for testing a transformed dataframe to check that it complies? Thanks :)
Love your production quality! Are you using a teleprompter? Your camera presence in the intro is sooo good!
Thanks!
Is a Series not a column of a DataFrame?
Glad to see an integration of Pydantic with this; the schema file was not practical for a new developer coming into the codebase. The downside is that we rely on two libraries, but I believe it's worth it for now.
Good, this is a very useful video.
Thank you!
Thanks for the great video, Arjan! How can you integrate this with BigQuery?
Thanks for sharing! I would like to know how to integrate with FastAPI. 😄
Noted!
Is there some way to pass the decorator check_output(schema) a schema that is not imported in global scope? Suppose you load the schema from file in main() and then want to call retrieve_retail_products() while passing it the validation schema. Can this be made to work?
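Since a decorator is just a function, one workaround is to apply check_output at call time, once the schema has been loaded. A sketch under those assumptions (from_yaml is pandera's schema-IO helper; the file name and function body are illustrative):

```python
import pandera as pa

def retrieve_retail_products():
    ...  # returns a DataFrame

def main():
    schema = pa.DataFrameSchema.from_yaml("schema.yml")  # loaded at runtime
    # Apply the decorator manually now that the schema exists:
    validated_retrieve = pa.check_output(schema)(retrieve_retail_products)
    products = validated_retrieve()
    # Or skip the decorator entirely and validate directly:
    products = schema.validate(retrieve_retail_products())
```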
A video on pandera and FastAPI integration would be great!
Thanks for the great video. MLFlow would be a good topic for a video in my opinion. Not many good vids out there.
Could you please cover Polars?
I have changed quite a bit in my programming techniques since I started watching your series of videos. Among other things, I now also add type hints when I define new methods.
I am also a user of pandas. But when I see the type hints you propose for pandas dataframes, I get the feeling that this is a bit over the top for me. I can understand it may be valuable in a professional software development department, but as an amateur programmer I think it is a bit too much.
I also think you might change your title to also include "Pydantic", since in the end you propose using Pydantic instead of (or combined with) Pandera.
I wouldn't import pandera as pa, because of the confusion with pyarrow. Also, what if you don't know the column names beforehand, but you do know the structure? Can you do regex matching? And can you repeat the structure for multiple columns?
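On the regex question: DataFrameSchema accepts regex-matched column keys, which also covers repeating one structure across many columns. A small sketch with a hypothetical column-name pattern:

```python
import pandera as pa

schema = pa.DataFrameSchema(
    {
        # One definition applied to every column whose name matches the pattern:
        r"^sensor_\d+$": pa.Column(float, checks=pa.Check.ge(0), regex=True),
    }
)
```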
Is there a way yet to only keep rows that meet the criteria?
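One pattern for this is lazy validation: catch the SchemaErrors, read the failing row indices from failure_cases, and drop them. A minimal sketch, assuming a simple integer index:

```python
import pandas as pd
import pandera as pa

df = pd.DataFrame({"price": [9.99, -1.0, 4.50]})
schema = pa.DataFrameSchema({"price": pa.Column(float, pa.Check.ge(0))})

try:
    valid = schema.validate(df, lazy=True)
except pa.errors.SchemaErrors as err:
    bad_index = err.failure_cases["index"].dropna().unique()
    valid = df.drop(index=bad_index)  # keep only the rows that passed

print(valid)
```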
Superb video! But how to validate an email address with pandera? Thank you in advance
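One way is pandera's built-in str_matches check with a regex; the pattern below is deliberately simplistic, not RFC-complete:

```python
import pandera as pa

schema = pa.DataFrameSchema(
    {
        "email": pa.Column(
            str,
            checks=pa.Check.str_matches(r"^[\w.+-]+@[\w-]+\.[\w.-]+$"),
        )
    }
)
```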
You can have 500 columns in a dataframe. OK, you will write a lot.
The inferred schema might help.
Hi Arjan, very nice video!
Do you know if DataFrameSchema works well with the new pyarrow dtypes from Pandas 2.0.0?
Thanks in advance :D
What about the performance impact?
Can't you use the @validator function decorator with Pydantic? Very nice video again! Thanks a lot!
I don't think you can achieve this with Pydantic validators without some additional work, because Pydantic is not designed to work with DataFrames out of the box. You'd need to convert the DataFrame into a format that Pydantic can understand, such as a list of dictionaries.
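A minimal sketch of that conversion route (the Product model and its fields are illustrative):

```python
import pandas as pd
from pydantic import BaseModel

class Product(BaseModel):
    name: str
    price: float

df = pd.DataFrame({"name": ["apple", "pear"], "price": [0.5, 0.7]})

# Validate row by row via plain dicts; raises ValidationError on bad data.
products = [Product(**row) for row in df.to_dict(orient="records")]
```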
Great video! I am just about to start a larger project working with a REST API, so I will be using some Pydantic. I didn't know you could also validate pandas like this; pretty interesting. Right now what I am kinda stuck trying to figure out is how to design my classes. There is one class for handling OAuth authentication and two others it contains for get and set methods, so that I can do restapi.get.systeminfo() or restapi.set.locationinfo(). But they are starting to get large, and I am thinking about having those get and set classes as bases and extending each of them in other files to separate things more.
My thought is to store some of this data with sqlite. Would I benefit from using pandas for some of this? Right now I am thinking of using Pydantic for API response validation and user input validation, then internally storing the data structures in classes, and using pandas to export to xlsx as one of the output formats.
You may not know that Polars outperforms Pandas, and Peaks is preparing to outperform Polars.
What about pola-rs?! It has schemas built in!
Great video! I'd love to see how to combine pandera with fastAPI
Great suggestion!
Or with Django too 🙏 @@ArjanCodes
Would it not be true to say that a pandas Series is a column of a table, rather than a row of a table? A row usually has multiple data types; a pandas Series is usually of one datatype, a single column.
I will be waiting for the fastapi integration.🙏🙏🙏
Attrs or Pydantic is good; Databricks' inferSchema method looks similar.
polars pandera pydantic plz
Arjan spying on me again. I'm just making a project with TensorFlow now lmao
Use polars.
For DS and ML purposes, Pandera seems useless. It definitely slows down your EDA and ML model development. Furthermore, it's hard to imagine a situation where data validation in production needs to be done this way. If you receive invalid data in production, you likely have larger problems with other services and components. Such situations can be detected with the help of monitoring systems and services.
en.wikipedia.org/wiki/Design_by_contract