Good day, sir! I'm getting back to some automated tests with python after, well, 4 years. I have to say - this video is a great refresher for me. Thank you.
Glad it was helpful!
This channel is gold! Thank you so much
Thanks!
This is a great work! I would recommend the course to everyone and I would like to see more videos like this from the author
More to come!
This is wonderful, sir!
This is exactly what I was searching for to automate API testing, but I couldn't find any other videos on it. All the while I was thinking about markers, fixtures, and parametrizing. Glad my sleepless nights weren't futile.
Thank you so much!
Glad it helped! It's easy to overthink things when there are so many tools and techniques available.
Tremendous experience! I've never seen such a crystal-clear explanation in any other tutorial. Waiting for more videos in the automation series!
This channel is gold!
Thank you!
Nice tutorial! Thank you! Got me through a tricky textbook problem.
Thank you so much for this video!! It's really helpful for getting started with API tests in Python.
Thanks! Glad you found it helpful for getting started with API testing. Keep practicing and you'll be a pro in no time!
great video to learn and practice integration testing. Thank you so much
Great video! Just wanted to check: how are you getting the auto-suggestions of code snippets in your tests? Which extension is that? Thank you.
This is probably GitHub Copilot :)
This is awesome to watch and helped me understand better. Can I ask what code-snippet tool you use in VS Code to populate the suggested code? TIA.
It's been a while since I made this video so I'm not 100% sure, but it's probably GitHub Copilot.
Good tutorial. For some reason, I'm having trouble getting the print statements to work within the test case functions themselves, even after doing what you suggested around 14:30.
Hmm, if you've added "-s" and still don't see anything printed, have you checked first if your tests were already being discovered by PyTest? Otherwise, I have another PyTest video that starts from simpler examples (with unit tests) that might be easier to follow until you get it working: ruclips.net/video/YbpKMIUjvK8/видео.html
@@pixegami Yeah all of the tests get properly discovered, and they pass, but the printed items within the test don't show up in the terminal
@@Antinatalist_Rampage Hmm, without actually seeing the code I'm not sure I can debug further. The `-s` flag should work in normal cases.
Here's a link to the relevant page on Pytest about how it captures your default stdout (including your prints): docs.pytest.org/en/7.4.x/how-to/capture-stdout-stderr.html#how-to-capture-stdout-stderr-output
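For reference, here's a minimal sketch of the behavior (the file and test names are just made up for illustration):

```python
# test_output.py -- pytest captures stdout by default, so this print
# is hidden on a passing test unless capturing is disabled.
def test_prints_something():
    print("hello from inside the test")
    assert True
```

Running `pytest -s test_output.py` (the long form is `pytest --capture=no`) should show the print as it happens, and `pytest -rP` shows the captured output of passing tests in the end-of-run summary.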
Hello, @pixegami , super cool video! Just a question, how did you deploy / host the API on the web?
Hey there! Thanks for the kind words! For this demo, I actually used a pre-existing API (todo.pixegami.io/) to showcase the testing process.
I do also have some videos on how to build and deploy an API as well: ruclips.net/video/iLt00bqp6is/видео.html and ruclips.net/video/RGIM4JfsSk0/видео.html
@@pixegami Thanks for the information!
thanks! very clear tutorial :D
Very good video. The explanation is properly done, explaining every unknown functionality at a fundamental level; even my little sister would understand.
Why are we using PUT instead of the POST to create a new task?
There are lots of different interpretations of when to use PUT/POST. I use PUT here because it's a "Create or Update" operation, whereas I see POST as more of an "append" operation. In this case, the PUT function is potentially destructive.
But either is fine, I think.
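As a rough sketch of what that looks like from the test side (the payload fields here are illustrative, not the API's exact schema):

```python
import requests

ENDPOINT = "https://todo.pixegami.io"

# The same PUT call both creates a task and, if an existing task_id is
# included in the payload, overwrites it -- a "Create or Update" in one.
payload = {"content": "buy milk", "user_id": "test_user_123", "is_done": False}
response = requests.put(f"{ENDPOINT}/create-task", json=payload)
print(response.status_code, response.json())
```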
@@pixegami If you're not passing a task_id, it's only a Create, so you should use POST.
Hey, thanks for the video! Do you think designing a framework with the POM design pattern is a good idea for this? I'm getting confused about what to put in the different page and test files, as the requests are dependent on each other.
Thanks for your comment! I actually haven't worked with the POM design pattern, so I'm not quite sure how to advise you on that, sorry!
Excellent Tutorial! Many thanks!
You're welcome!
Great video! The code suggestions are by copilot?
Yes they are!
Really great video. Good tempo and coverage. Keep rocking it!
Thank you for the great tutorial! I noticed, though, that deleting the SAME task ID multiple times keeps returning 200 (it should be 404 if the record no longer exists). I guess it's just the way this API is designed, so I'll move on. I wish you had a pytest mocking tutorial.
good video
Important: 33:28
I set up a run configuration for pytest, but it gave `DeprecationWarning: pkg_resources is deprecated as an API`. How do I fix this?
Hmm, if it's a warning then I think it should be OK to ignore. It just means you need to look up what the library owners want you to use instead of that API going forward.
Good job!
A quick question: your test cases actually interact with a real database. Is that what professional developers do in the real world?
As far as I know, developers tend to use mocking or an in-memory SQLite database; the last thing they want is to modify the real database.
Good question. For unit tests, you should mock and stub the database. But this is an "integration test", and it should absolutely test the real thing. There are some types of issues that only show up when you test the real thing end-to-end. If you are concerned with it affecting the PROD database, you will have to create a BETA stack, which is a working copy of the real thing, then use CI/CD to promote the code in stages.
There's actually a lot of different types of testing as your service grows more complex as well (e.g. periodic smoke test - also on the PROD stack, and load/stress testing - probably also on the PROD stack).
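One cheap way to run the same suite against different stacks is to read the target endpoint from an environment variable (the variable name and URLs below are just an example):

```python
import os
import requests

# CI/CD can point API_TEST_ENDPOINT at the BETA stack for integration
# tests, and at PROD for the periodic smoke test.
ENDPOINT = os.environ.get("API_TEST_ENDPOINT", "https://beta.example.com")

def test_service_is_reachable():
    response = requests.get(f"{ENDPOINT}/")
    assert response.status_code == 200
```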
Excellent video! One question: how would you test `get_task` (in another test) that receives a task_id as a parameter? Is there a way to chain the task_id that was created in create_task, or should I create a new task_id for that new test?
Not sure if it's exactly what you mean, but one way to do a setup once, get a UUID, and re-use it across multiple tests is to use a fixture: docs.pytest.org/en/6.2.x/fixture.html
You can have a per-class fixture, for instance, and have it create a unique task once that you can then re-use in as many tests as you want.
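A rough sketch of that pattern (the payload and response fields are illustrative, so check the API's actual schema):

```python
import uuid

import pytest
import requests

ENDPOINT = "https://todo.pixegami.io"

@pytest.fixture(scope="module")
def task_id():
    # Created once per module, then shared by every test that asks for it.
    payload = {
        "content": "fixture task",
        "user_id": f"test_user_{uuid.uuid4().hex}",
        "is_done": False,
    }
    response = requests.put(f"{ENDPOINT}/create-task", json=payload)
    assert response.status_code == 200
    return response.json()["task_id"]

def test_can_get_task(task_id):
    response = requests.get(f"{ENDPOINT}/get-task/{task_id}")
    assert response.status_code == 200
```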
@@pixegami Thanks, that's what I just did. In my test methods, I receive a fixture(scope="module") that responds with a taskId, so I take the necessary ID and make the get request. I don't know if it's very elegant or good practice, but it works.
@@OoFedericoO If it works, that's great. Don't worry about getting too caught up in whether it's elegant or good practice; as long as you are solving problems, the other stuff will fall into place gradually.
I'm getting back into testing APIs and am using TestCafe with JavaScript while following along with your video. Is it safe to assume the /update-task/ endpoint is incorrect in being a PUT rather than a PATCH? Also that it doesn't take in an existing task_id to update? With it being a PUT and no existing task to reference, we're just creating a new task, right?
The way people interpret REST APIs can vary. Here, I overload `PUT` as both `CREATE` and `UPDATE` since it mostly uses the same code. I might also just want to express "Create (Or Update if it exists)" as a single operation. In that case, I'd just go with a PUT API for it.
But it's not necessarily "correct" or the best way to do it-although it's not uncommon either.
I'm stuck behind a corporate firewall. I can get to the todo app from the browser but not via Python. I tried setting a proxy, etc., still no dice. The video looked good, but I got stuck unfortunately.
You are legendary for sharing this, explained simply.
Thank you, bro!!
😎👊🏾🔥💙
Glad it was helpful!
Nice tutorial. A more-than-deserved like!
I was wondering: why don't we erase the recently created test tasks, instead of creating that UUID mechanism?
Because in a real environment, the DB would be full of garbage data.
I know that in this experiment it vanishes after 24h, but considering a real-world API, what would be good practice?
Bro, I have one doubt: in the create scenario you applied a PUT request, right? But shouldn't it actually be POST in that case?
You can use POST too. It doesn't really make a big difference and is a matter of preference. There's lots of discussions about it online: stackoverflow.com/questions/630453/what-is-the-difference-between-post-and-put-in-http
I like to use "PUT" for anything where I want to say "Create and/or Update" - which also aligns with a lot of AWS APIs that I use (so it makes my code more consistent).
@@pixegami okay bro thanks for the reply
Hi, I'm new to your channel. This is a great video, so much information that is easy to digest, follow and implement. I just want to ask a few questions.
1. Can I move the helper functions to a separate file and import that file into the test case?
2. Should I move each test case to an individual file? For example, have a file for creating a task, a separate file for updating a task, and so on. I would really appreciate your feedback. Thanks.
Glad it was helpful, and thanks for the questions!
1. Yes. Helper functions are just like any other functions in Python - they can be stored in separate files and imported too if that's how you prefer to organize them.
2. Up to you - everyone has a different way of organizing their tests. For unit tests (which are usually targeted at modules/classes and mock everything else), I normally see a 1:1 mapping with the files they test. E.g. a `person.py` class will have a `test_person.py` test.
But in this video, we are doing integration tests, which actually test multiple components. For integration tests, you can also take the 1:1 file approach, but you don't have to. Sometimes I like to split files into 'logical units'. If this To-Do app became really big, "tasks" would be just one logical unit, so it might be fine to have one file to test the entire tasks workflow (create, get, read, etc.).
Since tests are something you can easily refactor, it's OK to just try what works for you and change it later if you have a better idea.
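For example, a helper file might look like this (the file and function names are just one way to slice it):

```python
# api_helpers.py -- shared request helpers, imported by any test file.
import requests

ENDPOINT = "https://todo.pixegami.io"

def create_task(payload):
    return requests.put(f"{ENDPOINT}/create-task", json=payload)

def get_task(task_id):
    return requests.get(f"{ENDPOINT}/get-task/{task_id}")
```

Then a test file would just do `from api_helpers import create_task, get_task` and call those inside its test cases.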
@@pixegami Thank you for your response. That all makes sense. Thank You. Honestly great channel. Thanks for sharing your knowledge.
I have a config.json file with a list of scenarios I wish to test. Is there a way I can easily create tests for each scenario?
Very nice tutorial!! Do you have any tutorial on PyTest + BDD?
Not yet. Are you from the frontend/NodeJS world? I know BDD is big there, but I don't really see it used in Python as much.
@@pixegami Nope, I'm from an automation testing background. I require PyTest + BDD for one of my projects.
Great video! I'm actually struggling with testing APIs for production systems where I can't create or delete test resources. Are you planning on doing a video on mocking? 😇
Keep up the amazing work!
This is a really good video for testing APIs; however, I'd suggest one improvement. I come from a different language background, but we also always implemented "error" tests for every request. I know you briefly touched on it, but I believe this is a mandatory thing.
It's a great idea for a more comprehensive test suite, and I agree: if we were to go into more detail, testing the errors and the negative cases is just as important.
I wonder why you did not use Fixtures instead of helper functions..!?
Good point, it probably would've worked here. I felt like functions were more primitive, and easier to digest. I do actually use fixtures in my other Pytest unit-testing focused tutorial: ruclips.net/video/YbpKMIUjvK8/видео.html
Do you have a GitHub repo with the code?
I don't think I ever uploaded source code for this project, but it was based on the test code in this other project here: github.com/pixegami/todo-list-api/blob/main/test/api_integration_test.py
So it's not exactly the same, but should be close enough for reference.
Awesome content! I'm a data scientist working mainly with Python and PyTorch. I want to be a full-stack MLE; I want to be able to build applications using ML. I'm wondering what I need to build an end-to-end application. Which stack should I learn? Thanks a lot for your help!
I used to do data science and ML as well, but I find that ML/DS skills are very different from full-stack and application development skills.
First I'd recommend figuring out a way to turn your ML/DS projects into a web service (like an API endpoint) so you can call it via a REST API (just like OpenAI's products). For Python, you can choose between Flask, FastAPI, Starlite.
Then you'll need cloud infrastructure to host it. The obvious choices are AWS, Azure or Google Cloud. Once you have an API endpoint with your ML products, you can then look into adding a front-end.
React is the dominant framework here, so I recommend it. But Svelte and Vue are also really good, so you could give those a try as well.
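To make the first step concrete, here's the rough shape of a minimal FastAPI wrapper (the model call is a placeholder for your own code):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(request: PredictRequest):
    # result = my_model.predict(request.text)  # plug your ML model in here
    result = {"label": "positive", "score": 0.98}  # stubbed for the sketch
    return result
```

If that lives in `main.py`, running `uvicorn main:app --reload` gives you a local REST endpoint you can later deploy to the cloud.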
@@pixegami Thanks a lot for your help. I have seen a lot of people recommending Django; what are your thoughts on that?
Thanks. Very cool Video
Glad you liked it!
Awesome detailed video 👍
Can you also give us access to the project code used for this tutorial,
so we can check it out and try implementing it on existing code? Thanks!
I don't think I ever uploaded source code for this project, but it was based on the test code in this other project here: github.com/pixegami/todo-list-api/blob/main/test/api_integration_test.py
So it's not exactly the same, but should be close enough for reference.
@@pixegami Appreciate it, thanks 🙏
What's the VS Code plugin that suggests "auto-complete" code after a comment?
GitHub Copilot! github.com/features/copilot
I also made a video about that here:
How To Use GitHub Copilot (with Python Examples)
ruclips.net/video/tG8PPne7ef0/видео.html
OK, now tell me how to get code coverage for the code exercised by these tests. I'm stuck there.
For unit tests I think there's quite a few options, e.g. coverage.readthedocs.io/en/7.3.4/
But for integration tests, I think that's a little more advanced topic, and I haven't looked into it for Python yet - sorry!
@@pixegami References to those resources would be helpful if you know them. Please, I need it very urgently.
@@rohitkapade1130 I haven't explored this myself, but I'd probably start here: coverage.readthedocs.io/en/7.4.0/api.html#api
It's a way to programmatically generate a coverage report. You can embed this into your service code, and turn it on when a "test" request hits it.
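Roughly, the programmatic API looks like this (where exactly you start and stop it inside your service is up to you):

```python
import coverage

cov = coverage.Coverage()
cov.start()

# ... run the code paths that your "test" requests exercise ...

cov.stop()
cov.save()
cov.report()  # or cov.html_report(directory="htmlcov")
```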
great video, thx
Glad you liked it!
I am getting a 403 error for create-task [PUT].
great video
When I go to "/docs", I don't find anything.
Very useful video! Well done 👍. Can you share the code, please? ☺️
Not the *exact* code used in the video, but close enough: github.com/pixegami/todo-list-api/blob/main/test/api_integration_test.py
Please provide a full tutorial.
Thank you so much, sir!
Thanks! Maybe a Python + SeleniumBase tutorial?
Good idea, that's the next step :)
Waiting for Python + Selenium tutorials!
I need a full course of Pytest with you. Pleaseeeeeeeeeee
Thanks for the feedback. I don't have any more PyTest course planned yet at this point, but I do have another PyTest unit testing video here: ruclips.net/video/YbpKMIUjvK8/видео.html
Thank you
You're welcome!
thanks buddy
thank youuuuuuuuuuuuuuuuuuu
It's a unit test.
Unit tests do not normally test external dependencies or make network calls - those are usually mocked/stubbed in unit tests.
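For example, a unit test would stub the network call with something like `unittest.mock` from the standard library (the function under test here is made up for illustration):

```python
from unittest.mock import Mock, patch

import requests

def get_task_status(task_id):
    # A hypothetical function under test that makes a real network call.
    response = requests.get(f"https://todo.pixegami.io/get-task/{task_id}")
    return response.json()["is_done"]

def test_get_task_status():
    fake_response = Mock()
    fake_response.json.return_value = {"is_done": True}
    with patch("requests.get", return_value=fake_response) as mock_get:
        assert get_task_status("task-123") is True
        mock_get.assert_called_once()  # no real network call was made
```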
PLEASE no music in guide videos, it's unwatchable
Thanks for the feedback. I've actually dropped it from my more recent videos for a while now, but unfortunately the older ones still have it :(
Thanks!