Been using this approach in our UI monorepo with 50 UI components and 500 WebDriverIO tests.
This approach gets super powerful when combined with --filter 🎉
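In case it helps anyone, the filtering looks roughly like this (the `web-tests` package name is hypothetical):

```sh
# Run only the e2e task for the test package and its dependencies,
# restoring cached builds for anything that hasn't changed:
turbo run e2e:test --filter=web-tests

# Or scope to packages changed since main, plus their dependents:
turbo run e2e:test --filter=...[origin/main]
```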
Can we get a link to your repo?
First-time viewer and now a subscriber. I got lost in the other half of the video, but thanks for introducing me to Playwright testing.
I understand the purpose here is to show the caching feature, but the explanation of this workflow really could do with being expanded.
I see a lot of projects where the e2e tests hit a real db or external service with state and complex data. Wouldn't the caching here show you a green test even when the data was changed underneath and the test should actually fail?
I think it depends on what is inside your build: if your build stores state from external services, then it would keep the tests green.
But if you do something like client-side fetching, then tests will vary based on your external state.
It's not caching the process of testing this build, it's caching the build itself.
Let me know if I'm wrong :)
@TheHouseTutorials that's true for his dev:test, which is not cached, but e2e:test depends on the build pipeline step and is cached, so the e2e tests will not be re-run until the build step produces a new artifact. So if the e2e tests depend on external data, this workflow will not work when different external test data is expected to produce different test results.
You could work around this by grabbing that external data and letting turbo cache it (via an npm script, and making a turbo pipeline step out of it); then if the code changes OR the test data changes, the test cache gets invalidated and the tests are re-run.
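A rough sketch of what that workaround could look like in turbo.json — the `fetch-test-data` task name and the `test-data/` path are made up for illustration; the point is that the snapshot becomes an input whose hash invalidates the e2e cache:

```jsonc
// turbo.json (sketch; task names are hypothetical)
{
  "pipeline": {
    // Hypothetical step: an npm script that snapshots external data into test-data/
    "fetch-test-data": {
      "outputs": ["test-data/**"]
    },
    "e2e:test": {
      "dependsOn": ["build", "fetch-test-data"],
      // Changed test data (or code) produces a new hash, so tests re-run
      "inputs": ["test-data/**", "tests/**"]
    }
  }
}
```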
You could separate the tests which depend on an external service under a different command in your pipeline. They can either have no cache, so they always run, or you can include another step which checks whether the external service has the same state as when it was last run.
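For the no-cache variant, that's just a task-level switch in turbo.json (the task name here is hypothetical):

```jsonc
// turbo.json (sketch)
{
  "pipeline": {
    "e2e:test:external": {
      "dependsOn": ["build"],
      "cache": false // always re-run tests that hit live external services
    }
  }
}
```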
Do you have this template repo anywhere?
"Pooop" [me: giggles like a child]
A package that depends on an app? Isn't this upside down? Or is it tolerable just because the "package" is tests, so it's not a real package? Put it somewhere else then, maybe? Have "apps", "packages" and "something else" in turbo? So many questions.
I'm putting mine in tests/. So I have apps/, packages/, tests/.
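For anyone wondering how that layout gets wired up, it's just an extra workspace glob in the root package.json (a sketch, assuming npm/yarn workspaces):

```jsonc
// package.json at the repo root (sketch)
{
  "workspaces": [
    "apps/*",
    "packages/*",
    "tests/*"
  ]
}
```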
Your next course, for Turborepo — when?
Are there any plans to support this type of workflow and caching strategy without requiring a separate package for the test suite? It's fairly common for projects to colocate their tests with their production code, so it would be nice if turborepo could accommodate this.
Hello,
You have explained this very well; I've learned new concepts.
Thank you.
I gave you a +1 like.
See you soon in a new tutorial.
Why has my build started taking longer to finish? Since package/web-tests depends on app/web, somehow when I try to build the app it also compiles the package/web-tests node_modules. Where can I see the build config?
Blazingly Fast
Uses a Cypress plugin for PW demo, hehe.
You can insert any e2e framework, really.
How would you approach an app that uses json-server? Serving the app and the API is easy; how would you build the app while serving the API?
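One possible approach (not from the video): start json-server alongside the built app and gate the tests on both with something like the start-server-and-test package, which accepts chained server/url pairs. Script names, paths, and ports here are made up:

```jsonc
// package.json scripts in the test package (sketch)
{
  "scripts": {
    "api": "json-server --watch db.json --port 3001",
    "serve": "serve ../web/dist -l 3000",
    "e2e:test": "start-server-and-test \"npm run api\" http://localhost:3001 \"npm run serve\" http://localhost:3000 \"playwright test\""
  }
}
```

That way the API and the built app are both up and responding before Playwright runs, and Turborepo can still cache the `e2e:test` task as shown in the video.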
Two levels of test coverage. One where you test with the real API on a staging environment for example.
And another “system test” level where the application is spun up locally and you send the API requests to a mock server, using something like WireMock, allowing you to mock the API endpoints/responses in your test :)
@siggerz100 right, all true.
How it would work in Turborepo with the approach being shown is the curious part.
@MuratKeremOzcan typically the tests will live as part of the system under test, so I don't think the way they're separated in the video is a great example.
And I guess it depends on the monorepo. If you wanted to have everything in a monorepo, the API would live as its own package and would be a dependency of the application under test, with the tests living in the SUT package.
This way, when you build the application, Turborepo knows to build the API too.
Turbo will also know that if you make changes to the API, because it's a dependency of the SUT, it will also need to be rebuilt/tested.
If however the API lives in another repo, then you'll have to do the mocking mentioned in my last comment, and test the real API with the application once both have been deployed.
Feel free to DM and I can link you an open-source repo I work on that uses Turbo + WebDriverIO. Same practices can be applied to other test frameworks though.
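To make the dependency part concrete, a sketch of the wiring (package names are hypothetical):

```jsonc
// packages/sut/package.json (sketch) — the SUT depends on the API package
{
  "name": "sut",
  "dependencies": {
    "api": "*"
  }
}
```

```jsonc
// turbo.json (sketch) — "^build" means "build my workspace dependencies first",
// so building the SUT also builds the API, and a change to the API
// invalidates the SUT's cache and triggers its tests again
{
  "pipeline": {
    "build": { "dependsOn": ["^build"] },
    "e2e:test": { "dependsOn": ["build"] }
  }
}
```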
Great video! could you share the GitHub repo for this video? Thanks in advance :)
And here I thought you only taught typescript!
One small tip from QA to solve this: just make your e2e solution a separate app, in a separate repo. Don't waste time keeping it inside the app itself. You're welcome.
Not sure how this would save any time: you are suggesting creating an entirely new project just for the e2e tests? Sounds like it would take more time, to be honest. Not to mention you don't get any of the caching or shared utils from a monorepo.