Unit Testing Rebooted

  • Published: 28 Sep 2024
  • I call it "rebooted" because I believe we've lost sight of the original intent and purpose of unit testing, and a reboot will allow us to start from the ground up and rethink how we go about unit testing. In this session I'll present an approach to unit testing that I have fine-tuned and put into practice over the years. This approach has been informed and guided by a number of books and team experiences.
    The approach presented will probably not sit right with some of you. If that is the case, I ask that you keep an open mind. In this session I hope to elicit a healthy discussion around unit testing so we can reboot it and get it right. Common practice is not common sense.
    Who is this session for?
    This session is for folks who have been doing unit testing for a couple of years or so. This is not a unit testing "getting started" session. We will be looking at code samples, and I assume you've written hundreds of unit tests, so you can appreciate what you're seeing.
    Related videos:
    • Test Spies and Test Mediators instead of Mocks
    • Unit Testing - Test Doubles
    • Unit Testing Best Practices

Comments • 14

  • @saurabhchauhan232 · 3 years ago · +1

    Very good presentation about system/class boundaries to test, and what is considered in which category. Looking forward to gaining more knowledge from this series. :)

    • @Matlus · 3 years ago · +1

      Glad to hear that, Saurabh. FYI, the Movie service project has a lot of tests across the various kinds (Acceptance Tests, Class Tests, Controller Tests and End-to-End Integration Tests). They are a good place to start.

    • @saurabhchauhan232 · 3 years ago

      @@Matlus Thank you

  • @SridharSathya · 9 years ago · +1

    Thank you for this wonderful series on testing. Well-paced explanation of concepts, good illustrations, and you really make the viewer agree with you through good examples.

  • @chrise202 · 6 years ago

    The example at 23:09 is somewhat too stretched :)

    • @Matlus · 6 years ago · +1

      I got this example from Martin Fowler's website if I remember correctly. Also, I've seen lots of code similar to this or worse.

  • @stewie1571 · 6 years ago

    I keep hearing people say that when you test at a higher level (a story, for example) and test the behaviors without mocks, your tests become less coupled to the implementation. The graph you drew at 28:00 illustrates why I think that is false.
    Firstly, to test a behavior you can mock, or you can give it a real-world fixture (for example, a DB), set up a scenario in the DB and assert the effects. What other magical ways can we assert behavior? I'm not talking about testing pure (computational) code. I'm talking about testing the effects (the behaviors).
    In that graph, if you were to test-drive D by mocking I and J, the tests only know about the implementation of the SUT and only the CONTRACT of the dependencies I and J. The tests know nothing about N or M or A. This is as decoupled as it could be.
    If I test A with no mocks, then the tests need to know how to set up the whole graph, including cleaning up and setting up a scenario in a DB, maybe a file system, etc. When I say the whole dependency graph: we're no longer coupling tests to a contract of the dependencies, we're coupling the tests to the full implementation of all dependencies for each test.
    Higher-level tests are great for when you end up needing to refactor at a medium level (everything A depends on).
    Lower-level tests can help you account for all couplings, surface SOLID violations sooner rather than later, and avoid the need for a large refactor. They have often shown me reusability, composability and maintainability I never would have thought of without writing tests at a lower level. Testing at a lower level encourages a more decoupled design.
    Testing too low can be damaging too - I know.
    I'm not saying let's not test at a higher level, or not do integration or e2e tests. They have a lot of value too. I am saying let's not claim that testing at one particular level is the only right level. Each scope of testing has trade-offs. It would be a lot more helpful to talk about those trade-offs and a way of thinking about them. SOLID + TDD is a practice, and in my opinion the most important practice a developer can improve in. Let's not restrict this practice with superficial dogma in the form of redefining terms to be more specific than they should be.
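    The coupling argument above can be sketched in code. This is an editor's minimal illustration, not code from the talk: the names D, I and J follow the dependency graph discussed at 28:00, and Python's `unittest.mock` stands in for whatever mocking library the reader prefers. The test for D touches only the contracts of I and J, never N, M or A.

```python
from unittest.mock import Mock

class D:
    """SUT: depends only on the contracts of its collaborators I and J."""
    def __init__(self, i, j):
        self._i = i
        self._j = j

    def handle(self, value):
        # Validate via I, persist via J; D never sees I's or J's internals.
        if self._i.is_valid(value):
            return self._j.save(value)
        return None

def test_d_saves_valid_values():
    i = Mock()                      # stands in for I's contract
    i.is_valid.return_value = True
    j = Mock()                      # stands in for J's contract
    j.save.return_value = "saved"
    assert D(i, j).handle(42) == "saved"
    j.save.assert_called_once_with(42)  # the test knows nothing of N, M or A
```

    Testing A without mocks would instead require the test to build and clean up this entire graph (DB rows, files, etc.) for every scenario, which is the coupling the comment describes.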

    • @Matlus · 6 years ago

      The idea is to test the functionality (Behavior) of the system, not classes. So if you're focused on testing functionality you don't care to know what classes implement that functionality. This allows you to refactor without your tests being coupled. Also the coupling in the "Mocking" style of testing is between the production classes and your tests.
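      The behavior-first style described here can be sketched as follows. This is a hypothetical, minimal example (MovieService and InMemoryMovieStore are invented names, not the talk's actual code): the test drives the system through its public boundary and asserts only on observable outcomes, so the classes behind that boundary can be refactored without breaking the test.

```python
class InMemoryMovieStore:
    """Trivial stand-in for a data layer; the test doesn't care which."""
    def __init__(self):
        self._movies = {}
    def add(self, title, year):
        self._movies[title] = year
    def find(self, title):
        return self._movies.get(title)

class MovieService:
    """Public boundary; internally it may delegate to any classes it likes."""
    def __init__(self, store):
        self._store = store
    def register_movie(self, title, year):
        if year < 1888:  # hypothetical business rule for illustration
            raise ValueError("year predates cinema")
        self._store.add(title, year)
    def lookup(self, title):
        return self._store.find(title)

def test_registered_movie_can_be_looked_up():
    # Asserts on behavior visible at the boundary, not on class internals.
    service = MovieService(InMemoryMovieStore())
    service.register_movie("Metropolis", 1927)
    assert service.lookup("Metropolis") == 1927
```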

    • @stewie1571 · 6 years ago

      I'm not saying test at a class level. The SUT can be a large or small composition, or a class, or a method, etc. I'm saying test at all levels necessary, but at least test at lower levels and consider the trade-offs. If the tests only know about the contracts of the SUT's dependencies, they don't need to test the dependencies. They know nothing about the dependencies other than their contracts. As such, the tests know less. They're less coupled.
      Smaller SUTs tend to need fewer mocks anyway, because they tend to have fewer dependencies, as their purpose is more focused. They're also more reusable, composable, maintainable and scalable. They're S.O.L.I.D. by nature.
      If your SUT is big, then you need more test setup and more assertions in each test, and you need more tests. This is coupling. This coupling can get unmanageable quickly.
      Coverage is another thing that bugs me when I see people test only at a high level. Just because a test executed some code, that doesn't mean the code is tested. Assertions are often too general, or don't have visibility from a higher level. Coverage can only tell you what you haven't tested; coverage cannot tell you what is tested.
      I'm just concerned that lower-level testing is not being given a fair cost-benefit analysis. A lot of these talks avoid that analysis. Again, e2e, integration and other levels of testing are still important.
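      The coverage caveat above can be demonstrated concretely. In this small sketch (hypothetical code, not from the talk), the first test executes every line of `discount`, so a line-coverage tool reports 100%, yet it proves nothing because it asserts nothing; the second test actually pins the behavior down.

```python
def discount(price, is_member):
    """Members get 10% off; everyone else pays full price."""
    if is_member:
        return price * 0.9
    return price

def test_discount_runs_but_proves_nothing():
    # Both branches executed: 100% line coverage...
    discount(100, True)
    discount(100, False)
    # ...but no assertion, so a broken implementation would still pass.

def test_discount_actually_tested():
    assert discount(100, True) == 90.0
    assert discount(100, False) == 100
```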

    • @stewie1571 · 6 years ago

      And I was talking about the same coupling between production classes (SUT) and tests.

  • @JGoodwin · 9 years ago · +1

    Hi Shiv, I appreciate your efforts in sharing your thoughts in this presentation. I did have one opposing view, on unit test execution time. Personally, while I have found it is true that it is better to have a test than not, slow tests, in my mind, are like the loud clacking noise from a motor. Sure, the spinning pulleys, gears and output all seem right, but the one thing no one tends to put down as a requirement can be a sign of faulty engineering. Additionally, to your point about running unit tests, I feel that test suites that run slowly become a drain on both the desire to rely on them and the productivity of those who do rely on them. I think the best world is one where dependency trees can be clearly identified, then the appropriate tests are run automatically and indicated as success/fail as soon as you type valid code.

    • @Matlus · 9 years ago · +2

      John Goodwin, thank you for your well-thought-out comment.
      In my experience, teams have 30,000 "unit" tests that prove absolutely nothing and take a very long time to run (about 1.5 hours). These folks are chasing the "unit is a class" style of testing, where every class is tested in isolation. The same system (the same business requirements, not the same code base) ends up having 8,000 unit tests in the style that I describe, and they take about 6.5 minutes (an average of 5 seconds per 100 tests). When you're developing, you don't run all tests, only those tests your work impacts. Every once in a while you run all tests just to make sure you've not broken anything else, and certainly before checking in.
      So it is not a given that "acceptance" tests take time. It depends on how you write your code. But really the takeaway is: "Write tests that matter" - tests that have meaning to the business. Let's not focus on testing every class dogmatically and in isolation, but rather be pragmatic and do what has real meaning. Teams love it too.

    • @JGoodwin · 9 years ago

      Shiv Kumar, you mentioned in your video that you often don't mock your data layer. How do you make tests that include the data layer run at 5 seconds per 100 tests? I was also curious about your opinion on testing reporting systems. In my experience, testing reporting systems is extremely difficult due to the need for sufficient test data and the number of edge cases.

    • @Matlus · 9 years ago · +1

      John Goodwin, yes, that's correct. I haven't mocked (or used any test double for) my data layer for at least the last 4-5 years in my projects.
      I use ADO.NET core for all of my data access. I use stored procedures, and I use LocalDB on the local machine as well as on the CI build servers. I don't have a comparison to make, so I can't really pinpoint how or why my tests run fast. I just checked my current project and see that most acceptance tests take less than 1 ms. Tests that throw exceptions (expected exceptions) take about 6-8 ms. A couple of tests take just under 50 ms.
      The tests that take the most time are in fact class integration tests that work with the file system, email, MSMQ, extracting metadata from media files, etc. But these tests are critical and/or complex, and so they have to exist.
      As regards reporting: it's been a while since I built a reporting system, so I'm not the best person to ask. I would think that edge-case tests should be easy to set up (data-wise), but maybe you're dealing with something that makes this complex.