Isolate your business logic into your service layer, then unit test that.

If your I/O and side effects are at the edges of your system, then testing the pure stuff in the middle is fun and productive, in my experience.
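To make that shape concrete, here is a minimal Go sketch (all names invented for illustration): the business rule is a pure function, so the unit test needs no mocks, no database, and no setup.

```go
package pricing

import "testing"

// Discount is the pure "middle": no I/O, no side effects.
// HTTP handlers and database calls stay at the edges and never
// appear in this test. (Collapsed into one file for brevity.)
func Discount(total float64, loyaltyYears int) float64 {
	switch {
	case loyaltyYears >= 5:
		return 0.15
	case total > 100:
		return 0.10
	default:
		return 0
	}
}

func TestDiscount(t *testing.T) {
	if got := Discount(50, 6); got != 0.15 {
		t.Errorf("loyal customer: got %v, want 0.15", got)
	}
	if got := Discount(150, 0); got != 0.10 {
		t.Errorf("large order: got %v, want 0.10", got)
	}
}
```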
I didn’t truly understand TDD until I worked through Learn Go with Tests. Now I’m of the opinion that the animosity towards TDD stems from a misunderstanding of it.
I have been writing unit tests for 15 years, but for some reason I didn’t try TDD for real until last year and it has made a world of difference. Writing tests when the code is already in place may ensure that the current behaviour remains the same, but it doesn’t improve your design and it is often painful and requires ugly mocks.
I cannot imagine building software without tests. I write tests to 1) drive out risk in areas like core performance, 2) define requirements that should be invariant regardless of implementation, and 3) speed up development by exercising the system in ways that are hard to exercise manually.
People say that their tests tend to break during refactoring, but that is not my experience. I think it occurs because they write tests AFTER their code is already written, and then they need to invent complicated mocks in order to test their highly coupled code. Such massive mocks make refactoring really painful and break easily.

Can I ask what you are using instead of mocks? I'm new to TDD, by the way.
Tests breaking constantly is likely due to choosing the wrong unit for the tests. If they are too fine-grained (only testing one class), then every time you move some code around, tests will break. This doesn't necessarily have to do with testing before or after, but doing it before usually does lead to better design.
He's gone into this in more detail before but his tests break because once he's written his initial passing code, the refactor he does is catastrophic enough that the tests break. Like, the unit doesn't even exist anymore sometimes so the test is a bit of a waste of time.
Some (not necessarily all) places where unit testing can be useful:

1. You're writing a tricky algorithm. This should probably be treated as a pure function (no side effects), which makes it easy to test as a black box.
2. You have a system/service with a well-defined API where it's easier or more cost-effective to use fake/mock dependencies than real ones. Again, it can be treated as essentially a black box.

#2 is a little iffy and requires some judgement calls. I think the biggest unit testing anti-pattern is trying to unit test an object that is essentially an orchestrator/manager of many other objects. If you find yourself writing mocks that collect the output of calls made against them and then asserting on that at the end, I would call this "backdoor testing", and it's pretty annoying to maintain. You're trying to test side effects rather than end-to-end workflows that bubble back to the caller. Any time the dependency API changes (which it will, because really it's an implementation detail), you have to go fix all of those tests.
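A hedged Go sketch of the alternative to that "backdoor testing" (names invented): rather than scripting a mock and asserting on the calls made against it, a small hand-rolled fake lets the test assert on the observable outcome.

```go
package users

import "testing"

// Store is the dependency's API as seen by the code under test.
type Store interface {
	Save(name string) error
}

// fakeStore is state-based: the test asserts on what ended up
// stored, not on the exact sequence of calls.
type fakeStore struct{ saved []string }

func (f *fakeStore) Save(name string) error {
	f.saved = append(f.saved, name)
	return nil
}

// Register is the behaviour under test.
func Register(s Store, name string) error { return s.Save(name) }

func TestRegisterStoresUser(t *testing.T) {
	f := &fakeStore{}
	if err := Register(f, "ada"); err != nil {
		t.Fatal(err)
	}
	if len(f.saved) != 1 || f.saved[0] != "ada" {
		t.Errorf("saved = %v, want [ada]", f.saved)
	}
}
```

Because nothing asserts on call order or per-invocation expectations, the caller's internals can be reshuffled without touching the test.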
Writing unit tests feels a lot like exploring the system. I do love that idea of the system being something that already exists (even if you haven't built it yet), and as you write it you discover more about it (platonic, yes). It's like math: the equations we don't know are there already (logically), we just haven't thought of them yet. And testing has been really cool to use as a means to explore the "business logic". From Dave Farley (yes, him), I get that tests are your first clients, because they're actually the first ones using your production code. That makes them the critics of your code, and when things start to get too hard to test, that's the critics saying "something is going wrong...", so you can actually improve both your code and your understanding of the problem. Also, like everything else in the devs' world, I am not crazy about tests, but I do like them (just like Clean Code).
If you're complaining that you can't know what the future code will look like, I think you're doing it wrong, or you are in the very special situation of building something completely from the ground up, which in large orgs isn't always the case.

To me, unit tests are valuable MOSTLY because they force you to make your components more modular and isolated from each other (building... well... units!).

Also, I've run multiple times into the situation where I revisited old code to change/update its functionality, typically because I was using an existing piece of code in a new component and suddenly that old code had to do things differently. Now, instead of "quickly" changing things to make my new use case work, I am forced to think about previous use cases (and how my new changes might affect those) too, because an existing unit test might break.

Especially in large code bases, I can't see how people honestly cannot get behind this concept.
"If you're complaining that you can't know how the future code will look like, I think you're doing it wrong" - someone who is not going to make it in this industry.
I know before I start my development what functions I am going to need, and I never write unit tests. I do value system and e2e tests, and they by definition prove my independent units work in actual, real orchestration.
Hot take: unit testing is a skill issue. It's good to unit test even on the first write. Write the tests after you write the code. Use the structure of your code to know what values to test, but never test using the internals, e.g. mocking. When you make changes to your code, fixing your tests is trivial and makes sense with the code you changed. Most people find unit testing annoying because they haven't written enough tests to know the patterns to use. Using the right patterns and strategies makes them easier to maintain. Also, just use a typed language so you don't test boring, useless things like whether you got an integer or not.
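For what it's worth, the standard Go pattern for "use the structure to pick values, test only the surface" is a table-driven test; a small sketch with invented names:

```go
package mathx

import "testing"

// Clamp limits v to the range [lo, hi].
func Clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// The cases mirror the code's structure (below, inside, above the
// range), but the test only ever calls the public function.
func TestClamp(t *testing.T) {
	cases := []struct{ v, lo, hi, want int }{
		{-5, 0, 10, 0},  // below range
		{5, 0, 10, 5},   // inside range
		{15, 0, 10, 10}, // above range
	}
	for _, c := range cases {
		if got := Clamp(c.v, c.lo, c.hi); got != c.want {
			t.Errorf("Clamp(%d, %d, %d) = %d, want %d",
				c.v, c.lo, c.hi, got, c.want)
		}
	}
}
```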
Hot take: Unit testing is a skill issue. As in, people for whom unit tests are actually effective are people who are incapable of writing basic functions without producing bugs, or are incapable of breaking down complex problems into smaller, less complex ones.
Never test using mocks? There is such a thing as too much and too little. Fixing tests written around very small units of code (often function-level) is usually trivial. If you have to mock 10 different things, then the function you're testing is too big or you have an architectural problem. That doesn't make mocking bad in general, in my experience (20+ years).
@@iojourny You've not worked in a team large enough to have diverging opinions on coding practices and where everyone is competent/their code can be trusted to have no bugs.
1:36, unit tests are important to me because:
- They test edge cases.
- They keep testing things that you think are already right when you change code.
- In TDD, they help with planning the design. This may be controversial when one doesn't yet know what the f() signature will be, and it's better to discover it by developing it.
- Most importantly: once you have an incomplete f() and a bunch of tests, the rest of development can become really fast!

1:55, it's possible to extract that if condition from inside the f(), making another one. The problem is that it may then be visible to the rest of the project. C/C++ has a way to avoid that (internal linkage via static, for instance) while still keeping this flexibility.
If you understand the problem well enough to think through every edge case, input/output combination, dependency, and assertion before ever writing a line of the actual code, you probably won't benefit from having the tests ahead of time. How many times do you get into writing a complex service-layer function and realize "oh shoot, I need this other field in order to handle X case or do Y calculation"? This realization breaks many of your unit tests, and you end up having to spend more time fixing the functions or figuring out how to refactor your tests. For this reason I can't say I've ever truly benefited from TDD.
@@mikehodges841 TDD for small/tiny f()s can be good. Even so, the dev is often unsure about the f() signature for a while. So I think a "hybrid" TDD is better: code the small f() until you discover its signature; once you have that, write the tests, and then complete the f(), which now goes much faster thanks to "the blessing" of the tests. It means it can be completed with less thinking, saving energy for the long run. However, in my experience, big f()s (2-5 screens) doing lots of things, either directly or by calling others, are hard to predict in a TDD way, and they also have this editing issue. The good thing is that C/C++ have a fairly tight syntax, making each test fit on one line, so it's easy to turn some of them on/off when broken. With macros, they may not even be compiled.
Unit tests (like static typing) are like mathematical proof: proving your code works and does what it's supposed to do. That's definitely a good thing. It's just about balancing the granularity of that against the other costs.
Very often they don't, though. Pure functions are easy to test, but code that has side effects on anything outside of its context tends to be inherently untestable.
For an API I took over from a previous colleague, we just went with integration tests. There was not much domain logic. I made a pretty nifty test setup with testcontainers that allowed us to very easily test the whole app, and it was fast as well (a full test run was a minute). Our confidence in our tests was extremely high, which allowed us to deploy often as well. Another major advantage is that there was very low coupling to the implementation: we did some pretty big refactors without having to update any test.

Yet I also like unit tests; when to apply them really depends on the situation. As soon as more domain logic appeared in our app, I would definitely have used unit tests for that. It's all about choosing the right unit.

I recently joined a project as lead where the architect forces people to isolate every single class in tests. It's just hell, as tests constantly break and there are so many low-value tests. This is going to change whether the architect likes it or not.
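The comment doesn't say which stack they used, but in Go the equivalent setup with testcontainers-go looks roughly like this (a sketch from memory; check the library docs before copying):

```go
package app_test

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWholeApp(t *testing.T) {
	ctx := context.Background()
	// Start a throwaway Postgres for the duration of the test.
	pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "postgres:16-alpine",
			ExposedPorts: []string{"5432/tcp"},
			Env: map[string]string{
				"POSTGRES_PASSWORD": "secret",
				"POSTGRES_DB":       "app",
			},
			WaitingFor: wait.ForListeningPort("5432/tcp"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatal(err)
	}
	defer pg.Terminate(ctx)

	host, _ := pg.Host(ctx)
	port, _ := pg.MappedPort(ctx, "5432")
	// Point the app at host:port, drive it through its public API,
	// and assert on responses - no coupling to internals.
	_, _ = host, port
}
```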
That's when I would write acceptance tests, which test use cases for those components and usually cover a bunch of files/classes. That's not what others usually call unit tests, but it's what I've found to be perfect, because it doesn't discourage refactoring and actually checks whether the code fulfils its requirements.
What does object interdependency have to do with unit testing? You do not test for dependencies, or for the presence/absence of relevant objects in memory at the relevant time. TDD is just proper architectural design up front, which is impossible; hence TDD is impossible, improbable, infeasible, inappropriate, and complete bullshit. I've been coding since 1990, all the way from BASIC to C to Fortran to Python to C#, and all types of software, from games to fluid/heat exchange simulations to all kinds of fancy algorithm performance comparisons (between ant colony optimization, genetic algorithms, and many forms of heuristic algorithms), and I have never ever unit tested, and I'm so glad I didn't take that poison in. Just code, do proper debugging, and make sure all variables going in are value-checked properly. It'll be fine.
I think I finally figured out how unit tests work. So, tests actually test 2 things; they declare a public interface and they assert the outcome of using that interface. If you change the outcome OR the public interface then you change the tests FIRST. All other changes should not break the tests. At the start of the project you don't know what the public interface should be, so just pick something and expect that it will change. No way around that.
Absolutely right. Unit tests do automated regression testing of the public API of your code, asserting I/O combinations to provide an executable specification of the public API. When well named, the value of these tests is as follows:

1. Because they test only one thing, they are generally individually blindingly fast.
2. When named well, they are the equivalent of executable specifications of the API, so when one breaks you know what broke and what it did wrong.
3. They are designed to black-box test a stable public API, even if you just started writing it. Anything that relies on private APIs is not a unit test.
4. They prove that you are actually writing code that can be tested, and when written before the code, also prove that the test can fail.
5. They give you examples of code use for your documentation.
6. They tell you about changes that break the API before your users have to.

Points 4 and 6 are actually why people like TDD. Point 2 is why people working in large teams like lots of unit tests. Everyone I have encountered who does not like tests thinks they are fragile, hard to maintain, and otherwise a pain; everyone who was willing to talk to me about why usually turned out to be writing hard-to-test code, with tests at too high a level, and often had code with one of many bad smells about it. Examples included constantly changing public APIs, overuse of global variables, brain functions, or non-deterministic code.

The main outputs of unit testing are code that you know is testable, tests that you know can fail, and knowing that your API is stable. As a side effect, it pushes you away from coding styles which make testing hard, and discourages constantly changing published public APIs. A good suite of unit tests will let you completely throw away the implementation of the API while letting your users continue to use it without problems. It will also tell you how much of the reimplemented code has been completed.

A small point about automated regression tests: like trunk-based development, they are a foundational technology for continuous integration, which in turn is foundational to continuous delivery and DevOps, so not writing regression tests fundamentally limits quality on big, fast-moving projects with lots of contributors.
How I do unit tests:
1. Write the function name.
2. Input types.
3. Think in Ramda or Lodash.
4. Go crazy with everything you know, don't stop.
5. Console-log the input and output and put them in the test.
6. Refactor the function to look more aesthetic.
Bang, you've done your job.
If you are a professional software engineer, you have to write code that is production-ready. Production-ready code needs e2e and unit tests, applied in a reasonable way, for stability and maintainability. It doesn't really matter whether you like writing tests or not; just do it if you're a pro.
I contributed to an open source project last week, and my PR included a few tests as well. It felt weird to do such an obvious thing. But today they pushed a larger refactor, and I wasn't sure what I had to fix in my own application. So it ended up being somewhat useful to just run the tests in a few seconds to know all the functionality still worked.
I recently wrote a full feature that has two main parts. One is a client with a facade-like interface; it communicates with several different APIs. The other part is the business logic, which uses this client. After I learned enough about the different APIs for the client and collected some outputs, I was able to write it using unit tests. And while I was working on the business logic part, I used a mock of the client in my unit tests.
Tests are unquestionably good, especially if you work in a team, and even more so if your company is hiring a lot. It is unreasonable to ask a new developer to be diligent and not break existing code when they don't even know that code/business rule existed.
For microservice architecture, what I've come to love is component tests (i.e. test a whole service using a mock server for any dependencies) plus contract testing to make sure those mocks are valid. A decent amount of unit test coverage + this + a few smoke tests to make sure the deployment works with the env config, and you are done.
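In Go, the mock-server half of that is easy to sketch with the standard library's httptest (names invented); the contract-testing half is typically a separate tool (Pact is the usual name in that space).

```go
package component_test

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestServiceAgainstMockDependency(t *testing.T) {
	// Stand-in for the downstream service. A contract test would
	// separately verify this canned response matches reality.
	dep := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		io.WriteString(w, `{"status":"ok"}`)
	}))
	defer dep.Close()

	// The service under test would be configured with dep.URL in
	// place of the real dependency; here we just hit the mock.
	resp, err := http.Get(dep.URL + "/health")
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Errorf("status = %d, want 200", resp.StatusCode)
	}
}
```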
The No. 1 issue I've had with unit testing, and the reason I didn't get into it sooner, is all the _extremely_ poor explanations of its utility and where to actually use it in the real world. Even to this day, there's a staggering number of tutorials where they demonstrate unit testing by writing a "Calculator" class with an "Add" method. And I'm like... dude... if you have to write a friggin unit test for the plus operator in [insert language here], then you should maybe look into applying some more Arctic Silver paste between your CPU and your cooler, cause there's no unit test in the world that's gonna help you.
I have the same observation: there are very few resources teaching about testing. The author of TDD released a course in which he teaches TDD, and guess what, for 2 hours he was writing tests for a fizzbuzz method. Classic.
There is a reason people use simple coding katas to demonstrate automated regression testing, tdd, and every other new method of working. Every new AI game technology starts with tic tac toe, moves on to checkers, and ends up at chess or go. It does this because the problems start out simple and get progressively harder, so you don't need to understand a new complex problem as well as a new approach to solving it. Also, the attempt to use large and complex problems as examples has been proven not to work, as you have so much attention going on the problem that you muddy attempts to understand the proposed solution. Also, there is a problem within a lot of communities that they use a lot of terminology in specific ways that differ from general usage, and different communities use that terminology to mean different things, but to explain new approaches you need to understand how both communities use the terms and address the differences, which a lot of people are really bad at.
There's also a tonne of terrible frameworks. And no one explains how you should structure your project to accommodate unit testing. What you need is someone developing their project from scratch without jumping around all over the place.
@@chudchadanstud Very true. For instance, most of Microsoft's official documentation on their various C# implementations (everything from websites to APIs to functions) does not accommodate unit testing out of the box.
almost every time I write a test (unit, integration) I find a mistake I've made in the code. good testing = +confidence in code, +faster dev, -mistakes
Unit tests = faster development. 3:40, that's what I'm always telling people who complain about unit tests taking too much time to write. Instead of doing lengthy manual user setups in your program, you can just run a set of data through your logic (the black box) with the press of a button. You know right away whether your code works. Unit testing can greatly speed up development of certain features this way. There is a complexity threshold for the bit of code you want to test for this benefit to become apparent, though. I think I'll make a video on this topic.
What I don't understand about this approach is how seeing a failed test helps you decide how to fix it. If the thing you're testing is sufficiently black-boxed to make it hard to test manually, doesn't that make it hard to interpret automated tests (even if they are easy to write)? It sorta feels like the Remembrall from Harry Potter, where Neville goes "oh, I forgot something, but I can't remember what I forgot!"
This is assuming you think of all of your edge cases when writing your tests... which, if you can think through edge cases in tests, you can think to handle them in your code. Tests don't just magically appear in your codebase to handle all of the complex edge cases; someone has to think of them and write them, and it's usually the same person as the one developing the code.
@mikehodges841 once you find an edge case, putting in a unit test makes sure that future changes don't re-introduce the same problem. This is great if the code is being worked on by a bunch of different people.
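A sketch of that kind of pinning test in Go (the bug and names are invented): once the edge case is found, one cheap test keeps it from regressing.

```go
package cart

import "testing"

// Total sums the prices in a cart.
func Total(prices []int) int {
	sum := 0
	for _, p := range prices {
		sum += p
	}
	return sum
}

// Hypothetical regression test: suppose Total once misbehaved on
// an empty cart. Pinning the case means no future refactor can
// quietly re-introduce the bug.
func TestTotalEmptyCart(t *testing.T) {
	if got := Total(nil); got != 0 {
		t.Errorf("Total(nil) = %d, want 0", got)
	}
}
```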
There are zero concrete, ubiquitous statements that can be made about testing. There are always caveats, and an articulate programmer can justify the state of testing in a code base. It may be warranted. It may not. It may speed up development; or not. It may be easy; or not. Testing private methods may be helpful. Whitebox testing only public interfaces may be best. Anybody that claims there is a one size fits all approach is just an amateur that lacks experience and / or thinks too highly of themselves.
"thinks too highly of themselves" The primegean in a nutshell. He makes very strong statements for example about agile or TTD without considering the many different environment (technical and business) we work in. Everyone has a different context but he's adamant about making bold claims that makes no sense in a lot of case.
The point of unit testing is exactly to slow you down and make you think about what you are going to implement. If you need a function that returns something, you first need to think about why you need it that way. Unit tests force you to think and slow you down. It's not super complicated, and it's a good practice!
@@elzabethtatcher9570 coding is about thinking! And we are not machines; you are getting paid for that. I don't see the problem with slowing down and thinking…
Unfortunately all the unit tests I've made were the "pour concrete over it, don't touch it, it works fine as it is" for some complicated algorithms. That being said, I would like to write more. I just never really have the place to do so, the only thing I'd be testing is a quick for loop or some library wrapper with a bit of extra setup. I seem to either write a 4 line long function helper which is unnecessary to test, or some 200 lines (total, split across multiple helpers etc so it's still not that bad) behemoth I don't want to look at ever again. Why can't I write normal 30-40 line long functions...
Damn, I thought I was weird for writing things at least once and throwing them away; it legit is OP. The second write is so fast! I call unit tests that I write for the purpose of debugging SANITY TESTS!
Unit tests are like construction lines in art. They can help you sketch out structure and shorten the feedback loop on your ideas. However, there are plenty of artists who can go direct to a final product without the intermediate step of construction lines. Telling Kim Jung Gi he should be using construction lines is dumb but teaching drawing by starting with construction lines makes sense. They can also be useful as shared foundations like in animation where multiple people collaborate, in this way they act as a form of documentation.
I like 100% coverage. Not because it necessarily proves my code correct, but because it proves future code didn't change anything unexpected. You make a change, tests break. You then decide whether the tests should have broken (and fix the tests) or whether the tests shouldn't have broken (and go back to your code). Certainly I try to prove correctness as much as possible as well, but we have multiple layers of testing to do that - unit tests, integration tests, regression tests, acceptance tests, etc. At least two of which are written and executed by people who aren't me and don't have my biases. Unit tests are generally the only place that tests the _code_ though (as opposed to testing the functionality), and I like to make use of that fact as much as I can.
Depending on the test framework and language, it can be impossible to achieve 100% test coverage. That is due to defensive code checking for things that are actually impossible, because they are intercepted beforehand with validation. Some of those defensive things ought to receive their own unit tests; in other cases it is just busywork. E.g. in Java, you will ALWAYS do a null check before accessing an object. Sadly, Java lacks a ?= or ?. operator for null checks.
@@dominikvonlavante6113 > it is impossible to achieve 100% test coverage

If you design your code with testing in mind, you should be able to get 100% on everything other than the entry point. I'm not saying there isn't a cost for doing that (dependency injection is usually harder to trace manually than just creating objects on the fly, for example), but it's doable unless the language is explicitly trying to prevent it for some reason.

> in Java, you will ALWAYS do a null check before accessing an object

Will you? Certainly you _should,_ but that's not the same as saying you _will._ Those "busywork" tests are exactly the kind of thing I like to see in the "just here for coverage" tests. It's way too easy to have a private method that doesn't do a null check because "I know how it's called" (and IntelliJ will even complain if you include a "useless" null check, which I wholly disagree with, but it's the default), and then 3 years later somebody else decides they also need your method's functionality, and suddenly your "I know how it's called" protection is bunk. As noted before, I view these kinds of tests not as guaranteeing anything about my code _today,_ but as providing a warning about problems that may crop up tomorrow. _Especially_ the "dumb" problems that we often don't think about, because we've internalized "you should always do a null check" so much that it's become "you will always do a null check" and we overlook the fact that people aren't perfect.
Even if you don't write the unit tests, writing your code with unit tests in mind helps keep your code more manageable, because you don't introduce all these hidden dependencies that would be impossible to unit test against. I also think adding a specific unit test to increase your ability to iterate on a solution to a hard problem is the sweet spot for unit testing. Unit testing somewhat trivial things can also be worthwhile if it's something with _a lot_ of dependencies, or on some really critical/hot paths where breakages would cause you a disproportionate amount of pain. I definitely don't think it's worth adding a lot of tests on something that is very likely to change, as making changes in those unstable areas just becomes a chore of cleaning up no-longer-relevant unit tests that probably shouldn't have been written in the first place. So they definitely provide some value, but it's like that old saying... something like "even too much of a good thing can be bad" 😂
A big part of what you are explaining is "when" to unit test. My criterion for that is: when it pays back. Every kind of testing is an investment; you should only do it for returns. Returns can be: fewer prod problems, quicker development, better maintainability... it is never a goal in and of itself. Tests incur a cost. Manual testing needs to be redone all the time: it does not scale well. Automated tests may scale better, but they also incur a cost to write and maintain. So strike a balance that makes sense for what you are building. They should make sense, i.e. add more value to the project than they cost in time to build and maintain; otherwise they rationally have to be qualified as a distraction.
Design your interfaces well and code the implementation side-effect-free, and the core logic becomes a black-box unit-test target. Skip testing at the edges (presentation layer, repository layer, any other places with side effects); you don't need to write a unit test to check whether a SQL query returns correct results or generated JSON/HTML is as expected - e2e tests are better suited for that.
I like unit tests, if you are actually able to automate them. I once worked at a company where we did embedded development. The tests were all manual - not because we were lazy, but because we had no clue how to automate them. The machines themselves don't have a network connection, and we failed multiple times to create simulations of these machines that behave EXACTLY like the real ones (we need the EXACT behaviour for the tests to actually be useful, because of the tolerances we have); there were multiple attempts, which cost multiple tens of millions each. So instead, we have a test protocol that we go through before release (which is once a year), and it takes a few weeks of the entire department doing just that.
The book "A practical approach to large-scale agile development" describes exactly how and why hp introduced systematic testing for their entire laser jet printer line, including their prior problems and how it got rid of them, including the testing of the embedded hardware involved. Well worth a read.
Test coverage is useless when you go for a defined percentage of coverage, but it's really useful for finding which parts of your code are not tested at all: "oh, this entire piece of this function is red, which means I haven't made a single test case that covers this use case. I should probably write a test for that."
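In Go, for instance, that red/green view comes straight from the standard toolchain (real flags, shown as a usage sketch):

```
go test -coverprofile=cover.out ./...
go tool cover -html=cover.out   # untested lines render red in the browser
```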
While I haven't been disciplined enough to apply unit testing in my personal projects, I have on occasion been required to include tests with professional work. For me, the reluctance comes from two things: the lack of discipline to put forth the extra effort of writing automated confirmation of correctness after I've considered a feature "final" (whether it ultimately turns out to be final or not), and choosing reliable test frameworks/harnesses for whatever language(s) I happen to be working with on a given project (mostly a laziness/discipline problem rather than a knowledge problem). Ironically, for literal decades I've been otherwise content with manually writing text to a console during debugging to figure out what's wrong when things go wrong. Maybe there's also the feeling of giving up control of the process to the machine, even though one of my jobs as a programmer is to figure out ways to automate processes.
Unit testing itself is not a problem. If all you have is black-box tests, without knowledge of the internal workings of the code, everything is great. It starts to fall apart when you add mocking. Then the testing code has to have knowledge of the implementation, and when someone refactors the code without changing the behaviour of the function, it is likely that the tests fail, since the mocking is no longer correct. I unit test only if I can do so without mocking. If I would have to start mocking, I create an integration test instead.
I tend to agree, except for smaller things where you only need to mock one or two things; there I'm fine with unit tests. I do prefer integration tests though, because:
1. Services are naturally dependent on other services, and mocking can often hide bugs that come from the way two services interact.
2. Changing a service somewhere forces you to think more carefully about how the entire system is affected, rather than about how to re-implement your change in the mocks (provided that a test fails as a result of your change).
3. Integration tests can be done in a very modular, "unit-test" type of way if you structure your services correctly, without having to mock anything. If you make each service a separate module with explicit dependencies on other modules (like an API module with a dependency on a database module, for example), then you can start up the module you're testing (along with its dependencies), configure it however you like, and then proceed with integration tests. If you were doing unit tests instead, you would have to mock out the dependent services.
But if you have to mock because you cannot unit test otherwise, that is itself a sign that you are writing hard-to-test code, and usually a sign that the idea of writing tests was not acted on until after the code was written. If you write the tests first, you don't write the code in this way, and you tend to produce code which is better. But of course you have to break your bad coding habits first, often before you see the design advantages of doing test-first design.
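One way to square this circle, sketched in Go with invented names: depend on a small interface and write a fake that actually behaves (an in-memory implementation), rather than a mock scripted with expected calls. Refactoring how the code under test talks to the dependency then can't break the test.

```go
package orders

import "testing"

type Repo interface {
	Put(id string, total int)
	Get(id string) (int, bool)
}

// memRepo is a fake: a real, working in-memory implementation,
// not a scripted mock. Nothing here records or asserts on calls.
type memRepo struct{ m map[string]int }

func newMemRepo() *memRepo { return &memRepo{m: map[string]int{}} }

func (r *memRepo) Put(id string, total int) { r.m[id] = total }

func (r *memRepo) Get(id string) (int, bool) {
	v, ok := r.m[id]
	return v, ok
}

// AddItem is the behaviour under test.
func AddItem(r Repo, id string, price int) {
	cur, _ := r.Get(id)
	r.Put(id, cur+price)
}

func TestAddItemAccumulates(t *testing.T) {
	r := newMemRepo()
	AddItem(r, "o1", 40)
	AddItem(r, "o1", 2)
	if got, _ := r.Get("o1"); got != 42 {
		t.Errorf("total = %d, want 42", got)
	}
}
```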
TDD makes sense if you know everything beforehand. Like v2 refactoring of v1 code: you can easily do TDD, but that means you have done the work to storyboard, discuss further, and write out a plan. Also, unit tests make great sense to me until you have to redo them because of a change, or when you mock waaayyy too much stuff.
That "write things twice" is something that is built into the process of TDD; only, when you rewrite it the second time, you already have the tests in place to make sure you have the intended outcomes. On unit tests: if you refactor and your tests change at all, you have done things wrong. If you think a unit test is something coupled to the implementation, you don't understand what a unit test is, and you should put more effort into understanding your craft. If you write tests coupled to your implementation to help you develop a new piece of fine-grained functionality, throw the tests away afterwards, because they are only a maintenance burden later.
Unit testing is really good when you have a lot of changes, when you need to make a big refactor. What I can say is that most projects I have worked on don't have even 30% test coverage, and the applications work well! But I'm not against tests. Do tests, do a lot of types of tests; after that you'll have a solid opinion about them.
Here you described how I look at it too: I don't care for test coverage, don't care to "catch regressions" - those are bullshit marketing points. What I care about (and the only reason I unit test) is exactly having a faster feedback loop - testing a totally incomplete algorithm is very possible with unit testing. Let's say I have a data structure that can grow (and not as simply as a vector): I can literally unit test that part alone without trying to build the whole complex thing.
The only valid reason to change a unit test is that the unit itself changed. If there is a new requirement for the unit, then you should write a new test for it. If no other requirements were affected, why would you need to change the existing tests? If you need to rewrite existing tests, then you are most likely testing implementation details. A unit test should focus on the "what", not on the "how". And that's why TDD is an excellent approach. When you do TDD, you: 1) write a test, 2) see it fail, 3) write the minimum amount of code to make it pass, 4) refactor. If you write the tests after, you are biased towards the implementation you wrote, so it is quite easy to write tests that know too much (about the implementation). And this is the main reason you have to change the tests once you make changes to the code itself. If you are refactoring and your tests start failing, it just signals that your tests focus on the "how" and not on the "what".
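A deliberately tiny Go walk-through of that 1-4 loop (fizzbuzz-scale on purpose, per the earlier complaints about examples):

```go
package fizz

import "testing"

// Step 1: the test is written first, and step 2 is watching it
// fail before Say exists.
func TestSayThree(t *testing.T) {
	if got := Say(3); got != "Fizz" {
		t.Errorf(`Say(3) = %q, want "Fizz"`, got)
	}
}

// Step 3: just enough code to pass. The test pins the "what"
// (3 -> "Fizz"), so step 4's refactor can rework the "how"
// without touching the test.
func Say(n int) string {
	if n%3 == 0 {
		return "Fizz"
	}
	return ""
}
```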
If you work on a complicated project with many developers at high velocity, unit testing is worth its weight in gold. But in most projects, well-crafted, reliable e2e tests are the must-have. For me, unit testing is as much about communicating the invariants of a module or function to another developer (or future me) as it is about preventing regressions.
In my crdts package, I have an e2e test for each CRDT that applies a bunch of updates asynchronously and then plays them in random orders on fresh instances to ensure the final state is always the same. Blackbox fuzzing for correctness is probably the most valuable type of test.
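The commenter's package is its own thing, but the general shape of such an order-independence test is easy to sketch in Go with a toy commutative merge (a stand-in, not their code):

```go
package converge

import (
	"math/rand"
	"testing"
)

// merge takes the element-wise max, like a G-Counter update:
// commutative, so apply order must not matter.
func merge(state map[string]int, node string, v int) {
	if v > state[node] {
		state[node] = v
	}
}

func TestOrderIndependence(t *testing.T) {
	type op struct {
		node string
		v    int
	}
	ops := []op{{"a", 1}, {"b", 3}, {"a", 2}, {"c", 1}, {"b", 1}}

	// Reference final state from one fixed order.
	want := map[string]int{}
	for _, o := range ops {
		merge(want, o.node, o.v)
	}

	// Replay in 100 random orders on fresh instances; all must converge.
	for i := 0; i < 100; i++ {
		shuffled := append([]op(nil), ops...)
		rand.Shuffle(len(shuffled), func(x, y int) {
			shuffled[x], shuffled[y] = shuffled[y], shuffled[x]
		})
		got := map[string]int{}
		for _, o := range shuffled {
			merge(got, o.node, o.v)
		}
		for k, v := range want {
			if got[k] != v {
				t.Fatalf("run %d: state[%q] = %d, want %d", i, k, got[k], v)
			}
		}
	}
}
```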
Here is a quick take: tests should be allowed to access private methods, as sometimes the smallest unit you really care about is part of a complex algorithm that is abstracted away because that simplifies the problem, but you still want to ensure the values passed in are processed as expected. This can be solved by not making it private, but then some teams don't want public methods that aren't used in production code.

Before anyone offers additional advice, the other limitation imposed by the team: functions are invoked by system events, and those should be the only public things, unless you use helper functions or utility classes. These limitations, when combined, lead to very bad results, as you will most likely need to pull additional data while your code is running. This is where not being able to access private methods becomes a problem. Also, this whole "private" concept is fake in all languages, as reflection does grant access, but that too is discouraged.

The bottom line is: imposing rules that result in tests full of complex mocks, which need to be updated every time the object they are faking changes, adds more complexity with poorer results. That is the opposite of what tests should be, because when tests are hard to maintain, they end up being poorly written.
The problem with calling the public/private split fake is that it ignores what it is actually for. When publishing code, the public stuff goes into your header files, and the promise is that this stuff won't break without a good reason, and not very often. While you can access the private code using reflection, that is not even guaranteed to work across individual patches, let alone be consistent across major releases. Your private code is then tested by calling your public APIs, and if it is not called, then either you don't have enough tests or it is dead code; the solution is to either add tests or delete code.
@@grokitall Rust allows testing private methods, because there are times when you would need to write many mocks just to test the basic functionality that a private method provides. If all you need is to indicate that a method isn't stable and carries no guarantee that it won't change, naming conventions work fine; Python does that, and it is heavily used.
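Go lands in the same camp, for what it's worth: a test file declared in the same package (rather than package foo_test) can call unexported functions directly, no reflection or mocks needed. A tiny sketch:

```go
package stack

import "testing"

// half is unexported: an internal helper, invisible outside the package.
func half(n int) int { return n / 2 }

// Same-package test files see unexported identifiers - roughly the
// Go analogue of Rust's in-module #[cfg(test)] tests.
func TestHalfTruncatesTowardZero(t *testing.T) {
	if got := half(-3); got != -1 {
		t.Errorf("half(-3) = %d, want -1", got)
	}
}
```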
Sometimes I just spin up a test with a breakpoint and see what the code is doing without even writing an assertion (it is way faster than editing main or executing another file with some print(...)). And eventually, if I am happy with the interface and output, I write a proper test (or more).
Bad unit tests come when you have a load of mutable state in classes with complex interactions that are uncontrollable and instead of going back and rebuilding, you throw 100s of unit tests at it. If your code is well designed then unit testing is quick, easy and makes you faster. If it’s not then unit testing is a distraction from the real problems.
To be efficient and not have tests completely consume me (because of deadlines, and because things get changed very quickly by stakeholders), I test when I am working with black boxes, or when I don't trust the framework that handles some of the logic. Most of the time I exercise the function millions of times while developing anyway. The unit test's value comes from testing code that will change during development without me personally changing it (package upgrades, dependency updates, etc.).
I don't see a use for unit tests for most functions - rarely are functions complex, and only when they are might I write one. My functions are so small that I don't need dedicated tests, because I still need to validate the output, and I do it once; if your functions are small and easy, they don't change.

And unit tests on embedded are completely useless. How will I report a test status back in a truthful way? The act of testing changes the real-life outcome in timing-critical systems (sort of like observing a quantum particle). What I'm interested in is seeing that cascading interrupts give the required results. E2E/system testing is what is valuable, and when those tests succeed, I have by definition tested my smallest units to work correctly too. In the mission-critical medical, ball-bearing, and energy software I worked on, unit tests were never deemed proof and, as far as I recall, were very rarely used or run. Only e2e tests and system tests were grounds for certification, and they were run frequently. And I've seen so many IT projects that do unit testing where the tests succeed and the products still fail miserably after every bloody release, because there's no regression testing of the e2e and system levels. So those two have value; the rest does not.

System testing is the most complex; we even had to create specialized hardware to simulate a sensor breaking: pulling a line high or low, spewing out random data, or even syntactically correct data with wrong values - proving that the alerting works, that the quorum sensor or algorithm detects the failure, ignores that data, and appoints the other as "master". Etc., etc., etc. Those are the hard and complex things.
I've taken to writing vertical-slice tests using an in-memory DB at work. Is it great? Uuuh. Well, it's effective, but the tests individually run slower due to spinning up the whole application - though when running all the tests, the startup time is trivial compared to the total run time. The big upside, however, is that we get to do black-box testing of both new and existing code by setting up the data itself, instead of mocking out a million implementation details. Especially great for those parts of the app that might get the legal department sicced on us if we get them wrong.
When doing game development, unit tests and automated testing in general are pretty much the greatest and most useful things to make your game good... I don't understand how you can make any game worth the energy it takes to change the pixels without them.
I work on a highly interactive dashboard in React, and I struggle with how and where to test anything in that code. Are most UIs just not tested, normally? We mostly rely on integration tests for API stability, but it's pretty touch and go.
If you properly isolate the key interaction triggers, you can test for them, but integration tests are pretty much the way to go, tbh. Quick example: if you have a huge table that displays different columns based on user role, then you test the permit-to-row builder function to make sure that critical bit of the UI never serves weird/unwanted data, and you do a full render and test against a snapshot to make sure everything renders and nothing visually breaks without you knowing. The rest of the functionality is pretty much e2e of user stories, or you will spend an absurd amount of time refactoring UI code into units.
@@Fernando-ry5qt and that is pretty much the boat we settled into. Most of us are backend people turned full stack for this project. It just feels alien to ride without a bunch of tests in the code.
I write unit tests for React code, but yes, it can be tricky to decide which components are worth unit testing and which are just implementation details. I think there are two key cases: generic components that are re-used all over the codebase, where unit tests are good documentation for other developers, and components that form a key part of a user story, e.g. when a button is pressed the right API mock should be called. I don't write unit tests for every custom hook, every re-styled HTML tag, etc., as that would result in too many tests that don't add much value. I think there are various benefits to unit testing frontend code - from encouraging modular design to checking accessibility compliance - but I agree that it's not as cut-and-dried as black-box, algorithm-heavy code.
@@Fernando-ry5qt Storybook is definitely something I've considered looking into, primarily because it's very frustrating writing Jest-based unit tests without being able to see the UI! However, as I understand it, Storybook really shines as a communication tool between developers, designers, and other stakeholders, so I'm sort of blocked from using it due to a lack of interest from all those people where I work. I have only heard good things though.
Edit: IMHO TDD doesn't mean unit tests for every function. I prefer a few integration and E2E tests that get me 60% of the way.

I yearn for unlimited time to build something, understand it, and then toss it in the trash and build v2.0 right away with reasonable tests. Your approach with harpoon1/2 sounds wonderful, but in the trenches I have been asked to build Thing A on Monday, then Thing B on Wednesday, and Thing C on Friday afternoon, while working with a bunch of other headless chickens who became headless thanks to poor project planning. TDD saves my ass in this case. I can be satisfied with gradually adding tests to services that are already in production when I go to fix issues that have cropped up.
The biggest issue with unit testing is not understanding what a unit is. What I mean is, people tend to call tests "unit tests" when they have nothing to do with a unit. As a consequence, their "unit test" uses tons of mocks to mock everything out. There are other types of tests which should be covering those areas.
This is down to the terminology mismatch between different developer communities. As testing became better understood, the term "unit test" came to refer to a simple test which follows one path through the code and returns one deterministic answer. In the meantime, and especially in object-oriented communities, "unit tests" came to refer to the entire test suite used to test a class or module. Not the same thing at all; then add testing enthusiasts not defining their terms, and you end up with people outside the testing community understandably not believing that "unit testing" is not fragile, because they are used to their test suites breaking - especially when they write tests after the fact and have to do lots of fragile tricks like mocks to get around the fact that their code was not written with testing in mind.
Unit testing isn't bullshit, but it's overhyped as a fit for every use case. Unit testing is a tool that's helpful in certain situations. "Every hammer is a nail", or whatever that saying is.
People just hate "unit tests" because they were forced to cover the code with the "unit tests" which in fact was not suitable for unit testing at all. The unit is a key here.
@@ameer6168 The only way to deal with such a client is to do continuous delivery based on incremental development, where you get sign-off on acceptance tests with each new test cycle, combined with penalty fees for rework caused by excessive changes to already-approved acceptance tests. This is what acceptance tests are designed for, and as with any customer, you have to make a choice about the value of working for them. If your rework fee is high enough, then either you will stop them from constantly changing the tests, or you'll make enough extra cash to be worth keeping them as a client, or you will know they are not worth the cost of keeping.
I do TAD for units and a little bit of integration, then bug-driven (sometimes end to end ) testing from there out. When you find a bug, write a test first to make sure that bug doesn’t happen in the future.
TDD is great when you have a well defined existing spec. It's terrible for creative exploration because it will lock you into a design too early, this is where throw away code is better to draft. One of the best uses for tests I've seen is catching regressions, and expressing intended business logic for future maintainers. I've also seen some absolutely terrible and pointless tests. It's a mixed bag, they're just tools, blame the authors.
TDD came about as a consequence of doing deliberate test-first automated regression testing and adding refactoring to the cycle. Comprehensive automated regression testing then forms part of the foundations of continuous integration. It works very well for exploratory design when you understand what you need the API to do but are less sure about how the internals need to work, and it encourages early stabilisation of the public API of the code. If something is not public, it will either be exercised by your public API tests or be dead code to delete; in either case the implementation is not constrained by the API. Of course, you are still free to break your public API right up to the point when you publish it for other code to use, at which point you are explicitly saying that the API documented in your header files won't break, supported by tests to make sure. If you break your API after this point, your users will rightly shout at you for it. Note: you do not have to add all of your functions and variables to your header files, making them public.
People who hate unit tests have never worked on and maintained years-old, complex systems. A good test suite gives you the freedom to do major refactoring with confidence that you haven't broken anything.
I have a hard-line hate for unit tests because I'm so damn lazy. Then I have regrets the instant I have to debug something that blew up in prod and don't have a test to fall back on to ensure I'm not mistakenly changing behavior while bug-fixing. And then I suck it up and write tests, this time... Then the cycle repeats, because I am not smart.
No, by definition you are wrong. In TDD, the developer writes a failing test, writes just enough code to pass the test, and then uses the combination of the passing test and refactoring to incrementally decrease the amount of technical debt. This is then fed into your continuous integration system, which proves that the code does what the developer expected. What you are talking about are acceptance tests, which plug into your continuous delivery system to prove that what your developer expected is what the customer needed; that can be done either by the developer or by the system designer. Of course, one of the advantages of TDD is that it does incremental development of the system, giving you something to present to the customer regularly, which enables exploratory testing of the developing system to identify such mismatches against working code.
I work on a giant-ass project. The only unit tests I have test some complex recursion and conversion libraries. I am now starting to implement integration tests for my own sanity, and to just stop breaking prod. The mental load is big when a lot of your services are cloud-based: your integration tests will need lambdas, a SQL database, a Redis database, S3 access. All of these permissions need to be set in your runners, which means proxying... Once a single test passes it will be great, but the setup is literal hell.
Magento is a great example of how not to do testing: tests that don't make sense, others that simply assert invalid output, and plenty that intentionally avoid specific inputs because otherwise they would fail. This is what you get when you demand that every merge has tests associated with it.
The problem is everyone learns how to test, goes way overboard with it, and ends up hating themselves, the coworker that made them hit 100% coverage, and unit tests altogether, because they feel like their code was dipped in cement.
Oh my god at the beginning. "Whenever I hear unit testing is hard I hear that you are a CRUD developer". That is EXACTLY the problem. Dealing with those data objects in unit tests DOES make them a shitty waste of time. That's one reason you don't do that shit in the first place.
The tests need to actually be good and helpful and not a maintenance burden because they break during refactoring. That's hard, and sometimes trying to force tests does more harm than good and should be skipped. But almost all projects I've worked on in industry have been criminally under-tested, and coverage gates have only made the problem worse.
My hate for unit testing (and a lot of testing in general) is given by the fact that 90% of the tests that are written are ruling out program behaviours that could be avoided by using strong type systems (think Rust). I've seen too many unit tests just checking that some returned JS object's property was not undefined.
Unit testing is fine until you have to test a React component with some logic and hooks and whatnot, which forces you to mock said hooks and their responses. Which is still fine compared to UI integration tests for larger applications.
The trouble with integration tests is that it's hard to make them 100% deterministic, and if you have a large project with 100 integration tests that each pass 95% of the time, the suite flakes 99.4% of the time (0.95^100 ≈ 0.6% chance of an all-green run). It's so annoying!!! Teams should be intentional about creating very few of these, or hold them to an extremely high standard of reliability (at three nines per test, 0.999^100 ≈ 90.5%, so the suite will still flake about 10% of the time).
Unit tests are great! I love them. But with experience I learned that they are most effective when written to test business logic and difficult algorithms/validations. I don't need to run the whole end-to-end system to test a simple change; my unit tests can do that for me and cover all the bases. It's wonderful. They stop the code from breaking with change. However, unit tests for the sake of coverage are meaningless and a waste of time.
I wonder if the team working on Accounts or Payments for Netflix would agree that CRUD apps don't need unit tests. A lot of CRUD-based systems are more complex than niche dev tools 🤷🏾♀️
In my experience having a mandatory requirement for code coverage often leads to people writing really shitty unit tests just so they can get X percent of coverage.
The correct response to that is not only to have a ratchet on your code coverage, but to make the test author responsible for fixing every fragile test they committed before they can work on new code, thereby pushing the pain of their crappy tests back onto them. If it still doesn't work, add them to the list of people who have to fix the broken tests which no longer have a maintainer. Eventually they will figure out how bad crappy tests are, and stop writing as many. Also crappy tests are usually a product of crappy code, so their code should gradually improve as well.
When you go to refactor, you may also have to throw away tests, and you need to be okay with that. If you're refactoring, it means that something in the design is better understood now than it was before, and what you're testing may not be exactly the same - otherwise, yeah, you'll have tests that fail.
Hard NO! Tests are documentation of the expected behaviour of your code. If you need to change tests when refactoring, you are testing the wrong stuff: you are testing implementation details, which are irrelevant. You test for the value that your software is supposed to deliver.
1:09 There can be some really complicated state management logic in FE. So I do not agree that being an FE is an excuse for having this mindset. To me, unit testing ensures your logic is correct, your flow is not too complicated, and you know where your state is being managed like where your data is being prefetched. Manual testing covers the visual checks and user interactions that may break due to the environment or unexpected ways of using your app. But unit testing ensures the logic under the hood is solid so that you know your code makes sense and it will be easier to build upon in the future.
Of course, the real value of lots of public-API unit tests is that it pushes a lot of code to be outside of the UI, and thus testable. The rest is just a thin skin wrapped around the rest of your program, which is then much easier to test and to change.
The value of unit tests is proportional to the cognitive load of the thing you're considering testing. If it takes no effort, isn't breaking, and doesn't really keep you up at night? Probably not worth testing. On the other hand, if it's breaking, overwhelming to even think about, or stressing you TF out? You should probably test it.
Isolate your business logic into your service layer, then unit test that
If your I/O and side effects are at the edges of your system, then testing the pure stuff in the middle is fun and productive in my experience
I didn’t truly understand TDD until I worked through Learn Go with Tests. Now I’m of the opinion that the animosity towards TDD stems from a misunderstanding of it.
I have been writing unit tests for 15 years, but for some reason I didn’t try TDD for real until last year and it has made a world of difference. Writing tests when the code is already in place may ensure that the current behaviour remains the same, but it doesn’t improve your design and it is often painful and requires ugly mocks.
I can not imagine building software without tests. I write tests to drive 1) risk areas like core performance, 2) define requirements that should be invariant regardless of implementation, 3) to speed up development by exercising the system in ways that are hard to do manually.
People say that their tests tend to break during refactoring, but that is not my experience. I think it occurs because they write tests AFTER their code is already written and then they need to invent complicated mocks in order to test their highly coupled code. Such massive mocks make refactoring really painful and break easily.
Can I ask you what are u using instead of mock? I’m new to tdd, by the way.
If you're complaining that you can't know what the future code will look like, I think you're doing it wrong, or you're in the very special situation of building something completely from the ground up, which in large orgs is rarely the case.
To me, unit tests are valuable MOSTLY because they force you to make your components more modular and isolated from each other (building... well... units!).
Also, I've run multiple times into the situation where I revisited old code to change/update its functionality, typically because I was using an existing piece of code in a new component and suddenly that old code had to do things differently. Now, instead of "quickly" changing things to make my new use case work, I am forced to think about previous use cases (and how my new changes might affect those) too, because an existing unit test might break.
Especially in large code bases, I can't see how people honestly cannot get behind this concept.
"If you're complaining that you can't know how the future code will look like, I think you're doing it wrong" - someone who is not going to make it in this industry.
I know before I start my development what functions I am going to need, and I never write unit tests. I do value system and e2e tests; they by definition prove my independent units work in actual real orchestration.
Tests are only as good as how much confidence they give you.
Hot take: unit testing is a skill issue.
It's good to unit test even on the first write. Write the tests after you write the code. Use the structure of your code to know what values to test, but never test using the internals, e.g. mocking. When you make changes to your code, fixing your tests is trivial and makes sense with the code you change.
Most people find unit testing annoying because they haven't written enough tests to know the patterns to use. Using the right patterns and strategies makes them easier to maintain.
Also, just use a typed language so you don't test boring useless things like whether you got an integer or not.
If you don't write unit tests for your unit tests then you are not going to make it in this industry.
@@youtubeenjoyer1743 imbecile
Hot take: Unit testing is a skill issue.
As in, people for whom unit tests are actually effective are people who are incapable of writing basic functions without producing bugs, or are incapable of breaking down complex problems into smaller, less complex ones.
Never test using mocks? There is such a thing as too much and too little. Fixing tests, when they're written around very small units of code (often function-level), is usually trivial. If you have to mock 10 different things, then the function you're testing is too big or you have an architectural problem. That doesn't make mocking bad in general, in my experience (20+ years).
@@iojourny You've not worked in a team large enough to have diverging opinions on coding practices, where everyone is competent and their code can be trusted to have no bugs.
1:36, unit tests are important to me because:
- Test edge cases.
- Keep testing things that you think are already right, when you change code.
- In TDD, it helps plan the design. This may be controversial when one doesn't yet know what the f() signature will be, and it's better to discover it by developing it.
- The most important: once you have an incomplete f() and a bunch of tests, the rest of development can become really fast!
1:55, it's possible to extract that if condition from inside the f(), making another function. The problem is that it may be seen by the rest of the project. C/C++ has a way to avoid that while still keeping this flexibility.
If you understand the problem well enough to think through every edge case, input/output combination, dependency, and assertion before ever writing a line of the actual code, you probably won't benefit from having the tests ahead of time. How many times do you get into writing a complex service-layer function and realize "oh shoot, I need this other field in order to handle X case or do Y calculation"? This realization breaks many of your unit tests, and you end up having to spend more time fixing the functions or figuring out how to refactor your tests. For this reason I can't say I've ever truly benefited from TDD.
@@mikehodges841 TDD for small/tiny f()s can be good. Even so, the dev is often unsure about the f() signature for a while. So I think a "hybrid" TDD is better: code the small f() until you discover its signature; once you have that, write the tests, and then complete the f(), which now goes much faster thanks to "the blessing" of the tests. It means it can be completed with less thinking, saving energy for the long run.
However, in my experience, big f()s (2-5 screens) doing lots of things, either directly or by calling others, are hard to predict in a TDD way, and they also have this editing issue.
The good thing is that C/C++ have a kind of tight syntax, making each test fill only 1 line. So it's easy to turn some of them on/off when broken; via macros, they may not even be compiled at all.
Unit tests (like static typing) are like mathematical proof: proving your code works and does what it's supposed to do. That's definitely a good thing. It's just about balancing the granularity of that against the other costs.
Very often they don't, though. Pure functions are easy to test, but code that has side effects on anything outside of its context tends to be inherently untestable.
For an API I took over from a previous colleague, we just went with integration tests. There was not much domain logic. I made a pretty nifty test setup with testcontainers that allowed us to very easily test the whole app, and it was fast as well (a full test run was a minute). Our confidence in our tests was extremely high, which allowed us to deploy often as well.
Another major advantage is that there was very low coupling to the implementation. We did some pretty big refactors without having to update any test.
Yet I also like unit tests; when to apply them really depends on the situation. As soon as more domain logic appeared in our app, I would definitely have used unit tests for that. It's all about choosing the right unit.
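A minimal sketch of that kind of setup, assuming the Node testcontainers package; the GenericContainer calls below reflect its documented API, but treat the exact method names as an assumption for your version:

```ts
import { GenericContainer, StartedTestContainer } from "testcontainers";

// Start a throwaway Postgres container for the test run. The real
// dependency is used, so the tests stay decoupled from the implementation.
async function startDb(): Promise<StartedTestContainer> {
  return new GenericContainer("postgres:16")
    .withEnvironment({ POSTGRES_PASSWORD: "test", POSTGRES_DB: "app" })
    .withExposedPorts(5432)
    .start();
}

async function main(): Promise<void> {
  const db = await startDb();
  // Point the whole app at the container, then exercise it end to end.
  const url = `postgres://postgres:test@${db.getHost()}:${db.getMappedPort(5432)}/app`;
  console.log("run full-app tests against", url);
  await db.stop();
}

main();
```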
Recently joined a project as lead where the architect forces people to isolate every single class in tests. It's just hell, as tests constantly break and there are so many low-value tests. This is going to change whether the architect likes it or not.
You start to appreciate unit testing when your project grows to the point where you cannot hold all the dependent components in your mind.
skill issue
That's when I would write acceptance tests which test use cases for those components, which usually contain a bunch of files/classes. That's not what others usually call unit tests, but that's what I found to be perfect, because it doesn't discourage refactoring and actually checks whether the code fulfills its requirements.
@@MoonShadeStuff Well, yeah. They're not unit tests because they test more than a unit.
Acceptance tests are good though!
What does object interdependency have to do with unit testing? You do not test for the dependencies or the presence/absence of the relevant objects in memory at the relevant time.
TDD is just proper architectural design, which is impossible; hence TDD is impossible, improbable, infeasible, inappropriate, and complete bullshit. I've been coding since 1990, all the way from BASIC to C to Fortran to Python to C#, and all types of software, from games to fluid/heat exchange simulations to all types of fancy algorithm performance comparisons (between ant colony optimization, genetic algorithms, and many forms of heuristic algorithms), and I have never ever unit tested, and I'm so glad I didn't take that poison in.
Just code, do proper debugging, and make sure all variables going in are value-checked properly. It'll be fine.
@@windwalkerrangerdm why wouldn't you unit test what you can though? A function for example?
I think I finally figured out how unit tests work. So, tests actually test 2 things; they declare a public interface and they assert the outcome of using that interface. If you change the outcome OR the public interface then you change the tests FIRST. All other changes should not break the tests. At the start of the project you don't know what the public interface should be, so just pick something and expect that it will change. No way around that.
Absolutely right. Unit tests do automated regression testing of the public API of your code, asserting I/O combinations to provide an executable specification of the public API. When well named, the value of these tests is as follows:
1. Because they test only one thing, they are generally individually blindingly fast.
2. When named well, they are the equivalent of executable specifications of the API, so when something breaks you know what broke, and what it did wrong.
3. They are designed to black-box test a stable public API, even if you just started writing it. Anything that relies on private APIs is not a unit test.
4. They prove that you are actually writing code that can be tested, and when written before the code, they also prove that the test can fail.
5. They give you examples of code use for your documentation.
6. They tell you about changes that break the API before your users have to.
Points 4 and 6 are actually why people like TDD. Point 2 is why people working in large teams like lots of unit tests.
Everyone I have encountered who does not like tests thinks they are fragile, hard to maintain, and otherwise a pain; everyone who was willing to talk to me about why usually turned out to be writing hard-to-test code, with tests at too high a level, and often had code with one of many bad smells about it. Examples included constantly changing public APIs, overuse of global variables, brain functions, or non-deterministic code.
The main outputs of unit testing are code that you know is testable, tests that you know can fail, and the knowledge that your API is stable. As a side effect, it pushes you away from coding styles which make testing hard, and discourages constantly changing published public APIs. A good suite of unit tests will let you completely throw away the implementation of the API while letting your users continue to use it without problems. It will also tell you how much of the reimplemented code has been completed.
A small point about automated regression tests: like trunk-based development, they are a foundational technology for continuous integration, which in turn is foundational to continuous delivery and DevOps, so not writing regression tests fundamentally limits quality on big, fast-moving projects with lots of contributors.
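As a concrete illustration of tests as an executable specification of a public API, here is a minimal sketch (hypothetical slugify function, Node's built-in test runner assumed); each test pins an observable outcome, never an implementation detail:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical public API: the only thing the tests are allowed to touch.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // join word runs with single dashes
    .replace(/^-+|-+$/g, "");      // trim leading/trailing dashes
}

// Each test pins one observable outcome of the public API, so the body
// of slugify can be rewritten freely without breaking anything.
test("slugify lowercases and joins words with dashes", () => {
  assert.equal(slugify("Hello World"), "hello-world");
});

test("slugify strips leading and trailing punctuation", () => {
  assert.equal(slugify("  --Hello, World!--  "), "hello-world");
});
```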
How I do unit tests:
1. Write the function name
2. Input types
3. Think in ramda or lodash
4. Go crazy with everything you know, don't stop
5. Console.log the input and output to test
6. Refactor the function to be more aesthetically pleasing
Bang, you've done your job.
If you are a professional software engineer, you have to write code that is production-ready.
Production-ready code needs e2e and unit tests in a reasonable way, for stability and maintainability.
It doesn't really matter whether you like writing tests or not; just do it if you're a pro.
Recently learned about contract testing and I am really enjoying it.
It is related more to integration tests, where you are trying to replace external calls with some stubs. If done right, it is super powerful.
@@gbroton yea absolutely, it's just the new testing approach my team is adopting.
I contributed to an open source project last week, and my PR included a few tests as well. It felt weird to do such an obvious thing. But today they pushed a larger refactor, and I wasn't sure what I had to fix in my own application. So it ended up being somewhat useful to just run the tests in a few seconds to know all the functionality still worked.
I have recently written a full feature and it has two main parts. One is a client with a facade-like interface; it communicates with several different APIs. The other part is the business logic which uses this client. After I learned enough about the different APIs for the client and collected some outputs, I was able to write it using unit tests. Also, while I was working on the business logic part, I used a mock of the client in my unit tests.
Tests are unquestionably good, especially if you work in a team, and even more so if your company is hiring a lot. It is unreasonable to ask a new developer to be diligent and not break existing code when they don't even know that code/business rule existed.
It's also great for mapping out business logic, especially if the solution isn't exactly mapped out upfront.
For microservice architecture, what I've come to love is component tests (i.e. test a whole service using a mock server for any dependencies) + contract testing to make sure those mocks are valid. A decent amount of unit test coverage + this + a few smoke tests to make sure the deployment works with the env config, and you are done.
The no. 1 issue I've had with unit testing, and the reason I didn't get into it sooner, is all the _extremely_ poor explanations of its utility and where to actually use it in the real world. Even to this day, there's a staggering number of tutorials where they're demonstrating unit testing by writing a "Calculator" class with an "Add" method. And I'm like... dude... if you have to write a friggin unit test for the plus operator in [insert language here], then you should maybe look into applying some more Arctic Silver paste between your CPU and your cooler, cause there's no unit test in the world that's gonna help you.
I have the same observation; there are very few resources teaching about testing. The author of TDD released a course in which he teaches TDD, and guess what: for 2 hours he was writing tests for a fizzbuzz method. Classic.
@@gbroton Devs should just be forced to take the ISTQB.
There is a reason people use simple coding katas to demonstrate automated regression testing, TDD, and every other new method of working. Every new AI game technology starts with tic-tac-toe, moves on to checkers, and ends up at chess or Go. It does this because the problems start out simple and get progressively harder, so you don't need to understand a new complex problem as well as a new approach to solving it.
Also, the attempt to use large and complex problems as examples has been proven not to work, as you have so much attention going to the problem that you muddy attempts to understand the proposed solution.
Also, there is a problem within a lot of communities: they use a lot of terminology in specific ways that differ from general usage, and different communities use that terminology to mean different things. To explain new approaches you need to understand how both communities use the terms and address the differences, which a lot of people are really bad at.
There's also a tonne of terrible frameworks. Also, no one explains how you should structure your project to accommodate unit testing.
What you need is someone to develop their project from scratch without jumping around all over the place.
@@chudchadanstud Very true. For instance, most of Microsoft's official documentation on their various C# implementations (everything from websites to APIs to functions) does not accommodate unit testing out of the box.
Finally. Someone says what I've always thought. Round one is just to get a basic understanding of what I'm trying to do. Round 2 is the Schnitzel.
almost every time I write a test (unit, integration) I find a mistake I've made in the code. good testing = +confidence in code, +faster dev, -mistakes
Unit tests = faster development
3:40, that's what I'm always telling people who complain about unit tests taking too much time to write. Instead of doing lengthy manual user setups in your program, you can just run a set of data through your logic (the black box) with the press of a button. You can know right away if your code works. Unit testing can speed up development of certain features greatly this way. There is a complexity threshold for the bit of code you want to test for this benefit to become apparent though. I think I'll make a video on this topic.
What I don't understand about this approach is how seeing a failed test helps you decide how to fix it. If the thing you're testing is sufficiently black-boxed to make it hard to test manually, doesn't that make it hard to interpret automated tests (even if they are easy to write)? It sorta feels like the Remembrall from Harry Potter, where Neville goes "oh, I forgot something, but I can't remember what I forgot!"
There is an upfront cost to creating tests, but once the tests are created it definitely speeds up future development.
Generally, if your tests are taking a long time to write, then the code is hard to prove correct, which likely means the way it's made is bad.
This is assuming you think of all of your edge cases when writing your tests... and if you can think through edge cases in tests, you can think to handle them in your code. Tests don't just magically appear in your codebase to handle all of the complex edge cases; someone has to think of them and write them, and it's usually the same person as the one developing the code.
@mikehodges841 once you find an edge case, putting in a unit test makes sure that future changes don't re-introduce the same problem. This is great if the code is being worked on by a bunch of different people.
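One common shape for the "run a set of data through the black box" workflow described above is a table-driven test; a minimal sketch with a hypothetical parsePrice function and Node's built-in test runner:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test.
function parsePrice(input: string): number | null {
  const match = input.match(/^\$?(\d+)(\.\d{2})?$/);
  return match ? Number(match[1] + (match[2] ?? "")) : null;
}

// The "set of data through the black box": one table, one keypress.
// Each edge case found later gets appended here and pinned forever.
const cases: Array<[string, number | null]> = [
  ["$10", 10],
  ["10.50", 10.5],
  ["$0.99", 0.99],
  ["ten dollars", null],
  ["", null],
];

test("parsePrice handles the whole input table", () => {
  for (const [input, expected] of cases) {
    assert.equal(parsePrice(input), expected, `input: ${JSON.stringify(input)}`);
  }
});
```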
There are zero concrete, ubiquitous statements that can be made about testing. There are always caveats, and an articulate programmer can justify the state of testing in a code base. It may be warranted. It may not. It may speed up development; or not. It may be easy; or not. Testing private methods may be helpful. Whitebox testing only public interfaces may be best. Anybody that claims there is a one-size-fits-all approach is just an amateur that lacks experience and/or thinks too highly of themselves.
"There are zero concrete ubiquitous statements that can be made about testing" is a concrete ubiquitous statement about testing.
@@jcorey333 lol fair enough, but that was said of “statements about testing” not of testing itself. Err something.
"thinks too highly of themselves"
The primegean in a nutshell.
He makes very strong statements, for example about agile or TDD, without considering the many different environments (technical and business) we work in. Everyone has a different context, but he's adamant about making bold claims that make no sense in a lot of cases.
The point of unit testing is exactly to slow you down and make you think about what you are going to implement. If you need a function that returns something, you first need to think about why you need it that way. Unit tests force you to think and slow you down. It's not super complicated, and it's good practice!
After you've built your unit tests, they are gonna be a pain wherever your interface changes. Not everybody loves that kind of slowdown.
@@elzabethtatcher9570 coding is about thinking! And we are not machines; you are getting paid for that. I don't see the problem with slowing down and thinking…
Unfortunately all the unit tests I've written were of the "pour concrete over it, don't touch it, it works fine as it is" kind, for some complicated algorithms. That being said, I would like to write more. I just never really have the place to do so; the only thing I'd be testing is a quick for loop, or some library wrapper with a bit of extra setup. I seem to either write a 4-line helper function which is unnecessary to test, or some 200-line (total, split across multiple helpers etc., so it's still not that bad) behemoth I don't want to look at ever again. Why can't I write normal 30-40 line functions...
Damn, I thought I was weird for writing things at least once and throwing them away; it legit is OP. The second write is so fast! I call unit tests that I write for the purpose of debugging SANITY TESTS!
Unit tests are like construction lines in art. They can help you sketch out structure and shorten the feedback loop on your ideas.
However, there are plenty of artists who can go direct to a final product without the intermediate step of construction lines. Telling Kim Jung Gi he should be using construction lines is dumb but teaching drawing by starting with construction lines makes sense. They can also be useful as shared foundations like in animation where multiple people collaborate, in this way they act as a form of documentation.
Think this is the best example.
I like 100% coverage. Not because it necessarily proves my code correct, but because it proves future code didn't change anything unexpected. You make a change, tests break. You then decide whether the tests should have broken (and fix the tests) or whether the tests shouldn't have broken (and go back to your code).
Certainly I try to prove correctness as much as possible as well, but we have multiple layers of testing to do that - unit tests, integration tests, regression tests, acceptance tests, etc. At least two of which are written and executed by people who aren't me and don't have my biases. Unit tests are generally the only place that tests the _code_ though (as opposed to testing the functionality), and I like to make use of that fact as much as I can.
Depending on the test framework and language, it can be impossible to achieve 100% test coverage. That is due to defensive code checking for things that are actually impossible because they are intercepted beforehand by validation.
Some of those defensive things ought to receive their own individual unit tests; in other cases it is just busywork. E.g. in Java, you will ALWAYS do a null check before accessing an object. Sadly, Java lacks a ?= or ?. operator for null checks.
@@dominikvonlavante6113 > it is impossible to achieve 100% test coverage
If you design your code with testing in mind, you should be able to get 100% on everything other than the entry point. I'm not saying there isn't a cost to doing that (dependency injection is usually harder to trace manually than just creating objects on the fly, for example). But it's doable unless the language is explicitly trying to prevent it for some reason.
> in Java, you will ALWAYS do a null check before accessing an object
Will you? Certainly you _should,_ but that's not the same as saying you _will._ Those "busywork" tests are exactly the kind of thing I like to see in the "just here for coverage" tests. It's way too easy to have a private method that doesn't do a null check because "I know how it's called" (and IntelliJ will even complain if you include a "useless" null check, which I wholly disagree with, but it's the default); then 3 years later somebody else decides they also need your method's functionality, and suddenly your "I know how it's called" protection is bunk.
As noted before, I view these kinds of tests not as guaranteeing anything about my code _today,_ but as providing a warning about problems that may crop up tomorrow. _Especially_ the "dumb" problems that we often don't think about, because we've internalized "you should always do a null check" so much that it's become "you will always do a null check" and we overlook the fact that people aren't perfect.
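A tiny sketch of that kind of "just here for coverage" guard test (hypothetical names, Node's built-in test runner assumed):

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical helper: "I know how it's called" today, but the guard
// still gets its own test so tomorrow's unknown caller is covered too.
function formatUserName(user: { name: string } | null): string {
  if (user === null) {
    throw new TypeError("formatUserName: user must not be null");
  }
  return user.name.trim();
}

// The "busywork" coverage test: it guarantees little about today's code,
// but it pins the defensive behaviour for whoever calls this in 3 years.
test("formatUserName rejects null explicitly", () => {
  assert.throws(() => formatUserName(null), TypeError);
});
```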
Even if you don’t write the unit tests, writing your code with unit tests in mind helps keep your code more manageable cause you don’t introduce all these hidden dependencies that would be impossible to unit test against.
I also think adding a specific unit test to help increase your ability to iterate on a solution to a hard problem is the sweet spot for unit testing. Unit testing somewhat trivial problems can also be worthwhile if it's something with _a lot_ of dependencies, or on some really critical/hot paths where breakages would cause you a disproportionate amount of pain.
Definitely don’t think it’s worth adding a lot of tests on something that is very likely to get changed, as making changes in those unstable areas just becomes a chore of cleaning up no longer relevant unit tests that probably shouldn’t have been written in the first place.
So they definitely provide some value, but it’s like that old saying … something like “even too much of a good thing can be bad” 😂
A big part of what you are explaining is "when" to unit test. My criterion for that is: when it pays back. Every kind of testing is an investment; you should only do it for returns. Returns can be: fewer prod problems, quicker development, better maintainability, ... It is never a goal in and of itself. Tests incur a cost. Manual testing needs to be redone all the time: it does not scale well. Automated tests may scale better, but they also incur a cost to write and maintain. So strike a balance that makes sense for what you are building. The tests should make sense, i.e. add more value to the project than they cost in time to build and maintain, or they rationally would have to be qualified as a distraction.
Design your interfaces well and code the implementation side-effect-free, and the core logic becomes a "black box" unit test. Skip testing at the edges (presentation layer, repository layer, any other places with side effects); you don't need to write a unit test to check whether a SQL query returns correct results or a generated JSON/HTML is as expected - e2e tests are better suited for that.
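A minimal sketch of that shape, with hypothetical names: the pure pricing rule in the middle is black-box testable, while I/O stays at the edges:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Pure core: no I/O, no clock, no globals - trivially black-box testable.
function applyDiscount(totalCents: number, loyaltyYears: number): number {
  const rate = Math.min(loyaltyYears * 0.01, 0.15); // 1% per year, capped at 15%
  return Math.round(totalCents * (1 - rate));
}

// The edge stays out of the unit tests: load order, apply discount, save.
// async function checkout(orderId: string) { /* SQL + HTTP live here */ }

test("discount grows with loyalty and caps at 15%", () => {
  assert.equal(applyDiscount(10_000, 0), 10_000);
  assert.equal(applyDiscount(10_000, 5), 9_500);
  assert.equal(applyDiscount(10_000, 40), 8_500); // capped
});
```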
8:47 is the biggest nugget of wisdom!
I ONLY do black-box testing. I constantly refactor code; unit tests would fail, black-box integration tests won't.
I like unit tests, if you are able to actually automate them.
I once worked in a company where we did embedded development.
The tests were all manual.
But not because we were lazy - because we had no clue how to do it.
The machines themselves don't have a network connection.
And we failed multiple times to create simulations of these machines which actually behave EXACTLY like the real ones (we need the EXACT behaviour for the tests to actually be useful, because of the tolerances we have); there were multiple attempts to do that, which cost multiple tens of millions each.
So, instead we have a test protocol which we go through before release (which is once a year) and takes a few weeks of the entire department doing just that.
The book "A practical approach to large-scale agile development" describes exactly how and why hp introduced systematic testing for their entire laser jet printer line, including their prior problems and how it got rid of them, including the testing of the embedded hardware involved.
Well worth a read.
Test coverage is useless when you go for a defined percentage of coverage, but it's really useful for finding what parts of your code are not tested at all.
"oh this entire piece of this function is red, which means I haven't made a single test case that covers this usecase. I should probably write a test for that"
While I haven't been disciplined enough to apply unit testing in my personal projects, I have on occasion been required to include tests with professional work. For me, the reluctance to do it in the first place is a combination of two things: lacking the discipline to put forth the extra effort to write the extra automated confirmation of correctness after I've considered a feature "final" (whether it ultimately turns out to be final or not), and choosing reliable test frameworks/harnesses for whatever language(s) I happen to be working with for a given project (mostly a laziness/discipline problem rather than a knowledge problem). Ironically, for literal decades I've been otherwise content with manually writing text to a console during debugging stages to figure out what's wrong if things go wrong. Maybe there's also the feeling of giving up control of the process to the machine, even though one of my jobs as a programmer is to figure out ways to automate processes.
Unit testing itself is not a problem. If all you have is black-box tests without knowledge of the internal workings of the code, everything is great. It starts to fall apart when you add mocking. Then the testing code has to have knowledge of the implementation. Now, when someone refactors the code without changing the behaviour of the function, it is likely that the tests fail, since the mocking is no longer correct. I unit test only if I can do so without mocking. If I would have to start mocking, I create an integration test instead.
I tend to agree, except for smaller things where you only need to mock one or two things; there I'm fine with unit tests. I do prefer integration tests though, because:
1. services are naturally dependent on other services, and mocking can often hide bugs that come from the way two services interact
2. changing a service somewhere forces you to think more carefully about how the entire system is affected, rather than about how to reimplement your change in the mocks (provided that a test fails as a result of your change)
3. integration tests can be done in a very modular, "unit-test" type of way if you structure your services correctly, without having to mock anything. If you make each service a separate module with explicit dependencies on other modules (like an API module with a dependency on a database module, for example), then you can start up the module you're testing (along with its dependencies), configure it however you like, and then proceed with integration tests. If you were doing unit tests instead, you would have to mock out the dependent services.
But if you have to do mocking because you cannot do unit testing otherwise, this is itself a sign that you are writing hard-to-test code, and is usually a sign that the idea of writing tests was not acted upon until after the code was written. If you write the tests first, you don't write the code in this way, and you tend to produce code which is better.
But of course you have to break your bad coding habits first, often before you see the design advantages of doing test first design.
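One shape this advice can take, sketched with hypothetical names: the code under test depends on a small interface, and the test supplies an in-memory fake and asserts observable state rather than call sequences:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical port: the code under test only knows this small interface.
interface UserRepo {
  save(name: string): void;
  findAll(): string[];
}

// Production implements UserRepo over a database; tests use this fake.
class InMemoryUserRepo implements UserRepo {
  private users: string[] = [];
  save(name: string): void { this.users.push(name); }
  findAll(): string[] { return [...this.users]; }
}

function registerUser(repo: UserRepo, name: string): void {
  const trimmed = name.trim();
  if (trimmed === "") throw new Error("name required");
  repo.save(trimmed);
}

// Asserts observable state through the same interface production uses,
// not which methods were called in which order.
test("registered users are visible afterwards", () => {
  const repo = new InMemoryUserRepo();
  registerUser(repo, "  ada ");
  assert.deepEqual(repo.findAll(), ["ada"]);
});
```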
Primeagen "you're a UI dev and never built anything hard"...
Also Primeagen: completely avoids writing UI/frontends at all costs.
TDD makes sense if you know everything beforehand. Like v2 refactoring v1 code: you can easily do TDD, but that means you have done the work to storyboard, discuss further, and write out a plan. Also, unit tests make great sense to me until you have to redo them because of a change, or if you mock waaayyy too much stuff.
That "write things twice" is something that is built into the process of TDD. Only now, when you rewrite it the second time, you already have the tests in place to make sure you get the intended outcomes.
On the unit tests - if you refactor and your tests change at all, you have done things wrong. If you think a unit test is something coupled to the implementation, you don't understand what a unit test is, and you should put more effort into understanding your craft.
If you write tests coupled to your implementation to help you develop a new piece of fine-grained functionality, throw the tests away afterwards, because they are only a maintenance burden.
Finding ways to test UI is always cumbersome, change my mind.
Unit testing is really good when you have a lot of changes, or when you need to make a big refactor. What I can say is that most projects I have worked on don't have even 30% test coverage, and the applications work well! But I'm not against tests: do tests, do a lot of types of tests, and after that you'll have a solid opinion about them.
Here you said how I look at it too: I don't care for test coverage, don't care to "catch regressions" - those are bullshit marketing points. What I care about (and the only reason I unit test) is exactly to have a faster feedback loop - testing a totally incomplete algorithm is very possible with unit testing. Let's say I have a data structure that can grow (and not as simply as a vector): I can literally just unit test that part alone without trying to build the whole complex thing.
Unit tests are fun and easy. Sometimes it can be a little boring but I like the repetition of it tbh
The only valid reason to change a unit test is if the unit itself changed.
If there is a new requirement for the unit, then you should write a new test for it.
If no other requirements were affected, why would you need to change the existing tests? If you need to rewrite existing tests, then you are most likely testing implementation details.
A unit test should focus on the "what", not on the "how". And that's why TDD is an excellent approach.
When you do TDD, you:
1) write a test
2) see it fail
3) write the minimum amount of code to make it pass
4) refactor
If you write the tests after, you are biased towards the implementation you wrote. So it is quite easy to write tests that know too much (about implementation). And this is the main reason you have to change the tests once you make changes to the code itself.
If you are refactoring and your tests start failing, it just signals that your tests focus on the "how" and not on the "what".
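A minimal sketch of one pass through that loop, with a hypothetical cartTotal unit and Node's built-in test runner assumed as the harness:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Steps 1-2: written first, and failing (red) until cartTotal exists
// and behaves. The tests only name the "what": inputs and outputs.
test("an empty cart totals zero", () => {
  assert.equal(cartTotal([]), 0);
});

test("line prices are summed", () => {
  assert.equal(cartTotal([{ priceCents: 250 }, { priceCents: 100 }]), 350);
});

// Step 3: the minimum code to go green.
function cartTotal(items: { priceCents: number }[]): number {
  return items.reduce((sum, item) => sum + item.priceCents, 0);
}

// Step 4: refactor freely - nothing above knows the "how", so it keeps passing.
```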
Property based testing is difficult, but when you learn it, you can get so much out of it.
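For a flavour of it, a minimal sketch assuming the fast-check npm package (its arbitrary/property API as documented); the sort wrapper under test is hypothetical:

```ts
import fc from "fast-check"; // assumed dependency: the fast-check npm package

// Hypothetical unit: a sort wrapper whose contract we can state as laws.
const sortAsc = (xs: number[]): number[] => [...xs].sort((a, b) => a - b);

// Property: for ANY integer array, the output is ordered and keeps the
// same length and members. fast-check generates many random cases and
// shrinks any failure down to a minimal counterexample.
fc.assert(
  fc.property(fc.array(fc.integer()), (xs) => {
    const sorted = sortAsc(xs);
    for (let i = 1; i < sorted.length; i++) {
      if (sorted[i - 1] > sorted[i]) return false; // must be ordered
    }
    return sorted.length === xs.length && xs.every((x) => sorted.includes(x));
  })
);
```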
If you work on a complicated project with many developers at a high velocity unit testing is worth its weight in gold. But in most projects, well crafted reliable E2E tests are the must have.
For me unit testing is as much about communicating the invariants of a module or function to another developer or future me than it is about preventing regressions.
Unit tests should be in 2 phases: at the beginning to understand some basic scenarios, and at the end when everything is finalized, for production
In my crdts package, I have an e2e test for each CRDT that applies a bunch of updates asynchronously and then plays them in random orders on fresh instances to ensure the final state is always the same. Blackbox fuzzing for correctness is probably the most valuable type of test.
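A toy version of that test shape, with a hypothetical grow-only counter standing in for a real CRDT (the actual crdts package referenced above will differ):

```ts
// Toy grow-only counter: replicaId -> highest count seen from that replica.
type GCounter = Map<string, number>;

function applyUpdate(state: GCounter, replica: string, n: number): void {
  state.set(replica, Math.max(state.get(replica) ?? 0, n));
}

function value(state: GCounter): number {
  return [...state.values()].reduce((a, b) => a + b, 0);
}

// Generate updates, then replay them in random orders on fresh instances;
// a correct CRDT must converge to the same value regardless of order.
const updates: Array<[string, number]> = [];
for (const replica of ["a", "b", "c"]) {
  for (let n = 1; n <= 5; n++) updates.push([replica, n]);
}

const results = new Set<number>();
for (let run = 0; run < 100; run++) {
  const shuffled = [...updates].sort(() => Math.random() - 0.5); // crude shuffle
  const fresh: GCounter = new Map();
  for (const [replica, n] of shuffled) applyUpdate(fresh, replica, n);
  results.add(value(fresh));
}
console.assert(results.size === 1, "CRDT diverged across orderings");
```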
Here is a quick take: tests should be allowed to access private methods, because sometimes the smallest unit you really care about is part of a complex algorithm that is abstracted away because that simplifies the problem, but you still want to ensure the values passed in are processed as expected. This problem can be solved by not making it private, but then some teams don't want public methods that aren't used in production code. Before anyone offers additional advice, here is the other limitation imposed by the team: functions are invoked by system events, and those should be the only public things, if you don't use helper functions or utility classes. These limitations, when combined, lead to very bad results, as you most likely will need to pull additional data while your code is running. This is where not being able to access private methods becomes a problem. Also, this whole "private" concept is fake in all languages where reflection grants access, but that too is discouraged.
The bottom line is: imposing rules that result in tests full of complex mocks, which need to be updated every time the object they are faking changes, adds more complexity with poorer results. This is the opposite of what tests should be, because when tests are hard to maintain they end up being poorly written.
Perhaps a different level of urgency for tests that depend on implementation details? But then they just become ignorable. I dunno.
The problem with calling the public / private split fake is that it ignores what it is actually for. When publishing code, the public stuff goes into your header files, and the promise is that this stuff won't break without a good reason, and not very often. While you can access the private code using reflection, it is not even guaranteed to work across individual patches, let alone be consistent across major releases.
Your private code is then tested by calling your public APIs, and if it is not called, then either you don't have enough tests or it is dead code you can delete; the solution is to either add tests or delete code.
@@grokitall Rust allows testing private methods, because there are times when you would need to write many mocks just to test the basic functionality that the private method provides. If all you need is to indicate that a method isn't stable and carries no guarantee that it won't change, naming conventions work fine; Python does that, and it is heavily used.
What can I make other than CRUD apps?
Sometimes I just spin up a test with a breakpoint and see what it is doing without even writing assertion (it is way faster than editing main or executing another file with some print(...)). And eventually if I am happy with the interface and output I write a proper test (or more).
Bad unit tests come when you have a load of mutable state in classes with complex interactions that are uncontrollable and instead of going back and rebuilding, you throw 100s of unit tests at it.
If your code is well designed then unit testing is quick, easy and makes you faster. If it’s not then unit testing is a distraction from the real problems.
To be efficient and not have tests completely consume me (because of deadlines, and because things get changed very quickly by stakeholders), I test if I am working with black boxes, or if I don't trust the framework that handles some of the logic. Most of the time I test the function millions of times while developing anyway. The unit test's value comes from testing code that will change during development without me personally changing it (package upgrade, dependency update, etc.).
Very very very very very very very very beneficial.
I don't see a use for unit tests for most functions - rarely are functions complex; only when they are might I do one.
My functions are so small that I don't need dedicated tests, because I still need to validate the output, and I do it once; if your functions are small and easy, they don't change.
And unit tests on embedded are completely useless. How will I report a test status back in a truthful way when the act of testing changes the real-life outcome in timing-critical systems (sorta like observing a quantum particle)?
What I'm interested in is seeing that cascading interrupts give the required results. Aka E2E/system tests are what is valuable, and when they succeed I have by definition tested my smallest units to work correctly too.
In the mission-critical medical, ball-bearing, and energy software I worked on, unit tests were never deemed proof and, as far as I recall, were very rarely used or run. Only E2E tests and system tests were grounds for certification, and they were run frequently.
And I've seen so many IT projects that do unit testing where the tests succeed and the products still fail miserably after every bloody release, because there's no regression testing at the E2E and system level. So hence those two have value; the rest do not.
System testing is the most complex; we even had to create specialized hardware to simulate a sensor breaking - pulling a line high and low, spewing out random data, or even syntactically correct data with wrong values - proving that alerting works, that the quorum sensor algorithm detects the failure, ignores that data, and appoints another as "master". Etc. etc. etc.
Those are the hard and complex things.
I've taken to writing vertical-slice tests using an in-memory DB at work. Is it great? Uuuh. Well, it's effective, but the tests individually run slower due to spinning up the whole application, though if you're running all tests, the start-up time is trivial compared to the total run time. The big upside, however, is that we get to do black-box testing of both new and existing code by setting up the data itself, instead of mocking out a million implementation details. Especially great for those parts of the app that might get the legal department sicced on us if we get it wrong.
Theo’s response about unit tests not being useful for refactors was ice cold
When doing game development, unit tests and automated testing in general are pretty much the greatest and most useful things for making your game good... I don't understand how you can make any game worth the energy it takes to change the pixels without them.
TDD is really about working incrementally, more than about which comes first. Write a small amount of code and a few test cases at a time.
I work on a highly interactive dashboard in React, and I struggle with how and where to test anything in that code. Are most UIs just not tested, normally? We mostly rely on integration tests for API stability, but it's pretty touch and go.
If you properly isolate the key interaction triggers you can test for them, but integration tests are pretty much the way to go tbh.
quick example:
If you have a huge table that displays different columns based on user role, then you test the permission-to-row builder function to make sure that critical bit of the UI never serves weird/unwanted data, and you do a full render and test against a snapshot to make sure everything renders and nothing visually breaks without you knowing.
The rest of the functionality is pretty much e2e tests of user stories, or you will spend an absurd amount of time refactoring UI code into units.
@@Fernando-ry5qt and that is pretty much the boat we settled into. Most of us are backend people turned full stack for this project. It just feels alien to ride without a bunch of tests in the code.
I write unit tests for React code, but yes, it can be tricky to decide which components are worth unit testing and which are just implementation details. I think there are two key cases: generic components that are re-used all over the codebase, where unit tests are good documentation for other developers, and components that form a key part of a user story, e.g. when a button is pressed the right API mock should be called. I don't write unit tests for every custom hook, every re-styled HTML tag, etc., as that would result in too many tests that don't add much value. I think there are various benefits to unit testing frontend code - from encouraging modular design to checking accessibility compliance - but I agree that it's not as cut-and-dried as black-box, algorithm-heavy code.
@@rlamacraft I used to test common primitives too, but now we just rely on Storybook for it haha.
@@Fernando-ry5qt Storybook is definitely something I've considered looking into, primarily because it's very frustrating writing jest-based unit tests without being able to see the UI! However, as I understand it, Storybook really shines as a communication tool between developers, designers, and other stakeholders, so I'm sort of blocked from using it due to a lack of interest from all those people where I work. I have only heard good things though.
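For reference, the "button press calls the right handler" case mentioned above might look like this minimal sketch, assuming React Testing Library and Vitest; the component and handler names are hypothetical:

```tsx
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
import { test, expect, vi } from "vitest";

// Hypothetical component: the button wiring is the user story under test.
function SaveButton({ onSave }: { onSave: (draftId: string) => void }) {
  return <button onClick={() => onSave("draft-1")}>Save</button>;
}

test("pressing Save calls the API handler with the draft id", () => {
  const onSave = vi.fn(); // stands in for the real API call
  render(<SaveButton onSave={onSave} />);
  fireEvent.click(screen.getByRole("button", { name: "Save" }));
  expect(onSave).toHaveBeenCalledWith("draft-1");
});
```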
Edit: IMHO TDD doesn't mean unit tests for every function. I prefer a few integration and E2E tests that get me 60% of the way.
I yearn for unlimited time to build something, understand it and then toss it in the trash and build v2.0 right away with reasonable tests. Your approach with harpoon1/2 sounds wonderful, but in the trenches I have been asked to build Thing A on Monday, then Thing B on Wednesday and Thing C on Friday afternoon while working with a bunch of other headless chickens who became headless thanks to poor project planning. TDD saves my ass in this case. I can be satisfied with gradually adding tests to services that are already in production when I go to fix issues that have cropped up.
You can use unit tests as your build and throw away stage. This is how we explore the problem domain with TDD.
The biggest issue with unit testing is not understanding what a unit is. What I mean is, people tend to call tests "unit tests" when they have nothing to do with a unit. As a consequence, their "unit test" uses tons of mocks to mock everything out. There are other types of tests which should be covering those areas.
This is to do with the terminology mismatch between different developer communities. As testing became better understood, it defined the term "unit test" to refer to a simple test which follows one path through the code and returns one deterministic answer. In the meantime, and especially in object-oriented communities, "unit tests" came to be used to refer to the entire test suite used to test an entire class or module. Not the same thing at all; add to that testing enthusiasts not defining their terms, and you end up with people outside the testing community understandably not believing that unit testing isn't fragile, because they are used to their test suites breaking, especially when they write tests after the fact and have to use lots of fragile tricks like mocks to get around the fact that their code was not written with testing in mind.
Unit testing isn't bullshit, but it's overhyped for any use case. Unit testing is a tool that's helpful in certain situations. "Every hammer is a nail" or whatever that saying is.
People just hate "unit tests" because they were forced to cover code with "unit tests" which was in fact not suitable for unit testing at all. The unit is key here.
Unit tests only catch the bugs you know to test for. Once you fix the code the tests just sit there forever slowing down the build.
That's true. It's very hard to get TDD right. It's impossible to know ahead of time how certain things are going to work.
TDD only works when you have complete and concrete spec of what you're implementing. Which is pretty much never the case.
TDD just asks you to know the interface of your unit and the next test
@@ivanjermakov my client is changing the requirements like they are underwear.
@@ameer6168 The only way to deal with such a client is to do continuous delivery based on incremental development, where you get sign-off on acceptance tests with each new test cycle, combined with penalty fees for rework that needs to be done for excessive changes to already approved acceptance tests.
This is what acceptance tests are designed for, and like with any customer, you have to make a choice about the value of working for them. If your rework fee is high enough, then either you will stop them from constantly changing the requirements, make enough extra cash to be worth keeping them as a client, or you will know they are not worth the cost of keeping.
I do TAD for units and a little bit of integration, then bug-driven (sometimes end-to-end) testing from there on out. When you find a bug, write a test first to make sure that bug doesn't happen again in the future.
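A sketch of that bug-driven pattern with hypothetical names: reproduce the reported bug in a failing test first, then fix it and keep the test as a regression guard:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical bug report: "totals are wrong when quantity is 0".
// The fix lives in the code; the test reproduces the report and
// guards against the bug ever being reintroduced.
function lineTotal(priceCents: number, quantity: number): number {
  if (!Number.isInteger(quantity) || quantity < 0) {
    throw new RangeError("quantity must be a non-negative integer");
  }
  return priceCents * quantity;
}

test("regression: zero quantity yields a zero total", () => {
  assert.equal(lineTotal(999, 0), 0);
});
```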
All arguments boil down to: use whatever suits the requirement. That's it.
TDD is great when you have a well defined existing spec. It's terrible for creative exploration because it will lock you into a design too early, this is where throw away code is better to draft. One of the best uses for tests I've seen is catching regressions, and expressing intended business logic for future maintainers. I've also seen some absolutely terrible and pointless tests. It's a mixed bag, they're just tools, blame the authors.
TDD came about as a consequence of doing deliberate test-first automated regression testing, and adding refactoring to the cycle.
Comprehensive automated regression testing then forms part of the foundations of continuous integration.
It works very well for exploratory design when you understand what you need the API to do but are less sure about how the internals need to work, and it does encourage early stabilisation of the public API of the code. If something is not public, it will either be exercised by your public API tests or be dead code to be deleted. In either case the implementation is not constrained by the API.
Of course you are still free to break your public API right up to the point when you publish it for other code to use, at which point you are explicitly saying that the API documented in your header files won't break, supported by tests to make sure. If you break your API after this point, your users will rightly shout at you for it. Note: you do not have to add all of your functions and variables to your header files, making them public.
People who hate unit tests have never worked on and maintained years-old, complex systems. A good test suite gives you the freedom to do major refactoring with confidence that you haven't broken anything.
I have a hard-line hate for unit tests because I'm so damn lazy; then I have regrets the instant I have to debug something that blew up in prod and I don't have a test to fall back on to ensure I'm not mistakenly changing behavior while bug fixing, and then I suck it up and write tests, this time... Then the cycle repeats, because I am not smart.
The TL;DR: unit testing is not the same as test-driven development. Unit test the stuff that makes sense to test, not everything for everything's sake.
Good TDD should have separate developers write the unit tests and the implementation.
No, by definition you are wrong. In TDD the developer writes a failing test, writes just enough code to pass the test, and then uses the combination of the passing test and refactoring to incrementally decrease the amount of technical debt.
This is then fed into your continuous integration system, which proves that the code does what the developer expected.
What you are talking about are acceptance tests which plug into your continuous delivery system to prove that what your developer expected was what the customer needed, which can either be done by the developer, or by the system designer. Of course one of the advantages of tdd is that it does incremental development of the system giving you something to present to the customer regularly which enables exploratory testing of the developing system to identify such mismatches against working code.
I work on a giant-ass project. The only unit tests I have test some complex recursion and conversion libraries. I am now starting to implement integration tests for my own sanity, and to just stop breaking prod. The mental load is big when a lot of your services are cloud-based. Yeah, your integration tests will need lambdas, a SQL database, a Redis database, S3 access. All of these permissions need to be set in your runners, which means proxying... Once a single test passes it will be great, but the setup is literal hell.
Magento is a great example of how not to do testing.
Tests that don't make sense, others that simply assert invalid output, and plenty that intentionally avoid specific inputs, because otherwise their tests would fail.
This is what you get when you demand that every merge needs to have tests associated with it.
If you write code that is testable, the unit tests aren't so bad. It's an art.
Unit tests lets coworkers know if they change something that breaks your code.
The problem is everyone learns how to test, goes way overboard with it, and ends up hating themselves, the coworker that made them test 100%, and unit tests altogether, because they feel like their code was dipped in cement.
Oh my god at the beginning. "Whenever I hear unit testing is hard I hear that you are a CRUD developer". That is EXACTLY the problem. Dealing with those data objects in unit tests DOES make them a shitty waste of time. That's one reason you don't do that shit in the first place.
The tests are mostly as boilerplate as the code
What do you think about mutation testing?
By definition refactoring should never break an existing test. If you are making deeper changes, that's cool but it isn't refactoring.
The tests need to actually be good and helpful and not a maintenance burden because they break during refactoring. That's hard, and sometimes trying to force tests does more harm than good and should be skipped. But almost all projects I've worked on in industry have been criminally under-tested, and coverage gates have only made the problem worse.
My hate for unit testing (and a lot of testing in general) comes from the fact that 90% of the tests that get written are ruling out program behaviours that could be excluded by a strong type system (think Rust). I've seen too many unit tests just checking that some returned JS object's property was not undefined.
3:48 to 3:52 that sounds like TDD... like... when you use the test to drive the feature... test driven des....
If your unit tests break on every change, they're probably not unit tests but [accidentally] integration tests
Unit testing is fine until you have to test a React component with some logic and hooks and whatnot, which forces you to mock said hooks and their responses. Which is still fine compared to UI integration tests for larger applications.
The trouble with integration tests is that it's hard to make them 100% deterministic, and if you have a large project with 100 integration tests that each pass 95% of the time, the suite flakes 99.4% of the time (0.95^100 ≈ 0.6% chance of an all-green run). It's so annoying!!! Teams should be intentional about creating very few of these, or hold them to an extremely high standard of reliability (three nines per test will still flake about 10% of the time across 100 tests).
I'm actually doing UI development with 85%+ coverage
How do you write your unit tests, Prime?
Unit tests are great! I love them. But with experience I learned that they are most effective when written to test business logic and difficult algorithms/validations. I don't need to run the whole end-to-end system to test a simple change; my unit tests can do that for me and cover all bases. It's wonderful. They stop the code from breaking with change. However, unit tests for the sake of coverage are meaningless and a waste of time.
I wonder if the team working on Accounts or Payments for Netflix would agree that CRUD apps don't need unit tests. A lot of CRUD-based systems are more complex than niche dev tools 🤷🏾♀️
“Thor just broke 50k”, meanwhile Thor at 900k already in just a month 😂
In my experience having a mandatory requirement for code coverage often leads to people writing really shitty unit tests just so they can get X percent of coverage.
The correct response to that is not only to have a ratchet on your code coverage, but to make the test author responsible for fixing every fragile test they committed before they can work on new code, thereby pushing the pain of their crappy tests back onto them. If it still doesn't work, add them to the list of people who have to fix the broken tests which no longer have a maintainer.
Eventually they will figure out how bad crappy tests are, and stop writing as many.
Also crappy tests are usually a product of crappy code, so their code should gradually improve as well.
When you go to refactor, you may also have to throw away tests, and you need to be okay with throwing away tests. Because if you're refactoring, it means that something in the design is better understood now than it was before, and what you're testing may not be exactly the same; otherwise, yeah, you'll have tests that fail.
Hard NO! Tests are documentation of the expected behaviour of your code. If you need to change tests when doing refactoring, you are testing the wrong stuff. You are then testing implementation details, which are irrelevant. You test for the value that your software is supposed to deliver.
1:09 There can be some really complicated state management logic in FE. So I do not agree that being an FE is an excuse for having this mindset. To me, unit testing ensures your logic is correct, your flow is not too complicated, and you know where your state is being managed like where your data is being prefetched. Manual testing covers the visual checks and user interactions that may break due to the environment or unexpected ways of using your app. But unit testing ensures the logic under the hood is solid so that you know your code makes sense and it will be easier to build upon in the future.
Of course, the real value of lots of public API unit tests is that it pushes a lot of code to be outside of the UI, and thus testable. The rest is just a thin skin wrapped around the rest of your program, which is then much easier to test and to change.
The value of unit tests is proportional to the cognitive load of the thing that you're considering testing.
If it takes no effort, isn't breaking, and doesn't really keep you up at night? Probably not worth testing.
On the other hand, if it's breaking, overwhelming to just think about, or stressing you TF out? Should probably test it.
Take this one: if you don't have unit tests, you will wake up the next day knowing they need you to fix some bug.
That's more free work.
FALSE. Unit Tests don't test anything that the developer didn't already think of. Tests just automate it.