🎓 FREE TDD TUTORIAL: Explore what it takes to get started, and how to learn these skills, with a hands-on demonstration of TDD. Study alongside me FOR FREE HERE ➡ courses.cd.training/courses/tdd-tutorial
I don't know the complete context, so I'll add a comment based on my experience. People often confuse TDD with unit testing, and unit testing with class/method testing. When they do, they end up with fragile tests that need to be rewritten as soon as anything changes. In my experience this is why people see TDD as something that slows them down.
Exactly. And I think that's because (Java) IDEs are method-centric for tests - they generate a test class for each class and a test method for each method. And many TDD tutorials show a super simple application where the method is basically the whole application. Based on my experience, TDD can only work in a broader area if we focus it on modules/interfaces/services and use it to create "stable" regression tests, which allow us to refactor code (implementation details) without constantly breaking dozens of unit tests - which defeats a major purpose of tests anyway. And I assume that's what he describes around 15:00. Most unit tests are borderline pointless and just make your code hard to change, which discourages people from refactoring bad code. I argue for the test diamond - use (TDD) unit tests only for complex logic (again at interface level, not every internal method), test the rest automated and integrated (contract tests, acceptance tests etc.) during continuous delivery, and do some testing after delivery (load tests, resilience testing, health checks, if necessary explorative and manual testing etc.). But focus on the middle part (module interfaces and APIs). This could be done with TDD, but usually it's not.
@@michaelcirikovic33 I don't blame IDEs for this; the real problem is tools like coverage checkers that force you to reach (or at least aim for) 100% code coverage just to spit out a metric and allow your work to go down the pipeline. The idea behind these tools is wrong: they don't incentivize people to write meaningful tests, only to reach 100% of lines covered, by whatever means.
@@chpsilva I agree that these metrics are often "management solutions" to technical problems, and way too often they incentivize the worst behaviour instead of the desired one. But good regression (and contract) tests at module and API level can be coverage-checked too, and they usually achieve a very high coverage rate - unless your test set is bad, but then it's worth looking into why a large part of your application code is not exercised by your acceptance/contract tests. Missing tests? Unused code? If your tests are good and stable and actually regression-test the behaviour of your system, you will automatically have great code coverage, and coverage analysis becomes a great tool to further improve your tests and/or code. I did a project review at a customer which required 80% coverage, and that's what they got in all applications - bug-infested pieces of code with great coverage based on useless unit tests with dozens of mocks, which broke whenever you changed anything in the system. A lot of "the code I wrote is the code I wrote" tests.
@@michaelcirikovic33 Spot on. You need good separation of concerns through composition; then TDD thrives when you're testing the context of those concerns properly (a highlight of course being module interfaces and APIs). It can help encourage better code that's also likely more refactorable from the start.
@@chpsilva I agree. To my mind, the problem with coverage as a concept is that it is deficient. Interactions between parts written by different people at different times are where the problems usually lie IMHO, and coverage isn't much use in tracking those down.
If people put as much effort into practicing TDD as they do into coming up with elaborate excuses about why it doesn't work for them, maybe there'd be less lousy code in the world.
Let them create lousy code that we get paid to demystify. I do that right now and, to be honest, it's fun. Like watching a Columbo episode or something (what is this guy trying to tell me, and why?).
I've got a friend who comes up with so many excuses not to use version control. I've told him time and time again that with all the effort he's put into excuses, he'd have learnt git by now.
The only way to have full confidence that you haven't broken anything is to test (almost) everything. I've been on teams where there was a "no unit test" culture and it sucks. Everyone is scared of breaking production.
Wrong. Only the more skilled team members are worried about breaking production. The less skilled ones just write their changes and push the commit. *evil grin* (Oh, and then the skilled ones spend 90% of their work time fixing production, so in the end they don't deliver their assigned tasks and get roasted... don't ask me how I know this pattern.)
I have recently become a firm believer in TDD in my work environment. As a developer, it forces me to have a good understanding of the requirements and to plan ahead what the logic ought to be, which is especially crucial in complex services. TDD has minimised the carelessness in my code, in terms of the data models being sent and also the logic itself. It also helps to decouple your code to make it testable, and when a section of code is decoupled, it's easily scalable as well. Yes, it takes quite a while to set up the testing environment and it's time-consuming to write test cases (I still hate it, but it's necessary), but at least when an error/bug pops up on your end, you have a clearer picture of which part of the code it occurs in. It actually saves so much maintenance time in the long run.
TDD is something I liked really early after getting introduced to it. I was convinced after the first try. I have foresight of what I want to implement, or I have to go back and clarify requirements. That does not mean I have to know the final state of the finished product. It works really well for developing prototypes. As an added bonus, I do not have to think about getting the parameters into the next increment. The code quality is good not because I am more of a genius than other developers (which I probably ain't). It's good because the tests encourage good design and catch errors before anyone else has a chance to spot them. It also makes handing projects over a piece of cake. Want to know how it works and how it is supposed to be used? Just look at the tests.
Tests are some of the best documentation. They are actually live documentation, compared to markdown or, say, Python docstrings, which may or may not be updated when the implementation changes.
TDD is unpopular because it is a skill, and people don't recognize it as such. They try it, having heard the slogan "test first" and thinking that's all there is to it, fail, and give up on it. TDD takes practice, and the result is well worth the effort.
Look, I learned TDD during my studies at university; it wasn't even in the curriculum, but I was always interested in software quality. Even before university I did a traineeship as an IT professional and saw first-hand what bad code looks like, but back then, as a trainee, I didn't know how to solve those problems. It's a five-year learning curve, and we as an industry are moving way too fast, always watching out for the next big thing, whatever it turns out to be. I assume brain chips. I guess somebody will hopefully test the Neuralink code.
Dunno, I think there genuinely is a type of person who benefits more from TDD. Some people think differently. Like, TDD isn't solving problems for you; it's providing a framework for solving problems. Ultimately, you don't care how you got there, you still need to solve problems, hard problems, and you can't rely on the framework to do that for you.
I often end up going back and forth between the code for a function and the test for it. I work with a lot of numerical libraries, and sometimes I am not sure what the output I need should look like. For example, what is the shape of the array that results from a function call, and how is the data in it arranged? I often find myself writing a tiny function and a minimal test just so I can see the code do something and verify that it is called correctly. I then tend to flesh out the test a bit to cover what I think the function should accomplish, and then go implement that in the function. Sometimes I find that the way I wrote the test has resulted in a very poor design for the function, and that with a few changes to the test I can make the function cleaner. Sometimes I find I need to split parts of the function out into multiple functions. I find that tests really help with the design process.
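For example, a tiny first test of that kind might look like this (a sketch in Java/JUnit for illustration; the reshape helper is invented):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ReshapeTest {

    // Hypothetical helper under design: lay a flat array out as rows x cols.
    static int[][] reshape(int[] data, int rows, int cols) {
        int[][] out = new int[rows][cols];
        for (int i = 0; i < data.length; i++) {
            out[i / cols][i % cols] = data[i];
        }
        return out;
    }

    @Test
    void reshapeProducesExpectedShapeAndLayout() {
        int[][] result = reshape(new int[] {1, 2, 3, 4, 5, 6}, 2, 3);
        assertEquals(2, result.length);     // number of rows
        assertEquals(3, result[0].length);  // number of columns
        assertEquals(4, result[1][0]);      // row-major order: second row starts at 4
    }
}
```

It's throwaway-cheap to write, but it pins down the shape and layout questions before the real implementation grows.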
The one thing people forget about TDD when they say they don't know what the system should do is that this is a lie. It is a lie simply because they have the requirements of the system, which set boundaries on what the system can do. You use TDD to analyze these boundaries, to design around them, and to develop the code that delivers the required behavior.
I loved that comment that test-driven development has "test" in the name. I think that exposes such a weakness in our profession: we think we don't need testing because we get it right. Yet we can't. Getting software right the first time is only possible for the most trivial of cases; getting the design right is harder again. I drink from the fountain of TDD and try my best to get better at it. However, I have not worked with many people who feel the same. Testing comes last, if at all. Tests get commented out when the pipeline fails, to increase velocity. Oh well.
Maybe it's already too late to backtrack, but what about changing the name from Test Driven Development to something like Consumer Driven Design? Meaning that the focus is to put ourselves in the shoes of whoever later consumes the code that is about to be written. Whatever takes the spotlight off the testing part will do.
For the tricky aspect of 'getting started', one should reflect on the red-green-refactor cycle at 6:08 and ask where you start, what you start with, and why. While the process is (or appears to be) a continuous cycle, you need to ask how to start and how to end it (just like the difficulties of starting and stopping major chemical plants - explosions everywhere, unless...). Getting started with TDD still needs a start-up phase. One should 'start' at refactor, for your non-existent code and the vague concept behind it. This is refactored into a more concrete concept and some pseudo-code, which can immediately be supplemented by its (hopefully) passing test concept. This is then quickly followed by converting the test concept to pseudo-code, to code, and back to refactoring the real pseudo-code into its own code, which now has a test reflecting the concept under test. And it probably 'fails' (the pair doesn't do what was expected). It may even be necessary to mentally split your existing coding approach, carving out a small portion that is essentially just there to help visualise how it's working - call that your 'tests'. It's important to realise this is really a shift of mental model, from the appearance of a code-code-code approach to a code-check-code-'code the test' approach.
Great video! Your point that developers who first try TDD imagine the code they would write, then make a test according to that imaginary code, is bang on. It wasn't till I grokked that I needed to let go of that mental code, and write a minimal test to start with, that I began to truly get how TDD works. I saw a live demo of TDD from Bob Martin that really helped my understanding of it.
I don't believe every tool works for every project, but I did have the joy of working on two projects that used TDD. And when management realized they'd missed something on Thursday afternoon, we could implement it and the new tests on Friday morning, run the tests for a few hours, deploy to production at 4pm on Friday, and go home without a care in the world. We never had an interrupted weekend.
Point about UML/SysML: we can practice so-called AMDD - Agile Model Driven Development - which is basically TDD plus diagramming/sketching before those tests for those small units.
I admit I struggle a lot with the TDD approach as it's described: "do not write a line of code until you have a test for it". I build interfaces all the time, and often I have no specifications or design for an element I have to create, or for the UI itself. Most of my work is basically experimenting a lot with the UI - styles, animations, details of UI behavior apart from business logic. I really need to cover all that in tests to be sure my UI components look right, appear and disappear when triggered, etc. But doing that with TDD is painful.
I've been practicing TDD for over ten years and I can count the number of bugs that have been released on my fingers. So how much time do we spend fixing bugs? Well, next to none. So, more time delivering features for our stakeholders then. We also don't suffer from legacy code issues, because we can refactor our code in complete safety. We can upgrade packages and framework versions and know that it all still works. Software changes over time (shocking, I know) as we get more features or changes to existing features. If you can't deliver these changes in small, quick increments, delivery will take longer and longer and longer. You will never really be agile in this space. I hear the same old arguments about TDD taking longer etc. Writing more code does not mean it takes longer. How long it takes a developer to commit code is not important; it counts for nothing. Delivering it to your users/stakeholders is the only time you add any value. Getting another team or a QA in your team to test when you've 'finished' development does take longer. On a slightly different positive note, I've never worked with a bad developer who follows XP practices; however, I've worked with many that don't.
The benefits you describe (stability, ease of change, being sure) are more about the code being highly covered with automated tests than about TDD itself. All of that can be achieved when tests are written after the code. And yes, you still don't pass your code to the next stage until you've covered it with tests.
@@Storytelless You'll never get the early feedback by testing after; your design will have already been done by then. You'll never be 100% sure that you've covered all scenarios by testing after either - a coverage tool will only tell you that a test executes a line, not that anything is asserted. One other drawback: if you've never seen your test fail, how can you be sure it's correct?
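To illustrate that last point: a test like the first one below executes every line - great coverage - yet can never fail; only the second one, written first and seen to fail, proves anything (a sketch, hypothetical names):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Hypothetical code under test.
    static double discountedPrice(double price, double rate) {
        return price * (1 - rate);
    }

    @Test
    void coversEveryLineButAssertsNothing() {
        // 100% line coverage of discountedPrice, zero verification.
        discountedPrice(100.0, 0.2);
    }

    @Test
    void actuallyVerifiesTheBehaviour() {
        // Written first, this fails until the implementation is right.
        assertEquals(80.0, discountedPrice(100.0, 0.2), 1e-9);
    }
}
```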
@@leerothman2715 Even for after-the-fact testing you can start with a failing test if you know what is being tested. And I don't see how TDD gives a 100% guarantee that all scenarios are covered if it's not testing the implementation (as it shouldn't). The implementation is a devil: you can easily do TDD and have passing tests but uncovered edge-case scenarios.
One more thing about early feedback. The problem with a testing environment is that it's never quite the same as the actual one. For example, I develop interfaces for the browser. I often start with TDD: write the test, write the code, it passes. Then I launch the browser and discover that a passing test meant nothing. Is it my own lack of expertise? Sure. But I get the actual feedback and discover those blind spots not thanks to the test, but thanks to running the code in the browser. Add on top that each browser behaves a bit differently, which creates even more edge cases, and the test doesn't help me spot them.
@@Storytelless I think for UI, TDD applies to testing the behavior, usability and accessibility of the UI. For how the UI looks, you can add snapshots at the end of the implementation and that is enough: you can run the same suite of tests for each browser you support, and if something doesn't work in a certain browser you just change your implementation so it works in that browser and keeps working in the others.
TDD probably doesn't make sense unless the person writing the code can envision it as a series of functional modules, each with a specific functional contract. Then you simply write the test to that contract for the module. And those modules don't have to be simple function or service calls either. I guess this can help with understanding the core "contract" or requirement of how the module behaves by doing the tests up front - basically helping you design better-organized code overall.
And that stems from them not practicing it. TDD is not something that you just do. It takes practice (like almost anything in the world). "I'm going to start using TDD in my real-world customer project from now on, without ever having done it before" sounds to me as smart as saying "I will write this new customer project in Go, although I've coded exclusively in Java for the last 15 years." Practice this with small things. Start with TDD katas. If that works, try to make room for it in smaller units of your production code, where you'd think "I don't need it, but it can't do any harm." So many devs miss that point.
I haven't been in environments that practice TDD. Most places have created tests after the fact. But I would like to use the TDD approach, to be more explicit about what I build, even if the assertions are implicit. And this is probably why people see unit testing as cumbersome and a waste of time: they think they have to be explicit down to the last detail. But you don't have to. You just have to think about what you build and its behavior, and do less write-compile-run (i.e. trial and error without knowing what to achieve) - on-the-fly programming.
The basic problem I have with TDD is that it's just not how the creative process works for me. If I'm trying to figure out a hard problem at work, I might be relaxing doing other things on a Saturday when suddenly an idea pops into my head. My reaction is to go to the computer and start implementing that idea. It's not to go to the computer and start writing tests for a solution to my idea. To me it would be like having an idea for a piece of music and, instead of sitting down and immediately trying to play it, trying to figure out who you should call to listen to the yet-unfinished music when it's done. To the people who do TDD: if you get an idea for a play project outside work, and you're all excited about it, do you immediately run to the computer and start writing tests? The first time you heard about Sudoku puzzles and figured that this problem should be easily solvable by a computer program, did you sit down and write tests before you wrote the 20 lines of code that solve Sudoku? I just wrote the Sudoku solver. And if I were to write it as production code, I would subsequently have written some tests.
When you have an idea, you have the overall requirements for the behavior of what you think the thing should do. TDD lets you write those requirements down in a way that ensures you follow them, or realise that your assumptions about the behavior are wrong. It's like having an idea for a piece of music: it's abstract in your head, but you know you'd like the beginning to sound a certain way. You write down the way it should sound on a piece of paper (this is your test) and then you play it on the guitar over and over until it sounds the way you imagined - or you realize that the way you imagined it would not sound as good as you first thought.
Your comment demonstrates some of the reasons people don't get TDD. First, you are equating a module in your code with a unit, then equating the module's test suite with the unit test, and then positing that you have to write the entire test suite before you write the code. That just is not how modern testing defines a unit test.

An example of a modern unit test would be a simple test that, given the number to enter into a cell, checks whether the number is between 1 and the product of the grid sizes, and returns true or false. Your common sudoku uses a 3x3 grid, requiring that the number be less than or equal to 9, so the code would take the grid parameters, cache the product, check the value was between 1 and 9, and return true or false based on the result. This would all be hidden behind an API, and you would test that given a valid number it returns true. You would then run the test and prove that it fails. (A large number of tests written after the fact pass not only when you run the test, but also when you invert the condition or comment out the code which supplies the result.) You would then write the first simple code that provides the correct result, run the test, and see it pass. Now you have validated your regression test in both the passing and failing modes, giving you an executable specification of the code covered by that test. You would also have a piece of code which implements that specification, plus a documented example of how to call the module and what its parameters are, for use when writing the documentation. Assuming it was not your first line of code, you would then look to see if the code could be generalized, and if it could, you would refactor - which is now easier because the implemented code already has regression tests. You would then add another unit test, which might check that the number you want to add isn't already used in a different position, and go through the same routine again; then another bit of test and another bit of code, all the while growing your test suite until you have covered the whole module.

This is where test-first wins: by rapidly producing the test suite and the code it tests, and by making sure the next change doesn't break something you have already written. It does require you to write the tests first, which some people regard as slowing you down, but if you want to know that your code works before you give it to someone else, you either have to take the risk that it is full of bugs, or you have to write the tests anyway for continuous integration - so doing it first does not actually cost you anything. It does, however, gain you a lot. First, you know your tests will fail. Second, you know that when the code is right they will pass. Third, you can use your tests as examples when you write your documentation. Fourth, you know the code you wrote is testable, as you already tested it. Fifth, you can now easily refactor, as the code you wrote is covered by tests. Sixth, it discourages the use of various anti-patterns which produce hard-to-test code. There are other positives, like making debugging fairly easy, but you get my point. As your codebase gets bigger and more complex, or your problem domain is less well understood initially, the advantages rapidly expand while the disadvantages largely evaporate. The test suite is needed for CI and refactoring, and the refactoring step is needed to handle technical debt.
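To make that concrete, the first test and the simple code that passes it might look like this (a sketch; all names invented):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class CellValueTest {

    // Simplest code that passes: valid values run from 1 up to the
    // product of the grid dimensions (1..9 for a standard 3x3 sudoku).
    static boolean isValidCellValue(int gridRows, int gridCols, int value) {
        int max = gridRows * gridCols;
        return value >= 1 && value <= max;
    }

    @Test
    void acceptsNumbersInsideTheGridRange() {
        assertTrue(isValidCellValue(3, 3, 9));
    }

    @Test
    void rejectsNumbersOutsideTheGridRange() {
        // Run this against a stubbed-out body first to see it fail,
        // then write the implementation above and watch it pass.
        assertFalse(isValidCellValue(3, 3, 10));
    }
}
```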
Testing is not something you implement when you have sudden inspiration to play around and get something working. Testing is something you do when you've already done that, and now you need to integrate it into an environment that is opinionated and doesn't care how you got it working somewhere else. The code in these two situations is different, the process is different, the goal is different, and the skills required to do this well are different. In the latter case, if you don't write at least your test cases up front, you've started to write code in a way that you're not sure is going to work, because you haven't prioritized making it work, and your final product will likely be less readable than it could have been, buggier than it should be, and late. Testing is about quality, not creativity. That's not a diss; I can't do what you do. Testing is what enables me to ship something good. I can't reason about complex systems composed of complex parts unless I use testing to help me understand what knowledge about the system I can take for granted. I don't have aha moments; I design them by building in testable steps.
It seems to me that you misunderstand the process of TDD. You don't write all the tests before writing the code; you write the test for the feature you are going to implement next. Let's assume you are going to create a class with the constructor SudokuSolver(int[][] board) throws InvalidBoardException. The first step is to validate that the board size is valid, so you create a couple of tests that construct a new SudokuSolver with invalid board sizes and assert that it throws InvalidBoardException. Then you want to validate that the numbers are valid, so you create some tests with invalid numbers on the board and assert that it throws InvalidBoardException again (the exception message may say that this time the problem is with the numbers). And so on... The tests shouldn't know too much about the internals of SudokuSolver. If you test individual methods inside SudokuSolver, refactoring gets harder, because the tests become coupled to the implementation. But if the tests know nothing about the internals, refactoring becomes super easy, because the tests validate that you didn't break the intended outcome.
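The first test in that sequence might look something like this (a sketch; remember the class doesn't exist yet at the moment you type the test):

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class SudokuSolverTest {

    @Test
    void rejectsBoardsOfTheWrongSize() {
        int[][] tooSmall = new int[4][4]; // not a valid 9x9 board
        assertThrows(InvalidBoardException.class, () -> new SudokuSolver(tooSmall));
    }
}

// Just enough production code to make the test compile and pass:
class InvalidBoardException extends Exception {}

class SudokuSolver {
    SudokuSolver(int[][] board) throws InvalidBoardException {
        if (board.length != 9) throw new InvalidBoardException();
    }
}
```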
Personally I think TDD is taught in particularly stupid ways. The way it is taught often implies that even the most stupidly obvious parts of your function should be iteratively constructed only by adding a test for them first. That makes no sense in practice. Of course, if you are going to write a function to add up two numbers, you are not going to write a test for a function with zero parameters, then a test for a function with one parameter and change the function to take one parameter, then another test for two parameters and change the function to take two parameters, ...
Amen. I would also say that TDD is in general too focused on single methods, thanks to tutorials and teachings. If I write a calculator module, I would create a TDD test for the interface Calculator.add(a,b) and implement that interface. Once I get my result, I can refactor the code behind that interface as much as I want - move it into 5 methods or classes, include a math library, whatever is necessary - my TDD test makes sure that the stuff still works. But the way I was taught TDD, every method involved in adding the two numbers behind that interface must be created the TDD way. So if I decide to split the add method internally into 5 methods (for whatever reason), every single one must again go through the TDD loop, creating a million test methods, which makes my module-internal code basically impossible to refactor - nobody will ever touch that structure again, because they would have to change every single test method. In my opinion, TDD can only work if we don't treat every single method as an isolated test-unit. The test-unit must be a module or some other kind of "stable" interface, and it provides the regression tests that allow fast refactoring of the internal implementation details without breaking stuff. And to be honest, even then TDD will often not work. A lot of our applications take data from an API (REST, Kafka), do minor transformations and sometimes validations, and hand the data over to an object-relational mapper. You take data from one framework and carry it to another framework with the help of a framework, and even a lot of the validation is done by aspects plus frameworks. Many developers just glue together framework calls and don't see the point in testing those 5 lines of code with a TDD approach.
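In code, the only up-front test I'd write is against the interface, something like this (a sketch; the implementation class is made up):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The "stable" interface is the test-unit; everything behind it may change.
interface Calculator {
    int add(int a, int b);
}

// However add() ends up implemented - one method, five methods, a math
// library - this class can be restructured without touching the test below.
class SimpleCalculator implements Calculator {
    public int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {

    @Test
    void addsTwoNumbers() {
        Calculator calc = new SimpleCalculator();
        assertEquals(5, calc.add(2, 3));
    }
}
```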
@@vanivari359 In my experience, people who try writing tests for the internal methods of a class are usually amateurs at writing tests, because it's unnecessary if the code is designed to be testable and the public interface is tested. I don't think it has anything to do with TDD. Also, I'm very sorry to hear that such simple requirements - minor transformations and validations - require so many frameworks. I can see why it's untestable then 🙁
The problem might be that there is no one catch-all approach to teaching TDD. I think it highly depends on, and must be tailored to, the individual. Thankfully, Dave Farley has been that tailored teacher for me.
@@Tkdestroyer1 I'm working as a vendor to a big customer with very draconian rules about code quality, and therefore I am forced to simply comply with their definitions. I prefer not to struggle against even obnoxious things like pure line coverage, because it's pointless: I have zero influence over any change. And unfortunately I suspect I am not an exception.
If I implement first, then testers will get the feature and can start testing early in the sprint. And I can do unit tests in parallel. And if it fails acceptance, I might just be lucky and won't have to rewrite a lot of unit tests.
In reality, you are bogged down by the previous sloppy implementation, already deliver late to QA and skip testing entirely because the toxic work environment doesn't support quality approaches anyway.
Since learning about TDD from Dave, I've started using it for frontend and I'm not sure why people would say it's not possible because it definitely is!
That one is actually quite easy to answer if you look at how GUI code has historically been developed. First, you write too much of the UI, which does nothing. Second, you write some code which does something, but you embed it in the UI code. Third, you don't do any testing. Fourth, due to the lack of testing, you don't do any refactoring. Fifth, you eventually throw the whole mess over the wall to the QA department, who moan that it is an untestable piece of garbage. Sixth, you don't require the original author to fix up this mess before allowing it to be used. When such developers eventually decide they need to start doing continuous integration, they have no experience of how to write good code, how to test it, or why it matters. So they fight back against it. Unfortunately for them, professional programmers working for big companies need continuous integration, so they then need to learn how to do unit testing to develop regression tests, or they will risk being unproductive and risk being fired.
@@grokitall 👆 This seems to be what it comes down to. I think ultimately it's people being stuck in their ways, not understanding the idea that you should work out what you want your code to do (precisely) before actually writing it.
I think there is also an issue with how TDD is presented - partly preaching to the choir. A lot of those opposing TDD do not share the definitions of the CI and TDD community. They oppose unit testing because, to them, a unit is a complete module, and the unit test is every test in the suite that tests the module. Similarly, they write the entire module and only write regression tests when they have to, and adding tests or security or portability after the fact is always a nightmare. Because they don't write tests first, their code coverage is minimal, often consisting of UI tests and end-to-end tests, which are fragile and invert the testing pyramid. A lot of them come from the Windows ecosystem or from the object-orientation community, where the definitions don't match.
Fully agree with your analysis of the failure of UML-driven design. However, I do believe in Model Driven Design where the model is a Domain Specific Language.
Interesting video, definitely some good food for thought. I've never practised TDD, but I have, of course, mentally raised the objections you mentioned. Your counterpoints are pretty good though, so I might put some effort into trying TDD on my current personal project.
A practical example of my struggle with UI and TDD: I am writing a Next.js app and I want to build complicated search filters. The business logic is totally fine to test: pass data, send or don't send, handle errors, test validation. Then comes the UI. What UI components should I create for the best user experience? Should I use a select? Should I put all the filters on one screen, or in some kind of modal that appears? I don't know beforehand; I experiment with it. One moment I think my component should show a dropdown with search results asynchronously as the user updates a filter, and the next moment I realise: oh no, that's not good at all, I should only trigger the search on a button click and show all results on a separate page. If I create a test for this UI behavior beforehand, I have to delete it and start over each time I change my mind - which is often.
Do you need much actual logic behind the UI exploration? Just explore it separately, and when you've decided, add tests to the control logic verifying that the data to be shown in the UI is loaded correctly. If your UI depends on the rest of your application existing, you have a design problem.
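i.e. keep the filter logic a pure function you can test-drive, and let the UI experiments stay throwaway. Sketched in Java here for illustration (the same shape applies in a Next.js app; all names invented):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

class SearchFilterTest {

    record Product(String name, double price) {}

    // Pure filter logic - no dropdowns, modals or browsers involved, so it
    // survives however often the UI around it gets redesigned.
    static List<Product> underMaxPrice(List<Product> all, double maxPrice) {
        return all.stream().filter(p -> p.price() <= maxPrice).toList();
    }

    @Test
    void filtersOutProductsAboveTheMaxPrice() {
        var products = List.of(new Product("cheap", 5.0), new Product("dear", 50.0));
        assertEquals(List.of(new Product("cheap", 5.0)), underMaxPrice(products, 10.0));
    }
}
```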
@@defeqel6537 I agree that a significant part of my problems are design problems. Funny enough, I don't know beforehand what the good design should be. Like, how should I separate my components? Should it be the parent or the child who controls the UI state, etc.? Usually people propose to just add a bunch of abstractions, but with too many abstractions, while the code gets really easy to test, the next time I read it, it takes a hideous amount of time to understand. And in some cases it causes performance issues. Again, you will say it's a skill issue, that I should design the code so it's testable, performant and easily readable. I agree, and I try. And in most cases I fail on the 1st, 2nd, 10th try.
I try every day to get better at TDD, and I think I'm reasonably good at using it when modifying or fixing existing code, but I still struggle when writing completely new code. Part of this is the mental block: if I don't have a single line of code written down, where do I even start with writing a test? In your Fraction example in the video, just from the way the test is written we can see that there is a Fraction class which takes an integer in the constructor and has an Add() method which takes another Fraction and returns a type supporting toString(). All of this is code that has to be written before writing even that simple test (or else we throw away our IDE's nice introspection and type-checking features). So maybe I need to understand TDD more as "write the tests before the *implementation*", and not the much stricter "write the tests before the *interface*".
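To make my block concrete, the test in question looks roughly like this (my reconstruction, not the exact code from the video), and every name in it is design being decided at the keyboard:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FractionTest {

    // Constructor, add(), toString() - all of it is interface I am choosing
    // at the moment I type the test, before any implementation exists.
    @Test
    void addsIntegerFractions() {
        Fraction result = new Fraction(2).add(new Fraction(3));
        assertEquals("5", result.toString());
    }
}

// The minimal code that gets past the compiler - which is all the IDE's
// introspection actually needs before the real implementation grows.
class Fraction {
    private final int value;

    Fraction(int value) { this.value = value; }

    Fraction add(Fraction other) { return new Fraction(value + other.value); }

    @Override
    public String toString() { return Integer.toString(value); }
}
```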
The issue with any piece of TDD material that's already completed - article, post, video - is that it has already been written. When you start writing a test without any code, you just don't have a Fraction class or its constructor. You just type some words in a class with a Test suffix. It won't compile, of course... but imagine you typed 'new Fraction(1, 3)'. Do you like it? You might or you might not. You can change it - it doesn't compile yet, there's no code to change yet. So you change it to, say, 'Fraction.make(1).over(3)'. Do you like that more? That's just one way to try and experiment with the API - which is what Dave said in the video. When you've finalized that piece of API, you actually make it compile by writing code: just a constructor, or a static method, or whatever. Later on, when you've finished, no one would be able to tell how you started writing it. Also, nothing stops you from cutting corners. I'm not strictly following the rules of TDD. Sometimes I add the class and constructor first. Sometimes I even play with code and implement something. I still consider it TDD :-) If you'd like, we could connect on Discord/Zoom, share a screen and try something.
I think people get hung up on explicit assertions - checking every little parameter or return value. You don't need to cover everything. You just need to make sure that your main assertions are correct, and you can be implicit about the stuff that bothers you less.
Some folks think TDD is overengineering, but TDD prevents overengineering in general. You won't make anything you don't need, especially if you're doing TDD top-down.
I think people have different ideas about what constitutes a "unit" in unit testing. As you describe here, writing tests against the public interface to your code is very different from testing private functions and methods.
Artem's problem seems to be not understanding that you're only going to /try/ /something/ that /could/ /possibly/ work - not "I know it's going to work" - just an idea, a direction, of what /might/ possibly work. So you take a really small step /in that direction/ and move forward. Then you discover something you didn't think of (maybe) and adjust direction slightly. Another step. Eventually you get to something that works but might be ugly, /and/ you understand what you really want much better. So you tidy and refactor, then return to fleshing out the problem and solution some more. Once you get used to moving this way, you will find it incredibly self-reinforcing and actually very quick.
If TDD is basically requirements, but acknowledging the fringe cases and the actual applications, is saying TDD is broken by extension saying that requirements too are broken? Does software just... manifest?
In QA, I have always written test plans before the code was written, and proceeded to write the tests without reference to the code, and never coordinated any of this with the developer at all. Likewise, I expect the developer to read and respond to bug reports without seeing any of my test code or knowing any details about how it works. And amazingly, we both managed to do our jobs just fine! My major complaint about TDD is that a whole lot of developers will "cheat" by writing tests that are easy to pass, or by exploiting their knowledge of the test - as its author - to write code inadequate to the real world. I think it's generally less effective for the same person to write both the code and the test, and if that is your ONLY testing... you're gonna have a bad time. It's about "trust but verify." TDD encourages development to think about what their code needs to do, and to ARTICULATE what that is, before they actually write the code. If your goal is to write good code, this improves your code. But your code is still not going to be perfect, and QA still needs to do a real test pass. The dynamic that has soured me to TDD is when QA sends a bug up to dev, and dev says "my tests pass, yours must be wrong" without even bothering to follow the repro steps in the issue report. Then you end up getting a visit from the PM asking why we keep testing the same thing twice. Because your devs fuck up the test, that's why. They're not test experts, they're dev experts. An amateur test is a good first line of defence to keep the most egregious bugs out of production, but then you need a competent test from a specialist to make sure the less-obvious bugs are found and fixed. And to be clear, this isn't the fault of TDD! TDD is working just fine and doing what it is supposed to do, just like AGILE, but when you ask human beings to implement anything they are going to miss the point and do it wrong and fuck it up.
That's not an argument. Your test description is abstract, but in order to write a test, I have to reference the interface I write it against. I need to know each parameter and its type, going in and out. I can do that starting from a test, but it's almost inevitable that my first idea is wrong. I might change that interface 5 times before I'm happy with it, and maybe the whole cut of the module and interface was wrong and I move the code to another module. So I have to refactor test code and test cases constantly, which is tedious - especially when you're trying out whether moving those 3 classes makes the pieces fall into place or was a bad idea. The refactoring of logic I do in the early stages of a system, when you have only a rough idea of modules, interfaces and responsibilities, is so big that TDD is almost always in the way and does not add value. By the way, another reason why TDD "is broken": tons of code in the business world is stupid and simple - it just maps and forwards arguments to another layer. Testing that stuff explicitly is a giant waste of time and slows down your code base, because you mock 5 interfaces to test 1 or 2 lines of code. A lot of logic is so trivial that people will not use TDD - especially because most people think TDD has to be used for every class and method in the system.
TDD is best when the language (or framework) makes it easier to write and run a test than to run the code in situ. A Rust unit test is so convenient and fast to write that it's hard *not* to write tests (either up front or as you go). A React project or some legacy hodgepodge project, though? Hardly worth the hassle a lot of the time.
@@danielwilkowski5899 I've had to work on a number of codebases that are not UI but are obscure or old, so they don't have good (or any) testing infrastructure. I had to write my own test system from scratch when working on a library of Adobe plugins, for example. None of the code even interacted with the UI, but it was still hell to test!
I like TDD as a concept, but in my opinion it doesn't scale as the complexity of the required features does. In my experience it is quite rare that a requested feature can be represented with just one or two changes to the data. Instead, my code frequently has to pull data from a lot of sources (in a recent example, I had over 20 properties in the user input, 4 calls to 3rd-party APIs and 6 DB tables as the input; a change in one makes a big difference to the final output). TDD fails in a few ways here. First, it is very easy to fall into the trap of defining the perfect ingress and egress objects; in fact, starting with the ins and outs is what most TDD evangelists recommend. The problem is that with this approach it is very easy to get 100% test coverage but have a highly unstable system, because the data provided by users or third parties is almost never perfect. In my career, most bugs and failures came from users or third parties doing something we did not predict. This is where the idea of TDD as a safety mechanism for release falls over. The natural workaround is to create strict type safety, validate all inputs and add robust error handling - or preferably use languages and frameworks that do the heavy lifting for you, as long as you define your models correctly. But this is a second problem with TDD: I have to couple the tests to the data models, specifically to what exceptions are thrown and what type casting is valid, often resulting in tests that prove more about the language and framework than about my own code. But let's assume I'm lucky enough to work on a project with a clear separation between the data models and the functional code, and the data side is already robust. I still have to decide: should the function just throw errors upstream? Is the whole function in a single try-catch, or does it try every validation and line of logic? Am I using exceptions or return statuses? Do I have to pass valid A and B parameters to test whether parameter C's errors are handled? I think the fundamental issue is the assumption that we can bend the universe to our will of wanting a function that has a single point of input, does just one thing, has one output, and has just one state. Even if you manage to abstract the user input, database state, messaging system, validation, processing and formatting into their own functions, you still need a main thread or function that orchestrates them. Not testing it is foolish; thinking you can write a test for it without knowing how the orchestration is done, even more so.
There's a very good video by Dave about TDD being a good method for design. I think your example of orchestrating many sources of data, and TDD failing to address it, might be a bit misleading. Saying TDD can't test that function after the fact is disingenuous to what the practice is about. Moreover, I would expect that applying TDD to the design of this piece of software would result in a different design overall.
@@Flobyby I've seen most of Dave's videos, and TDD is the one thing I don't find feasible at scale. I'm not suggesting testing after; quite the opposite, I would like the ability to develop the tests first, before committing to the solution. But it is the last sentence you wrote I take the biggest issue with, and it illustrates the key problem with the TDD approach: I do not control the universe my code sits in. To start with, we can't have a world where users have direct DB access and hold API keys to the third parties, for what should be obvious security reasons. Neither do I control the business requirements that dictate who has what data, what they can do, and how we should react to their actions. Therefore there has to be a single process, endpoint or function that takes the user input, DB data and any third-party outputs and uses them to alter the state - often multiple states, in the form of talking to the third parties, updating the DB and responding to the user. I don't get to change this, even if I wanted to. I can only test and develop my system, which sits in the middle. I do see the value of TDD in some domains; at a small scale it does well. But as soon as you have to deal with multiple external dependencies, it fails to achieve its goals. First, because you have already written and tested all the building blocks, so you are no longer working with a clean sheet. Second, because you have to decide in what order you will execute and validate the steps of the main process, coupling the order of the test and the code together. Third, because it gives a false sense of security that your code is well tested, when actually it may have huge gaps. There are plenty of classic examples of bookshops and the like. The reality is that when the user presses "buy", they don't just expect an "order placed" message: they expect the shipping label printed and their bank account charged, the business needs the stock table updated, and someone still needs to be told to go and package the book. All of that has edge cases - if there is no money in the bank, no paper in the printer, no one at the store, etc. And all of it has to happen, as far as the user is concerned, in a single button press.
From your starting point, it feels like what you need is a layer of abstraction between your business logic and your data sources that handles everything you've described: ensure data types, accept only inputs that will definitely fit your business logic, handle errors, etc. And this is actually a task that can be performed with TDD.
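Roughly like this (a sketch with invented names): put a small port in front of each source, and the orchestration becomes test-drivable with hand-rolled fakes instead of a pile of mocks:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Ports: all the business logic knows about its data sources.
    interface PaymentGateway { boolean charge(String account, double amount); }
    interface StockRepository { int unitsInStock(String sku); }

    // Orchestration under test (invented for illustration).
    static String placeOrder(PaymentGateway pay, StockRepository stock,
                             String account, String sku, double price) {
        if (stock.unitsInStock(sku) < 1) return "OUT_OF_STOCK";
        return pay.charge(account, price) ? "ORDER_PLACED" : "PAYMENT_FAILED";
    }

    @Test
    void reportsPaymentFailuresFromTheRealWorld() {
        PaymentGateway declining = (account, amount) -> false; // bank says no
        StockRepository inStock = sku -> 10;
        assertEquals("PAYMENT_FAILED",
                placeOrder(declining, inStock, "acc-1", "book-42", 9.99));
    }
}
```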
@@Storytelless My data is pretty well abstracted already, but that doesn't change the problem; it just moves where the problem is. TDD is great at testing these abstractions, or anything small enough. But at some point you can't get away from needing to test the user input validation while your data is distributed across multiple sources. You can't get away from the need to handle real-world errors coming from the database or third parties. You can't avoid the emergent behaviour of the models/abstractions interacting with each other depending on the order in which you trigger them. It would be nice if everything fit into the abstract world, but at some point you have to make a call on which models to use, in what order, and how to handle errors. The idea of "just abstract it away" or "hide it in some plumbing" is complete snake oil. Your system has real-world inputs, outputs and dependencies. If you don't test the plumbing itself, only a sanitised version of the universe, you may have thousands of green tests, but I could easily break your live platform. There is a good reason you rarely see people show the plumbing under real-world examples: that code is never easy to follow or maintain, and people selling TDD courses and consultations don't want you to see that until you buy into their system. In this respect, TDD is to testing what Jira is to Agile. And this isn't an attack on Dave or anyone else in that industry; I know it takes a lot of time to get to this level of advanced explanation, and it's fair that people charge for providing it. But the other side of the coin is that TDD is a complex practice that requires an advanced setup to work in the real world, not magic. If it were that easy, there would be no value they could add and no market for people explaining it.
If you click on some menu item, then a page with a list should appear. From there, clicking the add button should bring up a form. Filling the form in properly and pressing the save button should close the form and make the new entry appear in the list. Filling the form in improperly should point out the wrong fields. Hmmm... there is nothing special here. This can all be test code. Keeping it decoupled from the implementation is a bit hard, but that is also doable.
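Sketched as runnable test code (the AppDriver is an invented in-memory stand-in for the real UI, which is exactly the decoupling point):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.jupiter.api.Test;

class EntryListAcceptanceTest {

    @Test
    void savingAValidFormAddsTheEntryToTheList() {
        AppDriver app = new AppDriver();
        app.clickMenuItem("Entries");
        app.clickButton("Add");
        app.fillField("name", "New entry");
        app.clickButton("Save");
        assertTrue(app.visibleList().contains("New entry"));
    }
}

// Invented stand-in for the application, so the test above stays decoupled
// from whichever UI framework eventually implements the behaviour.
class AppDriver {
    private final Map<String, List<String>> pages = new HashMap<>();
    private final Map<String, String> form = new HashMap<>();
    private String currentPage;

    void clickMenuItem(String page) {
        currentPage = page;
        pages.putIfAbsent(page, new ArrayList<>());
    }

    void clickButton(String button) {
        // "Add" would open the form in a real app; here only "Save" has an effect.
        if ("Save".equals(button) && form.containsKey("name")) {
            pages.get(currentPage).add(form.remove("name"));
        }
    }

    void fillField(String field, String value) { form.put(field, value); }

    List<String> visibleList() { return pages.get(currentPage); }
}
```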
TDD can also be understood as an individual practice: you can practice it even if the rest of the team doesn't! It will be trouble to deal with maintenance tasks involving code written by the rest of the team, though. My point is: use TDD, it is a wonderful approach - don't if you can't see the point. Lots of good points in the video; particularly amusing was the comment "A picture isn't worth a thousand words". I sometimes quip, "but which thousand words?" (Small complaint: for people who know TDD really well, the video title is a bit of clickbait.)
Every example of TDD is always some simple function. Real-world applications often have a lot of complexity, dependencies, non-pure code, etc. I would like to see examples of that. Most of the people I have met who claim to do TDD actually do very little of it in their day job, so they never get the chance to see it in action on a complex system. I suspect that is most people's problem with TDD.
But in part, the reason that TDD code looks like that and, as you call it, "real world code" does not, is that the real-world code wasn't designed to be testable and the TDD code was. One of the huge values of TDD is that it makes us write simpler code. This is certainly NOT about inherent differences in the complexity of the tasks that TDD is applicable to; it is about the shape of the code that results from TDD, which is better, as well as being better tested!
@@ContinuousDelivery Wait... are you confusing TDD and testing here? All the code I'm talking about is testable and has a full suite of tests (you won't find better-tested code). The question is NOT about testable code, it is about TDD... Two different things.
In the .NET world, if TDD is a broken practice, this is partly thanks to Microsoft's scaffolding code when creating a new WinForms, WPF, Xamarin/MAUI, WCF, MVC or Web API project. What starts as a small quick-start application - a small god assembly - ends up becoming a big god assembly, making it difficult to craft unit testing and integration testing at different levels and domains. The solution is simple: split those folders into multiple assemblies (or JARs in Java), conforming to .NET component design and .NET architectural design, as Juval Lowy advised MS a long time ago, when he was an external consultant on architectural design during the development of the .NET Framework.
I don't know what an integer fraction is, unless it is some kind of abstraction inside a function. Like, 0.6 is the input and the 6 is the integer fraction inside the function, where it makes sense.
I think what we need here is a test like Jez Humble's continuous integration test, but for TDD. Most of the criticisms I see can be summarised as "I write legacy code and don't get the testing pyramid, unit testing or regression tests, so why would test-first regression testing make sense?". As they don't get regression testing, they also fundamentally don't get refactoring, continuous integration or technical debt. Time for them to learn some of the basics of the industry, wouldn't you say?
I worked with strict TDD for a year and noticed that it ensured quality, but was almost 3 times slower. TDD can be demonstrated easily with simple examples, or with units that have business logic or rules that need to be examined. However, when you try to integrate components, the tests contain a lot of mocking and become more complicated than the actual code. Another case is when you want to try a new product and see whether customers will like it or not. When I try to do that in TDD mode, I spend weeks instead of a couple of days to deliver. The real test here is customer feedback.
If you're complaining about TDD, you aren't doing TDD. You could read Kent Beck; he still publishes books and blogs to this day, and they have answered all of my questions about TDD.
Personally, if I'm just doing a throwaway proof of concept, I don't write tests at all, or even handle most of the error cases. The important part is that it's throwaway and can never be "productionised"; trying to do so is doomed to fail. These PoCs are just to quickly get an idea of what is possible and compare implementation details, not to produce production code.
Hello Dave. To me, the problem with TDD is in its name: the intention is mixed up with the technique. The intention is to define, in an automatically verifiable way, the expectation; then you write the code that makes the verification succeed. The fact that we use test frameworks is kind of an implementation detail in the story. Maybe we would do better to talk about Define your Expectations First... DEF DD (because if it doesn't end with DD we have an acceptance problem 😄). Naming things is still one of the toughest problems, along with cache invalidation, isn't it? What do you think?
What are your views on event sourcing? Perhaps a video? If it is good, it should lead to better software faster; otherwise, tell us where it is weak or bad. Thanks!
Not only that, but when you break the API you feel the pain before your users even find out about it - which just goes to show that a lot of framework and core library developers are not doing regression testing and continuous integration. 😂
Actually, they are mostly complaining about having to start writing regression tests and having to write testable code, when they could previously write untestable garbage and force QA to deal with it. Regression testing shines a light on how bad your code is, and they don't like that, as they like to think that having spent some time programming, they must be good at it. Test-first hits you in the face with how bad you are at testing, which is not taught, and TDD's refactoring step shows just how much copy-paste coding is in your code base. Understandably, a lot of people who never had to do testing don't like it and push back, mostly with comments which can be paraphrased as "I only know how to write legacy garbage, so it doesn't work for me" - not realizing that this is what they are saying.
Unfortunately, TDD is not something you can learn/appreciate until you take the steps to get there. We all start by writing concrete code and manually testing it by running it with different inputs. At some point you realise you can use a test to automate the input, but the test is still highly coupled to your code. Then you don't want the tests to be fragile, so you decouple them by making the interface and boundary clear. Then you start writing code based just on interfaces you can test, and leave the concrete code for later. Happy-path coding. Then you realise that happy-path code... is effectively a test. Tada~ TDD.
I've been practicing TDD for 10 years and everything works for me. I write a huge amount of code with a small number of defects; most of the time I'm writing new code while others are trying to debug why their code isn't working! So each time I hear that TDD is not working, it makes me smile. TDD works; maybe you don't know how to work with it.
@@Aleks-fp1kq I think you're missing one important thing here. You should write your code as independent of the framework as possible. Once your code is highly dependent on a framework, you lose the opportunity to migrate your code to another one in the near future. OOP and SOLID can help you break the dependency on the framework, and I always try to use as little framework as possible in my functional code. Take Django, for example: all the logic lives in code that can run outside Django, and Django just manages the server side. You can do this by strict application of OOP and SOLID - think of two separate Python packages, a logic package and an execution package, built separately.
@@Aleks-fp1kq Shouldn't that be the same as learning a new language? You'll take some time to get the syntax, but the concepts are the same. For tests it's even simpler: you just need to assert that your implementation has the behavior you suppose it to have. The test fails because you haven't implemented it; you write the implementation; the test passes.
Clearly TDD is very misunderstood because of the "T" in the branding. If it were called, for example, PCR (Plan Code Review) or something sexier, people would maybe at least pause and try to understand it.
No, not really right. TDD is certainly a design tool, but calling it a QA tool misunderstands both TDD and QA. TDD is about testing in the same way that a carpenter measuring the wood before they cut it is "testing". Sure, you need to "measure" to do a quality job in both cases, but in neither case is that QA; it is "measurement".
To me, TDD is like building the house before the foundation has dried. I often need to restart from scratch several times before I have something even worth testing. I could write each iteration the TDD way, but no one wants to pay for that. So, once my foundation is dry, I can use TDD to check it’s structurally sound.
Let me add a little critique of TDD (at least of its "public picture"). Take one of the most common tasks of a backend developer: build a REST API that transforms some JSON payloads and manages some data in a DB. What does TDD recommend here? Start with tests of the API. But I have no API, no contract, no schema, nothing. So I write a stub "code" for my API first, answer multiple questions in review along the lines of "why are you adding non-working code?", put it through CI/CD, release it to staging (one week later). Now I can write tests (given that I'm already deep in the context of writing a load of code, it's a "great" time to switch contexts)… which will not pass (they're red, since no real code backs any of them) and beat my morale through the floor! Using this example it is easy to identify some shortcomings of TDD:
1. You start with a red test, which beats the morale of 99% of developers.
2. You constantly switch contexts between design, coding, and testing (being a user of your own code).
3. A lot of tests are unusable without some code like mocks/stubs. In Python you can "pass" everything and it will work, but in C++ you can't even build your tests when something isn't defined in the implementation, and the IDE/LSP will constantly yell at you for using something that doesn't exist yet. So you don't start with TDD; you code something that should work before TDD can be used (for example, interfaces). But when you design this start-up code to allow testing, you are imagining the underlying implementation, not the tests of a design. It takes substantial effort to switch to the user context, compared with switching to the developer context (from the designer context).
The context switching is actually one of the points of TDD. You are constantly validating that your regression tests fail, then pass with passing code, then using the refactoring step, which is now easy because you have the tests, to handle the technical debt. All the while you are discouraging habits which produce hard-to-test, coupled, and otherwise legacy code. As a side effect you end up with a validated test suite for continuous integration, code which you know passes, and new code going into version control which you know works, along with the tests that go in with it; debugging also becomes easier due to the smaller changes. You also end up with high code coverage, testable code, a starting set of examples for writing your documentation, and lots of other benefits.
This isn't about a religious observance, this is a practical tool. If you have code so simple that it isn't really programming in any real sense, doing the same thing you've done a million times before, then perhaps it is not necessary. But the test of that is: How often do you have bugs? How much time do you spend fixing them? How well do you understand the problem? Is it likely to grow into something more than simple CRUD in future? All of these would influence my decision. It is certainly true, though, that I don't really write "simple CRUD applications" very often.
@@ContinuousDelivery From experience, it starts with CRUD and then eventually evolves beyond it. But say the public API you are designing is an HTTP API. What is your approach? I'm going into technical details now, but for me that is important. Say we have events being raised from the API and some data stored in the database. Would you mock the database and message queue? In my opinion the event raised is also a public API, if the event is to be used by others. Do you verify the event is raised correctly? Do you verify the data is saved to the database, besides checking the response from the HTTP API? What is your approach for a test in that case? I am curious how you define TDD: is it testing the whole of the API, including the database and raised events, or only the response?
My experience with people claiming they can't do something because they don't know the future is that they really just don't want to do that something, not that they actually need to know the future.
Nice example with fractions. However, real life is a little bit more complex. Imagine that you have to create a neural network that generates images of people with exactly 5 fingers. Good luck writing tests!
What I've found to work in a TDD-like approach is to write the whole implementation in the test case/project, then when it's passing, refactor the relevant parts into my library code to form the API I want
Python and JavaScript being extremely slow is an abstraction leak; being unable to access hardware I/O ports from JavaScript is another. There are lots of them!
TDD is a mindset, not a unit test. That's the biggest problem I see: people think TDD is writing unit tests. I try to write tests as close to the user behaviour as possible. The less the tests know about the internals, the easier it is to refactor.
TDD is a tool that's appropriate for some circumstances. TDD makes sense when you can write the tests and the API at the same time. If you don't have enough information to confidently write the API, you should sketch it out and see what it looks like. Then make a decision: either you're prototyping to understand the API or you know enough to confidently write some tests. Wash, rinse, repeat. Sometimes, you'll need to go back and write tests after. Like a craftsman taking the time to break the sharp edges on a milled part.
I love the idea of TDD. And the few times I've been consistent with TDD, I've liked what I got out of it. The reason I don't end up using TDD all the time is because it feels like twice the work. EVEN IF it actually reduces work in the long run, it FEELS like a massive headache, and I tend to not keep up the habit. So yes, the reason I don't do TDD is because it doesn't feel worth it most of the time, and if I have 3 hours to write code, I feel more accomplished by writing a new functionality than by writing tests for existing functionality. Maybe this is laziness, or maybe it's the result of feeling time pressure. I can't defend it rationally, but I can explain how it happens.
Don’t you still have to write tests to ensure that the feature will keep working as the code is changed in the future? Manually testing every feature in every release doesn’t scale.
@@delphicdescant I think it is a good start :) I don't think in terms of "I'm going to test function f and class T". Instead I create a test for each requirement I add to my application. Sometimes I add a unit test, sometimes an integration test, and sometimes a black-box e2e test, depending on what is the easiest thing to do. If I change the requirement, I change the test, and if the requirement is no longer needed I remove the test. I name my test functions "ShouldBlablablaWhenBlabla" and use comments to structure the test code into // Given, // When, // Then sections. This prevents me from adding extra asserts that test things unrelated to the specific requirement. I have seen people name their tests TestFoo, and these tests just keep growing with extra asserts related to the class Foo, without actually specifying the reason why each assert is there and which requirements are actually covered. This is what I try to avoid. We should not test entities, we should test behavior.
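As a minimal sketch of that naming and Given/When/Then structure (JUnit 5; Order and DiscountPolicy are hypothetical classes invented for the example):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DiscountPolicyTest {
        @Test
        void shouldApplyTenPercentDiscountWhenOrderExceedsOneHundred() {
            // Given: an order above the discount threshold
            Order order = new Order(120.00);

            // When: the discount policy is applied
            double total = new DiscountPolicy().totalFor(order);

            // Then: the total reflects the 10% discount - nothing unrelated is asserted
            assertEquals(108.00, total, 0.001);
        }
    }

    class Order {
        final double amount;
        Order(double amount) { this.amount = amount; }
    }

    class DiscountPolicy {
        double totalFor(Order order) {
            return order.amount > 100 ? order.amount * 0.9 : order.amount;
        }
    }

The test name states the requirement, so if the requirement goes away, it is obvious which test goes with it.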
@@delphicdescant Sure! Each test should correspond to a single requirement and thus you won’t have more tests than what is needed to fulfill the (implicit and explicit) requirements of your customer.
That sounds like an opinion based argument rather than a data based argument to me. Start capturing story cycle time with and without TDD and you'll have the data you need.
Programming should be taught with TDD. If you learn a programming language, you learn the syntax and how to solve simple problems, but you have no idea how to solve more complex ones. The first tests will be manual tests, but they are tests, and that's what matters, not a perfect testing framework. A test is documentation.
Back in the days before TDD, we longed for software development to be more professional and respected. After earnest endeavours towards this goal, we started to see it taught as a respected discipline in universities. TDD helped raise the bar by promoting it as a sophisticated engineering practice. Now, two decades later, university graduates don't even want to touch the likes of TDD. Are we seeing the end of a rise and fall of software engineering, and with it the demise of professional-quality software development?
This implies that TDD is necessary for professional-quality software, but tons of professional, high-quality software is created without TDD (based on experience out there, I would say: almost all of it). TDD was always for PowerPoint and RFPs ("we will do TDD in this project, promised"), but I've never seen it actually in use, not even among colleagues from Thoughtworks.
When most professors misunderstand what a unit test is, often have no idea about SOLID principles (whatever your opinion of them), etc., how do you expect them to accurately convey TDD?
@@defeqel6537 Yes, and I find it almost impossible to "teach" all of that in a project setting. In small teams, I can spend time explaining those concepts and sometimes it sticks, but it's a tiny drop in an ocean of people. And in a normal setup, I have multiple teams with multiple team leads, and teaching team leads is pointless because they often don't code anymore, or struggle too with understanding what a unit test or TDD is. Thousands of employees, constantly mixed into new teams and projects... it seems impossible to raise the bar measurably.
It is not about the language or domain, it is about frameworks. We rarely work without them, don't we? New projects = new frameworks. Now the gist of the problem: there's no time to learn the new framework first and then start writing code, so we learn as we go. But to do it the TDD way, first we need to master the framework, because each of them has a different take on testing. At the same time, stakeholders are expecting something - 2-week sprints.
TDD to me means "check its intention before you make it available". I use it with Terraform modules as a safety net to check that my intentions meet expectations; without it I do more work fixing the bad code.
The post-TDD generation will need to find new ways to ensure the testability of code. Code might not need much unit testing but still needs to support future testing when it becomes necessary. If TDD is dropped, either new versions of compilers will need to take care of testability, or new lint tools will (worst-case scenario - linting is an admission of failure), or new kinds of modularity will be needed to isolate unit-testable code from untestable code, in a way which supports untestable older legacy code too.
I watch devs of all experience levels run their full application to manually test, with break points and watch inspectors, a single change to a class in the deepest depths of their domain context. It makes me want to cry. If you don't want to do TDD, fine. But for the love of all that is holy, please automate your tests. If I catch you running an app to test a simple change I will kick you from my team.
The only kinds of tests I've found useful are contract tests that enforce contractual obligations, and bug-exercise tests, which trigger a real-world bug situation. Contract testing is great, but those contracts are part of the definition of the thing being tested, so tooling should be co-locating them, not putting them in separate files or, god forbid, separate source-code trees.
TDD is not a broken practice. The problem is, when someone goes to upper management requesting money to properly develop a piece of software, management will question: "Why are you writing that if the customer will not use it? Stop doing that, write the code that customers will actually use, and be done with it. I'm not paying you extra for that!" It seems that this management approach has been propagating through software companies, to the point that management doesn't care about quality, only about delivery.
Completely unrelated to the video, but I just had it open alongside my PHPStorm window (floating mode from Firefox), and noticed in 14:00 that we use the same color scheme and dark mode. I must salute your tastes, at least in this, even if not so much in development methodologies.
Hate for TDD reminds me of hate for Scrum. People share countless stories of how useful it has been _for them,_ of how many problems it solved _for them,_ of how much value it brings _for them._ And then other people say the equivalent of _"Well, it's never worked for me so you must be wrong."_ How insultingly invalidating. Perhaps there are fringe cases where it didn't and even _couldn't_ work for someone, sure. But stating it's fundamentally bad or flawed only reveals their lack of skill or experience or understanding, or perhaps even their foolishness. Great video, as always! TDD FTW (For The Win!)
Nah, Scrum is better than Waterfall, and DOES have some useful concepts (though not unique to it), but I cannot say it is especially great "in full". The Story Point concept is especially harmful, since it is a waste of time beyond "tiny", "I can do this", "who knows", and easy to misuse (and yes, this is a problem of the process)
@@defeqel6537 as usual, a criticism of Scrum reveals the complainant's misunderstanding of Scrum. You're complaining about Story Points but those are not part of Scrum. At all. That's like saying you don't like movies because popcorn is bad, which is nonsense.
@@davetoms1 Yeah, technically they aren't part of modern Scrum, but in practice they are always there, whether we are talking about abstract points, days, hours, whatever. (Edit: and to highlight why it is less of an issue in modern Scrum: it recommends breaking the backlog into tasks taking one day or less, while before it just recommended estimating tasks and filling the sprint using those estimations.)
To me the prime example is a compiler, one of which I happen to be working on at the moment. If your compiler does not work, you are seriously scr*wed. Everyone knows this. Thus compilers simply work, and are reliable most of the time; if they are not, they die quickly, since nobody uses them. So the question becomes: compilers are a large and complex codebase, and if we can get those right, why can't other programs be proven correct as well? The answer is that compiler developers simply take it as a given that the testing code for the compiler will be 50% of the total work of developing the compiler, the other half being the main compiler code. So does this mean that for most programs it's not worth spending that kind of effort to prove the program is correct? Therein lies the paradox. A typical program takes 50% of the total development time or more in debugging - even very optimistic programmers will admit to that. By that same logic, saying you want to write the program and then do the work to debug it into shape means you prefer to fix the program AFTER the fact rather than BEFORE the fact, which is the net argument against TDD. In a word, you can pay now, or pay later.
I graduated from junior programmer through architect to project manager a while ago (now retired), and the problem I never resolved was cost versus resource and timescale. I feel as if TDD could have been a solution, not because it's a magic bullet, but because it seems like it should help you understand the strengths and weaknesses of your codebase over time better than any other method.
I think what hurts TDD evangelists' credibility the most is when they show you yet another example of TDD'ing a calculator app or a trivial logical game script and then expect everyone to accept that as a proof of viability and immense value of TDD. The usual arguments of "I'm just trying to show you the general approach and a simple app is the best medium for that" sounds like an annoyingly manipulative copeout even if they were not intended as such. If instead every TDD tutorial started with opening up a large-scale project with at least 100k lines of code and walked you through designing and integrating a small but useful feature into an imperfect codebase with an actual history, a lot more people would be willing to give TDD a chance and stop viewing TDD evangelists as snake oil salesmen.
TDD never works in any real-world application that's not a toy calculator. It's taught on easy academic examples that never work in real business cases with several complex dependencies, which are most of the time nearly impossible to mock in a reasonable time. Certain isolated functions might be TDD-able, but the overall application, with its handling of files, databases, FTPs, APIs etc., never is.
Everything is taught using easy examples. The reason is simple. If you try and teach using real world large problem examples then it becomes less about what you are trying to teach, and more about the detailed understanding of the large problem you are using as your example. So what a good teacher will do is look for the simplest example possible which still allows you to demonstrate as many details as possible of the thing you are trying to teach. If it turns out a specific aspect doesn't work with that example, look for another one that covers as many of the remaining details as possible. It is for just this reason that every new way of doing AI in games starts by examining how it works with tic tac toe and moves on to harder examples as needed. Because you already know the problem space, you can spend all of your time looking at the potential solution and trying to understand the techniques involved.
Well it is simply wrong to say that TDD doesn't work in "real world applications". I led a team that built the nationally distributed point of sale system for one of the UK's leading retailers, I led another that built one of the world's highest performance financial exchanges, including public APIs for trading and sophisticated web-based applications that support real time prices and trading. Friends of mine wrote the main train-journey planning & ticketing system for the UK. Tesla practice TDD for the software in their cars. SpaceX practice TDD for the most effective space rockets on the planet. All of these support real world data, in some cases worth billions of dollars per day.
If people put as much effort into practicing TDD as they did into coming up with elaborate excuses about why it doesn't work for them, maybe there'd be less lousy code in the world.
I couldn't agree more!
Hahaha so true
Let them create lousy code which we will get paid to demystify. I do that right now and it is fun, to be honest. Like watching a Columbo episode or something (what is this guy trying to tell me, and why?).
TDD does not solve the 'doing the wrong thing righter' problem ;-)
I've got a friend who comes up with so many excuses not to use version control. I've told him time and time again he'd have learnt to use git by now with all of his excuses.
The only way to have full confidence that you haven't broken anything is to test (almost) everything. I've been on teams where there has been a "no unit test" culture and it sucks. Everyone is scared of breaking production.
Git is only just getting a unit test schema, but has had a "pass the tests" approach since forever.
Wrong. Only the more skilled team members are worried of breaking production. The less skilled ones just write their changes and push the commit. *evil grin*
(Oh, and by the way, the skilled ones then spend 90% of their work time fixing production, and in the end they don't deliver their assigned tasks and get roasted... don't ask me how I know this pattern.)
I have recently become a firm believer in TDD in my work environment. As a developer, it forces me to have a good understanding of the requirements and to plan ahead what the logic ought to be; this is especially crucial in complex services. TDD has minimised the carelessness in my code, in terms of the data models being sent and also the logic itself. It also helps to decouple your code by making it testable, and when a section of code is decoupled, it's easily scalable as well.
Yes, it takes quite a while to set up the testing environment, and it's time-consuming to write test cases (I still hate it, but it's necessary), but at least when an error/bug pops up on your end, you have a clearer picture of which part of the code it occurs in. It actually saves so much maintenance time in the long run.
TDD is something I liked really early after getting introduced to it. I was convinced after the first try.
I have foresight of what I want to implement, or I have to go back and clarify requirements. That does not mean that I have to know the final state of the finished product. It works really well for developing prototypes. As an added bonus, I do not have to think about getting the parameters into the next increment.
The code quality is good not because I am more of a genius than other developers (which I probably ain't). It's good because the tests encourage good design and catch errors before someone else has a chance to spot them.
Also, it makes handing projects over a piece of cake. Want to know how it works and how it is supposed to be used? Just look at the tests.
Tests are some of the best documentation. They are actually live documentation, compared to markdown or e.g. python docstrings, which may or may not be updated when the implementations change.
TDD is unpopular because it is a skill and people don't recognize it as such. They try it, having heard the slogan "test first" thinking that's all there is to it, fail, and give up on it. TDD takes practice, and the result is well worth the effort.
Look, I learned how to do TDD during my studies at university, and it wasn't even in the curriculum, but I was always interested in software quality. Even before university I did a traineeship as an IT professional and saw first-hand what bad code looks like; back then, as a trainee, I didn't know how to solve those problems. It's a 5-year learning curve, and we as an industry move way too fast. Watch out for the next big thing, whatever it will be - a brain chip, I assume. I just hope somebody tests the Neuralink code.
Dunno, I think there genuinely is a type of a person who benefits more from TDD. Some people think differently.
Like, TDD isn't solving problems for you, it's providing a framework to solve problems. Ultimately, you don't care how you got there, you still need to solve problems, hard problems, and you can't rely on the framework to do that for you.
@@gJonii Right, but it helps a lot when you have a good suite of unit tests.
I often end up going back and forth between the code for a function and the test for it. I work with a lot of numerical libraries, and sometimes I am not sure what the output of something I need will be - for example, what the shape of an array resulting from a function call is, and how the data in it is arranged. I often find myself making a tiny function and a minimal test just so I can see the code do something and verify it is called correctly. I then tend to flesh out the test a bit to cover what I think the function should accomplish, and then go implement that in the function. Sometimes I find that the way I wrote the test has resulted in a very poor design of the function, and that with a few changes to the test I can make the function cleaner. Sometimes I find out I need to split some parts of the function into multiple functions. I find that tests really help with the design process.
The one thing people forget about TDD, when they say they don't know what the system should do, is that this is a lie. It is a lie simply because they have the requirements of the system, which put boundaries on what the system can do. You use TDD to analyze these boundaries, to design around them, and to develop the code that delivers the required behavior.
I loved the comment that test-driven development has "test" in the name. I think that exposes such a weakness in our profession: we think we don't need testing because we get it right. Yet we can't. Getting software right the first time is only possible in the most trivial of cases; getting the design right is even harder. I drink from the fountain of TDD and try my best to get better at it. However, I have not worked with many people who feel the same. Testing comes last, if at all. Tests get commented out when the pipeline fails, to "increase velocity". Oh well.
Maybe it's already too late to backtrack, but what about changing the name from Test-Driven Development to something like Consumer-Driven Design? Meaning that the focus is to put ourselves in the shoes of whoever will later consume the code that is about to be written. Whatever takes the spotlight off the testing part will do.
For the tricky aspect of "getting started", one should reflect on the RGB cycle at 6:08 and ask where you start, what you start with, and why. While the process is (or appears to be) a continuous cycle, you need to ask how to start and how to end it (just like the difficulty of starting and stopping major chemical plants - explosions everywhere, unless...). Getting started with TDD still needs a start-up phase.
One should "start" at refactoring: for your non-existent code and the vague concept behind it. This is refactored into a more concrete concept and some pseudo-code, which can immediately be supplemented by its (hopefully) passing test concept. This is then quickly followed by converting the test concept to pseudo-code, then to code, and back to refactoring the real pseudo-code into its own code, which now has a test that reflects the concept under test. And it probably "fails" (the pair doesn't do what was expected).
It may even be necessary to mentally split your existing coding approach, splitting out a small portion that is essentially just there to help visualise how it's working - call that your "tests". It's important to realise this is really a shift of mental model, from the appearance of a code-code-code approach to a code-check-code-"code the test" approach.
Great video! Your point that developers who initially try TDD imagine the code they would write, then make a test according to that imaginary code, is bang on. It wasn't till I grokked that I needed to let go of that mental code and write a minimal test to start with that I began to truly get how TDD works.
I saw a live demo of TDD from Bob Martin that really helped my understanding of it.
I don't believe every tool works for every project, but I did have the joy of working on two projects that were TDD. And when management realized they missed something on Thursday afternoon, we could implement it and the new tests Friday morning, run the tests for a few hours, and deploy to production at 4pm on Friday and go home without a care in the world. We never had an interrupted weekend.
Sounds like heaven. The latest release we publish is Wednesday morning, for... reasons. 😅
But the same management isn't willing to pay 30% extra to build and maintain the tests.
A point about UML/SysML: we can practice so-called AMDD - Agile Model-Driven Development - which is basically TDD plus diagramming/sketching before the tests for those small units.
I admit I struggle a lot with the TDD approach as it's described: "do not write a line of code until you have a test for it". I build interfaces all the time, and often I have neither specifications nor a design for an element I have to create, or for the UI itself. Most of my work is basically experimenting a lot with the UI - styles, animations, details of UI behavior apart from the business logic. I would really need to cover all of that in tests to be sure my UI components look right, appear and disappear when triggered, etc. But doing that with TDD is painful.
TDD works poorly for visuals, but it still works well for testing the ViewModel / Controller logic
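A tiny sketch of what that looks like (SearchViewModel is a hypothetical name for the example; the visual layer stays untested, only the logic the view binds to is checked):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical view-model: holds UI state and decisions, knows nothing about widgets.
    class SearchViewModel {
        private String query = "";
        void onQueryChanged(String query) { this.query = query; }
        boolean isSearchButtonEnabled() { return !query.isBlank(); }
    }

    class SearchViewModelTest {
        @Test
        void searchStaysDisabledUntilAQueryIsEntered() {
            SearchViewModel viewModel = new SearchViewModel();
            assertFalse(viewModel.isSearchButtonEnabled());

            viewModel.onQueryChanged("tdd");
            assertTrue(viewModel.isSearchButtonEnabled());
        }
    }

The styles and animations can then be explored freely, because none of the tests depend on them.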
I've been practicing TDD for over ten years and I can count the number of bugs that have been released on my fingers. So how much time do we spend fixing bugs? Well, next to none - so, more time delivering features for our stakeholders. We also don't suffer from legacy code issues, because we can refactor our code with complete safety. We can upgrade packages and framework versions and know that it all still works. Software changes over time (shocking, I know) as we get more features or changes to existing features. If you can't deliver these changes in a small, quick, incremental way, delivery will take longer and longer and longer. You will never really be agile in this space. I hear the same old arguments about TDD taking longer etc. Writing more code does not mean it takes longer. How long it takes for a developer to commit code is not important; it counts for nothing. Being able to deliver it to your users/stakeholders is the only time you add any value. Getting another team, or a QA in your team, to test when you've "finished" development does take longer. On a slightly different positive note: I've never worked with a bad developer who follows XP practices; however, I've worked with many who don't.
The benefits you describe (stability, ease of change, confidence) are more about the code being highly covered by automated tests than about TDD itself, which means all of that can be achieved when tests are written after the code. And yes, you still don't pass your code to the next stages until you've covered it with tests.
@@Storytelless You'll never get the early feedback by testing after; your design will already have been done by then. You'll also never be 100% sure that you've covered all scenarios by testing after: a coverage tool will only tell you that a test executes a line, not that anything is asserted. One other drawback: if you've never seen your test fail, how can you be sure it's correct?
@@leerothman2715 Even for after-testing you can start with a failing test, if you know what is being tested. And I don't see how TDD gives a 100% guarantee that all scenarios are covered, if it's not testing the implementation (as it shouldn't). The implementation is the devil: you can easily do TDD and have passing tests but uncovered edge-case scenarios.
One more thing about early feedback. The problem with a testing environment is that it's never quite the same as the actual one. For example, I develop interfaces for the browser. I often start with TDD: write a test, write code, it passes. Then I launch the browser and discover that the passing test meant nothing. Is it my own lack of expertise? Sure. But I get the actual feedback and discover those blind spots not thanks to the test, but thanks to running the code in the browser. Then add on top that each browser behaves a bit differently, which creates even more edge cases, and the test doesn't help me spot them.
@@Storytelless I think in UI work, TDD applies to testing the behavior, usability, and accessibility of the UI. For how the UI looks, you can add snapshots at the end of the implementation, and that is enough. You can run the same suite of tests for each browser you support, and if something doesn't work in a certain browser, you just change your implementation to work in that browser while keeping it working in the others.
TDD probably doesn't make sense unless the person writing the code can envision it as a series of functional modules, each with a specific functional contract. Then you simply write the test to that contract for the module. Those modules don't have to be simple function or service calls, either. I guess this can help with understanding the core "contract", or the requirement of how the module should behave, by doing the tests up front - and basically helps in designing better-organized code overall.
TDD is wildly unpopular because most people don't think backwards enough to naturally generate a flow starting with tests.
And that stems from not practicing it. TDD is not something that you just do; it takes practice (like almost anything in the world). "I'm starting to use TDD in my real-world customer project from now on, without ever having done it before" sounds to me as smart as saying "I will write this new customer project in Go, although I've coded exclusively in Java for the last 15 years."
Practice this with small things. Start with TDD katas. If that works, try to make room for it in smaller units of your production code, where you would think "I don't need it, but it can't do harm." So many devs miss that point.
I haven't been in environments that practice TDD. Most places have created tests after the fact. But I would like to use the TDD-approach, to be more explicit with what I build, even if the assertions are implicit.
And this is probably why people see unit testing as cumbersome and a waste of time: they think that they have to be explicit down to the last detail. But you don't have to.
You just have to think about what you build and its behavior, and do less write-compile-run (i.e. trial and error without knowing what you want to achieve) - on-the-fly programming.
The basic problem I have with TDD is that it's just not how the creative process works for me. If I'm trying to figure out a hard problem at work, I might be relaxing doing other things on a Saturday when suddenly an idea pops in to my head. My reaction to that is to go to the computer and start implementing that idea. It's not to go to the computer and start writing tests for a solution to my idea. To me it would be like having an idea for a piece of music and instead of sitting down and immediately trying to play that music, try to figure out who you should call to listen to the yet unfinished music when it's done.
To the people who do TDD: if you get an idea for a play project outside work, and you're all excited about it, do you immediately run to the computer and start writing tests? The first time you heard about Sudoku puzzles and figured the problem should be easily solvable by a computer program, did you sit down and write tests before you wrote the 20 lines of code that solve Sudoku? I just wrote the Sudoku solver. And if I were to write it as production code, I would subsequently have written some tests.
When you have an idea, you have the overall requirements for the behavior of what you think the thing should do. TDD lets you write those requirements down in a way that ensures you follow them, or realise your assumptions about the behavior are wrong. It's as if you had an idea for a piece of music: it's abstract in your head, but you know you would like the beginning to sound a certain way. You write down the way it should sound on a piece of paper (this is your test), and then you play it on the guitar over and over until it sounds the way you imagined - or you realize that the way you imagined it would not sound as good as you first thought.
Your comment demonstrates some of the reasons people don't get tdd.
First, you are equating the module in your code with a unit, then equating the module's test suite with the unit test, and then positing that you have to write the entire test suite before you write the code.
This just is not how modern testing defines a unit test.
An example of a modern unit test would be a simple check that, given the number to enter into a cell, verifies the number is between 1 and the product of the grid sizes, and returns true or false.
For example, common Sudoku uses a 3 x 3 grid, requiring that the number be less than or equal to 9. So the code would take the grid parameters, cache the product, check the value was between 1 and 9, and return true or false based on the result. This would all be hidden behind an API, and you would test that given a valid number it returns true.
You would then run the test and prove that it fails. A large number of tests written after the fact can pass not only when you run the test, but also when you either invert the condition or comment out the code which supplies the result.
You would then write the first simple code that provided the correct result, run the test, see it pass, and then you have validated your regression test in both the passing and failing mode, giving you an executable specification of the code covered by that test.
You would also have a piece of code which implements that specification, and a documented example of how to call the module and what its parameters are, for use when writing the documentation.
Assuming that it was not your first line of code, you would then look to see if the code could be generalized; if it could, you would refactor it, which is now easier to do because the implemented code already has regression tests.
You would then add another unit test, which might check that the number you want to add isn't already used in a different position, and go through the same routine again, and then another bit of test and another bit of code, all the while growing your test suite until you have covered the whole module.
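For what it's worth, a minimal sketch of that first test-and-code pair might look like this (JUnit 5; CellValueValidator is a name invented for the example):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // The code under test: hides the "1..product of grid sizes" rule behind a tiny API.
    class CellValueValidator {
        private final int max; // cached product of the sub-grid dimensions, e.g. 3 x 3 = 9
        CellValueValidator(int rows, int cols) { this.max = rows * cols; }
        boolean isValid(int value) { return value >= 1 && value <= max; }
    }

    class CellValueValidatorTest {
        @Test
        void acceptsNumbersInsideTheGridRange() {
            CellValueValidator validator = new CellValueValidator(3, 3);
            assertTrue(validator.isValid(1));
            assertTrue(validator.isValid(9));
        }

        @Test
        void rejectsNumbersOutsideTheGridRange() {
            CellValueValidator validator = new CellValueValidator(3, 3);
            assertFalse(validator.isValid(0));
            assertFalse(validator.isValid(10));
        }
    }

Written test-first, each test fails until the matching line of CellValueValidator exists, which is exactly the red-then-green validation of the regression test described above.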
This is where test first wins, by rapidly producing the test suite, and the code it tests, and making sure that the next change doesn't break something you have already written. This does require you to write the tests first, which some people regard as slowing you down, but if you want to know that your code works before you give it to someone else, you either have to take the risk that it is full of bugs, or you have to write the tests anyway for continuous integration, so doing it first does not actually cost you anything.
It does however gain you a lot.
First, you know your tests will fail.
Second, you know that when the code is right they will pass.
Third, you can use your tests as examples when you write your documentation.
Fourth, you know that the code you wrote is testable, as you already tested it.
Fifth, you can now easily refactor, as the code you wrote is covered by tests.
Sixth, it discourages the use of various anti-patterns which produce hard-to-test code.
There are other positives, like making debugging fairly easy, but you get my point.
As your codebase gets bigger and more complex, or your problem domain is less well understood initially, the advantages rapidly expand, while the disadvantages largely evaporate.
The test suite is needed for CI and refactoring, and the refactoring step is needed to handle technical debt.
Testing is not something you implement when you have a sudden inspiration to play around and get something working. Testing is something you do when you've already done that, and now you need to integrate it into an environment that is opinionated and doesn't care how you got it working somewhere else. The code in these two situations is different, the process is different, the goal is different, and the skills required to do this well are different. In the latter case, if you don't write at least your test cases up front, you've started to write code in a way that you're not sure is going to work, because you haven't prioritized making it work, and your final product will likely be less readable than it could have been, buggier than it should be, and late. Testing is about quality, not creativity.
That's not a diss; I can't do what you do. Testing is what enables me to ship something good. I can't reason about complex systems composed of complex parts unless I use testing to help me understand what knowledge about the system I can take for granted. I don't have aha moments; I design them by building in testable steps.
It seems to me that you misunderstand the process of TDD.
You don't write all the tests before writing the code. You write the test for the feature you are going to implement next.
Let's assume that you are going to create a class
    class SudokuSolver {
        SudokuSolver(int[][] board) throws InvalidBoardException { /* ... */ }
    }
The first step is to validate that the board size is valid.
So you create a couple of tests that create a new SudokuSolver with some invalid board sizes and assert that it should throw InvalidBoardException.
Then you want to validate that the numbers are valid. Now you create some tests with invalid numbers on the board and assert that it throws InvalidBoardException again (the exception message may say that this time the problem is with the numbers).
and so on...
The tests shouldn't know too much about the internals of SudokuSolver. If you test individual methods inside SudokuSolver then refactoring gets harder, because the tests become coupled with the implementation. But if the tests know nothing about the internals then refactoring becomes super easy because the tests validate that you didn't break the intended outcome.
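A minimal sketch of those first tests, written against the public interface only (JUnit 5, using the SudokuSolver and InvalidBoardException named above; the board conventions are assumptions for the example):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    class SudokuSolverTest {
        @Test
        void rejectsABoardWithTheWrongDimensions() {
            int[][] tooSmall = new int[4][4]; // assuming a 9x9 board is required
            assertThrows(InvalidBoardException.class, () -> new SudokuSolver(tooSmall));
        }

        @Test
        void rejectsABoardContainingOutOfRangeNumbers() {
            int[][] board = new int[9][9]; // assuming 0 means "empty cell"
            board[0][0] = 42;              // valid filled cells hold 1..9
            assertThrows(InvalidBoardException.class, () -> new SudokuSolver(board));
        }
    }

Neither test mentions how SudokuSolver validates internally, so any refactoring of those internals leaves the tests untouched.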
Personally I think TDD is taught in particularly stupid ways. The way it is taught often implies that even the most stupidly obvious parts of your function are to be iteratively constructed only by adding a test for them first. That makes no sense in practice. Of course, if you are going to write a function to add up two numbers, you are not going to first write a test for a function with zero parameters and a function that takes zero parameters, then a test for a function with one parameter and change the function to take one parameter, then another test for two parameters, then change the function to take two parameters, ...
Amen. I would also say that TDD is in general too focused on single methods, thanks to tutorials and teachings. If I write a calculator module, I would create a TDD test for the interface Calculator.add(a, b) and implement that interface. Once I get my result, I can refactor the code behind that interface as much as I want - move it into 5 methods or classes, include a math library, whatever is necessary - my TDD test makes sure that the stuff still works.
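In code, that interface-level test might be as small as this sketch (JUnit 5; the names are assumed for the example):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    interface Calculator {
        int add(int a, int b);
    }

    // One possible implementation; the test never depends on its internal structure.
    class SimpleCalculator implements Calculator {
        public int add(int a, int b) { return a + b; }
    }

    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            Calculator calculator = new SimpleCalculator();
            assertEquals(5, calculator.add(2, 3));
        }
    }

Split SimpleCalculator.add into five private methods, or delegate to a library, and this test still passes unchanged.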
But the way I was taught TDD, every method involved in adding those two numbers behind that interface must be created the TDD way. So if I decide to split the add method internally into 5 methods (for whatever reason), every single one must again go through that TDD loop, creating a million test methods, which makes my module-internal code basically impossible to refactor - nobody will ever touch that structure again, because they would have to change every single test method. In my opinion, TDD can only work if we don't treat every single method as an isolated test unit. The test unit must be a module or another kind of "stable" interface, and it creates the regression tests that allow me to quickly refactor the internal implementation details without breaking stuff.
And to be honest, even then TDD will often not work. A lot of our applications take data from an API (REST, Kafka), do minor transformations and sometimes validations, and hand the data over to an object-relational mapper. You take data from one framework and carry it to another framework with the help of a framework, and even a lot of the validation is done by aspects plus frameworks. Many developers just glue together framework calls and don't see a point in testing those 5 lines of code with a TDD approach.
@@vanivari359 Coverage-oriented tools force you to write unit tests at class and method level, that's why.
@@vanivari359 In my experience, people who try writing tests for the internal methods of a class are usually amateurs at writing tests, because doing so is unnecessary if the code is designed to be testable and the public interface is tested. I don't think it has anything to do with TDD.
Also, I'm very sorry to hear that such simple requirements - minor transformations and validations - require so many frameworks. I can see why it's untestable then 🙁
The problem might be that there is no one catch-all approach to teaching an individual TDD. I think it highly depends on, and must be tailored to, the individual.
Thankfully, Dave Farley has been that individual to me.
@@Tkdestroyer1 I'm working as a vendor for a big customer with very draconian rules about code quality, and therefore I am forced to simply comply with their definitions. I prefer not to struggle against even obnoxious things like pure line code coverage, because it's pointless; I have zero influence over any change. And unfortunately I suspect I am not an exception.
If I implement first, then testers will get the feature and can start testing early in the sprint. And I can do unit tests in parallel. And if it fails acceptance, I might just be lucky and won't have to rewrite a lot of unit tests.
In reality, you are bogged down by the previous sloppy implementation, already deliver late to QA and skip testing entirely because the toxic work environment doesn't support quality approaches anyway.
Since learning about TDD from Dave, I've started using it for frontend and I'm not sure why people would say it's not possible because it definitely is!
That one is actually quite easy to answer if you look at how GUI code has been historically developed.
First, you write too much of the UI which does nothing.
Second, you write some code which does something but you embed it in the UI code.
Third, you don't do any testing.
Fourth, due to the lack of testing, you don't do any refactoring.
Fifth, you eventually throw the whole mess over the wall to the QA department, who moan that it is an untestable piece of garbage.
Sixth, you don't require the original author to fix up this mess before allowing it to be used.
When you eventually decide that you need to start doing continuous integration, they then have no experience of how to write good code, how to test it, or why it matters. So they fight back against it.
Unfortunately for them, professional programmers working for big companies need continuous integration, so they then need to learn how to do unit testing to develop regression tests, or they risk being unproductive and being fired.
@@grokitall 👆 This seems to be what it comes down to. I think that ultimately, it comes down to people being stuck in their ways and not understanding the idea that you should work out what you want your code to do (precisely) before actually writing it.
I think there is also an issue with how tdd is presented, partly preaching to the choir.
A lot of those opposing tdd do not have the same definitions as the ci and tdd community.
They oppose unit testing because to them a unit is a complete module, and the unit test is every test in the suite to test the module.
Similarly they write the entire module, and only write regression tests when they have to, and adding tests or security or portability after the fact is always a nightmare.
Because they don't write tests first, their code coverage is minimal, often consisting of UI tests and end to end tests which are fragile, and invert the testing pyramid.
A lot of them come from the windows ecosystem or from the object orientation community, where the definitions don't match.
Fully agree with your analysis of the failure of UML-driven design. However, I do believe in Model Driven Design where the model is a Domain Specific Language.
Interesting video, definitely some good food for thought. I've never practised TDD, but I have, of course, mentally raised the objections you mentioned. Your counterpoints are pretty good though, so I might put some effort into trying TDD with my current personal project.
Try not, do
Do until you get good at it, and then enjoy doing it more
A practical example of my struggle with UI and TDD: I am writing a Next.js app and I want to build complicated search filters. The business logic is totally fine to test: pass data, send or don't send, handle errors, and test validation. Then comes the UI. What UI components should I create for the best user experience? Should I use a select? Should I put all the filters on one screen, or should I use some kind of modal? I don't know beforehand; I experiment. One moment I think my component should asynchronously show a dropdown with search results when the user updates a filter, and the next moment I realize: oh no, that's not good at all, I should only trigger the search on a button click and show all the results on a separate page. If I create a test for this UI behavior beforehand, I have to delete it and start over each time I change my mind, which is often.
Do you need much actual logic behind the UI exploration? Just explore it separately, and when you've decided, add tests to the control logic to check that the data to be shown in the UI is loaded correctly. If your UI depends on the rest of your application existing, you have a design problem.
@@defeqel6537 I agree that a significant part of my problems are design problems. Funny enough, I don't know beforehand what the good design should be - like how I should separate my components, or whether the parent or the child should control the UI state, etc. Usually people propose to just add a bunch of abstractions, but with too many abstractions, while the code gets really easy to test, the next time I read it, it takes a hideous amount of time to understand. And in some cases it causes performance issues. Again, you will say it's a skill issue, that I should design the code so it's testable, performant, and easily readable. I agree, and I try. And in most cases I fail on the 1st, 2nd, 10th try.
I try every day to get better at doing TDD, and I think I'm reasonably good at using it when modifying or fixing existing code, but I still struggle when writing completely new code.
Part of this is the mental block: if I don't have a single line of code written down, where do I even start with writing a test? In your Fraction example in the video, just from the way the test is written we can see that there is a Fraction class which takes an integer in its constructor and has an Add() method which takes another Fraction and returns a type supporting the toString() method. All of this is code that has to be written before writing even that simple test (or else we throw away our IDE's nice introspection and type-checking features). So maybe I need to understand TDD more as "write the tests before the *implementation*", and not the much stricter "write the tests before the *interface*".
The issue with every completed piece of TDD material - article, post, video - is that it has already been written. When you start writing a test without any code, you just don't have a Fraction class or its constructor. You just type some words in a class with a Test suffix. It won't compile, of course... but imagine you typed 'new Fraction(1, 3)'. Do you like it? You might or might not. You can change it; it's not compiled yet, and there is no code to change yet. So you change it to, say, 'Fraction.make(1).over(3)'. Do you like it more? That's just one way to try out and experiment with the API of it - that's what Dave said in the video. When you've finalized that piece of API, you actually make it compile by writing code: just a constructor or a static method or whatever.
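A sketch of that API-shaping step (JUnit 5; the fluent Fraction.make(...).over(...) style is just one hypothetical outcome of the experiment):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class FractionTest {
        @Test
        void oneThirdPlusTwoThirdsIsOne() {
            // This line existed before Fraction did; writing it first
            // let the call shape be judged before committing to it.
            assertEquals("1", Fraction.make(1).over(3).plus(Fraction.make(2).over(3)).toString());
        }
    }

    // Just enough implementation to make the test compile and pass.
    class Fraction {
        private final int numerator;
        private final int denominator;

        private Fraction(int numerator, int denominator) {
            this.numerator = numerator;
            this.denominator = denominator;
        }

        static Fraction make(int numerator) { return new Fraction(numerator, 1); }
        Fraction over(int denominator) { return new Fraction(numerator, denominator); }

        Fraction plus(Fraction other) {
            return new Fraction(numerator * other.denominator + other.numerator * denominator,
                                denominator * other.denominator);
        }

        @Override
        public String toString() {
            // Minimal rendering: this sketch reduces only the whole-number case.
            return numerator % denominator == 0
                    ? String.valueOf(numerator / denominator)
                    : numerator + "/" + denominator;
        }
    }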
Later on, when you've finished, no one would be able to tell how you started writing it.
Also, nothing stops you from cutting corners. I'm not strictly following the rules of TDD. Sometimes I add the class and constructor first. Sometimes I even play with code and implement something. I still consider it TDD :-)
If you'd like we could connect on discord/zoom, share screen and try something.
I think people get hung up on explicit assertions, checking every little parameter or return value. You don't need to cover everything. You just need to make sure that your main assertions are correct, and you can be implicit about the stuff that bothers you less.
Some folks think TDD is overengineering, but TDD prevents overengineering in general. You won't make anything you don't need, especially if you're doing TDD top-down.
I think people have different ideas about what constitutes a "unit" in unit testing. As you describe here writing tests against the public interface to your code is very different from testing private functions and methods.
Artem's problem seems to be not understanding that you're only going to /try/ /something/ that /could/ /possibly/ work - not "I know it's going to work", but actually just an idea, a direction, of what /might/ possibly work. So then you take a really small step /in that direction/ and move forward. Then you discover something you didn't think of (maybe) and adjust direction slightly. Another step. And so eventually you get to something that works but might be ugly, /and/ you understand what you really want much better. So you tidy and refactor, then return to fleshing out the problem and solution some more. Once you get used to moving in this way, you will find it incredibly self-reinforcing and actually very quick.
If TDD is basically requirements, but acknowledging the fringe cases and the actual applications, is saying TDD is broken by extension saying that requirements too are broken? Does software just... manifest?
In QA, I have always written test plans before the code was written, and proceeded to write the tests without reference to the code, and never coordinated any of this with the developer at all. Likewise, I expect the developer to read and respond to bug reports without seeing any of my test code or knowing any details about how it works. And amazingly, we both managed to do our jobs just fine!
My major complaint about TDD is that a whole lot of developers will "cheat" by writing tests that are easy to pass, or by exploiting their knowledge of the test - as its author - to write code inadequate to the real world. I think it's generally less effective for the same person to write both the code and the test, and if that is your ONLY testing... you're gonna have a bad time.
It's about "trust but verify." TDD encourages developers to think about what their code needs to do, and to ARTICULATE what that is, before they actually write the code. If your goal is to write good code, this improves your code. But your code is still not going to be perfect, and QA still needs to do a real test pass.
The dynamic that has soured me to TDD is when QA sends a bug up to dev, and dev says "my tests pass, yours must be wrong" without even bothering to follow the repro steps in the issue report. Then you end up getting a visit from the PM asking why we keep testing the same thing twice.
Because your devs fuck up the test, that's why. They're not test experts, they're dev experts. An amateur test is a good first line of defence to keep the most egregious bugs out of production, but then you need a competent test from a specialist to make sure the less-obvious bugs are found and fixed.
And to be clear, this isn't the fault of TDD! TDD is working just fine and doing what it is supposed to do, just like AGILE, but when you ask human beings to implement anything they are going to miss the point and do it wrong and fuck it up.
That's not an argument. Your test description is abstract, but in order to write a test, I have to reference the interface I write it against. I need to know each parameter and its type that goes in and out. I can do that starting from a test, but it's almost inevitable that my idea is wrong. I might change that interface 5 times before I'm happy with it, and maybe the whole cut of that module and interface was wrong and I move the code to another module. So I have to refactor test code and test cases constantly, which is a tedious task - especially if you're trying out whether moving those 3 classes makes the pieces fall into place or whether it was a bad idea. The refactoring of logic I do in the early stages of a system, when you have only a rough idea of modules, interfaces and responsibilities, is so big that TDD is almost always in the way and does not add value.
By the way, another reason why TDD "is broken": tons of code in the business world is stupid and simple - it just maps and forwards arguments to another layer. Testing that stuff explicitly is a giant waste of time and slows down your code base, because you mock 5 interfaces to test 1 or 2 lines of code. A lot of logic is so trivial that people will not use TDD - especially because most people think that TDD has to be used for every class and method in the system.
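As a sketch of the kind of test that comment describes (all names are hypothetical): the production code is one line of forwarding, and the mock-based test merely restates that line.

    from unittest.mock import Mock

    class OrderService:
        def __init__(self, repository, mailer):
            self.repository = repository
            self.mailer = mailer

        def save(self, order):
            # the entire "logic" under test:
            self.repository.store(order)

    def test_save_forwards_to_repository():
        repository, mailer = Mock(), Mock()
        OrderService(repository, mailer).save("order-42")
        # the assertion is just the production line spelled backwards
        repository.store.assert_called_once_with("order-42")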
@@vanivari359 See? You missed the point and did it wrong and fucked everything up. QED
TDD is best when the language (or framework) makes it easier to write and run a test than to run the code in situ.
A Rust unit test is so convenient and fast to write it's hard *not* to write tests (either up front, or as you go).
A React project or some legacy hodgepodge project though? Hardly worth the hassle a lot of the time.
It's not about the language. React is used mostly for UI, and UI is famously hard to test because it's on the edge of the system.
@@danielwilkowski5899 I've had to work on a number of codebases that are not UI, but are obscure or old, so don't have good (or any) testing infrastructure.
I had to write my own test system from scratch when working on a library of Adobe plugins for example. None of the code even interacted with the UI, but it was still hell to test!
I like TDD as a concept, but in my opinion it doesn't scale as the complexity of the required features grows. It is quite rare in my experience that a requested feature can be represented with just one or two changes to the data. Instead, my code frequently has to pull data from a lot of sources (in a recent example, I had over 20 properties in the user input, 4 calls to 3rd-party APIs and 6 DB tables as the input; a change in any one of them makes a big difference to the final output).
TDD fails in a few ways here. First, it is very easy to fall into the trap of defining the perfect ingress and egress objects; in fact, the ins and outs are what most TDD evangelists would recommend you start with. The problem is that with this approach it is very easy to get 100% test coverage and still have a highly unstable system, because the data being provided by users or 3rd parties is almost never perfect. In fact, in my career, most bugs and failures came from users or 3rd parties doing something we did not predict. This is where the idea of TDD being a safety mechanism for release falls over.
The natural workaround is to create strict type safety, validate all inputs and add robust error handling - or, preferably, to use languages and frameworks that do the heavy lifting for you as long as you define your models correctly. But this is the second problem with TDD: I have to couple the tests to the data models - specifically to what exceptions are thrown and what type casting is valid - often resulting in tests that prove more about the language and framework than they do about my own code (the sketch below tries to illustrate this).
But let's assume I'm lucky enough to work on a project with a clear separation between the data models and the functional code, and the data side is already robust. I still have to decide, should the function just throw errors upstream? Is the whole function in a single try-catch, or does it try every validation and line of logic? Am I using exceptions or return statuses? Do I have to pass valid A and B parameters to test if parameter C's errors are handled?
I think the fundamental issue is the assumption that we can bend the universe to our will of wanting a function that has a single point of input, does just one thing, has one output, and deals with just one state. Even if you manage to abstract the user input, database state, messaging system, validation, processing and formatting into their own functions, you still need a main thread or function that orchestrates it. Not testing that is foolish; thinking you can write a test for it without knowing how the orchestration is done, even more so.
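A hedged sketch of the model-coupling point above (every name invented): the test ends up welded to the model's choice of exception type rather than to the business rule itself.

    from dataclasses import dataclass

    @dataclass
    class Payment:
        amount_cents: int

        def __post_init__(self):
            if self.amount_cents <= 0:
                raise ValueError("amount must be positive")

    def test_rejects_non_positive_amount():
        # If the model later switches to a validation library that
        # returns error lists instead of raising, this test breaks even
        # though the business rule is unchanged.
        try:
            Payment(amount_cents=0)
            assert False, "expected ValueError"
        except ValueError:
            pass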
There's a very good video by Dave about TDD being a good method for design. I think your example of orchestrating a lot of various sources of data and TDD failing to address it might be a bit misleading. Saying TDD can't test that function after the fact is disingenuous to what the practice is about. Moreover, I would think applying TDD for the design of this piece of software would result in a different design overall.
@@Flobyby I've seen most of Dave's videos, and TDD is the one thing I don't find feasible at scale. I'm not suggesting testing after the fact - quite the opposite, I would like the ability to develop the tests first, before committing to the solution. But it is the last sentence you wrote that I take the biggest issue with, and it illustrates the key problem with the TDD approach.
I do not control the universe my code sits in. To start with, we can't have a world where users have direct DB access and hold API keys to the 3rd parties, for what should be obvious security reasons. Neither do I control the business requirements that dictate who has what data, what they can do, and how we should react to their actions. Therefore there has to be a single process, endpoint or function that takes the user input, DB data and any 3rd-party outputs and uses them to alter the state - often multiple states, in the form of talking to the 3rd parties, updating the DB and responding to the user. I don't get to change this, even if I wanted to. I can only test and develop my system, which sits in the middle.
I do see the value of TDD in some domains; at a small scale it does well. But as soon as you have to deal with multiple external dependencies it fails to achieve its goals. First, because you have already written and tested all the building blocks, so you are no longer working with a clean sheet. Second, because you have to decide in what order you will execute and validate the steps of the main process, and couple the order of the test and the code together. Third, because it gives a false sense of security that your code is well tested, when actually it may have huge gaps.
There are plenty of classic examples of bookshops and the like. The reality is that when the user presses "buy" they don't just expect an "order placed" message: they expect the shipping label printed and their bank account charged; the business needs the stock table updated, and someone still needs to be told to go and package the book. All of that has edge cases - if there is no money in the bank, no paper in the printer, no one at the store, etc. And all of it has to happen, as far as the user is concerned, in a single button press.
From your starting point, it feels like what you need is a layer of abstraction between your business logic and data sources that will handle all that you have described: ensure data types, accept only those types that will 100% fit your business logic, handle errors, etc. And this is actually a task that can be performed with TDD.
@@Storytelless My data is pretty well abstracted already, but it doesn't change the problem; it just moves where the problem is.
TDD is great at testing these abstractions or anything that is small enough. But you can't get away from needing to test the user input validation and having your data distributed across multiple sources at some point. You can't get away from the need to handle real-world errors coming from the database or 3rd parties. You can't avoid the emergent behaviour of the models/abstractions interacting with each other depending on the order in which you trigger them.
It would be nice if everything could fit into the abstract world, but at some point, you have to make a call on what models to use in what order and how to handle errors. The idea to "just abstract it away" or "hide it in some plumbing" is complete snake oil. Your system has real-world inputs, outputs and dependencies. If you don't test the plumbing itself, but only the sanitised version of the universe, you may have 1000s of green tests, but I could easily break your live platform.
There is a good reason why you rarely see people show the plumbing under real-world examples; that code is never easy to follow or maintain, but people selling TDD courses and consultations don't want you to see that until you buy into their system. In this aspect, TDD is to testing what Jira is to Agile. And this isn't an attack on Dave or anyone else in that industry; I know it takes a lot of time to get to this level of advanced explanation, and it is fair people charge for providing it, but the other side of the coin is that TDD is a complex practice that requires advanced setup to work in the real world, not magic. If it was that easy, there would be no value they can add and no market for people explaining it.
If you click on some menu item, then a page should appear with a list. Once you are there, clicking the add button should bring up a form. Filling the form properly and pressing the save button should close the form and make the new entry appear in the list. Filling the form improperly should point out the wrong fields. Hmmm... there is nothing special here. This can all be test code. Keeping it decoupled from the implementation is a bit hard, but that is also doable.
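As a rough sketch of those steps as test code (Python + Selenium; the URL and element ids are invented, and decoupling from the real markup is exactly the hard part mentioned above):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_new_entry_appears_in_list():
        driver = webdriver.Chrome()
        try:
            driver.get("http://localhost:8000/items")   # menu item -> list page
            driver.find_element(By.ID, "add").click()   # add button -> form
            driver.find_element(By.NAME, "title").send_keys("New entry")
            driver.find_element(By.ID, "save").click()  # save -> form closes
            rows = driver.find_elements(By.CSS_SELECTOR, "#item-list li")
            assert any("New entry" in row.text for row in rows)
        finally:
            driver.quit()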
TDD can also be understood as an individual practice; you can practice it even if the rest of the team doesn't! It will be trouble to deal with maintenance tasks involving code written by the rest of the team, though. My point is: use TDD, it is a wonderful approach - but don't if you can't see the point.
Lots of good points, particularly amusing was the comment "A picture isn't worth a thousand words", I sometimes quip, "but, which thousand words?"
(Small complaint: For people who know TDD really well, the video title is a bit of click bait)
Every example of TDD is always some simple function. Real-world applications often have a lot of complexity, dependencies, non-pure code, etc. I would like to see examples of that. Most of the people I have met who claim to do TDD actually do very little of it in their day job, so I never got the chance to see it in action on a complex system. I suspect that is most people's problem with TDD.
But in part the reason that TDD code looks like that, and, as you call it "real world code" does not, is because the real world code wasn't designed to be testable and the TDD code was. One of the huge values of TDD is that it makes us write simpler code. This is certainly NOT about inherent differences in the complexity of the tasks that TDD is applicable to, it is about the shape of the code that results from TDD, which is better, as well as being better tested!
@@ContinuousDelivery Wait... are you confusing TDD and testing here? All the code I'm talking about is testable and has a full suite of tests (you're not going to get better-tested code). The question is NOT about testable code, it is about TDD... Two different things.
In the .NET world, if TDD is a broken practice, this is partly thanks to the code Microsoft scaffolds when creating a new WinForms, WPF, Xamarin/MAUI, WCF, MVC or Web API project. What starts as a small quick-start application - a small god assembly - ends up becoming a big god assembly, in which it is difficult to craft unit testing and integration testing at different levels and domains. The solution is simple: split those folders into multiple assemblies (or JARs in Java), conforming to .NET component and architectural design, as Juval Lowy advised MS a long time ago, when he was an external consultant on architectural design during development of the .NET Framework.
I don't know what an integer fraction is, unless it is some kind of abstraction inside a function. Like, 0.6 is the input and the 6 is the integer fraction inside the function, where it makes sense.
1/1 and 2/1 are integer fractions - valid fractions with a denominator of '1'. They are simple because we can ignore the denominator.
@@ContinuousDelivery Thanks for the explanation!
I think that what we need here is a test like Jez Humble's continuous integration test, but for TDD.
Most of the criticisms I see can be summarised as "I write legacy code and don't get the testing pyramid, unit testing or regression tests, so why would test-first regression testing make sense?". As they don't get regression testing, they also fundamentally don't get refactoring, continuous integration or technical debt.
Time for them to learn some of the basics of the industry wouldn't you say?
I worked with strict TDD for a year and noticed that it ensured quality, but was almost 3 times slower. TDD can be demonstrated easily with simple examples, or with units that have business logic or rules that need to be examined. However, when you try to integrate components, the tests will contain a lot of mocking and be more complicated than the actual code. Another case is when you want to try a new product and see whether customers will like it or not. When I try to do that in TDD mode, I spend weeks instead of a couple of days to deliver. The real test here is customer feedback.
If you're complaining about TDD, you aren't doing TDD. You could read Kent Beck. He still publishes books and blogs to this day which have answered all of my questions about TDD.
Personally, if I'm just doing a throwaway proof of concept, I don't write tests at all, or even handle most of the error cases. The important part is that it's throwaway and can never be "productionised" - trying to do so is doomed to fail. These PoCs are just to quickly get an idea of what is possible and to compare implementation details, not to produce production code.
Hello Dave,
To me the problem of TDD is within its name. The intention is mixed with the technique.
The intention is to define the expectation in an automatically verifiable way. Then you write the code that makes the verification succeed.
The fact that we use test frameworks is kind of an implementation detail in the story.
Maybe we would better talk about Define your Expectations First... DEF DD (because if it doesn't end with DD we have an acceptance problem 😄)
Naming things is still one of the toughest problems, along with cache invalidation, isn't it?
What do you think?
I guess BDD somewhat addresses that?
What are your views on event sourcing? Perhaps a video? If it is good, it should lead to better software faster; otherwise, tell us where it is weak or bad. Thanks!
11:52 "TDD forces us to be the first consumer of our code". Yes!
Not only that, but when you break the API you feel the pain before your users even find out about it, which just goes to show that a lot of framework and core library developers are not doing regression testing and continuous integration.😂
@@grokitall Sure but they make actual money
They're not bitching about test-driven design; they're complaining about the hugely complex testing frameworks.
Actually they are mostly complaining about having to start writing regression tests and having to write testable code, when they could previously write untestable garbage and force QA to deal with it.
Regression testing shines a light on how bad your code is, and they don't like that, as they prefer to think that having spent some time programming, they must be good at it. Test-first hits you in the face with how bad you are at testing, which is not taught, and TDD's refactoring step shows just how much copy-paste coding is in your code base.
Understandably, a lot of people who never had to do testing don't like it and push back, mostly with comments which can be paraphrased as "I only know how to write legacy garbage, so it doesn't work for me" - not realizing that this is what they are saying.
Excellent info. Well explained. Thanks.
Unfortunately TDD is not something you can learn/appreciate until you take steps to get there.
We all start by writing concrete code and manually test it by running it with different inputs.
At some point you realise you can use a test to automate the input, but the test is still highly coupled to your code
Then you don't want the tests to be fragile, so you decouple them by making the interface and boundary clear
Then you start writing code just based on interfaces you can test, and leave the concrete code for later. Happy-path coding (see the sketch below)
Then you realise that happy-path code... is effectively a test
Tada~ TDD
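A tiny sketch of steps 4 and 5 in Python (all names illustrative): the test talks only to an interface, and the concrete code is left for later.

    from abc import ABC, abstractmethod

    class RateProvider(ABC):
        @abstractmethod
        def rate(self, currency: str) -> float: ...

    def convert(amount: float, currency: str, rates: RateProvider) -> float:
        return amount * rates.rate(currency)

    class FixedRates(RateProvider):
        # A throwaway fake; the "concrete code for later" is the real
        # provider that hits a service or database.
        def rate(self, currency: str) -> float:
            return {"EUR": 2.0}[currency]

    def test_convert_uses_provided_rate():
        assert convert(10.0, "EUR", FixedRates()) == 20.0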
Congrats on 200k+
For 10 years I've been practicing TDD and everything works for me. I write a huge amount of code with a small number of defects; most of the time I'm writing new code while others are trying to debug why their code isn't working! So each time I hear that TDD is not working, it makes me smile. TDD works; maybe you don't know how to work with it.
What if you have to use a new framework but don't have time to study it in order to understand its testing approach?
@@Aleks-fp1kq I think you're missing one important thing here. You should write your code as independent from the framework as possible. Once your code is highly dependent on a framework, you are missing the opportunity to migrate it to another one in the future. OOP and SOLID can help you break the dependency on the framework. I always try to use frameworks as little as possible in my functional code. Let's take Django for example: all the logic is in code that can run outside Django, and Django just manages the server side. You can achieve this by strict utilization of OOP and SOLID - think of two separate Python packages, a Logic package and an Execution package, built separately.
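A rough sketch of that separation (hypothetical names): the Logic package is plain Python that runs without Django, and the framework side stays a thin adapter.

    # logic/pricing.py - no framework imports at all
    def discounted_total(prices: list[float], discount: float) -> float:
        if not 0 <= discount < 1:
            raise ValueError("discount must be in [0, 1)")
        return sum(prices) * (1 - discount)

    # execution/views.py - the Django-specific half, kept as a thin
    # adapter you could swap for Flask, a CLI, or a test (left as
    # comments because it only runs inside a Django project):
    # def checkout(request):
    #     prices = [float(p) for p in request.GET.getlist("price")]
    #     return JsonResponse({"total": discounted_total(prices, 0.1)})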
@@Aleks-fp1kq Shouldn't that be the same as learning a new language?
It takes some time to pick up the syntax, but the concepts are the same. For tests it's even simpler: you just need to assert that your implementation has the behavior you expect it to have. The test fails because you haven't implemented it yet; you write the implementation; the test passes.
Clearly TDD is very misunderstood because of the "T" in the branding. If it were called, for example, PCR (Plan Code Review) or something sexier, people would maybe at least pause and try to understand it.
So TDD = Design + QA (Testing) itself, correct? Design should be an outcome of TDD solely, right?
No, not really right. TDD is certainly a design tool, but calling it a QA tool misunderstands both TDD and QA. TDD is about testing in the same way that a carpenter measuring the wood before they cut it is "testing". Sure, you need to "measure" to do a quality job in both cases, but in neither case is that QA - it is "measurement".
To me, TDD is like building the house before the foundation has dried. I often need to restart from scratch several times before I have something even worth testing. I could write each iteration the TDD way, but no one wants to pay for that. So, once my foundation is dry, I can use TDD to check it’s structurally sound.
Let me add a little critique of TDD (at least of a “public picture” of it).
Let us take one of the most common tasks of a backend developer: build a REST API that transforms some JSON payloads and manages some data in a DB.
Well, what does TDD recommend here? Let's start with tests of the API. Oh, I have no API, no contract, no schema, nothing. Let me write stub "code" for my API first, answer multiple questions in review along the lines of "why are you adding non-working code?", put it through CI/CD, release it to staging (one week later).
Now I can write tests (given that I'm already in the context of writing a load of code, it's obviously "a good time to switch contexts")... tests that will not pass (they are red, since no real code backs any of them) and beat my morale through the floor!
Using this example it is easy to identify some shortcomings of TDD:
1. You start with a red test - this beats the morale of 99% of developers.
2. You are constantly switching contexts between design, coding and testing (being a user of your own code).
3. A lot of tests are unusable without some code like mocks/stubs. In Python, for example, you can "pass" everything and it will work (see the sketch below), but in C++ you can't even build your tests, because you haven't defined something in the implementation yet, and the IDE/LSP will constantly yell at you that you are totally stupid for using something that does not exist. So you don't really use TDD: you code something that has to exist before TDD can be used (for example, interfaces). And when you design this start-up code to allow testing, you are imagining the underlying implementation, not the tests of a design - it takes substantially more effort to switch to the user context than to the developer context (from the designer context).
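For point 3, a tiny illustration of the Python case (names invented): a stub that exists only so the red test fails on its assertion instead of on a NameError.

    class PaymentGateway:
        def charge(self, amount_cents):
            pass  # no implementation yet - this is TDD's red step

    def test_charge_returns_receipt():
        receipt = PaymentGateway().charge(500)
        assert receipt is not None  # fails (red) until charge() is written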
The context switching is actually one of the points of TDD. You are constantly validating that your regression test fails, then passes with passing code, and then using the refactoring step - which is now easy, because you have the tests - to handle the technical debt.
All the while you are discouraging habits which produce hard-to-test, coupled and otherwise legacy code.
As a side effect you end up with a validated test suite for continuous integration, code which you know passes, and new code going into version control which you know works, along with the tests that go in with it; debugging becomes easier due to the smaller changes. You also end up with high code coverage, testable code, a starting set of examples for writing your documentation, and lots of other benefits.
Say you only have a simple CRUD HTTP API. Is there any point to TDD?
This isn't about a religious observance; this is a practical tool. If you have code so simple that it isn't really programming in any real sense - doing the same thing as a million times before - then perhaps it is not necessary. But the test of that is: How often do you have bugs? How much time do you spend fixing them? How well do you understand the problem? Is it likely to grow into something more than simple CRUD in future?
All of these would influence my decision. It is certainly true, though, that I don't really write "simple CRUD applications" very often.
@@ContinuousDelivery From experience, it starts with CRUD and then eventually evolves beyond it. But say the public API you are designing is an HTTP API. What is your approach? I am going into technical details now, but for me that is important. Say we have events being raised from the API and some data stored in the database. Would you mock the database and the message queue? In my opinion the raised event is also a public API, if the event is to be used by others. Do you verify the event is raised correctly? Do you verify the data is saved to the database, besides checking the response from the HTTP API? What is your approach for a test in that case? I am curious how you define TDD here: is it testing the whole of the API, including the database and raised events, or only the response?
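Not Dave's answer, but one common shape for such a test (every name here is invented): drive the handler through its public API and assert on all three visible outcomes - the response, the stored row, and the published event - using in-memory fakes rather than mocks.

    class InMemoryDb:
        def __init__(self):
            self.rows = []
        def insert(self, row):
            self.rows.append(row)

    class CapturingBus:
        def __init__(self):
            self.events = []
        def publish(self, event):
            self.events.append(event)

    def create_item(payload, db, bus):
        # stand-in for the HTTP handler under test
        db.insert(payload)
        bus.publish({"type": "ItemCreated", "name": payload["name"]})
        return {"status": 201}

    def test_create_item_stores_row_and_publishes_event():
        db, bus = InMemoryDb(), CapturingBus()
        response = create_item({"name": "book"}, db, bus)
        assert response["status"] == 201
        assert db.rows == [{"name": "book"}]
        assert bus.events == [{"type": "ItemCreated", "name": "book"}]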
Brilliant, Dave! I could have hit the "like" button at about a dozen different times!
My experience with people claiming they can't do something because they don't know the future is that they really just don't want to do that something - not that they actually need to know the future.
Nice example with fractions. However, real life is a little bit more complex. Imagine that you have to create a neural network that generates images of people with exactly 5 fingers. Good luck writing tests!
What I've found to work in a TDD-like approach is to write the whole implementation in the test case/project; then, when it's passing, I refactor the relevant parts into my library code to form the API I want.
I often do this with new code as well
But almost all modern languages are abstractions over machine code. Where is the leak there?
Python and JavaScript being extremely slow is an abstraction leak. Being unable to access hardware I/O ports from JavaScript is another. There are lots of them!
@@ContinuousDelivery OK, thanks. But how about C, C++ and assembly?
TDD is a mindset not a unit test
That's the biggest problem I see: people think TDD is writing unit tests.
I try to write tests as close to the user's behaviour as possible. The less the tests know about the internals, the easier it is to refactor.
TDD is a tool that's appropriate for some circumstances. TDD makes sense when you can write the tests and the API at the same time. If you don't have enough information to confidently write the API, you should sketch it out and see what it looks like. Then make a decision: either you're prototyping to understand the API or you know enough to confidently write some tests. Wash, rinse, repeat. Sometimes, you'll need to go back and write tests after. Like a craftsman taking the time to break the sharp edges on a milled part.
I love the idea of TDD. And the few times I've been consistent with TDD, I've liked what I got out of it.
The reason I don't end up using TDD all the time is because it feels like twice the work. EVEN IF it actually reduces work in the long run, it FEELS like a massive headache, and I tend to not keep up the habit.
So yes, the reason I don't do TDD is because it doesn't feel worth it most of the time, and if I have 3 hours to write code, I feel more accomplished by writing a new functionality than by writing tests for existing functionality. Maybe this is laziness, or maybe it's the result of feeling time pressure. I can't defend it rationally, but I can explain how it happens.
Don’t you still have to write tests to ensure that the feature will keep working as the code is changed in the future? Manually testing every feature in every release doesn’t scale.
@@krumbergify Sure, but does "writing the bare minimum tests" qualify as TDD?
@@delphicdescant I think it is a good start :)
I don’t think in terms of ”I’m going to test function f och class T”. Instead I create a test for each requirement I add to my application. Sometimes I add a unit test, sometimes an integration test and sometimes a black box e2e-test depending on what is the easiest thing to do. If I change the requirement, I change the test and if the requirement is no longer needed I remove the test.
I name my test functions "ShouldBlablablaWhenBlabla" and use comments to structure the test code into // Given, // When, // Then sections (see the sketch below). This prevents me from adding extra asserts that test things unrelated to the specific requirement.
I have seen people name their tests "TestFoo", and these tests just keep growing with extra asserts related to the class Foo, without actually specifying the reason each assert is there or which requirements are actually covered. This is what I try to avoid. We should not test entities; we should test behavior.
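Roughly what that looks like in Python (names invented; # instead of //):

    from dataclasses import dataclass

    @dataclass
    class Account:
        balance_cents: int

        def withdraw(self, amount_cents):
            if amount_cents > self.balance_cents:
                return "rejected: insufficient funds"
            self.balance_cents -= amount_cents
            return "ok"

    def test_should_reject_withdrawal_when_balance_is_too_low():
        # Given
        account = Account(balance_cents=100)
        # When
        result = account.withdraw(500)
        # Then - one behaviour per test, no unrelated asserts
        assert result == "rejected: insufficient funds"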
@@delphicdescant Sure! Each test should correspond to a single requirement and thus you won’t have more tests than what is needed to fulfill the (implicit and explicit) requirements of your customer.
That sounds like an opinion-based argument rather than a data-based argument to me. Start capturing story cycle time with and without TDD and you'll have the data you need.
Programming should be taught with TDD. If you learn a programming language, you learn the syntax and how to solve simple problems, but you have no idea how to solve more complex problems. The first tests will be manual tests, but they are tests, and that's what matters - not having a perfect testing framework.
A test is documentation.
I am not understanding. Why not just write the code you need to get the job done? Why are we formalizing something that is intuitive?
Back in the days before TDD, we longed for software development to be more professional and respected. After earnest endeavours towards this goal, we started to see it taught as a respected discipline in universities. TDD helped raise the bar by promoting it as a sophisticated engineering practice. Now, two decades later, university graduates don't even want to touch the likes of TDD. Are we seeing the end of the rise and fall of software engineering, and with it the demise of professional-quality software development?
This implies that TDD is necessary for professional-quality software, but tons of professional, high-quality software is created without TDD (based on my experience out there, I would say: almost all of it). TDD was always for PowerPoint and RFPs ("we will do TDD in this project, promised"), but I've never actually seen it in use - not even from colleagues from ThoughtWorks.
@@vanivari359 😭😔😕
When most professors misunderstand what a unit test is, and often have no idea about the SOLID principles (whatever your opinion of them), etc., how do you expect them to accurately convey TDD?
@@defeqel6537 Yes, and I find it almost impossible to "teach" all of that in a project setting. In small teams, I can spend time explaining those concepts and sometimes it sticks, but it's a tiny drop in an ocean of people. And in a normal setup, I have multiple teams with multiple team leads, and teaching team leads is pointless because they often don't code anymore, or struggle themselves with understanding what a unit test or TDD is. Thousands of employees, constantly mixed into new teams and projects... it seems impossible to raise the bar measurably.
It is not about the language or domain, it is about frameworks. We rarely go without them, don't we? New projects = new frameworks. Now the gist of the problem is: there's no time to learn the new framework and then start writing the code - we learn as we go. But to do it the TDD way, we first need to master the framework, because each of them has a different take on testing. At the same time, stakeholders are expecting something - 2-week sprints.
“The code is the documentation.”
A few months later: “Why did I write that? Not even a unit test to capture the scenario.”
TDD to me means "check its intention before you make it available". I use it with Terraform modules as a safety net to check that my intentions meet expectations; without it, I do more work fixing bad code.
I find it easier to actually plan what you are coding than to force yourself to remember to test... before you even code.
I suppose it depends on how you plan what you are coding; I "plan" by writing a test!
The post-TDD generation will need to find new ways to ensure the testability of code. Code might not need much unit testing, but it still needs to support future testing when that becomes necessary. If TDD is dropped, either new versions of compilers will need to take care of testability, or new lint tools will (a worst-case scenario - linting is an admission of failure), or new kinds of modularity will be needed to isolate unit-testable code from untestable code, in a way which supports untestable older legacy code too.
I watch devs of all experience levels run their full application - with breakpoints and watch inspectors - to manually test a single change to a class in the deepest depths of their domain context. It makes me want to cry. If you don't want to do TDD, fine. But for the love of all that is holy, please automate your tests. If I catch you running an app to test a simple change, I will kick you from my team.
The only kinds of tests I've found useful are contract tests that enforce contractual obligations, and bug-exercise tests, which trigger a real-world bug situation. Contract testing is great, but those contracts are part of the definition of the thing being tested, so tooling should be co-locating them, not putting them in separate files or, god forbid, separate source code trees.
TDD is not a broken practice. The problem is that when someone goes to upper management requesting money to properly develop a piece of software, management will question: "Why are you writing that if the customer will not use it? Stop doing that and write the code that customers will actually use, and be done with it. I'm not paying you extra for that!"
It seems that this management approach has been propagating through software companies, to the point that management doesn't care about quality, only about delivery.
Completely unrelated to the video, but I just had it open alongside my PHPStorm window (floating mode from Firefox), and noticed in 14:00 that we use the same color scheme and dark mode. I must salute your tastes, at least in this, even if not so much in development methodologies.
Yeah, so... you always know exactly what you want your code base to do next... so write a test.
I really appreciate these perspectives
Hate for TDD reminds me of hate for Scrum.
People share countless stories of how useful it has been _for them,_ of how many problems it solved _for them,_ of how much value it brings _for them._ And then other people say the equivalent of _"Well, it's never worked for me so you must be wrong."_ How insultingly invalidating. Perhaps there are fringe cases where it didn't and even _couldn't_ work for someone, sure. But stating it's fundamentally bad or flawed only reveals their lack of skill or experience or understanding, or perhaps even their foolishness. Great video, as always! TDD FTW (For The Win!)
Nah, Scrum is better than Waterfall and DOES have some useful concepts (though not unique to it), but I cannot say it is especially great "in full". The Story Point concept is especially harmful, since it is a waste of time beyond "tiny", "I can do this" and "who knows", and is easy to misuse (and yes, this is a problem of the process).
@@defeqel6537 as usual, a criticism of Scrum reveals the complainant's misunderstanding of Scrum. You're complaining about Story Points but those are not part of Scrum. At all. That's like saying you don't like movies because popcorn is bad, which is nonsense.
@@davetoms1 Yeah, technically they aren't part of modern Scrum, but in practice they are always there, whether we are talking about abstract points, days, hours, whatever.
(edit: and to highlight why it is less of an issue in modern Scrum: it recommends breaking the backlog into tasks taking one day or less, while before it just recommended estimating tasks and filling the sprint using those estimations)
Surmount barriers, Dave. You want to surmount them, not hit them 😉
To me the prime example is a compiler, one of which I happen to be working on at the moment. If your compiler does not work, you are seriously scr*wed. Everyone knows this. Thus compilers simply work, and are reliable most of the time. If they are not they die quickly, since nobody uses them.
Thus the question becomes: compilers are a large and complex codebase, so if we can get those right, why can't other programs be proven correct as well? The answer is that compiler developers simply take it as a given that the testing code will be 50% of the total work of developing the compiler - as much as developing the main compiler code itself.
So does this mean that for most programs it's not worth spending that kind of effort to prove the program is correct? Therein lies the paradox. A typical program takes 50% of the total development time or more in debugging - even very optimistic programmers will admit to that. By that same logic, saying you want to write the program and then do the work to debug it into shape means you prefer to fix the program AFTER the fact rather than BEFORE the fact, which is the net argument against TDD.
In a word, you can pay now, or pay later.
Meanwhile, Game Developers:
"You guys write tests?!"
Depends on the game developers 😉
ruclips.net/video/17esmz3X7tw/видео.html
This channel is wild. One day calling TDD criticism laziness, next day highlighting the criticisms as if they are new and novel.
Tests can only catch bugs that you already have thought of.
"Phyton" 😱😱
"I recently came across a critique of TDD"
Yup. There are loads.
I graduated from junior programmer through architect to project manager a while ago (now retired), and the problem I never resolved was cost versus resource and timescale. I feel as if TDD could have been a solution, not because it's a magic bullet, but because it seems like it should help you understand the strengths and weaknesses of your codebase over time better than any other method.
I think what hurts TDD evangelists' credibility the most is when they show you yet another example of TDD'ing a calculator app or a trivial logic-game script, and then expect everyone to accept that as proof of the viability and immense value of TDD. The usual argument of "I'm just trying to show you the general approach, and a simple app is the best medium for that" sounds like an annoyingly manipulative cop-out, even if it wasn't intended as such. If instead every TDD tutorial started by opening a large-scale project with at least 100k lines of code and walked you through designing and integrating a small but useful feature into an imperfect codebase with an actual history, a lot more people would be willing to give TDD a chance and stop viewing TDD evangelists as snake-oil salesmen.
This is hard to explain to colleagues. I'm doing my best, but somehow, one won't listen.
BDD is a more broken practice than TDD 😅
every tdd evangelist out there:
*talks for an hour*
- ...and here is how you can check that 2*2=4. The rest is up to you.
TDD never works in any real-world application that's not a learning calculator. It's taught on easy academic examples that never work in real business cases with several complex dependencies, which are most of the time nearly impossible to mock in a reasonable time.
Certain isolated functions might be TDD-able, but the overall application - with its handling of files, databases, FTPs, APIs, etc. - never is.
Everything is taught using easy examples.
The reason is simple.
If you try and teach using real world large problem examples then it becomes less about what you are trying to teach, and more about the detailed understanding of the large problem you are using as your example.
So what a good teacher will do is look for the simplest example possible which still allows you to demonstrate as many details as possible of the thing you are trying to teach. If it turns out a specific aspect doesn't work with that example, look for another one that covers as many of the remaining details as possible.
It is for just this reason that every new way of doing AI in games starts by examining how it works with tic tac toe and moves on to harder examples as needed. Because you already know the problem space, you can spend all of your time looking at the potential solution and trying to understand the techniques involved.
Well it is simply wrong to say that TDD doesn't work in "real world applications".
I led a team that built the nationally distributed point of sale system for one of the UK's leading retailers, I led another that built one of the world's highest performance financial exchanges, including public APIs for trading and sophisticated web-based applications that support real time prices and trading. Friends of mine wrote the main train-journey planning & ticketing system for the UK. Tesla practice TDD for the software in their cars. SpaceX practice TDD for the most effective space rockets on the planet. All of these support real world data, in some cases worth billions of dollars per day.
TDD is absolute bs. No focus on what's important. Tons of wasted time writing useless tests instead of useful code.
Said the developer who has never done it. Development teams that follow XP practices deliver quicker.
@@leerothman2715 HAHA you're full of something and it ain't the truth. Keep lying.
@@eyesopen6110 These types of posts is like having a discussion with flat earther.
@@leerothman2715 Yes, stop pretending its flat.