Hi Continuous Delivery, I could really use your wisdom and guidance. I have been learning HTML, CSS and JavaScript for the past 5 to 6 months. I know HTML very well, and I know CSS decently, meaning flexbox, positioning, and basic layout principles. My understanding of JavaScript is still weak, but I know how to manipulate the DOM using pure vanilla JavaScript: creating new elements like an h1 or a p for a paragraph, inserting an innerHTML text message, and adding styles like border, color or font-family. I also know the basics of SQL for performing queries and creating tables or a new database. My question: if I wanted to become a C# back-end developer, should I learn more JavaScript and get better at CSS before diving into C# and learning OOP with it? I would be grateful for any guidance you could provide, thank you.
C# is a general-purpose language, so you don't need to learn anything else beforehand. When working with C#, you should first know plain C# very well, and that is especially true for refactoring. Refactoring support is one of C#'s biggest strengths, so you should become really good at C# and OOP design. Otherwise you shouldn't use C# at all.
@@hrtmtbrng5968 Thank you, hrtmtbrng, I really appreciate the feedback. I like hearing from people who have personal experience in technologies I'm unfamiliar with. The post I submitted was over a year ago, though, and I've since learned the fundamentals of web development but switched to Django about 5 months ago, learning a lot of different things with it like creating APIs and building basic CRUD apps. Funny enough, after a year I still think about C#, because it feels more stable than JavaScript. I agree with you about sticking with one technology, which is why I feel stuck in a catch-22. I still like C#, and I suspect it will boil down to picking either C# or Django. Unfortunately, there are very few Django jobs in my area, but I see plenty of C# openings. I guess the answer is simple? Maybe the skills I learned with Django are transferable to .NET and C#? Thank you for responding to my post.
I have used both with clients, but not on my own projects. Nearly all of my BDD focused videos are VERY relevant to Cucumber & SpecFlow. Take a look at this playlist: ruclips.net/p/PLwLLcwQlnXByqD3a13UPeT4SMhc3rdZ8q
I don't think it is any different from writing an app which uses dependencies you don't control, which is everything. You abstract, and use spies and mocks to confirm that your methods are doing their part. I suppose it gets trickier if you don't have the luxury of trusting your DB layer. Could you clarify why it would be different from anything else?
@@markogregurovic8190 I don't really understand. Maybe if I ask it like this... If I create a test for the method createUser, how do I stop the user being created in my database? Unless... I use a transaction in the test and rollback afterwards.
@@leonf.7893 If you want to use tests that actually write to the database then I don't think you should use your production or even test database. Common use is to create an H2 database in which you can insert data, and check it and the db is lost after test completion. You need to do this because if you want to test the createUser method you need to confirm the user is actually in the database and you can't do that unless you actually insert it. A rollback is not workable, I think, because you can't be sure the commit would not fail.
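To make the in-memory approach concrete, here is a minimal Python sketch using sqlite3's `:memory:` database as a stand-in for the H2 setup described above (the `create_user` function and `users` table are hypothetical, not from the thread):

```python
import sqlite3

def create_user(conn, name, email):
    """Hypothetical method under test: inserts a user row and commits."""
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
    conn.commit()

def test_create_user():
    # A fresh in-memory database per test: nothing touches production,
    # and the database vanishes when the connection is closed.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

    create_user(conn, "alice", "alice@example.com")

    # Confirm the commit actually happened by reading the row back.
    rows = conn.execute("SELECT name, email FROM users").fetchall()
    assert rows == [("alice", "alice@example.com")]
    conn.close()

test_create_user()
```

Because each test builds its own schema and connection, tests stay independent and there is nothing to roll back afterwards.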
You must add extra code that fakes a database. This is called "test-induced damage". OOP developers have hundreds of complicated design patterns that they apply in order to achieve this; they call it encapsulation. In practice it is not so complicated, because in a good OOP design each piece of code should have only a single responsibility. So you end up testing only things so simple that, in principle, you wouldn't even need a test.
I don't think this can really be applied to ML work beyond the very surface level. It's very hard to test that an ML algorithm is implemented correctly, and a lot of tests can pass with a mistaken implementation.
Another way to view the TDD workflow might be:
Red: design & specify the outsider view of the API.
Green: design & implement the insider view of the API.
Refactor: tinker with & optimise the API.
If the system is of a noun-verb type, then the class under test represents the noun (domain model) and the API under test represents the verb (domain logic). The Naked Objects architectural pattern is applicable here. If the system is of a verb-noun type, then the class under test represents the verb (use case) and the API under test represents nouns (object role interactions). The DCI architectural pattern is applicable here.
Do you think that typical SW developers realize when they should refactor? Do you think that they actually know how to improve the code? That they actually do the refactoring? Do you think that a typical manager lets people refactor code that is already running well? That a typical boss actually provides money for refactoring code that is already running well? That a typical customer actually pays for refactoring if the code already works? In my opinion, everything you say is true. But most developers have no idea what cohesion or coupling means. All your good words fizzle out into the vacuum. You are a wise old man fighting against all the ignorance of the world, but in the end you lose this battle. Personally, I must say I am bored of refactoring. Again and again the same stuff: moving variables around, splitting classes, creating new interfaces. It is only fun for a few years or a decade. Then you are done, and you feel like either you picked the wrong job or the world is on the wrong path. I admire you, still motivated and still doing this after so many years of your professional life. Perhaps there is something wrong with the way we do software development.
The reason that I started writing books, and created this channel is because I agree very strongly with your last statement - there is something wrong with the way that we do software development. I have seen, and know, that there are better ways, and that is what I describe. So yes, you are right, most devs don't refactor enough, maybe don't understand the importance and costs of ideas like modularity, separation of concerns and coupling, but all of them should, and when they do, they aren't average programmers any more, they're good programmers. I don't think that good programmers ask for permission from managers to do a good job, they do it anyway. My ambition is to help some people to see that there is a better way to do things, and then help them to understand what, in my experience, and with what data we have, works better and why. We get to change the industry only by changing one mind at a time. Hopefully some of those minds are influential and can help change other minds 😉
It feels very validating that you explain this, as I've developed the same priorities in my coding. So it seems we've independently arrived at some of the same conclusions. This gives me hope, as lots of programmers I tell these things to look at me like I'm an alien.
In particular, a goal of mine is to make code so clear that even a non-programmer could glean its intent, even without fully understanding it. This is incredibly helpful to us programmers: the better we are at lowering the complexity floor of a project, the higher our finite capacity for complexity can reach.
This leads to discoveries about improper assumptions within the code and results in a better product with fewer bugs. Oftentimes that feeds into a further reduction of the complexity floor, creating a bit of a positive feedback loop.
I would not be very optimistic at all. That is not a discovery you made; all of this has been well known for 30 years. There have been good books about it for decades. And people still don't understand it.
That was the clearest definition of Red Green Refactor ever! Nice!
These are what I try to abide by to improve readability and clarity.
- If a method is hard to name, then you're probably not understanding something, and it's not going to be easier to understand when you use it.
- Avoid having an excessive number of parameters, especially if they're typically going to be literals. No one wants to count commas.
- If in doubt, fail fast and fail early. It's a great way to bring attention to vague specifications, bad data, or misconfigurations. Handling exceptions too early can decrease the chance that a method does what it says it does, which means that some of your assumptions may not hold up.
I really like how you've described it as three mindsets, and I especially like RED being described as "focusing upon the external experience". And I love how "GREEN" in some sense is the most trivial of the three steps. Great video.
Glad you enjoyed it!
Hi Dave. Off topic, but I would be really interested in understanding your 'rules and guidelines' for designing interfaces - both between software components and between distributed systems. You've mentioned a few times how the 'hard bit' is the design to produce independently deployable services, version independence, non-breaking contracts etc. and I think it would be great to delve into more detail about this skill. tia.
I would very much be interested in that as well. I'm not creating a distributed service in the sense of many servers operating as a whole, but I need an incredibly modular rendering system, and the first step to achieving that is making it reliable, modular and easy to understand.
See me in my office on Monday after my Lecture with your class.
That full red-green-refactor cycle can be extremely satisfying, and it can make a huge difference in how a day of work feels. Especially when pair programming.
But it can also be disappointing when you again and again throw away your code or your ideas.
Hello Dave !! As usual, great video... thanks !! I have selected this video for a team session that is taking place tomorrow morning. My teammates are new to TDD and refactoring; they need some material to understand the essence of TDD and I believe that this video is an excellent starting point.
Cool, thanks. In case it helps, I also have this TDD tutorial, free but needs a sign-up, courses.cd.training/courses/tdd-tutorial
Mmm some good stuff! Enjoyed the insight on display! The take of tests being the intended external use cases etc was quite a nugget this morning! Cheers! 🍻
Think I spotted an example to do with fractions which might give me some inspiration for an exercise I’ll be doing tomorrow after work 🙂
I laughed at "where we do the real work, refactoring", because that's exactly where I was at today.
The vast bulk of my work today, was refactoring. I'd written tests, got them passing and then ... "hmmm, this is a terrible solution."
So, TDD worked - it made me realise my solution was ... pants.
Most of the work involved "the domain" and in particular, that seam between an external API with its domain and my code, with its own domain.
It was all about the shape of the data, how to manipulate it, how to prevent too much dependence, repetition, to isolate and encapsulate.
TDD helped me, but it wasn't the only help - actually running the "real solution", built on the TDD outcomes I'd written beforehand, helped me just as much.
In a way, there's a point in TDD where refactoring is where you run your code with real data, in a real-world situation - at least, that's the way it works for me.
I can't see how there's any other way to do this a lot of the time. TDD only gets you so far, because unless you are exceptionally prescient about the complex interactions of data manipulation in a complex application, you will encounter edge cases you never considered.
"Right, I've written my functions using TDD, let's see if they work."
I think in reality, most people who practice TDD, may be using a hybrid approach - get an idea of what you want by writing tests, write the functions as you go, see them pass, high five yourself and then introduce the REAL data - if you did everything right, fantastic. Everything works.
If the real data throws something at you that you didn't anticipate, then go back to your TDD, to your tests - and probably more to your functions - and adapt your tests to suit.
Are you then doing the opposite of TDD? - I don't know.
TDD is hard and I don't think there's a pure approach to it - hence refactoring.
I would guess if all your tests pass but you're having problems with your actual data, then either there are some tests that are missing which will initially fail, or your data needs some sort of cleansing because it isn't in the expected format?
@@stephenbutler3929 To be honest, it was more to do with data transformation for consumption by analytics - pretty much getting current app state and finding the best way of transforming data.
TDD totally got me to a _very_ useful point and in fact, most of the smaller methods - 80% of them - remained exactly as they started out before I refactored.
However, a bunch of unit tests in isolation only gets you so far - and the integration tests are going to be end-to-end tests, and I wasn't ready for that stage yet.
TDD is super useful, but to my mind, you use it to its strengths, knowing you still need to see how everything works together - and indeed, whether your solution is actually logical!
The purpose of the "red" stage is to verify that the *test* is working (i.e. it is being run). The purpose of the "green" stage is to verify that the *code* is working (it is being run from the test). Countless bugs have escaped notice due to a bad test that either was not invoked or that had some other flaw, thereby giving *false* confidence that the code was working.
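A tiny illustration of this point, using a hypothetical `is_leap_year` example (not from the video): the stub below is deliberately wrong, so a correct test must fail against it. If a new test passes at this stage, the test itself is broken or not being run.

```python
def is_leap_year(year):
    """Deliberately wrong stub: every new test should go red against this."""
    return False

def test_is_leap_year():
    assert is_leap_year(2000)       # divisible by 400
    assert not is_leap_year(1900)   # divisible by 100 but not 400
    assert is_leap_year(2024)       # divisible by 4

# Red: the stub fails the test, proving the test is actually exercised.
try:
    test_is_leap_year()
    print("WARNING: test passed against the stub - the test is broken")
except AssertionError:
    print("red: test correctly fails against the stub")

# Green: the simplest implementation that makes the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap_year()
print("green: test passes against the real implementation")
```

Watching the test fail first is the cheap way to catch the "bad test that was never invoked" problem described above.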
Great video! Okay, the refactoring phase serves to improve the implementation. However, I ask myself whether the refactoring phase is not only for this, but also for improving the _testcase(s)_ too? Could anyone please shed some light on this?
The reason for my question: at my job I recently got the opportunity to teach a student TDD, and I remember that we had this discussion too. At the time, I advised him to concentrate on improving/rethinking the test cases as well. Many thanks in advance!
Yes, I think you should refactor the test case for readability too, but be careful not to change its meaning, and keep it passing.
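As a small illustration (a hypothetical example, not from the video): a repetitive test can be refactored into a table-driven form without changing what it asserts, which keeps it passing while improving readability.

```python
def classify(n):
    """Hypothetical function under test."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Before: three near-identical assertions, tedious to extend.
def test_classify_verbose():
    assert classify(-5) == "negative"
    assert classify(0) == "zero"
    assert classify(7) == "positive"

# After: the same cases as data - the meaning is unchanged,
# and each failure now reports which case broke.
def test_classify_table_driven():
    cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
    for n, expected in cases:
        assert classify(n) == expected, f"classify({n})"

test_classify_verbose()
test_classify_table_driven()
```

Running both versions against the same implementation is an easy check that the refactored test still means the same thing.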
There's nothing I love more than being anal about what dumb code is possible to make tests green. Glad you mentioned that. If we need better functionality, we'll have to write better tests ;)
Thanks a lot for these thoughtful videos. I was wondering, wouldn't code coverage reporting need to be part of this? Let's say the external behavior needs to change due to biz requirement changes and we apply red, green & refactor. In the end, we reach a state where tests pass but there might be dead code lying around from the previous behavior because there is no test needed anymore that executes those dead code paths.
Code coverage doesn't help in this circumstance. What you need to make this work is clear tests, that test outcomes, not implementation, that is why the separation between the two design phases is useful, it keeps the "executable specifications" that describe what the SW does, separate from the implementation detail that defines how it does it. So the tests are better documentation of what the SW does, so it is more obvious what needs to change, when the need changes.
How can one use TDD in the following case:
I have a public method push_to_vector(item) in a class
the vector I am pushing to is a private member of the class
I have no requirement from the user's side to have a get_vector_size() method. But I cannot test without this.
Should I still have a get_vector_size() or is there any other way?
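One common answer (not specific to this thread) is to avoid adding a getter purely for testing, and instead test through whatever behaviour the user *does* need: if items go in, something observable must eventually come out, or the class has no purpose. A hypothetical Python sketch, where the assumed user-facing behaviour is processing the queued items:

```python
class Batcher:
    """Hypothetical class: accumulates items privately, processes on demand."""
    def __init__(self):
        self._items = []          # private; no size accessor exposed

    def push(self, item):
        self._items.append(item)

    def process(self):
        # The user-facing behaviour: consume and return everything queued.
        result = [item * 2 for item in self._items]
        self._items.clear()
        return result

def test_push_is_observable_through_process():
    b = Batcher()
    b.push(1)
    b.push(2)
    # We never ask for the internal size; we assert on the outcome
    # the user actually cares about.
    assert b.process() == [2, 4]
    # A second call shows the items were consumed.
    assert b.process() == []

test_push_is_observable_through_process()
```

If no such observable behaviour exists at all, that is often a design signal that the push method itself isn't a real requirement yet.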
How easy you make it seem... I know that TDD is a powerful iterative process for new development... but how would you use it to refactor some rather dodgy code that was written by an untrained hack a decade ago, when I've recently been handed the pile of... well... and asked to make many modifications that it really can't handle? IMHO I want to throw it away and rewrite ALL of it, but there isn't time. So I've identified 4 areas of the code that give the biggest bang for the buck, and I'd like to use some form of a TDD process to tackle them... NOTE: It's for an embedded application, so I don't have all the nice tools and stuff. I have NO test harnesses, NO RTOS, no other tools; it's all freaking bare metal! I'm fairly experienced - I've been doing code wrangling for several decades - but I haven't used TDD... I have some ideas how to proceed, but it would be great to hear some of your ideas and advice.
1. If possible, start with writing high-level external application-level/API/functional tests for the current solution. Commit whatever crimes you need to get those running. These tests are your regression defensive line.
2. Next, follow ALL the code-paths towards the parts you want to change, write enough of some kind of tests for each layer you go through to give you confidence that you have understood it right.
3. Once you have reached your target module, write tests around that module's API.
4. Either refactor the module as-is (scary), or break out some kind of interface/headers/abstractions that specifies the contract of the module, and start writing a completely new implementation using TDD in parallel with the old one. Keep the tests from (3) as long as needed, and use build flags to select which implementation to include in the build and run the tests against (branch by abstraction).
I think this pattern is called "lighting the forest". Each test adds a light to your path so that you can safely go on your journey.
PS. I haven't done any embedded programming for 20 years, and never tried this kind of refactoring with embedded stuff so YMMV.
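Step 4 (branch by abstraction) can be sketched in a few lines. This is a hypothetical Python analogue of the build-flag approach - a shared contract, the old and new implementations side by side, and a single switch deciding which one the characterisation tests run against:

```python
from abc import ABC, abstractmethod

class Checksum(ABC):
    """The extracted contract of the module being replaced."""
    @abstractmethod
    def compute(self, data: bytes) -> int: ...

class LegacyChecksum(Checksum):
    """The old implementation, kept until the new one proves itself."""
    def compute(self, data: bytes) -> int:
        total = 0
        for b in data:
            total = (total + b) % 256
        return total

class NewChecksum(Checksum):
    """The replacement, built with TDD against the same contract."""
    def compute(self, data: bytes) -> int:
        return sum(data) % 256

# The "build flag": flip this to switch implementations everywhere.
USE_NEW_IMPLEMENTATION = True

def make_checksum() -> Checksum:
    return NewChecksum() if USE_NEW_IMPLEMENTATION else LegacyChecksum()

def test_checksum():
    # The same tests run against whichever implementation the flag selects.
    c = make_checksum()
    assert c.compute(b"") == 0
    assert c.compute(b"\x01\x02\x03") == 6
    assert c.compute(bytes([200, 100])) == 44   # wraps at 256

test_checksum()
```

In C the flag would typically be a preprocessor define selecting which object file to link, but the shape of the technique is the same.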
@@ddanielsandberg I appreciate the notions, and I've never heard of "lighting the forest"... it doesn't come up in Google searches. But the issue I have is that the embedded code is itself a tester that our company uses to test our products... and the first part that seems important to modify is the test runner logic. The crimes that were committed before I was handed the 10 lb. sack are egregious and atrocious. Somehow I need to corral the tests, which are duplicated as a suite of tests or a single test that is repeatedly run... take stacks of IF statements and tangled, jumbled logic with global variables and states mangled throughout... As you said, it is scary to do this operation, because the hardware I was given was actually semi-broken to start with, so I don't have a known-good working starting point. The person that wrote the code seemed to miss the lecture in school about not doing crap like that. Just how much flipping fun is that!? LOL. But based on what you wrote, I think a way forward is to build the test harness that will run the tests and then have it test itself as I go? That's not a bad approach... it's good in fact!
I would recommend a book called Refactoring: Improving the Design of Existing Code by Martin Fowler.
I believe there is an interview with him on this channel.
@@JorgeEscobarMX, I thought I had that book in my library... that's odd I should have it... That's an excellent recommendation. I'll look for that interview, thanks a bunch!
Another book I'd recommend is "Working Effectively with Legacy Code" by Michael Feathers.
How do you tackle non-functional requirements?
Say, for example, that a "Software Component" must not have a footprint larger than X bytes. Do you create a test for this?
Or you have security requirements such that the code must avoid "data injection" or whatever. Is this taken into account in the Refactor stage every time? Or is it a test(s) that is run at every change?
The simple answer, broadly in a Continuous Delivery context rather than only in terms of TDD, is "Yes!": have the test.
Your first example I'd probably include as some kind of commit-time analysis test. It isn't TDD, but if memory footprint is a real constraint, then I want to learn that before I deploy the code and it blows up.
The second I'd also treat as an analysis test. When we built a financial exchange, we wrote tests that scanned our UI for input fields and then ran a SQL injection attack on every field - we captured the output from the UI to look for traces of SQL.
For other security features, they are normal features and need regular TDD style tests.
TDD is about the design of our code, more than about testing. So you know what to write tests for already, you test every new behaviour that you add to the code. Other types of test are extremely helpful, but they don't all drive the design and the development, so don't count as TDD.
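The field-scanning injection test described a couple of comments up can be illustrated in miniature. This is only a sketch of the idea, not the exchange's actual test: the two lookup functions are invented stand-ins for real UI input handlers, and sqlite3 stands in for the real database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

def unsafe_lookup(name):
    # Vulnerable: concatenates user input straight into the SQL string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def safe_lookup(name):
    # Parameterised: the input can never change the query structure.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# Analysis-style test: fire an injection payload at every input handler
# and check that it cannot widen the result set.
PAYLOAD = "x' OR '1'='1"
for handler in (safe_lookup,):
    assert handler(PAYLOAD) == [], f"{handler.__name__} leaks rows"

# The unsafe version fails that check - the payload returns every row.
assert unsafe_lookup(PAYLOAD) == [("alice",), ("bob",)]
```

The real version of such a test crawls the UI for fields instead of iterating over a hand-written list, but the assertion is the same shape: an attack payload must not change what comes back.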
@@ContinuousDelivery OK, then for implementing behavior requirements it goes: Red -> Green -> Refactor. For implementing non-behavior requirements: do have tests, but these don't enter the TDD workflow.
@@manuelgurrola Yes, I think that is about right.
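The commit-time footprint check mentioned above can be a very plain test. A minimal sketch, with the 64 KiB budget and the stand-in artifact both made-up values; a real version would point at the compiled binary produced by the build:

```python
import os
import tempfile

FOOTPRINT_BUDGET = 64 * 1024  # hypothetical 64 KiB limit for the component

def check_footprint(artifact_path, budget=FOOTPRINT_BUDGET):
    # Fails the build as soon as the artifact outgrows its budget,
    # long before it blows up on the device.
    size = os.path.getsize(artifact_path)
    assert size <= budget, f"{artifact_path} is {size} bytes, budget {budget}"
    return size

# Demo with a stand-in artifact: a 1 KiB file of zeros.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 1024)
assert check_footprint(f.name) == 1024
os.unlink(f.name)
```

Because it runs on every commit, the trend is visible early: the test starts failing the moment a change pushes the artifact over the line, and the offending commit is obvious.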
Awesome video David - great animations and I loved the code snippets
Glad you enjoyed it!
Hi Continuous Delivery, I could really use your wisdom and guidance. I have been learning HTML, CSS and JavaScript for the past 5 to almost 6 months. I know HTML very well, and I know CSS decently, meaning flexbox, position, and basic layout principles. I also have a weak understanding of JavaScript: I know how to manipulate the DOM using pure vanilla JavaScript, creating new elements like an h1 or a p for a paragraph, inserting innerHTML text, and adding styles like border, color or font-family. I also know the basics of SQL for performing queries or creating tables or a new database.
My question is: if I wanted to become a C# developer, as in a back-end developer, should I learn more about JavaScript and get better at CSS before diving into C# and learning OOP with that technology? I would be grateful for any guidance you could provide, thank you.
C# is a general-purpose language; you don't need to learn anything else first. When working with C# you should know plain C# very well before anything else, and that is especially true for refactoring. Refactoring is one of C#'s biggest strengths, so you should become really good at C# and OOP design. Otherwise you shouldn't use C# at all.
@@hrtmtbrng5968 Thank you hrtmtbrng, I really appreciate the feedback. I like hearing from people who have personal experience with technologies I'm unfamiliar with. Unfortunately, the post I submitted was over a year ago, and I've since learned the fundamentals of web development but switched to Django about 5 months ago, learning a lot of different things with it like creating APIs and making basic CRUD apps. It's funny that after a year I still think of C#, because it feels much more stable than JavaScript.
I agree with you about sticking with one technology, which is why I feel stuck in a catch-22: I still like C#. I have a feeling it will boil down to picking either C# or Django. Unfortunately, there are very few jobs in my area for Django developers, but I see so many job opportunities in C#. I guess the answer is simple? Maybe the skills I learned with Django are transferable to .NET and C#? Thank you for responding to my post.
@dave Question: have you ever used Cucumber and Gherkin? Would you be willing to do a video about them? Thanks for the content.
I have used both with clients, but not on my own projects. Nearly all of my BDD focused videos are VERY relevant to Cucumber & SpecFlow. Take a look at this playlist:
ruclips.net/p/PLwLLcwQlnXByqD3a13UPeT4SMhc3rdZ8q
I've always wondered how TDD works when your app works with a database.
Abstract the DB access and test against your abstraction. You get a better design that way + some tests.
I don't think it is any different from writing an app which uses dependencies you don't control - so, everything. You abstract, then use spies and mocks to confirm that your methods are doing their part. I suppose it gets trickier if you don't have the luxury of trusting your DB layer. Could you clarify why it would be different from anything else?
@@markogregurovic8190 I don't really understand. Maybe if I ask it like this...
If I create a test for the method createUser, how do I stop the user from being created in my database? Unless... I use a transaction in the test and roll back afterwards.
@@leonf.7893 If you want tests that actually write to the database, then I don't think you should use your production or even your test database. A common approach is to create an in-memory H2 database into which you can insert data and check it; the database is discarded after the test completes.
You need to do this because to test the createUser method you must confirm the user is actually in the database, and you can't do that unless you actually insert it. A rollback is not workable, I think, because you can't be sure the commit would not fail.
You must add extra code that fakes a database. This is sometimes called "test-induced damage". OOP developers have hundreds of complicated design patterns that they apply in order to achieve this; they call it encapsulation. In practice it is not so complicated, because in a good OOP design each piece of code should have only a single responsibility. So you end up testing only things that are so simple that, in principle, you would not even need a test.
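The suggestion above - abstract the DB access and test against a throwaway in-memory database - might look like this, with Python's sqlite3 standing in for H2. The names (UserStore, create_user, find_user) are illustrative, not from any real codebase:

```python
import sqlite3

class UserStore:
    # The abstraction: production code talks to this, not to the driver.
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT UNIQUE)")

    def create_user(self, name):
        # A real insert plus commit, so the test exercises the same path
        # production would - just against a database that never persists.
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()

    def find_user(self, name):
        row = self.conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)).fetchone()
        return row[0] if row else None

# The test: an in-memory DB vanishes when the connection goes away,
# so nothing is ever written to the real database and no rollback
# trickery is needed - the commit genuinely happens and is verified.
store = UserStore(sqlite3.connect(":memory:"))
store.create_user("leon")
assert store.find_user("leon") == "leon"
```

This sidesteps the rollback concern above: the commit really runs and its result is really checked, but the whole database is disposable.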
I don't think this can really be applied to ML stuff beyond the very surface level.
It's very hard to test that an ML algorithm is implemented correctly, and a lot of tests can pass with a mistaken implementation.
Another way to view TDD work flow might be:
Red: design & specify the outsider view of the API
Green: design & implement the insider view of the API
Refactor: tinker & optimise the API
If the system is of a noun-verb type, then the class under test represents the noun (domain model) and the API under test represents the verb (domain logic). The Naked Objects architectural pattern is applicable here.
If the system is of a verb-noun type, then the class under test represents the verb (use case) and the API under test represents nouns (object role interactions). The DCI architectural pattern is applicable here.
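A concrete miniature of the three mindsets above, with an invented slug function as the example (it is not from the video):

```python
# RED: specify the outsider view of the API first - a test for
# behaviour that does not exist yet, so it fails.
def test_slug():
    assert slug("Hello World") == "hello-world"
    assert slug("  Trim me  ") == "trim-me"

# GREEN: the simplest insider view that makes the test pass.
def slug(text):
    return "-".join(text.lower().split())

# REFACTOR: with the test as a safety net, tinker with and optimise
# the implementation; the behaviour pinned by the test stays fixed.
test_slug()
```

Note how the Green step really is the most trivial of the three: the hard design thinking happened in Red, and the hard structural thinking happens in Refactor.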
Do you think that typical SW developers realize when they should refactor? Do you think that typical SW developers actually know how to improve the code? Do you think that typical SW developers actually do the refactoring? Do you think that a typical manager lets people refactor code that is already running well? Do you think that a typical boss actually provides money for refactoring code that is already running well? Do you think that a typical customer actually pays for refactoring if the code is already running well?
In my opinion everything you say is true. But most developers have no idea what cohesion or coupling means. All your good words fizzle out into the vacuum. You are a wise old man fighting against the ignorance of the whole world. But in the end you will lose this battle.
I personally must say I am bored of refactoring. Again and again the same stuff: moving variables around, splitting classes, creating new interfaces. It is only fun for a few years or a decade. Then you are done and feel like either you have picked the wrong job or the world is on the wrong path. I admire you for still being motivated and still doing this after so many years of your professional life.
Perhaps there is something wrong with the way how we do software development.
The reason that I started writing books, and created this channel is because I agree very strongly with your last statement - there is something wrong with the way that we do software development. I have seen, and know, that there are better ways, and that is what I describe. So yes, you are right, most devs don't refactor enough, maybe don't understand the importance and costs of ideas like modularity, separation of concerns and coupling, but all of them should, and when they do, they aren't average programmers any more, they're good programmers.
I don't think that good programmers ask for permission from managers to do a good job, they do it anyway. My ambition is to help some people to see that there is a better way to do things, and then help them to understand what, in my experience, and with what data we have, works better and why.
We get to change the industry only by changing one mind at a time. Hopefully some of those minds are influential and can help change other minds 😉