When TDD is Difficult - Try This!

  • Published: 21 Nov 2024

Comments • 91

  • @orange-vlcybpd2
    @orange-vlcybpd2 3 years ago +103

    The most underrated and BS-free channel on IT.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +6

      Thank you 😁 😎

    • @PMA65537
      @PMA65537 3 years ago +5

      He's not continuously telling us he's THE TECH LEAD like that other channel.

    • @rafeu2288
      @rafeu2288 3 years ago +5

      @@PMA65537 Well The Tech Lead is more of a dry comedy channel anyway ^^

  • @jimhumelsine9187
    @jimhumelsine9187 3 years ago +15

    This fits in nicely with Hexagonal Architecture (also known as Ports and Adapters).
    Hard in training. Easy in battle. Dave's technique allows you to emulate all sorts of normal or nasty behaviors that the software under test might encounter from its dependencies. Throw every exception you can think of. Return too much data. Return too little. Return incorrect data. Return slowly. Never return. If the software under test performs as expected in every possible crazy emulated situation in test, then there's a good chance it will perform as expected in production.
    This video hints at the Humble Object. This is a pattern that's about making the implementation easier to test. When a class is difficult to test, it may not be the entire class. Maybe you can separate the class into two related classes - one that's easy to test and one that's not. The easy-to-test class often delegates to the difficult-to-test one. You want to keep the class that's not easy to test as small as possible, with little or no business logic. That is, you want it to be humble, and hence the name of the pattern.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +7

      Good description Jim, thanks. Yes, I think there is lots of value to Ports & Adapters (Hex Architecture) beyond testing. I also very strongly believe that striving for good testability is one of the most effective strategies to improve the quality of design overall. It sends the clearest signals when your code is over-complex to set up, too tightly coupled to other things, or has poor separation of concerns.
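    A minimal Java sketch of the Humble Object split Jim describes (all names are illustrative, not taken from the video):

        // Hard-to-test edge: kept humble, with no business logic of its own.
        interface FileStore {
            String load(String path) throws java.io.IOException;
            void save(String path, String content) throws java.io.IOException;
        }

        // Easy-to-test logic: delegates the awkward I/O to the humble object.
        class ReportGenerator {
            private final FileStore store;

            ReportGenerator(FileStore store) { this.store = store; }

            String generate(String sourcePath) throws java.io.IOException {
                String raw = store.load(sourcePath);
                return raw.trim().toUpperCase(); // stand-in for real business logic
            }
        }

        // In a test, a fake can emulate the nasty behaviours listed above:
        class FailingStore implements FileStore {
            public String load(String path) throws java.io.IOException {
                throw new java.io.IOException("disk failure"); // throws instead of returning data
            }
            public void save(String path, String content) throws java.io.IOException {
                throw new java.io.IOException("read-only file system");
            }
        }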

  • @alessandrodigioia4003
    @alessandrodigioia4003 3 years ago +5

    I love this video and this channel.
    I just want to mention that you don't have to choose between the 2 strategies of testing at the edges.
    With Outside-In TDD you can drive the implementation starting from Acceptance Tests and then dive into the Unit Tests to fill in the gaps in the logic.
    This is known as the Double Loop of TDD.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +3

      Thank you! 💯 💯 ! I completely agree these aren't alternatives, but add up to a strategy that is worth more than the sum of its parts. I hadn't heard it called "double-loop" before.

  • @DodaGarcia
    @DodaGarcia 3 years ago +8

    Dave, I don't know whether it was you or someone else who designed the general aesthetic of your videos, but as a visual artist myself I think it was just a stroke of genius. Not only does it make your content immediately stand out in a sea of traditional three-point-lit setups, but it also subconsciously reminds the viewer of the 90s, when software seemed like a much more stable, learnable thing - even if in reality it was probably just less complex.
    Great info as usual! All my systems have seen a lot of improvement from your advice.

  • @m.h.6470
    @m.h.6470 3 years ago +4

    My problem is that the system I develop for my company is about 90% "edge"... it is a database access system via a web interface. Less than 10% of the functions/methods I write are "clean" of both database and web UI. The rest is simply not testable without adding months of development time, which I simply don't have.

  • @wailerz757
    @wailerz757 3 years ago +3

    I've never watched multiple videos of a programming YouTuber EVER until now

  • @danieljoaquinsegoviacorona1734
    @danieljoaquinsegoviacorona1734 3 years ago +2

    First, thank you for your knowledge-sharing enthusiasm!! Second, great t-shirts - the fun detail of the references keeps getting better! Third, you are an example to follow: the improvement in the quality of the videos, graphics, and audio is encouraging to always do better. Also, the topics tackled here are massive, gargantuan, and you explain them in such simple terms that it all seems graspable. Thanks!

  • @davesuperius3173
    @davesuperius3173 2 years ago +3

    Excellent content! Thank you.
    I want to ask one question about the FileListSorterTest example, if possible.
    Why can't we just mock the FileSystem interface, and in our test assert that the test subject calls the #loadFile and #saveFile methods with the correct inputs?
    I am also aware of (and love) the saying that, written this way, our tests assert that "the code I wrote is the code I wrote". But I am looking for a more detailed answer so that I can understand better and explain it to other people.
    Thanks!

    • @ContinuousDelivery
      @ContinuousDelivery  2 years ago

      You certainly could; I prefer to write my own adapters at this level because mine will be simpler. The risk with the approach you describe is that I could come along and change the implementation to do a whole bunch of other things through the FileSystem interface, and the test may not see them. That does depend on how your mocking library works and how your implementation works. I prefer to write a minimal interface that insulates my code from the edges of the system and focuses ONLY on what I need it to. This is just a personal choice, of course.
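      A sketch of the kind of minimal, purpose-built adapter described in this reply, in Java; the names are illustrative and may differ from the video's actual FileListSorter code:

          import java.util.ArrayList;
          import java.util.Collections;
          import java.util.List;

          interface FileLines {              // only what the sorter needs, nothing more
              List<String> load();
              void save(List<String> lines);
          }

          class FileListSorter {
              private final FileLines files;
              FileListSorter(FileLines files) { this.files = files; }

              void sort() {
                  List<String> lines = new ArrayList<>(files.load());
                  Collections.sort(lines);
                  files.save(lines);
              }
          }

          // In-memory fake for tests: with only two methods on the interface,
          // any extra interaction the implementation sneaks in has nowhere to hide.
          class InMemoryFileLines implements FileLines {
              List<String> contents = new ArrayList<>();
              public List<String> load() { return contents; }
              public void save(List<String> lines) { contents = lines; }
          }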

  • @TomHultonHarrop
    @TomHultonHarrop 3 years ago +2

    I really enjoyed the video but it would be great to talk about the distinction between output and input fakes/mocks.
    The end-to-end test example mentioned Selenium and Appium, which support simulating device input events. Discussing how to use interfaces for things like controller or mouse/keyboard input would be really useful, as well as test output like the file interface example given.
    I remember watching a talk many years ago where a game developer would run soak tests overnight by simulating random controller input over and over. This unearthed a ton of bugs and could have been adapted in more precise situations to mimic a particular player's input.
    Great video and thanks again for this awesome content 👍

  • @tomislavhoman4338
    @tomislavhoman4338 3 years ago +1

    Related to the second approach: some libraries these days that communicate with edges (DB access libs, networking libs, etc.) provide you with testing versions of the objects you use - InMemoryDb or DummyHttpClient, for example.

  • @ashimov1970
    @ashimov1970 3 years ago +1

    Thank you very much, Dave, for a great job of evangelizing and sharing technology usage and adoption best practices

  • @GnomeEU
    @GnomeEU 1 year ago

    For most external systems you usually have logic that maps from the database to objects, from the file system to objects, or from an API to objects.
    All I would ever test is these objects.

  • @dazraf
    @dazraf 3 years ago +9

    Great video. Also, *love* the t-shirt!!

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +5

      Yes, I have a small collection of SciFi geeky T-shirts, keep a look out for some of the others 😁

  • @petermuller7079
    @petermuller7079 3 years ago +1

    Most of the difficulties we have with TDD (better: with the automation that is always connected to it) come from 'dynamic data' - we have a lot of (hardware) cryptography where random numbers are used for session keys, padding, ... In general you just have a lot of 'random-looking bytes' as input and as output, each hard to decipher (which we mostly do manually).
    We have regular meetings with TDD experts that always end in resignation. ;-)

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      I don't really see why this is an issue for TDD. Your code needs these random seeds, and the test needs deterministic inputs, so separate the randomness as an edge of your code and abstract it, so that in the context of a test you use fixed values rather than random ones. This sounds to me rather similar to dealing with time in tests, where the best course is to fake time in the context of a test.
      I did a video on the topic of dealing with time. It was mostly about higher-level functional tests, but the ideas are the same: ruclips.net/video/Xa6UEHyEyzQ/видео.html

    • @petermuller7079
      @petermuller7079 3 years ago

      ​@@ContinuousDelivery Thanks for the answer.
      Unfortunately
      - when using hardware cryptography you can't fake randomness (because it's delivered from the hardware and deeply 'baked' into the output) and
      - when the whole purpose of your software is to deliver the right cryptogram, ignoring it would render the whole test irrelevant.
      (our scenario is even 'worse' when the clients of our services are embedded systems with their own crypto hardware)
      I admit that this is a rather special scenario ... but that's what we do. ;-)
      We really are using TDD and automation wherever we can and benefit greatly from it.
      But we have to defend NOT using it on a very regular basis - and it would be nice if the limits of TDD were as well known as its benefits.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      @@petermuller7079 Sure, I do a lot of work with clients that do hardware; there are choices. Part of this is designing code, and sometimes hardware, to support 'testability'. If you are writing code that interfaces with the hardware, then abstract that interface and test to your abstraction. If you are writing code that is embedded, or part of the hardware, then you do the same thing, insulating code from hardware.

    • @petermuller7079
      @petermuller7079 3 years ago

      @@ContinuousDelivery Yes, that's what we do ... but we have to defend again and again why we don't do automatic testing
      a) on production builds and
      b) on the standard test systems (that lack the hardware)....
      The other problem is that with testing against an 'abstraction' you learn very little about the quality of the code itself.
      As I mentioned: this is not a problem with TDD itself but more with the automation (and the documentation), as you can't (fully) use the 'standard way' to verify results easily (with a few string operations as usual ;-) ).

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago

      @@petermuller7079 Well Tesla manage it! I think that this is very much a solvable problem.
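      A minimal Java sketch of the "separate the randomness as an edge" suggestion from this thread (illustrative names, not code from the video):

          import java.security.SecureRandom;

          interface RandomSource {
              byte[] nextBytes(int count);
          }

          // Production: wraps SecureRandom (or the hardware RNG behind it).
          class SecureRandomSource implements RandomSource {
              private final SecureRandom rng = new SecureRandom();
              public byte[] nextBytes(int count) {
                  byte[] bytes = new byte[count];
                  rng.nextBytes(bytes);
                  return bytes;
              }
          }

          // Test: fixed bytes make session keys, padding, and therefore the
          // resulting cryptograms deterministic and assertable.
          class FixedRandomSource implements RandomSource {
              public byte[] nextBytes(int count) { return new byte[count]; }
          }

          class SessionKeyGenerator {
              private final RandomSource random;
              SessionKeyGenerator(RandomSource random) { this.random = random; }
              byte[] newSessionKey() { return random.nextBytes(16); }
          }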

  • @Nerdsown
    @Nerdsown 2 years ago

    Thanks for the video.
    Regarding the dependency injection technique in the second half: I have often used it in the past for other services within the same solution, but I find that I end up re-implementing the service. The re-implementation is always slightly different, leading to bugs in the tests that don't exist in the real code and to missed bugs in the real code - possibly because there was a bug in the service itself, which is valuable to discover.
    Instead I now try to minimise dependency injection and mocking, and run all tests from as high a point as possible. If my test ends up testing another service in the process, what's the harm?
    Suppose your user tried to create a file named "con". It will pass your test and fail in the real code. Would you update your fake system to throw an exception for "con"? When I do file system tests, I use the Windows TEMP directory and clean up after each test. Why not use the real thing? It gives far greater assurance, and I don't have to waste time implementing more test support. File system access is not THAT slow.
    When I write model code behind a UI, I test it by opening the real window UI in the unit test. If there is a binding mistake the test will fail. That is slower but still worth it in my opinion.
    I never write tests that test the UI directly; requirements are too fragile, everything can change on a whim, so instead I test a more stable model behind it. Once I've done that, if we find a bug in manual testing we can be confident it is in the usually thin UI level.

  • @howtocodewell
    @howtocodewell 3 years ago +1

    Great talk. Much appreciated

  • @spell_smith_art
    @spell_smith_art 2 years ago

    That helps a lot. Thanks!
    So just a practical question to check that I understand this correctly. I've always had trouble testing DAOs. So you're saying that I should create a fake/mock database system and plug the DAO interface into that for unit tests. Then for integration testing I would plug in the real thing? Do I ever test the DAOs that connect to the real database?

    • @ContinuousDelivery
      @ContinuousDelivery  2 years ago +1

      Yes, it can be a MUCH simpler abstraction than a DB. I usually create a "Repository" or a "Store" that I pass things into and use to find things. I can fake that for most testing that isn't related to storing things. You can test the DAO as separate integration tests. In a good design the "Store" or "Repo" can be pretty generic, so the testing can be generic too, so you end up having a few simple, generic store test cases, and lots of other cases, testing the details of your system where you don't care about the details of storage.
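      A sketch of the generic "Store"/"Repository" shape described in this reply (Java, illustrative; the real interface depends on your domain):

          import java.util.HashMap;
          import java.util.Map;
          import java.util.Optional;

          interface Store<K, V> {
              void put(K key, V value);
              Optional<V> find(K key);
          }

          // In-memory fake for the bulk of tests that don't care about storage.
          class InMemoryStore<K, V> implements Store<K, V> {
              private final Map<K, V> map = new HashMap<>();
              public void put(K key, V value) { map.put(key, value); }
              public Optional<V> find(K key) { return Optional.ofNullable(map.get(key)); }
          }

          // The DAO-backed implementation of Store gets its own small set of
          // generic integration tests against the real database.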

  • @ChrisWrightGuitar
    @ChrisWrightGuitar 3 years ago +2

    I feel like this is where "clean architecture" (a Bob Martin-ism) comes in, which I've been trying to adopt more since I saw some of Brandon Rhodes's talks on the topic. Essentially, your business logic still isn't separated enough from I/O, even in your improved example. Instead of faking a file system, your business logic shouldn't concern itself with reading or writing the file at all, and should instead just process simple, memory-accessible data. Clean architecture suggests that I/O, databases, etc. should be at the very outer level of an application, not buried away at the bottom. You may then decide that you should open a file to get that data, and write that data back out again, and of course that may need its own tests, but those tests can focus solely on the actual file I/O and not on the business logic of processing the data; more crucially, you don't need to deal with them when testing the business logic.
    I'd be interested to hear your thoughts on clean architecture, maybe a good video topic!
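    A sketch of the separation this comment describes, in Java: the business logic takes plain in-memory data and never touches a file (illustrative names):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;
        import java.util.stream.Collectors;

        // Pure logic: trivially unit-testable, no I/O anywhere.
        class LineProcessor {
            List<String> process(List<String> lines) {
                return lines.stream().sorted().distinct().collect(Collectors.toList());
            }
        }

        // Outer shell: does the I/O and nothing else, so its tests (if any)
        // cover only reading and writing, not the processing rules.
        class FileEdge {
            void run(Path in, Path out) throws IOException {
                List<String> lines = Files.readAllLines(in);
                Files.write(out, new LineProcessor().process(lines));
            }
        }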

  • @dosomething3
    @dosomething3 3 years ago +2

    People like you are prophets in our technology civilization

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +3

      Thank you, maybe I should get robes 🤔 🤣 🤣 🤣

  • @mrspecs4430
    @mrspecs4430 3 years ago

    When I'm not supposed to write to a database in my test, how do I make sure the DB access behavior is correct?
    It is not a good idea to mock an interface/library that you don't own, because you can't know what the exact behavior is and whether it might change with future dependency updates, right?

  • @andreaszetterstrom7418
    @andreaszetterstrom7418 3 years ago +1

    So say I want to test code that communicates with some other system over HTTP. I can easily write an interface that makes sure I receive what I expect when calling different routes and that I handle it correctly. But how do I test the interface itself? How do I know that when I send a command I actually send the correct HTTP data? Or that I correctly receive HTTP data from the external system? I guess I'd have to write a complete standalone emulator of the external system?

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      Yes, I'd write a stub to represent the external system as part of my acceptance testing of the component. It isn't as complex as it sounds in practice. No real logic, just something that you can talk to with some data (from a test-case) that can translate some instructions and parameters into an HTTP message, or something that takes the HTTP output from your system to an external third-party, and translates it into something useful for your tests.
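      A sketch of such a no-logic stub, using the JDK's built-in com.sun.net.httpserver package; the route and canned payload are illustrative assumptions:

          import com.sun.net.httpserver.HttpServer;
          import java.io.IOException;
          import java.net.InetSocketAddress;
          import java.nio.charset.StandardCharsets;

          class ExternalSystemStub {
              private HttpServer server;

              // Serves a canned response supplied by the test case.
              void start(int port, String cannedResponse) throws IOException {
                  server = HttpServer.create(new InetSocketAddress(port), 0);
                  server.createContext("/quotes", exchange -> {
                      byte[] body = cannedResponse.getBytes(StandardCharsets.UTF_8);
                      exchange.sendResponseHeaders(200, body.length);
                      exchange.getResponseBody().write(body);
                      exchange.close();
                  });
                  server.start();
              }

              void stop() { server.stop(0); }
          }

      A test starts the stub with data from the test case, points the system under test at localhost, exercises it, and asserts on the result (or on what the system sent to the stub).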

  • @tldw8354
    @tldw8354 2 years ago

    I like this channel too. I would like to ask about a difficult testing scenario.
    How can I test a large online shop that has a high frequency of user interaction? The case is that randomly, and very seldom, some guest ends up owning the shopping cart page as its author. That is very weird and leads to a break in sales, because the page no longer appears for all other customers. The error logs don't give any good hint, and mass testing is also somehow impossible. It's all about trying to replicate a scenario that reproduces the rare bug, but how to achieve that? I don't have any clue how to reproduce it. The only thing I could do was build a monitoring system to continuously check the current state and sound an alert if the error occurs, so that it can be fixed in almost no time. But that is not a solution; it's a helpless workaround.

  • @BlazPecnikCreations
    @BlazPecnikCreations 3 years ago

    Fantastic video!
    Is there any way to unit-test large CRUD apps?
    One of our projects is basically 80% query builders - it's the meat and potatoes of the app.
    If we mock the database layer away, then we aren't really testing anything?

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +2

      Well, you could test the queries that were built in isolation from the DB - are you creating the queries that you expect to? You could perhaps separate the code that actually invokes the query from the code that constructs it.
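      A sketch of that separation (illustrative Java; a real builder would use bind parameters rather than string concatenation):

          class OrderQueryBuilder {
              // Constructs the query; the code that invokes it against the DB lives elsewhere.
              String findByCustomer(long customerId) {
                  return "SELECT * FROM orders WHERE customer_id = " + customerId;
              }
          }

          // A unit test can then assert on the generated SQL with no database at all:
          // assertEquals("SELECT * FROM orders WHERE customer_id = 42",
          //              new OrderQueryBuilder().findByCustomer(42));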

  • @kennethgee2004
    @kennethgee2004 1 year ago

    Except unit tests are supposed to test the edge cases. AUT is not for smoke testing, but to simulate the user experience and see if the experience will break the design. How one handles things like writing to files is important. Do you overwrite the file? Do you append? What happens if the file is big? Here is the costly discovery: when AUT does find the error at the edges - or worse, production users do - you have to add more handling procedures and delay the release or write patches. Again, behavioral design might have caught that, but full engineering most definitely thought of writing to files, or any edge, as a potential failure. Your own example shows how just one principle of design fails. Behavioral design will not show you when you missed a comparison test, but unit testing will, and TDD will shine greatly in that instance.

  • @joesharp3580
    @joesharp3580 2 years ago

    Fantastic channel

  • @gilmijar
    @gilmijar 3 years ago +1

    I love the T-shirt!

  • @Ownermode
    @Ownermode 3 years ago +6

    I do not understand why abstracting away your database is so common for TDD. A lot of logic ends up in the database (constraints, uniqueness, etc.). Don't you want to test your SQL queries?
    By wrapping your database with a wrapper you are actually testing nothing. What if you want to test that a user can't have duplicate email addresses? That adds extra logic to your wrapper, which is going to start to look like a function that mimics a lot of constraint logic.
    Now you have complex database wrappers, database migrations, and your actual database functions/methods to manage. Maybe you could even say that your wrapper starts to mimic your migration files. New migration == new wrapper logic.
    Especially when your query gets complex, your wrapper logic loses its value, in my opinion.
    With simple external systems I do understand this logic; filesystems often already have a library that can create temp files for testing. But with databases I just don't seem to get it.

    • @frank-michaeljaeschke4798
      @frank-michaeljaeschke4798 3 years ago +3

      Wrapping makes sense, as even when talking to a database so much could go wrong (a flaky network between your application and the database server, firewalls that drop the connections to your DB that you hold in a connection pool, DB outages of multiple hours, etc.). Those are only a few of the cases I have encountered at a customer in the last few years.

    • @F.a797
      @F.a797 3 years ago +1

      You are raising a great point. I think that - in this case - it is better to have an in-memory database that tries to mimic your production database as much as possible.
      As you pointed out in the end, it depends on the complexity of the external system.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +3

      I still think that creating the abstractions that I describe in the video will lead to a better design. In the end this is about better 'separation of concerns' which is a hallmark of good design.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +9

      Well, because you can't write 'unit tests' that talk to a DB - by most agreed definitions, they are no longer 'unit tests' at that point; they are integration tests, and those are focussed on different things.
      I would test my SQL queries in integration or acceptance tests, but I don't want to confuse testing them with testing the logic elsewhere.
      If I write unit tests as in-memory tests of my code (how they are usually defined) then I can run 10s or 100s of thousands of those in minutes. As soon as I write tests that talk to DBs or File-Systems or whatever, I am probably down to 10s or 100s per minute. This is WAY TOO SLOW for proper feedback and control.
      To your last point, this is clearly a design choice. I confess that I am not a big fan of putting too much logic into the DB - it is the wrong place. It comes from the assumption that the DB is an integration point that is shared between applications, which I think is widely regarded as an anti-pattern in large-scale and high-performance systems (which is where I do a lot of my work). At which point, my choice of where to put the logic for "can't have duplicate email addresses" is probably going to be built into my app, or service, or logic rather than into the data-store. So now I am back to being able to test it in memory if I need to.

    • @frank-michaeljaeschke4798
      @frank-michaeljaeschke4798 3 years ago +1

      @@ContinuousDelivery I think the idea of the abstractions is somewhat similar to ports in a Hexagonal architecture or Onion architecture ( en.wikipedia.org/wiki/Hexagonal_architecture_(software) )
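      A sketch of the "can't have duplicate email addresses" rule living in the app rather than the data-store, as described above, so it is testable in memory (Java, illustrative names):

          import java.util.HashSet;
          import java.util.Set;

          interface UserStore {
              boolean emailExists(String email);
              void save(String email);
          }

          class Registration {
              private final UserStore users;
              Registration(UserStore users) { this.users = users; }

              boolean register(String email) {
                  if (users.emailExists(email)) return false; // duplicate rejected
                  users.save(email);
                  return true;
              }
          }

          // In-memory fake: thousands of cases like this run in milliseconds;
          // the real store can still keep a unique index as a backstop.
          class InMemoryUserStore implements UserStore {
              private final Set<String> emails = new HashSet<>();
              public boolean emailExists(String email) { return emails.contains(email); }
              public void save(String email) { emails.add(email); }
          }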

  • @porky1118
    @porky1118 3 years ago +1

    9:15 I don't think it's in general a good idea to remove file interactions from unit tests.
    If it's just short strings or a few lines, sure. But if I really need to read in larger data formats, I don't want to add them into my code, but rather have them in separate files.
    Otherwise I might even mess up the format I want to parse (for example, when the parsed format and the language I use both support strings surrounded by quotes and escape characters, I can't just write the content of a file down directly).
    I'd prefer to have them in real files. Maybe they could even be inlined into the binary at compile time and not really be read at runtime, but I don't think this would make such a huge difference.
    Besides, I'm not sure that writing a file system abstraction, which can be used everywhere a normal file can be used, won't cause other problems and make some of the code far more complicated and maybe even less performant. Java is probably an exception where this works well, and only because the performance-reducing features are activated by default (in this case virtual method dispatch).

    • @PMA65537
      @PMA65537 3 years ago +1

      I suggest making a new directory for each test, copying the input files into it (from your repo), and creating the output file there so it can be compared against the expected result. If it's wrong, you've got an output file to look at.
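      One concrete way to do this in Java is JUnit 5's @TempDir, which creates a fresh directory per test and deletes it afterwards (the file names and the inlined stand-in for the code under test are illustrative):

          import org.junit.jupiter.api.Test;
          import org.junit.jupiter.api.io.TempDir;
          import java.nio.file.Files;
          import java.nio.file.Path;
          import java.util.List;
          import java.util.stream.Collectors;
          import static org.junit.jupiter.api.Assertions.assertEquals;

          class SorterFileTest {
              @Test
              void sortsLinesInFile(@TempDir Path dir) throws Exception {
                  Path input = dir.resolve("input.txt");
                  Path output = dir.resolve("output.txt");
                  Files.write(input, List.of("b", "a"));

                  // Stand-in for the code under test: sort input into output.
                  Files.write(output, Files.readAllLines(input).stream()
                          .sorted().collect(Collectors.toList()));

                  // On failure, log 'dir' so the output file can be inspected.
                  assertEquals(List.of("a", "b"), Files.readAllLines(output));
              }
          }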

  • @Adam_Lyskawa
    @Adam_Lyskawa 3 years ago +1

    How should I test a feature that can be described as "should calculate the statistics for all matching entries in a DB in less than 1 second" without a real DB? How should I test a feature working on a file system that can be described as "should calculate the statistics for all entries within a subdirectory"? The abstraction would have to cover advanced FS properties like access permissions and special entries such as symbolic links. How should I test an edge working with an external API that has bugs my code must work around? For now the only way I know of doing that efficiently is to just use the real things.

  • @CarlosVera-jy7iy
    @CarlosVera-jy7iy 9 months ago

    I have a doubt: if in a REST API a controller is considered an edge of the app and should be tested with integration tests, how do you do TDD in these cases?

    • @ContinuousDelivery
      @ContinuousDelivery  9 months ago

      The idea is not to use fine-grained TDD for everything. Do some lightweight "integration tests" if you really feel the need to specify the interactions, but make the code that exposes the API part of the service a thin layer, an adaptor, that translates into meaningful interactions, and use TDD to test those meaningful interactions much more thoroughly. Don't use "integration tests" to test anything but "integration" - they should really be connection smoke tests.

    • @CarlosVera-jy7iy
      @CarlosVera-jy7iy 9 months ago

      @@ContinuousDelivery I am a bit confused. Are you saying that, for the example of doing TDD on a controller, I should create an interface and implement it both in my abstraction (for the TDD) and in the production controller (which will handle the HTTP request), but only do TDD for the abstraction?

    • @ContinuousDelivery
      @ContinuousDelivery  9 months ago

      @@CarlosVera-jy7iy I'd think of this as general defensive design. There is a difference between the service that a component provides and the API to that service, so a good separation of concerns means that we have code to deal with the API calls and different code to deal with the service that those calls represent.
      If you send a service a message, maybe including an item, a quantity and an account number for an order, I could crack the message (the API call) in-line with creating the order, or I could extract the parameters that I am interested in:
      item = getStringParam(msg, "order/item")
      qty = getLongParam(msg, "order/quantity")
      account-id = getLongParam(msg, "order/accountId")
      and then call placeOrder(item, qty, account-id).
      This is better code than in-lining the cracking of the parameters with the placing of orders. Good design says each part of the code should be focused on doing one thing; here we have two things, cracking params and placing orders, and they are at VERY different levels of abstraction, so combining them will very often lead to problems.
      As far as testing goes, the param-cracking helpers, getStringParam and getLongParam in my example, would have been built with TDD in the abstract, which means that for cracking this specific message there is little testing left to do. Does "order/item" map to item, etc.? I may test that with TDD or integration tests, depending on my design and the rest of the system.
      The really interesting bit, though, is the logic in placeOrder, which should now be perfectly testable.

    • @CarlosVera-jy7iy
      @CarlosVera-jy7iy 9 months ago

      @@ContinuousDelivery Thank you very much for the response. Then what I understand is that the idea presented is to keep responsibilities separated and apply TDD where it is valuable, so creating the method for the API of the service that receives the order you mentioned as an example is not where TDD should be applied?

    • @ContinuousDelivery
      @ContinuousDelivery  9 months ago

      @@CarlosVera-jy7iy Yes, at least it is not the point where TDD is most valuable. I would use Acceptance testing to test the integration, configuration and deployment of the TDD-tested pieces.
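      A sketch of what the param-cracking helpers from Dave's earlier reply might look like (illustrative signatures; the reply doesn't define the message type, so a simple Map stands in for it):

          import java.util.Map;

          class Params {
              static String getStringParam(Map<String, String> msg, String path) {
                  String value = msg.get(path);
                  if (value == null) throw new IllegalArgumentException("missing: " + path);
                  return value;
              }

              static long getLongParam(Map<String, String> msg, String path) {
                  return Long.parseLong(getStringParam(msg, path));
              }
          }

          // Built once with TDD (missing param, malformed number, happy path),
          // after which the thin layer that calls placeOrder has little left to test.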

  • @POVShotgun
    @POVShotgun 3 years ago

    How many tests are too many tests? Sometimes I find myself writing functions and finding that these functions need more functions, and all the functions need to be tested. Or is that the way it goes?

    • @Nerdsown
      @Nerdsown 2 years ago

      Do not test every function. Test the peripheries of your system, its API.

  • @thomashanson3476
    @thomashanson3476 3 years ago

    Opinions on Haskell?

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      You may want to watch tonight's video, "OO vs Functional" 😉 😎

  • @thetrickster42
    @thetrickster42 3 years ago

    Ask the SEL4 guys about TDD :)

  • @DavidDavida
    @DavidDavida 3 years ago

    Isn't that why we have sandboxes?

  • @bisschops99
    @bisschops99 3 years ago

    Good videos, but please remove or lower that bell sound.

  • @HoD999x
    @HoD999x 3 years ago +2

    I think it's not very efficient to create "fake edges" for everything. Whenever we write code just to enable testing, we create more logic that could behave unlike its real counterpart. The test might fail but everything is fine; the test might say everything is fine but IRL it fails.
    Tests running in real-world conditions are much more useful than tests that run in fake worlds
    (my unit tests access files and databases)
    For example, what if my file is big? The in-memory test will fail; IRL it will work.
    Or what if there is a speed requirement? In-memory things can be fast, but IRL they will fail.

    • @Nerdsown
      @Nerdsown 2 years ago

      Agree 100%

    • @yuriy5376
      @yuriy5376 4 months ago

      Technically that would be an integration test. And you're right - these tests are underrated and should be preferred over unit tests.

  • @brianschalme1457
    @brianschalme1457 3 years ago +4

    Great video Dave, thank you for this. I encourage mocking collaborating classes, but your video gave me ideas for mocking I/O.
    Now where can I get one of those t-shirts? #SurfArakis

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +2

      Cool, thanks.
      I think I bought mine from here bit.ly/3rEUbc7

  • @xcoder1122
    @xcoder1122 3 years ago +2

    And then you end up with a test that confirms your code works correctly with a hash map in memory, while the real code fails completely as soon as it must deal with real files, because you've never tested the real thing. Would you want to ride in an automated taxi driven by code that was only tested in a traffic simulation and never on a real road? I fail to see the problem with testing with real file access. Just create a temp directory somewhere for all your tests and delete it at the end of the test phase - unless a test fails, in which case you may want to look into that directory to find out why exactly it failed. But then your test may also fail because of a storage failure? Yes, it can. But your test may also fail because of a bad RAM chip or an overheated CPU. Go and fix that hardware failure and run the test again.

  • @gooseberry41
    @gooseberry41 3 years ago +1

    Faking away upstream and downstream systems is such a bad idea. Things like contracts of interactions are never reliable enough. You can make any "proof" that your system is working as expected, such as contracts (based on what?) or going over all possible inputs and outputs, but in the end the only way to say it for sure is to test it. And if you want to release what you've done to production, you should test it on a production-like environment, which, yes, would include all external dependencies.
    Imagine the simplest possible case, where your app sends a request to another app expecting a certain response. How would you write a test that includes that interaction? How can you write the expected behavior? Expected behavior is not something that you can take from the contract; you have to see how the application actually collaborates with others and evaluate whether this is what the end user would expect. And real interactions might be much more complicated: you might not know how many times your application has to interact with the others, or what types of requests you have to send, and the other application might also call yours at any moment inside your processes, with an unpredictable pattern. Can you be sure that the interaction you've written in tests has anything to do with reality? Can you be sure that you and the team making apps on the other side understand the contract the same way?
    Not taking external dependencies into account is just a lack of responsibility. I saw how it happens - two teams blaming each other, trying to work out which one was at fault and did not satisfy the contract. Looks incredibly stupid, in my opinion.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +3

      I guess we will have to agree to disagree. On your question, "how can you write a test that includes an interaction" - well, you create a simple simulation of the interaction. In fact, that is the only way that you can test the interactions, unless you can populate the external system with the data that you need for the test, and that can get complicated. My statement is not theoretical; I have done this for real-world complex systems.
      So we can certainly decide that we disagree, but you can't state that this is impossible, because I, and other people, have done it.
      This is not about "not taking external dependencies into account"; rather, my position is that you can't test your system properly if you only test with them in place.

    • @Greedygoblingames
      @Greedygoblingames 3 years ago

      To be fair I think he's talking mainly about a strategy for unit tests, rather than integration tests.

    • @gooseberry41
      @gooseberry41 3 years ago

      @@ContinuousDelivery No, I won't state that it is impossible. I know how to do TDD and cover everything with unit tests, and test everything without having any dependencies. My point is different - it's not enough to make a release. The same goes for ignoring external system dependencies in integration tests. In fact, in my current project our teams are using both approaches at the same time - testing with dependencies and with mocking them. The latter is used as a first line of defence, and also helps people write and debug tests (and I'm still not certain whether it is necessary, or just a complication). Testing with dependencies on a production-like environment does indeed have a lot of problems with test instability, but it also gives much more valuable feedback and regularly finds serious problems which could not be found otherwise, and it is mandatory for any release.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +2

      @@gooseberry41 Well, it is enough to make a release. I, and many other orgs, have done that. I did it for a fairly significant finance system that integrated with somewhere between 20 and 30 external systems. We didn't do ANY end-to-end testing with these systems. We did other stuff to protect us, and them, from changes to these interfaces. We architected our system so that it was defensive at these boundaries. These things were intentional, to allow us the freedom to avoid E2E testing.
      I agree that you need to defend these interfaces, but IMO end to end testing is a fairly poor way to do that. You can, IMO, do better than that.

    • @jimiscott
      @jimiscott 3 years ago

      @@ContinuousDelivery As someone providing an integration platform I certainly agree with this statement, BUT faking all your calls doesn't work either.
      Some things need to be unit/integration tested (the line here is blurry) with an actual solution/service/etc. For example, if you need to capture the auto-identity from a SQL insert, the mechanism to do this is different in Oracle than it is in MSQL. If you need to hit both types of databases you need to do the hard work. Similarly, web services implement authentication, paging, and error handling differently. Yes, you can fake these, but sometimes you also need to actually make an invocation. FTP/SSH the same..... Files should be written and can be written (but these are also the least likely to fail).
      There are certainly tools to assist with the different edges, but sometimes you have to bake your own....if you deem the edge(s) important, then actually test it.

  • @ebrelus7687
    @ebrelus7687 2 years ago

    I heard "covid" instead of "covered". Wow, I got brainwashed too.

  • @DavidDavida
    @DavidDavida 3 years ago

    Do U Speak Dave? iDo

  • @fabianhachenberg860
    @fabianhachenberg860 3 years ago +1

    I'm not convinced that everyone should implement their own imitation of a filesystem for test purposes. If you amass so much test code doing actual interaction with the file system that it matters for the total runtime of your tests, you are probably in the situation of having to test for the idiosyncrasies of real file systems (like asynchronous reads and writes), in which case it's probably a horrible task to imitate that behaviour in your own class.

  • @PelenTan
    @PelenTan 3 years ago

    Do you even listen to what you're saying??? You just said that you can get two different results from testing, and that if unit tests succeed but "end-to-end" tests fail, the problem isn't the code but the user. This is so backwards I'm not sure where to start. I don't see how any sane person could care if the unit tests fail but all the end-to-end tests succeed. You can't even claim "security", since when you are faking all your data, all your security tests are by default fake as well.
    You want us to throw away thousands of years of doing things for this TDD. It's not up to me to prove it is the way; it is up to you to show that there is any real-world value to it. And everything I've seen and heard shows you triple your work for zero gains. ROE is _negative_. Prove yourself.

    • @tomvahlman8235
      @tomvahlman8235 8 months ago

      @PelenTan
      This file example may not be aimed at the complex tasks professional devs have to cope with in the real world, e.g. a banking system, working on a batch job for up to two years on Kubernetes and integrating the filesystem to talk to mainframe systems with different character sets than our own. When testing complex interactions with filesystems from JUnit, you probably use functionality from the framework, e.g. Spring Batch, to test the handling of files, rather than writing your own fakes. When testing against a database in JUnit, however, the preferred way is to create a mock facade, focusing on driving the behaviour of the business logic with TDD - this keeps things simple - and to use Testcontainers etc. to test the database queries. I am a beginner in this area though, and have been tearing my hair out trying to make Testcontainers work with the help of Google :)