Real Example of a Deployment Pipeline in the Fintech Industry

  • Published: 29 Aug 2024

Comments • 134

  • @tuV13ja 3 years ago +50

    Have you thought about making a video with an example of project structure, showing the folders, where each test goes, what goes in the repository and what goes only on the CD server? I think that would complete the full picture 🙏🙏🙏 I can share your video with my students. Love your work, Dave! Hi from Argentina :)

    • @andile5945 2 years ago

      ruclips.net/video/q0YFuAfykWU/видео.html hope this helps! BW!

  • @brintmontgomery8323 2 years ago +2

    This was a fantastic example. It really clarified the CI pipeline for me. During my lunch hour, I've watched almost every video up to this one, and for now this is probably my favorite one. Thanks for such helpful content.

  • @danielwilkowski5899 1 year ago +1

    This is pure gold.

  • @PraneetCastelino 3 years ago +9

    I have learned a lot from this channel.
    Keep it up.

  • @chulucninh2083 3 years ago +16

    Hi Dave, can you give your thoughts on monorepo vs multirepo in the context of continuous delivery? What is the best practice for maintaining the codebase of multiple services with CI/CD?

  • @marcin2x4 3 years ago +2

    This channel needs a bigger audience! Awesome t-shirt ;)

  • @gronkymug2590 1 year ago +1

    Hi Dave. I would really like to see a pipeline which looks more like this: CI => canary deployment to production => automated tests in prod for canary users (tests are: acceptance, integration, etc.) => feature-flagged canary releases (allowing only a subset of users at a time to test the feature itself, usually testers first, then beta testers, then 10% of normal users, and so on).

    • @ContinuousDelivery 1 year ago

      Yes, it is difficult to find orgs that are willing to show their real-world examples.

  • @markemerson98 3 years ago +5

    Great book (Continuous Delivery Pipelines).

  • @arturk9181 3 years ago +4

    Love this channel

    • @ContinuousDelivery 3 years ago +1

      Thanks 😊

    • @arturk9181 3 years ago

      @@ContinuousDelivery No, thanks to you for the great videos and sharing the knowledge

  • @ForCeGR 2 years ago

    Their DSL for acceptance tests looks really nice, reads like a book!
    Nice video!

    • @ContinuousDelivery 2 years ago +1

      Yes, TransFICC are what I think of as a "2nd-generation LMAX company": all of the founders worked at LMAX and so were heavily influenced by the approach that we created there. TransFICC are doing a really nice job, and building some great software.

    • @michaelszymczak4245 1 year ago +1

      Thanks, the video is already one year old, I hope that they look much better now ;) On a serious note, in our domain we spend a lot of time dealing with various protocols and we try to use this experience to design our internal protocols. Nicely defined protocols have clear layers of abstraction that separate unrelated concerns. This turned out to be a great help when writing DSLs and test scenarios, as all you have to do is remember to stay on the same level of abstraction and use the protocols as test boundaries (plus the usual stuff, like the arrange-act-assert pattern, deterministic and isolated scenarios, a good domain model, etc.).
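      To give a rough flavour of that style, here is a made-up sketch (every name is invented; this is not TransFICC's actual DSL). The test speaks only in domain terms at one level of abstraction, and the protocol-level plumbing hides behind the drivers:

        import org.junit.jupiter.api.Test;
        import static org.junit.jupiter.api.Assertions.assertEquals;

        class RfqAcceptanceSketchTest {

            // Hypothetical drivers: in a real acceptance-test DSL these would
            // exercise a deployed system through its external protocol boundaries.
            private final FakeVenue venue = new FakeVenue();
            private final TradingClient client = new TradingClient(venue);

            @Test
            void clientReceivesQuoteAfterSubmittingRfq() {
                client.submitsRfq("XS1234567890", 1_000_000);
                venue.receivedRfqFor("XS1234567890");
                venue.respondsWithQuote("101.25");
                client.hasReceivedQuote("101.25");
            }

            // Tiny in-memory stand-ins so the sketch compiles; they only exist
            // to show the shape of the language, not the real drivers.
            static final class FakeVenue {
                private TradingClient subscriber;
                private String lastRfqInstrument;
                void onRfq(TradingClient c, String instrument) { subscriber = c; lastRfqInstrument = instrument; }
                void receivedRfqFor(String instrument) { assertEquals(instrument, lastRfqInstrument); }
                void respondsWithQuote(String price) { subscriber.onQuote(price); }
            }

            static final class TradingClient {
                private final FakeVenue venue;
                private String lastQuote;
                TradingClient(FakeVenue venue) { this.venue = venue; }
                void submitsRfq(String instrument, long quantity) { venue.onRfq(this, instrument); }
                void onQuote(String price) { lastQuote = price; }
                void hasReceivedQuote(String price) { assertEquals(price, lastQuote); }
            }
        }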

  • @maardal 3 years ago

    Those were some really short, readable tests. Good stuff.

  • @tiagodagostini 2 years ago

    I have been watching all your videos over the last 2 weeks or so, and the more I read people's comments and examples, the more I realize that we, as a "development field", have a problem. We frequently think of development as a domain, when in reality it is a function applied across different domains. An example was your statement in this video: "it is a fairly complex system and takes up to 25 minutes to build from scratch". I come from a background where a compilation from scratch takes about 3 hours. Our test datasets had more than 700 GB of data, and the testing as a whole took yet a few more hours (and COSTS quite a bit of money to execute). That raises a point for me: we usually take OUR reality as the reality of development, and that is as much a mistake as when marketing and other departments take their realities and try to shoehorn them into development. There is no such thing as the reality of development; each company and field will have very different conditions that will result in different needs!

    • @ContinuousDelivery 2 years ago

      Of course it is all contextual, but when I said "it was a fairly complex domain" I meant that. Although in the next breath I said "it took 25 minutes to build from scratch", you can't measure the complexity of the system based on that. Tesla do better than that for all of the systems in a car; Google do better than that for 9.5 billion lines of code.
      My point is that you can optimise this stuff WAY beyond what most people think.
      I mostly work with larger, more complex systems these days, and sometimes it is challenging to reduce build and test times, but there are lots of ways that you can. My preference is to avoid using copies of production data for testing, for example. I am only guessing, but if your test datasets have 700 GB of data, my bet is they are probably using copies of prod data, and this is a very inefficient, and not very well focused, way of testing things and results in slow tests.
      I completely accept I may be wrong, this is only a guess, but it is an example of the problems that I see people face testing big systems all the time.

    • @tiagodagostini 2 years ago

      @@ContinuousDelivery It is production data because there is statistical and AI inference about the data in the product, and we need to test whether, under a real workload, it keeps within certain boundaries (and Tomosynthesis exams are a few GB each). Sure, there is probably a long path of optimization, but you need to survive and hold on while you walk that path. Each domain has its own minefield to navigate. I bet the SpaceX teams would love it if testing a rocket engine was cheaper :)

    • @ContinuousDelivery 2 years ago

      @@tiagodagostini Sure, but most orgs I see don't think that feedback performance is important enough to work hard to minimise it, when it is. In your case I'd be looking at the value that these tests add, and how you could get that value in other ways. Testing is not the same as training for an AI, so what is this testing telling you that the training didn't? If you are doing this to get inputs into the rest of your system, so that you can test that, I'd be wondering about re-architecting the system so that I could isolate the AI parts more and then simulate their outputs, rather than the, presumably much richer, more complex, inputs.
      Again, I don't assume that these are sensible in your context, I don't know your context, I am merely trying to give prototypical examples of other ways that you could cope with that problem.
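      As a sketch of the "simulate their outputs" idea (the interfaces here are entirely hypothetical; the real system's design is unknown): if the inference step sits behind a narrow boundary, the downstream logic can be tested with canned scores instead of multi-GB exams.

        import org.junit.jupiter.api.Test;
        import static org.junit.jupiter.api.Assertions.assertTrue;

        class ReferralPolicyTest {

            // Hypothetical boundary around the AI inference step.
            interface Classifier {
                double malignancyScore(byte[] exam);
            }

            // Hypothetical downstream logic that consumes the model's output.
            static final class ReferralPolicy {
                private final Classifier classifier;
                ReferralPolicy(Classifier classifier) { this.classifier = classifier; }
                boolean shouldRefer(byte[] exam) { return classifier.malignancyScore(exam) >= 0.8; }
            }

            @Test
            void refersPatientWhenScoreCrossesThreshold() {
                // Simulate the model's output instead of pushing multi-GB exams
                // through real inference; model quality is checked elsewhere.
                Classifier cannedHighScore = exam -> 0.93;

                assertTrue(new ReferralPolicy(cannedHighScore).shouldRefer(new byte[0]));
            }
        }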

  • @thaianle4623 3 years ago +2

    Wonderful content as always, Dave. This one is really helpful for seeing something that is considered ideal in practice. I have a couple of questions:
    1. How is the commit test different from a unit test? How come Michael passed the unit tests but failed the commit tests?
    2. Is the commit test supposed to run on every single commit, or on a batch of commits?
    3. What is the scope of the commit test (only the service whose repo received the commit, or more)? My observation is that it doesn't seem to test the whole system; otherwise, how come it runs faster than the integration tests?
    4. Is one developer supposed to write all the tests (including acceptance tests and integration tests)? I suppose the moment the commit hits the repo, all the tests should be available.
    I know this is a lot to ask but I would really appreciate it if you could help me understand better :)

    • @TARJohnson1979 3 years ago +3

      1: The commit test is the pre-push test, plus a couple of extra stages like packaging and uploading artefacts. The break in the commit test was done deliberately for illustrative purposes :)
      2: The commit test is done on every commit pre-push, and for all commits pushed since the last commit build in CI. This is usually a single commit, but occasionally there's some batching involved.
      3: It runs all our unit (by which I mean fast, isolated) tests, across all modules. We have a monorepo. The integration test job runs a different set of slower tests.
      4: We pair, but yes, most tests are written by developers. We have QAs, and they're also involved in writing tests, particularly at the acceptance level.

    • @thaianle4623 3 years ago

      @@TARJohnson1979 Thanks a bunch for the clarification. Mind if I ask what the team size is and what the most important factors are for all of this to work?

    • @TARJohnson1979 3 years ago +3

      @@thaianle4623 We have 3 dev teams, each with 4 devs (plus QAs, BAs, SREs and all the rest), so we're pretty small.
      I've seen this approach work almost verbatim with a team of 25 devs - ie, twice this size - I'm sure it would have difficulty scaling to 500 people, but I don't know at what point it would start to creak. I have no interest in finding out first-hand. :)
      I think the most important factor in making this work is trust. Everything else flows from that.

    • @thaianle4623 3 years ago

      @@TARJohnson1979 Appreciate the info. Small teams can definitely still do wonders :) Assuming that each team can maintain this high and consistent quality, as long as we can define good boundaries and a certain level of tolerance, scaling is possible (though that assumption is not easy to achieve, I suppose).

    • @ContinuousDelivery 3 years ago +5

      Jez Humble and I defined the idea of “Commit Tests” in our book “Continuous Delivery”. The idea is that these are tests that you run at the point of commit. The aim is to give fast feedback and high confidence - to fail fast if there is a problem. The vast majority of these tests are unit tests, best written via TDD, but it is often helpful to also include some other kinds of test that can still run fast, but help to increase your confidence in the system as a whole. The tests that Michael and Judd ran before they committed were the same tests that would run on commit. They ran them locally first to check they wouldn’t break anything. Post-commit, they are run again, because this is the only point that is definitive in terms of what will end up in production.
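      As an invented illustration of those "other kinds of test that can still run fast" (not code from the video): a commit-stage check that raises whole-system confidence without deploying anything, for example validating that the wiring/configuration is consistent.

        import org.junit.jupiter.api.Test;
        import java.util.List;
        import java.util.Set;
        import static org.junit.jupiter.api.Assertions.assertTrue;

        class ConfigurationWiringTest {

            // Hypothetical example: every service named in the deployment
            // config must correspond to a component the build actually knows
            // about. Runs in milliseconds, yet catches whole-system mistakes
            // (typos, renamed services) long before the acceptance stage.
            @Test
            void everyConfiguredServiceIsAKnownComponent() {
                List<String> configured = List.of("pricing", "risk", "audit");
                Set<String> known = Set.of("pricing", "risk", "audit", "reporting");

                assertTrue(known.containsAll(configured),
                        "Unknown service in deployment config: " + configured);
            }
        }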

  • @harag9 3 years ago +3

    Excellent video, but I'm still confused about what "acceptance" tests are... are they different from unit tests? Still plowing my way through your vids though.

    • @ContinuousDelivery 3 years ago +3

      Yes, they are different. Unit tests test small 'units' of code to basically check that the code does what the developer thinks it does. Acceptance tests test a deployed version of the system from the perspective of an external user. Check this video for more on acceptance tests: ruclips.net/video/JDD5EEJgpHU/видео.html

    • @harag9 3 years ago

      @@ContinuousDelivery Thanks for the response and the link, that was the next video for me to check out. :)

  • @gameconner 3 years ago +4

    Hi there, Mister Farley, thanks for the video!
    If I might give a suggestion, could you perhaps use another background or go back to the one you previously had?
    The amount of (fast) movement combined with the constant hard contrast swapping when the 'triangles' form is quite distracting and mentally draining (for me personally, that is).
    Thank you for all the hard work you clearly put into all your content. I and my pupils have learned a lot from watching you discuss various software topics.

    • @Vaisakhreghu007 3 years ago

      I totally agree... the content is top-notch quality but the graphics are stone-age quality...

  • @videojeroki 3 years ago

    Thanks for sharing this real-world process.
    It would be great if you could talk about the configuration of the 'agents' that run the pipelines (especially with unusual machines like ARM64 embedded devices).

  • @rudySTi 3 years ago

    Thank you for the video!

  • @androkles04 3 years ago

    Very cool and informative video.

  • @vjkkjv 1 year ago

    An instructive video, thank you.
    I have a question please: Michael has committed a change which caused the build to fail. Whilst the build related to Michael's change was running, Developer-X and Developer-Y commit their own changes. So, on the trunk, there are Michael's commit, Developer-X's commit and Developer-Y's commit. What happens to Developer-X+Y's commits when Michael's commit has caused the build to fail?
    Does the CI process stop when the build fails? And Michael then fixes the bug and commits to trunk, at which point all three commits - Michael's bug fix commit and Developer-X+Y's commits - are evaluated together with CI?

    • @ContinuousDelivery 1 year ago

      Other devs can continue to commit, but the build will continue to fail until Michael fixes the problem he introduced. The SW is not considered releasable until all the tests pass. Michael, having committed a failing change, has two options, either fix the problem or revert his change to keep the route to production open for everyone else.
      I have a video on Continuous Integration etiquette: ruclips.net/video/Xl62gQpAl1w/видео.html

  • @ribaker822 1 year ago

    Thank you for sharing this. About the trunk-based approach, are there specific benefits to working solely off the main branch as opposed to feature branches and subsequently merging? Thanks

  • @edgunn1395 3 years ago +1

    Great video, thank you! One question, is TransFICC using a monorepo?

  • @fmkoba 3 years ago +1

    Sorry if I missed this or if it was implicit, but do they rule out peer code reviews? I understand how that could fit easily into that pipeline, but like many here in the comments, I was thrown off by the embargo system. Maybe in the case of an embargo the subsequent code review could be bypassed to expedite the process and "unpoison" the well.

    • @TARJohnson1979 3 years ago +5

      Transficc uses pair programming instead of code review, but if adapting this approach for a code review shop, I would simply comment that a revert shouldn't require a review.

  • @gamer-gw9iy 3 years ago +2

    Do you have any examples of CI pipelines? Thank you for the great video👍

    • @ContinuousDelivery 3 years ago +5

      The first stage of this one, and all true pipelines, is CI. The commit stage is CI.

    • @antoruby 3 years ago

      I don't know if this was a joke or not, but the answer is on point!

    • @gamer-gw9iy 3 years ago

      @@antoruby 😂 I had only started watching the video when I commented this

  •  3 years ago +3

    Hi, I love what you share. BTW, I'm a C# fan, and I see the syntax of the tests in this video. I wonder if you could share what library or tool is used to write those? Does it support C#?

    • @ContinuousDelivery 3 years ago +5

      It is not a library or tool; it is their home-built DSL to support their testing. It is an evolution of an approach that we started at LMAX, where I worked with some of the TransFICC founders.

    • @CTimmerman 3 years ago

      Looks like Java to me, written without side effects.

    • @TARJohnson1979 3 years ago

      @@CTimmerman Oh, no, it's full of side effects - keeping track of implicit state for the test. That's actually most of the value it brings. Failing an assertion is a side effect! And an important one too.

    • @CTimmerman 3 years ago

      @@TARJohnson1979 That's handled by the wrapper.

    • @TARJohnson1979 3 years ago

      ​@@CTimmerman OK, now I'm not sure what you mean exactly either by "written without side effects" or "handled by the wrapper" here?

  • @nitzblitz 3 years ago

    I like it, but I think there is scope to improve it a lot further. Essentially, you run what runs on CI from a feature branch, plus new tests, on a short-lived server provisioned for that branch. Once this stage passes, a PR is created into the master branch. The benefit is that you give the dev a chance to run all tests in isolation from others, so even if it breaks, only they are blocked and not the entire team.

    • @ddanielsandberg 3 years ago +1

      What do you mean by "runs on CI"? The reason I ask is that I see this confusion everywhere, where people think that CI is a build server and/or "running tests on a branch". Neither is true.
      They don't need branches to improve anything, because they have the practices, experience and discipline to move faster with higher quality without them.

    • @ContinuousDelivery 3 years ago

      You may like to watch (and probably disagree with) this: ruclips.net/video/v4Ijkq6Myfc/видео.html
      and if you are still not convinced, this: ruclips.net/video/lXQEi1O5IOI/видео.html

  • @ashimov1970 10 months ago

    Great video! Thanks a lot, Dave, for this amazing piece of knowledge and experience that you and TransFICC generously shared with the IT community worldwide. Though I didn't get who writes the acceptance tests at the commit stage - devs or QAs? If it's the former, then what's the responsibility of the QA team in the whole process/pipeline?

  • @mateuscanelhas5662 3 years ago +1

    Building on the test failure case, in conjunction with trunk-based development: suppose the fix to the new commit was a bit more involved, and required some more hours from the developer. How would the commit and build system act in this case? Still embargo, for however long was needed? Temporarily revert the commit?

    • @michaelszymczak4245 3 years ago +5

      Hi, it's Michael from the video ;) I would usually revert the change as soon as possible to unblock the pipeline and then do what I did in the video, which is reproduce the issue locally and make sure it's OK on the second try.

    • @ContinuousDelivery 3 years ago +5

      I prefer to operate a “10 minute rule”. If you can’t commit a fix within 10 minutes, revert your change and work it out off-line.

    • @mateuscanelhas5662 3 years ago

      Makes sense. Thanks for the input, everyone. Keep up the good work.

    • @NoamSoloveichik 3 years ago

      @@michaelszymczak4245
      Was the failing commit reviewed by anyone in some form? Code review/pair? Because I realize you practice trunk-based development but I just can’t get how it works in practice

    • @michaelszymczak4245 3 years ago +1

      Very good point. Yes, we practice pair programming, so most code reviews are done as we code, before the code is checked in. Here is Dave's short video highlighting some of the benefits ruclips.net/video/qtxZoO6R2fc/видео.html

  • @RobertoPrevitera 3 years ago +1

    Great video, as always. As far as I understood, this approach relies heavily on the pre-commit checks that are performed by the developer in the local dev environment: once the changes are committed and something goes wrong in later stages (such as integration tests), the only option is to block any other commits ("embargo"), fix the issue and then proceed.
    I understand this is a rare exception and it's acceptable, given the benefits, but still a question remains: why not choose a more "distributed" approach with every developer branching (short-lived feature branches which can run all the integration tests) instead of pushing directly to the trunk?

    • @ddanielsandberg 3 years ago +2

      In part. Continuous Integration is a human practice where it is expected that the developers pull the latest changes and run the unit tests, lint, checkstyle, findbugs (and component integration tests) before pushing. Those same checks are then run at push, which is called a "commit build" and should give you high confidence that everything works. In CD there are other kinds of tests and checks downstream that take different aspects of the entire system into account.
      As I understand it, the embargo shown here is seldom used. Again, CI is a human practice and a social contract: if the build is not fixed within 10 minutes, anyone can revert the commit that broke it. The key here is to be disciplined and not break things all the time, and the few times it breaks we do whatever we can to get back to a working state. Shit happens, learn from it and move on.
      Part of the issue with feature branches is that it optimizes for individual developers working alone in their corner with headphones on. This is a bad thing. Sure, you can use short-lived feature branches if you must, where (in my view) short means a couple of hours. But if most only live 30 minutes to a few hours, what's the point of branching in the first place?
      Look, I'm just an opinionated a-hole that's tired of arguing with people and also thinks that part of the popularity around FB/PR is an amazing feat of social engineering by the tool-vendors that has even the oil and tobacco companies in awe. So I'm just going to give up and drop this Twitter thread by Jez Humble: twitter.com/jezhumble/status/982988370942025728

    • @RobertoPrevitera 3 years ago +3

      @@ddanielsandberg Thanks. It all comes down to these sentences: "The premise of CI and trunk-based development is that coding is fundamentally a social, team activity." and also "[...] we are striking at one of the central beliefs about what it means to be a "good" programmer."
      I agree on coding being a "social" activity but disagree on it necessarily being a "team" activity, at least not "team" in the sense intended in sports. Coding is (sometimes, not always) a creative task, just like designing a new object or writing a novel. These kinds of activities are notably "single-player" sports, so it makes sense to enable them via a slightly different CI approach.
      In my opinion (and experience) the "good programmer" does not need to be the "rockstar" type nor the "team coach"; they can be a bit of both.

    • @metadexter 3 years ago +1

      @@ddanielsandberg Thanks, that makes a lot of sense to me. The only thing I don’t understand is how you do code review if you’re not doing pull requests? Do you just have to share your screen and ask another teammate to give you feedback on the spot? Because if it’s only on your local machine, they can’t check it out on their machine.

    • @vitasartemiev 2 years ago

      @@metadexter There's a video on this channel somewhere that discusses this. He proposes to have a very short-lived branch dedicated specifically for the pull request.

    • @metadexter 2 years ago

      @@vitasartemiev Yeah that does seem like a decent way to do it. Although I do like the idea of just sharing your screen and doing the review in real-time. I think it might make the code review process more interactive (and possibly quicker).

  • @TheZimberto 2 years ago

    If Michael's commit failed, why would he use an embargo? Surely the last step in a successful commit is a merge, so a failure should not impact anyone else.

    • @ContinuousDelivery 2 years ago

      For true CI, you need to be testing the post-merge version, so the merge can't happen after the testing.

  • @TurkishCowboy-z1d 1 year ago

    How is it possible to write unit tests as good, clean and simple as this? Awesome. Can you provide a GitHub sample repo or a blog post showing this? Thanks

    • @JDLuke 1 year ago

      When you follow a TDD (Test-Driven Development) approach, unit tests like that are the natural result.
      This is because TDD is not merely a testing practice but a tool for designing elegant code in the first place.
      There are lots of videos about this on YouTube and elsewhere, but the core concept is that you write your tests from the perspective of the client of the code in question. You won't write a test that requires 150 lines of setup and 30 mocks to validate business logic that way, because only a fool would do that when starting from scratch.
      Ugly unit tests can be traced back to having written code first, and then trying to figure out how to test it. That's hard, much harder than most realize.
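      To make that concrete, here is a minimal made-up sketch (none of these names are from the video): written test-first, the test reads the way the client of the code wants to use it, and the production code then has to fit that shape.

        import org.junit.jupiter.api.Test;
        import static org.junit.jupiter.api.Assertions.assertEquals;

        class DiscountTest {
            // Written before DiscountPolicy existed, so the API is shaped by
            // what this client needs: no setup, no mocks, one clear assertion.
            @Test
            void loyalCustomersGetTenPercentOff() {
                DiscountPolicy policy = new DiscountPolicy();
                assertEquals(90_00, policy.priceInCents(100_00, Customer.LOYAL));
            }
        }

        // Minimal production code that the test drove into existence.
        enum Customer { LOYAL, NEW }

        class DiscountPolicy {
            long priceInCents(long baseCents, Customer customer) {
                return customer == Customer.LOYAL ? baseCents * 90 / 100 : baseCents;
            }
        }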

  • @shilohaendler6360 3 years ago +2

    Really interesting video, thank you! One question though: when the commit failed, does the pipeline revert it from master? After all, in that case, if the commit is still there, the software is not in a releasable state until the fix (let’s set the embargo feature aside for now).

    • @ContinuousDelivery 3 years ago +5

      No, the pipeline is running on master. When the build fails you commit a fix or revert the change quickly. That keeps everything running. The embargo is not used every time you break something. It is there for exceptional problems where the breakage is, for some reason, difficult to revert. We included it on the video to demonstrate its use, not because it was necessary in this example.
      You can see more on my advice on the way to use CI in this video: ruclips.net/video/Xl62gQpAl1w/видео.html

    • @shilohaendler6360 3 years ago +2

      @@ContinuousDelivery Thank you!! I am trying to understand what counts as fast. Is it relative to the rate of successful deployments? Or to the overall commit rate? I had the notion of executing the majority of tests prior to the commit - in the PR phase. This of course increases the time spent away from master but decreases the probability of failure on master.

  • @MorgurEdits 2 years ago

    Now I want to build my own pipeline.

  • @jerrysundin 1 year ago +1

    1. I think the local dev testing should have its own blue container on the wheel, since it appears crucial.
    2. Why isn't the commit testing executed on a feature branch, with that feature branch merged to the trunk if it passes? The embargo then seems unnecessary to avoid poisoning the team.
    Thoughts, team? :)

  • @RudhinMenon 3 years ago +1

    Thank you for the video; looking forward someday to scenarios such as how to deal with defects which are identified in a much slower branch than the development branch :)

    • @ContinuousDelivery 3 years ago +6

      There are no branches. If the defect is already in production you fix it the same way that Michael does in this video.

    • @RudhinMenon 3 years ago

      @@ContinuousDelivery Thanks for the reply, sir.
      But isn’t working on the master branch risky?
      As changes might take days to stabilize, we can’t hold production till then.

    • @ddanielsandberg 3 years ago +6

      @@RudhinMenon Then the problem is not that we are working on master instead of a branch.
      The root cause is that "it takes days to stabilize".
      1. What would need to change in your way of working for master to always be stable (without branches)?
      2. How would you behave differently if, at any time, anyone can take the latest build from master and deploy it to production?
      3. What if "done" meant "running in production, working, making the company money" instead of "coded on a branch, but not really tested nor integrated"?
      Just some thoughts...

    • @ContinuousDelivery 3 years ago +2

      @@RudhinMenon Changes take days to stabilise if you don't do a good enough job of testing and your Continuous Integration process isn't working well. Otherwise, when those things are working, this is the significantly higher-quality approach.

    • @RudhinMenon 3 years ago

      @@ContinuousDelivery Thanks, but having multiple clients to cater to, each running on a separate version with their own customization, I find it hard to see how it can be managed on a single master branch.

  • @DanielPradoBurgos 2 years ago

    My big question is about code reviews. It all seems pretty good and I'm driving my team to have more or less the same pipeline, but my team thrives a lot on code reviews; they even race each other to find code that can be improved.
    Do they practice peer coding for reviewing code?

    • @michaelszymczak4245 1 year ago

      Hi, we pair-program, so we constantly review each other's ideas as they arise. Sometimes we ask for code reviews from a broader audience if the piece of code is challenging/interesting enough (or I know someone is an expert in the area but I did not pair with that person). But most of the time, pair programming is enough.

    • @shanedemorais7397 1 year ago

      @@michaelszymczak4245 However, a code review done in person with another dev (outside the pair) asking specific questions based on a code review guideline has proven better than, say, a pull-request-style code review system. Having to explain the decision-making behind an implementation can cause someone to reconsider/rethink their code in ways that PRs don't facilitate and pairs don't catch (distance from the code helps). And the resource utilization/context switching is avoided, as a code review is a scheduled event with a timebox, conducted by someone who understands the broader intention behind the activity.

  • @tuV13ja 3 years ago

    Forgive my ignorance please, but we should run all tests locally before we push, right? So what's the difference between the commit stage of the CD pipeline and running all the tests locally?

    • @TARJohnson1979 3 years ago +2

      We should run all tests locally, but mistakes are possible. The commit build just proves it somewhere shared and centralised, and ensures you haven't done something stupid like forgetting to push part of your changes or leaving file paths pointing to your home directory.
      There are also fringe benefits like: the commit build updates a shared build cache, speeding up builds for everyone. But it's mostly about proving the build good.

    • @ContinuousDelivery 3 years ago +3

      I’d add to what Tom said, running the tests locally is not integrating your change with other people’s, at least not definitively so, so the commit adds certainty that, if all the tests pass, your change is releasable.

    • @IvanSeminara 3 years ago +1

      If I can add to this, you should also take into account that it's not a given that local tests will replicate the integration environment exactly; some code may depend on an external service (e.g. an identity provider) that you might not be allowed to run on your local machine (imagine licensing issues). You could have some kind of simulator to run locally, but I wouldn't bet prod on it being 100% reliable.

    • @TARJohnson1979 3 years ago +1

      @@IvanSeminara Whilst that's true of some tests in builds later in your pipeline, you don't want those in your commit build. That should consist only of fast, isolated tests.
      You're not expected to test all those downstream cases before pushing (although it's good manners to run some tests in the immediate vicinity of your changes) - that's what the pipeline is for.
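      One common way to keep that split explicit (a sketch of the general technique, not necessarily TransFICC's setup) is to tag tests and let each pipeline stage filter on the tag:

        import org.junit.jupiter.api.Tag;
        import org.junit.jupiter.api.Test;
        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.junit.jupiter.api.Assertions.assertTrue;

        class PricingTests {

            // Fast, isolated: runs in the commit stage (locally and in CI).
            @Tag("commit")
            @Test
            void roundsPricesToTwoDecimalPlaces() {
                assertEquals(101.25, Math.round(101.2499 * 100) / 100.0);
            }

            // Needs an external service, so it belongs in a later pipeline stage.
            @Tag("integration")
            @Test
            void authenticatesAgainstIdentityProvider() {
                // Would talk to a real external service; placeholder body in this sketch.
                assertTrue(true);
            }
        }

      The commit build would then select only the "commit" tag (for example, via Gradle's useJUnitPlatform { includeTags("commit") }), leaving the "integration" tag for a downstream job.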

    • @IvanSeminara 3 years ago

      @@TARJohnson1979 Thank you for the answer. You're right, I managed to misinterpret Leon's question and missed the "commit" part.

  • @ShadCollins 3 years ago

    This was interesting. Maybe I misunderstood what was happening but it looks like a mistake by any random developer can shut down the entire pipeline. That seems pretty fragile to me. Am I missing something?

    • @ContinuousDelivery 3 years ago +2

      Well, not really "any random developer" - people committing to the pipeline are part of a team. Also there is Continuous Integration discipline (ruclips.net/video/Xl62gQpAl1w/видео.html) which kicks in to fix problems highlighted by a test failure in the pipeline. Finally, this is true of any development process; we just raise the quality bar a bit higher. If "any random developer" commits a change that is later found to introduce a show-stopper bug, the release process is stalled until that bug is fixed. In CD we class any automated test failure as a show-stopper, and prioritise fixing it, or reverting the change that caused it, above everything else. This works well at scale and produces significantly, measurably, higher-quality software.

    • @ShadCollins 3 years ago

      @@ContinuousDelivery Thanks for the further explanation. Really enjoying your videos. Now I just need to read your book that arrived a few days ago.

  • @patharagaddagopinath473 3 years ago

    Hi Dave, this is Gopinath. I am a DevOps enthusiast. Please guide me to become a better DevOps engineer. I am working on the CI part with Jenkins and Git, and I want to become an expert in these; please help me out.

  • @firefighter8083 3 years ago

    It would be very cool if you could make a video about policy management in version control systems. Open-source projects often use pull requests. Do companies like this fintech startup use this kind of policy management too, or does everybody have push permission? How does this affect the CI pipeline?
    In one of your last videos you talked about branches. Does the startup use branches, and are there any differences in CI between the branches?

    • @TARJohnson1979 3 years ago +1

      In this specific case: it's all pair programming and pushing directly to trunk. That triggers the commit build, which then cascades to all the subsequent builds.

  • @DanielPradoBurgos 2 years ago

    That sounded a lot like something that could have been solved by using Hyperledger.

  • @shanedemorais7397 1 year ago

    I worked at the IPE in London (St Katharine's Dock). Performance seemed to be a requirement. However, having a manager sit 4 devs at machines performing trades and telling us this was a performance test suggested to me that someone, somewhere didn't understand what they were doing. Which brings up the question: why, if performance is key, are trading systems implemented using Java and not, say, Golang?

  • @patharagaddagopinath473 3 years ago

    Hi Dave, I am from India; how can I buy this book?

    • @ContinuousDelivery 3 years ago

      You can buy “Continuous Delivery” from Amazon, and lots of other online booksellers. You can buy “CD Pipelines” from Amazon or LeanPub. Links to all of these are in the descriptions for most of my videos.

  • @satheeshan2000 3 years ago

    I am sure employees love working there.

  • @tristanperotti8756 3 years ago +1

    So never any code review?

  • @baconsledge 2 years ago

    Obviously you've not done much embedded development.

    • @ContinuousDelivery 2 years ago +1

      Not my main thing, but I have done quite a bit. I used to write firmware for PCs a long time ago, and have been involved with LOTS of projects that do include embedded systems. If you are suggesting that this approach doesn't work for embedded systems, then you are wrong. Tesla upgraded their Model 3 cars a couple of weeks ago to accept higher charge rates, from 200 to 250 kW; they did it in 3 hours, because they work the way that I describe on this channel. There are lots of embedded examples out there.

  • @disgruntledtoons 1 year ago

    If your deployment isn't automated, you're doing it wrong.

  • @nathanstott1909 3 years ago +2

    Run GoCD. Then you get the dashboard out of the box. Lol

    • @TARJohnson1979 3 years ago +5

      I work at TransFICC, and the hugeness and starkness of the green/red boxes is actually a really important feature here. It turns the information radiator from something you need to read into something you only need to glance at in your peripheral vision on a single monitor twenty feet away.
      One-click embargoing is also pretty important.

  • @prakash7921 2 years ago

    This is manual deployment.

    • @michaelszymczak4245 1 year ago +1

      Of the YouTube video? Yeah, probably. After all, everything is manual; someone has to type the code... On a serious note, this looks like a fully automated pipeline, so what exactly is manual here?