Just deploy it over ftp. If you're working with someone else just shout loudly which files you're working on so they know to stay away from those. That was how I worked in my first job.
My favorte part of Mr. Prime Agen is that he's totally cool with "I was wrong" or "I misunderstood this before." Everyone needs to be okay with being wrong sometimes and it's so nice to see someone like Prime being so casual about learning new things and getting better.
Continuous Integration is *NOT* the automation pipeline you build or the tests you run, or the platform on which those tests get run. Despite the confusion caused by platforms branding themselves as "CI" and the regular misuse of the acronym. Continuous Integration is simply: "everyone merges their changes into the main line very frequently, ideally multiple times a day in small chunks". That's it. The pipeline of tests, and the use of feature flags to disable incomplete work and all of these other things are there to facilitate developers integrating their code "continuously". Continuous Delivery/Deployment are then just additional layers that sit on top, these layers are again facilitated by automated pipelines, checks, tests, etc.
This. And when someone inevitably merges something that breaks the remote/main, because you integrate in small steps (preferably doing TDD and merging every step after refactoring), it's easy to fix by anyone. Problem with some of the video's by CD is that they talk about one specific practice that should ideally be practices in a suite with other practices to really have the benefit he talks about. In this case CI, as he describes it, doesn't really work well without the other technical parts of XP. And then I would suggest doing Trunk based development with short lived feature branches that are merged via (automated) PR's. Also, no this indeed would not work for OSS with external contributers, quite sure CD also mentions that in some other video's.
It always gets me when people say things like "I will just run CI against my branch" or "I will just run my branch through CI" CI and TBD are basically synonyms, and you wouldn't say I'll just run my branch through TBD quickly. The other one is feature branches. Creating long lived branches to keep features separate for weeks at a time is bad, because at some point you will have to merge it, and if there is ever more than 1 of them that will fall apart. If there is not more than 1 of them then why bother doing it? But short lived branches that last an hour and only exist for code review and running tests before merging are fine IMHO, and I wouldn't personally call these feature branches at all. Then you have to say don't use *long lived* feature branches just to differentiate them.
Then Continuous Integration is either: A. Incredibly stupid. or B. Redundant. If people do work that can and should be merged with the main project - then they should do so. If they have work that isn't ready for that yet, then they should wait. How was there ever a situation that occurred which demanded a push for "Continuous Integration"? Did people work in the shadows for 3 months on a separate branch, never merging branches and then having a completely broken / incompatible project? Said in other words: Is the need for outspoken "CI" not just a general indictment of the bad judgement of people working with Version-Control systems?
I think I can address a bit of confusion - When Dave talks about the minimum integration frequency of being 1 day, and you talk about being a rebaser and "Why wait a day to update?" - As I understand it, he means integrating all diverging branches, and not just the one you are on. You can update your branch as often as master is updated, but nobody else who is diverging is going to get your changes until you merge.
You don't need to keep your release branches. As long as they are tagged with a version, you can recover that branch at any time, create a new one, hotfix it, versioning, release and delete the branch. Branching Strategies shouldn't be imposed like Dave is suggestion, but instead the team that is going to manage the repository and releases should find what fits their needs.
umm you do realize that I agreed with Prime, right? "his definitions" were directed at the guy he's reacting to. Every team can define branching for themselves, but mine - and I think Prime's too - is that we should not break master. That's just a different workflow with a different definition of CI that works for us. I simply do not understand the other ways and I do not care actually.
there's a disconnect between how an evangelist for CI/CD talk vs how an average devs talk of "we use a CI-server". So I kinda avoid using the term CI/CD outright when describing what what team does. I just describe it in a simple way: "we don't push directly to master. we use feature branches and on each push, a server runs our test suite. We can't merge a PR to master unless it's ✅. Master should always be releaseable. we TRY to make small branches, and ship often. But we'll make exceptions"
That sounds exactly like what CI is. Short lived smaller feature branches that are tested and then merged into production through a PR. With the CD part a master build and release will trigger automatically on PR's merged into master. The master branch should always be protected and only way to do changes to the master branch is through PR's with reviews and automatic testing.
@@alexandersagen3753 there is absolutely no reason why master should be *protected*. This obsession about protecting master is foolish. Master needs to be releasable, but it's absolutely okay to break master from time to time, as long as we can quickly fix it (it's the principle of the Andon cord from Lean). Establishing convoluted workflows just for the sake of protecting master can lead to all sort of inefficiencies.
@@andrealaforgia isn’t there a reason to protect master simply to avoid accidental pushes? A) Sure you can enforce all devs to have local git settings that prevent it - but that will fail when that mom who was on leave for 9 months comes back to work and everyone forgot she doesn’t have it B) it’s 100% ok to push to master in a solo project. It’s ok in a small team. It’s ok in a large team if everyone is working on the same change and are aware you’re gonna push to master. C) BUT in a large team it’s often the case that you have multiple tasks worked on by multiple sevs and it might turn out A is completed and ready to release before B - but if team B pushed problematic code to master by accident that screws everything up. team B might be aiming to push to a featureBranch before ending the day - but accidentaly push to master. Which screws up the plan team A had for an early release the next day. Especially if the policy is ”master should be release-ready” I thin the default should be that anything merged should have passed the remote test suite ✅ Which reduces the risk I have been involved in urgent bug fix attempts where we certain our changes had no negative effects - we pushed - and then thanked the heavens that we did not not push to master cause the remote tests failed, and we were about to head into another meetkng sl didn’t want to break master. (And please don’t ask ”why didn’t you test locally before pushing?”) PS: of course if the entire dev team is mob programming an urgent fix - it is nice if we’re allowed to push directly to master, but as part of a mid/large size team - that is a rare exception imo DS: I might be misunderstanding something about your Workflow or CI/CD pipleline.
Continuous delivery is a way of working (with a lot of automation) to ensure only good builds makes it to production. It is a falsification process where the "bet" is that the build is a release candidate that will pass the crucible and make it to production - but most builds won't. CD is the codification of the organization's processes and SDLC cycle so that every change is verified and has an audit trail instead of process theatre where everyone pretends the process is followed and done doesn't actually means done. Continuous Deployment is the same as Continuous Delivery but the final human "approval gate" is automated. Continuous Delivery is not "every commit goes straight to production".
This feels like the 'different worlds' problem to me. I've worked at places where CD works great and operates just as you describe, providing a way to ensure that all commits are rigorously tested before hitting production. However I've also seen it turn into a raging tire fire when people fail to actually make the 'rigorous' part a reality :) YMMV and probably does as with most things in the software development world!
@@ChrisPatti Yes, context matters. The issue is when organizations optimizes for "open source, zero-trust workflows" and "we have juniors, must control them" and "we can't have seniors to hold their hands" and end up implementing processes and workflows based on the lowest common denominator. And even worse, they tend to implement it across the entire organization by edict, again missing the different contexts. Working with CI/TBD is not "unprofessional" just because someone made a video on youtube saying "Use git like a professional". But it's at that level we are as a software industry. And it's sad.
@@kasper_573same. So many people think of CD and think of the deployment one. I find continuous delivery much more valuable, knowing the thing that made it to master is as good as we can make it ready for production (nothing is perfect). This means the deployment flow can be made to match the release needs of the organisation.
i think when talking about CI/CD you always have to consider the context: little library, application, oss, live service, devices, ... i assume his context was in the live service area. he also often mentions the DORA metrics and his co-author was a driving force for these. these were mostly focused on live services i think. in the end it has to evolve into whatever works best for your case so there cant ever be a one for all be all
You're missing some context here. David's idea of CI/CD is essentially based on the idea that your local copy of master is continuously updated so that you're never working on old code. Conversely, it means that your work is continously being pushed to master so that no one on your team is working on old code. You can do this how you want, but rebasing frequently is a good start.
I think he understands that. What hes saying is there could be a situation where you identify 2 bugs at once, and one of them has been deployed to production. Ideally you would want to hotfix the first bug, but now you have to fix both before you can deploy to production again.
@@olot100 Usually for that. There are 3 cases: the bug discovered if it exists only on main, fix will apply to hotfix branch of the main branch. If it exists on the develop/feature only there is no hotfix, you fix in develop/feature. And if it exists on both. You apply fix in develop/feature Then cherry-pick to hotfix branch of the main branch. This also apply on release branch where hotfix is mainly cherry-picked from other dev branch to staging of release-branch then merge later. So If only first bug to be fix, hotfix will contain only cherry-picked fix from develop/feature. (Yes usually maintainer of main/release are not the same as develop, and he is responsible for fetch, cherry-pick and merge fix from other team) (On 9:30. Hot fix never pull back to dev, That wrong, you do opposite) Then CI do things afterwards. Usually only done CI of production on tags in release/test, where maintainer feels OK this ready for test/deploy, If devs want to do do CI/CD of their own just setup own test branch for it and push fucking test branch for other to fetch and see what you currently doing for feedback rather than commit to develop to messed up others work before getting merge by maintainer.
Meanwhile, on a mid-sized project: 1. Client needs a hotfix for 1.0 deployed today. 2. You've been working on 1.1 for a while, and will deploy it in two months. 3. Client decided that feature X will create issues for them, and decided to postpone it for 1.2, which will happen in four months. 4. Client requested a proof of concept that may or may not be approved. Most of the clients I worked for preferred that all merges are done through peer-approved code reviews. I don't see how you could handle all these requests by pushing to master daily.
This guy talking about CI reminds me of the business people talking about "lean manufacturing"--it sounds very neat and pretty in theory, but it falls apart when subject to the constraints of reality. Ultimately, optimizing your production processes for philosophical purity does not create the best product, with the fewest hours invested, at the lowest cost.
Have worked in systems engineering, devops, and SRE across about 8 companies from 10k+ employees down to 5 employees and I have to agree. Smells like somebody who attended a CI seminar and is now coming back trying to explain to everyone how they are doing CI wrong. Starts with some suss definition of CI crafted specially to make fit his arguments, while much more broad definitions exist(see Atlassian's definition and also people make this mistake with the definition of CD). Also don't care if the definition came from the person who coined the term. You could go ask Germany about the definition of beer and come back to the USA touring breweries for the rest of your life telling them they aren't making beer correctly.
My interpretation of Dave’s central argument is to avoid all team members building large features/changes to completion in long-lived, isolated feature branches, then trying to merge back to main. Even if everyone is rebasing frequently off main, all feature branches are diverging from each other which can cause hell when everyone wants to merge to main. Feature branches are fine, but they should be short lived and merged to main quickly (ideally within a day). This means larger features/changes need to be broken up, merged to main, and shipped incrementally. If the changes are not ready for users they need to be hidden somehow. This could be through feature flags or simply not including the “entry point” of the feature in the UI (Dave has a separate video on this). Note: Dave argues taking it a step further by substituting code review with pair programming and pushing straight to main. I’ve never done this so I can’t comment, but I think a standard GitHub-style PR/review workflow aligned with what I mentioned above gets you 90% of the way there, particularly when coupled with good team communication and an emphasis on prompt code reviews.
Yup, that's exactly what he advocates. And I think he has a great point. The one downside is that it absolutely requires very solid testing and just as importantly, fast running testing. It's just not feasible if testing takes 1h+. edit: I also find it super funny how comments are saying Dave doesn't understand CI/CD. Wonder if they know he actually came up with the idea in the first place.
What gets suggested by Dave seems to me what I would call "continuously YOLO it to production". (Especially the CD part of CI/CD). But with the help of feature-flags that may be possible? I regularly have code deployed in prod that is "ready when the orther parties want to use it". Typically I think backend should be able to release before (app-)frontend uses it.
This is usually fine but in larger projects/companies features might get cancelled. I have seen it painful to revert tons of "small features" that contributed to the actual feature.
@@MeriaDuckYou are missing 1 very important thing about what he was talking about. You need to separate source code from the builds. So after someone merges to master it gets built and tested. If it fails testing that build does not move forward. If it passes it then gets deployed to the Test environment. Then it goes through more tests, this is where you will manually test if you have it. When it passes testing it goes to stage, then to prod. This is what continuous delivery is. The guy in the video was only talking about continuous integration
Yep, that's his argument. But depending on how the software gets developed, you may not be able to actually work like this. That's especially the case in open source (you have NO guarantee that the half finished feature you just merged gets finished in a later PR by that person).
Yeah, that video got me really confused (heh!), too. For larger projects, I haven't found anything better than this: - You have a dev and a release branch (the release branch may be replaced by tagging releases on the dev branch, which has it's on subtle pros and cons) - You branch off of dev, when you want to make any changes and develop thoese on your new branch (you try to keep the changes in one such feature branch to an absolute minimum, the goal is to merge as soon as possible and have as little opportunity for merge conflicts as possible, and favor repeating the entire process a couple of times over large merges) - You occasionally run tests you care about on your new branch (be it via some automated process like git hooks or not, and be it locally or on origin/newbranch, doesn't matter) - When you (and probably your team) are ready you merge it to dev which triggers a mandatory run of the entire test suite and maybe a canary or staging deployment - If those checks are fine, you merge to your release branch and do a release You try everything to never have to push directly to the release or dev branch, but pushing to dev might occasionally be necessary for a real critical hotfix or complicated checks that you need to do as close to production as possible.
if Farley works on master directly - and has a huge test suite how does he work efficiently? Doesnt he have to wait for the entire test suite to run locally before pushing? Personally I'm often pushing my work to branchA switching to a new quick task, and I get a solid five minutes of work in before the test server has told me branchA is ✅ then I merge it and resume with B would be one hell of a hassle to have to wait to run tests locally before I can push and go do other things
I feel so validated hearing your take on this video. I watched it once months ago and similarly thought it made no sense. And it makes no sense because of how he describes the branches/pushing/testing. He mentions making changes to his fork of master, and then clones of master and doesn't really make clear which one he's talking about in the various references he makes to it. Like it seems like he's talking about pushing to master, but really means like a push from his local copy of the fork branch to his remote fork, then from the remote fork there's automated testing (or PR that triggers it) that has to run before allowing the PR to be merged. But he doesn't mention PRs and he doesn't really describe the testing in his Ci Pipeline except when he mentions running the tests locally before pushing to his fork. Running the tests locally isn't bad per se, but that's not really part of the pipeline because how can you really enforce that when making a push to your fork or even master?
EXACTLY. i was confused. now if you push to your master and it _auto_ merges to master after thorough testing, totally get it, but it _seemed_ like it was local -> local testing -> master
@@ThePrimeTimeagen This video makes more sense if you understand that Farley probably came up with the “pipeline” concept and that he modelled it after CPU pipelines hence those slower acceptance test and performance test stages etc are not run on every commit and the end goal is only as recent a build as possible that could be deployed to prod at will at the touch of a button. Each stage beyond the fast-ish artefact builds aims to falsify the assumption that the artefact is good to go. That’s what he means by CD (another term he came up with). Semantic diffusion is a real SOB :)
He probably omits PRs because he thinks pair programming is a better way to do code reviews (it might not work with open source), but it doesn't make any difference. If you find PRs prevent you from using this methodology, it probably points to an issue with the code review, not everything else. For your last question, it takes a lot of trust in your team; believe they will code using BDD/TDD and run unit tests before pushing. It is far faster to run something locally than remotely regarding unit tests. If your tests fail at integration (your integration tests), your unit tests are insufficient. Your design needs to be simplified, and you need to break it apart or have better interfaces with well-defined ways to speak to other components. The same needs to be considered for unit tests that run far too long. Suppose your unit testing is too simplistic to detect issues between things you might think are "external components" external to the module you worked on. In that case, your code needs better interfaces between the two systems. As for the rest of your questions, MANY resources cover these topics as the result of decades worth of software development. The XP book is an example of this, written decades ago. I tried to consider your question and any other questions you might have in this explanation. If something needs clarification, I will try to explain it my best. This is my understanding from looking at Dave Farley's videos on CD for the past two years and reading his CD book (although I admit I need to re-read it sometime soon.)
Every branch is essentially forking the entire codebase for the project, with all of the negative connotations implied by that statement. In distributed version control systems, this fork is moved from being implicit in centralized version control to being explicit. When two forks exist (for simplicity call them upstream and branch), there are only two ways to avoid having them become permanently incompatible. Either you slow everything down and make it so that nothing moves from the branch to upstream until it is perfect, which results in long lived branches with big patches, or you speed things up by merging every change as soon as it does something useful, which leads to continuous integration. When doing the fast approach, you need a way to show that you have not broken anything with your new small patch. The way this is done is with small fast unit test which act as regression tests against the new code, and you write them before you commit the code for the new patch and commit them at the same time, which is why people using continuous integration end up with a codebase which has extremely high levels of code coverage. What happens next is you run all the tests, and when they pass, it tells you it is safe to commit the change, this can then be rebased, and pushed upstream, which then runs all the new tests against any new changes, and you end up producing a testing candidate which could be deployed, and it becomes the new master. When you want to make the next change, as you have already rebased before pushing upstream, you can trivially rebased again before you start, and make new changes. This makes the cycle very fast, and ensures that everyone stays in sync, and works even at the scale of the Linux kernel, which has new changes upstreamed every 30 seconds. In contrast, the slow version works not by having small changes guarded by tests, but by having nothing moved to upstream until it is both complete and as perfect as can be detected. As it is not guarded by tests, it is not designed with testing in mind, which makes any testing slow and fragile, further discouraging testing, and is why followers of the slow method dislike testing. It also leads to merge hell, as features without tests get delivered with a big code dump all in one go, which may then cause problems for those on other branches which have incompatible changes. You then have to spend a lot of time finding which part of this large patch with no tests broke your branch. This is avoided with the fast approach as all of the changes are small. Even worse, all of the code in all of the long lived braches is invisible to anyone taking upstream and trying to do refactoring to reduce technical debt, adding another source of breaking your branch with the next rebase. Pull requests with peer review add yet another source of delay, as you cannot submit your change upstream until someone else approves your changes, which can take tens to hundreds of minutes depending on the size of your patch. The fast approach replaces manual peer review with comprehensive automated regression testing which is both faster, and more reliable. In return they get to spend a lot less time bug hunting. The unit tests and integration tests in continuous integration get you to a point where you have a release candidate which does all of the functions the programmer understood was wanted. This does not require all of the features to be enabled by default, only that the code is in the main codebase, and this is usually done by replacing the idea of the long lived feature branch with short lived (in the sense of between code merges) branches with code shipped but hidden behind feature flags, which also allows the people on other branches to reuse the code from your branch rather than having to duplicate it in their own branch. Continuous delivery goes one step further, and takes the release candidate output from continuous integration and does all of the non functional tests to demonstrate a lack of regressions for performance, memory usage, etc and then adds on top of this a set of acceptance tests that confirm that what the programmer understood matches what the user wanted. The output from this is a deployable set of code which has already been packaged and deployed to testing, and can thus be deployed to production. Continuous deployment goes one step further and automatically deploys it to your oldest load sharing server, and uses the ideas of chaos engineering and canary deployments to gradually increase the load taken by this server while reducing the load to the next oldest server until either it has moved all of the load from the oldest to the newest, or a new unspotted problem is observed, and the rollout is reversed. Basically though all of this starts with replacing the slow long lived feature branches with short lived branches which causes the continuous integration build to almost always have lots of regression tests always passing, which by definition cannot be done against code hidden away on a long lived feature branch which does not get committed until the entire feature is finished.
CI is about how your team works together to write software in order not to have mega-merges. It is orthogonal to your deployment strategy, which could be direct CD to prod, release of specific version after UAT, canary, etc... You can use tools (e.g. deploy by tag instead of branch) to use any deployment strategy you prefer and the proper implementation techniques (e.g. feature flag) while doing CI (=no mega-merge) in your team.
I've worked/do work on a project where people push directly to master. Since we all work on completely separate things for the most part, it _works_ but it is awful. My 2 cents (which is overvaluing it): I think developers are particularly prone to falling for a trap of getting things done perfectly/most efficiently as possible. It's a mindset where any setback, any inconvenience is a failure. I'm fine with a system where i might have to do an awkward merge conflict resolution every once in a while. Dealing with surprise is just a cost of doing business in life. Fwiw, when i do work on projects (usually pretty small), I like to have a protected master branch and then just branch off that for features/bugfixes etc and merge them in via PR, with CI only running on PR's, and using local unit-tests in between. I'm sure it has a name, but i just call it using git.
I've worked with all kinds of system, pushing to a single branch only, full-fledged Git-Flow with MR(PR in GitHub speak)/Code Reviews, and a kind of "hybrid" version, where you work on feature branches and a single dev branch, marking new releases to be created by a pipeline trigger operating on release-tags for a specific commit in the dev branch. The last approach seems - for now - to be the most sensible approach for me, and I'm curious why it didn't gain more traction (maybe other people didn't think about it lol). All approaches, of course, use automated testing for each commit and also for each merge that needs to be performed.
Pushing directly to the master never truly sounds like a good idea. Even in a team of two, I would never allow either of us to merge into main directly lmao
Branching creates multiple versions of the source code (e.g. "dev" vs "main"), thus creating the possibility for these versions to become desync'd. If you never branch then having desync'd versions is impossible because there is nothing that has to be sync'd. This eliminates tons of wasteful effort spent fixing bugs, git issues, merge conflicts, etc because they just can't happen. For example, just this last week I sat with 6 -7 developers for an hour because one developer squashed their commits on a feature branch and the PR itself broke the syncing between their feature and main branches. Add to this that they couldn't fix the issue on their own because the main branch was locked and you can see how this costs companies real money and productivity. Also PR's become a huge bottleneck when you transition to truly continuous delivery/deployment and deploy to production 10, 20, 40+, etc times a day. Streaming your changes "continuously" into prod is impossible when you use the discontinuous process of a PR.
@@jordanpavlic9745 rebasing is for sure a thing. So I'm not sure I understand how anything was messed up, if following that very obvious principle. That, and of course code reviews. 🫠
The entire concept of "continuous delivery" seems to be something like "discover footguns as quickly as possible by firing them as many times as possible in production systems".
"discover footguns as quickly as possible by firing them as many times as possible **before we hit production** systems". Fixed it for you. Continuous Delivery is not Continuous Deployment.
@@ddanielsandberg did some reading, and it seems that continuous delivery and continuous deployment are described with overlapping terms. Continuous delivery indeed attempts to discover footguns just before production while continuous deployment automates the last step of sending undiscovered footguns to prod. Glad to have found clarity on the topic; this video had left me a bit confused about the terms.
@@k98killernot quite. Continuous integration attempts to find the foot guns before main testing using lots of fast automated unit and integration regression tests. This allows you to get something suitable for detailed long termed testing while at the same time attempting to spot any breakages with the regression tests before you even make the commit, and which results in tested code with no obvious regressions which does what the programmer understood it needed to do. The output of continuous integration is then used by the deployment pipeline to further prove that the code is not fit for purpose by running longer lasting tests which try and discover regression in performance, memory usage, etc. These additional foot guns are not always discovered at this point, but the end result is something which has already been deployed to testing, and which looks as fit as possible for deployment to production. Continuous deployment goes further and rolls out this new release to the load balanced server with the oldest codebase deployed, and gradually takes over the load from the next oldest, either succeeding, or discovering additional foot guns which don't show up until under heavy load, in which cae it is rolled back.
I think another main confusion was "feature branch". CD means a "long" (which for him is more than a day) lived branch; something one interacts with and refers to by name. Prime means wherever I store my changes before they are merged to master (or whatever). I lean toward Prime's definition, and would qualify the CD-version with "longlived-" or "git-flow-". I think CD usually mentions this, but he might have left that out since the video was specifically about git-flow.
I think the big difference between your local master as the development branch vs a development branch on the central server is that the when it's on the central server rather than you pushing directly to master on the central server, the central server can run the test suite and make sure it passes *before* it is pushed to the master on the central server. The other one is that when it's a feature branch on the central server, others can code review and test it before you decide it is ready to push to master. Also, I don't think that long (ish) lived feature branches are the problem.. the problem is when they remain diverged from the master for long periods. As long as the feature branch is being rebased onto the central master at least daily, it's just fine if you sit on that feature branch ( especially if it is subject to daily integration tests and peer review on the central server ) for some time before doing the final merge. In other words, a WIP pull request. As long as it is rebased daily, it can stay as a WIP PR all week and that's just fine.
That only works as long as you have a single feature branch. As soon as there are two long-lived feature branches, it's not fine any more, because those two do not get rebased off each other.
@@willemm9356 They don't need to be rebased off each other. As long as their deviation from the shared master is limited and regularly rebased against it, then each feature branch contains a different set of deviations relative to the master, but within one feature branch, the deviations are relatively limited. As long as the longer lived feature branch gets rebased after shorter lived feature branches are merged to the master, then the deviation in the long lived branch remains relatively low, no matter how long it is around for.
@@phillipsusi1791this is only true as long as the code changes in the new feature does not touch any code used by any other new feature. In it's own long lived feature branch. As the feature gets bigger, lives longer, and requires more modifications to pre-existing code this assumption becomes increasingly invalid, and that is before you consider any of the other fundamental problems with long living feature branches in actively changing code based.
@@grokitall What you are talking about is the amount of code churn, or how much change to existing code is in the branch. That has nothing to do with how long it has been around. No matter how long a branch has been around that churns a lot of code, once it is merged, it is going to cause problems for other branches that have not been merged yet. That's going to be a problem for them when they go to rebase, but so what? They will have to figure it out, just as they would even if that branch was only 1 day old and hardly touched anything else. They will have to resolve the conflicts in the rebase, retest, and by the time they submit their merge, everything will be fine.
It's also called Green Trunk. Everyone works and commit to mainline and, ideally, it should always compile. You can use bots in Slack or whatever to report on build failures and those should be addressed ASAP. With Perforce, you can even set triggers to prevent submitting code that contains files that don't compile or haven't passed review. These don't catch everything like the obvious mistake of forgetting to add a file but it increases the trust in the latest code.
Dave never said you should push into production directly. He says you should merge as often as possible into the master branch in the staging are which triggers the whole CI/CD pipeline, and you deploy the change into production when it passes every stages.
thank you for saying posting another reason I find this guys videos, insufferable and impossible to watch. He is just about the only technical RUclipsr I’ve told RUclips to not recommend videos from ever.
I am actually shocked that the word "trunk based development" was not mentioned a single time by any of the 2 devs with multiple years of experience, in this 30 minute video about trunk based development. In summary: - Branches are not supposed to live longer than a few days. They always branch off from, and merge directly into, master. As soon as they are merged, the branches are deleted. - Use feature flags when working on larger features, so you can continually merge your changes, without giving customers access to an incomplete feature.
Yeah I think that’s the main point here. The entire video can be summed up by a couple of points: - Follow trunk based development + feature flags to ensure mainline can always be deployed - If you have bigger features don’t create a feature branch, rather break it up into smaller commits such that each commit can be merged into master - Use tags for releases instead of release branches
The failure of trunk based development is evinced by how the expectation when getting an update these days is it'll break things. It's a developer-centric approach "I don't want to think about what I'm doing" instead of what it should be, user-centric.
I think you kinda got there at the end. You DO mostly agree with him, but you are using slightly different definitions of “feature branch” and “CI”. What you call a “feature branch”, he calls a “local copy”, and what he calls a “feature branch”, you would call “a long-lived feature branch that hasn’t merged in more than a day”. Whether the local copy is only local or is reflected in the central repo is completely arbitrary, even he would concede that. The tl;dr of what he is actually advocating is: “don’t use any workflow that keeps code from getting merged for longer than a day”. GitFlow and even GitHubFlow imply that features do not get merged until they are completed and since many features can take longer than a day, the risk is you finding out you’ve broken something way too late. To make this work in the real world you need a good automated test suite and some real feature flag kung fu. But the big shops invest in that so that it can work. One area where you diverge slightly is that you prefer to have a branch that mirrors production and he advocates just using a tagged release for that purpose.
There's probably some misunderstanding here, but I feel like you agree on the main premise. It's the policy of staying in synch with single remote branch. That's it. Every push to (main) remote is rebased onto current tip of master branch - not pushed as separate branch and attempted to merge later. It forces frequent integrations and discouraging big changes made in isolation, which are hard to merge without conflicts. Local/team feature branches are fine if they're frequently stacked up in main repo.
Been using gitflow with the nvie/gitflow extension for a few years now on small to large projects. Really enjoy the pattern myself and haven't hit any issues, yet. It's typically, features are branched from develop, merged back after all the PR/QA hoopla is done. A hotfix is branched from master and should be very small (for 911's), tagged and merged back into master and develop after things are appoved. A release branch is a tagged version of develop that gets merged into master after again, things are approved. I'm sure it doesn't work for all projects. Just use what works for you and your team.
I do not have a problem with gitflow except for the requirement to delete old branches....DUMB if you are using Jira and want your branches to continue to go to the right place in stories. But aside from that having a development branch is a fine idea
@@batboy49 Our company keeps track of the merge requests instead of the branches themselves, sometimes stuff is missed because there were some commits made to hotfix stuff, but you at least get a good idea of what changed and how it changed even if it's not a perfect representation. Although we also archive the versioned branches for similar reasons to what you describe.
@@batboy49 I typically point to pull requests in Jira tickets (or alternatively commit diffs). Branches are only really useful for tracking live changes, you can always find changes between commits, regardless of whether or not the branch exists. Also, I don't think the gitflow pattern requires the deletion of branches, are you saying a specific extension enforces this?
Peter - I see release branches as being used predominantly for isolation during a release. That way you can test your code and go through the release cycle while others are committing to the develop branch. I'd then tag the commit that ultimately goes out for release, merge this back into develop, and also into master if you have a master branch. Edit: your last sentence is perfect
@@sb_dunk I just looked it up and I think you are right it is not required to delete branches according to gitflow. The last time I was in a group that enforced gitflow THEY deleted the branches and I came to associate it with gitflow. I also agree with what you said about pointing to the pull request in the ticket. I do not guess I hate gitflow then. What I hated was deleting old branches and having that connectivity to the story broken. BTW...I do think it is useful knowing where that branch ended up at the end of the story. I think there is value in having that branch still around. What if some other change ends up breaking this functionality? If we step through the stories we can easily go right to where the changes were and diff them directly. But to your point no, I saw nothing indicating you had to delete branches in gitflow. I like keeping those book marks around so that if I go looking for them later it is easier to find. Generally we will not need the branch so much.
There are some tests you cannot run every time there is a small change. There's a reason we create release candidates (while develop can be ahead) while release is being thoroughly tested before release. The idea that develop and production can be the same thing is under the assumption that all tests are fast enough that daily changes don't break anything.
I have only worked with CI like workflows (short lived feature branches).and it works flawlessly. A lot easier for a QA to test a single feature change and to find what caused an issue in prod (that will happen anyway from time to time). We release around 20 times a day to production with around 25 PR's merged daily. Build/test takes around 25 minutes (many concurrent test-stages). I hear about a lot of SAAS companies using develop and other branches where changes goes out in batches and I've never understood why.
@@alexandersagen3753 the person you're responding to made a really important point you totally glossed over. Prime makes it as well. If your exhaustive test suite takes 30 hours, how do you suggest running it twenty times a day? Assume for the sake of argument that the reason it's so long is that it has to be, not that the people building it are incompetent.
@@SteveKuznetsov that is a good point and I agree. I'm just overly happy that my work environment has ~20 minutes build and test time and that we can do deploys to production 20-25 times every day. For most companies I'm sure the build/test time is a lot longer than it needs to be. Modularize the project with very clear boundaries and clear contracts. Test each module separately on every border + unit tests. And most importantly. Only test the modules changed. Then the test build doesnt take longer time than the longest module
Here is how I understand what he's saying (please don't argue with me about the statements it's not me making them): You constantly push to main (yes, actual main, the one everyone else pulls from). You don't push if you don't think your changes are correct, which is why you use tdd to assure yourself that they are (if you your tests don't catch your bugs, get better at writing tests). That's what CI is. This doesn't preclude code reviews, or running automated tests before pushing, the main point is the frequency of the changes being integrated into main though so any processes you do here must not be time or labour intensive. Automated tests are also running on main. Of course you don't release if your automated tests are failing. If you don't do CD, you don't necessarily release all changes that pass automated tests on main, but if you are you do. Even if you're not doing CD, you still release very frequently. The idea is that main should always be in a releasable state, and if it ever is not it becomes top priority to bring it back to a releasable state. Any change that isn't immediately ready for production is hidden by feature flags and similar mechanisms but still merged and integrated with the main branch. When the feature is ready to release you just turn on the feature flag, and remove it ASAP if there doesn't seem to be a need to turn it back off again. So that's why you don't need a separate production branch to make hotfixes too. If there's a bug you fix the bug on main and release, which you can do since like we just said main is always ready to release. The advantages are that any QA that happens always happens on a version of the code that's very close to production, and you don't get huge merge conflicts because you don't make big changes accumulated over days or weeks. Things like code reviews happen very close to the time you're writing the code, not after some days when you're about to complete the feature. --- I personally have never worked this way and while I understand the theoretical appeal I have a really hard time imagining how this would work out with pretty everyday tasks like schema migrations (OK that's not something you do literally every day but you can't really do those with feature flags exactly... You could _try_ to have multiple parallel versions of the same table based on your combination of features I guess, but that sounds awful), big reactors (yeah I'm sorry you can't always isolate these into neat little packages unless your code was already so well written it's hard to imagine why you need to refactor it).
Regarding your CI confusion. CI is short for "CI pipeline", but what it really means is a "build pipeline". CI on the other hand, is a concept that you check in your changes regularly (at least daily) to the master branch (no long lived branches).
Having used git flow, it takes a lot of maintenance unless you have a lot of automation. Our feature branches(long running features) kept getting out of sync with dev and main so merge conflicts happened all the time.
That's simply a sign that you're making too broad changes at once. I've added half-finished features to production, I just ensured it wasn't breaking stuff. That way there were far fewer conflicts to resolve and the process was far smoother. Frequent merges is a good thing.
@@CottidaeSEA we ended up partially solving it by pull main in to dev all the time and dev into the feature branches all the time. The merging into production is something you are not meant to do in git flow(at least that was according to the tech lead that took a 3 day training on it). These issues were not avoidable when two separate features touched the same file or worse one feature refactored code that was being changed by a different feature...
@@joejazdzewski Ouch, that statement within the parentheses have a lot of feelings behind them. I can totally relate. Yeah, there will always be some friction when merging, especially if the interface is changed somehow. I'm not at all against changing/improving the interface, but I really think changing it in a feature branch is a bad idea. Refactoring on that scale should be discussed internally and done in a separate branch to ensure those changes are applied as soon as possible.
This appears to me that is a problem of your project and team. In my work we used git flow on a daily basis for months and haven't much of these problems, they only happens when someone starts making changes and not updating his branch Ideally, you should update your feature branch if it lives long enought with develop, and then at the end the conflicts would be inexisting or minimal
Yeah feature branches shouldn't last that long. We run with feature branches matching stories, so should only be a few days to fit in a sprint. If the entire feature will not be ready then it's feature toggled out so that we can enable bits in test envs but not have it enabled in production.
So, he wants immediate feedback from his fellow devs? Well, I sure hope he's a sooo good, that EVERY SINGLE COMMIT he makes is flawless and doesn't break his fellows, whom otherwise have to wait on you fixing your mess (wait for the "feedback" you get after wasting the other team's time a few times). You NEED an isolated working branch to test BEFORE pushing in everyone else's origin.
The whole point is you work so your software is always in a releasable state. If you push shit, that's on you. You heard him: he's committing after each new test he's written and made passing. How are you fucking anything up when you are working that incrementally. The pipeline ensures all existing requirements are still met. Getting fails? Push fixing change. You are like 10 min away from the failure you introduced
@@jangohemmes352 Then again, that's just a "feature branch" but in your local git staging area instead. Why not do exactly what you said in a feature branch? And merge your changes when the tests that you wrote on that feature branch pass. What if your feature is not atomic, and is not easily divided into multiple "releasable states"?
@@ahmedalsaey454100% agree. people don't realize this SVN single version approach leads to a local copies all over the place because people aren't ready to upload
You completely missed the point about what CI is. CI is not a build server/tests/linters/etc. The term appeared long before Jenkins was there. It's about integrating your changes frequently to one branch, so you eliminate the risk of a merge hell. As simple as that. CI has nothing to do with your releasing/deployment strategy. Please, read the definition on Wikipedia.
I think GitFlow is a bit like microservices. It makes sense when Netflix does it, but hardly anyone else. Enterprise SaaS, which I'm sure is most software that gets written out there, could really benefit from a radical simplification of both process and architecture. I routinely see even internal tools with a few dozen of users being built like it's user facing Netflix services.
Here's my hot take on Git Flow: for legacy systems that are way too vast to realistically implement testing on and have management approve such an enormous endeavor, Git Flow the best system there is. On the other hand, if you're starting a new project from scratch in 2023, of course you'll be much better off adopting a single-trunk strategy with TDD. EDIT: the subtitles are in portugese.
I'm a bit impressed by how Farley lays out his argument. cause after Primagen asks a question Farley's next segment is often a direct response to that question He must've had this talk quite often and structured it in a way were he gives enough info at every step to adress common questions. but still room for improvement - like how it's unclear until 20:00 if he works on master directly
There are different tools for different organization types and this is one of them. Knowing many tools will help you to pick the right one or create your own processes. I personally don't prefer git flow but I usually work on small teams and prefer faster feedback.
I think prime was using CI and CD somewhat interchangeably , what he showed at the end and agreed on was actually CI and what he argued against was CD, CI doesnt mean every commit goes to production, but you can have CD and every merge to main can probably have CD flow that pushes into production, CD is hard and almost impossible for most, CI on the other hand is possible for most
Yes, and Continuous Delivery doesn't even mean every change you make goes directly to production, unchecked. That's the difference between Continuous Delivery and Continuous Deployment. In Continuous Deployment, every change goes to production. In Continuous Delivery, every change is built and ready for release but it's a product decision when you should release so you may have a manual step here.
What you described in the end is totally correct. Yes, your vision (my/dev -> check -> push tp central/dev -> fetch and rebase onto -> continue work) is the same as his with just slightly different explanation.
Used CI for many years at many companies. The whole fixing old bug thing you mentioned is generally fixed by disabling the feature flag. It's totally possible in real life, it's just hard for people who are not used to it to understand. You can also check something in disabled under a feature flag. Google uses CI for everything. Feature branches cause mega merges and a host of issues because they are getting less testing. Testing happens just before testing to master. Feature branches is when you have multiple groups working in groups on feature and then releasing them to master monthly or whatever. Large amount of code hitting the mainline at the same time causing integration issues. I think you might be confused by what is ment by feature branches when we talk about CI. Feature branches are either long running branches to build a large feature or a branch with a lot of devs. Either way these branches end up being more complex then frequent incremental commits to mainline. Also feature branches often will have branches off the feature branch. You can have main and production branches in CI however production and mainline are only two branches to maintain, essentially you split the focus testing between two branches. Note testing is a combination of automated, devs pulling down the code and QA. With non-CI you can have hundreds of feature branches (again these are not short lived dev branches). These hundreds of branches it becomes extremely costly and expensive to have the same quality of testing that mainline gets because only a subset of developers are pulling from that branch. So in summary. Feature branches (using the CI term of long running or shared branches) cause lower quality code or slowdown code delivery. Also worth reading "how google tests software" if you want to find out more.
How do you fix a bug by disabling a feature flag? Monke pushed code to master trying to refactor some code. Weird glitch appears. Glitch needs bugfix. What flag does monke switch off so bug goes away?
@tedchirvasiu changes are the leading cause of bugs. So disabling the feature flag disables the feature and the bug together until it can be fixed. If you introduce a feature and suddenly the app starts crashing. You can disable, diagnose the issue and re-enable. Also note that often the feature will be turned back on before it hits live, although sometimes features need to be disabled in live as well depending on what is worse. Often features are not even consumer facing but are needed for the long term. Of course before something is released it should go through testing, automated test etc... however we all know bugs get through that. Having more eyes on it at dev time rather than just the few in the branch increases chances of catching issues early. For more read the book "How Google Tests Software" which gives a good summary.
@@TheBestNameEverMade Changes to me means modifying something that already exists and additions would be new stuff. For every change do you keep both the old code and the new code and switch between them whenever you do refactoring for instance? If you take down the entire thing until you fix the bug AND verify the rest of the commits in the mainline up until that bugfix, it seems way worse for the user. Sounds like you either have to delay the bugfix or you rush more unverified code to prod, leading to more possible issues. I'm quite curious how Google tests their software, because it has plenty of bugs. Right as I'm typing this message I'm experiencing a bug where deleting a new line removes the entire comment text. It has been happening for weeks. Hitting the back button on RUclips sometimes moves you 2 videos back. On the mobile app sometimes the video starts but without sound. I have to go back and open the video again (and then it works). I tried removing a credit card from my billing but it said there was no credit card whenever I pressed the remove button. Had to contact support and they did it for me. For years I have been unable to use the sync feature in Chrome because as soon as I enabled it and it loaded my profile, it would insta-crash my browser every time. After some years I just gave up trying to check if they fixed the issue, I just don't have sync. And don't even get me started on their Cloud platform....
@tedchirvasiu the first thing in large software that needs to be accepted is there will be bugs unless you are working on a NASA high reliability project. Also we should expect humans to make mistakes. Those takes many more years to develop and produces less functionality. Each line of code has to have automated tests, can't use many language features and requires multiple engineers spending days reviewing every aspect of the code. A company would be out competeted by someone else... you can't let perfect be the enemy of good. RUclips works well enough to be the most popular video platform. Given there will be bugs how do you reduce the amount? You allocate more resouces to them in terms of automation and number of people looking at the issue. Having groups of people in separate branches reduces the amount of testing a feature gets. The uber feature ends up going into the codebase all in one go and if there is a bug the entire feature to have to be backed out or toggled off rather than just the part that broke. The goal of the game is mostly to keep the dev branch stable. Having the ability to do it in live is nice to have. Also having the ability for users to toggle experiment modes such as google labs is also useful for getting more eyes on new features. If someone commits something and it breaks, if there is a toggle it can be turned off within seconds to minutes (depending on if the unit test was able to disable the feature or it had to be done by humans) then a fix can be made in hours or days. Blackouts are another option although as new code comes in all the time they can be more complex. Also if the is a huge feature checkin (which we are trying to avoid by not having group feature branches) then that's a very large backout which depending on how long the feature has existed in dev it can take a very long time to disable, hours or days or months. Also there is the problem of checkin queuing with feature branches. Google with the number of feature branches they had, they would have very long checkin queues because a team would try to checkin and some other massive feature would go in before them requiring a rewrites of their code. If the feature had been incremental then the amount of changes would have been smaller to make it adapt to changes. I mention this because it's a core part of feature branches that slow down features to users and reduces the amount of testing code gets. With feature branches, If you need to get your feature in and the day before the api you use changes you have to fix it fast and get it in so the api doesn't change again however fixing it fast could bring a bunch of new issues. If you wait and test you might miss your window before new api changes comes in and the feature keeps getting more complex. Feature branches are one of those things that sound nice on paper but don't work well reality. In most cases, they cause slow down in development, longer time to release and more bugs.
@tedchirvasiu also yes you might take down the entire feature but it should be a very small change not the large kinds of features that come from feature branches. It could be as small as a few lines of code. Maybe it was an optimization or a new bit of helpful text. Sure the dev user (and wrost case end user) will appreciate those things but likely they won't be noticed. In anycase 99% would be fixed before going into the live branch, it would be disabled to allow other devs to keep working. If it hits live then it's about making a call about what is worse, the feature or the bug it caused. If you discovered that half your users were crashing then disable it for a few hours and let your users actually use the product. That's an extreme example of couse... it should have been caught in dev and in that case the users would just have to wait for the next release in a few days for the feature. There is also sample testing with feature flags where you only expose the feature to 1% of your users and see how they respond before rolling it out to more although that depends on the feature. Also the feature flag can be used by QA or automation to figure out who caused the bug since it's easy to toggle rather than performing a backout and rebuilding code. QA can go through several hundred toggles in a few hours but they could probably only do a small number of blackouts to find the route cause of an issue. Anyway I hope that was helpful.
Re. 11:32, the "mega merge" is actually caused by the diff between a feature branch and the merge target. In other words, it is caused by the drift of 2 branches over a significant period of time. So Continuous Integration is not isn't going to cause this, since merging more frequently will make the individual merges much smaller.
He means every commit of every dev goes to master without actually having, necessarily, a completed feature that is merged. Each commit is meant to be whole onto itself and, in theory, the dev ensures this by running a basic test (not the full suite) on their local copy (which reminder is master). They then push to remote (FCFS with multiple devs) and the CI takes over ensuring the sanctity of the master branch per commit and its release-readiness. Tags denote versions and for hotfix cases the idea is that this approach makes sure the fixes don’t introduce a scenario where the RC code acts bizarrely as a result of a patch/fix to prod. In the case of a broken master, he advocates for a swarm fixing on the off-chance it occurs because he also advocates pairing the approach with TDD first strategy. All of that said, there are pieces of this I like but I don’t think I’m sold on the whole model. I do like the idea of a more continuous CI but personally would like extra gates at CD for A/B, Canary, etc rollouts to name just a couple gripes.
@@retagainez True, and Dave Farley for sure is in the camp that there is no CD without CI but in practice I’ve found CI is implemented without CD. So the conversation around CI is usually a pre-CD conversation. I tend to shorthand CD-related CI as just CD for simplicity.
I don't get it, how does his version work with PRs? Is the assumption that everything is correct and nobody will have anything to say about it, or does he create local feature branches that he works on while the PRs are pending?
At 13:30 you missed that he makes the difference between CI and CD. Dave Harley says that CI builds into a prod-like environment. CD is something else. Don't shoot the messenger
I agree. Prime definitely missed something. I would revisit his distinction between the two: CI is about "making sure your code is always working," and CD is about "making sure your code is always releasable." CD includes CI.
So if I merge a feature into master, all tests pass, but in the end there's still a serious bug discovered by QA team, I'm blocking my whole team form deploying potentially for hours or days? Sounds like madness to me. Or is it just a constant rollback & cherrypick sh*tshow? And is QA even possible this way ... afterall sh*t already happened since I merged into master? I must be misunderstanding the guy ... otherwise this sounds like a total gamble "lets hope the tests cover everything" ... which they never do
The main issue is that a bunch of consultants have sold these methodologies as a one size fits all approaches. Teams are different and have diffeeent requirements. There's a video from the guy who wrote the manifesto for agile Software development. He's bashing agile in it's current Form because it was never intended that way. Very insightful stuff.
agile was always corporate propaganda… i mean they even made a total BS origin story about about the smarted men in the room going on a ski trip. that guy can suck it because he was stupid or in on it
Totally agree with you. I don't see his argument that a feature branch won't match master - that is what rebase is for! It does sound like he is pushing to the main remote master - either he works alone or really likes pissing off his colleagues!
If everyone is working on a bunch of feature branches and only occasionally merges back to master - how will a rebase off master help keep all these feature branches up to date with each other? They won't, you'd be rebasing nothing, because all the changes are isolated - which flies in the face of CI. And he does not work alone. Read up on LMAX and what he and the team did there, without branches or PRs.
@@ddanielsandberg they need to be short lived feature branches - merged back to master after the feature is complete and code review and unit tests have passed. For us, when a new feature is complete we do a release.
@@ddanielsandberg working directly on trunk sounds like it comes from the time when merging was much harder - eg SVN or CVS. Git makes this way easier.
@prime Google and Facebook both use trunk based development which as this guy says. You submit everything to master/head/trunk. This works remarkably well. You don't need to do mega merges and you don't need to do feature branches. Instead, you use feature flags to enable and disable certain code paths and manage releases using tags. You don't literally push to master, but if you pass all tests and code review, then you can merge your PR into master.
32:32 one issue with pushing direct to remote shared branch is you might break everyone tracking it even if you ran tests locally because you might've forgotten to add some new file to the commit before pushing, tests worked locally but breaks remotely (or worse yet the missing files don't cause tests to break and might go unnoticed until you have missing stuff in production or something .. not an every day occurrence but over the years has happened more often than I'd care to admit) 33:20 you make a PR from what? local repo needs to be pushed somewhere where the upstream can pull from, if it's the same branch locally and remotely there's no PRs
"I don't know what git flow is" ... "but I have a develop branch" Pretty much my problem with the argument of the guy you reacted to. git-flow is so core to most git workflows that people don't even realize they're using it... I find your reaction above very common since I'm an old fart that remembers when NCIE came out with it... and we all have CI/CD processes based around the develop branch and master branch going to different environments. He says CI/CD doesn't work well, but here we've all worked in several shops where it works (almost) flawlessly for us. I actually like that guy's stuff in general, but think he was WAY off base about git-flow.
I think you mention a very important limiting idea: If you don't think there is a way of making big changes with only small correct steps in between, then yeah you cannot beleive that having only one correct version of the code is all you need. The most fun I've had coding has been in teams with a more agile mentality, with really quick pased merging, or when sharing one unique branch (only possible in small teams). Also you don't need huge amounts of processes to verify the correctness of your changes if your applications are properly modularized (which I know, they are usually not).
@@archmad Yeah that was a poor choice of words, I was using the wrong meaning of it lol. I meant you don't need a lot of machinery or processes or github actiones et c.
30:50 To him, breaking master doesn't matter, is how I understood it. His master is just a develop branch, and instead of having a master he probably just tags releases. So tests just run on master.
Sending every commit straight to production is technically called "continuous deployment", not "continuous delivery". Continuous delivery is more like just continuously (or as near to continuously as practicable) staying *ready* to send all your work to production.
the timing, i was also researching different ways git is used yesterday. i currently just have a main and a dev branch. but have been thinking that feature branches might be a better approach after i go into production.
@@ThePrimeTimeagen Duuuuude! Great work with your reaction videos... very illuminating stuff. Random question.... how are you able to push out so many videos per week? Does this mean you're going to take it slow on your main channel ThePrimeage?
I think the main thing to keep in mind is that the best workflow heavily depends on the number of developers involved. All of these explanations are fine for 100+ developers on a single project, but if you're like me slaving away alone on a project or with maybe one or two extra devs, it's a completely different story.
Dave Farley has some good ideas, and I have adopted more continuous style of development thanks of his videos. But committing directly to master in a real project only to find out 30 minutes later that you broke the CI pipeline just doesn't sound like fun.
In that case it's the CI pipeline itself that's something wrong with. If you committed something faulty, it should simply be rejected by the CI pipeline and master staying unchanged.
@@Spiderboydk But that's not what Dave Farley is advocating for. At least he wasn't a year back when I was watching his videos, maybe he changed his mind since then. Also, I am *really* curious WHERE exactly you push your change to. To master? You yourself said that the change doesn't get to master until CI approves it, so you cannot push to master. To another branch then? But then how is that different from feature branches? Or my favorite - to another repo - which is a worse version of a feature branch, because it's still a different branch, but in a completely different repository.
@@kajacx I have been pondering over your answer for 5 minutes now, because I had no idea what you are talking about. But I think I found the misunderstanding now. When you said "broke the CI pipeline" in the original message, I thought you meant the series of tests that checks your commit actually crashed and left you nonethewiser. I think what you mean by "broke the CI pipeline" is you have some code with a problem that got though the tests. If this is the case, completely disregard everything I said. :-)
@@Spiderboydk I think I mean something else still. So let's summarize, you commit your change to master and then push, then ... If the CI pipeline actually fails (it cannot even run the test for some reason), then the CI pipeline itself is broken and needs fixing. We agree there. If the change contains a bug, but CI passes, that is a problem, but that's not what I meant. In this case, you should add a test so that next time this bug is re-introduced, the CI catches it correctly. What I meant was: you code contains a bug, and the CI pipeline (correctly) fails. Sorry for saying that this "breaks the pipeline", it's just a normal test fail. But this is already a problem. You said that in this third case, your change should be rejected by the pipeline and master saying unchanged. So, where do you push the change to? It can't be to master.
@@kajacx Okay, sounds like we are in agreement then. REDACTED --> And your question: you don't push it anywhere. The code is faulty because tests fail. Fix the error and try pushing again. Nevermind, remote testing is of course different from local testing. I guess my head is not with me today. Nevermind. :-)
What seems like the cause for culture shock to you is the idea of keeping the master releasable. If there's a problem with master, you roll back the latest change, which, by definition, is what broke the system. You keep it releasable. Constantly. It's never broken for more than a single commit, that never contains more than 1 day worth of work.
Yes, and it is so sad to see some comments in the live and here in the youtube stating that he is just a course seller or something like that... Some people cannot grasp a concept and just jump to attack without force himself to learn first.
as a former GoDaddy dev, working in consulting for years and working for government entities right now, I disagree. CD as advertised here, breaks production all the time.. everywhere. it's only a wet dream for project managers.
Exactly. Imagine disagreeing about the definition of CI and CD with the guy who literally coauthered the book on it. Running tests automatically on some remote machine every time you commit and push is not CI, you're not integrating anything!
Dave Farley has a lot of great ideas, but with this pushing straight to master and testing later, he jumps the shark. Trunk based development do work in small teama, where we trust each otger. However, the reality is that most programmers are idiots or don’t care about quality.
@@mhandle109 this.. and besides, there is always a dev pulling in megabytes of dependencies only to have a leftpad or similar. some sane checking has to be done before delivery that goes beyond simple cd.
14:30 About that problem where you may want to test a new version on a small part of the customers, I think the Idea in strict CD would be to have a setting that can turn the changes off and on and not have those changes on different branches. At first I thought, why to give up all these advantages Git brings with multiple branches that can exist at the same time, but I have seen design patterns that can very cleanly hide new features or bugfixes behind user options (which are not necessarily exposed to the normal user). You definitely have to get used to it and you have to clean up those structures after you are comfortable to declare the old version as deprecated (or maybe you want to keep it for backwards compatibility), but the advantage is really huge, because you test your code against all changes done by everyone working on every Bugfix or Feature at the same time.
I think discussion points out how tricky git branching strategies are and that it's a complex problem in the real-world that in large code bases there is often not a 'perfect' solution.
What you said plus issue of terminology. CI, "feature", "master" and many other terms are too abstract and in my experience very subjective, everyone develops their own understanding of that these mean and then miscommunications happen
CI (Continious Integration) is just an ideology with having a branch matching production (often "master") and create and merge small short lived feature branches into master. The standard practice is to run the full test suite on every feature branch before it can be merged. CD (Continous Delivery) is where any changest to the master branch automatically are tested and deployed to production. So for a full CI/CD workflow: 1. Create a feature branch 2. Add commits to your branch 3. Create a pr and get reviews + qa testing on your small feature branch 4. Your PR is automatically tested by a CI pipeline 5. You merge your green approved PR into master 6. Changes to master trigger a full test and build pipeline followed by an automatic release to production if all is still green.
I think you were assuming that a merge to master always means a release to production. His whole thing was that you can be merging to master without cutting a release, and then cut releases occasionally from that. Instead of doing develop and then occasionally merging to master and then cutting a release.
His whole thing falls apart if you just have a smart CI system... like even Jenkins can look at a whole repo and identify new branches and run your CI and testing on feature branches so you get that immediate feedback on every commit in your feature branch. Idk, there's a fine line between old man knowledge and experience and old man "I think I'm right when I'm clearly wrong" stubbornness
i am all about good tests, proper feedback, but this idea to merge into master, then run, then stop everything and give a dev X amount of minutes to fix it is crazy to me
If you have two developers working side by side on different feature branches for several days or weeks, neither of them merging, does that Jenkins setup check that the code each one is writing is compatible with what the other one is writing?
Guy is pining for the good old time of SVN and pushing to trunk - it was such a nightmare. You just rebase your feature branch in morning, and always stay current with develop/staging.
If multiple people work on the same feature I think it is reasonable to have a feature branch on a shared server. Or if there are features that you would like to delegate to some group of people. Am I wrong? (I have no practice experience in it)
There is a massive misunderstanding between CI/CD tools and what CI means. When you gate your feature branches with automated testing and then integrate longer than a daily period, you are not continuous in the same sense as what CI intended to address. I like calling that flow: Automated Integration. You do integrate, often first rebasing your branch and checking if your changes still work before merging it to the "main" state. That flow works well with any codebase because your VCS separates the changes and lets you have multiple "live" states across any feature branches. Unfortunately, David here only talks about how CI and GitFlow look from the VCS perspective. CI introduces requirements for your architecture and code organization to address the real world. You cannot apply CI to any codebase like how you can use AI. Integrating continuously means following a mindset where your VCS DO NOT separate changes but your code. It heavily requires inheritance and dependency injection with feature flags. Essentially, you do not change the production code but make extensions where you overwrite the behavior in separate, feature-gated areas. The main benefit to integrating that way is that you can test and gather feedback from multiple work-in-progress features without introducing merge trains. Your feature code would have an effect early within a system, and everyone could adapt quickly to minor differences instead of rebasing and raising significant conflicts that need to be resolved.
@@TheNewton yeah exactly: git is by far the most popular but for example I wouldn't have much trouble approximating most git workflows in pijul (on the other hand I can think of a bunch of pijul workflows that would be hell to try and approximate with git)
Git was created because existing source control couldn't handle Linux with it's hundreds to thousands of contributors. It was always intended for complex flows, even if the design was therefore very simple (in principle)
not sure I understand him saying that you don't get feedback with feature branches. We create a build for each branch and have fast forward merging strategy with pull requests to master so you are forced to have a copy of what master will look like before you merge...
The point is that when you have 30 developers, each working on their own branches for days on end, what does it help rebasing from master? You still won't know if anyone else's changes conflicts with all other changes until it's merged into master.
The single branch approach assumes you can feature flag everything. If you can - then you can use just one branch. But feature toggling can get complicated, you have both old and new mixed together. It's an additional burden you don't have to bear if you just use two branches. You can still have small commits and CI in the dev branch.
As long as you have tags on your releases you don't need a separate master/develop branch. You can always start out from any release tag and create a new hotfix/release branch if needed.
You didnt get it. The Problem with a Feature branch orientiert workflow ist, that no matter how offen you rebase your feature branch on the remote Master, you will never work on the recent changes of Team members, because their work is also located in a feature branch. Neither will they work based on your changes. So when both simultaneously developed feature Branches get merged the Chance of something going wrong is big. Branches in general are not a bad thing but using this worklfow statistically leads to less efficiency and quality. Furthermore ci is no Pipeline or test suit, that is a separate thing. Neither does ci imply that your head will always be deployed. In open source it would for example mean, that you merge a pull request only, if it is based on the current Master. I personally use ci mostly but i also work with branches. Stable versions are handled with Tags in my Software (1.0.0, 2.4.5 etc).
To deploy to production, the release to production, is a business decision, is not for devs to decide, unless we are the business man, also that decision should not be handed over to automation, it should happen deliberatively, and scheduled (maybe) so NO... you don't push to master and end up with a bug in prod, you push to a UAT env, or whatever, ready to be released, business should always be able to release the changes they see to prod, but maybe they want to test more, to just wait for the right time. you see? so just do a tag with the christmas logo and let business release that, or whatever other point in prod, because master isn't supposed to be broken.
I am watching this and reacting, the part I hate about Gitflow is DELETING branches. I do not see the point that is what the Gods made grep (or rg) for
The CD guy does not use PRs at all. So yeah, it's a pretty weird way of working. If I recall correctly, he uses Pair Programming for peer review instead of Pull Requests.
We're basically running three active branches at all times (Windows desktop app with three annual releases): - production: current released version that might still get hotfixes - pilot: next release being rolled out slowly, hotfixes and feature adjustments based on feedback - development: new feature development You start on where plan to merge and go uphill from there.
People selling CI/CD have muddied the waters so bad here and so embedded themselves that when we use CI as it originally was coined you just get confused. You need to just go and read up on trunk based development instead of watching a video that is later in a series.
ci came from the realisation that the original paper from the 70s saying that the waterfall development model, while common was fundamentaĺly broken, and agile realised that to fix it, you had to move things that appear late in the process to an earlier point, hence the meme about shift left. the first big change was to impliment continuouse backups, now refered to as version control. another big change was to move tests earlier, and ci takes this to the extreme by making them the first thing you do after a commit. these two things together mean that your fast unit tests find bugs very quickly, and the version control lets you figure out where you broke it. this promotes the use of small changes to minimise the differences in patches, and results in your builds being green most of the time. long lived feature branches subvert this process, especially when you have multiple of them, and they go a long time between merges to the mainline (which you say you rebase from). specifically, you create a pattern of megamerges, which get bigger the longer the delay. also, when you rebase, you are only merging the completed features into your branch, while leaving all the stuff in the other megamerges in their own branch. this means when you finally do your megamerge, while you probably don't break mainline, you have the potential to seriously break any and all other branches when they rebase, causing each of them to have to dive into your megamerge to find out what broke them. as a matter of practice it has been observed time and again that to avoid this you cannot delay merging all branches for much longer than a day, as it gives the other braches time to break something else resulting in the continual red build problem.
Sometimes in one sprint, i need to work on two different features which are related to two different release dates. It’s not possible to handle this situation without having separate release branches and separate CI pipelines for each release.
Yes it is. Use a feature flag (or keystone interface) you're building a feature that may not be wanted in the next deployment. It can be merged in to the main branch but not available to users.
So how do you handle code reviews when there is no PR possible because everyone is working on the master? In my opinion these enforced merge conditions are one of the most important benefits of using seperate (short lived) branches. I also feel that it becomes much more transparent what is being worked on. Most importantly I think the way you apply CI depends on the characteristics of the project and the team.
PRs kinda suck for code reviews. People just look at the diffs and complain about syntax or nothing at all. I've done many different kinds of code reviews before. You can review code that isn't even committed. You can review code in a branch. You can review code over someone's shoulder or on a whiteboard. You can print code out and discuss it in a meeting. It depends on what is being reviewed, the team and the purpose of the review.
You don't do traditional code reviews. Instead you do continuous code review i.e. pair programming. There isn't a need for a PR when the work is done because it was reviewed the moment each line of code was produced. This also means the second the code is written is the second it's ready to be pushed to master/main (so long as what was written works and passes your tests). Also if you are committing this often then main is going to be a near perfect representation of the work being done at any single point in time.
I'm late to the party here but CI is literally having a shared branch that everyone checks into instead of each developer having their own branches. Thus, integrating continuously. The automated build checking stuff most people think of as CI is a side-effect of CI. That said, there's nothing about the Gitflow development branch that stops people from doing CI on the whole.
@@ArnasKazlauskas-nu8vb it depends on how you define continuous actually. The development branch and production branch can actually both have CI with different teams. Sure some people will claim that's not CI but continuous the English word does not mean what those people insist it means. Do what you like though; CI has terrible official definitions and some would argue shouldn't ever be used in mission critical software.
@@MichaelBabcock Continuous integration for me means two things: trunk-based development + automated builds. Trunk-based development means that there is only 1 version of the code that we care about and the derivation from it is short-lived aka short-lived feature branches. Gitflow is the opposite of trunk-based development as you have multiple long-lived branches and release branches are also long-lived. The code is not continuously integrated as you need to do to release branches. Using Gitflow it's hard to understand what ref of code the machine runs as there are so many versions of the possible codebase. Thus, Gitflow also directly blocks continuous delivery. Gitflow also creates so many builds which slows down deployment process a lot. I'd say Gitflow works for public libraries or software that cannot be deployed continously, e.g. mobile application. However, Gitflow sucks for backend or web application.
Two things: 1. There is nothing continuous in a discrete world. At best it is continual. 2. Every developer will always have a branch on their local computer, even if it is called the same as on the github server, you can merge in any direction. GitHub is a peer to peer system. Stop saying that you should only have one branch.
During a release there will generally be multiple versions in production at once, so you can't have a single branch that represents what's in production. And if you want something in git to tell you what version(s) are or have been in production I think a series of tags is more correct than a branch. The tag can be set as part of the deployment process.
Maybe the video makes sense if you have context from his other videos. The real difference between CI and GitHub Flow is that you're not waiting for approval or remotely run automated tests to integrate your changes. Changes are integrated each time you make an atomic change. As the developer of the change, you are responsible for ensuring that after the change is made, the software is still working. Add on the constraint on at least one integration per day and you're ensuring a level of atomicity that provides a small incremental change that is easy to reason about. If an integration fails, you know which change caused it. The other practice that synergises wit CI/TBD is pair or mob programming which he is a big proponent of. This way, your code is being peer reviewed immediately and you're reducing the feedback loop. The concept needs context and discipline, hence the fact that he brought up the practice of engineering. You can't really consider yourself a software engineer unless you adopt the engineering practices of other engineers (of which he goes into detail on in other videos). Everyone calling the ideas in this video crazy is completely missing the point, and I hope I never have to work with you.
Haven't watched this fully yet, but I can say that I used to use Git Flow while I was more of a front-end dev. I have transitioned over time to more of a backend & devops role and pushed teams to drop git flow in favour of CI and promoting builds. Feature Branch workflow is what I'd generally recommend projects us.
The issue with Git Flow is now you actually have multiple integration points at any given time. Some are long lasting, while others are ephemeral (release, hotfix). When you integrate into develop, you are not integrated with these other branches. If you create a hot fix, then you need to integrate that back into multiple long lasting branches. The workflow really complicates things and can be resolved by having just one "main" branch that you integrate in, and build artifacts (binaries, container images, etc). Then you can deploy those artifacts to test or prod environments as you need without tying that complexity to the source control.
I worked for a company where we had just abou 6 devs and they had a master only approach. Even then, we regularly got a build error from our CI pipeline telling the master was broken and basically everyone else was now working on a broken version having to wait for the guy that created the broken commit, or (worse) someone else had to do it. In my opinion it's jsut too much chaos. If you work on a seperate branch it's your responsibilty to keep pull-rebase from whatever branch you're merged (being dev or master). That way there are hardly any merge conflicts.
That is because you weren't practicing CI, and by the sounds of it - didn't pay attention to the feedback and had a "team" of individual tasks over a team that collaborated. I have lead a team of six that managed 600 applications in a highly regulated environment where we practiced CI, PP, TDD and never used a branch.
I recommend watching more of Daves videos. It’s a very difficult topic and requires going down the rabbit hole of CI and TBD to properly understand all the thougts in this video. Edit: e.g. Google uses TBD as well so no, it is not “only suitable for small personal websites” Edit2: PRs are not necessarily against CI, they can be used as a way of running tests before they get to master, so you don’t need to run the pipeline locally or with git hooks. The important thing is that the PRs are either automatically merged once they pass the pipeline, or at least they do not live too long to stray further away from the mainline. So pushing straight to master (and potentially breaking other devs copies every 10 minutes) is not required.
Agreed. The other context is needed for prime. I thought I was stupid when I tried to dive into his videos two years ago! No... it just requires an open mind I think.
@@retagainez Exactly. I was lucky I had read the CD book before Dave started making videos, so already had some experience, but the videos expanded it significantly.
The real answer is to not use GIT. Everyone works on the same computer at the same time.
nah just pass one flash drive clockwise around the office, when it gets to you, upload whatever you have then pass to the next person.
@@DudV2 Double it and give it to the next person
That's how we did it in the old XP days.
Just deploy it over ftp. If you're working with someone else just shout loudly which files you're working on so they know to stay away from those. That was how I worked in my first job.
Sounds like mob programming and is actually a legitimately advocated strategy.
My favorte part of Mr. Prime Agen is that he's totally cool with "I was wrong" or "I misunderstood this before." Everyone needs to be okay with being wrong sometimes and it's so nice to see someone like Prime being so casual about learning new things and getting better.
Is that so special?
@@dasten123 Very much so. I envy anyone who disagrees.
But what if I'm absolutely correct 100% of the time
Mr. Prime has very hot takes, which why he knows he may be wrong most of the time
How is Mr. Prime Agen?
Continuous Integration is *NOT* the automation pipeline you build or the tests you run, or the platform on which those tests get run. Despite the confusion caused by platforms branding themselves as "CI" and the regular misuse of the acronym.
Continuous Integration is simply: "everyone merges their changes into the main line very frequently, ideally multiple times a day in small chunks". That's it.
The pipeline of tests, and the use of feature flags to disable incomplete work and all of these other things are there to facilitate developers integrating their code "continuously".
Continuous Delivery/Deployment are then just additional layers that sit on top, these layers are again facilitated by automated pipelines, checks, tests, etc.
This. And when someone inevitably merges something that breaks the remote/main, because you integrate in small steps (preferably doing TDD and merging every step after refactoring), it's easy to fix by anyone. Problem with some of the video's by CD is that they talk about one specific practice that should ideally be practices in a suite with other practices to really have the benefit he talks about.
In this case CI, as he describes it, doesn't really work well without the other technical parts of XP. And then I would suggest doing Trunk based development with short lived feature branches that are merged via (automated) PR's.
Also, no this indeed would not work for OSS with external contributers, quite sure CD also mentions that in some other video's.
It always gets me when people say things like "I will just run CI against my branch" or "I will just run my branch through CI"
CI and TBD are basically synonyms, and you wouldn't say I'll just run my branch through TBD quickly.
The other one is feature branches. Creating long lived branches to keep features separate for weeks at a time is bad, because at some point you will have to merge it, and if there is ever more than 1 of them that will fall apart. If there is not more than 1 of them then why bother doing it? But short lived branches that last an hour and only exist for code review and running tests before merging are fine IMHO, and I wouldn't personally call these feature branches at all. Then you have to say don't use *long lived* feature branches just to differentiate them.
@@Timelog88 Sorry, can't cover everything in every video 😉
@@ContinuousDelivery No worries 👍 Rather have short video's explaining the parts well then one long video skipping the important nuances.
Then Continuous Integration is either:
A. Incredibly stupid.
or
B. Redundant.
If people do work that can and should be merged with the main project - then they should do so.
If they have work that isn't ready for that yet, then they should wait.
How was there ever a situation that occurred which demanded a push for "Continuous Integration"? Did people work in the shadows for 3 months on a separate branch, never merging branches and then having a completely broken / incompatible project?
Said in other words: Is the need for outspoken "CI" not just a general indictment of the bad judgement of people working with Version-Control systems?
I think I can address a bit of confusion - When Dave talks about the minimum integration frequency of being 1 day, and you talk about being a rebaser and "Why wait a day to update?" - As I understand it, he means integrating all diverging branches, and not just the one you are on. You can update your branch as often as master is updated, but nobody else who is diverging is going to get your changes until you merge.
You don't need to keep your release branches. As long as they are tagged with a version, you can recover that branch at any time, create a new one, hotfix it, versioning, release and delete the branch.
Branching Strategies shouldn't be imposed like Dave is suggestion, but instead the team that is going to manage the repository and releases should find what fits their needs.
not sure how a branch, in practical terms, is any different from a tag if the purpose is just marking what commit is what version.
I think it would be super interesting if you can have Dave Farley on stream and discuss topics like CI/CD or TDD
TBF a conversation where he answers our questions could be beneficial at least for understanding his idea for this workflow 100%
Yes! There's something very attractive to old experienced developers.
Nah
this
+1 for this
I agree with you. I think his definitions of "git", "(feature) branches", "continuous integration" and "testing" are vastly different from ours.
umm you do realize that I agreed with Prime, right? "his definitions" were directed at the guy he's reacting to. Every team can define branching for themselves, but mine - and I think Prime's too - is that we should not break master. That's just a different workflow with a different definition of CI that works for us. I simply do not understand the other ways and I do not care actually.
@@daniilpintjuk4473 and as I say - you can define continuous integration for yourself.
there's a disconnect between how an evangelist for CI/CD talk vs how an average devs talk of "we use a CI-server". So I kinda avoid using the term CI/CD outright when describing what what team does.
I just describe it in a simple way: "we don't push directly to master. we use feature branches and on each push, a server runs our test suite. We can't merge a PR to master unless it's ✅. Master should always be releaseable. we TRY to make small branches, and ship often. But we'll make exceptions"
That sounds exactly like what CI is. Short lived smaller feature branches that are tested and then merged into production through a PR. With the CD part a master build and release will trigger automatically on PR's merged into master. The master branch should always be protected and only way to do changes to the master branch is through PR's with reviews and automatic testing.
@@alexandersagen3753 there is absolutely no reason why master should be *protected*. This obsession about protecting master is foolish. Master needs to be releasable, but it's absolutely okay to break master from time to time, as long as we can quickly fix it (it's the principle of the Andon cord from Lean). Establishing convoluted workflows just for the sake of protecting master can lead to all sort of inefficiencies.
@@andrealaforgia I absolutely agree :) we break master all the time and its okay.
@BenRangel very succintly put 🤝
@@andrealaforgia isn’t there a reason to protect master simply to avoid accidental pushes?
A) Sure you can enforce all devs to have local git settings that prevent it - but that will fail when that mom who was on leave for 9 months comes back to work and everyone forgot she doesn’t have it
B) it’s 100% ok to push to master in a solo project. It’s ok in a small team. It’s ok in a large team if everyone is working on the same change and are aware you’re gonna push to master.
C) BUT in a large team it’s often the case that you have multiple tasks worked on by multiple sevs and it might turn out A is completed and ready to release before B - but if team B pushed problematic code to master by accident that screws everything up. team B might be aiming to push to a featureBranch before ending the day - but accidentaly push to master. Which screws up the plan team A had for an early release the next day.
Especially if the policy is ”master should be release-ready” I thin the default should be that anything merged should have passed the remote test suite ✅
Which reduces the risk
I have been involved in urgent bug fix attempts where we certain our changes had no negative effects - we pushed - and then thanked the heavens that we did not not push to master cause the remote tests failed, and we were about to head into another meetkng sl didn’t want to break master. (And please don’t ask ”why didn’t you test locally before pushing?”)
PS: of course if the entire dev team is mob programming an urgent fix - it is nice if we’re allowed to push directly to master, but as part of a mid/large size team - that is a rare exception imo
DS: I might be misunderstanding something about your Workflow or CI/CD pipleline.
Continuous delivery is a way of working (with a lot of automation) to ensure only good builds makes it to production. It is a falsification process where the "bet" is that the build is a release candidate that will pass the crucible and make it to production - but most builds won't. CD is the codification of the organization's processes and SDLC cycle so that every change is verified and has an audit trail instead of process theatre where everyone pretends the process is followed and done doesn't actually means done.
Continuous Deployment is the same as Continuous Delivery but the final human "approval gate" is automated. Continuous Delivery is not "every commit goes straight to production".
This feels like the 'different worlds' problem to me. I've worked at places where CD works great and operates just as you describe, providing a way to ensure that all commits are rigorously tested before hitting production. However I've also seen it turn into a raging tire fire when people fail to actually make the 'rigorous' part a reality :)
YMMV and probably does as with most things in the software development world!
@@ChrisPatti Yes, context matters. The issue is when organizations optimizes for "open source, zero-trust workflows" and "we have juniors, must control them" and "we can't have seniors to hold their hands" and end up implementing processes and workflows based on the lowest common denominator. And even worse, they tend to implement it across the entire organization by edict, again missing the different contexts.
Working with CI/TBD is not "unprofessional" just because someone made a video on youtube saying "Use git like a professional". But it's at that level we are as a software industry. And it's sad.
This so much this. Was just about to make this post, you beat me to it!
@@kasper_573same. So many people think of CD and think of the deployment one. I find continuous delivery much more valuable, knowing the thing that made it to master is as good as we can make it ready for production (nothing is perfect). This means the deployment flow can be made to match the release needs of the organisation.
i think when talking about CI/CD you always have to consider the context: little library, application, oss, live service, devices, ... i assume his context was in the live service area. he also often mentions the DORA metrics and his co-author was a driving force for these. these were mostly focused on live services i think. in the end it has to evolve into whatever works best for your case so there cant ever be a one for all be all
You're missing some context here. David's idea of CI/CD is essentially based on the idea that your local copy of master is continuously updated so that you're never working on old code. Conversely, it means that your work is continously being pushed to master so that no one on your team is working on old code. You can do this how you want, but rebasing frequently is a good start.
I think he understands that. What hes saying is there could be a situation where you identify 2 bugs at once, and one of them has been deployed to production. Ideally you would want to hotfix the first bug, but now you have to fix both before you can deploy to production again.
I’d rather freebase frequently
@@olot100 In that case you would integrate the fix into master first, then integrate it into a hotfix branch for deployment.
@@olot100 Usually for that. There are 3 cases:
the bug discovered if it exists only on main, fix will apply to hotfix branch of the main branch.
If it exists on the develop/feature only there is no hotfix, you fix in develop/feature.
And if it exists on both. You apply fix in develop/feature Then cherry-pick to hotfix branch of the main branch.
This also apply on release branch where hotfix is mainly cherry-picked from other dev branch to staging of release-branch then merge later.
So If only first bug to be fix, hotfix will contain only cherry-picked fix from develop/feature. (Yes usually maintainer of main/release are not the same as develop, and he is responsible for fetch, cherry-pick and merge fix from other team)
(On 9:30. Hot fix never pull back to dev, That wrong, you do opposite)
Then CI do things afterwards. Usually only done CI of production on tags in release/test, where maintainer feels OK this ready for test/deploy, If devs want to do do CI/CD of their own just setup own test branch for it and push fucking test branch for other to fetch and see what you currently doing for feedback rather than commit to develop to messed up others work before getting merge by maintainer.
Meanwhile, on a mid-sized project:
1. Client needs a hotfix for 1.0 deployed today.
2. You've been working on 1.1 for a while, and will deploy it in two months.
3. Client decided that feature X will create issues for them, and decided to postpone it for 1.2, which will happen in four months.
4. Client requested a proof of concept that may or may not be approved.
Most of the clients I worked for preferred that all merges are done through peer-approved code reviews. I don't see how you could handle all these requests by pushing to master daily.
This guy talking about CI reminds me of the business people talking about "lean manufacturing"--it sounds very neat and pretty in theory, but it falls apart when subject to the constraints of reality. Ultimately, optimizing your production processes for philosophical purity does not create the best product, with the fewest hours invested, at the lowest cost.
i love this take
This applies to all the acronyms
I'm blown away how you've summed up the problem with that channel so succinctly
This
Have worked in systems engineering, devops, and SRE across about 8 companies from 10k+ employees down to 5 employees and I have to agree. Smells like somebody who attended a CI seminar and is now coming back trying to explain to everyone how they are doing CI wrong. Starts with some suss definition of CI crafted specially to make fit his arguments, while much more broad definitions exist(see Atlassian's definition and also people make this mistake with the definition of CD).
Also don't care if the definition came from the person who coined the term. You could go ask Germany about the definition of beer and come back to the USA touring breweries for the rest of your life telling them they aren't making beer correctly.
My interpretation of Dave’s central argument is to avoid all team members building large features/changes to completion in long-lived, isolated feature branches, then trying to merge back to main.
Even if everyone is rebasing frequently off main, all feature branches are diverging from each other which can cause hell when everyone wants to merge to main.
Feature branches are fine, but they should be short lived and merged to main quickly (ideally within a day). This means larger features/changes need to be broken up, merged to main, and shipped incrementally.
If the changes are not ready for users they need to be hidden somehow. This could be through feature flags or simply not including the “entry point” of the feature in the UI (Dave has a separate video on this).
Note: Dave argues taking it a step further by substituting code review with pair programming and pushing straight to main. I’ve never done this so I can’t comment, but I think a standard GitHub-style PR/review workflow aligned with what I mentioned above gets you 90% of the way there, particularly when coupled with good team communication and an emphasis on prompt code reviews.
Yup, that's exactly what he advocates. And I think he has a great point. The one downside is that it absolutely requires very solid testing and just as importantly, fast running testing. It's just not feasible if testing takes 1h+.
edit: I also find it super funny how comments are saying Dave doesn't understand CI/CD. Wonder if they know he actually came up with the idea in the first place.
What gets suggested by Dave seems to me what I would call "continuously YOLO it to production". (Especially the CD part of CI/CD). But with the help of feature-flags that may be possible? I regularly have code deployed in prod that is "ready when the orther parties want to use it". Typically I think backend should be able to release before (app-)frontend uses it.
This is usually fine but in larger projects/companies features might get cancelled. I have seen it painful to revert tons of "small features" that contributed to the actual feature.
@@MeriaDuckYou are missing 1 very important thing about what he was talking about. You need to separate source code from the builds.
So after someone merges to master it gets built and tested. If it fails testing that build does not move forward. If it passes it then gets deployed to the Test environment. Then it goes through more tests, this is where you will manually test if you have it. When it passes testing it goes to stage, then to prod. This is what continuous delivery is. The guy in the video was only talking about continuous integration
Yep, that's his argument.
But depending on how the software gets developed, you may not be able to actually work like this.
That's especially the case in open source (you have NO guarantee that the half finished feature you just merged gets finished in a later PR by that person).
Yeah, that video got me really confused (heh!), too. For larger projects, I haven't found anything better than this:
- You have a dev and a release branch (the release branch may be replaced by tagging releases on the dev branch, which has it's on subtle pros and cons)
- You branch off of dev, when you want to make any changes and develop thoese on your new branch (you try to keep the changes in one such feature branch to an absolute minimum, the goal is to merge as soon as possible and have as little opportunity for merge conflicts as possible, and favor repeating the entire process a couple of times over large merges)
- You occasionally run tests you care about on your new branch (be it via some automated process like git hooks or not, and be it locally or on origin/newbranch, doesn't matter)
- When you (and probably your team) are ready you merge it to dev which triggers a mandatory run of the entire test suite and maybe a canary or staging deployment
- If those checks are fine, you merge to your release branch and do a release
You try everything to never have to push directly to the release or dev branch, but pushing to dev might occasionally be necessary for a real critical hotfix or complicated checks that you need to do as close to production as possible.
if Farley works on master directly - and has a huge test suite how does he work efficiently? Doesnt he have to wait for the entire test suite to run locally before pushing?
Personally I'm often pushing my work to branchA switching to a new quick task, and I get a solid five minutes of work in before the test server has told me branchA is ✅ then I merge it and resume with B
would be one hell of a hassle to have to wait to run tests locally before I can push and go do other things
I feel so validated hearing your take on this video. I watched it once months ago and similarly thought it made no sense. And it makes no sense because of how he describes the branches/pushing/testing.
He mentions making changes to his fork of master, and then clones of master and doesn't really make clear which one he's talking about in the various references he makes to it. Like it seems like he's talking about pushing to master, but really means like a push from his local copy of the fork branch to his remote fork, then from the remote fork there's automated testing (or PR that triggers it) that has to run before allowing the PR to be merged.
But he doesn't mention PRs and he doesn't really describe the testing in his Ci Pipeline except when he mentions running the tests locally before pushing to his fork. Running the tests locally isn't bad per se, but that's not really part of the pipeline because how can you really enforce that when making a push to your fork or even master?
EXACTLY. i was confused.
now if you push to your master and it _auto_ merges to master after thorough testing, totally get it, but it _seemed_ like it was local -> local testing -> master
@@ThePrimeTimeagen This video makes more sense if you understand that Farley probably came up with the “pipeline” concept and that he modelled it after CPU pipelines hence those slower acceptance test and performance test stages etc are not run on every commit and the end goal is only as recent a build as possible that could be deployed to prod at will at the touch of a button. Each stage beyond the fast-ish artefact builds aims to falsify the assumption that the artefact is good to go. That’s what he means by CD (another term he came up with). Semantic diffusion is a real SOB :)
He probably omits PRs because he thinks pair programming is a better way to do code reviews (it might not work with open source), but it doesn't make any difference. If you find PRs prevent you from using this methodology, it probably points to an issue with the code review, not everything else.
For your last question, it takes a lot of trust in your team; believe they will code using BDD/TDD and run unit tests before pushing. It is far faster to run something locally than remotely regarding unit tests. If your tests fail at integration (your integration tests), your unit tests are insufficient. Your design needs to be simplified, and you need to break it apart or have better interfaces with well-defined ways to speak to other components. The same needs to be considered for unit tests that run far too long. Suppose your unit testing is too simplistic to detect issues between things you might think are "external components" external to the module you worked on. In that case, your code needs better interfaces between the two systems.
As for the rest of your questions, MANY resources cover these topics as the result of decades worth of software development. The XP book is an example of this, written decades ago.
I tried to consider your question and any other questions you might have in this explanation. If something needs clarification, I will try to explain it my best. This is my understanding from looking at Dave Farley's videos on CD for the past two years and reading his CD book (although I admit I need to re-read it sometime soon.)
Every branch is essentially forking the entire codebase for the project, with all of the negative connotations implied by that statement. In distributed version control systems, this fork is moved from being implicit in centralized version control to being explicit.
When two forks exist (for simplicity call them upstream and branch), there are only two ways to avoid having them become permanently incompatible. Either you slow everything down and make it so that nothing moves from the branch to upstream until it is perfect, which results in long lived branches with big patches, or you speed things up by merging every change as soon as it does something useful, which leads to continuous integration.
When doing the fast approach, you need a way to show that you have not broken anything with your new small patch. The way this is done is with small fast unit test which act as regression tests against the new code, and you write them before you commit the code for the new patch and commit them at the same time, which is why people using continuous integration end up with a codebase which has extremely high levels of code coverage.
What happens next is you run all the tests, and when they pass, it tells you it is safe to commit the change, this can then be rebased, and pushed upstream, which then runs all the new tests against any new changes, and you end up producing a testing candidate which could be deployed, and it becomes the new master.
When you want to make the next change, as you have already rebased before pushing upstream, you can trivially rebased again before you start, and make new changes. This makes the cycle very fast, and ensures that everyone stays in sync, and works even at the scale of the Linux kernel, which has new changes upstreamed every 30 seconds.
In contrast, the slow version works not by having small changes guarded by tests, but by having nothing moved to upstream until it is both complete and as perfect as can be detected. As it is not guarded by tests, it is not designed with testing in mind, which makes any testing slow and fragile, further discouraging testing, and is why followers of the slow method dislike testing.
It also leads to merge hell, as features without tests get delivered with a big code dump all in one go, which may then cause problems for those on other branches which have incompatible changes. You then have to spend a lot of time finding which part of this large patch with no tests broke your branch. This is avoided with the fast approach as all of the changes are small.
Even worse, all of the code in all of the long lived braches is invisible to anyone taking upstream and trying to do refactoring to reduce technical debt, adding another source of breaking your branch with the next rebase.
Pull requests with peer review add yet another source of delay, as you cannot submit your change upstream until someone else approves your changes, which can take tens to hundreds of minutes depending on the size of your patch. The fast approach replaces manual peer review with comprehensive automated regression testing which is both faster, and more reliable. In return they get to spend a lot less time bug hunting.
The unit tests and integration tests in continuous integration get you to a point where you have a release candidate which does all of the functions the programmer understood was wanted. This does not require all of the features to be enabled by default, only that the code is in the main codebase, and this is usually done by replacing the idea of the long lived feature branch with short lived (in the sense of between code merges) branches with code shipped but hidden behind feature flags, which also allows the people on other branches to reuse the code from your branch rather than having to duplicate it in their own branch.
Continuous delivery goes one step further, and takes the release candidate output from continuous integration and does all of the non functional tests to demonstrate a lack of regressions for performance, memory usage, etc and then adds on top of this a set of acceptance tests that confirm that what the programmer understood matches what the user wanted.
The output from this is a deployable set of code which has already been packaged and deployed to testing, and can thus be deployed to production. Continuous deployment goes one step further and automatically deploys it to your oldest load sharing server, and uses the ideas of chaos engineering and canary deployments to gradually increase the load taken by this server while reducing the load to the next oldest server until either it has moved all of the load from the oldest to the newest, or a new unspotted problem is observed, and the rollout is reversed.
Basically though all of this starts with replacing the slow long lived feature branches with short lived branches which causes the continuous integration build to almost always have lots of regression tests always passing, which by definition cannot be done against code hidden away on a long lived feature branch which does not get committed until the entire feature is finished.
CI is about how your team works together to write software in order not to have mega-merges. It is orthogonal to your deployment strategy, which could be direct CD to prod, release of specific version after UAT, canary, etc... You can use tools (e.g. deploy by tag instead of branch) to use any deployment strategy you prefer and the proper implementation techniques (e.g. feature flag) while doing CI (=no mega-merge) in your team.
I've worked/do work on a project where people push directly to master. Since we all work on completely separate things for the most part, it _works_ but it is awful. My 2 cents (which is overvaluing it): I think developers are particularly prone to falling for a trap of getting things done perfectly/most efficiently as possible. It's a mindset where any setback, any inconvenience is a failure. I'm fine with a system where i might have to do an awkward merge conflict resolution every once in a while. Dealing with surprise is just a cost of doing business in life.
Fwiw, when i do work on projects (usually pretty small), I like to have a protected master branch and then just branch off that for features/bugfixes etc and merge them in via PR, with CI only running on PR's, and using local unit-tests in between. I'm sure it has a name, but i just call it using git.
1000% agree with everything you just said
I've worked with all kinds of system, pushing to a single branch only, full-fledged Git-Flow with MR(PR in GitHub speak)/Code Reviews, and a kind of "hybrid" version, where you work on feature branches and a single dev branch, marking new releases to be created by a pipeline trigger operating on release-tags for a specific commit in the dev branch. The last approach seems - for now - to be the most sensible approach for me, and I'm curious why it didn't gain more traction (maybe other people didn't think about it lol). All approaches, of course, use automated testing for each commit and also for each merge that needs to be performed.
Pushing directly to the master never truly sounds like a good idea. Even in a team of two, I would never allow either of us to merge into main directly lmao
Branching creates multiple versions of the source code (e.g. "dev" vs "main"), thus creating the possibility for these versions to become desync'd. If you never branch then having desync'd versions is impossible because there is nothing that has to be sync'd. This eliminates tons of wasteful effort spent fixing bugs, git issues, merge conflicts, etc because they just can't happen. For example, just this last week I sat with 6 -7 developers for an hour because one developer squashed their commits on a feature branch and the PR itself broke the syncing between their feature and main branches. Add to this that they couldn't fix the issue on their own because the main branch was locked and you can see how this costs companies real money and productivity. Also PR's become a huge bottleneck when you transition to truly continuous delivery/deployment and deploy to production 10, 20, 40+, etc times a day. Streaming your changes "continuously" into prod is impossible when you use the discontinuous process of a PR.
@@jordanpavlic9745 rebasing is for sure a thing. So I'm not sure I understand how anything was messed up, if following that very obvious principle. That, and of course code reviews. 🫠
The entire concept of "continuous delivery" seems to be something like "discover footguns as quickly as possible by firing them as many times as possible in production systems".
"discover footguns as quickly as possible by firing them as many times as possible **before we hit production** systems". Fixed it for you.
Continuous Delivery is not Continuous Deployment.
@@ddanielsandberg did some reading, and it seems that continuous delivery and continuous deployment are described with overlapping terms. Continuous delivery indeed attempts to discover footguns just before production while continuous deployment automates the last step of sending undiscovered footguns to prod. Glad to have found clarity on the topic; this video had left me a bit confused about the terms.
@@k98killernot quite. Continuous integration attempts to find the foot guns before main testing using lots of fast automated unit and integration regression tests. This allows you to get something suitable for detailed long termed testing while at the same time attempting to spot any breakages with the regression tests before you even make the commit, and which results in tested code with no obvious regressions which does what the programmer understood it needed to do.
The output of continuous integration is then used by the deployment pipeline to further prove that the code is not fit for purpose by running longer lasting tests which try and discover regression in performance, memory usage, etc. These additional foot guns are not always discovered at this point, but the end result is something which has already been deployed to testing, and which looks as fit as possible for deployment to production.
Continuous deployment goes further and rolls out this new release to the load balanced server with the oldest codebase deployed, and gradually takes over the load from the next oldest, either succeeding, or discovering additional foot guns which don't show up until under heavy load, in which cae it is rolled back.
I think another main confusion was "feature branch".
CD means a "long" (which for him is more than a day) lived branch; something one interacts with and refers to by name.
Prime means wherever I store my changes before they are merged to master (or whatever).
I lean toward Prime's definition, and would qualify the CD-version with "longlived-" or "git-flow-". I think CD usually mentions this, but he might have left that out since the video was specifically about git-flow.
I think the big difference between your local master as the development branch vs a development branch on the central server is that the when it's on the central server rather than you pushing directly to master on the central server, the central server can run the test suite and make sure it passes *before* it is pushed to the master on the central server. The other one is that when it's a feature branch on the central server, others can code review and test it before you decide it is ready to push to master. Also, I don't think that long (ish) lived feature branches are the problem.. the problem is when they remain diverged from the master for long periods. As long as the feature branch is being rebased onto the central master at least daily, it's just fine if you sit on that feature branch ( especially if it is subject to daily integration tests and peer review on the central server ) for some time before doing the final merge. In other words, a WIP pull request. As long as it is rebased daily, it can stay as a WIP PR all week and that's just fine.
That only works as long as you have a single feature branch. As soon as there are two long-lived feature branches, it's not fine any more, because those two do not get rebased off each other.
@@willemm9356 They don't need to be rebased off each other. As long as their deviation from the shared master is limited and regularly rebased against it, then each feature branch contains a different set of deviations relative to the master, but within one feature branch, the deviations are relatively limited. As long as the longer lived feature branch gets rebased after shorter lived feature branches are merged to the master, then the deviation in the long lived branch remains relatively low, no matter how long it is around for.
First, I said two long lived feature branches. Second, "different set of deviations" is pure conjecture.
@@phillipsusi1791this is only true as long as the code changes in the new feature does not touch any code used by any other new feature. In it's own long lived feature branch. As the feature gets bigger, lives longer, and requires more modifications to pre-existing code this assumption becomes increasingly invalid, and that is before you consider any of the other fundamental problems with long living feature branches in actively changing code based.
@@grokitall What you are talking about is the amount of code churn, or how much change to existing code is in the branch. That has nothing to do with how long it has been around. No matter how long a branch has been around that churns a lot of code, once it is merged, it is going to cause problems for other branches that have not been merged yet. That's going to be a problem for them when they go to rebase, but so what? They will have to figure it out, just as they would even if that branch was only 1 day old and hardly touched anything else. They will have to resolve the conflicts in the rebase, retest, and by the time they submit their merge, everything will be fine.
It's also called Green Trunk. Everyone works and commit to mainline and, ideally, it should always compile. You can use bots in Slack or whatever to report on build failures and those should be addressed ASAP. With Perforce, you can even set triggers to prevent submitting code that contains files that don't compile or haven't passed review. These don't catch everything like the obvious mistake of forgetting to add a file but it increases the trust in the latest code.
Dave never said you should push into production directly. He says you should merge as often as possible into the master branch in the staging are which triggers the whole CI/CD pipeline, and you deploy the change into production when it passes every stages.
This like saying "Electricians work best with the main breaker says on. They'll know the second they do something wrong".
Continuous Integration and the Test Pipeline are different things that sometimes are called the same (i.e. CI)
agreed
Finally this guy is catching some heat. All of his videos seem to be "This thing you're doing is wrong"
thank you for saying posting another reason I find this guys videos, insufferable and impossible to watch. He is just about the only technical RUclipsr I’ve told RUclips to not recommend videos from ever.
"rebel scum" lol what an oxymoron
@@carlsjr7975 its a Star Wars reference...
Like the Wooly mammoth dishwasher says on the Flinstones says, “Hey, it’s a living!”
Yup, his videos are absolutely a waste of time for people that don’t have automated Q&A / testing strategies in place 😌
I am actually shocked that the word "trunk based development" was not mentioned a single time by any of the 2 devs with multiple years of experience, in this 30 minute video about trunk based development.
In summary:
- Branches are not supposed to live longer than a few days. They always branch off from, and merge directly into, master. As soon as they are merged, the branches are deleted.
- Use feature flags when working on larger features, so you can continually merge your changes, without giving customers access to an incomplete feature.
CD actually touched very briefly on trunk based development but Prime completely ignored it.
21:25
@@haakonness oof. I'm taking the L-train to clowntown
Yeah I think that’s the main point here. The entire video can be summed up by a couple of points:
- Follow trunk based development + feature flags to ensure mainline can always be deployed
- If you have bigger features don’t create a feature branch, rather break it up into smaller commits such that each commit can be merged into master
- Use tags for releases instead of release branches
The failure of trunk based development is evinced by how the expectation when getting an update these days is it'll break things. It's a developer-centric approach "I don't want to think about what I'm doing" instead of what it should be, user-centric.
I think you kinda got there at the end. You DO mostly agree with him, but you are using slightly different definitions of “feature branch” and “CI”. What you call a “feature branch”, he calls a “local copy”, and what he calls a “feature branch”, you would call “a long-lived feature branch that hasn’t merged in more than a day”. Whether the local copy is only local or is reflected in the central repo is completely arbitrary, even he would concede that. The tl;dr of what he is actually advocating is: “don’t use any workflow that keeps code from getting merged for longer than a day”. GitFlow and even GitHubFlow imply that features do not get merged until they are completed and since many features can take longer than a day, the risk is you finding out you’ve broken something way too late. To make this work in the real world you need a good automated test suite and some real feature flag kung fu. But the big shops invest in that so that it can work. One area where you diverge slightly is that you prefer to have a branch that mirrors production and he advocates just using a tagged release for that purpose.
There's probably some misunderstanding here, but I feel like you agree on the main premise. It's the policy of staying in synch with single remote branch. That's it. Every push to (main) remote is rebased onto current tip of master branch - not pushed as separate branch and attempted to merge later. It forces frequent integrations and discouraging big changes made in isolation, which are hard to merge without conflicts. Local/team feature branches are fine if they're frequently stacked up in main repo.
Been using gitflow with the nvie/gitflow extension for a few years now on small to large projects. Really enjoy the pattern myself and haven't hit any issues, yet. It's typically, features are branched from develop, merged back after all the PR/QA hoopla is done. A hotfix is branched from master and should be very small (for 911's), tagged and merged back into master and develop after things are appoved. A release branch is a tagged version of develop that gets merged into master after again, things are approved. I'm sure it doesn't work for all projects. Just use what works for you and your team.
I do not have a problem with gitflow except for the requirement to delete old branches....DUMB if you are using Jira and want your branches to continue to go to the right place in stories. But aside from that having a development branch is a fine idea
@@batboy49 Our company keeps track of the merge requests instead of the branches themselves, sometimes stuff is missed because there were some commits made to hotfix stuff, but you at least get a good idea of what changed and how it changed even if it's not a perfect representation.
Although we also archive the versioned branches for similar reasons to what you describe.
@@batboy49 I typically point to pull requests in Jira tickets (or alternatively commit diffs). Branches are only really useful for tracking live changes, you can always find changes between commits, regardless of whether or not the branch exists.
Also, I don't think the gitflow pattern requires the deletion of branches, are you saying a specific extension enforces this?
Peter - I see release branches as being used predominantly for isolation during a release. That way you can test your code and go through the release cycle while others are committing to the develop branch.
I'd then tag the commit that ultimately goes out for release, merge this back into develop, and also into master if you have a master branch.
Edit: your last sentence is perfect
@@sb_dunk I just looked it up and I think you are right it is not required to delete branches according to gitflow. The last time I was in a group that enforced gitflow THEY deleted the branches and I came to associate it with gitflow. I also agree with what you said about pointing to the pull request in the ticket. I do not guess I hate gitflow then. What I hated was deleting old branches and having that connectivity to the story broken. BTW...I do think it is useful knowing where that branch ended up at the end of the story. I think there is value in having that branch still around. What if some other change ends up breaking this functionality? If we step through the stories we can easily go right to where the changes were and diff them directly. But to your point no, I saw nothing indicating you had to delete branches in gitflow. I like keeping those book marks around so that if I go looking for them later it is easier to find. Generally we will not need the branch so much.
There are some tests you cannot run every time there is a small change. There's a reason we create release candidates (while develop can be ahead) while release is being thoroughly tested before release. The idea that develop and production can be the same thing is under the assumption that all tests are fast enough that daily changes don't break anything.
I have only worked with CI like workflows (short lived feature branches).and it works flawlessly. A lot easier for a QA to test a single feature change and to find what caused an issue in prod (that will happen anyway from time to time). We release around 20 times a day to production with around 25 PR's merged daily. Build/test takes around 25 minutes (many concurrent test-stages).
I hear about a lot of SAAS companies using develop and other branches where changes goes out in batches and I've never understood why.
@@alexandersagen3753 the person you're responding to made a really important point you totally glossed over. Prime makes it as well. If your exhaustive test suite takes 30 hours, how do you suggest running it twenty times a day? Assume for the sake of argument that the reason it's so long is that it has to be, not that the people building it are incompetent.
@@SteveKuznetsov that is a good point and I agree. I'm just overly happy that my work environment has ~20 minutes build and test time and that we can do deploys to production 20-25 times every day.
For most companies I'm sure the build/test time is a lot longer than it needs to be. Modularize the project with very clear boundaries and clear contracts. Test each module separately on every border + unit tests. And most importantly. Only test the modules changed. Then the test build doesnt take longer time than the longest module
If your test suite takes 30 hours to run, then you effed up somewhere.
@@Jabberwockybird I've worked on systems where there were 20 nodes of 8 hours of tests. Some companies just do not care about performance
Here is how I understand what he's saying (please don't argue with me about the statements it's not me making them):
You constantly push to main (yes, actual main, the one everyone else pulls from). You don't push if you don't think your changes are correct, which is why you use tdd to assure yourself that they are (if you your tests don't catch your bugs, get better at writing tests). That's what CI is. This doesn't preclude code reviews, or running automated tests before pushing, the main point is the frequency of the changes being integrated into main though so any processes you do here must not be time or labour intensive.
Automated tests are also running on main. Of course you don't release if your automated tests are failing. If you don't do CD, you don't necessarily release all changes that pass automated tests on main, but if you are you do. Even if you're not doing CD, you still release very frequently. The idea is that main should always be in a releasable state, and if it ever is not it becomes top priority to bring it back to a releasable state.
Any change that isn't immediately ready for production is hidden by feature flags and similar mechanisms but still merged and integrated with the main branch. When the feature is ready to release you just turn on the feature flag, and remove it ASAP if there doesn't seem to be a need to turn it back off again.
So that's why you don't need a separate production branch to make hotfixes too. If there's a bug you fix the bug on main and release, which you can do since like we just said main is always ready to release.
The advantages are that any QA that happens always happens on a version of the code that's very close to production, and you don't get huge merge conflicts because you don't make big changes accumulated over days or weeks. Things like code reviews happen very close to the time you're writing the code, not after some days when you're about to complete the feature.
---
I personally have never worked this way and while I understand the theoretical appeal I have a really hard time imagining how this would work out with pretty everyday tasks like schema migrations (OK that's not something you do literally every day but you can't really do those with feature flags exactly... You could _try_ to have multiple parallel versions of the same table based on your combination of features I guess, but that sounds awful), big reactors (yeah I'm sorry you can't always isolate these into neat little packages unless your code was already so well written it's hard to imagine why you need to refactor it).
Regarding your CI confusion. CI is short for "CI pipeline", but what it really means is a "build pipeline". CI on the other hand, is a concept that you check in your changes regularly (at least daily) to the master branch (no long lived branches).
Having used git flow, it takes a lot of maintenance unless you have a lot of automation. Our feature branches(long running features) kept getting out of sync with dev and main so merge conflicts happened all the time.
That's simply a sign that you're making too broad changes at once. I've added half-finished features to production, I just ensured it wasn't breaking stuff. That way there were far fewer conflicts to resolve and the process was far smoother. Frequent merges is a good thing.
@@CottidaeSEA we ended up partially solving it by pull main in to dev all the time and dev into the feature branches all the time. The merging into production is something you are not meant to do in git flow(at least that was according to the tech lead that took a 3 day training on it). These issues were not avoidable when two separate features touched the same file or worse one feature refactored code that was being changed by a different feature...
@@joejazdzewski Ouch, that statement within the parentheses have a lot of feelings behind them. I can totally relate.
Yeah, there will always be some friction when merging, especially if the interface is changed somehow. I'm not at all against changing/improving the interface, but I really think changing it in a feature branch is a bad idea.
Refactoring on that scale should be discussed internally and done in a separate branch to ensure those changes are applied as soon as possible.
This appears to me that is a problem of your project and team. In my work we used git flow on a daily basis for months and haven't much of these problems, they only happens when someone starts making changes and not updating his branch
Ideally, you should update your feature branch if it lives long enought with develop, and then at the end the conflicts would be inexisting or minimal
Yeah feature branches shouldn't last that long. We run with feature branches matching stories, so should only be a few days to fit in a sprint.
If the entire feature will not be ready then it's feature toggled out so that we can enable bits in test envs but not have it enabled in production.
So, he wants immediate feedback from his fellow devs? Well, I sure hope he's a sooo good, that EVERY SINGLE COMMIT he makes is flawless and doesn't break his fellows, whom otherwise have to wait on you fixing your mess (wait for the "feedback" you get after wasting the other team's time a few times).
You NEED an isolated working branch to test BEFORE pushing in everyone else's origin.
I suggest looking into this methodology more. It saddens me to see how the principles didn't land with Prime and commenters here
The whole point is you work so your software is always in a releasable state. If you push shit, that's on you. You heard him: he's committing after each new test he's written and made passing. How are you fucking anything up when you are working that incrementally. The pipeline ensures all existing requirements are still met. Getting fails? Push fixing change. You are like 10 min away from the failure you introduced
@@jangohemmes352 Then again, that's just a "feature branch" but in your local git staging area instead. Why not do exactly what you said in a feature branch? And merge your changes when the tests that you wrote on that feature branch pass. What if your feature is not atomic, and is not easily divided into multiple "releasable states"?
@@ahmedalsaey454100% agree. people don't realize this SVN single version approach leads to a local copies all over the place because people aren't ready to upload
How about when you're working on a feature with breaking changes, with other devs? Feature branches sure look good
You completely missed the point about what CI is. CI is not a build server/tests/linters/etc. The term appeared long before Jenkins was there.
It's about integrating your changes frequently to one branch, so you eliminate the risk of a merge hell. As simple as that. CI has nothing to do with your releasing/deployment strategy. Please, read the definition on Wikipedia.
I think GitFlow is a bit like microservices. It makes sense when Netflix does it, but hardly anyone else. Enterprise SaaS, which I'm sure is most software that gets written out there, could really benefit from a radical simplification of both process and architecture. I routinely see even internal tools with a few dozen of users being built like it's user facing Netflix services.
Here's my hot take on Git Flow: for legacy systems that are way too vast to realistically implement testing on and have management approve such an enormous endeavor, Git Flow the best system there is. On the other hand, if you're starting a new project from scratch in 2023, of course you'll be much better off adopting a single-trunk strategy with TDD.
EDIT: the subtitles are in portugese.
ani idea why it was in portuguese?
@@lenickramone
My guess is as good as yours tbh
@@lenickramone Honestly, I think he just didn't understand he could turn it off...
As a Brazilian, I thought the subtitles were being displayed on my video 😅😅
Does he speaks Portuguese?
@@JudahHolanda don't know, but hi does bring Brazil up sometimes, maybe his wife is brazilian or something
I'm a bit impressed by how Farley lays out his argument. cause after Primagen asks a question Farley's next segment is often a direct response to that question
He must've had this talk quite often and structured it in a way were he gives enough info at every step to adress common questions.
but still room for improvement - like how it's unclear until 20:00 if he works on master directly
There are different tools for different organization types and this is one of them. Knowing many tools will help you to pick the right one or create your own processes. I personally don't prefer git flow but I usually work on small teams and prefer faster feedback.
It does seem to assume that all changes are intended for production - maybe you're doing an experiment, testing to see if an idea might work.
I think prime was using CI and CD somewhat interchangeably , what he showed at the end and agreed on was actually CI and what he argued against was CD, CI doesnt mean every commit goes to production, but you can have CD and every merge to main can probably have CD flow that pushes into production, CD is hard and almost impossible for most, CI on the other hand is possible for most
Yes, and Continuous Delivery doesn't even mean every change you make goes directly to production, unchecked. That's the difference between Continuous Delivery and Continuous Deployment. In Continuous Deployment, every change goes to production. In Continuous Delivery, every change is built and ready for release but it's a product decision when you should release so you may have a manual step here.
What you described in the end is totally correct. Yes, your vision (my/dev -> check -> push tp central/dev -> fetch and rebase onto -> continue work) is the same as his with just slightly different explanation.
Used CI for many years at many companies. The whole fixing old bug thing you mentioned is generally fixed by disabling the feature flag. It's totally possible in real life, it's just hard for people who are not used to it to understand. You can also check something in disabled under a feature flag.
Google uses CI for everything. Feature branches cause mega merges and a host of issues because they are getting less testing.
Testing happens just before testing to master. Feature branches is when you have multiple groups working in groups on feature and then releasing them to master monthly or whatever. Large amount of code hitting the mainline at the same time causing integration issues.
I think you might be confused by what is ment by feature branches when we talk about CI. Feature branches are either long running branches to build a large feature or a branch with a lot of devs. Either way these branches end up being more complex then frequent incremental commits to mainline. Also feature branches often will have branches off the feature branch.
You can have main and production branches in CI however production and mainline are only two branches to maintain, essentially you split the focus testing between two branches. Note testing is a combination of automated, devs pulling down the code and QA.
With non-CI you can have hundreds of feature branches (again these are not short lived dev branches). These hundreds of branches it becomes extremely costly and expensive to have the same quality of testing that mainline gets because only a subset of developers are pulling from that branch.
So in summary. Feature branches (using the CI term of long running or shared branches) cause lower quality code or slowdown code delivery.
Also worth reading "how google tests software" if you want to find out more.
How do you fix a bug by disabling a feature flag?
Monke pushed code to master trying to refactor some code. Weird glitch appears. Glitch needs bugfix. What flag does monke switch off so bug goes away?
@tedchirvasiu changes are the leading cause of bugs. So disabling the feature flag disables the feature and the bug together until it can be fixed. If you introduce a feature and suddenly the app starts crashing. You can disable, diagnose the issue and re-enable. Also note that often the feature will be turned back on before it hits live, although sometimes features need to be disabled in live as well depending on what is worse.
Often features are not even consumer facing but are needed for the long term. Of course before something is released it should go through testing, automated test etc... however we all know bugs get through that. Having more eyes on it at dev time rather than just the few in the branch increases chances of catching issues early.
For more read the book "How Google Tests Software" which gives a good summary.
@@TheBestNameEverMade Changes to me means modifying something that already exists and additions would be new stuff.
For every change do you keep both the old code and the new code and switch between them whenever you do refactoring for instance? If you take down the entire thing until you fix the bug AND verify the rest of the commits in the mainline up until that bugfix, it seems way worse for the user.
Sounds like you either have to delay the bugfix or you rush more unverified code to prod, leading to more possible issues.
I'm quite curious how Google tests their software, because it has plenty of bugs. Right as I'm typing this message I'm experiencing a bug where deleting a new line removes the entire comment text. It has been happening for weeks. Hitting the back button on RUclips sometimes moves you 2 videos back. On the mobile app sometimes the video starts but without sound. I have to go back and open the video again (and then it works).
I tried removing a credit card from my billing but it said there was no credit card whenever I pressed the remove button. Had to contact support and they did it for me.
For years I have been unable to use the sync feature in Chrome because as soon as I enabled it and it loaded my profile, it would insta-crash my browser every time. After some years I just gave up trying to check if they fixed the issue, I just don't have sync.
And don't even get me started on their Cloud platform....
@tedchirvasiu the first thing in large software that needs to be accepted is there will be bugs unless you are working on a NASA high reliability project. Also we should expect humans to make mistakes.
Those takes many more years to develop and produces less functionality. Each line of code has to have automated tests, can't use many language features and requires multiple engineers spending days reviewing every aspect of the code. A company would be out competeted by someone else... you can't let perfect be the enemy of good. RUclips works well enough to be the most popular video platform.
Given there will be bugs how do you reduce the amount? You allocate more resouces to them in terms of automation and number of people looking at the issue. Having groups of people in separate branches reduces the amount of testing a feature gets. The uber feature ends up going into the codebase all in one go and if there is a bug the entire feature to have to be backed out or toggled off rather than just the part that broke.
The goal of the game is mostly to keep the dev branch stable. Having the ability to do it in live is nice to have. Also having the ability for users to toggle experiment modes such as google labs is also useful for getting more eyes on new features.
If someone commits something and it breaks, if there is a toggle it can be turned off within seconds to minutes (depending on if the unit test was able to disable the feature or it had to be done by humans) then a fix can be made in hours or days. Blackouts are another option although as new code comes in all the time they can be more complex. Also if the is a huge feature checkin (which we are trying to avoid by not having group feature branches) then that's a very large backout which depending on how long the feature has existed in dev it can take a very long time to disable, hours or days or months.
Also there is the problem of checkin queuing with feature branches. Google with the number of feature branches they had, they would have very long checkin queues because a team would try to checkin and some other massive feature would go in before them requiring a rewrites of their code. If the feature had been incremental then the amount of changes would have been smaller to make it adapt to changes. I mention this because it's a core part of feature branches that slow down features to users and reduces the amount of testing code gets.
With feature branches, If you need to get your feature in and the day before the api you use changes you have to fix it fast and get it in so the api doesn't change again however fixing it fast could bring a bunch of new issues. If you wait and test you might miss your window before new api changes comes in and the feature keeps getting more complex.
Feature branches are one of those things that sound nice on paper but don't work well reality. In most cases, they cause slow down in development, longer time to release and more bugs.
@tedchirvasiu also yes you might take down the entire feature but it should be a very small change not the large kinds of features that come from feature branches. It could be as small as a few lines of code.
Maybe it was an optimization or a new bit of helpful text. Sure the dev user (and wrost case end user) will appreciate those things but likely they won't be noticed. In anycase 99% would be fixed before going into the live branch, it would be disabled to allow other devs to keep working.
If it hits live then it's about making a call about what is worse, the feature or the bug it caused. If you discovered that half your users were crashing then disable it for a few hours and let your users actually use the product. That's an extreme example of couse... it should have been caught in dev and in that case the users would just have to wait for the next release in a few days for the feature.
There is also sample testing with feature flags where you only expose the feature to 1% of your users and see how they respond before rolling it out to more although that depends on the feature.
Also the feature flag can be used by QA or automation to figure out who caused the bug since it's easy to toggle rather than performing a backout and rebuilding code. QA can go through several hundred toggles in a few hours but they could probably only do a small number of blackouts to find the route cause of an issue.
Anyway I hope that was helpful.
Re. 11:32, the "mega merge" is actually caused by the diff between a feature branch and the merge target. In other words, it is caused by the drift of 2 branches over a significant period of time. So Continuous Integration is not isn't going to cause this, since merging more frequently will make the individual merges much smaller.
He means every commit of every dev goes to master without actually having, necessarily, a completed feature that is merged. Each commit is meant to be whole onto itself and, in theory, the dev ensures this by running a basic test (not the full suite) on their local copy (which reminder is master). They then push to remote (FCFS with multiple devs) and the CI takes over ensuring the sanctity of the master branch per commit and its release-readiness. Tags denote versions and for hotfix cases the idea is that this approach makes sure the fixes don’t introduce a scenario where the RC code acts bizarrely as a result of a patch/fix to prod. In the case of a broken master, he advocates for a swarm fixing on the off-chance it occurs because he also advocates pairing the approach with TDD first strategy.
All of that said, there are pieces of this I like but I don’t think I’m sold on the whole model. I do like the idea of a more continuous CI but personally would like extra gates at CD for A/B, Canary, etc rollouts to name just a couple gripes.
CI is included as part of CD...
@@retagainez True, and Dave Farley for sure is in the camp that there is no CD without CI but in practice I’ve found CI is implemented without CD. So the conversation around CI is usually a pre-CD conversation. I tend to shorthand CD-related CI as just CD for simplicity.
@@hm5236 You're right
I don't get it, how does his version work with PRs? Is the assumption that everything is correct and nobody will have anything to say about it, or does he create local feature branches that he works on while the PRs are pending?
At 13:30 you missed that he makes the difference between CI and CD.
Dave Harley says that CI builds into a prod-like environment. CD is something else.
Don't shoot the messenger
i am not quite buying this
I agree. Prime definitely missed something. I would revisit his distinction between the two: CI is about "making sure your code is always working," and CD is about "making sure your code is always releasable." CD includes CI.
So if I merge a feature into master, all tests pass, but in the end there's still a serious bug discovered by QA team, I'm blocking my whole team form deploying potentially for hours or days? Sounds like madness to me. Or is it just a constant rollback & cherrypick sh*tshow? And is QA even possible this way ... afterall sh*t already happened since I merged into master?
I must be misunderstanding the guy ... otherwise this sounds like a total gamble "lets hope the tests cover everything" ... which they never do
The main issue is that a bunch of consultants have sold these methodologies as a one size fits all approaches. Teams are different and have diffeeent requirements. There's a video from the guy who wrote the manifesto for agile Software development. He's bashing agile in it's current Form because it was never intended that way.
Very insightful stuff.
Could you send the title of the video? (youtube sometimes blocks comments with links :p)
Parking my comment for your reply on the video that you mentioned
ruclips.net/video/a-BOSpxYJ9M/видео.html
Agile is dead - Pragmatic Dave Thomas
agile was always corporate propaganda… i mean they even made a total BS origin story about about the smarted men in the room going on a ski trip. that guy can suck it because he was stupid or in on it
Totally agree with you. I don't see his argument that a feature branch won't match master - that is what rebase is for! It does sound like he is pushing to the main remote master - either he works alone or really likes pissing off his colleagues!
If everyone is working on a bunch of feature branches and only occasionally merges back to master - how will a rebase off master help keep all these feature branches up to date with each other? They won't, you'd be rebasing nothing, because all the changes are isolated - which flies in the face of CI.
And he does not work alone. Read up on LMAX and what he and the team did there, without branches or PRs.
@@ddanielsandberg they need to be short lived feature branches - merged back to master after the feature is complete and code review and unit tests have passed. For us, when a new feature is complete we do a release.
@@ddanielsandberg working directly on trunk sounds like it comes from the time when merging was much harder - eg SVN or CVS. Git makes this way easier.
@prime Google and Facebook both use trunk based development which as this guy says. You submit everything to master/head/trunk. This works remarkably well.
You don't need to do mega merges and you don't need to do feature branches. Instead, you use feature flags to enable and disable certain code paths and manage releases using tags.
You don't literally push to master, but if you pass all tests and code review, then you can merge your PR into master.
32:32 one issue with pushing direct to remote shared branch is you might break everyone tracking it even if you ran tests locally because you might've forgotten to add some new file to the commit before pushing, tests worked locally but breaks remotely (or worse yet the missing files don't cause tests to break and might go unnoticed until you have missing stuff in production or something .. not an every day occurrence but over the years has happened more often than I'd care to admit)
33:20 you make a PR from what? local repo needs to be pushed somewhere where the upstream can pull from, if it's the same branch locally and remotely there's no PRs
"I don't know what git flow is" ... "but I have a develop branch"
Pretty much my problem with the argument of the guy you reacted to. git-flow is so core to most git workflows that people don't even realize they're using it... I find your reaction above very common since I'm an old fart that remembers when NCIE came out with it... and we all have CI/CD processes based around the develop branch and master branch going to different environments. He says CI/CD doesn't work well, but here we've all worked in several shops where it works (almost) flawlessly for us.
I actually like that guy's stuff in general, but think he was WAY off base about git-flow.
I think you mention a very important limiting idea: If you don't think there is a way of making big changes with only small correct steps in between, then yeah you cannot beleive that having only one correct version of the code is all you need. The most fun I've had coding has been in teams with a more agile mentality, with really quick pased merging, or when sharing one unique branch (only possible in small teams). Also you don't need huge amounts of processes to verify the correctness of your changes if your applications are properly modularized (which I know, they are usually not).
you missed the point, if you have this huge CI, it's not actually CI, that's how git flow works.
@@archmad Yeah that was a poor choice of words, I was using the wrong meaning of it lol. I meant you don't need a lot of machinery or processes or github actiones et c.
"Long lasting feature branches are the devil"
The ImGui docking branch is SWEATING right now.
30:50 To him, breaking master doesn't matter, is how I understood it. His master is just a develop branch, and instead of having a master he probably just tags releases. So tests just run on master.
Sending every commit straight to production is technically called "continuous deployment", not "continuous delivery". Continuous delivery is more like just continuously (or as near to continuously as practicable) staying *ready* to send all your work to production.
Subtitles are Portuguese ❤️
the timing, i was also researching different ways git is used yesterday. i currently just have a main and a dev branch. but have been thinking that feature branches might be a better approach after i go into production.
i think branch -> master is fine
room for hotfixes, its ok
@@ThePrimeTimeagen Duuuuude! Great work with your reaction videos... very illuminating stuff. Random question.... how are you able to push out so many videos per week? Does this mean you're going to take it slow on your main channel ThePrimeage?
I think the main thing to keep in mind is that the best workflow heavily depends on the number of developers involved. All of these explanations are fine for 100+ developers on a single project, but if you're like me slaving away alone on a project or with maybe one or two extra devs, it's a completely different story.
At work we have feature branches that merge into DEVELOP and then cherry pick to the release branch.
If you pause the video at 2:33, it looks like Gitflow has snuck into Dave’s room to kill him
Dave Farley has some good ideas, and I have adopted more continuous style of development thanks of his videos.
But committing directly to master in a real project only to find out 30 minutes later that you broke the CI pipeline just doesn't sound like fun.
In that case it's the CI pipeline itself that's something wrong with.
If you committed something faulty, it should simply be rejected by the CI pipeline and master staying unchanged.
@@Spiderboydk But that's not what Dave Farley is advocating for. At least he wasn't a year back when I was watching his videos, maybe he changed his mind since then.
Also, I am *really* curious WHERE exactly you push your change to. To master? You yourself said that the change doesn't get to master until CI approves it, so you cannot push to master.
To another branch then? But then how is that different from feature branches? Or my favorite - to another repo - which is a worse version of a feature branch, because it's still a different branch, but in a completely different repository.
@@kajacx I have been pondering over your answer for 5 minutes now, because I had no idea what you are talking about. But I think I found the misunderstanding now.
When you said "broke the CI pipeline" in the original message, I thought you meant the series of tests that checks your commit actually crashed and left you nonethewiser. I think what you mean by "broke the CI pipeline" is you have some code with a problem that got though the tests. If this is the case, completely disregard everything I said. :-)
@@Spiderboydk I think I mean something else still. So let's summarize, you commit your change to master and then push, then ...
If the CI pipeline actually fails (it cannot even run the test for some reason), then the CI pipeline itself is broken and needs fixing. We agree there.
If the change contains a bug, but CI passes, that is a problem, but that's not what I meant. In this case, you should add a test so that next time this bug is re-introduced, the CI catches it correctly.
What I meant was: you code contains a bug, and the CI pipeline (correctly) fails. Sorry for saying that this "breaks the pipeline", it's just a normal test fail.
But this is already a problem. You said that in this third case, your change should be rejected by the pipeline and master saying unchanged. So, where do you push the change to? It can't be to master.
@@kajacx Okay, sounds like we are in agreement then.
REDACTED --> And your question: you don't push it anywhere. The code is faulty because tests fail. Fix the error and try pushing again.
Nevermind, remote testing is of course different from local testing. I guess my head is not with me today. Nevermind. :-)
What seems like the cause for culture shock to you is the idea of keeping the master releasable. If there's a problem with master, you roll back the latest change, which, by definition, is what broke the system.
You keep it releasable. Constantly. It's never broken for more than a single commit, that never contains more than 1 day worth of work.
Dave Farley has more experience than most RUclipsrs. He has successfully practiced what he preaches and has demonstrable evidence to back it up.
Yes, and it is so sad to see some comments in the live and here in the youtube stating that he is just a course seller or something like that... Some people cannot grasp a concept and just jump to attack without force himself to learn first.
as a former GoDaddy dev, working in consulting for years and working for government entities right now, I disagree.
CD as advertised here, breaks production all the time.. everywhere.
it's only a wet dream for project managers.
Exactly. Imagine disagreeing about the definition of CI and CD with the guy who literally coauthered the book on it.
Running tests automatically on some remote machine every time you commit and push is not CI, you're not integrating anything!
Dave Farley has a lot of great ideas, but with this pushing straight to master and testing later, he jumps the shark. Trunk based development do work in small teama, where we trust each otger. However, the reality is that most programmers are idiots or don’t care about quality.
@@mhandle109 this.. and besides, there is always a dev pulling in megabytes of dependencies only to have a leftpad or similar.
some sane checking has to be done before delivery that goes beyond simple cd.
14:30
About that problem where you may want to test a new version on a small part of the customers, I think the Idea in strict CD would be to have a setting that can turn the changes off and on and not have those changes on different branches.
At first I thought, why to give up all these advantages Git brings with multiple branches that can exist at the same time, but I have seen design patterns that can very cleanly hide new features or bugfixes behind user options (which are not necessarily exposed to the normal user). You definitely have to get used to it and you have to clean up those structures after you are comfortable to declare the old version as deprecated (or maybe you want to keep it for backwards compatibility), but the advantage is really huge, because you test your code against all changes done by everyone working on every Bugfix or Feature at the same time.
0:23 As a brazilian i imediatly recognized that the subtitles are in portuguese
#brazilmentioned #missedopportunity
I think discussion points out how tricky git branching strategies are and that it's a complex problem in the real-world that in large code bases there is often not a 'perfect' solution.
What you said plus issue of terminology. CI, "feature", "master" and many other terms are too abstract and in my experience very subjective, everyone develops their own understanding of that these mean and then miscommunications happen
Please more of this. CI/CD is very hard in my opinion, and can use your view on it!
CI (Continious Integration) is just an ideology with having a branch matching production (often "master") and create and merge small short lived feature branches into master. The standard practice is to run the full test suite on every feature branch before it can be merged.
CD (Continous Delivery) is where any changest to the master branch automatically are tested and deployed to production.
So for a full CI/CD workflow:
1. Create a feature branch
2. Add commits to your branch
3. Create a pr and get reviews + qa testing on your small feature branch
4. Your PR is automatically tested by a CI pipeline
5. You merge your green approved PR into master
6. Changes to master trigger a full test and build pipeline followed by an automatic release to production if all is still green.
So in short, CI/CD enforces many small releases with fast feedback from customers and easier to test smaller changes for QA/reviewers.
I think you were assuming that a merge to master always means a release to production.
His whole thing was that you can be merging to master without cutting a release, and then cut releases occasionally from that.
Instead of doing develop and then occasionally merging to master and then cutting a release.
This guy is well spoken, but takes a concept way too far as an extreme and acts like it’s the correct way to do it
How would you do a code review with this crazy solution?
Simple, all you have to do is regard the amulated flams and diff the hash squares and count the legs and divide by 4
Dave says he doesn't. He does pair programming instead
His whole thing falls apart if you just have a smart CI system... like even Jenkins can look at a whole repo and identify new branches and run your CI and testing on feature branches so you get that immediate feedback on every commit in your feature branch. Idk, there's a fine line between old man knowledge and experience and old man "I think I'm right when I'm clearly wrong" stubbornness
i am all about good tests, proper feedback, but this idea to merge into master, then run, then stop everything and give a dev X amount of minutes to fix it is crazy to me
If you have two developers working side by side on different feature branches for several days or weeks, neither of them merging, does that Jenkins setup check that the code each one is writing is compatible with what the other one is writing?
Guy is pining for the good old time of SVN and pushing to trunk - it was such a nightmare.
You just rebase your feature branch in morning, and always stay current with develop/staging.
Please invite Dave Farley on stream, would be super cool!
If multiple people work on the same feature I think it is reasonable to have a feature branch on a shared server. Or if there are features that you would like to delegate to some group of people. Am I wrong? (I have no practice experience in it)
Are you a SE if you havent broken production at least once? 😅
There is a massive misunderstanding between CI/CD tools and what CI means. When you gate your feature branches with automated testing and then integrate longer than a daily period, you are not continuous in the same sense as what CI intended to address.
I like calling that flow: Automated Integration. You do integrate, often first rebasing your branch and checking if your changes still work before merging it to the "main" state. That flow works well with any codebase because your VCS separates the changes and lets you have multiple "live" states across any feature branches.
Unfortunately, David here only talks about how CI and GitFlow look from the VCS perspective. CI introduces requirements for your architecture and code organization to address the real world. You cannot apply CI to any codebase like how you can use AI.
Integrating continuously means following a mindset where your VCS DO NOT separate changes but your code. It heavily requires inheritance and dependency injection with feature flags. Essentially, you do not change the production code but make extensions where you overwrite the behavior in separate, feature-gated areas.
The main benefit to integrating that way is that you can test and gather feedback from multiple work-in-progress features without introducing merge trains. Your feature code would have an effect early within a system, and everyone could adapt quickly to minor differences instead of rebasing and raising significant conflicts that need to be resolved.
I love how git started a simple tool to becoming a dedicated career like gitops lol
"git" in these talks is more like ageneralized placeholder for any VCS/process that can be thought of in branch/graph steps.
@@TheNewton yeah exactly: git is by far the most popular but for example I wouldn't have much trouble approximating most git workflows in pijul (on the other hand I can think of a bunch of pijul workflows that would be hell to try and approximate with git)
Git was created because existing source control couldn't handle Linux with it's hundreds to thousands of contributors. It was always intended for complex flows, even if the design was therefore very simple (in principle)
not sure I understand him saying that you don't get feedback with feature branches. We create a build for each branch and have fast forward merging strategy with pull requests to master so you are forced to have a copy of what master will look like before you merge...
The point is that when you have 30 developers, each working on their own branches for days on end, what does it help rebasing from master? You still won't know if anyone else's changes conflicts with all other changes until it's merged into master.
The single branch approach assumes you can feature flag everything. If you can - then you can use just one branch. But feature toggling can get complicated, you have both old and new mixed together. It's an additional burden you don't have to bear if you just use two branches. You can still have small commits and CI in the dev branch.
As long as you have tags on your releases you don't need a separate master/develop branch. You can always start out from any release tag and create a new hotfix/release branch if needed.
You didnt get it. The Problem with a Feature branch orientiert workflow ist, that no matter how offen you rebase your feature branch on the remote Master, you will never work on the recent changes of Team members, because their work is also located in a feature branch. Neither will they work based on your changes. So when both simultaneously developed feature Branches get merged the Chance of something going wrong is big. Branches in general are not a bad thing but using this worklfow statistically leads to less efficiency and quality. Furthermore ci is no Pipeline or test suit, that is a separate thing. Neither does ci imply that your head will always be deployed. In open source it would for example mean, that you merge a pull request only, if it is based on the current Master. I personally use ci mostly but i also work with branches. Stable versions are handled with Tags in my Software (1.0.0, 2.4.5 etc).
My team does CI the way he describes in the video, it’s great. I think you are misunderstanding
To deploy to production, the release to production, is a business decision, is not for devs to decide, unless we are the business man, also that decision should not be handed over to automation, it should happen deliberatively, and scheduled (maybe) so NO... you don't push to master and end up with a bug in prod, you push to a UAT env, or whatever, ready to be released, business should always be able to release the changes they see to prod, but maybe they want to test more, to just wait for the right time.
you see? so just do a tag with the christmas logo and let business release that, or whatever other point in prod, because master isn't supposed to be broken.
I am watching this and reacting, the part I hate about Gitflow is DELETING branches. I do not see the point that is what the Gods made grep (or rg) for
The CD guy does not use PRs at all. So yeah, it's a pretty weird way of working. If I recall correctly, he uses Pair Programming for peer review instead of Pull Requests.
Stop the madness!
"CI or LIGMA for short" actually killed me
RIP
We're basically running three active branches at all times (Windows desktop app with three annual releases):
- production: current released version that might still get hotfixes
- pilot: next release being rolled out slowly, hotfixes and feature adjustments based on feedback
- development: new feature development
You start on where plan to merge and go uphill from there.
People selling CI/CD have muddied the waters so bad here and so embedded themselves that when we use CI as it originally was coined you just get confused. You need to just go and read up on trunk based development instead of watching a video that is later in a series.
ci came from the realisation that the original paper from the 70s saying that the waterfall development model, while common was fundamentaĺly broken, and agile realised that to fix it, you had to move things that appear late in the process to an earlier point, hence the meme about shift left.
the first big change was to impliment continuouse backups, now refered to as version control.
another big change was to move tests earlier, and ci takes this to the extreme by making them the first thing you do after a commit.
these two things together mean that your fast unit tests find bugs very quickly, and the version control lets you figure out where you broke it.
this promotes the use of small changes to minimise the differences in patches, and results in your builds being green most of the time.
long lived feature branches subvert this process, especially when you have multiple of them, and they go a long time between merges to the mainline (which you say you rebase from).
specifically, you create a pattern of megamerges, which get bigger the longer the delay. also, when you rebase, you are only merging the completed features into your branch, while leaving all the stuff in the other megamerges in their own branch.
this means when you finally do your megamerge, while you probably don't break mainline, you have the potential to seriously break any and all other branches when they rebase, causing each of them to have to dive into your megamerge to find out what broke them.
as a matter of practice it has been observed time and again that to avoid this you cannot delay merging all branches for much longer than a day, as it gives the other braches time to break something else resulting in the continual red build problem.
Sometimes in one sprint, i need to work on two different features which are related to two different release dates. It’s not possible to handle this situation without having separate release branches and separate CI pipelines for each release.
Yes it is. Use a feature flag (or keystone interface) you're building a feature that may not be wanted in the next deployment. It can be merged in to the main branch but not available to users.
So how do you handle code reviews when there is no PR possible because everyone is working on the master? In my opinion these enforced merge conditions are one of the most important benefits of using seperate (short lived) branches. I also feel that it becomes much more transparent what is being worked on.
Most importantly I think the way you apply CI depends on the characteristics of the project and the team.
PRs kinda suck for code reviews. People just look at the diffs and complain about syntax or nothing at all. I've done many different kinds of code reviews before. You can review code that isn't even committed. You can review code in a branch. You can review code over someone's shoulder or on a whiteboard. You can print code out and discuss it in a meeting. It depends on what is being reviewed, the team and the purpose of the review.
You don't do traditional code reviews. Instead you do continuous code review i.e. pair programming. There isn't a need for a PR when the work is done because it was reviewed the moment each line of code was produced. This also means the second the code is written is the second it's ready to be pushed to master/main (so long as what was written works and passes your tests). Also if you are committing this often then main is going to be a near perfect representation of the work being done at any single point in time.
I'm late to the party here but CI is literally having a shared branch that everyone checks into instead of each developer having their own branches. Thus, integrating continuously. The automated build checking stuff most people think of as CI is a side-effect of CI. That said, there's nothing about the Gitflow development branch that stops people from doing CI on the whole.
It stops doing CONTINOUS integration if there are multiple trunks or long-lived branches, in this case, the development branch.
@@ArnasKazlauskas-nu8vb it depends on how you define continuous actually. The development branch and production branch can actually both have CI with different teams. Sure some people will claim that's not CI but continuous the English word does not mean what those people insist it means. Do what you like though; CI has terrible official definitions and some would argue shouldn't ever be used in mission critical software.
@@MichaelBabcock
Continuous integration for me means two things: trunk-based development + automated builds.
Trunk-based development means that there is only 1 version of the code that we care about and the derivation from it is short-lived aka short-lived feature branches.
Gitflow is the opposite of trunk-based development as you have multiple long-lived branches and release branches are also long-lived.
The code is not continuously integrated as you need to do to release branches.
Using Gitflow it's hard to understand what ref of code the machine runs as there are so many versions of the possible codebase.
Thus, Gitflow also directly blocks continuous delivery.
Gitflow also creates so many builds which slows down deployment process a lot.
I'd say Gitflow works for public libraries or software that cannot be deployed continously, e.g. mobile application.
However, Gitflow sucks for backend or web application.
Two things:
1. There is nothing continuous in a discrete world. At best it is continual.
2. Every developer will always have a branch on their local computer, even if it is called the same as on the github server, you can merge in any direction. GitHub is a peer to peer system. Stop saying that you should only have one branch.
@@CalifornianViking Your message doesn't make sense at all and provides no value in this discussion.
During a release there will generally be multiple versions in production at once, so you can't have a single branch that represents what's in production.
And if you want something in git to tell you what version(s) are or have been in production I think a series of tags is more correct than a branch. The tag can be set as part of the deployment process.
9:45 You said man... This stuff only works right for people who don´t work with this stuff.
Sums up most RUclips Influencer software advice
Maybe the video makes sense if you have context from his other videos. The real difference between CI and GitHub Flow is that you're not waiting for approval or remotely run automated tests to integrate your changes.
Changes are integrated each time you make an atomic change. As the developer of the change, you are responsible for ensuring that after the change is made, the software is still working.
Add on the constraint on at least one integration per day and you're ensuring a level of atomicity that provides a small incremental change that is easy to reason about. If an integration fails, you know which change caused it.
The other practice that synergises wit CI/TBD is pair or mob programming which he is a big proponent of. This way, your code is being peer reviewed immediately and you're reducing the feedback loop.
The concept needs context and discipline, hence the fact that he brought up the practice of engineering. You can't really consider yourself a software engineer unless you adopt the engineering practices of other engineers (of which he goes into detail on in other videos).
Everyone calling the ideas in this video crazy is completely missing the point, and I hope I never have to work with you.
Haven't watched this fully yet, but I can say that I used to use Git Flow while I was more of a front-end dev. I have transitioned over time to more of a backend & devops role and pushed teams to drop git flow in favour of CI and promoting builds. Feature Branch workflow is what I'd generally recommend projects us.
The issue with Git Flow is now you actually have multiple integration points at any given time. Some are long lasting, while others are ephemeral (release, hotfix). When you integrate into develop, you are not integrated with these other branches. If you create a hot fix, then you need to integrate that back into multiple long lasting branches. The workflow really complicates things and can be resolved by having just one "main" branch that you integrate in, and build artifacts (binaries, container images, etc). Then you can deploy those artifacts to test or prod environments as you need without tying that complexity to the source control.
I'm pretty sure the guy in the video is talking about pushing directly to the same branch, not just merging Release, Develop and Master into one.
I worked for a company where we had just abou 6 devs and they had a master only approach. Even then, we regularly got a build error from our CI pipeline telling the master was broken and basically everyone else was now working on a broken version having to wait for the guy that created the broken commit, or (worse) someone else had to do it.
In my opinion it's jsut too much chaos. If you work on a seperate branch it's your responsibilty to keep pull-rebase from whatever branch you're merged (being dev or master). That way there are hardly any merge conflicts.
That is because you weren't practicing CI, and by the sounds of it - didn't pay attention to the feedback and had a "team" of individual tasks over a team that collaborated. I have lead a team of six that managed 600 applications in a highly regulated environment where we practiced CI, PP, TDD and never used a branch.
I recommend watching more of Daves videos. It’s a very difficult topic and requires going down the rabbit hole of CI and TBD to properly understand all the thougts in this video.
Edit: e.g. Google uses TBD as well so no, it is not “only suitable for small personal websites”
Edit2: PRs are not necessarily against CI, they can be used as a way of running tests before they get to master, so you don’t need to run the pipeline locally or with git hooks. The important thing is that the PRs are either automatically merged once they pass the pipeline, or at least they do not live too long to stray further away from the mainline. So pushing straight to master (and potentially breaking other devs copies every 10 minutes) is not required.
Agreed. The other context is needed for prime. I thought I was stupid when I tried to dive into his videos two years ago! No... it just requires an open mind I think.
@@retagainez Exactly. I was lucky I had read the CD book before Dave started making videos, so already had some experience, but the videos expanded it significantly.