Really good! Maybe a video about this already exists on the channel, but who is in charge of the strategy for the whole solution / software and of coordinating all the pieces? Is this person also responsible for giving visibility to management, without the teams having to know what management thinks or wants? Finally, how are messages about requirements passed to each team? Is there a human interface playing this role, abstracting away everything that is going on from the team, but passing along messages about requirements, efficient strategies that other teams have tried, etc.? In other words, as with systems, when communication is necessary, should an interface be used to filter out as much noise as possible?
Another brilliant video. What's your take on applying "DRY" on tools/processes? For example, not sharing code between teams, but insisting that all teams use the same CI/CD tools (even the same instance) and insisting all teams adopt the same workflows?
It is complicated. For some SW it makes sense to have some standardisation. From an org’s perspective it is a nice idea to build tools, like CD pipelines, that help teams. I don’t like forcing tools or tech on teams from outside. To be successful, it works much better, when building tools and platforms, to adopt the approach that it is the job of the team producing the platform or tools to make stuff that people want to use, rather than stuff that teams are forced to use. That is also in the org's interest, because if there is a team that for some reason doesn’t fit into the standard, then they can fix their own problems. It leaves space for teams to innovate! For this to work, the orgs need to be willing to give teams the freedom to make their own decisions, and the teams need to be willing to take on the responsibility for their work.
@@ContinuousDelivery Thanks for the reply! You may have guessed it was a bit of a loaded question. I'm currently in a situation where if I want to add a Jenkins plugin, I need to raise a ticket with the "DevOps" team, and it usually takes literally months for it to get actioned (if I'm lucky & various "committees" accept my justification). I understand why they want to be cautious about what gets installed, because many teams & many developers are dependent on the same instance, so they don't want to break anything. I also completely get why, from an org's perspective, having a single tool & everyone working in the same way would _seem_ to remove duplication of effort & perhaps appear to be more "cost effective". But for me, this is an example of optimising for the wrong thing, resulting in stifled speed & efficiency, not to mention innovation, like you say!
@@davemasters That is certainly a good example of coupling at the organizational level. Unfortunately, it is nothing out of the ordinary but rather common, at least in the organizations I have dealt with over the years.
Thank you very much, dear David, for all of your kindness. My exact question and ambiguity: say I want to post one specific object to another service, so how can I abstract it? How can I tell the other service to post (insert or save) this object, which has some specific fields such as name, family, image, etc.? Is it wrong that a front-end developer on the other service, which calls mine, can tell me "not exactly these fields"? On the surface, my opinion is to use DTOs in my API contracts, but those fields also need a contract between the services, so when I want to change them even a bit, I have to change every related service that consumes them. Is this coupling OK? Can I only overcome this coupling with contract testing, or can I improve on it? - amir bolouri
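One way to picture the contract-testing idea from the question above is a small Python sketch: the consumer writes down exactly which DTO fields it relies on, and a check fails when the provider removes or retypes one of them, rather than the breakage surfacing in production. All names here (fields, functions) are hypothetical, invented just for illustration.

```python
# Consumer-driven contract sketch: the consumer declares the DTO fields
# it actually reads; a test verifies the provider still honours them.

# The fields this consumer depends on, with their expected types
# (hypothetical names, matching the example in the comment above).
CONSUMER_CONTRACT = {"name": str, "family": str, "image": str}

def provider_dto() -> dict:
    """Stand-in for the provider service's serialized response."""
    return {"name": "Ada", "family": "Lovelace", "image": "ada.png", "extra": 1}

def satisfies_contract(dto: dict, contract: dict) -> bool:
    # Extra fields are fine (consumers ignore what they don't read);
    # a missing or wrongly-typed field breaks the contract.
    return all(k in dto and isinstance(dto[k], t) for k, t in contract.items())

# The provider may add fields freely, but cannot remove or retype
# one that a consumer has declared.
assert satisfies_contract(provider_dto(), CONSUMER_CONTRACT)
```

Tools like Pact automate this pattern across real service boundaries, but the core idea is just this: the contract lives with the consumer, so the provider learns about a breaking change from a failing check, not from a broken integration.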
Hi @Continuous Delivery, could you please make a video going in depth on remote teams? Or on how remote teams can improve their process? Currently I'm facing a challenge where I feel like a freelancer: our team of 3 works with a team of 5 located in the US (we are in the UK), but we have some hiccups in communication and planning. Is there a good solution for this?
Yes, I can, thanks for the suggestion. I have a schedule of videos planned, so it will be a little while before I get to it, but this is an interesting topic. Thank you.
Anyone who has been in a management position will tell you that it's much "better" to work with a small team than a larger one. I'd imagine most feel that it takes a special kind of person to manage a large group. From my experience it's simply because it's easier to get fewer people on the same page than more. Factors like how the group feels about one another, or the morale of your team, directly impact how well they will work together as a single unit. When you have a bunch of odd-balls, you've gotta identify their strengths and weaknesses and individually tailor each role to best fit the worker. One can imagine two kinds of people here: 1) The straight-edge engineer guy (who I've always imagined should look like the guy from that movie "Falling Down") -or- 2) That hippy guy with long hair and jeans (who often exhibits a superiority complex due to being over-qualified). You can't manage these two kinds of employees the same way. And odds are they will not get along well together in the lunch room. You need to personally know each individual, so that you can identify the best way to manage them. Managers tend to make plans based on preconceived ideals of employees. They are thought of as "Employees", not individual people with different skills. In other words: Employee_X should be capable of X_Y_Z. But in the real world, we all know every employee is not worth the same, even if they make the same money. Nice shirt by the way :)
I think that management is often poor in this respect. I think that the best managers coach the team to be great and then get out of their way. Too many act as though their job is to remote-control the team. I think of it more like an elite sports coach. You hire people who can 'play' much better than you, but then help them to 'play' to their maximum potential, but within the framework of the team and its goals.
There's that chap who said "Small is beautiful": E. F. Schumacher. There is a college called Schumacher College in Dartington, Devonshire, which holds that at heart.
It's always difficult to find the right moment to add people to a small but effective team, because some work gets done well only because the team is small, yet sometimes small teams just aren't able to do what a big team can.
Yes, none of this stuff is easy. I think that one of the mistakes that lots of teams and orgs make is to assume that it is easy, and so apply overly simplistic, naive solutions. I am not saying that there is only one way to do things. But all of them will depend, to some extent, on how you deal with the unavoidable problem of coupling. You pick your strategy and cope with the consequences! Thanks for your comment.
What a great video (and channel). The concepts discussed here are so crucial to building a successful product, yet are often misunderstood and overlooked by developers and organizations. Thanks for making our industry more professional.
I am not a big game player, I do play games, but not a lot. I like flight sims, and space sims most. So I know about Star Citizen, but I have been getting my space-exploration fix playing Elite. I commented on the Cyberpunk 2077 game, not really because of the gaming, but because it was an interesting example of how software development can go wrong.
@@ContinuousDelivery It's interesting. I have watched quite a lot of your videos over the past 2 days, on how to deliver software and make it a continuous effort rather than one big public release with a set date. I have a friend from university with a small software startup (10 employees) who preaches about his 2-week delivery schedule. Star Citizen does a lot of the things you advise companies to do: either lock the scope but be very flexible with release dates (depending on whom you ask, they are 4 years behind the "targeted" release), or the other way round. They update the game quarterly, so they always keep it in a playable state. It seems like they operate in small teams (according to the website, 3-20 people in a work group), but obviously have a lot of dependencies on each other. The thing is, Cloud Imperium Games gets criticized massively for this approach. People say it's an endless money dump without anything to show. Do you think the game industry might just be too much of a backwater in the realm of software development for such an approach to be accepted by shareholders and fans, who obviously have no background in engineering or software development?
It could be my solo-developer bubble, but I keep asking myself: why do all these CI/CD/DevOps/GitOps/SecOps/YouNameIt channels talk almost exclusively about large-scale set-ups with a gazillion teams (and their typical challenges), massive pipelines, and what not? What audience are they aiming for? Team leads? POs? PMs? Does the majority of the audience consist of decision makers who have the power to implement these practices in their large-scale organizations? Am I the only one here looking at these things from the perspective of a 1-man band, or a 3-5-man band at most? It feels like there is a blind spot in the middle. There are a gazillion simple resources like tutorials and "how to's" at one end, and the large-scale practices at the other. But there is little to no material about the workflows and patterns for small team/project setups: 1, 5, 10-person organizations, startups, real-life SaaS products run by one person, and so on. I personally know many "freelance" full-stack devs who work from a home office, maintain some SaaS product(s), have some customers, and slowly scale out. I mean, there is a middle ground between the "How to make a ToDo App in Python" and the "Why Google stores billions of lines of code in a single repository" kind of audience. A 1-man band doesn't mean the project has to be monolithic, and it doesn't mean it is an over-engineered microservice project either. A 1-man band can happily work in a Bazel-managed monorepo with multiple microservice projects, with all the common characteristics of such projects. But when it comes to the workflows from the provisioning, config, security, Git, CI/CD, etc. perspective, it is different from what Netflix, Uber, or Google do in their orgs. And nobody talks about this middle ground.
It is nice to have theories out there, but it is useless if the majority of the audience is not in a position to apply them, or does not work at the scale where they are really effective. This is not a rant, and it's not about this particular video. It's just my overall observation about the content on RUclips from my personal "googling bubble", and maybe somebody can get something useful out of these thoughts.
I think it is down to where the problems are. If you are working alone, then things are a lot simpler; I wouldn't recommend it, but you can get away with worse code. Also, as a matter of scale, I'd definitely go monolith. That doesn't mean it can't be nicely distributed and nicely service-oriented, but Microservices is a team-decoupling strategy, so it makes no sense at all for single-person dev or very small teams. I have a video explaining some thoughts on Microservices here: ruclips.net/video/zzMLg3Ys5vI/видео.html
@@ContinuousDelivery I would not agree that microservices are only a team-decoupling thing, and that does not mean I am a microservices fan-boy. But thinking in a "microservices" way enables much nicer interfaces and much better re-usability. Lately I have started to think that going "full-stack" as a single person implies building your own knowledge base (a workshop with a bunch of tools in it), including all the libraries, tools, and scripts you have ever worked on. So: "The Reusability". Over time you refactor your libraries, tools, etc. They get better, which means they also get nice interfaces and decoupling. In contrast, if you place yourself in a monolithic environment, it is much, much harder to think about real decoupling and reusability, no matter how good you are. So microservices thinking helps a lot, and not only in the context of the services themselves. And these days going "microservices" is not that hard at all. We have plenty of great tools to ease the management. Think of automation: I have a pretty much fully automated provisioning pipeline to spin up the clusters and to CD every service right into them. Yes, at the beginning it's really hard to get all the moving parts together, like secrets and TLS management, networks, codebase, pipelines, tracing, monitoring, etc., but once the foundation is done, releasing a new microservice is just a matter of composing several already-built libraries and baking in some logic. I am not thinking from the perspective of "get the job done and release that ToDo app ASAP". I am talking from the perspective of a full-stack developer over the long (lifetime) term, who works on one or several SaaS products over time. In this context the products themselves are just a side outcome. The real meat is in the knowledge base and the code libraries the developer has built over time.
Most likely, most of the modules and libraries will be reused over and over, no matter what project he is currently working on. So "microservices" makes huge sense for me personally if I look beyond a single product release. I will take a look at that video in the evening. Thank you for sharing your knowledge! :)
Decoupling in the sense that you describe is nothing to do with Microservices; I have built software that looked like that since the 1990s. I worked with the people that invented the concept of Microservices, and they did it for the reasons that I mention. If you read Sam Newman's stuff (he wrote the book that is most popularly used to define it), or watch some of his talks, he says "don't start with Microservices". True Microservices, that is, independently deployable services (you don't get to test them together before deployment), are a complex, sophisticated strategy. What you are describing is Service Oriented Design, which pre-dates microservices by a LOT. Microservices are NOT about REST APIs; those have been around for a lot longer too. Microservices are, very specifically, about independently developed components of a system. You do that when you want to scale up a dev organisation significantly. Watch my video on Microservices to see what I mean.
And that's how you get a crappy inconsistent product that breaks here and there and can't display a date in the same format on two adjacent screens, lol. But you can move really fast for the pleasure of your investors.
this channel is gold!
Thanks
Farley is so phenomenal at articulating common sense in a digestible way. The hard part is getting your organization to buy into these core critical concepts - allow more autonomy of your teams to own their own designs, allow repeat work across teams, and allow different teams to work on the same problem with different solutions. They seem costly at face value, yet result in such higher velocity, quality, and most importantly, empowerment and joy of work.
Thanks, it is hard to change how people think. My ambition is to offer some ammunition that can help in that effort.
How great to have this so clearly presented. I've spent years blending the elements to build the machine without a vocabulary to describe what I was trying to achieve. This channel is my "find of the year" (so far).
Hmmm, "find of the year" would have been better yesterday🤣
Thanks.
One thing that truly stands out for me about this channel, is the discussions in the comments. That is something very rarely seen on RUclips (not that the majority of content lends well to it anyway). Let's keep it flowing :-)
Thanks. The reason that I started the channel was to help spread some ideas that I think are useful, and important, to doing Software dev better. So I enjoy exploring these ideas in more depth in the conversations in the comments.
I knew this video was going to be great when I saw you were wearing a shirt from the USCSS NOSTROMO
Clearly a man of culture, great content
Keep it up!
Keep a look out for my other SciFi nerd shirts :D
Once, at university, I was taught the concept of orthogonality, which greatly helped me think about decoupled architectures. Orthogonality simply means that when you move up or down some X axis, it should never force a change along the Y axis, and vice versa.
I think that is a useful model, but also that it is, I suppose inevitably, more complex than that. There are always trade-offs and the difficulty in successful architecture is to balance the trade-offs while striving to asymptotically approach the orthogonality that you describe.
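To make the orthogonality idea from this exchange concrete, here is a toy Python sketch (all names are invented for illustration): the content of a report and the format it is rendered in vary independently, so a change along one axis never forces a change along the other.

```python
# Orthogonal-axes sketch: WHAT is reported (X axis) and HOW it is
# rendered (Y axis) are independent. Adding a new format never touches
# report-building code, and adding a new report never touches a format.

def build_report(data):
    """X axis: what is reported."""
    return {"total": sum(data), "count": len(data)}

def as_text(report):
    """Y axis: one rendering of any report."""
    return ", ".join(f"{k}={v}" for k, v in report.items())

def as_csv(report):
    """Y axis: another rendering; build_report was never changed."""
    return ",".join(str(v) for v in report.values())

report = build_report([1, 2, 3])
print(as_text(report))   # total=6, count=3
print(as_csv(report))    # 6,3
```

The trade-off mentioned in the reply shows up as soon as the axes stop being truly independent, e.g. if a format needs extra data that only some reports can supply; then the boundary has to be redrawn, which is where the balancing act lives.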
It’s interesting to me how universal these sorts of concepts are. I’m not a software developer and I was only watching this video out of curiosity. I have military leadership training and experience, and many of the concepts you’ve explained are directly relatable to command and control and small-unit leadership. One of the major features of western/NATO militaries is small-unit autonomy. The idea is that a commander back at HQ does not have the same clear situational awareness that a company or platoon commander does in the field. Instead of commanders in the field having to wait for permission to move or act, they’re given autonomy and even encouraged to take the initiative, since speed and action are critical in combat. To facilitate this there is the concept of “Commander’s Intent”, which sounds a lot like what you described in “Bounded Contexts”. Before an operation you’re briefed on what the overall goal or objective is, so in the field, when a frontline leader uses their autonomy, they do so within the framework of the commander’s intent, allowing continuous movement toward completing their objectives without micromanagement from above.
Yes, I agree completely!
I think that there is something pretty deep at play here, I think that this is about dealing with "information" at a fairly fundamental level. In any situation that is not predictable and repeatable, we need to adopt a very different approach to coping with the situation.
Too often failure is a result of applying techniques that work when things are predictable, to situations that are inherently not. Then people blame the accuracy of the plan, when an accurate plan was impossible - Doh!
The military is a great example where this learning happened a long time ago, because people die when you get it wrong. My understanding of military history is that "mission-based planning" was one of the innovations that made Napoleon so successful at the start of the 19th century for example.
You don't succeed in fluid, changing situations if you try to execute rigid "command and control" structures. There are lots of examples of analogs of this "mission-based-planning" approach in lots of different disciplines. Another common one is Toyota's Lean approach to production engineering, they are a learning organisation and are organised around small autonomous teams.
@@ContinuousDelivery Yeah, broken down fundamentally, rigid centralization forces the information, and the processing of that information, into a bottleneck. Everyone (person, team, etc.) has a limited amount of "bandwidth" to process information. Greater decentralization allows that bottleneck to be dispersed, and organizations become more productive and efficient.
The drawback is that there has to be a degree of trust and competency that some organizations struggle with, whether through human ego or poor-quality resources. The factor that played most into the incredibly one-sided victory the coalition had over Iraq in the 1st Gulf War was this, more than superior technology. The Iraqi armed forces were modeled on doctrines from the Warsaw Pact, which placed a heavy emphasis on rigid command and control: the idea being mass formations attacking along a broad front and exploiting any breakouts with reserve forces. It works to a point, but it requires a lot of manpower/resources and it's not very efficient. Also, the militaries of autocratic nations tend to want to limit the amount of independence and autonomy individual commanders have, since oftentimes the goal of the military is keeping the leadership in power.
Decentralized control is a powerful way of organizing but it becomes difficult or impossible for insecure leadership to implement.
One of the best videos of the series so far.
Thanks Jim.
Thank you for yet another very interesting video! This is the best software engineering channel that I have found so far!
I understand the need to let the teams develop at their own pace but I don't understand why large projects are any different than small ones. Why is the development cycle not exactly the same? Collect requirements. Code each requirement in turn in the form of a set of tests. Write production code to satisfy each test. Refactor to keep the code base clean. Repeat these steps until the project is complete. The only differences I see between a large project and a smaller one is that the iteration cycle is longer for the large one (because it is operating on whole modules instead of individual procedures), more people are working in parallel, and the requirements and hence the tests are more general.
What am I misunderstanding?
The problem is always the coupling. The problem with large projects is that the complexity explodes. If I write SW on my own, all changes are down to me, and while I may forget or misunderstand something, only I can break my code. If you and I work together, now we can break each other’s code.
To defend against that we can talk a lot and understand what each of us is working on. That doesn’t really scale up very well. The limit for being able to work in a team like this, and still know enough of what everyone else is doing, is probably only 8 or so people. After that there is a measurable reduction in quality. If you grow much beyond that, and everyone is working without any compartmentation of the system and teams, then they will almost certainly create rubbish.
So then the next step in growth is to divide people, and their work, into compartments so that they can work more independently of one another. This is where things start to become more complex. The quality of the compartmentation matters a lot!
If you do this really well, it scales up to overall team size (divided into many smaller teams) of probably low hundreds if you want to keep their work consistent and coordinated. After that you pretty much MUST de-couple to scale up further.
These are all “rules of thumb”, approximately right rather than hard and fast laws. You can improve scalability of code and teams with great design, but the way to optimise for max scalability is to go back to independent small teams.
@@ContinuousDelivery Thank you very much for your prompt reply!
I understand that coupling is a huge problem but I don't understand why there is not a refactoring step between two modules once they both satisfy their requirements. For example, once two teams complete their work why should they not get together and remove duplication?
I would think that duplication would be an even larger quality problem in an enormous project than a small one if for no other reason than the union of two sets of tests should be more complete than either one alone. Also, if the duplicated code in one module is repaired in some way there is no reliable way to ensure that it will be similarly repaired in all of the duplications.
I really enjoyed your talk and found it very informative. I was just a little surprised when you talked about not sharing across teams which is why I asked for clarification. Thanks for taking the time to explain things.
@@georged8644 You can certainly do that, but it comes at a significant cost. The coordination effort to ensure that each team is using the correct version will slow you down. The work to rationalise the work of the teams, as you describe, will also slow you down.
My point is that you need to recognise the costs of the choices that you make and work to minimise them as appropriate. There are no simple solutions; this is not that kind of problem. All of the options have downsides as well as upsides. The problem, as I see it, is that many, maybe even most, teams and orgs assume that there is a simple perfect world where you can produce software as fast as possible, with the highest quality, and have it perfectly consistent. This is not possible. You have to pick either "fast and high quality" or "slower & consistent"; you can't have both of these things, at least not for software beyond the really quite simple.
I love this channel, it's super informative
Thanks :)
Really good! Maybe a video about this already exists on the channel, but who is in charge of the strategy for the whole solution/software, and of coordinating all the pieces? Is this person also responsible for giving visibility to management, so the teams don't have to know what management thinks or wants? Finally, how are messages about requirements passed to each team? Is there a human interface playing this role, abstracting away as much as possible of what is going on while passing along messages about requirements, efficient strategies other teams have tried, etc.? In other words, as with systems, when communication is necessary, should an interface be used to filter out as much noise as possible?
Another brilliant video.
What's your take on applying "DRY" on tools/processes?
For example, not sharing code between teams, but insisting that all teams use the same CI/CD tools (even the same instance) and insisting all teams adopt the same workflows?
It is complicated. For some SW it makes sense to have some standardisation. From an org’s perspective it is a nice idea to build tools, like CD pipelines, that help teams. I don’t like forcing tools or tech on teams from outside. When building tools and platforms, it works much better to adopt the approach that it is the job of the team producing the platform or tools to make stuff that people want to use, rather than stuff that teams are forced to use.
That is also in the org’s interest, because if there is a team that for some reason doesn’t fit into the standard, then they can fix their own problems. It leaves space for teams to innovate!
For this to work, the orgs need to be willing to give teams the freedom to make their own decisions, and the teams need to be willing to take on the responsibility for their work.
@@ContinuousDelivery Thanks for the reply! You may have guessed it was a bit of a loaded question. I'm currently in a situation where if I want to add a Jenkins plugin, I need to raise a ticket with the "DevOps" team, and it usually takes literally months for it to get actioned (if I'm lucky & various "committees" accept my justification). I understand why they want to be cautious about what gets installed, because many teams & many developers are dependent on the same instance so they don't want to break anything. I also completely get why, from an org's perspective, having a single tool & everyone working in the same way would _seem_ to remove duplication of effort & perhaps appear to be more "cost effective". But for me, this is an example of optimising for the wrong thing, resulting in stifled speed & efficiency, not to mention innovation like you say!
@@davemasters That is certainly a good example for coupling on the organizational level. Unfortunately, nothing out of the ordinary but rather common, at least in the organizations I have dealt with over the years.
Man if this was my software engineering class it would have saved me years of useless struggle...
Thanks, happy if I have helped.
Thank you very much, dear David for all of your kindness
My exact question and ambiguity: say I want to POST one specific object to another service. How can I abstract that? How do I tell the other service to POST (insert or save) an object that has specific fields such as name, family, image, etc.? Is it wrong for the front-end developer of the other service that calls mine to tell me the fields aren't exactly right? On the surface my idea is to use DTOs in my API contracts, but those fields are also part of the contract between services, so when I want to change them even slightly I have to change every related service that consumes them. Is this coupling OK? Can I only manage it with contract testing, or can I improve on it?
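A minimal sketch of the kind of consumer-driven contract check being asked about here. `UserDto`, its fields, and `validate_contract` are all hypothetical illustrations, not any real library's API; tools like Pact formalise the same idea:

```python
from dataclasses import dataclass, asdict

# Hypothetical DTO shared through an API contract between two services.
@dataclass
class UserDto:
    name: str
    family: str
    image: str

# The consuming service publishes the set of fields it actually relies on.
CONSUMER_CONTRACT = {"name", "family"}

def validate_contract(payload: dict, required_fields: set) -> bool:
    # The producer may add fields freely; the contract only breaks
    # when a field the consumer depends on goes missing.
    return required_fields.issubset(payload.keys())

payload = asdict(UserDto(name="Ada", family="Lovelace", image="a.png"))
assert validate_contract(payload, CONSUMER_CONTRACT)      # extra fields are fine

del payload["family"]                                     # a breaking change...
assert not validate_contract(payload, CONSUMER_CONTRACT)  # ...which the check catches
```

The point of this shape is that the producer can evolve the DTO, and only removals or renames of fields a consumer declared break the build, which loosens the coupling the comment describes.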
I really enjoy listening to such experienced professionals. So many interesting things to learn. Cheers!
Glad you enjoyed it!
Hi, @Continuous Delivery, could you please make a video going in depth on remote teams? Or on how remote teams can improve their process?
Currently I'm facing a challenge where I feel like I'm a freelancer: our team of 3 works with a team of 5 located in the US (we are in the UK), but we have some hiccups in communication and planning.
Is there a good solution for this?
Yes, I can, thanks for the suggestion. I have a schedule of videos planned, so it will be a little while before I get to it, but this is an interesting topic. Thank you.
@@ContinuousDelivery Thank you sir!
Anyone who has been in a management position will tell you that it's much "better" to work with a small team than a larger one. I'd imagine most feel that it takes a special kind of person to manage a large group. From my experience it's simply because it's easier to get fewer people on the same page than more.
Factors like how the group feels about one another, or the morale of your team, directly impact how well they will work together as a single unit. When you have a bunch of odd-balls, you've got to identify their strengths and weaknesses and individually tailor each role to best fit the worker.
One can imagine two kinds of people here:
1) The straight-edge engineer guy (who I've always imagined should look like the guy from that movie "Falling Down")
-or-
2) That hippy guy with long hair and jeans (who often exhibits a superiority complex due to being over-qualified)
You can't manage these two kinds of employees the same. And, odds are they will not get along well together in the lunch room. You need to personally know each individual - so that one can identify the best way to manage.
Managers tend to make plans based on preconceived ideals of employees. They are thought of as "Employees" not individual people with different skills. In other words: Employee_X should be capable of X_Y_Z. But in the real world, we all know every employee is not worth the same, even if they make the same money.
Nice shirt by the way :)
I think that management is often poor in this respect. I think that the best managers coach the team to be great and then get out of their way. Too many act as though their job is to remote-control the team. I think of it more like an elite sports coach. You hire people who can 'play' much better than you, but then help them to 'play' to their maximum potential, but within the framework of the team and its goals.
There's that chap who said "Small is beautiful", E. F. Schumacher.
There is a college called Schumacher College in Dartington, Devonshire, which holds that idea at its heart.
Yes, small steps give us more chances to inspect and adapt😎
It's one of the great videos I have seen so far. Please keep posting videos.
Thank you! Will do!
I'm glad I discovered this channel.
Thanks, and welcome
Your channel is a goldmine.
Thanks, welcome
It's always difficult to find the right moment to add people to a small but effective team, because some work gets done well only because the team is small, yet sometimes a small team just isn't able to do what a big team can.
Yes, none of this stuff is easy. I think that one of the mistakes that lots of teams and orgs make is to assume that it is easy, and so apply overly simplistic, naive solutions. I am not saying that there is only one way to do things. But all of them will depend, to some extent, on how you deal with the unavoidable problem of coupling. You pick your strategy and cope with the consequences!
Thanks for your comment.
Super useful advice for future IT leaders.
Thanks
CD Projekt Red management should watch these videos on a regular basis
Thanks
Happy new year Dave, loving the T-shirt too ;)
Happy New Year, thanks, I also have a "Cyberdyne Systems" T somewhere :D
@@ContinuousDelivery brilliant ;)
The internet was made for this.
Thanks 😊
What's the study you're referring to? Please show your references.
Great advice, keep up the good work.
Thanks!
Thanks for sharing your great insight
Glad you enjoyed it!
Good stuff thankyou!
Glad you enjoyed it!
What a great video (and channel).
The concepts discussed here are so crucial to building a successful product, yet are often misunderstood and overlooked by developers and organizations.
Thanks for making our industry more professional.
Thank you
Do you know Star Citizen and Cloud Imperium Games?
I am not a big game player, I do play games, but not a lot. I like flight sims, and space sims most. So I know about Star Citizen, but I have been getting my space-exploration fix playing Elite.
I commented on the Cyberpunk 2077 game, not really because of the gaming, but because it was an interesting example of how software development can go wrong.
@@ContinuousDelivery It's interesting. I have watched quite a lot of your videos over the past 2 days, about how to deliver software and make it a continuous effort rather than one big public release with a set date. I have a friend from university with a small software startup (10 employees) who preaches about his 2-week delivery schedule.
Star Citizen does a lot of the things you advise companies to do: either lock scale but be very flexible with release dates (depending on whom you ask, they are 4 years behind the "targeted" release), or the other way round. They update the game quarterly, so they always keep it in a playable state. It seems like they operate in small teams (according to the website, 3-20 people in a work group), but obviously these have a lot of dependencies on each other.
The thing is, Cloud Imperium Games gets criticized massively for this approach. People say it's an endless money dump without anything to show for it. Do you think the game industry might just be too much of a backwater in the realm of software development for such an approach to be accepted by shareholders and fans, who obviously have no background in engineering or software development?
Could it be my solo-developer bubble, or why do all these CI/CD/DevOps/GitOps/SecOps/YouNameIt RUclipsrs talk exclusively about large-scale set-ups with a gazillion teams (and their typical challenges), massive pipelines and what not? What audience are they aiming for? Team leads? POs? PMs? Does the majority of the RUclips audience consist of decision-makers with the power to implement these practices in their large-scale organizations? Am I the only one here who looks at these things from the perspective of a 1-man band, or a 3-5-man band at maximum?
It feels like there is a dark spot in the middle. There are a GAZILLION simple resources like tutorials, "how-to's", etc. at one end, and the large-scale practices at the other. But there is little to no material about the workflows and patterns of small team/project setups: 1-, 5-, 10-person organizations, startups, real-life SaaS product workflows led by one person, and so on. I personally know many "freelance" full-stack devs who work from a home office, maintain some SaaS product/s, have some customers and slowly scale out. What I mean is, there is a middle ground on RUclips between the "How to make a ToDo App in Python" and "Why Google stores billions of lines of code in a single repository" kinds of audience.
And a 1-man band doesn't mean the project should be monolithic, and it doesn't mean it's an overengineered microservice project either. A 1-man band can happily work in a Bazel-managed monorepo with multiple microservice projects that have all the common characteristics of such projects. But when it comes to the workflows, from the provisioning, config, security, Git, CI/CD, etc. perspective, it is different from what Netflix, Uber or Google do in their orgs. And nobody talks about this middle ground.
It is nice to have theories out there, but it is useless if the majority of the audience is not in a position to apply these theories, or does not work at the scale where these theories are really effective.
This is not a rant, and it's not about this particular video. It's just my overall observation about the content on RUclips from my personal "googling bubble", and maybe somebody can get something useful out of these thoughts of mine.
I think it is down to where the problems are. If you are working alone, then things are a lot simpler. I wouldn't recommend it, but you can get away with worse code.
Also, as a matter of scale, I'd definitely go monolith. That doesn't mean it can't be nicely distributed, nicely service-oriented, but Microservices is a team-decoupling strategy, so it makes no sense at all for single-person dev or very small teams. I have a video explaining some thoughts on Microservices here: ruclips.net/video/zzMLg3Ys5vI/видео.html
@@ContinuousDelivery I would not agree that microservices are only a team-decoupling thing, and that does not mean I am a microservices fan-boy. But thinking in a "microservices" way enables much nicer interfaces and much better re-usability. Lately I have started to think that going "full-stack" as a single person implies building your own knowledge base (a workshop with a bunch of tools in it), including the libraries, tools, scripts, etc. you have ever worked on. So... "The Reusability". Over time you refactor your libraries, tools and so on. They get better. Which means they also get nice interfaces and decoupling.
In contrast, if you place yourself in a monolithic environment, it is much harder to think about real decoupling and reusability, no matter how good you are. So microservices thinking helps a lot, not only in the context of the services themselves.
And if you think about it, these days going "microservices" is not that hard at all. We have plenty of great tools to ease the management. Think of automation. I have a pretty much fully automated provisioning pipeline to spin up the clusters and to CD every service right into them.
Yes, at the beginning it's really hard to get all the moving parts together, like secrets and TLS management, networks, codebase, pipelines, tracing, monitoring etc., but once the foundation is done, releasing a new microservice is just a matter of composing several already-built libraries and baking in some logic.
I am not thinking from the perspective of "get the job done and release that ToDo app ASAP". I am speaking from the perspective of a full-stack developer in the long (lifetime) term, who works on one or several SaaS products over time. In this context the products themselves are just a side outcome. The real meat is in the knowledge base and the code libraries he has developed over time. Most likely most of the modules and libraries will be reused over and over, no matter what project he is currently working on.
So... "microservices" makes huge sense for me personally if i look outside of single product release.
Will take a look at that video this evening.
Thank You for sharing your knowledge! :)
Decoupling in the sense that you describe has nothing to do with Microservices; I have built software that looked like that since the 1990s. I worked with the people that invented the concept of Microservices, and they did it for the reasons that I mention. If you read Sam Newman's stuff (he wrote the book that is most popularly used to define it) or watch some of his talks, he says "don't start with Microservices".
True Microservices, that is, independently deployable services (you don't get to test them together before deployment), are a complex, sophisticated strategy.
What you are describing is Service Oriented Design, which pre-dates microservices by a LOT.
Microservices are NOT about REST APIs; those have been around for a lot longer too. Microservices are, very specifically, about independently developed components of a system. You do that when you want to scale up a dev organisation significantly. Watch my video on Microservices to see what I mean.
And that's how you get a crappy inconsistent product that breaks here and there and can't display a date in the same format on two adjacent screens, lol. But you can move really fast for the pleasure of your investors.