I’m so glad the industry is coming back to reality and not overdoing microservices, especially for smaller businesses where development speed is needed.
No one talks about the software development companies. Companies that build software for SMEs.
For these companies, vertical-slice-based monoliths are THE architecture to use. They are easy AND safe to change, and therefore to deploy.
Even for internal corporate teams building business tools it makes sense.
Thank you SO much for the dark background on the slideshows!!!
ha! I switched to a dark background over the last month or two. Glad it works for you. It actually helps reduce glare when I'm recording, side bonus.
Exactly! The mere presence of microservices does not prevent the mess, nor does the presence of boundaries themselves if they are not structured well. Great video! 👍
A microservice calling another microservice is the worst kind of monolith.
A microservice calling another microservice (which in turn calls other microservices), all via RPC, is a distributed turd pile.
I am of the opinion that service architectures really should be reserved for systems that require distributed processing, or high availability achieved through redundant processing nodes. Any other system should use a modular monolith. If you are always bound to one processing unit, don’t bother with a service architecture. It’s pointless, and will eat away at your productivity and program execution performance.
Define logical boundaries within a monolith and you can scale that pretty far. ruclips.net/video/rSCDuZLP9UM/видео.html
Glad to see this opinion spreading 👍🏻
To me it's all about outcomes. What problems do you have, what's getting in your way? I find the DORA metrics a great way of evaluating whether your architecture is meeting your needs. Maybe there are other fitness functions for your architecture, like enabling innovation or five nines of availability. When you step away from specific implementations and focus on these outcomes or fitness functions, you can play around with different approaches.
I have found specifically that you want separate deployable units when a key problem is enabling empowered teams. But if you have one team that supports multiple logical services, a single deployable unit in a single repo, with well-defined boundaries, seems quite reasonable.
Thanks so much for making this video! ❤
I’m trying to start a discussion at work on the (IMO) pointless 1:1:1 architecture they’ve chosen to build. While it works right now, I see a number of issues cropping up long term, such as performance problems, a lot of added complexity, and a very high maintenance burden.
This video (and others of yours) gives me something that I can use to concisely illustrate my complaints! 😄
Great video.
This is why thinking in vertical slices just makes life so simple.
You end up with codebases that are safer to change AND are easier to change.
Neal Ford has a good name for this: architectural quanta. It's actually quite a fun idea that solves a lot of data issues as well, as components of these quanta can share a data source and still be logically independent.
Yes, the same way "everyone" misunderstood microservices, "everyone" is going to have trouble grasping this concept as well, and in a few years, well. You know how the story goes :D
I like the reference to Kruchten's 4+1 views of architecture. Nicely done, dude!
Microservices are fine. It's just that people don't do them correctly, because they think of them as mini-monoliths, and message brokers and events scare people.
I treat the whole thing very simply. A service is a totally independent unit; it does not care what other services exist around it, and it only communicates via REST for the client side (if needed) and a message broker for internal comms. NEVER, EVER allow two services to rely on REST calls for internal communication, or even gRPC as some people try to tout.
The second you do, you may as well just make a monolith.
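To make that "broker for internal comms" point concrete, here's a minimal sketch, assuming a hypothetical IMessageBus abstraction and event type rather than any particular broker library:

```csharp
// Minimal sketch: the ordering side never calls another service over REST/gRPC
// internally; it just publishes a fact to the broker. IMessageBus, OrderPlaced,
// and PlaceOrderHandler are illustrative names, not from the video.
using System;
using System.Threading.Tasks;

public record OrderPlaced(Guid OrderId, decimal Total);

public interface IMessageBus
{
    Task PublishAsync<TEvent>(TEvent @event);
}

public class PlaceOrderHandler
{
    private readonly IMessageBus _bus;

    public PlaceOrderHandler(IMessageBus bus) => _bus = bus;

    public async Task HandleAsync(Guid orderId, decimal total)
    {
        // ...persist the order in this service's own data store...

        // Internal comms go through the broker; only the client-facing API is REST.
        await _bus.PublishAsync(new OrderPlaced(orderId, total));
    }
}
```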
If you implement a microservice application using serverless, the cost will increase exponentially once it serves millions of users per day. The best use case for serverless applications is apps with low to medium traffic, or intranet apps.
Part of the problem is that people treat it as a pendulum, thinking it should be either one way or the other.
The answer, as always, is that there are trade-offs; where you need to be depends on your situation, but it's likely to be somewhere in the middle.
Coming back after watching the AWS serverless microservice-to-monolith story.
There's not enough discussion about this in my opinion. Nice one Derek, great work. Maybe a vid focused on (business) service composition for deployment would be nice? Just because Continuous Delivery/Deployment in microservices tends to strengthen that camp's case for 1:1, especially with Docker as the preferred deployment artifact. I personally only use package-management assets for my business repos' outputs and compose them in a Hosts repo, the latter being responsible for the projects that compose (most of) the system and for how the physical side is structured (in Docker containers). Not saying this is the only way, just one that works for me. There may be other ways of course, so a worthy topic?
Ya, good suggestion!
I like the lamp in your background. ^.^
Just FTP some PHP files up to server and call it a night.
Microservices are past their hype cycle and the industry has now accepted more practical solutions. Thanks for making this video; it gives food for thought to software architects and material to convince other stakeholders who are still in love with microservices across all views :) I could not catch the name of the person at 8:16 whose definition was quoted.
Referencing Adrian Cockcroft about his definition of microservices.
Sometimes this kind of shift requires some big market player to implement it and show the results. AWS did it with their streaming service, and now the entire world is wondering whether they went with a simple design or just over-designed.
Extremes are usually bad.
Monoliths are bad once they become too large / difficult to maintain.
Microservices can be bad if they are too specific.
A better argument would focus on devs/leads who can delegate responsibility appropriately and organize code vs ones who can't.
Hard to make a specific argument that high-level, but service orientation is flexible enough to support monoliths and be reusable for other consumers.
I pray I never have to work on an application that requires more than xcopy deployment.
I prefer to express this as: microservice as architecture pattern is distinct from microservice as deployment pattern.
If your application is partitioned into components such that no two components share a persistence store (or at least, for every row in a relational DB, key-value pair in a key-value store, or stream/topic in a message queue/broker/log-structured store, you can unambiguously identify exactly one component responsible for making updates to it; note that foreign key constraints across components count as sharing), and all interactions between components cross an asynchronous boundary, then your application is in fact architected as collaborating microservices. This applies even if you keep all the code in a monorepo, deploy all the components at the same time, or run all the components in the same OS process.
Understanding this gives you the ability to reap the benefits of monolithic deployment and development processes (the ability, if you're using a more strongly typed language, to have the compiler make useful guarantees is not to be underestimated!) and retain the option to apply microservice deployment patterns later should the need arise.
Given that there are techniques that can make an app architected as microservices, but not deployed as microservices, infinitely scalable (implementing with an actor-model approach, e.g. using Akka, is one such technique), that need only has to present itself once the development team has grown large enough that developer coordination is imposing a drag.
Hot take: the only reason to adopt microservice deployment is Conway's Law downstream of applying the Universal Scalability Law to your team structure. Anything else is a symptom of suboptimal architecture.
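A tiny sketch of that "architected as microservices, deployed as one process" idea, using an in-memory channel as the asynchronous boundary. All type names here are illustrative, not from the comment; the point is that each component owns its own state and only talks across the channel, so swapping the channel for a broker later doesn't change either component's logic:

```csharp
// Two components in one OS process, no shared state, async boundary between them.
using System;
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

public record InvoiceRaised(Guid OrderId, decimal Amount);

public class BillingComponent
{
    private readonly ChannelWriter<InvoiceRaised> _outbox;
    public BillingComponent(ChannelWriter<InvoiceRaised> outbox) => _outbox = outbox;

    public async Task RaiseInvoiceAsync(Guid orderId, decimal amount)
    {
        // ...billing writes to its own persistence store here, shared with nobody...
        await _outbox.WriteAsync(new InvoiceRaised(orderId, amount));
    }
}

public class NotificationsComponent
{
    private readonly List<Guid> _seen = new(); // notifications' own state, not shared

    public async Task RunAsync(ChannelReader<InvoiceRaised> inbox)
    {
        await foreach (var evt in inbox.ReadAllAsync())
        {
            _seen.Add(evt.OrderId);
            Console.WriteLine($"Emailing invoice for order {evt.OrderId}: {evt.Amount:C}");
        }
    }
}

public static class Program
{
    public static async Task Main()
    {
        var channel = Channel.CreateUnbounded<InvoiceRaised>();
        var consumer = new NotificationsComponent().RunAsync(channel.Reader);

        await new BillingComponent(channel.Writer).RaiseInvoiceAsync(Guid.NewGuid(), 42.50m);

        channel.Writer.Complete();
        await consumer;
    }
}
```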
I'm so glad, because this is what I'm doing right now :D
The thing is, people overhype anything. Take AI, for example: every investor and their grandma are throwing money at it. Same with microservices: one crazy nut hypes it on Twitter and all of a sudden people are shamed because they used a monolith architecture in their apps, and everyone throws around all the Kool-Aid words: distributed, scaling, fast, team collaboration 😂.
Don't even mention crypto 😂
Amen to that and thanks
But what made the pendulum swing back, what changed? It's not like the tradeoffs between microservices and monoliths weren't known before.
Inb4: great video for those who don't know what the tradeoffs are.
It may have been known to some degree, but I'd argue it wasn't well known at all. And that particular knowledge got drowned out by the hype around microservices, when Netflix and LinkedIn and Twitter etc were touting it as the best thing since sliced bread.
A video summarizing monoliths might be a good option, with a microservice-oriented sponsorship 😂
EventStoreDB is applicable in any logical boundary where you want to event source. You'd use events as a means to communicate with other boundaries. Since you're event sourcing, it's a matter of converting those events, or summarizing them into the events you want to expose.
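As a rough sketch of what "summarizing internal events into events you want to expose" can look like; the event and mapper names are made up for illustration, with the fine-grained events being the ones sourced inside the boundary and the coarse one being the contract other boundaries consume:

```csharp
// Translate private, fine-grained events into one public integration event.
using System;
using System.Collections.Generic;
using System.Linq;

// Internal events, private to this boundary's event stream.
public record ItemAddedToCart(Guid CartId, string Sku, int Quantity);
public record CartCheckedOut(Guid CartId);

// Public, coarse-grained event exposed to other boundaries.
public record OrderSubmitted(Guid CartId, IReadOnlyList<string> Skus);

public static class IntegrationEventMapper
{
    public static OrderSubmitted? ToPublicEvent(IReadOnlyList<object> streamEvents)
    {
        // Only expose an integration event once the cart has actually been checked out.
        if (!streamEvents.OfType<CartCheckedOut>().Any())
            return null;

        var cartId = streamEvents.OfType<CartCheckedOut>().First().CartId;
        var skus = streamEvents.OfType<ItemAddedToCart>().Select(e => e.Sku).ToList();
        return new OrderSubmitted(cartId, skus);
    }
}
```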
So must a microservice be a 1:1 physical boundary to logical boundary?
Microservices are about decomposition of a larger system, generally organizationally, around a logical boundary. Each logical boundary is then physically separate. How you choose to deploy that logical boundary physically does not make it a monolith or a microservice; the physical decomposition of the larger system does.
Elixir/Phoenix FTW! Monoliths for the win.
I think this is a great way to think about boundaries; however, I do have a question that always comes to mind when reading about decoupling services via a message broker (as seen at 8:16). If Service A (downstream) needs data from Service B (upstream) in order to fulfill a client request, how could this inter-service communication be accomplished through a message broker? Since messaging is asynchronous, it would mean that Service A cannot respond to its client synchronously. Would these cases default back to gRPC or HTTP tight coupling between the two services?
A service should own all the data it needs. When Service B gets that piece of data, it publishes an event, which Service A consumes so it can cache that data locally.
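A minimal sketch of that pattern, with hypothetical names: Service B publishes a PriceChanged event, and Service A keeps its own copy of the prices so it can answer client requests without a synchronous call to B.

```csharp
// Service A's local cache, populated from Service B's events by a broker consumer.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public record PriceChanged(string Sku, decimal NewPrice);

public class ProductPriceCache
{
    private readonly ConcurrentDictionary<string, decimal> _prices = new();

    // Invoked by the message-broker consumer whenever Service B publishes a change.
    public Task HandleAsync(PriceChanged evt)
    {
        _prices[evt.Sku] = evt.NewPrice;
        return Task.CompletedTask;
    }

    // Used on Service A's request-handling path: a local read, no call to Service B.
    public bool TryGetPrice(string sku, out decimal price) => _prices.TryGetValue(sku, out price);
}
```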
You need to think about why your process is split between two microservices in the first place. This is the problem of a split logical boundary, caused by not using vertical slicing in your architecture. If you need synchronicity and can't migrate your solution to a single, decoupled module, then I'm afraid gRPC or REST is the way to go. I had such a case at my company some time ago. We were using CQRS, so the user started the process, the process ran asynchronously (message broker communication), and at the end the READ model was updated. Meanwhile, the main process was checking the READ model for updates in a loop. If an update was detected, the user got the data back. This solution is STINNNKY, but it's because previous architects thought that "throwing microservices" at every problem was a great idea :/
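For what it's worth, that polling workaround looks roughly like this; the names are illustrative, and yes, it is still stinky:

```csharp
// The command kicks off the async flow over the broker; the caller then polls the
// READ model until it has been updated or the wait is cancelled.
using System;
using System.Threading;
using System.Threading.Tasks;

public record ProcessResult(Guid ProcessId, bool IsComplete, string? Data);

public interface IReadModelStore
{
    Task<ProcessResult?> FindAsync(Guid processId);
}

public static class ReadModelPoller
{
    public static async Task<ProcessResult> WaitForCompletionAsync(
        IReadModelStore readModels, Guid processId, CancellationToken ct)
    {
        while (true)
        {
            ct.ThrowIfCancellationRequested();

            var result = await readModels.FindAsync(processId);
            if (result is { IsComplete: true })
                return result;

            await Task.Delay(TimeSpan.FromMilliseconds(200), ct); // back off between polls
        }
    }
}
```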
With Java modules you can essentially have the compiler enforce logical boundaries as if they were physical boundaries. As long as your monolith is stateless, it's basically like a monorepo of microservices, only better.
There isn't really such a _thing_ as a logical service. Where does it exist? Is it in documentation? Is it in people's heads? Does every developer have their own idea about the logical service boundaries? A logical boundary must be aligned with some physical boundary like a source repository or deployable unit to become tangible and well defined. If not, then your boundary will very quickly not be a boundary anymore when there is nothing to stop people from crossing it.
It aligns with the business.
Logical != Physical. Now if we could just get people to start applying that concept to the ridiculous number of .csprojs people like to create.
This!!!
C#’s obsession with the number of projects is strange 😂
It’s unintentionally making your code worse to change/work on
Is saying Logical doesn't need to equate to Physical saying that engineers should not be afraid of putting projects/daemons with completely different logical purposes on the same machine/set of running processes? I suppose this only applies when they need to work together, right? So in that sense, they are both Physically and Logically coupled?
Hello, so in a nutshell, a logical boundary could be made up of multiple applications (physical boundaries), which could be cohesive, i.e. be a single deployment unit? Thanks!
And a logical boundary could be composed of other logical boundaries in a single or multiple deployment units.
Disagreeing with the above! There are hundreds of points where this discussion can be challenged.
Ok, you've just demonstrated a big bunch of options for modularizing software, including communication and access to the persistence layer. Let me be so bold as to say that I have known these thoughts and options for 25 years. Of course, you always come to the conclusion that there are vertical and horizontal cuts and that you can form logical modules from them. But you don't address the basic problem: how does a developer/architect keep all possibilities open? This is only possible with suitable abstractions at all the seams. I would even claim it is only possible with special frameworks that solve such abstractions in a generally valid way, and this is still lacking today. I want to program bounded contexts, which I can bundle into an application individually or together. I want to be able to send queries, commands and events either in-process or via the network plus middleware. I want to map frontend access to the backend not as hard-coded REST, but through an abstract interface, etc.
What you’re describing here would be mostly possible with traits in Rust or protocols in Swift. In C# you could sort of accomplish this with generic extensions… kinda.
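For the C# angle, here is a minimal sketch of that kind of abstraction, with all interface and class names hypothetical: the calling code depends only on ICommandSender, and whether dispatch stays in-process or goes over a broker/network becomes a composition-time decision rather than something baked into the bounded context.

```csharp
// One sending abstraction, two composition-time choices: in-process or remote.
using System;
using System.Threading;
using System.Threading.Tasks;

public interface ICommand { }

public interface ICommandSender
{
    Task SendAsync(ICommand command, CancellationToken ct = default);
}

// In-process dispatch: resolve and invoke the handler directly, no serialization.
public sealed class InProcessCommandSender : ICommandSender
{
    private readonly Func<ICommand, CancellationToken, Task> _dispatch;
    public InProcessCommandSender(Func<ICommand, CancellationToken, Task> dispatch) => _dispatch = dispatch;
    public Task SendAsync(ICommand command, CancellationToken ct = default) => _dispatch(command, ct);
}

// Hypothetical transport abstraction standing in for a broker or HTTP client.
public interface IMessageTransport
{
    Task PublishAsync(ICommand command, CancellationToken ct = default);
}

// Remote dispatch: hand the command off to the transport instead.
public sealed class RemoteCommandSender : ICommandSender
{
    private readonly IMessageTransport _transport;
    public RemoteCommandSender(IMessageTransport transport) => _transport = transport;
    public Task SendAsync(ICommand command, CancellationToken ct = default) => _transport.PublishAsync(command, ct);
}
```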
Very well said. It gets incredibly difficult, or even impossible, to *design and build those abstractions* once you step out of the simplistic world of just thinking about sending a piece of data from one point to another in the system and actually start thinking about things like asynchrony, delivery guarantees, ordering guarantees, QoS, eventual consistency, transactions, error recovery, etc. This is basically the reason why RPC is such a bad abstraction for hiding the network and making communication look like in-process function calls.
I think this is a bit out of scope for this particular video, where the goal is to distinguish between physical and logical boundaries and lament the senseless adoption of microservices architectures. He has many videos describing design patterns and architectural styles that are lower level than this, like layered architecture, vertical slices, ports and adapters, etc.
Maybe it's out of scope in some ways, but the way I see it, this video somewhat suggests that you can design your logical modules without thinking about your physical modules, and that's really only possible when you have suitable abstractions at module boundaries. My comment is not strictly a criticism, and definitely not a counter-argument to the video, but I just feel the video portrays this as much easier than it actually is.
Great thread here. I do mention at the end: loosely couple between logical boundaries. If you're doing that, you're not temporally coupled to begin with, nor do you need to think about in-process calls moving to RPC if you were to separate them. Loosely couple from the get-go. And agreed, there's nuance to this and it's not "easy", but the point was to not assume they need to be 1:1.
We are better off adopting microservices sooner rather than later. The discussion should focus more on defining the boundaries correctly.
Code to an interface and not a concretion.
All the word-salad terms, “boundaries”, bounded context, monolith, microservice, etc., are useless. Logical or physical boundaries should absolutely not affect your interfaces.
in other words.... C++ is king and is coming back.... we're going to start building and maintaining custom libraries to do our shit... and we can deploy containers that pick and choose which code to include based on the slice of the app it represents. Got it