Really liked the last alternative. Instead of trying to come up with a complicated distributed transaction scheme, you went to the source of the problem and came up with a more elegant and reliable solution 👍
Glad you got to the end of the video to see it!
What is complicated about a distributed transaction? It's a lot simpler than his event architecture.
@@lextr3110 I'd love to hear how it's simpler. Service-to-service 2PC would require everything to be blocking. Unless you had some type of sync blocking orchestration, you'd have no idea how many network hops to other services you'd be making. In other words, you're contending with high latency... which again is blocking.
@@CodeOpinion You can make async calls that will not block, even from service to service, and it will clearly be faster than events with all their complexity. I'm not dismissing events for subscribing microservices that don't care about losing events/data... but you need to make sure the data in all dependent microservice databases is fully saved before returning to the user, and for this an async distributed transaction is better than risking weird data drift between microservice databases. Not sure why you don't point out to people that there is an easier and safer distributed async CRUD pattern that should be used most of the time for any important CRUD operation. You can also do parallel service-to-service calls where possible for faster execution. Returning a "stored" event to the client/user when the data has yet to be fully stored in all dependent microservice databases is just playing with the devil.
@@lextr3110 In-process threading is still blocking. You ultimately need it to return to confirm it completed. It's not durable. If you make an async call, eventually you're going to await it to make sure it completed. If your process crashes at any point during that, then what? Also, the point that it's faster is not true. The more threads you're producing in a process, the more memory is consumed. You will hit a point at which you've consumed all resources and it will not be faster. You have an upper bound on throughput that is dependent on other services' latency. I'd love to point out easier and safer methods if there were any, but unless you're using a DTC, you're not going to be getting a distributed transaction. Can you point me to what you're referring to as an "easier and safer distributed async CRUD"?
I have to say, one of the more concise and clear cut explanations for this topic by far. Looking forward to your micro-ui video!
I have followed the thread. Yes, the last solution is the simplest and cleanest: keep the concept of an Order in the Payment service, and listen to events from the Order service to update the data kept in the Payments service. I'm familiar with the approach, but I do admit that it would take me some time to realize that this is an acceptable solution. Storing information related to an Order in the Payment service is perhaps not how I would intuitively do it.
This is the solution to the question author's worries about decoupling and their preference for doing synchronous calls from the Order service to the Payment service.
Ya, I think most wouldn't find it intuitive for the reasons I mentioned: Focus on entity services and ultimately thinking about queries.
@@CodeOpinion Programmers think in terms of sequential operations, even between services. It takes some time to get used to events and to learn what you can do with them. Syncing data with events is a useful technique. And it simplifies the way you reason about a system.
In my experience, a lot of software developers would be uncomfortable with what you are suggesting. Even more so if they are committed to the old ways. So we need more pragmatism in the field.
Ya, related to messaging, I've noticed a trend of people feeling like it's magic and "what if the message is lost!?!". As in, they assume it's not going to be reliable.
We have a domain with similar flows but different boundary names; the difference is that the "Payment" context can accept multiple types of inputs, not only "Orders". In that case our solution was to create a service for each entity that should have a payment, and each service creates payments synchronously; the Payment service itself doesn't care about the origin of the payment.
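Roughly, the origin-agnostic shape could look like this minimal sketch (illustrative names and types, not our actual code): the Payment service only needs a reference id and an amount, so it never branches on where the payment came from.

```python
# Illustrative sketch only: the Payment service accepts any "payable" reference,
# so it never needs to know whether the source is an Order, a Subscription, etc.
from dataclasses import dataclass, field
from uuid import uuid4


@dataclass
class CreatePayment:
    reference_id: str   # id of the Order, Subscription, ... that needs a payment
    amount: float
    payment_id: str = field(default_factory=lambda: str(uuid4()))


class PaymentService:
    def __init__(self):
        self.payments: dict[str, CreatePayment] = {}

    def create_payment(self, command: CreatePayment) -> str:
        # No branching on the origin: a payment is just a reference plus an amount.
        self.payments[command.payment_id] = command
        return command.payment_id


# Each upstream service (Orders, Subscriptions, ...) calls this synchronously.
service = PaymentService()
service.create_payment(CreatePayment(reference_id="order-123", amount=49.99))
```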
Another option when you are small: use a decoupled monolith with a shared database split into separate schemas. This lets you use local database transactions.
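A minimal sketch of that idea, assuming Python and sqlite purely for illustration (ATTACH stands in for real schemas here; in Postgres or SQL Server you'd use CREATE SCHEMA):

```python
import sqlite3

# One database engine, one namespace per module; each module touches only its own tables.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS payments")  # stand-in for a 'payments' schema

conn.execute("CREATE TABLE main.orders (order_id TEXT PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE payments.orders (order_id TEXT PRIMARY KEY, status TEXT)")

# Because everything lives behind one connection, a single local transaction
# covers both modules -- no distributed transaction required.
with conn:
    conn.execute("INSERT INTO main.orders VALUES ('order-1', 49.99)")
    conn.execute("INSERT INTO payments.orders VALUES ('order-1', 'awaiting_payment')")
```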
Is the order service a background consumer? In Visual Studio, what project type can we use for that?
Once again, great video!
At 6:12 in the video you mentioned that you would put a link in the description to the request-reply pattern. Looks like the link is missing. Could you please add it?
Oops. Here it is: ruclips.net/video/6UC6btG3wVI/видео.html
I was asked the same question in a different form in an interview.. now I know why I was rejected 😅😅
What was your answer/response?
I'd just like to point out, if there is a policy of overdrafting orders, then that should be within the payment/subscription service. The order service should be dumb to the policy. In other words, the order system should keep churning out orders until it is told to stop. The payment/subscription service is the one handling the "kill order" or "continue order" commands. So, if the order service sent out two orders, yet the customer hadn't bumped up the payment, the balance would become negative. This state would also be used to consider killing the order process. The way the process was presented, the payment/subscription process was oblivious to the outgoing orders until the customer actually paid again. No way that is going to work well.
When designing your system, especially the data boundaries, it would be extremely counter-intuitive to expect that a payment system would have to know about an order's status, and that the order service itself would not have it (unless via an event that carries state). I mean, I realize that setting data boundaries is not as simple as it seems.
Cool, love the last concept, instead of some type of saga for a distributed transaction. Have you done the 'UI and view compositions' video yet?
Not yet! I need to get to this! I will, I promise!
In your example, I still have one remark/question: how do you find which order needs to be updated within the payment domain? From what I understood, the link exists between the subscription and the customer, and then the customer is linked with the orders within the Order domain. You can see it in your view: when you move a part of the Order over, there's no information to make that link.
I also have a hard time with the correlation ID used to communicate between the different domains: sometimes you use the CustomerId (I suppose), sometimes it's the OrderId. I know that we can use a SagaFinder to solve the issue, but it seems like we are missing something. What do you think?
The OrderID would be a part of pretty much every message. In my last example, when an OrderPlaced event is published, that event would contain the OrderID. The payment service would consume it and add a record for that OrderID to its DB.
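Roughly, as a minimal sketch (the event shape, handler, and field names are my illustration here, not code from the video):

```python
from dataclasses import dataclass


@dataclass
class OrderPlaced:
    order_id: str
    customer_id: str
    amount: float


class PaymentsOrderHandler:
    """Payments keeps its own record per OrderID (in-memory here for the sketch)."""

    def __init__(self):
        self.orders: dict[str, dict] = {}

    def handle(self, event: OrderPlaced) -> None:
        # Idempotent upsert keyed by OrderID: consuming a duplicate of the same
        # event leaves the same state behind.
        self.orders[event.order_id] = {
            "customer_id": event.customer_id,
            "amount": event.amount,
            "status": "awaiting_payment",
        }


# When the broker delivers OrderPlaced, the Payments consumer records it locally.
handler = PaymentsOrderHandler()
handler.handle(OrderPlaced(order_id="order-123", customer_id="cust-42", amount=49.99))
```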
Nice video, especially the last part about presentation often dictating what happens at the backend. Looking forward to the follow-up viewmodel composition series. As an NSB champ I'll assume you've heard of Mauro Servienti? If so, are you familiar with his blogs on viewmodel composition too?
Yup! I might reference his example project as one solution.
@@CodeOpinion Awesome. You do great content, good use of the diagrams, and just the right length to get the point across too. Keep it up please.
When there will be part about obtaining data for view models? Thanks for yours channel!
Coming soon!
Great Video!
Glad you enjoyed it
Regarding the boundaries section - isn't it a microservices anti-pattern to query the same table (orders) from multiple services?
In this solution there would be two Order tables - one in each boundary. They would have different sets of columns. Each service would read and write only the Orders table it owns.
@@guazpl So does that mean that for every record in one service's DB table there is a matching record in the other service's DB table?
@@LawZist That's true - records in both tables would match by orderId. They would be synchronized with events sent from Orders to Payments. In the example shown in the video the tables are basically the same, so it may feel like unnecessary duplication. However, in a real domain the table in the Orders service would probably have more attributes related to the order than the table in Payments, which is concerned only with amount and status.
This example shows how making entity-based services - "everything related to orders" or "everything related to payments" - may not be a good idea.
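To make that concrete, a rough sketch of what the two tables could look like (the columns are illustrative guesses, not taken from the video):

```python
import sqlite3

# Two physically separate databases, one per boundary; each service reads and
# writes only its own table, and rows line up by order_id via published events.
orders_db = sqlite3.connect(":memory:")    # owned by the Orders service
payments_db = sqlite3.connect(":memory:")  # owned by the Payments service

# Orders keeps the full order details.
orders_db.execute("""
    CREATE TABLE orders (
        order_id    TEXT PRIMARY KEY,
        customer_id TEXT NOT NULL,
        items_json  TEXT NOT NULL,
        placed_at   TEXT NOT NULL
    )
""")

# Payments keeps only what it cares about: amount and payment status.
payments_db.execute("""
    CREATE TABLE orders (
        order_id TEXT PRIMARY KEY,
        amount   REAL NOT NULL,
        status   TEXT NOT NULL
    )
""")
```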
About the view composition: could you show how to tackle data that should be composed and displayed in row format (a table)? My approach is to aggregate the data by listening to events and building a read model in a front-end component like a BFF.
Yup, I'll talk about that.
You did not answer the question?? Why is a distributed async CRUD transaction wrong?
12:03 Anybody got links to these vids? Finding those via the youtube UI is actually kind of hard :>
Here you go: ruclips.net/video/ILbjKR1FXoc/видео.html
OK, to summarise: if you are a good sales guy, you can do engineering well by choosing the right words for customers. It's really about how we present the meaning of the data to the customer, haha.
5:15 Dead letter queue
Restructure your boundaries. There is a weird obsession amongst architects to cut data up as small as possible regardless of the use case.
IMO boundaries are one of the most important things to get right, and the hardest. In the vast majority of my videos I'm giving simple examples with a "small" data structure. The reality is, even in this video, I'd assume that in the real world the payment service isn't 3 tables/documents with a few properties. I'm using it as a high-level example. The point isn't to cut data up "as small as possible"; it's to cut it up into boundaries where the data actually relates. My HR department cares about very different data than the IT department.
@@CodeOpinion If 2 departments care about different data, it would be great to split them up, but let's assume we have a case where that is creating a technical nightmare. What are the issues/cons of joining that data into one service and DB? The same service can have the exact same endpoints and exposed data as if they were separate.
@@brandonpearman9218 Ya, for sure it's feasible that you compose the data into one service, but that's usually for query purposes. Data is still owned by its respective boundary.
I don’t understand the benefits of doing this. This feels like a distributed monolith
Making an RPC call but requiring consistency between boundaries is a distributed monolith. Both services need to be online and available for the process to work correctly. Otherwise you're left in an inconsistent state.
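As a rough illustration of that temporal coupling (hypothetical calls only, not a real framework's API):

```python
# Hypothetical sketch contrasting the two styles; none of these names are real APIs.

def place_order_rpc(order, orders_db, payments_client):
    # Temporally coupled: Payments must be online right now, and a crash between
    # the two calls leaves the boundaries in an inconsistent state.
    orders_db.save(order)
    payments_client.create_payment_record(order["id"], order["amount"])


def place_order_with_event(order, orders_db, outbox):
    # Decoupled: the order is saved and the event sits in a durable outbox/queue;
    # Payments consumes it whenever it is available again.
    orders_db.save(order)
    outbox.append({"type": "OrderPlaced", "order_id": order["id"], "amount": order["amount"]})
```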
@@CodeOpinion I get that, but even in the scenario where you have an order object in the payment service... Maybe it's just too simple an example, but why not just have one service? Do you need separate release cycles and all the other textbook, so-called microservices benefits?
Microservices like this are too small to me. Microservices should be seen as separate businesses. Imagine an external client having to call two services to get information about an order. They wouldn't be too happy, I don't think. Then the solution is to have some sort of API gateway in front of that, and complexity grows.
All that being said, great, well-articulated video though.
Should it be two services? Who knows, really; I didn't know the full context from the person asking the question. I could see it being two, given how large they likely are, based on my own experience in various domains. But absolutely, having one boundary instead of two could have been a solution. Assuming it's not and you can't/shouldn't, the last example is often the solution in these types of situations where you want a distributed transaction.
Just some feedback on all your videos. This may make me look stupid, but your videos are incredible, and I genuinely hope that you get more viewers going forward.
The thing is, for a lot of us English is our second or third language and most of us never had a proper CS education, so I find it difficult to understand some of the fancy words used in your videos. I am ashamed to say that despite being almost 7 years in the industry I don't understand what is meant by the word 'idempotent', but I probably know what it represents. Yes, I understand that as a viewer and a student I need to put in more effort and take responsibility for learning these words if I want to move forward in my career. But this could be one of those areas of improvement where the global audience could benefit from your videos.
Thank you for the comment! It's much appreciated. Agree that sometimes I reference and use words that I assume the viewer is aware of. I should at least give a bit of definition to these words when I use them and not assume. Again, thanks for the feedback.