It's almost as if the genuinely interesting and challenging parts of programming, and of computer science in general, are cross-cutting concerns that can't be tidily encapsulated according to whatever management ideology is in fashion, and the rest is trivial boilerplate code.
0:41 "I tend to take a contrary viewpoint." One of the reasons I like Udi :)
He's one of the few people who turn their brains ON and are not affected by cargo-cult stuff (Eric Evans calls this stuff "shiny objects" in another talk). I'm not saying microservices are bad: the current, widely accepted interpretation of distributed systems is basically broken. Everywhere.
This talk clarifies the concept of microservices.
He doesn't understand how microservices should work. He still wants to keep a single source of data, which is the monolith way.
Good talk! Have applied these rules since 2012 and they work.
What's so special about a rules engine? Isn't it just a component that aggregates data from other services into its own?
Great talk! This applies to any application development (web API, desktop app, etc.). Sadly, many developers just apply the n-tier architecture without understanding the domain or the features required.
I could say "well said" about a lot of things in this video.
The first half of the presentation was excellent, but it went downhill after that. Putting search aside, which is a very specific problem that requires a very specific solution, the other two examples are also too vague. The so-called rules engine pattern looks just like the pipes-and-filters pattern to me unless I missed something here, and how it is to be implemented from a microservices perspective is not mentioned. Forgive my slow brain, but I can't imagine how it can be implemented without a commonly understood contract, which will introduce coupling again.
The components are typically just sharing IDs and not specific data.
Basically, rules engines here mean the Command design pattern at the deployment level.
So basically it's dependency inversion at the service level.
When we deploy these components/assemblies together as a single deployment, we create a dependency in terms of release upgrades.
Yes. I'd rather have the engine work as an orchestrator gathering the different pieces. The price is coupling on the engine's side.
You can have a more complex setup that somewhat avoids this coupling by having each service, on startup, register itself with certain capabilities/interfaces (pricing, fraud, ...) in a central registry, and having the engine find services with those capabilities and loop over the endpoints associated with them.
Another, completely different, approach is to replicate the data from the different sources via events, in which case you can centralize all the logic in a single microservice. For any of this to work, you need to publish domain events.
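To make the capability-registry idea a bit more concrete, here's a minimal Java sketch; the CapabilityRegistry/Engine names and the "pricing"/"fraud" capability strings are my own illustrations, not anything from the talk. Services register their capabilities on startup, and the engine only loops over whatever happens to be registered.

```java
// Minimal capability-registry sketch (illustrative names only, nothing from the talk).
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

class CapabilityRegistry {
    // capability name -> endpoints of the services that registered it on startup
    private final Map<String, List<String>> endpoints = new ConcurrentHashMap<>();

    void register(String capability, String endpoint) {
        endpoints.computeIfAbsent(capability, c -> new CopyOnWriteArrayList<>()).add(endpoint);
    }

    List<String> endpointsFor(String capability) {
        return endpoints.getOrDefault(capability, List.of());
    }
}

class Engine {
    private final CapabilityRegistry registry;

    Engine(CapabilityRegistry registry) { this.registry = registry; }

    void evaluateOrder(String orderId) {
        // The engine knows capability names, not concrete services.
        for (String endpoint : registry.endpointsFor("pricing")) {
            System.out.println("calling pricing at " + endpoint + " for order " + orderId);
        }
        for (String endpoint : registry.endpointsFor("fraud")) {
            System.out.println("calling fraud check at " + endpoint + " for order " + orderId);
        }
    }
}
```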
@@mcalavera81 BTW in the talk Udi Dahan explicitly goes against the data replication.
Good talk, to the point and concrete.
Thanks for this talk. But after listening I still didn't get how a rules engine works with microservices.
This is good. This is a really good talk. Excellente!
Very interesting, especially the last fifteen minutes
For years I thought SOA meant several jars deployed to separate JVM containers; I'm a Java developer. Not until I went to my next job, where they had all the services in one app, a monolith, did I realize how wrongly you could write SOA. We were writing microservices in 2010.
36:52 Rules Engines
Watching this now, I realise we are in a similar boat; every single thing he said is true.
Very good talk - outstanding.
Great talk
Do the rules engines themselves belong to the IT/Ops service?
I love you Udi .....!!!
Great presentation! Thank you!
Great talk! The Big Picture!
Using interfaces to separate concerns is great. And maybe that will work fine for pricing or fraud scoring. But for search, your approach will not scale past a few thousand objects. Even if Google has spoiled us, indices exist for a reason, and searching is more than filtering.
You say that searching is as simple as implementing a filtering interface, which ignores sort order and pagination, but let's put that aside for now. Even if we treat searching as equivalent to filtering alone, with your approach how could we search across millions of products for all that have, for example, a user-input color and size? Well, we have to call each of those services and ask them for all of the products matching some criteria they own. Then we have to take the intersection of their results. This means we will be copying a few million IDs around, most likely 128-bit UUIDs, potentially across the network. This will be untenable as the product space and the number of variations grow. By the time you get to ten million products and 30 services, we're talking about a gigabyte of data being copied from service to service. Even in RAM, that's a tall ask.
And what of sort order? If I want simple sort orderings (e.g. ORDER BY price ASC, rating DESC), we can have that logic live in the corresponding service, perhaps as an optional parameter, and make sure our merging logic preserves order. But what if I want a sort order that is more complicated, one that takes some function of price and rating and other parameters, so that the customer can see a mix of cheap products and highly rated products? That sort logic can't live in the child services, because, e.g., the price service has no idea about ratings and vice versa. Such functionality simply cannot be done with a filtering interface alone; we need a way to see both at the same time and convert those properties into numeric values. So maybe the filtering interface returns the data as well, but then we're copying even more data back and forth between services. Or maybe we pass a list of products into some sort-order scoring service or services, but again, more copying data back and forth - performance will be poor.
Maybe I'm missing something, but it seems to me like you reached a little too far here.
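To put rough numbers on that gigabyte claim: the catalogue size and UUID size come from the comment above, but how many of the 30 services a single query touches and the per-service match rate are guesses on my part.

```java
// Back-of-envelope for the data volume above. The catalogue size and UUID size are from the
// comment; the number of services a query touches and the match rate are my assumptions.
public class SearchFanOutEstimate {
    public static void main(String[] args) {
        long products = 10_000_000L;   // ten million products
        int bytesPerId = 16;           // 128-bit UUID
        int servicesQueried = 6;       // assume a query touches 6 of the 30 services
        double matchRate = 0.5;        // assume each service matches half the catalogue

        long bytesPerService = (long) (products * matchRate) * bytesPerId;   // 80 MB
        long totalBytes = bytesPerService * servicesQueried;                 // ~480 MB

        System.out.printf("per service: %d MB, shipped per query: %d MB%n",
                bytesPerService / 1_000_000, totalBytes / 1_000_000);
        // And that's raw IDs only: serialization overhead, retries, and returning attribute
        // data for sorting push it toward the gigabyte the comment mentions.
    }
}
```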
Absolutely true. Add facets to the mix and it just won't work. Tools like Elasticsearch and Typesense exist for a reason.
I think this is possible if you have multiple feeders building indices for the search service. I think the point is that helper services own pushing data to the search service: the individual teams own the feeders, while the search service owns internal indexing, pagination, querying, et cetera. But it should have been made clear in the video without being so cryptic about it. This is just my attempt at understanding Udi.
Yeah, since those searchable attributes are very non-volatile, even immutable, why wouldn't you just cache them in a search service as properly indexed, read-only reference data? The services that actually own the data publish events when anything changes, and the search stays up to date. Low coupling, and it actually works. He's trying too hard to make this a problem to be solved by his company's product...
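Roughly what that looks like, as a Java sketch; the event and class names are made up, and a real search service would index into Elasticsearch/Lucene rather than a map.

```java
// Sketch of the event-fed search cache described above (all names illustrative; a real
// system would index into Elasticsearch/Lucene rather than a map).
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

record ProductPriceChanged(String productId, long priceCents) {}
record ProductColorChanged(String productId, String color) {}

class SearchCache {
    private final Map<String, Map<String, Object>> docs = new ConcurrentHashMap<>();

    // Owning services publish these events; the search service only consumes them.
    void on(ProductPriceChanged e) {
        doc(e.productId()).put("priceCents", e.priceCents());
    }

    void on(ProductColorChanged e) {
        doc(e.productId()).put("color", e.color());
    }

    // Queries run entirely against the local, read-only copy: no calls back to the owners.
    List<String> findByColor(String color) {
        return docs.entrySet().stream()
                   .filter(en -> color.equals(en.getValue().get("color")))
                   .map(Map.Entry::getKey)
                   .toList();
    }

    private Map<String, Object> doc(String productId) {
        return docs.computeIfAbsent(productId, id -> new ConcurrentHashMap<>());
    }
}
```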
It took too long to set the context for rules engines, and when it finally got to them, they were described too generically, like any other component that uses other services' sub-components... The presentation was so good otherwise, though, and that is why I avoid labeling it "click-bait" via "rules engines"...
Are we ever going to see the NDC Sydney 2017 videos? There's a playlist but it's full of deleted videos....
Microservices can be, among other things, an excellent unit of deployment. This is where I disagree with Udi Dahan. He's a smart cookie, no doubt about it - listen to what he says, but synthesise from your own experiences, too.
Good talk, but yet another example of "sell what you know". Fast-forwarded through the whole video to find out it's all theoretical and the speaker demands money at the end to even show a quick demo/live example.
Udi... what a pro!
If search is not a service, where is the code that coordinates a search and calls all dependencies?
You seem to be restricting the space of possible structures here. Most complicated functions that depend on several inputs aren't products of input-specific factors.
Suppose a company has a coupon for 20% off on orders up to £50. Now it matters which order the services run in. But maybe each service can implement a PriceTransform:float->float function.
Now you have a regional discount. And a coupon discount. And a frequent user discount. Except customers shouldn't get multiple discounts at once. But they should get whichever 1 discount makes the price lowest.
Or maybe your fraud detection algorithm is just to throw all your data into a big neural net.
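Roughly what that PriceTransform idea looks like in Java (names and numbers are illustrative); it shows both why composition order matters and why "only the single cheapest discount" isn't plain composition.

```java
// Quick sketch of the PriceTransform idea above (numbers and names are illustrative).
// Composition order matters, and "apply only the single best discount" is not composition.
import java.util.List;
import java.util.function.UnaryOperator;

public class PriceTransforms {
    public static void main(String[] args) {
        UnaryOperator<Double> regional = p -> p * 0.95;                  // 5% regional discount
        UnaryOperator<Double> coupon   = p -> p <= 50.0 ? p * 0.80 : p;  // 20% off orders up to 50
        double basePrice = 52.0;

        // Order changes the result: the regional discount pushes the price under the
        // coupon threshold, so the coupon fires in one ordering but not the other.
        double regionalThenCoupon = coupon.apply(regional.apply(basePrice)); // 52 -> 49.40 -> 39.52
        double couponThenRegional = regional.apply(coupon.apply(basePrice)); // 52 -> 52.00 -> 49.40

        // "Customers get only whichever one discount makes the price lowest":
        double bestSingle = List.of(regional, coupon).stream()
                .mapToDouble(t -> t.apply(basePrice))
                .min()
                .orElse(basePrice);                                           // 49.40

        System.out.printf("regional then coupon: %.2f%n", regionalThenCoupon);
        System.out.printf("coupon then regional: %.2f%n", couponThenRegional);
        System.out.printf("best single discount: %.2f%n", bestSingle);
    }
}
```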
How come this guy has fewer than 1000 likes? Sounds like half of the corpos need to hear this...
Started the talk with a fat monolith and finished with a fresh re-creation of a distributed monolith, highly coupled by "process". It's not a good idea to follow this guy.
Isn't this smart pipelines and dumb endpoints?
Meh, mellow
If you rebrand the same thing every ten years, you keep the original thing... so we should still have the same thing as at the dawn of computing, even those ingenious wooden adders and subtractors. Now, we clearly don't. But this is not the only blast from the past in this type of presentation that generalizes problems. The speaker's tone of voice and melody reflect this and mimic shills. The speaker doesn't need this, as there are interesting lessons to learn, but he totally spoils them by going down the shilly path. He should decouple his shillness from the real meat, even if there isn't too much meat here. Meh.
Is it only me who thinks this talk is misleading and full of mistakes? I didn't expect someone like Udi, with such a reputation, to give a vague, hand-wavy talk like this one. Where is the business rules engine in the microservices architecture? How could you start by mocking microservices... agile... consultants... only to end with an idea not even related to a BRE? Seriously, is that what you are trying to promote here? (A plugin system.) For those giving positive feedback: he is talking about writing DLLs (assemblies) that implement an interface and that you load at runtime via DI. Period. Really, a shame. Mr. Udi, I'm dealing with a real-world problem I'm trying to solve (processing millions of items) through dynamic business rules. It led me to this video, which I spent an hour watching and an additional 10 minutes responding to, only to find out at the end: plugins! What a disappointment.
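For anyone trying to picture the mechanism being complained about here: a rough Java analogue of "assemblies implementing an interface, loaded at runtime", using ServiceLoader in place of .NET assembly loading; all names are illustrative, nothing from the talk.

```java
// Rough Java analogue of the "DLLs implementing an interface, loaded at runtime" plugin idea,
// using ServiceLoader instead of .NET assembly loading. All names are illustrative.
import java.util.ServiceLoader;

// The shared contract every plugin jar implements.
interface PricingRule {
    double apply(double price);
}

class RuleHost {
    public static void main(String[] args) {
        double price = 100.0;
        // ServiceLoader discovers implementations declared under META-INF/services on the
        // classpath at runtime, so new rules ship as separate jars without recompiling the host.
        for (PricingRule rule : ServiceLoader.load(PricingRule.class)) {
            price = rule.apply(price);
        }
        System.out.println("final price: " + price);
    }
}
```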
I haven't finished the talk yet, but what Udi addresses here are the use cases where you need to apply multiple pieces of business logic and rules to data owned by other autonomous services. He refers to this system as an "engine". Now, you group these use cases by their function (search, custom pricing) and apply the same core principles of microservices to them, the most important of which is autonomy, removing temporal coupling. Jimmy Bogard spoke about the same thing in his talk Avoiding Microservices Megadisasters, though not as deeply but with a more concrete example.
Now that I've finished the talk, I understand your frustration. I thought the dynamic assembly loading part was unnecessary, too. The talk felt like one big ad, since there's no conclusion.
@@Miggleness There is nothing in this talk related to BREs. As a matter of fact, I found one commercial product to integrate a BRE into microservices, by Decision. In reality, what is called a BRE is implicit flow in the microservices architecture, and it can be implemented with a saga for workflow. The traditional BRE is more or less obsolete in systems requiring intense processing. We ended up creating our own DSL for business rules. I wish evangelists would stop using misleading buzz titles. What makes me wonder is all the likes and comments claiming to have benefited from this talk, while I found it not that useful.
All of this works great on paper, but nobody does this type of 'composition' in the real world. If you watch talks from developers at Amazon, Netflix, Google, LinkedIn, ... you never see this type of architecture implemented.
@@mahermali Can you say a little more about the custom DSL for business rules? Are these rules processing large data? Did the existing rules engines not fit the scenario? Did you have more simplified requirements, trading the complexity of a BRE for performance? Did you have a requirement for dynamic facts?
Great talk