I added Open Instrumentation to my personal project. Using Zipkin, I can see where errors occur and spot potential bottlenecks when executing something end to end. Very useful!
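In case it helps anyone trying the same thing, here's a minimal sketch of wiring manual spans to Zipkin with the OpenTelemetry Java SDK (the endpoint, service and span names are placeholders, not the commenter's actual setup):

```java
// Sketch: export manual spans to a local Zipkin instance via the OpenTelemetry SDK.
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import io.opentelemetry.exporter.zipkin.ZipkinSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class TracingDemo {
    public static void main(String[] args) {
        // Ship finished spans to Zipkin's default collector endpoint.
        ZipkinSpanExporter exporter = ZipkinSpanExporter.builder()
                .setEndpoint("http://localhost:9411/api/v2/spans")
                .build();

        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .build();

        Tracer tracer = OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .build()
                .getTracer("demo-service");

        // One span per end-to-end operation; errors and slow child calls show up in the Zipkin UI.
        Span span = tracer.spanBuilder("handle-request").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            // ... call downstream services, publish events, etc.
        } catch (RuntimeException e) {
            span.recordException(e);
            throw e;
        } finally {
            span.end();
        }

        tracerProvider.shutdown();
    }
}
```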
This really goes to show that guaranteed ordering granted by Kafka is not only a blessing, but can also be a curse. It allows you to worry less about compensation logic for mistimed events, but it does come with a price.
Yup. As always, tradeoffs. There are many situations where you can handle out-of-order events. Typically these are workflow situations where you're using events as notifications. Where order feels "required" is when you're using events for data distribution.
@@CodeOpinion Can you provide a few examples of data distribution scenarios? I was wondering about it, but it could be another thing.
About observability: in production we use the Elastic APM suite, which has the advantage of correlating logs within a transaction. It's also really simple to correlate events with each other. I don't know why this suite is hardly ever used or mentioned (we use the free version).
For the rest, we also use Kafka rather than a classic message broker because we want to be able to re-process ordered integration events in the future: for upcoming microservices that will have to run a specific retroactive process on ordered events, for BI tools that want to ingest years of data into any number of data sinks, or for building 'instant' multiple projection models right after an event sourcing engine's persistence trigger... (a rough sketch of that replay idea follows below)
Nice video as usual
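For what it's worth, here is that replay idea sketched out (topic name, group id and broker address are made up): a consumer joins with a fresh group and seeks back to the beginning of the topic when partitions are assigned, so a new service or BI loader can re-read the whole history in per-partition order.

```java
// Sketch: replay an events topic from the beginning under a new consumer group.
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReplayConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "bi-backfill-v1"); // fresh group: no committed offsets yet
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("integration-events"), new ConsumerRebalanceListener() {
                @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    consumer.seekToBeginning(partitions); // replay from offset 0
                }
            });
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Feed each record into the new projection / data sink, in partition order.
                    System.out.printf("%s @ %d: %s%n", record.key(), record.offset(), record.value());
                }
            }
        }
    }
}
```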
Because there are far superior products to Elastic APM, such as Dynatrace 😅
@@uziboozy4540 We also use Dynatrace, but the pricing is really expensive. For our current backend needs, we've never found Dynatrace more useful than ELK APM.
2000? Do they make a new one every time developers quit or something?
When you think your project is hard, then you realize people are building monsters with 2k services, wtf. It's almost impossible to comprehend the scale and size of some of these projects.
Agreed. I don't have any knowledge of the size of their services, so I can't really comment. I can only imagine they're using Kafka a lot for data distribution if they have that many independently deployable services. But that's a different topic!
Thanks! Useful for my Kafka cases.
I am curious: when you start to add more than a few events and have several microservices, how do you keep track of which service publishes or consumes specific events? We have been trying to come up with a way to document the interactions in an easy-to-understand and meaningful way. If you haven't covered this topic in your previous videos, I think it would be great to see how you stay organized when there are lots of services and events.
[author of this blog post here] At Wix we use dedicated Grafana dashboards to see all event-related information per service. We also plan to add a "service" view to our back office (which is currently more topic-oriented).
2000 microservices, my goodness.
Very interesting. As someone else said, 2000 microservices is a lot. Does that mean Wix has 2000 databases as well?
Good question! Maybe I should get Natan, who created the post, on!
[author of the blog post here] Many of the services have database tables; some of the services are just aggregates, or only interact with other services without keeping their own data. The dev platform at Wix makes it really easy to add a document-type table in MySQL.
Great video; however, having 2000 services is just nuts, and likely overengineered.
On Kafka and event order guarantee: I think the event order guarantee is highly overrated. I can only imagine one use case, and that is projections/state synchronisation. However, with fast and smart caching you can just use events as notifications that something has changed, and use the cache or update it accordingly (by fetching from the original data source, which can also have caching). In a sense you just use the events as a cache invalidation trigger. Additionally, this will keep your events really small. What do you think @CodeOpinion?
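Roughly what that could look like (class and event names here are invented for illustration): the event is just a small notification that triggers invalidation, and the read path refetches from the owning service on the next cache miss.

```java
// Sketch: events as cache-invalidation triggers; no state travels on the event itself.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CustomerCache {
    public record Customer(String id, String name) { }

    /** Hypothetical client for the service that owns customer data. */
    public interface CustomerApi {
        Customer fetch(String customerId);
    }

    private final ConcurrentMap<String, Customer> cache = new ConcurrentHashMap<>();
    private final CustomerApi origin;

    public CustomerCache(CustomerApi origin) {
        this.origin = origin;
    }

    /** Called by the event consumer when a "CustomerChanged" notification arrives. */
    public void onCustomerChanged(String customerId) {
        cache.remove(customerId); // invalidate only; the event carries no state
    }

    /** Read path: serve from cache, refetch from the owning service on a miss. */
    public Customer get(String customerId) {
        return cache.computeIfAbsent(customerId, origin::fetch);
    }
}
```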
Regarding caching, you still need to make sure you don't update with stale values...
@@natansil It depends on how "real-time" the value needs to be. The value can always be stale, even 1ms after fetching from the origin source (db, etc.). You should embrace stale data, especially when doing event-driven architecture.
Ordering is an interesting "requirement". I have a video coming out shortly about the different ways to use events. If you're using them for data distribution, then that's a different use case than using events for workflow. With workflow, ordering isn't often required. When using data distribution (aka event-carried state transfer) built around "entities", then you'd need to keep track of versions. I did a video on ordering a while ago; I should do a new one. ruclips.net/video/ILEb5LsSf5w/видео.html
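To make the version-tracking point concrete, here's a hedged sketch (entity, event and field names are illustrative, not from the video or the Wix post): a consumer of state-carrying events only applies an update when its version is newer than the one it already holds, so an out-of-order delivery is dropped rather than overwriting newer state.

```java
// Sketch: apply an event-carried state update only if its version is newer.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ProductProjection {
    // Hypothetical event and local read-model shapes.
    public record ProductUpdated(String productId, long version, String name, long priceCents) { }
    public record ProductState(long version, String name, long priceCents) { }

    private final ConcurrentMap<String, ProductState> store = new ConcurrentHashMap<>();

    public void handle(ProductUpdated event) {
        ProductState incoming = new ProductState(event.version(), event.name(), event.priceCents());
        // merge() keeps whichever state has the higher version, so stale events are ignored.
        store.merge(event.productId(), incoming,
                (current, candidate) -> candidate.version() > current.version() ? candidate : current);
    }
}
```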
@@CodeOpinion In my opinion, the data distribution problem can be solved in simpler ways; introducing events that carry state will eventually end in events that have large payloads and are hard to maintain (I think you had a video about that). Also, having duplicate state is a form of optimisation and can be solved with caching instead. For example, you can have a caching layer which invalidates and refetches from the origin when event X occurs (used as a notification).
How do you run the events or records that failed in the microservices?
I'm new to Kafka. As we know, within a consumer group each partition is consumed by only one consumer. My question is: how can we do horizontal scaling if we have 3 partitions in a topic and 3 consumers in the group, and now we want 4 consumers?
You'd need another partition. With 4 consumers in a group and 3 partitions, 1 consumer in that group won't be doing anything.
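A small sketch of that scaling step (broker address and topic name are placeholders, not from the thread): bump the topic to 4 partitions with the Kafka AdminClient so the 4th consumer in the group actually gets an assignment. One caveat: adding partitions changes the key-to-partition mapping, which can affect per-key ordering for new messages.

```java
// Increase a topic's partition count so a 4th consumer in the group gets work.
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartition {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createPartitions(Map.of("topic-a", NewPartitions.increaseTo(4)))
                 .all()
                 .get(); // block until the broker has applied the change
        }
    }
}
```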
2000?
According to their blog post. I'm unaware of the details, so I'll hold judgement, but overall I think our industry doesn't get that logical boundaries don't absolutely have to be physical boundaries.
@@CodeOpinion Because Wix built a platform for our developers to easily write microservices, for simplification it was decided that each single-entity CRUD would be deployed as a separate service. It also relates to the concept of an open platform where all these services' APIs are exposed as building blocks to the website developers.